
Warning! ChatGPT can fabricate convincing medical data




New research is revealing that ChatGPT can fabricate convincing medical data, according to a report by Cosmos Magazine published on Sunday.


The study’s authors claim that this will make it easier than ever to publish fraudulent research, which could cast doubt on all legitimate work.

“In this paper, we determine how AI-generated chatboxes may be utilized to fabricate research in the medical community, with examples. Furthermore, we compare studies of human detection of AI-based works to gauge the accuracy of identification of fabricated, AI-generated works,” the authors noted.

"Additionally, we test the accuracy of free, online AI detectors. The danger of fabricated research is then highlighted, along with the reasons why one would want to fabricate medical research and potential remedies to this looming threat,” they stated in their paper.


The researchers came to this conclusion after asking ChatGPT to generate an abstract for a scientific paper about the effects of two different drugs on rheumatoid arthritis. They requested that the AI bot use data from 2012 to 2020.
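As a minimal sketch, a request along those lines could be issued through the OpenAI Python client (v1.x) as shown below. The prompt wording and the model name are assumptions for illustration; the study only says "ChatGPT".

```python
# A minimal sketch of the kind of request the researchers describe.
# Prompt wording and model choice are assumptions, not the study's
# exact inputs.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            "content": (
                "Write an abstract for a scientific paper comparing the "
                "effects of two drugs on rheumatoid arthritis, using data "
                "from 2012 to 2020."
            ),
        }
    ],
)

print(response.choices[0].message.content)
```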

ChatGPT produced a realistic abstract, complete with concrete numbers. Furthermore, when the researchers prompted it, it asserted that one drug worked better than the other, a dangerous claim for a chatbot to make.

Since ChatGPT only considers data up to 2019, it couldn’t have any figures from 2020, yet it claimed to have drawn its supporting numbers from a private database that requires a fee to access.

“Within one afternoon, one can find themselves with dozens of abstracts that can be submitted to various conferences for publication,” warned the researchers.

“Upon acceptance of an abstract for publication, one can use this same technology to write their manuscript, completely built upon fabricated data and falsified results.”

Positive ways to use AI

Despite their warnings, the researchers noted that there are also positive ways to use AI in research.

“Utilizing an AI for research is not an inherently malicious endeavor,” they stated.

“Asking an AI to grammar-check work or write a conclusion for legitimate results found in a study are other uses an AI may incorporate into the research process to cut out busywork that may slow down the scientific research process.”

The issue lies more with the production of non-existent data.

“The issue arises when one utilizes data that are not existent to fabricate results to write research, which may easily bypass human detection and make its way into a publication.

“These published works pollute legitimate research and may affect the generalisability of legitimate works.”

The researchers concluded that, for ChatGPT to continue to be used safely, the risks of fraudulent data and their implications must be taken into account, according to the report.

The study was published in the journal Patterns.

Study abstract:

Fabricating research within the scientific community has consequences for one’s credibility and undermines honest authors. We demonstrate the feasibility of fabricating research using an AI-based language model chatbot. Human detection versus AI detection will be compared to determine accuracy in identifying fabricated works. The risks of utilizing AI-generated research works will be underscored and reasons for falsifying research will be highlighted.

