Warning! ChatGPT can fabricate convincing medical data
New research shows that ChatGPT can fabricate convincing medical data, according to a report by Cosmos Magazine published on Sunday.
The study’s authors claim this will make it easier than ever to publish fraudulent research, which in turn can cast doubt on legitimate work.
“In this paper, we determine how AI-generated chatboxes may be utilized to fabricate research in the medical community, with examples. Furthermore, we compare studies of human detection of AI-based works to gauge the accuracy of identification of fabricated, AI-generated works," the authors noted.
"Additionally, we test the accuracy of free, online AI detectors. The danger of fabricated research is then highlighted, along with the reasons why one would want to fabricate medical research and potential remedies to this looming threat,” they stated in their paper.
The researchers came to this conclusion after asking ChatGPT to generate an abstract for a scientific paper about the effects of two different drugs on rheumatoid arthritis. They requested that the AI bot use data from 2012 to 2020.
ChatGPT produced a realistic abstract and even supplied realistic-looking figures. Furthermore, when prompted by the researchers, it claimed that one drug worked better than the other, a dangerous assertion for a chatbot to make.
Since ChatGPT’s training data only extends to 2019, it could not have had any figures from 2020, yet it claimed to have drawn its supporting numbers from a private database that requires a fee to access.
“Within one afternoon, one can find themselves with dozens of abstracts that can be submitted to various conferences for publication,” warned the researchers.
“Upon acceptance of an abstract for publication, one can use this same technology to write their manuscript, completely built upon fabricated data and falsified results.”
Positive ways to use AI
Despite their warnings, the researchers did note that there can be positive ways for researchers to use AI.
“Utilizing an AI for research is not an inherently malicious endeavor,” they stated.
“Asking an AI to grammar-check work or write a conclusion for legitimate results found in a study are other uses an AI may incorporate into the research process to cut out busywork that may slow down the scientific research process.”
The issue lies more with the production of non-existent data.
“The issue arises when one utilizes data that are not existent to fabricate results to write research, which may easily bypass human detection and make its way into a publication.
“These published works pollute legitimate research and may affect the generalisability of legitimate works.”
The researchers concluded by noting that, for ChatGPT to continue to be used safely, the possibility of fraudulent data and its implications must be taken into account, according to the report.
The study was published in the journal Patterns.
Study abstract:
Fabricating research within the scientific community has consequences for one’s credibility and undermines honest authors. We demonstrate the feasibility of fabricating research using an AI-based language model chatbot. Human detection versus AI detection will be compared to determine accuracy in identifying fabricated works. The risks of utilizing AI-generated research works will be underscored and reasons for falsifying research will be highlighted.