ChatGPT makes up fake data about cancer, doctors warn


2023-04-05T12:47:00+05:00 News Desk

Doctors are warning against using ChatGPT for medical advice after a study found it made up health data when asked for information about cancer, Mail Online reported.

The AI chatbot answered one in ten questions about breast cancer screening wrongly, and the correct answers it gave were not as ‘comprehensive’ as those found through a simple Google search.

Researchers said in some cases the AI chatbot even used fake journal articles to support its claims.

It comes amid warnings that users should treat the software with caution as it has a tendency to ‘hallucinate’ – in other words make things up.

Researchers from the University of Maryland School of Medicine asked ChatGPT to answer 25 questions related to advice on getting screened for breast cancer.

With the chatbot known to vary its response, each question was asked three separate times. The results were then analyzed by three radiologists trained in mammography.

The ‘vast majority’ - 88 percent - of the answers were appropriate and easy to understand. But some of the answers were ‘inaccurate or even fictitious’, they warned.

One answer, for example, was based on outdated information. It advised delaying a mammogram for four to six weeks after getting a Covid-19 vaccination; however, this advice was changed over a year ago to recommend that women not wait.

ChatGPT also provided inconsistent responses to questions about the risk of getting breast cancer and where to get a mammogram. The study found answers ‘varied significantly’ each time the same question was posed.

Study co-author Dr Paul Yi said: ‘We’ve seen in our experience that ChatGPT sometimes makes up fake journal articles or health consortiums to support its claims.

‘Consumers should be aware that these are new, unproven technologies, and should still rely on their doctor, rather than ChatGPT, for advice.’

The findings – published in the journal Radiology - also found that a simple Google search still provided a more comprehensive answer.

Lead author Dr Hana Haver said ChatGPT relied on a single set of recommendations, issued by the American Cancer Society, and did not offer the differing recommendations put out by the Centers for Disease Control and Prevention or the US Preventive Services Task Force.

The launch of ChatGPT late last year drove a wave of demand for the technology, with millions of users now using the tools every day for everything from writing school essays to searching for health advice.

Microsoft has invested heavily in the software behind ChatGPT and is incorporating it into its Bing search engine and Office 365, including Word, PowerPoint and Excel.

But the tech giant has admitted it can still make mistakes.

AI experts call the phenomenon ‘hallucination’: when a chatbot cannot find an answer in its training data, it confidently responds with a made-up answer it deems plausible.

It then repeatedly insists on the wrong answer without any internal awareness that it is a product of its own imagination.

Dr Yi however suggested the results were positive overall, with ChatGPT correctly answering questions about the symptoms of breast cancer, who is at risk, and questions on the cost, age, and frequency recommendations concerning mammograms.

He said the proportion of right answers was ‘pretty amazing’, with the ‘added benefit of summarising information into an easily digestible form for consumers to easily understand’.

Over a thousand academics, experts, and bosses in the tech industry recently called for an emergency stop in the ‘dangerous’ ‘arms race’ to launch the latest AI.

They warned the battle among tech firms to develop ever more powerful digital minds is ‘out of control’ and poses ‘profound risks to society and humanity’.
