When AI makes mistakes: ChatGPT can generate fake data to support scientific hypotheses

Artificial intelligence is all the rage. While companies like Meta outline plans to add it to applications such as WhatsApp, ChatGPT remains the most popular AI tool of the moment: an assistant that many people use daily to answer all kinds of questions.

But that does not mean that the chatbot developed by OpenAI is always rigorous in its responses. At least, that is what a new study suggests, claiming that ChatGPT can generate fake data to support scientific hypotheses.

ChatGPT and the invention of data

We have all heard about the supposed risks that artificial intelligence poses to humanity, some of them closer to science fiction than to anything reality has so far demonstrated.

At the end of the day, ChatGPT is nothing more than a tool that synthesizes data and answers questions based on it. But what happens if that data is not actually true?

According to a new study published in JAMA Ophthalmology, that is precisely what happens with OpenAI's AI. Researchers Andrea Taloni, Vincenzo Scorcia and Giuseppe Giannaccare, concerned about the integrity of artificial intelligence when dealing with certain scientific hypotheses, decided to put it directly to the test.

Their exercise was simple: have the AI simulate a study comparing two surgical procedures for an eye condition. The AI's response included 160 male and 140 female participants and described data that, while seemingly correct at first glance, led to conclusions the experts found clearly wrong.

Essentially, ChatGPT asserted that one of the proposed methods was more effective than the other, when the experts say the exact opposite is true. The problem? The AI invented information to support its conclusion, something that raised concern among the scientists who carried out the study.

ChatGPT is a “cliché generator”

Although the news may seem surprising, with a little perspective it really is not. ChatGPT is a complex tool full of possibilities, but it is by no means infallible. Its friendly, very “human” presentation can obscure the reality: its main function is to be plausible, not necessarily truthful.

As many specialists have pointed out, ChatGPT is incapable of reasoning, however much it may sometimes seem otherwise. What does this mean? That the conclusions it reaches are not always sensible or true. The well-known professor Ariel Guersenzvaig, for example, has already warned that the OpenAI platform “invents its sources.”

UNED vice-rector Julio Gonzalo reached a similar conclusion: he shared on social media a search about Spanish poets in which the AI invented the birth dates of many authors, apparently to fit the requests the professor had previously made.

To what extent can ChatGPT, and artificial intelligence in general, be trusted? That question is on many people's minds right now. Nor should we forget that this is a technology still in its infancy.
