
ChatGPT: these are the risks of following its medical recommendations, according to a study

Artificial intelligence has made huge strides in recent years, and nothing suggests it will stop doing so. But that doesn’t mean you should take everything it says at face value. The creators of the popular ChatGPT have themselves warned repeatedly that their tool is not necessarily one hundred percent reliable. And even less so when it comes to delicate matters.


This is precisely what happens with health issues. Using ChatGPT to obtain medical recommendations can carry risks, as a new study carried out in the United States indicates.

The risks of ChatGPT

That artificial intelligence “lies” is clear by now. Perhaps the verb “to lie” is too human for these programs, but in practice that is how they behave.

ChatGPT itself can often err in its conclusions, treating as completely true information that does not correspond to reality at all, or that is outright fallacious. Many examples have demonstrated this.

But these “hallucinations,” as AI ravings are often known, take on special weight when it comes to medical recommendations. Because yes, just as Google and the Internet are used to search for this type of information, ChatGPT is used for the same purpose. That is natural enough, one supposes. Natural, yes, but not necessarily reliable.

In case there was any doubt, the prestigious Long Island University in New York has underlined it with a new study on the matter. In their research, the scientists put 39 questions related to medications to professionals. When comparing those answers with ChatGPT’s, they found that only 10 of ChatGPT’s responses could be considered satisfactory.

The rest of the answers were either inaccurate or incomplete. In one case, moreover, the artificial intelligence created by OpenAI even relied on references that, according to the Long Island specialists, do not exist. For these reasons, their conclusion is clear: following ChatGPT’s recommendations on medical matters can be harmful and dangerous for patients.

A doctor without empathy

This issue, seen in perspective, comes down in part to a fundamental limitation: ChatGPT’s inability to develop empathy. Although AI can analyze large amounts of clinical data, it lacks the human ability to empathize and understand the patient’s personal context, which can often lead to impersonal recommendations detached from the patient’s reality.

Furthermore, human health professionals often apply clinical intuition based on experience and knowledge accumulated over years. AI, on the other hand, lacks this intuition and may not consider important subjective factors in medical decision-making.

It is another example of how ChatGPT (and presumably other AIs that have appeared or are yet to appear, such as Google’s Gemini or WhatsApp’s) can be extremely useful as a source of consultation or help, but is far from always hitting the target. And with delicate matters, it is better to be cautious.
