Beware ChatGPT’s Medical Advice

Viktor Eriksson

In over half the situations where hospital referral was warranted, the chatbot suggested patients either remain at home or simply arrange a standard doctor’s visit.

ChatGPT Health
Credit: OpenAI

A new study in Nature Medicine raises concerns about the safety of OpenAI's ChatGPT Health service, finding that it often fails to recommend emergency care when it is genuinely required, as reported by The Guardian.

Researchers evaluated ChatGPT Health using 60 authentic patient scenarios, ranging from minor discomfort to severe medical conditions. Three physicians independently determined the necessary level of care beforehand, and their assessments were then benchmarked against the AI tool's recommendations. Remarkably, in more than half of the cases where a patient should have been admitted to the hospital immediately, the system instead advised staying home or arranging a routine doctor's appointment.

The study found that the service performed better in unmistakable emergencies such as strokes or severe allergic reactions, but struggled with symptoms that were more complex or less clearly defined. The researchers also highlighted deficiencies in the system's handling of suicide risk, noting that crucial warnings occasionally disappeared depending on supplemental details provided within a given scenario.

In its defense, OpenAI stated that the study's findings do not accurately reflect how the service is typically used in the real world, and emphasized that the model undergoes continuous updates, The Guardian reports.
