ChatGPT Performs Well as ‘Partner’ in Diagnosing Patients – Drugs.com MedNews

Medically reviewed by Carmen Pope, BPharm. Last updated on Dec 13, 2023.

By Ernie Mundell, HealthDay Reporter

TUESDAY, Dec. 12, 2023 — Doctors are skilled decision-makers, but even the smartest physicians might be well-served with a little diagnostic help from ChatGPT, a new study suggests.

The main benefit comes from a thinking process known as “probabilistic reasoning” — knowing the odds that something will (or won’t) happen.

“Humans struggle with probabilistic reasoning, the practice of making decisions based on calculating odds,” explained study lead author Dr. Adam Rodman, of Beth Israel Deaconess Medical Center in Boston.

“Probabilistic reasoning is one of several components of making a diagnosis, which is an incredibly complex process that uses a variety of different cognitive strategies,” he explained in a Beth Israel news release. “We chose to evaluate probabilistic reasoning in isolation, because it is a well-known area where humans could use support.”

The Beth Israel team used data from a previously published survey of 550 health care practitioners. All had been asked to perform probabilistic reasoning to diagnose five separate medical cases.

In the new study, however, Rodman’s team gave the same five cases to the large language model (LLM) behind ChatGPT, GPT-4.

The cases included information from common medical tests, such as a chest scan for pneumonia, a mammogram for breast cancer, a stress test for coronary artery disease and a urine culture for urinary tract infection.

Based on that information, the chatbot used its own probabilistic reasoning to reassess the likelihood of various patient diagnoses.

Of the five cases, the chatbot was more accurate than the human practitioners for two, similarly accurate for another two, and less accurate for one. The researchers considered this a “draw” when comparing humans to the chatbot for medical diagnoses.

But the chatbot excelled when a patient’s tests came back negative (rather than positive), becoming more accurate at diagnosis than the doctors in all five cases.

“Humans sometimes feel the risk is higher than it is after a negative test result, which can lead to over-treatment, more tests and too many medications,” Rodman pointed out. He’s an internal medicine physician and investigator in the department of medicine at Beth Israel.
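The kind of update the clinicians and the chatbot were asked to perform follows Bayes’ theorem: a test result shifts the pre-test odds of disease by the test’s likelihood ratio. The sketch below illustrates why a negative result should lower the estimated risk; the probabilities, sensitivity and specificity used here are hypothetical and are not taken from the study.

```python
def post_test_probability(pretest_p, sensitivity, specificity, positive):
    """Update a disease probability with a test result via Bayes' theorem.

    A positive result multiplies the pre-test odds by the positive
    likelihood ratio (sensitivity / (1 - specificity)); a negative
    result multiplies them by the negative likelihood ratio
    ((1 - sensitivity) / specificity).
    """
    if positive:
        likelihood_ratio = sensitivity / (1 - specificity)
    else:
        likelihood_ratio = (1 - sensitivity) / specificity
    pretest_odds = pretest_p / (1 - pretest_p)
    post_odds = pretest_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Illustrative numbers only: a 20% pre-test probability and a test
# with 90% sensitivity and 85% specificity that comes back negative.
p = post_test_probability(0.20, 0.90, 0.85, positive=False)
print(round(p, 3))  # the negative result drops the risk to about 2.9%
```

As the article notes, the intuitive mistake is to leave the estimate near the pre-test 20% after a negative result; the arithmetic shows the risk should fall to roughly 3% in this hypothetical scenario.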

The study was published Dec. 11 in JAMA Network Open.

It’s possible, then, that doctors may someday work in tandem with AI to become even more accurate in patient diagnosis, the researchers said.

Rodman called that prospect “exciting.”

“Even if imperfect, their [chatbots’] ease of use and ability to be integrated into clinical workflows could theoretically make humans make better decisions,” he said. “Future research into collective human and artificial intelligence is sorely needed.”


Sources