An AI-based second opinion service could improve clinical decision-making today

https://www.statnews.com/2024/05/03/ai-based-second-opinion-service/

Millions of Americans rely on the internet to answer questions about their own health. The public release of powerful artificial intelligence models like ChatGPT has only accelerated this trend.

In a large survey, more than half of American adults reported putting their own health information into a large language model (LLM). And there’s reason to believe these models can bring real value to these people: consider the case of a mother who, after seeing 17 physicians and receiving no diagnosis for her son with chronic pain, put MRI reports and additional history into ChatGPT. It returned a diagnosis of tethered cord syndrome, which was later confirmed — and operated on — by a neurosurgeon.

This story is not unique. Missed or delayed diagnoses harm patients every day. Each year, an estimated 795,000 Americans die or become permanently disabled from misdiagnoses. And these misdiagnoses are not exclusively rare “zebras” like tethered cord syndrome. Just 15 or so diseases, many of them common, like heart disease and breast cancer, account for half of serious harms. The sicker an individual, the higher the stakes — and the more common these errors become. In a recent study of people admitted to the hospital who were then transferred to an intensive care unit because their conditions got worse, 23% had a diagnostic error affecting their case; 17% of those errors caused severe harm or death.

While numerous factors — many of them outside the control of physicians — are at play in diagnostic errors, human cognition plays a major role. These problems have long been recognized by the medical community — the Institute of Medicine released its landmark report “To Err Is Human” in 1999, with comprehensive recommendations for tackling medical errors. But 25 years later, diagnostic errors remain stubbornly persistent.

While many people might imagine that a physician approaches a diagnosis much like Sherlock Holmes — or Dr. House — diligently collecting facts to compare against her or his encyclopedic knowledge of the disease, the reality is far more prosaic. Decades of psychological study, influenced by the pioneering work of Danny Kahneman and Amos Tversky, have shown that diagnosis is subject to the same predictable biases and heuristics as other domains of knowledge. For example, emergency room doctors were less likely to test for a pulmonary embolism (a blood clot in the lungs) when the triage information mentioned heart failure, even when objective data and documented symptoms suggested a pulmonary embolism. This suggested that the physicians got stuck on the first information given to them, a problem called anchoring bias.

Doctors do a poor job of estimating the likelihood that patients have diseases and how testing changes those probabilities — and are readily outperformed by general-purpose language models. Decades of research have similarly shown the widespread involvement of other cognitive biases such as availability bias, confirmation bias, and premature closure in the diagnostic process.
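
To make concrete what “how testing changes those probabilities” means, here is a minimal sketch of the textbook Bayesian update that clinicians are asked to approximate; the function name and the pretest probability, sensitivity, and specificity in the example are illustrative choices, not values from the studies cited above.

```python
def post_test_probability(pretest: float, sensitivity: float,
                          specificity: float, positive: bool) -> float:
    """Update a pretest probability of disease given one test result."""
    # Likelihood ratio for the observed result.
    lr = sensitivity / (1 - specificity) if positive else (1 - sensitivity) / specificity
    pretest_odds = pretest / (1 - pretest)
    post_odds = pretest_odds * lr
    return post_odds / (1 + post_odds)

# Illustrative numbers: a fairly accurate test (90% sensitivity, 90% specificity)
# applied when the pretest probability of disease is 5%.
print(round(post_test_probability(0.05, 0.90, 0.90, positive=True), 2))  # 0.32
```

Even a positive result from a fairly accurate test raises the probability of disease only to about one in three here, far from the near-certainty people tend to assume when the pretest probability is low.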

Since ChatGPT was released to the public in late 2022, there have been hundreds of demonstrations of the diagnostic reasoning capabilities of general-purpose large language models and other AI models across a broad array of diagnostic tasks, some of which we performed with various collaborators. We believe there is compelling evidence that AI, safely integrated into the clinical workflow, could help address some of the limitations of human cognition in medical diagnosis today. In particular, AI could be made available as a “second opinion” service in the hospital to assist physicians and other medical professionals with challenging medical cases and to check for blind spots in diagnostic reasoning. Second opinion services with human physicians — admittedly on a much smaller scale — have already shown that they can provide real value to patients.

What would this look like in practice?

Building a second opinion system powered by a large language model is no longer in the realm of science fiction. As a physician treating patients (A.R.) and a medical AI researcher (A.M.), we envision a system that allows a treating physician, using the electronic medical record, to place an “order.” But instead of selecting a diagnostic test, the physician would summarize the clinical question about a patient the same way they would talk to a colleague. After submitting the order, the question, along with the entire chart, would go to a secure computing environment where an LLM would process it and return a recommendation covering possible diagnoses, blind spots, and therapeutic options.
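
As a rough illustration of what such an order might look like in software, here is a hypothetical sketch; the data structures, field names, and the injected llm callable are our assumptions for the purpose of illustration, not part of any real electronic medical record or vendor API.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class SecondOpinionOrder:
    clinical_question: str   # free-text question, phrased as if asking a colleague
    chart: str               # relevant notes, labs, and imaging reports from the record

@dataclass
class SecondOpinion:
    possible_diagnoses: List[str]
    blind_spots: List[str]
    therapeutic_options: List[str]
    reviewed_by_physician: bool = False  # flipped during the human-in-the-loop review

def request_second_opinion(order: SecondOpinionOrder,
                           llm: Callable[[str], SecondOpinion]) -> SecondOpinion:
    """Package the order into a prompt and hand it to an LLM running inside a
    secure computing environment, represented here as an injected callable."""
    prompt = (
        "Provide a diagnostic second opinion.\n"
        f"Clinical question: {order.clinical_question}\n"
        f"Chart: {order.chart}\n"
        "List possible diagnoses, blind spots, and therapeutic options."
    )
    return llm(prompt)
```

The key design choice is that the physician writes a free-text question rather than filling out a structured form, mirroring how they would consult a colleague.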

Just as in the opening case, where the diagnosis of tethered cord syndrome was confirmed by a neurosurgeon, recommendations emerging from the model would first be reviewed by a physician who serves as a human in the loop to prevent obvious errors and hallucinations (instances in which an AI model confidently states factual inaccuracies). After this review, the second opinion would be sent back to the ordering physician to be placed in the medical record and considered.
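
Continuing the hypothetical sketch above, the review step could be modeled as a simple gate: nothing reaches the chart unless a reviewing physician signs off.

```python
from typing import Optional

def release_to_chart(opinion: SecondOpinion, reviewer_approves: bool) -> Optional[SecondOpinion]:
    """Human-in-the-loop gate (illustrative): a physician reviewer screens the model
    output for obvious errors and hallucinations before it is returned to the
    ordering physician and filed in the medical record."""
    if not reviewer_approves:
        return None  # rejected opinions never enter the chart
    opinion.reviewed_by_physician = True
    return opinion
```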

As with human second opinions, the requesting physician is not obligated to follow the recommendations emerging from the LLM. But the mere process of considering other options can help reduce diagnostic errors. And unlike human second opinion services, the costs of running the model can be measured in cents, and the model can serve scores of clinicians and their patients in parallel.
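
A back-of-envelope calculation shows why the per-consult cost can be measured in cents; the token counts and per-token prices below are assumptions chosen for illustration, not actual vendor pricing.

```python
# Assumptions: a summarized chart plus the clinical question on the input side,
# and a short structured second opinion on the output side. Prices are
# illustrative placeholders, not real quotes.
input_tokens = 30_000
output_tokens = 1_000
price_per_million_input = 5.00    # dollars per million input tokens (assumed)
price_per_million_output = 20.00  # dollars per million output tokens (assumed)

cost = (input_tokens * price_per_million_input
        + output_tokens * price_per_million_output) / 1_000_000
print(f"~${cost:.2f} per consult")  # about 17 cents under these assumptions
```

Even if real charts were several times longer, the marginal cost would remain far below that of a human consultation.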

To be sure, there are obvious risks that would need to be mitigated in early studies with close human involvement. LLMs encode the ethnic, racial, and gender biases of the data they were trained on, which could influence second opinions in unpredictable and harmful ways. LLMs are also capable of hallucinating; humans make errors too, but AI hallucinations may be more egregious and more likely to cause harm. Having a human expert in the loop would be absolutely essential, especially in early studies.

However, the stakes of the current rate of diagnostic errors are so high, and other attempts to reduce errors have made so little headway, that we feel now is the time to start studying these technologies. To riff on the old saying: to err is human, so AI must opine.

Adam Rodman is a practicing internist at the Beth Israel Deaconess Medical Center and an assistant professor of medicine at Harvard Medical School. Arjun K. Manrai is an assistant professor of biomedical informatics at Harvard Medical School and a founding deputy editor of NEJM AI.

