A Blog by Jonathan Low


Apr 18, 2023

Why Without Effective Oversight, AI Could Hurt Patient Medical Care

Although it disappointed many with its impact on the Covid pandemic, AI may help provide better outcomes in certain healthcare situations. 

But there are growing concerns that it will be viewed as a panacea whose recommendations provide simplistic but authoritative answers to complex questions, that insurance companies will rely on it as a cheaper alternative to physicians, and that it will fuel more malpractice lawsuits — all of which could degrade patient care. JL 

Dr. Marc Siegel reports in USA Today:

Medical applications of AI are based on pattern recognition from analysis of diseases such as diabetes or heart disease before it occurs, or mutations of brain tumors while surgery is going on. (But) AI threatens clinical judgment formed from years of experience (and) empathy for patients. Patients are already using it for medical advice. And malpractice risk? If you are a (doctor) who disagrees with your AI, a patient could sue, using AI recommendations as evidence. Insurance companies use AI algorithms for authorization, whether or not to cover a medication or test. Fights for insurance coverage could become even more escalated as personalized medicine is replaced by algorithms.

Artificial intelligence can be a useful scientific tool, but it also could threaten a doctor’s essential role

Medical school provided a similar kind of intelligence for me in that my brain was bombarded with a billion factoids, which laid a tapestry of information. Buried in my unconscious mind was the minutiae of nephrology, which I could bring to bear when I had a sick kidney patient, or the physiology of a failing heart when my patient’s lungs filled with fluid.

But the medical applications of computerized artificial intelligence are different. AI is based on a more precise pattern recognition from retinal analysis of diseases such as diabetes or heart disease before it occurs, or even the kind of mutations of a glioma (one of the worst kinds of brain tumors) while surgery is still going on.

Recent studies also have examined pre-cancerous stem cells in the blood as well as other factors for AI to analyze that could help with diagnosis and treatment. 

Nonetheless, what AI will always lack is my clinical judgment formed from years of experience, not to mention my empathy for my patients. Increasingly, AI threatens that.

You have only to look at a study recently published in Nature Biomedical Engineering that looked at intraoperative video monitoring to instruct surgeons to understand that the boundary between doctor and computer is too easily blurred in a way that could intimidate or even threaten a surgeon’s abilities.


And what about malpractice risk? If you are a radiologist, a dermatologist or a surgeon who decides to disagree with your AI feed, based on years of clinical judgment and experience, and you end up being proven wrong in retrospect, what is to prevent your patient from suing you and using the AI recommendations as evidence?

This might well intimidate doctors from going against AI recommendations for treatment, even if their judgment tells them to do so.

Consider that AI, when applied to clinical medicine, can give you only general answers. It cannot know the nuances of your case or history.

ChatGPT is a popular new AI bot that answers users' questions. Patients already are using it for medical advice.

As Dr. Isaac Kohane, chair of the Department of Biomedical Informatics at Harvard, told the New England Journal of Medicine: “Now with these large language models like ChatGPT, you see patients going to it and asking questions that they’ve wanted to ask their doctors – because, as is commonly the case, you forget things when you go to the doctor, perhaps because you’re stressed, and because, unfortunately, doctors don’t have that much time.”

I'm concerned about how patients will use advice from AI

Kohane is excited about this advancement, but I am deeply concerned about it. While he is right that my availability in the office for face time with my patients is limited, especially because of electronic health records documentation requirements, the solution is definitely not after-visit consultations with artificial intelligence, which could easily provide information that misleads rather than helps a patient.

In the same New England Journal of Medicine article, another AI expert, Dr. Maia Hightower, chief digital and technology officer at University of Chicago Medicine, pointed out the growing role of AI as an administrative tool in the opaque interface among doctors, patients and insurance companies.

“So in order to communicate with payers, with our insurance companies, we’ll often have bots or automation that transfers information from the health system to the insurance company and back," Hightower said. "In the case of insurance companies, we know that they often will use AI algorithms for prior authorization of procedures, whether or not to cover a particular medication or test. And in those cases, there isn’t much transparency on our side as a provider organization.”


As a practicing internist, I have a big problem with this and can envision a future where fights for insurance coverage become even more escalated than they are already – and where personalized medicine is replaced by algorithms. What’s to stop insurance companies from replacing me with a cheaper, more predictable AI robot, who practices some of the science but none of the art of medicine?
