Patients and healthcare providers should approach AI with caution

The technology may offer benefits, but careful oversight is necessary

Written by Kerry Wong

Last week, I met with a new hepatologist to address concerns about my liver function tests. The week before, I had a follow-up with the neurologist who’s been treating my sarcoidosis-associated small fiber neuropathy for years. At both appointments, my doctors asked permission to use artificial intelligence (AI) to take notes during our visit.

I instantly felt myself cringe. I’d just seen this technology on display in a recent episode of “The Pitt,” and it did not go well. The HBO Max series, now in its second season, focuses on a single shift in a Pittsburgh emergency room, with each episode reflecting one hour in the ER. It’s been hailed by medical professionals (and professional patients) as perhaps the most realistic medical drama ever created.

In the second episode of the season, “8:00 A.M.,” new attending physician Baran Al-Hashimi (played by Sepideh Moafi) is eager to demonstrate an app “that can listen to our conversation and the details of my physical exam, and write it all up in [the patient’s] medical record.” While most are wowed by the possibility, resident Dennis Whitaker (Gerran Howell) is hesitant to celebrate “without seeing the full thing.”

Al-Hashimi explains that this technology can save doctors substantial charting time, “improving both patient and physician satisfaction.” However, Whitaker reads the AI-generated content more carefully and notes a significant error. “It says here she takes Risperdal, an antipsychotic. She takes Restoril when needed for sleep.” In response, medical student Joy Kwon (Irene Choi) quips, “AI: almost intelligent.”

I told my neurologist about this episode and asked whether she finds the app-generated notes accurate. She assured me that, "of course," she has to check the record and correct any errors, but added that the app still takes more thorough, detailed notes than she could. And so, a little less reluctantly, I consented.

That’s the part that scares me, though. Not so much with this doctor — I trust her. But I worry about less diligent members of the medical community relying too much on AI and not enough on their own intelligence, knowledge, and intuition. Al-Hashimi, in fact, derides others in the ER who are led by their “gut” instincts (though they often turn out to be correct).

I know, I sound like an old fogy who’s afraid of new technology, and maybe I am. But my fear is more about its application than the AI technology itself.

Skepticism is healthy

We’ve all searched the internet for our symptoms as they developed and our disease(s) once we were finally diagnosed. As symptoms change or we consider new treatments, we run back to our phones or computers to learn more.

These days, Google search results start with an AI overview, a compilation of details about the topic in question drawn from a variety of sources. But what information is included, and which sources it comes from, makes all the difference. When it comes to our health, we need to know that the data we're receiving, and that our providers are using to treat us, are accurate and verified.

Unsurprisingly, I am not alone in my trepidation. Bionews, the parent company of Sarcoidosis News and more than 50 other rare disease communities, recently released a Rare Trust in AI Index. According to this survey, “a majority of rare disease patients remain skeptical of health information generated by artificial intelligence, even as AI tools become more visible in healthcare.”

This skepticism is a good thing. It forces us not to simply accept the AI overview, but to look further, to the sources from which that information is gathered. I know I can rely on the sarcoidosis information I’ll find at trusted news sites, hospitals, and patient advocacy organizations.

Doctors often warn patients against using “Dr. Google,” encouraging us to be patient and trust their education and experience. I can somewhat understand that caution; having all the answers at our fingertips means we’ll have the wrong answers right there, too. If we aren’t able to discern the difference, we may face dangerous consequences.

Personally, I feel most comfortable with a combination of both. I need doctors whose expertise and judgment I can trust, but I also need to do my own research to guide my conversation with them. Now, I need my doctors to have that same trust, combining the benefits of AI with the quality assurance of their own oversight.

I am trying to come to terms with this new technology and its place in managing my sarcoidosis. It seems inevitable, but it must be used with caution by both our providers and ourselves. AI can be a great starting point — an arrow that can guide us in one direction or another — but ultimately, we must choose which path to follow.


Note: Sarcoidosis News is strictly a news and information website about the disease. It does not provide medical advice, diagnosis, or treatment. This content is not intended to be a substitute for professional medical advice, diagnosis, or treatment. Always seek the advice of your physician or other qualified health provider with any questions you may have regarding a medical condition. Never disregard professional medical advice or delay in seeking it because of something you have read on this website. The opinions expressed in this column are not those of Sarcoidosis News or its parent company, Bionews, and are intended to spark discussion about issues pertaining to sarcoidosis.
