Rethinking medical education in times of AI: Part 1 “When doctors aren’t the only ones making diagnoses”


Sure, the promise isn’t entirely new.

When IBM’s supercomputer Watson beat “Jeopardy!” champions Ken Jennings and Brad Rutter in 2011, the company tried to reach for the stars. The logic was simple: if their approach could win a quiz show, why not apply it to medicine?

IBM called it “the future of knowing”, and the marketing message, at least, was strong. But reality was brutal. Watson didn’t revolutionize medicine; instead, IBM’s stock price took a long-lasting hit. Lead scientist David Ferrucci left a year later. Watson had failed.

“Solving medicine” turned out to be extremely complex, at least compared to a popular game of trivia. Yet the promise of medical AI kept growing and growing.

A few years later, when machine learning expert Geoff Hinton was asked at a 2016 Toronto conference about the most exciting things to come, he confidently replied:

“Let me start by just saying a few things that seem obvious. I think if you work as a radiologist, you’re like the coyote that’s already over the edge of the cliff but hasn’t looked down. So he doesn’t realize there’s no ground underneath him. People should stop training radiologists.”

“Obvious”. Sure. Yet whenever I tried to meet up with my radiologist friend David in the years that followed, he was always the one with the tight schedule and a barely manageable workload.

If AI had been able to merely cut his workload in half, he surely would have been deeply grateful, even if you had called him a coyote before.

Yet other experts joined the chorus. Asked by then-IMF director Christine Lagarde whether there would be an AI doctor, thought leader Yuval Noah Harari replied in a 2018 interview: “Certainly. I think it’s coming quite soon.”

With the Watson debacle in mind, I naturally stayed skeptical. But then came the fall of 2022, and suddenly everything seemed different: LLMs were everywhere!

So are we about to finally meet Sherlock?

AI will support doctors in the diagnostic process

AI making diagnoses

It’s not only the experience that pretty much everyone has probably had by now: you can type almost any question into ChatGPT (or a similar model) and you always get a surprisingly sound(-sounding) answer. Even when it comes to medical topics.

People have already reported remarkable real-life success stories. Only recently, a US woman claimed that using ChatGPT had finally led to the correct diagnosis for her 4-year-old son. Three short words (“Tethered Cord Syndrome”) turned out to be the simple answer they had been so desperately searching for.

And studies are also starting to catch up – not only in radiology. Take a paper that was recently published in the journal “Rheumatology International”, for instance. In this study, Martin Krusche and his co-authors evaluated the diagnostic accuracy of a large language model, directly comparing rheumatologists with ChatGPT-4.

In cases of patients with inflammatory rheumatic disease (IRD), “ChatGPT-4 provided the top diagnosis in 71 % vs 62 % in the rheumatologists’ analysis. Correct diagnosis was among the top 3 in 86 % (ChatGPT-4) vs 74 % (rheumatologists).”

This, of course, is only one of the many astonishing results of this particular study, and, more broadly speaking, of the countless other studies currently being conducted.

This time it really seems like the “big AI promise” might actually deliver.

Yes, each study result still needs to be confirmed by other researchers. Yes, many AI-based studies are conducted in “lab settings” and still need further practical evaluation, as well as special safeguards. And yes, the given example shows that AI is far from perfect…

…but so are doctors.

Doctors in the Age of AI

As in any profession, the quality spectrum from good to bad is vast. And even a generally “good doctor” can have a bad day or simply be biased by experience when cases seem (too) similar to previous ones (see Kahneman’s “Thinking, Fast and Slow”).

I myself walked around for years with a seemingly chronic dermatological disease because a seasoned doctor had overconfidently told me so. It “only” took three years and a coincidental encounter at a dinner party for my “incurable nightmare” to indeed be cured (by briefly applying an antifungal shampoo). Hallelujah.

But even if a doctor’s diagnosis is correct, the experience can still be, let’s call it, “bumpy” at times. Many patients report that their clinician had a paternalistic style or seemed impatient. Something ChatGPT would never do to you (unless you’ve specifically asked it to).

Of course, AI predictions remain tricky, and many important questions are still unclear: Will the clinical use of AI, for example, follow more of a hybrid or a centaur approach?

Even though it now seems highly likely that doctors will soon integrate AI into their workflows, nobody knows which tasks will be solved in a joint “AI-AND-doctor” effort (the hybrid approach) and which in a fully separated “AI-OR-doctor” effort (the centaur approach: some tasks handled entirely by the doctor, others solely by the AI).

Many regulatory, safety, and ethical issues might also keep delaying what already seems to work in studies. But no matter exactly when this shift happens, there is one prediction we can already make with confidence:

The “End of Dr. House” is near

The Dr. House types of physician (“knowledgeable, yet patronizing”) will soon be outdated.

Why? Because AI will raise all doctors’ diagnostic abilities and thereby shrink the quality gap between the good and the bad ones (something studies are already starting to indicate).

And even if some Dr. House types manage to remain top of their class (knowledge-wise) for a while, it will only be by a small, negligible margin. Not enough to compensate for a bad temper.

Instead, the best doctors will be the ones who both integrate AI into their daily practice and are “all-stars” when it comes to health communication.

Now you might argue: but patients can also “talk” to their AI chatbot for hours. Of course. However, that doesn’t mean they’ll be able to fully understand and properly evaluate all the associated risks. After all, studying medicine takes time, no matter who the novice is.

But more importantly, whenever hardship hits and people get severely sick, it’s essential to get a neutral opinion from someone who has been trained in exactly this: giving professional advice. Because deep down, we all know it: in an emotionally vulnerable state, we tend to make bad decisions. I sure have.

Of course, there are also family and friends around to help out. And that’s great. But given the emotional closeness of these relationships, there are always subconscious strings attached and other emotional biases (see Hanson’s “The Elephant in the Brain”). Neutral advice looks different.

How Doctors Become All-Star Communicators

So what do modern doctors need in the age of AI to stand out and become better health communicators?

One central aspect is emotional intelligence. Using AI, the patient may have already retrieved lots of information regarding their condition. The clinician therefore needs to respectfully explore (and maybe challenge) the patient’s understanding of their illness.

Discussing sensitive topics with patients can be even more challenging if the patient has come across certain terms or even diagnoses before talking to a doctor – which is likely in the age of AI.

Building on this, clinicians and patients need to negotiate a mutually acceptable approach to treatment. More than asking what is theoretically possible according to the AI, this requires a meaningful doctor-patient exchange that looks specifically at the patient’s needs.

The emotional aspect will always be covered by the doctor

Medical education is key

The other central aspect is a little more pragmatic: modern physicians need to be excellent at breaking down complex information and adjusting it to the patient’s level of comprehension.

Ultimately, this requires excellent medical education for the doctors themselves. Because only if physicians have fully grasped a medical topic both in its complexity and(!) in its simplicity can they effectively pass that knowledge on to others.

Good didactics, useful analogies, state-of-the-art storytelling, and appealing visuals can all help clinicians communicate better with their patients.

Because in the end, it’s simple: if AI grows stronger analytically, human doctors need to become someone you really want to trust and listen to.

…and for good reason. Continue with Part 2 of this article series to find out what impact AI’s “hallucination problem” can have on doctors and patients.

Sebastian Szur is a writer and medical doctor. After completing his medical studies, he went into health tech, where he focused on refining diagnostic algorithms and communicating digital innovation. He has also worked at a clinic for internal medicine and psychosomatics, studying the connection between mental and physical health. Writing has always been an essential part of his life. He is the Head of Medical Writing at Medudy.
