The ghost in the consultation room

Barry Nguyen explains the pitfalls of patients relying on AI for diagnosis and how to navigate the ChatGPT era as a clinician.

The patient arrives with an MRI report in one hand and a smartphone in the other. Before you’ve even performed a single provocation test, they present you with a neatly bulleted ‘recovery road map’. 

This isn’t a collection of disparate Google searches. 

It is the output of ChatGPT: a conversational artificial intelligence (AI) chatbot that allows users to upload their medical documents and receive a bespoke, albeit automated, management plan. 

As the Royal Australian College of General Practitioners (RACGP) recently highlighted, we have reached a crossroads where innovation meets significant clinical risk. 

For the physiotherapist, this shift represents a fundamental challenge to our role as the primary architects of physical rehabilitation. 

The algorithm knows the average; we know the exception 

The RACGP’s recent analysis of ChatGPT sparked a necessary debate: is this a digital assistant or a ‘Dr Google 2.0’? 

While our GP colleagues are primarily concerned with diagnostic errors, the threat to physiotherapy is more subtle and more existential. 

It is the threat of de-contextualised prescription. Consider Sarah, a 34-year-old office worker with chronic lower back pain. 

Her ChatGPT-generated plan recommends a progressive loading program based on her uploaded MRI showing ‘mild L4–L5 disc bulge’ and her self-reported pain scores. 

The protocol is evidence-based and the exercise selection is textbook. 

The periodisation follows current best practice. It is also completely wrong for Sarah. 

The algorithm cannot see that Sarah guards into extension because of a previous pregnancy-related diastasis. 

She catastrophises every twinge because her father was disabled by back surgery. 

Her workstation set-up loads her spine asymmetrically for nine hours daily. 

Most critically, she is three weeks away from a work deadline and psychologically incapable of adhering to anything that requires more than ten minutes. 

ChatGPT knows that, statistically, loading is the gold standard for disc pathology. 

But it cannot palpate the protective muscle spasm, observe the kinetic chain compensation at the hip or read the anxiety in her face when you mention the word ‘deadlift’. 

The algorithm has read thousands of papers. We have read thousands of bodies. 

The radiology report as horror story 

One of the most significant risks identified by medical bodies is the ‘literalism’ of AI when interpreting imaging. 

A radiology report describing ‘severe disc degeneration’ or ‘moderate osteoarthritic changes’ is processed by a large language model as objective pathology requiring intervention. 

For the patient, it becomes a diagnosis. For AI, it becomes a treatment trigger. 

Physiotherapists spend years mastering the art of therapeutic language: explaining that a disc bulge is as normal at 45 as grey hair, that degeneration is correlation rather than causation and that pain rarely maps neatly onto structural findings. 

We practise what pain science researcher Lorimer Moseley calls ‘reconceptualising pain’. ChatGPT, by contrast, inadvertently practises the opposite. 

It validates the patient’s worst fears by treating every radiological finding as a structural problem requiring a structural solution. 

The AI-generated plan for that degenerative disc often includes phrases like ‘to address the severe degeneration’ or ‘targeting the damaged area’—language that embeds a pathoanatomical narrative we’ve spent two decades trying to dismantle. 

We are now treating not just the injury, but the AI’s nocebo effect. 

From gatekeeper to guide 

We cannot (and should not) attempt to compete with AI on information retrieval. 

Instead, we must reclaim what algorithms cannot replicate: contextualised clinical reasoning. 

This requires three concrete shifts in practice. 

The collaborative audit 

When a patient arrives with an AI-generated plan, resist the instinct to dismiss it.

Instead say, ‘This is a really solid starting point. Let’s go through it together and see how we adapt it to you specifically.’ 

Then conduct a live audit. 

Explain why that recommended squat pattern needs modification because of their ankle mobility.

Demonstrate why the AI’s three-times-weekly frequency won’t work for their recovery capacity right now. Show them the difference between a generic protocol and a personalised one. 

This positions you not as a competitor to the AI, but as the essential interpreter, the specialist who transforms information into wisdom. 

Radiology reframing as standard practice 

Make it routine to address imaging reports directly, even before the patient mentions AI. 

Use phrases like ‘Your MRI shows some disc bulging, which sounds scary. Here’s what that actually means for someone your age...’ 

By pre-empting the AI’s literal interpretation, you inoculate against its nocebo effect. 

You become the voice of reassurance before the algorithm becomes the voice of alarm. 

Strategic automation for human connection 

Use professional, Ahpra-compliant AI tools for your own documentation and administrative tasks. 

Voice-to-text transcription, automated exercise program builders and AI-assisted progress notes can save you 30–40 minutes per day. 

Invest that reclaimed time where it matters: in the hands-on assessment, the motivational interviewing or the treatment session that runs ten minutes over because the patient finally felt safe enough to discuss their fear of re-injury. 

Automate the documentation. Protect the relationship. 

The irreducible core 

In this era of AI-assisted healthcare, a pattern has emerged in practice. 

Patients arrive with ChatGPT-generated plans. 

They are intrigued, sometimes impressed, occasionally overwhelmed. But they keep coming back. 

Not because they lack information (they have more than ever). 

They return because they need someone who can answer the question the algorithm cannot. 

‘Will this work for me?’ That question requires pattern recognition across ten thousand previous patients. It requires reading facial expressions during movement. 

It demands knowing when to push and when to reassure, when evidence supports aggression and when clinical wisdom counsels patience. 

Our value is no longer in knowing what to prescribe. 

It is in the irreducible judgement of knowing when, how and why to prescribe it for the specific person in front of us. In an era where information is infinite and free, context is the scarcity. 

We are not competing with AI. 

We are providing what comes after it: the clinical reasoning and human connection that transforms data into healing. 

That has always been our role. 

The algorithm has simply made it unmissable.

Barry Nguyen APAM is a physiotherapist, a software engineer and the founder of CliniScribe AI. He is a member of the APA’s AI Advisory Group and a digital health adviser for the Australian Digital Health Agency. His focus is on helping clinicians adopt AI safely, ethically and effectively without compromising care or compliance.
 

© Copyright 2026 by Australian Physiotherapy Association. All rights reserved.