Bridging the AI divide in physiotherapy
Barry Nguyen examines how physiotherapists are adopting artificial intelligence and highlights the benefits of embracing technology, the costs of avoidance, the risks of uninformed implementation and what lies ahead for the profession.
Artificial intelligence (AI) isn’t coming to physiotherapy—it’s already here.
AI is embedded in how we communicate, automate workflows and, increasingly, deliver patient care.
But after running AI workshops for over 1000 physiotherapists and allied health professionals across Australia in the past 18 months, one thing has become impossible for me to ignore: a stark divide is forming in our profession.
On one side are clinicians who are leveraging AI to gain time, improve clarity and extend their career longevity.
On the other are those being left behind, not due to lack of skill or intelligence but because of inertia, confusion or misinformation about what AI can and cannot do safely in healthcare.
More concerning still, many clinicians are unintentionally breaching data privacy laws and Ahpra guidelines as they experiment with AI tools that were never designed for healthcare use.
A tale of two practices
The divide I’m witnessing isn’t subtle.
Across hundreds of clinicians, clear patterns have emerged that reveal two very different professional realities.
There is the AI-enabled practice where:
- clinicians save up to 90 minutes per day using AI scribes, automation tools and plain-language patient summaries
- professionals invest up to US$200 per month in premium AI subscriptions to accelerate clinical tasks and practice marketing
- practitioners finish documentation during work hours instead of staying late.
In contrast, the traditional practice sees:
- clinicians manually typing clinical notes well into the evening, unaware that safe alternatives exist
- students using AI to generate treatment plans they can’t explain or justify, undermining their learning
- professionals feeling increasingly overwhelmed by administrative burden.
But there’s a third, more troubling category emerging: the unknowingly noncompliant practice.
The compliance crisis hiding in plain sight
Perhaps the most serious issue is one many clinicians don’t even realise they’re creating.
I regularly encounter physiotherapists who are unknowingly entering personal health information into free AI tools like ChatGPT without understanding that this may violate Australian data privacy laws (specifically the Privacy Act 1988) and Ahpra’s Code of conduct regarding patient confidentiality.
These tools store data on servers outside Australia, often without end-to-end encryption or business associate agreements.
In essence, convenience is coming at the cost of compliance, and clinicians don’t even know they’re at risk.
The informed consent gap
Adding to this compliance concern is a fundamental oversight in patient care: the failure to obtain informed consent for AI use.
Patients frequently assume their health data is managed solely by their treating clinician, unaware it might be processed by external, non-health-specific systems.
This isn’t merely an oversight—it’s a violation of foundational legal and ethical requirements in medical practice.
The introduction of AI doesn’t diminish our responsibility for informed consent; it amplifies its importance and underscores the need for transparency in AI-driven healthcare.
Why smart clinicians are avoiding smart technology
Through workshop feedback and clinic conversations, I’ve identified three primary barriers preventing safe and effective AI adoption.
Not having time to learn how to use it
The irony here is profound—AI tools are specifically designed to return time to clinicians.
Many can be adopted in under an hour with minimal training, yet this perceived time barrier keeps clinicians trapped in inefficient workflows.
Thinking they’re not tech-savvy enough
This reveals a fundamental misunderstanding.
Clinicians assume they need to grasp machine learning algorithms or complex prompting techniques.
They don’t.
The best AI tools are built for healthcare professionals, not software engineers.
Believing it’s too risky for healthcare
This concern is absolutely valid, but the greater risk lies in using the wrong tools in the wrong way.
What’s needed isn’t avoidance but clear guidance on safe implementation.
What responsible AI can deliver
When clinicians use ethical, privacy-compliant AI tools specifically designed for healthcare, they can:
- generate accurate, editable SOAP notes, GP letters and patient summaries
- provide clear post-treatment instructions in plain English
- automate administrative workflows like appointment reminders and patient onboarding
- support clinical reasoning in complex cases
- create condition-specific patient education resources in seconds
- reclaim time to focus on direct patient care, mentoring and clinical decision-making.
The key phrase here is ‘when used responsibly’.
With the right tools—built for healthcare and hosted securely—clinicians improve documentation quality, reduce burnout and enhance patient communication without compromising privacy or professional standards.
The path forward
To bridge this divide safely and ethically, our profession must prioritise the following four key areas.
Comprehensive digital literacy education
We need training that covers not just how to use AI but when, where and why it’s appropriate.
This education must reach clinicians at all career stages, from students to senior practitioners.
Clear, practical compliance guidelines
The profession needs specific guidance on compliant AI use, including explicit direction on what information should never be entered into public tools and how to properly vet platforms for security and privacy compliance.
Educational program integration
AI literacy must be woven into physiotherapy education programs so that students understand both the capabilities and professional responsibilities that come with AI-assisted clinical practice.
Purpose-built professional tools
We need AI tools developed specifically for physiotherapists, hosted within Australian data sovereignty requirements and fully aligned with Ahpra guidelines and privacy law.
The choice before us
Using AI in physiotherapy isn’t just a trend. It’s a reality, and it’s accelerating rapidly.
Without clear professional leadership, peer-to-peer education and responsible integration strategies, we risk creating two problematic extremes: clinicians who fall behind in efficiency and professional relevance, and those who unknowingly violate privacy regulations while trying to keep up.
The encouraging news is that bridging this gap doesn’t require perfect technical knowledge.
It requires curiosity about new possibilities, caution in implementation and commitment to staying current with both technology and professional standards.
As a profession, we must stop treating AI like a threat to our clinical judgement and start recognising it as a powerful clinical tool that, when used responsibly, can help us deliver better care while preserving what matters most—the therapeutic relationship between clinician and patient.
The divide exists, but it’s not permanent. The question is: which side of it will you choose to be on?
Barry Nguyen APAM is a digital health adviser at the Australian Digital Health Agency, a physiotherapist, a software engineer and the founder and CEO of CliniScribe AI. Barry has delivered AI education to over 1000 allied health professionals and works at the intersection of clinical practice, data privacy and digital transformation.
© Copyright 2026 by Australian Physiotherapy Association. All rights reserved.
