My twin sister spent years moving through consultations without a clear explanation for her symptoms. At one point, aged 31, she was told it was “wear and tear”. At another, she was advised she might be depressed. Neither was true – and both closed off further inquiry.
Answers were slow, and uncertainty became routine. An unexpected encounter with a locum doctor led to a diagnosis of myotonic muscular dystrophy.
Experiences like this are often dismissed as unlucky or exceptional. They are not. Across healthcare systems, diagnostic error is common. Most people will experience at least one missed or delayed diagnosis in their lifetime. Women are more likely to have symptoms dismissed. Older patients are more likely to suffer medication errors. Racial minorities encounter greater communication breakdowns. People with complex, chronic or rare conditions are more likely to be told nothing is wrong – or nothing more can be done. The “inverse care law” captures this reality: those with the greatest health burdens experience the greatest obstacles to care.
These problems are universal, and rich countries like Australia are not immune.
Access to care remains contingent on fluky factors medicine rarely confronts – geography, specialist access, transport, job flexibility, caring responsibilities and whether you know a “good doctor”. For older people, those with disabilities, or patients in insecure or gig-economy work, access can mean lost income and exhausting logistical hurdles. For those outside major cities, distance alone can be decisive. Healthcare may be publicly funded, but access still depends on time, mobility, persistence, and recognising you have symptoms worth pursuing.
None of this suggests a lack of professional commitment. Clinicians work under extraordinary pressure: workforce shortages, administrative burden and an ever-expanding body of medical knowledge all lead to burnout. Human attention and memory are finite. Modern healthcare routinely asks clinicians to exceed those limits – and then treats the consequences as individual failings rather than human and systemic ones.
Yet medicine is also a high-status occupation. Like other professions, it seeks to defend its own interests and processes. When things go wrong, scrutiny too often focuses on defending procedural compliance rather than on whether the system works for patients.
Artificial intelligence unsettles this logic.
AI does not recognise professional hierarchy or tradition. It is judged – bluntly – by outputs, not by some vague “art”. Does it recognise rare disease patterns? Does it treat people fairly? What makes it unsettling is that it shifts attention away from process and toward outcomes.
This is not to pen a love letter to AI. These tools can be opaque and confidently wrong. AI raises serious concerns about privacy, accountability and commercial influence. Used carelessly, these tools risk entrenching or worsening existing inequities rather than reducing them.
But refusing to engage with the messiness of AI is not a neutral stance.
Patients are already using chatbots – often because conventional routes have failed them, or because their symptoms do not fit neatly into short appointments and familiar diagnostic categories. Patient advocates such as Dave deBronkart have captured this reality with the shorthand #PatientsUseAI – not as a campaign, but as a statement of fact. People are not waiting to be invited. They are already experimenting, searching for explanations, and testing tools that promise to take their symptoms seriously.
Clinicians are doing the same. In my own surveys, and in studies conducted by others, substantial numbers of doctors report using commercial chatbots to assist with documentation, diagnostic reasoning and treatment planning. This uptake is often informal and under-acknowledged.
Healthcare has a purpose: to care for patients. The real question is who – or what – can deliver that care more reliably, more fairly and with fewer preventable failures.
There is no simple answer. In my book, I argue that any serious evaluation must compare AI against what we’ve currently got.
My sister’s diagnosis arrived through chance. It is a story about human fragility. AI did not create that fragility; it makes it harder to ignore.
AI is already part of healthcare. The question now is whether it will be governed seriously and judged without nostalgia – by the only metric that ultimately matters: whether patients are better cared for.