Of all the industries where AI is being deployed at scale in 2026, healthcare is the one where the stakes are highest and the picture most mixed. In some areas the technology is genuinely revolutionary. In others it has caused real harm. Most reporting fixates on one pole or the other and misses the middle, which is where most of the actual deployment lives.

Medical imaging — the place AI actually arrived first

If a radiologist looked at one of your scans in the last few years, an AI system probably looked at it too. Diagnostic imaging is where AI in medicine has gone furthest. The reason is structural: medical images are visual, the patterns matter, the labelled training data exists in volume, and the workflow already produces a digital file. That is exactly the kind of problem deep learning has been good at since 2012.

The clinical use cases in regular practice now include:

Chest X-rays. Tools from companies like Annalise.ai (Australian, founded out of I-MED), Lunit (Korean) and Aidoc (Israeli) read chest X-rays for over a hundred conditions and flag the ones that need urgent attention. The radiologist still reads every image. The AI shifts the order in which they are read, surfaces findings the radiologist might otherwise miss, and quantifies what it finds.

CT scans for stroke and pulmonary embolism. RapidAI and Viz.ai (now widely deployed in stroke units) detect large vessel occlusions in CT scans within minutes of the scan being acquired. Faster detection means faster intervention, which means more brain saved. There is reasonable evidence this has reduced disability after stroke at the hospital level.

Mammography. The DeepMind-Google Health study published in Nature in 2020 showed an AI system reading mammograms at accuracy comparable to radiologists. By 2026 several systems are deployed in breast screening programs around the world. Sweden was among the first to integrate AI into population screening; the UK, Germany and parts of Australia have followed.

Diabetic retinopathy. IDx-DR (now Digital Diagnostics) was the first FDA-approved autonomous diagnostic AI: it can screen for diabetic eye disease in a GP's office without specialist involvement.

Pathology. Slower to digitise than radiology, but Paige.AI and Ibex have FDA-approved tools for prostate cancer detection in biopsy slides, and most major academic pathology departments now use AI screening at some level.

What links all of these is that the AI does not replace the doctor. It re-orders the doctor's queue, flags things to look at first, and acts as a second reader. The clinical literature on these tools is now substantial enough that we can say with confidence: a radiologist with a good AI tool is faster and slightly more accurate than the same radiologist without one. Whether this represents a fair return on the cost and integration effort is a separate question.
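
To make that workflow concrete, here is a minimal sketch of what "second reader that re-orders the queue" looks like in software. The study fields, score scale and threshold are invented for illustration; no vendor's actual interface looks exactly like this.

```python
# Illustrative only: the "second reader" workflow in miniature. The study fields,
# score scale and threshold are invented; this mirrors no vendor's real interface.
from dataclasses import dataclass

@dataclass
class Study:
    accession: str       # imaging study identifier
    ai_urgency: float    # model's urgency score, 0.0 (routine) to 1.0 (critical)
    ai_flags: list       # findings the model wants a human to confirm

def triage(worklist):
    """Re-order the reading queue so likely-urgent studies are read first.
    Nothing is removed: the radiologist still reads every study."""
    return sorted(worklist, key=lambda s: s.ai_urgency, reverse=True)

worklist = [
    Study("ACC-1001", 0.12, []),
    Study("ACC-1002", 0.91, ["pneumothorax"]),
    Study("ACC-1003", 0.55, ["possible nodule"]),
]

for study in triage(worklist):
    label = "URGENT" if study.ai_urgency >= 0.8 else "routine"
    print(study.accession, label, study.ai_flags)
```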

Drug discovery — what AlphaFold actually changed

The most genuinely transformative use of AI in medicine has nothing to do with treating patients. It is in the lab, where AI has become a routine part of how new drugs are designed.

The breakthrough was DeepMind's AlphaFold, which in 2020 cracked the fifty-year-old "protein folding problem" — predicting the three-dimensional shape of a protein from its amino acid sequence. Knowing that shape matters because proteins are biology's workhorses, and what they do is determined by how they fold. The AlphaFold Protein Structure Database now contains predicted structures for over 200 million proteins, essentially all known proteins on Earth. Before AlphaFold, getting a single protein structure took years and a PhD. Now you look it up.
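
"Now you look it up" is close to literal. The sketch below fetches a predicted structure from the public AlphaFold Protein Structure Database; the endpoint and the pdbUrl field reflect the EBI-hosted API as publicly documented, but treat them as assumptions to verify against alphafold.ebi.ac.uk rather than a guaranteed interface.

```python
# A minimal sketch of looking up a predicted structure in the AlphaFold database.
# The endpoint and response field names are assumptions based on the public EBI API
# and may change; check https://alphafold.ebi.ac.uk before relying on this.
import requests

def fetch_alphafold_structure(uniprot_id: str) -> str:
    """Return the predicted structure (PDB text) for a UniProt accession."""
    meta = requests.get(
        f"https://alphafold.ebi.ac.uk/api/prediction/{uniprot_id}", timeout=30
    )
    meta.raise_for_status()
    entry = meta.json()[0]                              # one entry per accession
    pdb = requests.get(entry["pdbUrl"], timeout=30)     # assumed field name
    pdb.raise_for_status()
    return pdb.text

if __name__ == "__main__":
    structure = fetch_alphafold_structure("P69905")     # human haemoglobin alpha subunit
    print(structure.splitlines()[0])                    # first PDB record line
```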

AlphaFold 3, released in 2024, extended this to predicting how proteins interact with other molecules — the actual mechanism by which most drugs work. That is a substantially harder problem and the model is correspondingly less reliable, but the direction is set. DeepMind spun out Isomorphic Labs as a drug discovery company built around this work; it has partnerships with Eli Lilly and Novartis.

Beyond AlphaFold there is now a whole industry of AI-first drug discovery firms: Recursion, Insilico Medicine, BenevolentAI, Atomwise. Their pitch is the same: the search space of possible drug-like molecules is around 10⁶⁰ — bigger than any chemist or computer can exhaustively explore — and ML models can prioritise the most promising candidates much faster than traditional medicinal chemistry. The first AI-discovered drugs are now in human trials. None has yet been approved, and the timeline from candidate identification to approval is typically a decade, so we will not know for a few more years whether the AI drug pipeline produces meaningfully better drugs than the traditional one. The honest answer is: probably yes, but not as fast as the industry's promotional material suggests.
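
The prioritisation step these firms sell is, at its core, a ranking loop: score an enormous library with a cheap learned model, send only the top fraction to expensive synthesis and assays. The sketch below shows that loop with a deliberately fake scoring function standing in for the trained model; every name in it is hypothetical.

```python
# The ranking loop at the heart of AI-assisted virtual screening, reduced to a sketch.
# `predicted_affinity` is a stand-in for a trained model (e.g. a graph neural network)
# and the candidate strings are stand-ins for real molecule representations.
import heapq
import random

def predicted_affinity(candidate: str) -> float:
    """Placeholder scoring model: deterministic fake score, for the sketch only."""
    random.seed(candidate)
    return random.random()

def shortlist(candidates, top_k=100):
    """Score everything cheaply, keep only the top_k for expensive lab follow-up."""
    scored = ((predicted_affinity(c), c) for c in candidates)
    return heapq.nlargest(top_k, scored)

library = [f"CANDIDATE-{i}" for i in range(100_000)]   # a real library would be far larger
for score, molecule in shortlist(library, top_k=5):
    print(f"{molecule}: predicted score {score:.3f}")
```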

Clinical decision support — the messy middle

"Clinical decision support" means software that helps doctors make decisions. AI has been bolted onto these systems for about a decade and the results are uneven.

The most-deployed example is Epic's Sepsis Model, embedded in the Epic electronic health record system, which holds the largest share of the US hospital market. It scores hospitalised patients for risk of sepsis. A 2021 JAMA Internal Medicine study by University of Michigan researchers found the model performed substantially worse than Epic's published claims — it missed a majority of sepsis cases and produced large numbers of false alarms, contributing to alert fatigue. Epic responded and the model has been retrained. But the episode is the canonical example of a major AI clinical tool that was widely deployed without adequate independent validation, then quietly walked back.

The pattern recurs. Clinical risk-prediction models are easy to build and hard to validate properly. Different patient populations behave differently. A model trained on data from a teaching hospital in Boston frequently underperforms when deployed in regional Australia. The technical term is "distribution shift". The clinical term is "the model doesn't work here".
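
The practical lesson from the Epic episode and from distribution shift is the same: measure the score on your own population before acting on it. A minimal sketch of that check, assuming a locally chart-reviewed cohort and hypothetical column names:

```python
# A minimal sketch of local, independent validation of a vendor risk score.
# Column names and the threshold are hypothetical.
import pandas as pd
from sklearn.metrics import roc_auc_score

def validate_locally(cohort: pd.DataFrame, threshold: float) -> dict:
    """cohort needs the vendor's 'risk_score' and a chart-reviewed 'had_sepsis' label."""
    y_true = cohort["had_sepsis"].astype(int)
    y_score = cohort["risk_score"]
    alerts = y_score >= threshold
    true_cases = int((y_true == 1).sum())
    caught = int((alerts & (y_true == 1)).sum())
    return {
        "auroc": roc_auc_score(y_true, y_score),                        # discrimination locally
        "sensitivity": caught / true_cases,                             # share of real cases alerted on
        "alerts_per_case_caught": int(alerts.sum()) / max(caught, 1),   # alert burden
    }

# local = pd.read_parquet("local_cohort.parquet")   # your hospital's own data
# print(validate_locally(local, threshold=0.6))
```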

Generative AI has now arrived in clinical workflows too. Ambient scribes — tools like Nuance DAX, Abridge and Heidi (Australian, founded in Melbourne) — record the doctor-patient consultation, transcribe it, and draft a clinical note. Adoption among Australian GPs has been rapid in 2025 and 2026 because it directly attacks the most-hated part of the job. Whether the notes are accurate enough to rely on, and whether the medico-legal exposure is acceptable, are being worked out in real time.
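
Architecturally these tools are a two-stage pipeline: speech-to-text on the consultation audio, then a language model that drafts a note for the clinician to review and sign. A skeletal sketch, with placeholder functions rather than any vendor's real API:

```python
# The ambient-scribe pipeline in skeleton form: transcribe, then draft, then a human
# signs off. Both functions are placeholders; they name no vendor's real API.
def transcribe(audio_path: str) -> str:
    """Stage 1: medical speech-to-text. Swap in an actual provider here."""
    raise NotImplementedError

def draft_note(transcript: str) -> str:
    """Stage 2: a language model turns the transcript into a structured draft.
    The constraint matters: the draft must not add findings absent from the
    transcript, and it stays a draft until the clinician reviews and signs it."""
    raise NotImplementedError

# transcript = transcribe("consult_2026-03-14.wav")
# draft = draft_note(transcript)   # reviewed and edited by the GP before it enters the record
```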

Where it has gone wrong

Racial bias in pulse oximeters. Pulse oximeters — the clip on your finger that measures oxygen saturation — were calibrated mostly on light-skinned patients. They systematically overestimate oxygen levels in patients with darker skin, meaning patients in genuine respiratory distress can be read as fine. This was documented in the literature for years, became a crisis during COVID-19, and is only slowly being corrected. It is not strictly an AI problem (it is a sensor calibration problem), but it belongs to the same family of failure: a technology calibrated on a non-representative population, deployed at scale, harms the people it was not calibrated for.

Dermatology AI on dark skin. Skin cancer detection AI was trained largely on images of fair skin. Performance on darker skin is substantially worse. Several studies have documented this; the problem is being addressed through more diverse training datasets, but not all deployed tools have been fixed to the same degree.

The Optum/UnitedHealth algorithm. A widely used US algorithm for identifying patients who need extra care management used past healthcare costs as a proxy for medical need. Because Black patients historically receive less care for the same conditions, the algorithm systematically scored them as healthier than they were and excluded them from extra support programs. The 2019 Science paper by Obermeyer et al. that exposed this is now standard reading in every AI ethics course. The vendor revised the algorithm. Many similar algorithms are still in use elsewhere.

The Australian picture

Australia is a middle-of-the-pack adopter. Annalise.ai is one of the better-known radiology AI companies internationally and was founded out of an Australian radiology group. CSIRO's Data61 runs active health AI research. The Therapeutic Goods Administration regulates AI as a medical device under the same framework as physical devices, with specific guidance for adaptive (continuously learning) systems published in 2023.

The most interesting Australian story is in primary care. Heidi Health, the ambient-scribe company, has gone from startup to widespread GP adoption in around two years. The economics work because GP consultations in Australia are time-pressured and largely fee-for-service, so anything that gives the doctor back ten minutes per consultation is paid for several times over. That is also the reason adoption has been faster here than in many other systems where the incentives push the other way.

The honest summary

Healthcare AI in 2026 is real and most of it is reasonable. It is not curing cancer. It is not replacing your doctor. It is making radiologists faster, helping pathologists not miss things, accelerating drug discovery in ways we will not fully see for a decade, and quietly filling out paperwork. Where it has gone wrong — biased algorithms, untested clinical models, overhyped claims — the failures have generally been the kind that you would predict from a technology developed and tested mostly on convenient populations and deployed before it was properly validated for the populations that actually use it.

The frontier most people miss is not the next ChatGPT for doctors. It is the next AlphaFold-style breakthrough in molecular biology. That is where the technology is genuinely changing what is medically possible, rather than just changing the workflow.