Most of this site is about AI you can hold in your hand. ChatGPT, Claude, Midjourney, Suno. The tools you log into, type prompts at, get answers from. That is one half of the AI story in 2026, and it is the half almost everyone you know is talking about.

The other half is much bigger and a lot less visible. The bank you use makes thousands of fraud-detection decisions about your transactions every month, all of them AI-driven. Your supermarket's stock levels, the routes its trucks take, what gets put on sale, what gets recommended on its app — all of those have machine learning underneath them. If you have had a recent X-ray, an AI system probably looked at it before the radiologist did. If you applied for a credit card, an algorithm decided whether you got it. If your child uses a school learning platform, an AI is shaping which questions they see next. If you are an Australian on the NDIS, a computer-generated plan may now decide your funding.

None of this is new. Banks have used machine learning for fraud detection for over twenty years. Amazon's recommendations are over a quarter of a century old. What is new is that the same kind of large-scale machine intelligence has now arrived in places it never had a foothold before — medicine, science, defence, public administration — and it is moving fast enough that the public conversation cannot keep up.

This section is the bigger picture. It covers the AI that runs in the background of modern life, the industries reshaping themselves around it, and the failures that get less press than the launches but matter just as much. The voice is the same as the rest of the site: plain English, candid, no hype, willing to point out when something has gone badly wrong. The depth, where it matters, goes beyond the consumer-tool view.


How this section is organised

Eight industry deep-dives and four pages on the bigger questions, including the people and frameworks responding to the risks. You can read them in any order. Each is self-contained and around a fifteen-minute read.

The industries

Healthcare and Medicine. Radiology AI is now in clinical use across most large hospitals. Drug discovery has been transformed by AlphaFold and its successors. Clinical decision support is mainstream. So is the controversy: false positives, racial bias, clinician trust, regulatory whiplash.

Banking and Financial Services. The longest-running real-world deployment of machine learning in any industry. Fraud detection, credit decisioning, anti-money-laundering, algorithmic trading, customer service. Where the question of whether an algorithm should decide your life arose first, and is still unresolved.

Retail and E-commerce. Recommendation engines, demand forecasting, dynamic pricing, computer vision in stores, personalised marketing. The supermarket you shop at is one of the most aggressive AI deployments most people interact with daily.

Manufacturing and Logistics. Predictive maintenance, robotic vision on production lines, supply-chain optimisation, autonomous mining trucks. Less visible to consumers, enormous in dollar terms.

Defence and Warfare. Computer vision for surveillance and targeting, autonomous weapons systems, intelligence analysis, cyber operations. The Israeli "Lavender" system, Ukraine's drone war, AUKUS. Handled honestly — neither glorified nor reflexively condemned.

Research and Science. The quietly transformative use. AlphaFold predicted the structures of more than 200 million proteins. AI now helps discover materials, predict weather, find exoplanets, prove mathematical theorems. The most clearly positive deployment of the lot, and a glimpse of what the technology is for when it is not selling us things.

Government and Public Services. Tax fraud detection, welfare eligibility automation, predictive policing, smart cities. Featuring Robodebt as the textbook case of automated decision-making gone catastrophically wrong, and the NDIS computer-generated plans as the live one — already being compared to Robodebt by former NDIA staff.

Insurance. Underwriting, claims triage, fraud detection, telematics-based pricing. Where the ethical questions about algorithmic decision-making are sharpest, because the decisions directly determine who can afford to live where, drive what, or recover from what.

The bigger questions

When AI Goes Wrong. A cross-cutting page on the failures: Robodebt, the Dutch childcare benefits scandal, COMPAS, racial bias in pulse oximeters and dermatology models, Amazon's discontinued hiring AI, the discrimination class action against Workday. What the patterns are and what they say about deploying these systems at scale.

AI Regulation. How governments are trying to respond. The EU AI Act, the US executive orders and state laws, Australia's Voluntary AI Safety Standard, NIST's AI Risk Management Framework, ISO/IEC 42001. What each one actually requires, and where the real teeth are (and aren't).

The AGI Question. What "AGI" actually means, why the people building it disagree by an order of magnitude on when it might arrive, and what it could mean. Hassabis, Amodei, Altman, LeCun, Hinton — separated as much by how they define it as by when they expect it.

Who's Sounding the Alarm. Brief biographies of the people most prominently warning about AI risk — Geoffrey Hinton, Yoshua Bengio, Stuart Russell — and the ones pushing back on the doom narrative. Useful context for understanding the public debate.


If you read just one page in this section, make it Research and Science for what AI can do at its best, or When AI Goes Wrong for what happens when it is deployed without care. If you read two, add Government — the Robodebt and NDIS stories are the clearest local examples of where the real risks are.