Of all the deployments of AI covered in this section, the one with the most consistent record of going wrong is government — specifically, government attempts to automate the administration of social-welfare programs. The reason is structural. The decisions are high-stakes for the people affected. The populations are often those least able to push back. The political incentives reward cost reduction and faster processing, not careful design. The legal frameworks for challenging adverse decisions were built for human caseworkers, not algorithms. And the public servants implementing the systems frequently do not have the technical literacy to identify or challenge model failures.

Australia has produced two case studies of this pattern. Robodebt is the famous one. The NDIS computer-generated plans — being introduced as I write this in 2026 — look worryingly like the next one.

Robodebt — the canonical Australian case

Robodebt was the Coalition government's automated debt-recovery scheme run by Centrelink between July 2015 and November 2019. The name is colloquial; the official name was the Online Compliance Intervention. The scheme automatically calculated debts owed by welfare recipients by averaging Australian Taxation Office annual income data across fortnightly pay periods, then matching those averaged figures against Centrelink records. Where the figures did not match, a debt was raised and the recipient was sent a debt notice.

The technique was unlawful. It produced false debts at scale because actual income is not evenly distributed across the year — a person who earned all their income in six months and reported correctly to Centrelink during the months they earned would, under income averaging, appear to have under-reported income to Centrelink during the months they did not earn. The calculation was nonsense. The legal advice, when properly obtained later, was that there was no statutory basis for raising debts on this calculation.
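The arithmetic is simple enough to sketch in a few lines. This is an illustration with invented figures, not the actual Centrelink implementation:

```python
# Why income averaging manufactures debts: invented figures, but the
# arithmetic shape of the scheme.

FORTNIGHTS = 26

# A person earns $26,000, all of it in the first 13 fortnights, and
# reports honestly to Centrelink every fortnight.
reported = [2000] * 13 + [0] * 13

# The ATO sees only the annual total; Robodebt smeared it evenly.
averaged = sum(reported) / FORTNIGHTS  # $1,000 per fortnight

phantom_underreporting = 0
for fortnight, amount in enumerate(reported, start=1):
    if amount < averaged:
        # Every honest report of $0 now looks like $1,000 of hidden
        # income, so payments correctly made in those fortnights are
        # reclassified as overpayments.
        phantom_underreporting += averaged - amount

print(f"Phantom under-reported income: ${phantom_underreporting:,.0f}")
# -> $13,000 of "discrepancy" for a person who reported perfectly.
```

Scaled across hundreds of thousands of records, that phantom discrepancy was the scheme.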

Around 470,000 people received unlawful debt notices. Many were referred to debt collectors. Many were charged interest. Some had their tax refunds garnisheed. Several people are believed to have died by suicide in connection with the debts. The class-action settlement (Prygodicz v Commonwealth) was valued at about $1.8 billion in remediation. The Royal Commission into the Robodebt Scheme, which reported in July 2023, found that the scheme was unlawful from inception, that successive ministers and senior public servants had been repeatedly warned and had ignored those warnings, and that the design failures were so obvious they could not be explained by incompetence alone. Several individuals were referred for civil and criminal investigation.

Robodebt was, technically, not a particularly sophisticated AI system. It was a rule-based matching algorithm against two government datasets. But the lessons it produced are general:

Automation does not check the lawfulness of what it automates. If the underlying calculation is wrong, doing it 470,000 times over just produces wrong outcomes at scale.

Reversed burden of proof. The system shifted the onus onto recipients to disprove the debt. Many could not. Centrelink's own records, which under the previous human-decision regime had been grounds for not raising a debt, were treated by the algorithmic regime as evidence of fraud.

Appeals were structurally inadequate. The Administrative Appeals Tribunal repeatedly found against the government. The government repeatedly chose neither to publish the AAT decisions nor to apply their reasoning beyond the individual case.

"The computer says" is not an answer. When pressed, departmental officers could not explain how individual debts had been calculated. The system was treated as authoritative even though no human in the department fully understood it.

These lessons are now a staple of Australian public-service AI ethics training. Whether they have been internalised by the people building the next system is the question the NDIS story is currently testing.

The NDIS computer-generated plans — the live case

The National Disability Insurance Scheme is in the middle of a major restructure under legislation passed in 2024. One element of that restructure is a move away from human-planner-led plan development toward what is officially called "new framework planning" and informally called computer-generated plans. The system uses an assessment instrument, the I-CAN (Instrument for the Classification and Assessment of Support Needs), to set a participant's funded supports automatically from the assessment results.

The disability-advocacy response has been overwhelmingly negative. The headline concern, voiced by People with Disability Australia, the Australian Federation of Disability Organisations, and most state-level peak bodies, is that the system reproduces the structural problems of Robodebt: opaque automated decision-making, inadequate appeal rights (the Administrative Review Tribunal cannot directly alter a plan, only order another assessment), no clear way for participants to understand how the model reached its conclusions, and a framing that treats efficiency and consistency as more important than individual circumstances.

The most pointed critique has come from former senior NDIA technology figures, who have publicly stated that "Robodebt and RoboNDIS were created at the same time by the same people" and that the underlying pattern — automated processing for administrative convenience without sufficient regard for lawfulness, ethics or harm — is the same. Internal NDIA documents reported in early March 2026 indicated that senior technology staff had warned that the new framework planning model was "off track across every element" of its implementation, despite the political timeline pushing for rollout from mid-2026.

It is too early to say whether NDIS computer-generated plans will turn into Robodebt 2. The systems differ in important ways — the I-CAN tool was developed in academic disability-research settings and has a different evidentiary basis from Robodebt's income averaging. But the institutional dynamics — the political pressure for cost control, the limited power of participants to push back, the opacity of the algorithm — are recognisably the same. Public submissions to the NDIA's review of the framework are open, and the disability sector has been mobilising against the rollout. This is a story to watch through 2026.

The international pattern

Robodebt is the Australian instance of a global pattern. International examples include:

The Dutch childcare-benefits scandal (toeslagenaffaire). Between roughly 2013 and 2019, the Dutch tax authority used an algorithm to flag childcare-benefit applications as potentially fraudulent. The system disproportionately targeted dual-nationality families. Around 26,000 families were wrongly accused of fraud, were required to repay benefits they had legitimately received, and were placed on a "fraud register" that affected their access to other government services. Children were taken into state care in some cases. The Rutte government resigned in January 2021 over the scandal. The settlements have run into the billions of euros. The pattern — opaque algorithm, vulnerable population, presumption of guilt, no effective appeal — is identical to Robodebt's.

The UK Department for Work and Pensions. A Public Law Project investigation in 2023-2024 revealed that the DWP was using algorithmic systems to flag suspected fraud in Universal Credit claims, with disproportionate impacts on certain demographic groups. The DWP's response has been gradual and partial. The episode is sometimes called "Britain's quiet Robodebt".

Michigan's MIDAS unemployment-insurance system. The Michigan Integrated Data Automated System wrongly accused around 40,000 unemployment-insurance claimants of fraud between 2013 and 2015, often on the basis of minor administrative discrepancies. Litigation and remediation have been ongoing for over a decade.

The recurring features are striking. Vulnerable populations. Cost-saving political objectives. Opaque algorithms procured from external vendors. Reversed burden of proof. Inadequate appeal mechanisms. Officials defending the system long past the point where the evidence is clear. The pattern is so consistent that parts of the academic literature now describe it as "algorithmic violence" against marginalised populations.

What governments do well with AI

Not every public-sector AI deployment is a disaster. The successes tend to be in places where the technology is replacing manual processing of routine cases rather than making consequential decisions about individuals.

Tax fraud detection. The Australian Taxation Office has used predictive analytics for fraud detection for over a decade with comparatively little controversy. The reason is structural — the model produces a flag, an auditor reviews the case, and the audit findings are themselves the basis for any subsequent action. The human is genuinely in the loop.
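The shape of that loop is worth making explicit. A minimal sketch, with every name and threshold invented, of the difference between a model that flags and a model that decides:

```python
# Hypothetical flag-then-audit pipeline. The model's only output is a
# work item for a human auditor; nothing punitive happens here.

def risk_score(tax_return: dict) -> float:
    """Stand-in for a predictive model returning a fraud-risk score."""
    income = tax_return.get("income", 0)
    deductions = tax_return.get("deductions", 0)
    return 0.9 if deductions > income else 0.1

def triage(tax_return: dict, audit_queue: list) -> None:
    if risk_score(tax_return) > 0.8:
        audit_queue.append(tax_return)  # a human will look at this
    # Crucially: no debt notice, penalty or letter is generated from
    # the score itself. Action flows from the auditor's findings.

queue: list = []
triage({"income": 40_000, "deductions": 55_000}, queue)
triage({"income": 80_000, "deductions": 3_000}, queue)
print(f"{len(queue)} case(s) referred for human audit")  # 1
```

Robodebt collapsed the two steps: the flag was the decision. Keeping them separate is what makes the ATO's version boring, in the good sense.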

Translation services. NSW Health, the Victorian government and Multicultural Australia all use generative AI for translation in service-delivery contexts. The quality is good enough for routine information, with humans verifying anything sensitive.

Administrative document processing. Government departments produce vast volumes of routine correspondence. AI assistance for triage, classification and drafting (with human review) is one of the better-managed deployments.

Public-information chatbots. The ATO's chatbot, Services Australia's chatbot, the Department of Home Affairs' visa-information chatbot — these are mostly retrieval-augmented LLMs that answer factual questions about services. When the answer is "I don't know, please call this number", the system is doing its job.
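A minimal sketch of that abstain-first pattern, assuming a toy keyword retriever in place of a real embedding index; all names and thresholds are invented:

```python
# Toy retrieval-augmented answering with an abstain path. Not any
# agency's actual stack; the point is the fallback branch.

FALLBACK = "I don't know. Please call the service line for help with this."

def retrieve(question: str, corpus: dict[str, str]) -> tuple[str, float]:
    """Crude keyword-overlap retriever; overlap doubles as confidence."""
    best_doc, best_score = "", 0.0
    q_words = set(question.lower().split())
    for doc in corpus.values():
        overlap = len(q_words & set(doc.lower().split())) / max(len(q_words), 1)
        if overlap > best_score:
            best_doc, best_score = doc, overlap
    return best_doc, best_score

def answer(question: str, corpus: dict[str, str]) -> str:
    doc, confidence = retrieve(question, corpus)
    if confidence < 0.5:
        # The system's job is to abstain, not to improvise an answer.
        return FALLBACK
    return doc  # a real system would have an LLM summarise this passage

corpus = {"payments": "payment rates are indexed twice a year in march and september"}
print(answer("when are payment rates indexed", corpus))
print(answer("am i eligible for the disability support pension", corpus))  # falls back
```

The design choice doing the work is the fallback branch: when retrieval confidence is low, the system hands off to a human channel rather than generating something plausible.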

Smart cities and infrastructure

Outside service delivery, the more visible AI deployments are in infrastructure: traffic optimisation, public-transport scheduling, asset-management for roads and water systems, computer-vision-based monitoring of public spaces. These tend to be lower-stakes for individual citizens and correspondingly less controversial.

The exception is policing. Predictive policing — using historical crime data to direct patrols to areas where crime is forecast to occur — has a long history of producing biased outcomes. The historical crime data is itself a record of where police have looked for crime, which is correlated with race and class, and models trained on it reproduce those patterns. Australia's state police forces have been more cautious than their US counterparts about deploying these systems, but several have piloted versions, and the New South Wales Police Force has used variants of pattern-detection algorithms for years.
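The loop is easy to demonstrate. A toy simulation, with all numbers invented, of two districts that have identical true crime rates but different starting patrol allocations:

```python
# Feedback loop in predictive policing: recorded crime reflects where
# police look, and next period's patrols follow the record.

TRUE_RATE = 100                  # identical true incidents in A and B
patrols = {"A": 0.6, "B": 0.4}   # district A starts over-patrolled

for period in range(5):
    # More patrols -> more incidents observed and logged, regardless
    # of the (equal) underlying crime rates.
    recorded = {d: TRUE_RATE * share for d, share in patrols.items()}

    # "Predictive" step: allocate patrols in proportion to the record,
    # i.e. train on data the previous allocation generated.
    total = sum(recorded.values())
    patrols = {d: r / total for d, r in recorded.items()}

    print(f"period {period}: recorded A={recorded['A']:.0f} "
          f"B={recorded['B']:.0f}, patrol share A={patrols['A']:.2f}")

# The allocation never corrects towards 0.5/0.5: the record keeps
# "confirming" that district A has more crime.
```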

Facial recognition in public spaces is the related controversy. The Australian Border Force operates facial recognition at major airports. The Department of Home Affairs has piloted facial recognition for various identity-verification use cases. Several state police forces have used facial recognition during major events. The OAIC determinations against Bunnings and Kmart over in-store facial recognition are the closest Australian regulators have come to drawing a public line, and the line is still partial and contested.

The honest summary

Government AI deployment in Australia and comparable jurisdictions has a worse record than industry. The reason is that the consequences fall on populations who cannot push back, the political incentives reward speed and cost reduction, and the institutional learning loops are weak — the next minister inherits the previous minister's system without inheriting the lessons. The Robodebt Royal Commission produced findings that should have been a turning point. The current NDIS rollout is a test of whether they were.

For citizens, the practical takeaway is to know your rights. If you have been the subject of an automated government decision in any sphere — Centrelink, the NDIS, the ATO, immigration — you have a right to know that an automated system was involved, to be given an explanation of how the decision was reached, and to seek review of it. These rights are uneven across jurisdictions and unevenly enforced, but they exist. The Privacy Act, the Administrative Review Tribunal Act, and the various state-level public-sector accountability mechanisms are the places to start.