Insurance and machine learning have been intertwined since long before the modern AI era. Actuarial science is, at its heart, statistical modelling of risk, and insurers have employed mathematicians to do it for over a century. What modern ML has changed is the granularity and the volume. Models that twenty years ago would have rated a driver on age, postcode and vehicle type now rate the same driver on hundreds of features extracted from telematics data, financial behaviour, claims history and, increasingly, behavioural cues from app usage.

The result is insurance that is more individually tailored, more accurate as a market, and more uncomfortable as a social contract. The whole point of insurance is to pool risk across people. The more accurately individual risk can be priced, the less pooling actually happens, and the harder it becomes for the people who need insurance most to get it at all, or to afford it when they can.

Underwriting

The traditional underwriting question — should we offer this person a policy and at what price — has been transformed in three coverage areas.

Motor insurance. The classical rating factors (age, gender, postcode, vehicle, claims history) have been supplemented by telematics. Pay-as-you-drive and pay-how-you-drive policies record driving data — speed, braking, cornering, time of day, distance — and price the policy on actual behaviour rather than demographic proxies. The Australian market has been slower to adopt this than the UK or US, partly because of regulatory caution around the data, but the major insurers (Suncorp, IAG, Allianz, QBE) all have telematics products. The benefit to safe drivers is real. The cost is the surveillance overhead and the implicit penalty for people whose driving patterns reflect circumstance rather than choice (shift workers, rural residents, parents of teenagers).
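To make the mechanics concrete, here is a minimal sketch of how a pay-how-you-drive product might turn telematics summaries into a price. Every feature name, weight and threshold below is illustrative, not any insurer's actual rating model.

```python
from dataclasses import dataclass

# Illustrative weights only -- a real insurer calibrates these
# against its own claims experience.
WEIGHTS = {
    "harsh_brakes_per_100km": 0.015,
    "speeding_ratio": 0.50,   # share of driving time above the limit
    "night_ratio": 0.20,      # share of kms driven between 11pm and 5am
}

@dataclass
class TelematicsSummary:
    harsh_brakes_per_100km: float
    speeding_ratio: float
    night_ratio: float
    annual_km: float

def behaviour_multiplier(t: TelematicsSummary) -> float:
    """Map driving behaviour to a premium multiplier around 1.0,
    clamped so behaviour moves the price by at most +/- 30%."""
    raw = sum(w * getattr(t, k) for k, w in WEIGHTS.items())
    return min(1.3, max(0.7, 0.9 + raw))

def premium(base: float, t: TelematicsSummary) -> float:
    # Pay-as-you-drive: exposure scales with distance actually driven.
    exposure = t.annual_km / 12_000
    return base * exposure * behaviour_multiplier(t)

# A cautious low-mileage driver vs a frequent night-time speeder:
safe = TelematicsSummary(0.5, 0.01, 0.02, annual_km=8_000)
risky = TelematicsSummary(6.0, 0.15, 0.30, annual_km=18_000)
print(round(premium(1_000, safe)), round(premium(1_000, risky)))  # ~611 vs ~1688
```

Note what the sketch makes visible: the night-driving term penalises the shift worker and the distance term penalises the rural resident, regardless of how carefully either drives.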

Health and life insurance. The data sources are different but the trajectory is the same. Wearable-device data (Apple Watch, Fitbit, Oura) is being incorporated into life and health underwriting, with discounts for verified activity levels. Genetic data is the live frontier: Australian regulation prohibited insurers from using genetic test results in 2024, following several years of campaigning by the Cancer Council and others, though enforcement is still being worked out. The general direction is toward more individually priced policies, and the social-policy question of how to ensure that uninsurable risks remain insurable is increasingly contested.

Property and contents. Aerial imagery, satellite data and computer vision now feed into property risk modelling. Insurers can identify roof condition, vegetation proximity, swimming pools, and structural features without ever sending an inspector to a property. Climate-related risks (flood zones, bushfire-prone areas, coastal erosion) have become a sharper part of pricing as the underlying risks have grown. The Insurance Council of Australia's 2024 Climate Risk Outlook estimated that one in twenty Australian properties will become effectively uninsurable by 2030 on current trajectories. Whether this is a market success (accurately pricing risk) or a market failure (passing the cost of climate change onto individuals who cannot bear it) is, again, a social-policy question.
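The arithmetic behind "effectively uninsurable" is simple: the technical premium is roughly expected annual loss plus loadings, so the price tracks the modelled hazard frequency directly. A sketch with illustrative numbers:

```python
def technical_premium(annual_event_prob: float, expected_loss: float,
                      loading: float = 0.3) -> float:
    """Pure premium (frequency x severity) plus an expense/profit
    loading. All inputs here are illustrative."""
    return annual_event_prob * expected_loss * (1 + loading)

# A $500k rebuild cost exposed to flood at different modelled frequencies:
for p in (0.002, 0.01, 0.05, 0.1):
    print(f"1-in-{round(1/p)} year risk -> "
          f"${technical_premium(p, 500_000):,.0f}/yr")
```

At a 1-in-20-year modelled frequency the premium is already tens of thousands of dollars a year. That is what "uninsurable" means in practice: a policy exists, but nobody in the exposed property can pay for it.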

Claims triage and fraud

When a claim comes in, the traditional process put it in front of a human assessor. The current process is increasingly an ML pipeline that triages the claim into one of three streams: fast-track for low-risk routine claims, standard processing for ordinary claims, and detailed review for complex or potentially fraudulent claims.
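A sketch of the routing logic, assuming two hypothetical upstream model scores (a straight-through-processing score and a fraud score); the thresholds are illustrative:

```python
def triage(stp_score: float, fraud_score: float) -> str:
    """Route an incoming claim to one of three streams.

    stp_score: model confidence that the claim is routine and low-risk.
    fraud_score: model estimate of fraud likelihood.
    Thresholds are illustrative; insurers tune them against leakage,
    backlog and customer-experience targets.
    """
    if fraud_score >= 0.8:
        return "detailed_review"   # assessor plus special investigations
    if stp_score >= 0.9 and fraud_score <= 0.1:
        return "fast_track"        # straight-through, no human touch
    return "standard"              # ordinary human-in-the-loop handling
```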

Computer vision has been particularly transformative for motor claims. Tractable, Mitchell International and several Australian-developed competitors use ML on photos of vehicle damage to estimate repair costs. The estimate is close enough to a human assessor's that, for low-value claims, the system can cash-settle without ever sending anyone to look at the vehicle. Suncorp has been a public adopter; IAG has piloted similar systems. The customer experience is faster; the workforce of motor vehicle assessors is smaller as a result.
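The cash-settlement decision can be gated on both the estimate and the model's confidence in it, along these lines (thresholds hypothetical, not any vendor's actual policy):

```python
def settlement_decision(cv_estimate: float, confidence: float,
                        cap: float = 5_000.0,
                        min_conf: float = 0.85) -> str:
    """Cash-settle immediately only when the computer-vision repair
    estimate is low-value AND the model is confident in it; otherwise
    route the vehicle to a human assessor. Values are illustrative."""
    if cv_estimate <= cap and confidence >= min_conf:
        return f"offer cash settlement of ${cv_estimate:,.2f}"
    return "refer to human assessor"
```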

Fraud detection runs on the same general principles as banking fraud detection. Models score claims for the likelihood that they are fraudulent based on patterns in the claim itself (timing, location, claimant history, repairer history) and historical fraud cases. Genuine fraud rates in insurance are estimated at around 10% of all claims by some industry analyses; the actual percentage detected is much lower. The trade-off is the usual one: too aggressive a model produces false-positive fraud allegations against innocent claimants; too lax a model lets fraud through. Both directions cause harm.
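The trade-off is, in the end, a threshold on a score, which a toy example makes visible (scores and labels fabricated for illustration):

```python
# (model_fraud_score, actually_fraudulent) for eight toy claims
claims = [(0.95, True), (0.80, True), (0.60, False), (0.55, True),
          (0.40, False), (0.30, False), (0.20, False), (0.10, False)]

def flag_stats(threshold: float) -> tuple[int, int]:
    false_positives = sum(1 for s, y in claims if s >= threshold and not y)
    missed_fraud = sum(1 for s, y in claims if s < threshold and y)
    return false_positives, missed_fraud

for t in (0.5, 0.75):
    fp, missed = flag_stats(t)
    print(f"threshold {t}: {fp} innocent claimant(s) flagged, "
          f"{missed} fraud(s) missed")
```

The lower threshold catches the fraud scored at 0.55 but wrongly flags the genuine claim at 0.60; the higher threshold does the reverse. No setting produces zero harm in both directions.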

Health insurance — the sharpest case

The most contentious AI deployment in the global insurance industry is in US health insurance, specifically the use of algorithmic prior-authorisation and claims-denial systems. UnitedHealthcare, Humana and Cigna have all faced class actions and regulatory scrutiny over algorithms (one widely-reported example is the nH Predict algorithm used by UnitedHealthcare for nursing-home stay decisions) that allegedly produced systematic claims denials at rates substantially higher than expert clinical review would warrant.

The US debate intensified after the December 2024 killing of UnitedHealthcare CEO Brian Thompson, which catalysed a public conversation about insurer claim-denial practices that had been brewing for years. The shooter's apparent motive — anger at a system perceived to deny care — produced a wave of public reckoning with the role of algorithms in healthcare access decisions. Several US state legislatures introduced bills in 2025 requiring human review of AI-driven claim denials.

The Australian context is different because most healthcare is provided through Medicare, with private health insurance playing a supplementary role. But the underlying principle — that algorithms can be deployed to reduce costs in ways that systematically disadvantage policyholders — applies wherever there is a private insurer making coverage decisions.

Where the social policy gets hard

Insurance is the cleanest example of an industry where ML works as advertised, and the working-as-advertised is itself the problem.

The basic mechanic is risk pooling. People with low risk subsidise people with high risk because nobody knows in advance who will turn out to be high-risk, and everyone wants protection against the worst outcomes. The more accurately individual risk can be measured, the less subsidy flows, and the people most exposed to the underlying risk pay more — sometimes much more. In the limit, this means people in flood-prone areas cannot get flood insurance, people with pre-existing conditions cannot get health insurance, and young drivers in low-income postcodes pay a multiple of what older drivers in wealthier postcodes pay for the same nominal coverage.
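The arithmetic of pooling versus individual pricing fits in a few lines. With illustrative numbers: ten policyholders each facing a possible $100,000 loss, one at 10% annual probability and nine at 1%:

```python
# Expected annual losses: 0.10 * 100k for the high-risk policyholder,
# 0.01 * 100k for each of the nine low-risk ones. Pure premiums only;
# loadings omitted for clarity.
expected_losses = [10_000] + [1_000] * 9

pooled = sum(expected_losses) / len(expected_losses)
print(f"community-rated premium: ${pooled:,.0f} each")      # $1,900
print(f"risk-priced premiums: ${expected_losses[0]:,.0f} "
      f"vs ${expected_losses[1]:,.0f}")                      # $10,000 vs $1,000
```

Under pooling the high-risk policyholder pays $1,900; under perfect individual pricing they pay $10,000, while everyone else saves $900. A model that can tell these ten people apart is doing exactly its job. Whether the high-risk person can still buy cover afterwards is the social-policy question.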

Markets are good at producing this outcome. They are not, on their own, good at deciding whether it is the outcome we want. Most jurisdictions respond with a combination of:

Mandatory pooling — Medicare, community rating in private health insurance, the universal-service obligation in life insurance markets in some countries.

Restricted rating factors — many jurisdictions prohibit insurers from using race, religion or genetic information; many also have specific rules on age and gender.

Disclosure and explainability requirements — the right to know what factors went into the decision and to challenge them.

None of these is a complete solution. The technical sophistication of insurer pricing has run substantially ahead of the regulatory capacity to govern it.

The Australian regulatory picture

The Australian insurance regulator (APRA) and the consumer regulator (ASIC) have been increasingly active on AI in insurance. APRA's CPS 230 prudential standard, in force since 2025, requires insurers to manage operational risk, including risk arising from algorithmic systems. ASIC's consultation on AI in financial services has signalled an expectation that insurers will document, validate and monitor AI systems involved in consumer-facing decisions. The Australian Human Rights Commission's 2021 report on Human Rights and Technology covered AI-driven discrimination in detail and recommended tighter rules; some of those recommendations have informed subsequent policy work.

The Insurance Council of Australia's 2024 Code of Practice includes AI-specific provisions — an obligation to maintain meaningful human oversight, to provide explainable decisions for adverse outcomes, and to enable challenge. The code binds only insurers that choose to subscribe to it, and the ICA cannot enforce it directly. The General Insurance Code Governance Committee can sanction non-compliant insurers, but the sanctions are reputational rather than financial.

The honest summary

Insurance is the cleanest case of a sector where machine learning has done exactly what it was supposed to do — measure individual risk more accurately — and where doing what it was supposed to do has produced consequences that the social-policy framework around insurance is struggling to absorb. The technology will continue to advance. The political question of whether and how to constrain its use will continue to be contested, and not just by insurance companies. The deeper question — what risks should be pooled across the population versus priced individually — is one of the genuinely interesting public-policy debates of the next decade, and most of it will be conducted in the language of algorithms and data rather than the language of solidarity.