AI Regulation
How governments are responding. The EU AI Act, US executive orders and state laws, Australia's Voluntary AI Safety Standard, NIST and ISO/IEC frameworks. What each one actually requires — and where the real teeth are.
Regulating AI is hard for the same reason regulating any general-purpose technology is hard: it is everywhere, the use cases differ enormously in their risks, and the technology outpaces the legislative process. The major jurisdictions have settled on a few different approaches, none of them complete and most of them in flux. This page walks through the main frameworks as they stand in 2026, what each one actually requires of organisations, and where the gaps are.
The EU AI Act — the world's most comprehensive law
The European Union's Artificial Intelligence Act entered into force in August 2024, with its obligations applying in stages. It is the most comprehensive AI-specific legislation in any major jurisdiction and the closest thing to a global de facto standard, because firms doing business in the EU end up applying its requirements internationally.
The Act is structured by risk tier:
Unacceptable risk: AI applications that are simply prohibited. Social scoring by governments, real-time biometric identification in public spaces (with narrow law-enforcement exceptions), emotion recognition in workplaces and schools, predictive policing based purely on profiling, and a few others. The bans for unacceptable-risk AI took effect in February 2025.
High risk: AI used in domains where it could substantially affect people's rights or safety. The list includes recruitment and HR, education, critical infrastructure, law enforcement, migration and border control, access to essential services, and AI as a component of regulated products (medical devices, toys, vehicles). High-risk systems must satisfy a long list of requirements: risk management, data governance, technical documentation, transparency, human oversight, accuracy and robustness, post-market monitoring. Compliance requirements for high-risk systems are phased in through 2025-2027.
Limited risk: chatbots, deepfakes and similar systems. Subject to transparency obligations: users must be told they are interacting with AI, and AI-generated content must be labelled.
Minimal risk: almost everything else. No specific requirements.
The Act also has a separate regime for "general-purpose AI" (the foundation models, in plainer English). The largest models — over 10²⁵ floating-point operations of training compute — are designated as having "systemic risk" and face additional requirements: model evaluations, risk-mitigation measures, cybersecurity, energy consumption reporting. The compute threshold is currently met by the largest commercial models (GPT-5, Claude Opus 4, Gemini 3 Ultra) and by some open-weight releases.
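For a sense of scale, the compute threshold can be sanity-checked with the standard back-of-envelope estimate that training compute is roughly six times the parameter count times the number of training tokens. The sketch below uses that rule of thumb; the figures are illustrative assumptions, not the published training details of any particular model.

```python
# Rough check of whether a training run crosses the EU AI Act's "systemic risk"
# threshold of 1e25 floating-point operations.
# Uses the common approximation: total FLOPs ~= 6 * parameters * training tokens.
# The example figures are illustrative, not any lab's published numbers.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Approximate training compute for a dense transformer."""
    return 6.0 * parameters * training_tokens

# Hypothetical run: a 400-billion-parameter model trained on 15 trillion tokens.
flops = estimated_training_flops(parameters=4e11, training_tokens=1.5e13)
print(f"Estimated compute: {flops:.2e} FLOPs")                 # 3.60e+25
print("Systemic-risk tier:", flops >= SYSTEMIC_RISK_THRESHOLD_FLOPS)  # True
```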
Penalties are calibrated to company revenue, capped at the higher of €35 million or 7% of global turnover for the worst breaches. That is comparable to GDPR-scale fines and not a number any major company can ignore.
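The "higher of" structure matters for large firms, where the percentage dominates the fixed floor. A minimal illustration, using a hypothetical turnover figure:

```python
# Illustrative calculation of the maximum fine for the most serious breaches:
# the higher of EUR 35 million or 7% of global annual turnover.
# The turnover figure below is hypothetical.

def max_fine_eur(global_turnover_eur: float) -> float:
    return max(35_000_000, 0.07 * global_turnover_eur)

# A company with EUR 10 billion in global turnover:
print(f"Maximum exposure: EUR {max_fine_eur(10_000_000_000):,.0f}")  # EUR 700,000,000
```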
The substantive critique of the AI Act is that the high-risk requirements are heavy on documentation and process and lighter on outcome assurance, that the AI Office has limited investigative resources relative to the scale of compliance work, and that small and mid-sized businesses face a disproportionate compliance burden. The political critique from the US and parts of the tech industry is that the Act will discourage innovation in Europe. The counter-evidence so far is that the major US labs continue to launch products in Europe, having absorbed the compliance costs.
The US — fragmented, evolving, mostly state-level
The United States has no single comprehensive AI law. Its regulatory landscape is a patchwork.
The Biden Executive Order on AI of October 2023 was the most significant federal action of the previous administration. It used existing authorities (defence procurement, federal agency powers, immigration policy) to set baseline expectations: safety testing for the largest models, watermarking standards, hiring of AI talent into the federal government, and civil-rights-focused enforcement of existing law. The Order was substantially scaled back or rescinded after the change of administration in early 2025, and the current federal framework is lighter.
Sector-specific federal regulators have continued to apply existing authorities. The Federal Trade Commission has pursued AI-driven deceptive practices under the FTC Act. The Equal Employment Opportunity Commission has issued guidance on algorithmic hiring. The Consumer Financial Protection Bureau has pursued algorithmic-lending discrimination. The FDA regulates AI-as-medical-device under existing medical-device authorities. None of these is AI-specific legislation but each represents real regulatory activity.
The action in the US is increasingly at the state level. Colorado's AI Act (effective 2026) is the most comprehensive state law, broadly modelled on the EU's risk-tier approach for "high-risk" AI affecting consequential decisions. California has passed a series of more targeted measures (deepfake disclosure, AI-content labelling, training-data transparency, frontier-model safety reporting). New York City's automated-employment-decision tools law has been in effect since 2023. Texas and Florida have lighter-touch versions. The total picture is messy enough that compliance for any national US business now requires tracking multiple state regimes.
Australia — voluntary now, mandatory soon (probably)
Australia's approach has been deliberately graduated. The 2024 Voluntary AI Safety Standard (VAISS) published by the Department of Industry sets out ten guardrails for organisations deploying AI in high-risk settings — accountability, risk management, data governance, testing, human oversight, transparency, contestability, supply-chain assurance, stakeholder engagement, and conformance documentation. As the name says, it is voluntary. Organisations can adopt it; many of the major banks, insurers and government agencies have publicly committed to it.
The mandatory framework — likely to come in the form of a Mandatory Guardrails Bill in 2026 — is in advanced consultation. The proposed approach broadly mirrors the EU AI Act's risk tiers, with high-risk AI subject to enforceable obligations and lower-risk uses subject to lighter transparency requirements. The Department of Industry's discussion papers and ministerial speeches have been clear that the direction of travel is toward enforceable rules, with the question being scope and timeline rather than whether to do it.
Sector-specific Australian regulators have moved earlier. APRA's CPS 230 prudential standard for operational risk applies to banks' and insurers' AI systems. ASIC has been active on algorithmic trading and consumer-facing AI in financial services. The Office of the Australian Information Commissioner has applied the Privacy Act to AI use cases (the 2024 Bunnings determination is the cleanest example). The eSafety Commissioner regulates social-media platforms including their algorithmic recommendation systems.
The 2024 Senate Select Committee on AI report and the 2025 Productivity Commission inquiry into AI productivity have both informed the policy direction. The Australian Human Rights Commission's 2021 report on Human Rights and Technology was an early influential document. The intellectual ground for an Australian AI Act has been well prepared; whether the political will sustains through to legislation is the open question.
NIST AI Risk Management Framework — the technical bible
The US National Institute of Standards and Technology published its AI Risk Management Framework (AI RMF 1.0) in January 2023 and has been iterating since. It is a voluntary framework — explicitly so — but in practice has become the technical reference document for organisations across the world that need a structured way to manage AI risk.
The framework is organised around four core functions: Govern, Map, Measure, and Manage. Each function has specific practices, suggested artefacts and questions to answer. The Generative AI Profile, published in July 2024, extends the framework specifically for generative-AI systems.
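One way to picture the framework in practice is as a record an organisation keeps of its activities under each of the four functions. The sketch below is illustrative only: the function names come from the framework, but the record structure and the example activities are assumptions for illustration, not NIST's own artefacts.

```python
# Illustrative-only sketch: tracking an organisation's AI-risk activities
# against the four NIST AI RMF core functions. The function names come from
# the framework; the record shape and example activities are invented here.

from dataclasses import dataclass, field

NIST_AI_RMF_FUNCTIONS = ("Govern", "Map", "Measure", "Manage")

@dataclass
class RmfRecord:
    system_name: str
    activities: dict[str, list[str]] = field(
        default_factory=lambda: {f: [] for f in NIST_AI_RMF_FUNCTIONS}
    )

    def log(self, function: str, activity: str) -> None:
        if function not in NIST_AI_RMF_FUNCTIONS:
            raise ValueError(f"Unknown RMF function: {function}")
        self.activities[function].append(activity)

record = RmfRecord(system_name="resume-screening-model")
record.log("Govern", "AI risk policy approved by the board")
record.log("Map", "Use case classified as high-risk (employment decisions)")
record.log("Measure", "Bias evaluation across protected attributes completed")
record.log("Manage", "Human-review step added before any rejection is issued")
```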
The reason the NIST framework has spread is that it is technically sound and politically neutral. Adopting it does not, by itself, satisfy any legal requirement; what it provides is a defensible answer to the question "how is your organisation managing AI risk?" That is something every senior executive now has to be able to answer, regardless of jurisdiction, and the framework gives them a structured way to do so.
ISO/IEC 42001 — the certification track
ISO/IEC 42001:2023 is the international standard for AI management systems, published in December 2023. It is the AI equivalent of ISO 27001 (information security management) — a process-and-governance standard against which organisations can be audited and certified.
The standard requires organisations to establish an AI management system covering policy, leadership, planning, support, operation, performance evaluation and improvement. Auditors check for evidence that the management system exists, is implemented and is effective.
The advantage of ISO 42001 over a regulatory framework is that certification is portable across jurisdictions. The disadvantage is the same as for any management-system standard: the certification audits the existence of processes, not the substantive quality of the AI systems themselves. An organisation can be ISO 42001 certified and still deploy harmful AI; the certification only verifies that they have a documented process for managing the risk.
Adoption is growing but uneven. Major banks, large enterprises, and AI vendors selling into regulated industries are the early adopters. The certification is starting to appear as a procurement requirement in some government tenders.
The UK — pro-innovation, sector-led
The UK's approach under both the Conservative and Labour governments has been deliberately lighter than the EU's. The 2023 White Paper "A pro-innovation approach to AI regulation" set out a framework where existing regulators (Information Commissioner's Office, Competition and Markets Authority, Financial Conduct Authority, Medicines and Healthcare products Regulatory Agency, etc.) apply existing authorities to AI in their sectors, coordinated through cross-cutting principles rather than through new AI-specific legislation.
The AI Safety Summit hosted by the UK at Bletchley Park in November 2023 produced the Bletchley Declaration, signed by 28 countries, and the UK established the AI Safety Institute (AISI) alongside it, a government body for testing frontier AI models. The AISI is now one of the more credible technical evaluation bodies for the largest AI models, and has bilateral testing arrangements with the major labs.
The Labour government elected in mid-2024 has signalled that it is more willing to legislate than its predecessor, and a Frontier AI Bill is in consultation, with potential introduction in 2026. The most likely shape at present is targeted obligations on the developers of the largest models rather than a comprehensive risk-tier framework.
China — different model, different motivations
China has been one of the most active AI regulators, but for different reasons than the EU. Rules have focused on content control, social-stability concerns and competitive positioning of Chinese firms. The 2023 Generative AI Service Measures require providers of public-facing generative AI to register, to ensure outputs reflect "core socialist values", and to label AI-generated content. Algorithmic recommendation systems are subject to additional rules requiring transparency and user control.
The substantive AI safety work in China is real but operates in a different policy framing. The 2024 AI Safety Governance Framework published by the Cyberspace Administration of China has technical overlap with NIST and EU work, particularly on red-teaming, evaluations and incident reporting. Whether the Chinese and Western approaches converge or diverge over the next decade is one of the more important open questions in international AI governance.
What the frameworks have in common
For all the differences in legal status, scope and political framing, the major AI regulatory frameworks converge on a common set of substantive requirements:
Risk classification. Different uses of AI face different obligations. Identifying which uses are high-risk is the first step.
Data governance. Provenance, quality, fitness-for-purpose of training data; documented data-management practices; protections for personal information.
Testing and validation. Models must be tested before deployment for accuracy, robustness, fairness across protected groups, and safety, and the testing must be documented (a minimal illustration of one such check follows this list).
Transparency. Users have a right to know when they are interacting with AI; affected individuals have a right to a meaningful explanation of automated decisions.
Human oversight. High-stakes decisions require humans in or on the loop. The form of oversight varies; the principle is uniform.
Contestability. Affected individuals must be able to challenge automated decisions through accessible processes.
Post-market monitoring. Models drift. Systems get used in unintended ways. Continuous monitoring of deployed systems is now an expectation rather than an aspiration.
Documentation. Organisations must maintain technical documentation of their AI systems sufficient for regulators to assess compliance.
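As flagged under testing and validation above, one check that recurs across these frameworks is comparing a model's selection rate across groups defined by a protected attribute. The sketch below uses hypothetical data and group labels; real pre-deployment validation covers far more than a single metric, and the results feed into the technical documentation.

```python
# Minimal illustration of one pre-deployment fairness check: comparing selection
# rates between groups defined by a protected attribute (sometimes called the
# demographic parity difference). All data and thresholds here are hypothetical.

from collections import defaultdict

def selection_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """decisions: (group label, whether the model selected the person)."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

decisions = [("group_a", True)] * 60 + [("group_a", False)] * 40 \
          + [("group_b", True)] * 45 + [("group_b", False)] * 55

rates = selection_rates(decisions)
gap = max(rates.values()) - min(rates.values())
print(rates)                      # {'group_a': 0.6, 'group_b': 0.45}
print(f"Parity gap: {gap:.2f}")   # 0.15; flag for review if above an agreed threshold
```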
The regulatory frameworks differ in what they require and what they do not — but if you build to a strong reading of these eight items, you are roughly compliant with all of them. That is the practical compliance posture most large multinational organisations have settled on.
Where the real teeth are
Regulation only matters if non-compliance has consequences. A quick scan:
EU AI Act: real penalties (up to 7% of global revenue), serious enforcement intent, but the AI Office is not yet at full investigative capacity. First major enforcement actions are expected through 2026-2027.
US state laws: penalties exist but are smaller; enforcement is uneven. The bigger compliance pressure is the threat of class action under existing civil-rights and consumer-protection law.
Australia: the Voluntary Standard has no penalties. The Privacy Act, Australian Consumer Law, ASIC and APRA frameworks all have real teeth for AI-related breaches but are not AI-specific.
NIST AI RMF and ISO 42001: voluntary; the consequence of non-adoption is reputational rather than legal.
The honest assessment is that AI regulation in 2026 is in a transitional period. The frameworks exist but the enforcement infrastructure has not caught up. Most organisations are doing more than the law requires (because the law is unsettled and the reputational cost of failures is high), and the next two to three years will reveal how the enforcement actually works in practice. Until that happens, the most effective regulator of AI behaviour remains the threat of public failure — which is why the cases on the When AI Goes Wrong page matter as much as the laws on this one.