If you have read any AI news in the last two years, you have read about AGI. The acronym stands for Artificial General Intelligence. The phrase is everywhere — in CEO speeches, government white papers, academic papers, podcasts, regulatory consultations, and increasingly in the share-price disclosures of the largest technology companies. It is also one of the worst-defined terms in technology, and disagreements between people who appear to be talking about the same thing turn out, very often, to be disagreements about words.

This page tries to do three things. First, sort out what AGI actually means. Second, lay out what the people most credibly placed to know are saying about how close it is. Third, walk through what its arrival would actually mean — separating the parts that are honestly contested from the parts that are mostly hype.

What "AGI" actually means

There is no single definition. The phrase has been used since at least the early 2000s, originally as a contrast to "narrow AI" — the kind of system that plays chess but cannot do anything else. AGI was meant to describe systems that, like humans, could turn their hand to any cognitive task. That is a useful contrast but a vague target.

The current definitions in circulation include:

OpenAI's definition, written into its corporate charter and famously into its commercial agreement with Microsoft, is "highly autonomous systems that outperform humans at most economically valuable work". This is an economic definition: AGI is whatever system would, when deployed, do most of what humans currently do for money. It has the advantage of being measurable. It has the disadvantage that the threshold for "most economically valuable work" is not crisp and has commercial consequences for OpenAI's relationship with Microsoft (the agreement reportedly grants OpenAI greater independence once AGI is declared, which gives both parties uncomfortable incentives).

Demis Hassabis's definition, articulated repeatedly in 2024-2026, is more rigorous. He has argued that current systems are missing several capabilities that humans take for granted: learning from a small number of examples, continuous learning across time without catastrophic forgetting, robust long-term memory, and genuine planning and reasoning beyond imitating reasoning patterns in training data. AGI by Hassabis's definition is a system that has all of these. By that definition, today's leading systems are clearly not AGI, and he has said publicly that "maybe one or two more breakthroughs" are needed.

The "median human" benchmark, used in many academic surveys and at AI Impacts, defines high-level machine intelligence as a system that can do any cognitive task at the level of a typical human, unaided. This is a higher bar than OpenAI's economic definition (most economically valuable work is, in fact, done by people well above the median in their domain) and a lower bar than the "frontier human" benchmark used in some other discussions.

The "transformative AI" framing, popular among researchers at Open Philanthropy and elsewhere, sidesteps the AGI debate altogether. The question is not whether AI is general or narrow but whether it is transformative — by which they mean producing economic and societal change comparable to the agricultural or industrial revolutions. A system that automates research while remaining narrow could be transformative. A general system that nobody deploys would not be.

The key fact, and the one most missing from the public conversation, is this: when Sam Altman says AGI is imminent and Yann LeCun says AGI is far away, they are not necessarily disagreeing about which way the technology is going. They are very often disagreeing about which thing to call AGI. Altman's threshold is closer to OpenAI's economic definition. LeCun's is closer to Hassabis's capability-list definition, with the additional requirement that it be reached by an architectural path different from current LLMs. The labels conceal the substantive disagreement.

Where we actually are in 2026

Whatever you call it, the capability landscape in 2026 is genuinely different from where it was in 2022. The clearest signals:

Reasoning models. OpenAI's o3 series, Anthropic's Claude extended-thinking modes, Google's Gemini 3 reasoning variants and DeepSeek's R-series have produced systems that perform at near-superhuman levels on competition mathematics, coding contests, and graduate-level physics. The 2024 demonstration of AlphaProof achieving silver-medal-level performance at the International Mathematical Olympiad (working in the Lean proof assistant; a minimal example of what that looks like appears at the end of this list) was the symbolic moment; the production-grade reasoning models that followed are now widely available.

Agentic systems. The shift from "AI as chatbot" to "AI as agent" — systems that can plan a multi-step task, take actions on a computer, observe outcomes, and adjust — is the most operationally significant change of 2025-2026. Browser-control agents, coding agents and research agents are now in production at every major lab. The capability gap between a competent human knowledge worker and an agentic system has narrowed materially in two years.

Long context and continual operation. Frontier models routinely handle context windows in the millions of tokens. Models can now operate over hours rather than minutes, with limited but real long-horizon planning capability.

Where progress has been slower. The list of things current systems still cannot do reliably is real: continuous learning without retraining, genuine causal reasoning, embodied action in the physical world (robotics is improving but slowly), creative leaps that go beyond interpolation of training data, and the kind of "I notice this is a strange situation, let me step back and reconsider" reflective behaviour that humans do effortlessly. These are the gaps Hassabis points to. They are not nothing.
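To make the AlphaProof signal above concrete for readers who have never seen Lean: a formal proof is code that a proof assistant can check mechanically, step by step. The example below is deliberately trivial, and the theorem name is my own invention, but the shape is representative:

```lean
-- A theorem stated and proved in Lean 4: addition of natural numbers is
-- commutative. The proof simply invokes a lemma from Lean's standard
-- library. Olympiad problems require proofs that are vastly longer and
-- vastly harder to find, but the machine-checkable form is the same.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

What made AlphaProof notable was not writing proofs in this format but finding them, for problems where the space of possible proof steps is astronomically large.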

The honest summary is that the leading models in 2026 are extraordinary at a wider range of tasks than even informed observers expected, while still being clearly bounded in ways that an honest spec sheet would describe as "not yet AGI by any rigorous definition". Whether that distance is one breakthrough or several is the question.

What people are predicting — and how confident they are

The forecasts span a wide range, and the range itself is information. Here are the major public positions as they stood in early 2026.

Dario Amodei (Anthropic CEO) has been the most aggressive of the lab leaders. In Anthropic's March 2025 submission to the US Office of Science and Technology Policy, the company wrote that it expected "powerful AI systems will emerge in late 2026 or early 2027". By Amodei's framing, "powerful AI" is a system "broadly better than all humans at almost all things" — close to a superintelligence-class threshold. His 2024 essay Machines of Loving Grace sets out what he thinks such a system would do for biomedical research, mental health, economic development and political institutions. The thesis is that the upside is enormous if the safety work keeps pace, and that the safety work has to be done by people inside frontier labs because nobody else has the access.

Sam Altman (OpenAI CEO) has been similarly bullish, while being more careful about the specific word "AGI" since OpenAI's commercial situation depends on the term. In 2025 he wrote that OpenAI now "knows how to build AGI" and has begun to turn its focus to superintelligence. His timeline language has shortened over time. He has indicated he expects AGI within the current US presidential term.

Demis Hassabis (DeepMind CEO) sits in the middle. His earlier estimates were for AGI in 5-10 years from 2024; in January 2025 he revised this to 3-5 years, while continuing to insist that one or two technical breakthroughs are still needed. Hassabis is the lab leader most willing to say in the same breath that the technology is closer than most outsiders realise and that there are real capability gaps that are not yet solved.

Yann LeCun (Meta Chief AI Scientist) argues that the current LLM-based architecture will never reach human-level intelligence and that a different approach — his own Joint Embedding Predictive Architecture, or JEPA — is needed. He has put the timeline for human-level AI under that approach at 5-10 years if the field adopts the new architecture, with the implication that under the current architecture the answer is "never". He is the most prominent academic voice arguing that the AGI hype is misplaced.

Geoffrey Hinton has put a probability of around 50% on AI reaching human-level capability within 5-20 years from 2024, with substantial probability mass on the shorter end. His estimate is closer to Hassabis's than to Amodei's but explicitly framed as a probability distribution rather than a point.

Eliezer Yudkowsky takes the existential framing seriously enough that the timeline becomes secondary. His position is that whenever it arrives, if no alignment progress has been made, it will be too late. The implication is that the question of "how soon" is a question about how soon the catastrophe might happen, not about how soon the prize might arrive.

Academic survey aggregates. The AI Impacts surveys of AI researchers have shown timelines compressing fast. The 2016 survey put the median estimate for "high-level machine intelligence" at 2061. The 2022 update put it at 2059. The 2023 update put it at 2047. The 2024-2025 updates have pulled the median further forward into the 2040s. The same researchers, asked about specific tasks, have repeatedly been surprised by how fast capabilities have arrived: their predictions for milestones such as "AI passes the Turing test" or "AI writes a New York Times bestseller" have erred on the side of too late more often than too soon.
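One way to see the compression is to convert each survey's median into years remaining at the time the survey was taken. A minimal sketch, using only the figures quoted above (the 2024-2025 medians are quoted only as "the 2040s", so they are left out):

```python
# AI Impacts survey year -> median predicted arrival year of high-level
# machine intelligence, as quoted in the text. The subtraction is the
# only thing this sketch adds.
surveys = {2016: 2061, 2022: 2059, 2023: 2047}

for year, median in sorted(surveys.items()):
    print(f"{year} survey: median {median}, i.e. {median - year} years out")
# 2016 survey: median 2061, i.e. 45 years out
# 2022 survey: median 2059, i.e. 37 years out
# 2023 survey: median 2047, i.e. 24 years out
```

In seven calendar years the median moved in by twenty-one. The field did not simply get seven years closer to a fixed estimate; it cut the estimate itself.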

METR (Model Evaluation and Threat Research) has tracked the length of task an AI system can reliably complete, with task length measured by how long the same task takes a skilled human. The headline finding from their 2024-2025 work is that this horizon has been doubling roughly every seven months. Extrapolated naively, that gets to multi-day autonomous task completion in a few years and multi-week in not many more.
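To make the naive extrapolation explicit, here is the arithmetic as a short sketch. The seven-month doubling time is the reported trend; the one-hour starting horizon is an illustrative assumption of mine, not a METR figure:

```python
import math

# Naive extrapolation: the task horizon doubles every DOUBLING_MONTHS.
# START_HORIZON_HOURS is an assumed starting point, not a METR number.
DOUBLING_MONTHS = 7
START_HORIZON_HOURS = 1.0

def months_until(target_hours: float) -> float:
    """Months of trend-following needed to reach a given task horizon."""
    doublings = math.log2(target_hours / START_HORIZON_HOURS)
    return doublings * DOUBLING_MONTHS

for label, hours in [("one working day (8h)", 8),
                     ("multi-day (24h of work)", 24),
                     ("one working week (40h)", 40),
                     ("one working month (~170h)", 170)]:
    print(f"{label}: ~{months_until(hours):.0f} months")
# one working day (8h): ~21 months
# multi-day (24h of work): ~32 months
# one working week (40h): ~37 months
# one working month (~170h): ~52 months
```

On those assumptions the trend reaches week-long tasks in roughly three years and month-long tasks in a little over four. Whether the trend holds that long is, of course, exactly the contested question.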

The honest read of all of this is: the people closest to the technology, who five years ago would have said AGI was decades away, now say years. The disagreement on the number of years is real. The agreement on the order of magnitude is striking.

What AGI would actually mean

The "what would it mean" question deserves the same separation as the timeline question — different framings give different answers, and the distance between them is itself informative.

The optimistic case

Amodei's Machines of Loving Grace argues the upside is staggering if the technology is built carefully. Compressed decades of biomedical progress, with cures for many cancers and genetic diseases. Mental-health treatment that genuinely works. Compressed timelines on materials and energy R&D, accelerating the climate transition. Economic gains comparable to the industrial revolution but produced over years rather than centuries. Political and institutional stability if (and this is a large if) the gains are broadly shared.

Hassabis frames this slightly differently. His emphasis is on AI as a tool for science, with AlphaFold as the proof-of-concept. The thesis is that the most consequential applications will be in research rather than in chat: cures, materials, energy, climate. He is genuinely optimistic about what "AI for science" could deliver.

The pessimistic case

Hinton, Bengio, Russell and Yudkowsky all point in roughly the same direction: a system substantially smarter than humans would, by default, be very hard to control. The technical problem of giving such a system goals exactly aligned with human flourishing is one we have not solved, and the people closest to the work are not confident we will solve it in time. The economic and political incentives push the technology forward faster than the safety work can keep up.

The disagreement among the pessimists is not about whether the risk is real but about how high the probability is and how tractable the problem is. Yudkowsky's probability estimate is the highest. Hinton's is more measured but still material. Bengio thinks the problem is solvable with sustained effort. Russell thinks the basic engineering challenge can be reframed in a way that makes alignment a tractable research programme.

The skeptical case

LeCun, and many academic economists, argue that the AGI conversation overstates how disruptive the technology will be in the time frame people are expecting. Their points are usually some combination of the following. First, the current LLM-based systems are bounded in ways that the public conversation underweights, and the next architectural breakthrough may take longer than the lab leaders expect. Second, even when capable systems exist, deployment lags capability — the productivity gains from the personal computer took twenty years to show up in measured statistics, and AI may follow a similar curve. Third, even genuinely transformative technologies tend to be metabolised by societies more slowly than their inventors predict. The skeptical case is not that nothing will change. It is that the change will be slower and more unevenly distributed than either the optimistic or pessimistic camps suggest.

The middle case

Most working researchers — including many at the frontier labs — sit closer to the middle than to either extreme. The realistic view is something like this. We are likely to see continuing rapid capability gains for at least the next several years. The economic effects will be substantial in some sectors (software development, customer service, content production, basic research) and slower in others. There will be both major successes and major failures. The technology will not arrive as a single moment but as a continuous transition, with the things AI cannot do shrinking quickly and the things AI does well becoming taken for granted. Whether some particular threshold gets crossed and called "AGI" will, in retrospect, look more like a marketing question than a scientific one.

The Davos 2026 snapshot

The clearest public articulation of the current disagreement was at the World Economic Forum in Davos in January 2026, where Hassabis, Amodei and LeCun were all on stage together. Amodei held to his "powerful AI in 2026-2027" line. Hassabis put a 50% probability on AGI within the decade and emphasised the remaining capability gaps. LeCun argued that current models will never reach human-level intelligence and that the conversation in the room was using the wrong frame.

The fact that three of the most prominent figures in the field, on the same stage, gave answers spanning "next year" to "never with current architecture" is the single most important fact about the AGI question in 2026. Anyone telling you the answer is settled is not paying attention.

What it means for an ordinary reader

If you are not in the AI industry, the question is not really when AGI arrives; it is what to do about the fact that the people building it disagree this sharply about whether it is close.

The practical implications:

Take the technology seriously now. Whatever you call it, the systems available in 2026 are good enough to matter for most knowledge work. Learning to use them well is the equivalent of what learning to use the internet was in 2002 — not optional for most professional careers in the next decade.

Watch the institutional response, not the labs. The honest test of whether society is preparing for transformative AI is not what the AI labs say but what governments, professions, universities and unions actually do. Most of those institutions are moving slowly. That is a signal worth weighing.

Be skeptical of timelines from people with skin in the game. The people predicting AGI soonest tend to be those whose share price benefits from the prediction. The people predicting AGI furthest away tend to be those whose academic reputation depends on the current paradigm holding. Triangulate.

Pay attention to the "How would we know?" question. The most useful thing anyone can ask in this debate is: what observation in the next year would change your view? People who can answer that question are taking the empirical side seriously. People who cannot are essentially expressing a worldview rather than a forecast.

The honest summary

AGI in 2026 is real as a research direction, real as a commercial story, real as a regulatory category — and not yet real as a deployed system, depending on whose definition you accept. The trajectory is steep enough that ordinary readers should probably plan as if the technology will continue to surprise on the upside. The trajectory is uncertain enough that anyone telling you exactly what will happen in 2028, or 2030, or 2035, is selling something. The most defensible position is to take the question seriously, understand the disagreement, watch the empirical signals, and treat the marketing language with appropriate skepticism. That is what the rest of this guide tries to help you do.


Further reading and sources

If this page has interested you, the resources below are where I'd send you next. They span the optimist, pessimist and skeptic positions deliberately. Reading two or three of them in a row is the fastest way to get oriented in the actual debate.

Essays and books worth reading in full

Machines of Loving Grace by Dario Amodei (2024). The fullest articulation of the optimistic-but-careful inside-the-tent view. Long but readable. The case for what powerful AI could do for biology, neuroscience, mental health, economics and governance.

Human Compatible by Stuart Russell (2019). The clearest book-length treatment of the AI control problem. Russell is one of the world's leading academic AI researchers and the co-author of the standard AI textbook; this is his statement of why the safety problem is real and why he thinks it is solvable.

Superintelligence by Nick Bostrom (2014). The book that made the AGI risk argument mainstream. Dense but careful. Pre-dates the LLM era so some of the technical framing is dated, but the conceptual analysis still holds up.

Situational Awareness by Leopold Aschenbrenner (2024). A long-form essay arguing that AGI is closer than the public thinks and that the geopolitics of AI development is the central question of the next decade. Written from inside the frontier-lab world; takes the optimistic-timeline view to its logical conclusions.

Forecasts and frameworks

AI 2027. A scenario forecast by Daniel Kokotajlo and colleagues, written as a month-by-month narrative of how AGI development might unfold. The most concrete published prediction of how the next two years could go. Read it for the framework, not the specific dates.

AI Impacts. The longest-running source of empirical work on AI forecasting, including the regular surveys of AI researchers that have shown timelines compressing.

METR (Model Evaluation and Threat Research). The independent evaluation organisation tracking how AI capabilities are scaling. Their time-horizon research is the cleanest current measurement of capability progress.

Will we have AGI by 2030? A careful summary of the evidence by the team at 80,000 Hours. Updated regularly. A good entry point for someone who wants to evaluate the timeline question for themselves.

Statements and submissions

Statement on AI Risk from the Center for AI Safety (May 2023). The single-sentence statement that "mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war" — signed by Hinton, Bengio, Hassabis, Amodei, Altman and hundreds of others. The clearest signal that the existential framing is not fringe.

Anthropic's recommendations to OSTP (March 2025). The clearest contemporary statement of the "powerful AI in 2026-2027" position from the lab that has bet its strategy on it.

OpenAI Charter. The original definition of AGI used by the organisation that has done most to popularise the term. Worth reading because the definitions in commercial documents shape the policy conversation.

The skeptical case

Yann LeCun's interviews and talks. The clearest articulation of the position that current LLM architectures will not reach AGI and that a different approach is needed.

Gartner's Hype Cycle for emerging technologies. A reminder that essentially every transformative technology in history has been over-predicted on short timelines and under-predicted on long ones. Useful corrective to both the doom and the utopia framings.

Inside the labs

Anthropic's research blog, OpenAI's research blog, and DeepMind's research. The places to read what each lab is actually publishing, rather than what is being said about them in the press.

Australian context

Australia's Voluntary AI Safety Standard and the ongoing consultation on the Mandatory Guardrails Bill. Covered in detail on the AI Regulation page.

The key habit to develop, if you want to follow this debate as it unfolds, is to read primary sources rather than the press coverage of them. The labs publish, the researchers tweet, the regulators consult. The newspaper version is always a few weeks late and several layers of summary removed. That is true even of this page.