
Artificial Intelligence: A Guide for Thinking Humans
Melanie Mitchell (2019)
The cleanest plain-English guide to what AI is and is not. Read this first; everything else on the list becomes navigable afterwards.
Books, essays, papers and reporting for going deeper on AI. Thirty books across the field, plus the original sources, regulatory documents and journalism worth following.
This is the reading I would put in front of an educated friend who asked where to start with AI. The book list draws from across the AGI debate: from inside the labs, from the people sounding the alarm, from the sceptics who think scaling will not produce AGI, from the practical you-can-use-this-now camp, and from the writers thinking about power, labour and values. Below the books are the essays, papers and ongoing sources I rely on for accuracy. The list is curated, not exhaustive — but if you read three or four entries you will already be substantially better-informed than the people writing most of the AI commentary you encounter in the press.
Try your library first. Most of the books on this list will be on the shelves at your local public library, or easy to request through the catalogue. If a book is not there, ask the library to buy it: most Australian libraries take purchase requests and often act on them, especially for books still in print. For ebooks, most Australian public libraries lend through BorrowBox: install the app, sign in with your library card, and the ebook reads on your phone or tablet, free for the loan period.
Borrowing is not freeloading off the writer. Under Australia's Public Lending Right scheme, every Australian library copy earns the author an annual payment for as long as it is on the shelf. The scheme paid more than $28 million to Australian writers in 2024-25. Borrowing also means one copy serves many readers, instead of many copies sitting on shelves once read. Better for the writer's long-term royalties, better for the planet, and free.
If you do want to own a book, each card has two buttons. The first goes to an Australian bookseller — usually Dymocks or Readings, sometimes Booktopia or the publisher. The second goes to Amazon Australia. The Australian-bookseller links pay nothing — they are there on principle. The Amazon Australia links are affiliate links and we earn a small commission on those orders only. The library is still the best option; this is for the books you will keep.
If you read just three books from this list, read these in order. Roughly 750 pages of unusually good prose between them. Mitchell tells you what AI is. Mollick teaches you to use it. Christian shows you why getting the values right is the hard part.



Books from 2024–2026 reflecting the post-ChatGPT moment. Hao for the politics of OpenAI, Witt for the hardware story, Genesis for the philosophical reach.






Older books whose framing still holds up well in 2026. Tegmark and Lee are the bigger-picture reads; Fry, and Christian and Griffiths, offer the friendliest entry points.




Where the question of how far AI can go is fought out in book form. Russell for the technical heart of the safety case; Yudkowsky and Soares for the extreme-doom version; Marcus for the sceptic who thinks scaling will not get us to AGI; Suleyman for the inside-the-tent voice.




How AI sits inside politics, history, justice and capital. Crawford and Zuboff are the structural reads; Harari is the long view; O'Neil is the most accessible introduction to algorithmic harm.





Where AI is meeting specific domains: warfare (Scharre), medicine (Topol), business economics (Agrawal et al.), the labs themselves (Olson, Li). Plus AI 2041 for short-story-format scenarios.






If you want to start before reading anything substantial, these are the gentlest doors in.


Long-form pieces on capabilities, timelines and accountability that you can read in an afternoon.
Aschenbrenner, Leopold. "Situational Awareness." Self-published essay, June 2024.
A long-form essay arguing that AGI is closer than the public thinks and that the geopolitics of AI development is the central question of the next decade. Written from inside the frontier-lab world.
Amodei, Dario. "Machines of Loving Grace." DarioAmodei.com, October 2024.
A long-form optimistic essay from Anthropic's CEO arguing that powerful AI could dramatically accelerate progress in health, science and human welfare. The clearest single statement of an inside-the-tent optimist position.
Anthropic. "Anthropic Response to OSTP RFI." March 2025.
Anthropic's submission to the US Office of Science and Technology Policy. States plainly that powerful AI systems could emerge as soon as late 2026 or 2027. A primary document of the most aggressive-timeline position.
Angwin, Julia, Jeff Larson, Surya Mattu and Lauren Kirchner. "Machine Bias." ProPublica, May 2016.
The landmark investigation that made COMPAS a public case study in algorithmic bias by comparing risk scores with real outcomes in criminal justice. Still one of the most teachable examples of how opaque systems produce unequal harms.
Abraham, Yuval. "'Lavender': The AI Machine Directing Israel's Bombing Spree in Gaza." +972 Magazine, April 2024.
The major investigative report on AI-assisted targeting in the Gaza war. A vivid, current example of how automation enters life-and-death military systems.
The original technical papers that defined modern AI. Most are written for ML researchers, so I have noted where to find a plain-English companion.
Vaswani, Ashish, et al. "Attention Is All You Need." NeurIPS, 2017.
The transformer paper that underlies modern large language models. Highly technical; pair with a plain-English explainer such as Jay Alammar's "The Illustrated Transformer".
Silver, David, et al. "Mastering the Game of Go with Deep Neural Networks and Tree Search." Nature, January 2016. Paywalled
AlphaGo: one of the most important milestones in modern AI, showing a machine beating a professional human player at Go. Best read alongside the AlphaGo documentary, which is the cleanest popular treatment.
Jumper, John, et al. "Highly Accurate Protein Structure Prediction with AlphaFold." Nature, July 2021. Paywalled
The landmark AlphaFold paper, crucial for understanding AI's impact on scientific research. The DeepMind project page is the cleanest readable companion.
Brown, Tom B., et al. "Language Models are Few-Shot Learners." NeurIPS, 2020.
The GPT-3 paper. Central for understanding the leap to large general-purpose language models. Technical; pair with a plain-English summary when sharing.
The empirical backbone for anyone wanting to verify claims rather than take them on trust.
Stanford Institute for Human-Centered AI. AI Index Report. Stanford HAI, annual.
The single most useful annual empirical overview of the AI field, covering research, industry, policy, labour and public attitudes.
OECD. "OECD.AI Policy Observatory." Ongoing.
A major international reference hub for AI policy, country profiles and governance materials. Especially valuable for comparative work across jurisdictions.
Partnership on AI and contributors. "AI Incident Database." Ongoing.
A practical source for documented incidents involving AI failures, misuse or harm. The incident summaries page is browsable for teaching, writing and verification.
AI Impacts. "AI Timelines and Surveys." Ongoing.
A long-running source compiling expert surveys and arguments about when advanced AI might arrive. Useful because it preserves original survey materials and methodology rather than just headlines.
METR. "Research publications." Ongoing.
METR's work anchors forecasting claims in empirical measurement of model capabilities. More technical than other entries here.
80,000 Hours. "Artificial Intelligence — an Overview of the Risks and Benefits." Ongoing guide.
One of the clearest general-reader introductions to long-term AI risk, timelines and policy relevance. Advocacy-oriented but transparent and well linked to primary material.
Open Philanthropy. The Most Important Century series. Ongoing.
A detailed case that this century may be unusually important because of transformative AI. Long but more readable than most forecasting literature.
For most readers, the EU AI Act overview and Australia's Voluntary AI Safety Standard are the right starting points. The full legal texts are dense; the official explainer pages are deliberately readable.
European Commission. "AI Act." Shaping Europe's Digital Future. Ongoing.
The best starting point for the official EU framework, timelines and implementation materials.
National Institute of Standards and Technology. "Artificial Intelligence Risk Management Framework (AI RMF 1.0)." NIST, January 2023.
The core US voluntary framework for managing AI risks in organisations. Practical rather than legalistic. The companion resource page includes profiles, playbooks and implementation materials.
Department of Industry, Science and Resources (Australia). "Voluntary AI Safety Standard." Australian Government, September 2024.
The primary Australian reference for practical guardrails around AI use. Pair with the Introduction to the Standard for a faster on-ramp.
ISO/IEC. "ISO/IEC 42001 Information Technology — Artificial Intelligence — Management System." ISO, December 2023. Paywalled
The international standards reference for AI management systems. Important for readers tracking how organisations are being asked to formalise AI governance.
For Australian readers and anyone wanting to understand local cases. The Robodebt Royal Commission report is the single most important Australian primary source on AI/automation harms.
Royal Commission into the Robodebt Scheme. Report. Commonwealth of Australia, July 2023.
Essential primary material on the most important Australian case of automated administrative harm. Three volumes; the executive summary alone is useful.
Australian Public Service Commission. "Robodebt Royal Commission." APSC, 2023.
A useful supplementary source for the public-sector governance lessons drawn from Robodebt.
Australian Library and Information Association. "AI Resources." ALIA, ongoing.
A useful Australian professional gateway gathering policy, ethics and practice resources in one place. Especially valuable for librarians, educators and public-interest readers.
For ongoing reading. The journalism around AI is uneven; the publications below have a track record of substance over hype.
ProPublica. Investigative non-profit; consistently strong on algorithmic accountability and public-sector AI.
+972 Magazine. Israeli-Palestinian publication that broke the Lavender system reporting. Important for readers wanting field reporting on AI in conflict and surveillance.
MIT Technology Review. Solid reporting on AI capabilities, deployments and policy. Mix of free and paywalled.
Quanta Magazine. The best long-form science journalism for educated general readers. Strong on AI's intersection with mathematics, physics and biology.
UNESCO. "Reporting on Artificial Intelligence: A Handbook for Journalism Educators." UNESCO, 2022.
A meta-resource for identifying what good AI journalism should look like. Useful as a quality filter for readers deciding which reporting habits to trust.
This list is a starting point, not a finished resource. Categories I would still like to fill out:
More on bias, accountability and the present-tense harms of AI. Virginia Eubanks's Automating Inequality, Safiya Umoja Noble's Algorithms of Oppression, Ruha Benjamin's Race After Technology, Joy Buolamwini's Unmasking AI, Meredith Broussard's More Than a Glitch.
More Australian work. CSIRO's responsible-AI publications, the Australian Strategic Policy Institute's reports on AI and national security, Productivity Commission inquiries, Australian academic voices.
Newsletters and podcasts. Recurring rather than one-off journalism.
If you have suggestions for any of these, the feedback link at the bottom of every page goes straight to me.