How do I use AI to learn a complex topic fast and teach it next week without sounding like I know more than I do?
Posted 4 May 2026 · Deep dive
Use five end-user tools in a deliberate order, and add one habit that matters more than any of the tools. The five-tool part will get you from "I do not know this topic" to "I have a study pack, a quiz, and a usable slide deck" inside a long evening or a focused day. The habit is what stops you from teaching material you do not actually understand.
The real risk is not getting material out the other end. The real risk is finishing with confident slides and a polished voice while still not actually understanding the topic. AI is excellent at producing fluent prose regardless of whether it is right, and the reader of that prose (including you) has no easy way of telling. The whole trick is designing the workflow so it makes that gap visible, not invisible. The honest test is not "did the AI produce good-looking output". It is "could you defend any sentence on any slide if a smart person in the room asked you to".
Where do I find good sources?
Start with Perplexity. It is an AI search tool that attaches a citation to each claim it makes, pointing at the source it pulled from. For a topic like AI regulation, ask it specific questions: "What is the current state of the EU AI Act?", "Which Australian regulators have published guidance on AI in financial services?", "What are the criticisms of the NIST AI Risk Management Framework?".
Open every cited source. Confirm it actually says what Perplexity claims it says. The citations are usually right, but "usually" is the wrong standard for teaching material. Build a folder of three to ten primary sources: the regulator's PDF, the vendor's white paper, the standards body's publication, the peer-reviewed paper. Three good sources beat forty mediocre ones, and you have to actually read this material.
What not to trust at this stage: any AI's prose as a source. Treat AI search results as a useful map. They are not the territory.
How do I actually understand the material?
Move the source PDFs into NotebookLM. NotebookLM is Google's research assistant that reads only the documents you give it, and crucially, it cites where every answer came from inside your corpus. It will tell you when an answer is not in the material, instead of inventing one. That is the opposite of how a normal chatbot behaves.
Ask NotebookLM to produce a study guide and an FAQ for your topic. Read both against the actual sources. Where the study guide says something you didn't catch in the source, dig in. That is where the material is genuinely complicated, or where the tool is filling in gaps with plausible-sounding generalities.
NotebookLM also produces a podcast-style audio summary that two synthetic voices read out. It is good for reinforcement on a walk; it is not a substitute for reading.
The technical machinery underneath (chunking, embeddings, retrieval) is in the background; you don't need to know how it works to use it well. If you want the longer version, the trustworthy document assistant deep dive covers it.
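If you are curious what "chunking, embeddings, retrieval" means in miniature, here is a toy sketch of the retrieval idea. This is not how NotebookLM actually works internally; it uses learned embeddings rather than the word-count vectors below, and the example chunks are invented. The ranking idea, though, is the same: turn the question and each source chunk into vectors, and answer from the chunk most similar to the question.

```python
# Toy illustration of retrieval: embed text as word-count vectors,
# then rank source chunks by cosine similarity to the question.
# Real systems use learned embeddings; the ranking idea is the same.
from collections import Counter
import math

def embed(text):
    # Bag-of-words "embedding": a mapping from word to count.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) \
         * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

chunks = [
    "The EU AI Act classifies systems by risk tier.",
    "NIST's AI Risk Management Framework is voluntary guidance.",
    "Penguins are flightless birds native to the southern hemisphere.",
]

def retrieve(question, chunks):
    # Return the source chunk most similar to the question.
    q = embed(question)
    return max(chunks, key=lambda c: cosine(q, embed(c)))

print(retrieve("What does the EU AI Act do?", chunks))
```

The practical upshot of this design is the behaviour described above: the answer is grounded in a specific chunk of your documents, which is why the tool can point at where it came from.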
How do I quiz myself honestly?
Use ChatGPT or Claude. Paste your study guide in and ask for twenty questions of varying difficulty: ten basic comprehension, five "explain why" questions, five that require combining two pieces of the material. Answer them without looking at the source, then have the AI mark you. Write the answers down rather than say them out loud, so you can see your own thinking.
The bit that matters is what you do with the wrong answers. Each one is a hole in your understanding. Go back to the original source. Resist the urge to have the AI re-explain it; the AI's explanation will sound clearer than the original because the AI is good at explaining, but you might still not actually understand what was said. The discipline is to come back to the source.
If you can, repeat the quiz a day later with new questions. What you remember after a sleep is the closer test.
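The bookkeeping in the quiz stage can be sketched in a few lines. The questions, answers, and section names below are invented examples, and real marking is done by the AI rather than string comparison; the point of the sketch is the structure: every wrong answer maps back to a specific source section you go and re-read.

```python
# Minimal sketch of the quiz loop's bookkeeping: mark each answer,
# and map every wrong one back to the source section to re-read.
# Questions, answers, and section names here are invented examples,
# and exact string matching stands in for the AI doing the marking.
quiz = [
    # (question, correct answer, source section)
    ("Which EU body enforces the AI Act?",
     "national market surveillance authorities", "EU AI Act, Art. 70"),
    ("Is the NIST AI RMF mandatory?",
     "no, it is voluntary", "NIST AI RMF 1.0, p. 2"),
]

def mark(quiz, answers):
    """Return the source sections to revisit for each wrong answer."""
    revisit = []
    for (question, correct, section), given in zip(quiz, answers):
        if given.strip().lower() != correct.lower():
            revisit.append(section)
    return revisit

# One wrong answer -> one section to go back to.
print(mark(quiz, ["the European Commission", "no, it is voluntary"]))
```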
How do I make slides without faking expertise?
Use Gamma for speed and decent default design, or have Claude or ChatGPT produce a slide outline as plain text that you paste into PowerPoint or Keynote for more editorial control. Either path is fine. Whatever you do, do not let the tool generate the deck before you have done the previous stages. Pretty slides hide weak understanding from you.
A rule that works: every slide should have a one-sentence claim you can defend from a primary source. If you can't, the slide is decoration and should come out. Most AI-built decks are about twice as long as they need to be.
How do I check the material is actually correct?
This is where most learning workflows fall over. The model that wrote the slide will happily tell you the slide is correct. That is not a check.
Three checks that work.
First, paste the slide content into a different AI than the one that wrote it. Ask it to find errors, contradictions, and ambiguities. Two different models trained on different data frequently catch each other's mistakes. If Gamma drafted it, send it through ChatGPT. If ChatGPT drafted it, send it through Claude. The disagreement between two models is more useful than the agreement of one.
Second, demand citations for every non-obvious claim. Ask "what is your source for this?". If the AI cannot produce a real, checkable source, treat the claim as unverified until you find one yourself. AI tools sometimes invent citations that look plausible. Click through.
Third, write three or four "tough audience" questions. The one your most knowledgeable friend would ask. The one a sceptic would ask. The one someone with a stake in the outcome would ask. Try to answer those questions from your slides alone. If you can't, the gaps will show up live.
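The second check can be made partly mechanical. Here is a small sketch that flags any slide claim carrying no source tag at all; the "[source: ...]" convention is something I am inventing for the example, not a feature of any tool. It only catches missing citations, not wrong or fabricated ones, so you still click through every source it does find.

```python
# Sketch of a mechanical pre-check: flag any slide claim with no
# source tag. The "[source: ...]" convention is invented here; this
# catches missing citations, not wrong ones, so still click through.
import re

slides = [
    "High-risk AI systems need conformity assessment [source: EU AI Act, Art. 43]",
    "Most regulators now require AI audits",  # no tag: flagged
]

def uncited(slides):
    # Return every claim that carries no "[source: ...]" tag.
    return [s for s in slides if not re.search(r"\[source:.*?\]", s)]

for claim in uncited(slides):
    print("UNVERIFIED:", claim)
```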
How do I stop AI from turning weak understanding into confident teaching material?
The answer is to design the workflow so it forces you to demonstrate understanding before you produce teaching material, not after.
The cleanest test I know: before you build the slides, write a one-page explanation of the topic in your own words, with no AI assistance and no looking at the study guide. If you can't get to a coherent one-pager, you do not understand the topic well enough to teach it yet. Go back to the source and the quiz.
A second test, for the day before you teach: pick three questions an audience member might genuinely ask, write them down, and answer them out loud without your slides and without the AI. If you stumble, you have an honest signal that something hasn't settled.
The line that matters is between "I have learned about this" and "I understand this". A clever workflow can blur the two. These tests are how you keep them apart.
What I'd avoid
I would avoid letting any single tool do every stage. The whole point of the cross-checks is that different tools see the material at different points; one tool end-to-end is one set of blind spots end-to-end.
I would avoid generating the slide deck first and the understanding second. Once a deck exists, the temptation is to make the lesson fit the deck, not the other way around.
I would avoid skipping the primary sources. Vendor white papers, regulator PDFs, and standards documents are denser than AI summaries but they are the actual ground truth. The summaries downstream are only as good as the sources upstream.
And I would avoid teaching anything in a serious context (compliance, legal, medical, anything where being wrong matters) on the strength of an AI workflow alone. For that level of stakes, you need a subject-matter expert to read the material before you stand up. AI can carry the personal-learning case. It cannot replace the expert reviewer for the high-stakes one.
A simple test before you walk into the room
Three questions. Write them out on a piece of paper, put your phone away, close your slides, and answer them out loud as if the audience just asked them.
If your answers wander, get vague, or veer into "well, that is a great question, let me come back to that", your prep is not finished. Do another quiz round, read the sources again on the weak spots, and try the test the next day. If it is the night before and you are still stumbling, narrow the scope of what you are actually teaching. A small lesson you understand beats a big one you don't.
The short version
The workflow: Perplexity for sources, NotebookLM to read and summarise them, ChatGPT or Claude to quiz yourself, Gamma to draft slides, and a different AI than the one that drafted them to fact-check. Then a one-page explanation in your own words and a three-question out-loud test before you commit.
For a topic you are coming in cold on, that is somewhere between a long evening and a focused day. The slides are the easy part. The understanding is the work, and the workflow is designed to make sure you actually do it.
Got a question?
Send it through the feedback link. No signup, no list. I'll add it to the queue.