Important Reminders
The handful of habits that separate confident AI users from people who get burned. Most of these are covered elsewhere in the guide. They sit together on this page because the failure mode for AI users is not "they did not learn the rule once". It is "they forgot it on a Tuesday afternoon when they were busy". Bookmark this page. Re-read it after a month of using AI. The reminders compound.
Two lists to keep in your head
Never paste these into any AI tool. Passwords. Two-factor codes (the six-digit numbers from an authenticator app or a text message). API keys. Bank account numbers, BSB and SWIFT codes. Credit card numbers and CVVs. Tax File Numbers. Medicare numbers. Passport numbers and driver's licence numbers. Full bank statements. Raw payslips. Detailed personal medical information tied to a name. The chatbot company has whatever you type, in clear readable form, the moment you press Send. Treat the input box like a chat with a stranger.
Never rely on AI alone for these decisions. Medical decisions of any consequence. Legal decisions on actual matters. Significant financial commitments (loans, super switches, insurance purchases). Tax filings beyond the simplest PAYG return. Anything where a wrong answer would hurt you or someone else. AI is fine for preparing the questions you take to a GP, pharmacist, solicitor, mortgage broker, or registered tax agent. It is not fine as a substitute for the conversation.
The longer version of both lists, with a sentence on why each item matters, is on Rules That Matter.
Verify, especially the specific bits
AI will confidently present false information. It fabricates statistics, invents quotes, cites sources that do not exist, and gets dates wrong without flinching. This is not a bug. It is how the architecture works (the long version is in What AI Cannot Do). The practical rule: use AI to draft, use your own brain to verify. Be especially careful with numbers, dates, specific claims, and anything medical, legal, or financial. If it matters, check it.
Pay for the subscription if you are using AI for work
Paid plans across all the major chatbots have stronger privacy protections than free tiers (the long version is on the Privacy and Security page). Twenty dollars a month is the cheapest privacy investment you can make if you are touching anything sensitive. Free tiers exist to gather training data. Paid tiers exist to take your money in exchange for not doing that.
Anonymise before you paste
Replace names with "Person A" or "the client". Strip identifying details from documents. Substitute company names with descriptions ("a competitor in the same space"). The model still gives you a useful answer, and the data going to the cloud is much less identifying. This is the highest-leverage habit on the privacy page. It deserves repeating here.
Read the permissions before you click Allow
The chatbot is a chat window with no hands until you give it some. Browser plugins, agents like Claude Cowork, "AI on your phone" features, calendar and email integrations: each one expands what the chatbot can see and do. Read the dialog. "Read your Gmail" is a different proposition from "read your Gmail and send messages on your behalf". The difference matters.
Confidence is not correctness
The model has no way of knowing it is wrong. It has no doubt. It will tell you the made-up court case is real with the same calm tone it uses for things that are true. If anything, the more polished the answer sounds, the more carefully you should check it. Doubt is your job, not the model's.
Ask for pushback explicitly
By default, the model agrees with you. Ask "is this email good?" and you get "yes, here is why". Ask "what is wrong with this email?" or "argue against my position" and you get genuine critique. The good models will pick apart their own work if you ask them to. They will not volunteer it.
Save anything you might want again
Conversations can be deleted, models change, retention policies shift. If a chat produces something useful, copy the output. Paste it into your notes, your email, a document. Treat the chat itself as a draft surface, not a permanent archive. The thing you needed to keep is rarely the thing you remembered to save.
Do not use AI where the cost of being wrong is high
Tax advice, medical diagnosis, legal opinions on an actual case, professional certifications: AI is fast and frequently wrong. Talk to a qualified person for anything where a wrong answer would hurt you. The model is a research assistant, not an oracle. Use it to prepare for the conversation with the qualified person, not to replace them.
You are the editor
Take what is useful, ignore what is not. The model produces drafts. You decide what to keep. People who treat AI output as final get worse results than people who treat it as an opinionated first pass that needs work. The judgement is yours. That has not changed.
That is the list. None of these are hard. All of them are easy to forget when you are busy. Re-read this page in a month.
Three companion pages sit alongside this one and pick up the same threads in more depth. Rules That Matter is the hard floor: the things to never paste into AI and the decisions to never make on AI alone. How to Check What AI Tells You is the verifying companion to the "verify" reminder above, broken down by use case. When Not to Use AI covers the times when reaching for AI is the wrong move in the first place.