When you talk to an AI tool, your words go somewhere. Where they go, who keeps them, and for how long, is the privacy story. It is more complicated than "your data is safe" or "your data is being sold". Both of those are wrong. The truth sits in between, and it depends on which tool, which plan, and what you typed in. This page is the version a sceptical friend would tell you over coffee.

What actually happens when you press Send

The mental model first, because everything else makes more sense once you have it. When you type a question into Claude or ChatGPT and press Send, your text leaves your device, travels across the internet to the company's servers, and is processed by the AI model there. The reply comes back to your device. None of this is secret. The connection is encrypted in transit, so eavesdroppers on the network cannot read it, but it is not encrypted in a way the company itself cannot read. The company has, at the moment of processing, the full text of what you typed.
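If you like seeing things rather than taking them on faith, here is a small Python sketch of what that request looks like on the wire. The field names and model id are made up for illustration; real chat APIs differ in detail, but the shape is the same: your words travel as plain, structured text.

```python
import json

# A chat request is, at its core, a plain JSON document.
# The field names and model id below are illustrative, not any vendor's real API.
payload = {
    "model": "example-model",
    "messages": [
        {"role": "user", "content": "Summarise my contract with Acme Pty Ltd"},
    ],
}

# TLS protects these bytes in transit, but the provider decrypts them on
# arrival: the server sees exactly this text, byte for byte.
wire_bytes = json.dumps(payload).encode("utf-8")
print(b"Acme Pty Ltd" in wire_bytes)  # prints True: every word is readable
```

The point is not the code; it is that there is no step where your words become unreadable to the company processing them.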

What happens next is where the variation lives. The company might use your text to improve future versions of the model (this is called training). It might store the conversation for thirty days for abuse-detection purposes (catching attempts to misuse the tool). It might keep it for two years to comply with legal-hold rules. It might delete it the moment you click the trash icon, or it might "delete" it from your view while keeping a backup copy. The specifics depend on the tool, the plan, and the privacy policy. Two completely different questions are at play, and they are easy to confuse.

Question one: is it used for training?

This is the question most people focus on, and the answer is mostly comforting. On paid plans, none of the major chatbots use your conversations to train their models by default. On free plans, the picture is mixed: for some companies the free tier is largely a training-data acquisition mechanism; for others it is not. Specifics below.

Question two: is it stored, and who can see it?

This is the question more worth worrying about. Even when your conversation is not used for training, the company typically holds it on their servers, for periods ranging from a few days to indefinitely. That data is accessible to a small number of company employees (usually under strict access controls), and to law-enforcement agencies under subpoena. AI conversations are not therapist-confidential. They are not lawyer-client privileged. In storage terms, they are emails.

The big chatbots, in plain English

Claude (Anthropic). Paid plans (Pro, Max, Team, Enterprise): not used for training, conversations stored for thirty days for abuse detection then deleted. Free plan: not used for training by default, but Anthropic occasionally invites you to opt in to share conversations to help improve the model. Anthropic has the most clearly documented policies on training and retention of the major chatbots.

ChatGPT (OpenAI). Paid plans (Plus, Pro, Team, Enterprise): not used for training by default. Free plan: used for training unless you opt out, which you can do under Settings → Data Controls. Conversations are retained for thirty days even after you delete them, longer if held for legal compliance. Enterprise plans have the strongest data protections.

Gemini (Google). Free tier: conversations may be reviewed by humans and used to improve the service, unless you turn Gemini Apps Activity off in your Google account settings. Paid plans (Google AI Pro, Ultra): not used for training. Worth knowing that Gemini is deeply tied to your Google account, and when used inside Gmail or Docs it processes the email or document content within the conversation. Google's data-handling sits under their broader privacy policy, which covers a lot of services.

Perplexity. Does not train on your queries. Search queries are processed in real time to find answers, but are not retained for model training. Conversation history is stored against your account for your convenience and is deletable from the dashboard.

Creative tools. Privacy policies vary widely. Suno, Midjourney, Runway, Kling, and many others typically retain rights to content created on free tiers, and your prompts and outputs may be used for training. Read the terms before doing any commercial work with them.

The few rules that matter

Never type these into any AI tool

  • Passwords, two-factor codes (the six-digit numbers from an authenticator app or text), security PINs
  • API keys (the password-equivalents that let one piece of software talk to another)
  • Bank account numbers, BSB and SWIFT codes, credit card numbers and CVVs
  • Tax File Numbers, Medicare numbers, passport numbers, driver's licence numbers
  • Anything else with "do not share" written next to it

If a model is processing your text, the company has it in clear, readable form at the moment of processing. Treat AI input the way you would treat a chat with a stranger.

Think twice before sharing these

  • Other people's full names, especially in a work or relationship context
  • Medical information about yourself or your family
  • Confidential contracts, NDAs, or anything covered by professional privilege
  • Internal business documents, employee details, customer data
  • Photos that include other people who have not consented to having their image run through an AI

The fix for most of these is the same: anonymise first. Replace names with "Person A". Replace company names with "the client". Strip identifying details from documents before you paste. The model still gives you a useful answer, and the data going to the cloud is much less identifying. This is the single highest-leverage habit on this page.
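If you do this often, it is worth a tiny script. Here is a minimal sketch of the habit in code: a local find-and-replace you run on text before pasting it anywhere. The names and mapping are invented for illustration; the point is that the substitution happens on your machine, before anything reaches the cloud.

```python
import re

# Your own local mapping of real names to placeholders.
# Keep it on your machine so you can reverse the substitution later.
replacements = {
    "Jane Citizen": "Person A",
    "Acme Pty Ltd": "the client",
}

def anonymise(text: str, mapping: dict[str, str]) -> str:
    """Replace every real name in the mapping with its placeholder."""
    for real, placeholder in mapping.items():
        text = re.sub(re.escape(real), placeholder, text)
    return text

prompt = "Draft an email telling Jane Citizen that Acme Pty Ltd accepted."
print(anonymise(prompt, replacements))
# → "Draft an email telling Person A that the client accepted."
```

A plain text editor's find-and-replace does the same job; the script just makes the habit harder to forget.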

Generally safe to share

  • Your own writing for editing, summarising, or translating
  • Publicly available text (articles, websites, public reports)
  • General research questions that do not identify you or anyone else
  • Creative work where you own the rights and are happy with the tool's content terms

Voice and images: more than text

Voice mode and photo uploads behave differently from text chat in ways worth knowing.

Voice. When you talk to ChatGPT or Claude in voice mode, you are sending audio to the cloud. That audio captures the voices of anyone within earshot, the background noise of your home, and information that would not be present in a typed transcript. Voice is convenient. It is also higher-bandwidth than text, in privacy terms, and the audio itself is now part of the company's record of the conversation.

Photos. A photo carries metadata (the hidden data tags attached to the file: where and when it was taken, on what camera, sometimes which model of phone), shows everything in frame, and sometimes captures things in the background you did not intend to share (a document on the desk, a face in a mirror, a window letting in identifying detail of where you live). When you upload a photo to an AI tool, treat it the way you would treat posting it on Instagram. Probably fine. But think about who is in it.
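Stripping that metadata before uploading is straightforward. Here is one way to do it, sketched with the Pillow imaging library (a common third-party Python package; this is an illustration, not the only method — taking a screenshot of the photo, or using your phone's "remove location" share option, achieves much the same thing):

```python
from PIL import Image  # Pillow, a third-party imaging library

def strip_metadata(src_path: str, dst_path: str) -> None:
    """Save a copy of an image with pixel data only, leaving EXIF tags
    (GPS location, timestamps, camera model) behind in the original file."""
    with Image.open(src_path) as img:
        clean = Image.new(img.mode, img.size)
        clean.paste(img)          # copies pixels, not the metadata
        clean.save(dst_path)
```

Note that this only removes the hidden tags. It does nothing about what is visible in the frame, which is the part you still have to check with your own eyes.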

Connected services: the surface area expands

A standalone chatbot is a chat window with no access to anything you have not pasted in. The newer features change this. Claude Cowork, Manus, OpenClaw, browser plugins, "AI on your phone" features, calendar and email integrations: all of these give the chatbot ways to reach into other parts of your digital life. The moment you grant one of those permissions, the privacy story changes.

Two practical rules.

One, treat new permissions as serious. Read the dialog before clicking Allow. The dialog usually tells you exactly what the AI is being given access to. "Read your Gmail" is a different proposition from "read your Gmail and send messages on your behalf". The difference matters.

Two, give the agent the smallest folder it needs, not your whole drive. If you want a meal-planning agent to read your shopping list, point it at a single document, not your home directory. Cowork in particular makes this easy. Take advantage.

Workplace AI is a different game

If you use Microsoft Copilot at work, or your employer has rolled out Google Gemini in Workspace, or you have access to ChatGPT Enterprise through your company, the privacy story is your employer's, not yours. Whatever you type into those tools is governed by the contract your employer signed with the AI vendor. In most cases that includes the right for IT to audit your conversations.

The flip side: do not run your personal Claude or ChatGPT account on a work computer for sensitive personal matters. Your work IT may be able to see your browser history, may have a security tool that scans clipboard contents, and certainly has a policy about it. Use the right account for the job.

The free tier is the training pool

Companies do not give away expensive products for free out of generosity. The free tiers of most AI tools are, to varying degrees, the data-acquisition mechanism for the next generation of the model. This is also why paid plans have stronger privacy: the paid plan does not need your data; the free tier does. If you are doing any work that touches sensitive information, the most cost-effective privacy investment you can make is paying twenty or thirty dollars a month for a paid tier.

What "delete" actually means

When you click the trash icon on a conversation, the conversation disappears from your view. The company typically keeps a copy on their servers for thirty days, sometimes longer. This is not them being shady. It is for abuse detection (catching people trying to misuse the model) and legal compliance (responding to subpoenas without being negligent). After the retention window, the conversation should be gone. "Should be" is doing some work in that sentence: companies have lost backups, exposed conversations through bugs, and miscategorised accounts. Anthropic and OpenAI have reasonable but not perfect track records here.

The relevant point: do not type something into a chatbot under the impression that you can fully unsay it later. You can sort of unsay it. You cannot completely.

What to do if you slip up

Eventually you will paste something you should not have. Everyone does. Three steps, in order:

  1. Delete the conversation from the chatbot interface. This stops the most obvious leakage and starts the retention countdown.
  2. If the leak was a credential (a password, an API key, an account number), rotate it. Change the password. Revoke the API key. Cancel the credit card if needed. Treat it as if it had been pasted into a public forum.
  3. If the leak was sensitive personal data about someone else (a client, a colleague, a family member), consider whether you have a notification obligation. For Australian businesses processing personal information, the Privacy Act may require you to act. When in doubt, ask a lawyer.

Most slips are not catastrophic. The point of the steps is to keep them that way.

Australian privacy law, in two paragraphs

For personal use, the Australian Privacy Act 1988 mostly applies to organisations holding personal information about you, not to you typing your own information into a chatbot. The chatbot company has its own obligations under foreign privacy law, which Australian courts may or may not enforce.

For business use, things are different. If you are using AI tools to process customer data, the Privacy Act applies regardless of where the AI tool's servers are located. Uploading a customer's personal information to an overseas AI service may constitute a disclosure of personal information under the Act, which triggers obligations about consent, notification, and reasonable steps to protect the data. The Office of the Australian Information Commissioner has issued guidance on this. If you are running a business and using AI on customer data, read it, or talk to a lawyer.

The habit that matters

Most of this page reduces to a single test, which I cannot improve on so I will quote it directly: would you be comfortable reading this input aloud in a crowded coffee shop? If yes, it is probably fine to share with an AI tool. If not, anonymise it first, or do not share it at all. Most of the privacy mistakes that end up in the news are because someone forgot to apply that test.

The next page in this section, How Much Does It Cost?, covers what you actually pay for the paid plans. Which, as this page argues, is also a privacy investment.

If you want the short version of this page that fits on a fridge magnet, head to Rules That Matter. It is the two-list summary you can come back to when you are tired and rushed: the things never to paste into AI, and the decisions never to make on AI alone.