
The friction of explaining yourself disappears, and that is a bigger deal than it sounds. Once an AI knows your work, your industry, the recurring projects you are juggling, you stop opening every chat by laying out the same background. That sounds like a small saving. In practice it changes how often you reach for the tool at all.

Why this question matters

Most people experience AI as a stranger you keep meeting again. You ask a question, you get a generic answer, you close the tab. Memory and context features turn that stranger into a colleague who already knows what you are working on. The shift is from "useful sometimes" to "useful daily".

How I'd approach it

Three tiers, each one a step deeper.

The simplest: turn on memory in ChatGPT or Claude. Both call the feature "Memory". The assistant quietly notes things you tell it and brings them back when relevant: that you write in Australian English, that you have two teenagers, that the recurring topic is your charity board work. You do not have to do anything special; it learns by use.

The middle tier: Custom GPTs in ChatGPT Plus, and Claude Projects. These are saved versions of the assistant pre-loaded with the documents, instructions, and context for one specific job. I have a recipe assistant pre-loaded with twenty years of recipes, and a writing assistant pre-loaded with my voice rules. The setup is a one-off ten minutes. The saving is every time after that. For anything you do more than three times, this is the difference between starting in fifteen seconds and spending fifteen minutes rebuilding the background.

The deeper tier: agents that hold context across sessions and act on your behalf. We are not really there yet for most non-technical users in 2026, but it is where things are heading.

Day to day, what you notice is shorter prompts, more relevant answers, and less "let me explain the background again". You start treating the assistant less like a search engine and more like a colleague who has read the brief.

What I'd avoid

Don't put genuinely sensitive material into memory: financial details, health information, anything you would not want surfaced later. Memory features can leak across chats in ways the providers occasionally fix and occasionally do not. Treat anything you put into memory as written down, not whispered. Review what is stored every few months. Both ChatGPT and Claude let you see and delete individual memories from settings.

The honest catch: switching providers later is harder once you have built up real memory and projects in one of them. None of it ports across vendors. Pick the assistant you are most likely to stay with for a year, then commit. The compounding return on context is real, but only if you stick with one.

Got a question?

Send it through the feedback link. No signup, no list. I'll add it to the queue.