What does a world look like where nobody trusts video, audio, or photographs anymore?
Posted 3 May 2026
The honest answer is that we are already partway into that world, and most of it is less dramatic than the headlines suggest. The shift is not that nothing can be trusted. The shift is that the medium itself stopped being the proof. A photo, a video, or an audio clip is no longer evidence of what happened. It is just a file. We are going back to an older habit: trust the source, not the format.
What changes day to day
Three things, really. First, family scams. The "Hi Mum, I'm in trouble, I need money now" call in a voice that sounds exactly like your son or grandson is the most common harm AI is doing to ordinary households in 2026. The fix is boring and it works: agree on a callback habit and a family code word. If a panicked call comes in, you hang up and ring the person back on the number you already know, or you ask the code word. Both stop a cloned voice cold. The scams and deepfakes page covers this in more detail.
Second, news and social media. The internal voice that used to say "well, there is video of it" has had to be retired. A clip on a feed is now a claim, not a fact. The practical move is to check whether a real outlet you already trust is reporting the same thing, and to look at the source rather than the share. Reuters, AP, the ABC, and the major newspapers are all running provenance markers on their published images now under the C2PA standard. Under C2PA, a small piece of signed metadata travels with a photo or video and records who shot it, when, and what edits were made. It is starting to appear as a small "verified source" badge in browsers and on news sites. It is not perfect (anything can be re-screenshotted to strip the metadata) but it is the first real piece of infrastructure for trustable media on the open internet.
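The core idea behind provenance metadata is simpler than it sounds: bind a record of who made the file to a cryptographic hash of the file's bytes, so any alteration breaks the binding. Here is a minimal sketch of that idea in Python. It is an illustration only, not the real C2PA format: the field names and structure are invented for clarity, and real C2PA manifests are digitally signed by the publisher, which this toy hash check does not do.

```python
import hashlib

def make_manifest(image_bytes, creator, created_at, edits):
    # Hypothetical provenance record in the spirit of C2PA.
    # Field names are illustrative, not the real specification.
    return {
        "creator": creator,
        "created_at": created_at,
        "edits": edits,
        # The binding: a hash of the exact bytes of the file.
        "content_hash": hashlib.sha256(image_bytes).hexdigest(),
    }

def verify(image_bytes, manifest):
    # Recompute the hash and compare. Any change to the bytes,
    # including re-screenshotting, produces different bytes and
    # fails the check -- which is why a screenshot "strips" the
    # provenance rather than carrying it along.
    return hashlib.sha256(image_bytes).hexdigest() == manifest["content_hash"]

photo = b"...raw image bytes..."
m = make_manifest(photo, "staff photographer", "2026-05-01T09:00Z", ["crop"])
print(verify(photo, m))           # untouched file: True
print(verify(photo + b"x", m))    # any alteration: False
```

The design choice worth noticing is that the trust lives in who issued the manifest, not in the file itself, which is the same "trust the source, not the format" habit described above.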
Third, courts and serious decisions. Police, banks, and courts are quietly rebuilding around provenance chains rather than file authenticity. CCTV is trusted because it came from a sealed camera with a clear chain of custody. A clip pulled from a social feed is treated with much more scepticism than it was five years ago. Insurance claim handling has tightened for the same reason.
What does not change
A lot more than you might expect. In-person conversation is exactly as trustable as it always was. Anything physically signed, witnessed, or notarised still holds. Verified archives like Trove, the State Library, or peer-reviewed journals still work because the trust comes from the institution, not the file format. The everyday currency of professional life (a meeting on a calendar, a contract that goes through DocuSign, a payment from a known account) has not changed.
The other thing that does not change is your own life. Your friendships, your family, your local community, the things you actually do on a given Saturday. None of that runs on AI-generated video. The crisis of trust is a crisis of mediated content, not of lived experience.
What I'd avoid
The biggest trap is the equal-and-opposite reaction: doubting everything. There is a thing called the liar's dividend, where someone caught on a real recording can now plausibly claim the recording is faked. If our default becomes "anything could be fake", that does the bad actors' work for them. The better default is "what is the source, and do I trust them?" That is a harder habit, but it is the right one.
The other trap is buying gadgets or apps that promise to detect AI-generated content. Detection runs behind generation, always. The detectors that exist in 2026 are unreliable on anything recent. Spend the same effort on the callback habit, the code word, and the source-checking instead. Cheaper, and it actually works.
Got a question?
Send it through the feedback link. No signup, no list. I'll add it to the queue.