How do I tell an AI image from a real photograph in 2026?
Posted 6 May 2026 · Deep dive
Stop trying to spot it from the picture itself. The visual giveaways that worked in 2023 (six fingers, melted ears, garbled text on signs) are mostly gone in the leading tools. The reliable checks are not in the image. They are in the source, the context, and a reverse image search, in that order. Provenance markers help when they are present, but they often are not, so they are a useful confirmation rather than a reliable test.
For most people, that change of habit is the whole answer: assume a striking image you saw on social media might be AI-generated, and check the source before you treat it as a fact. The rest of this page is what to do when you actually need to know.
What you are really asking
The honest version of the question is: how do I avoid being fooled by a photo I cannot independently verify? Once you frame it that way, "spot it by looking" stops being the right tool. The fakes that fool news desks do not get caught by squinting at the pixels. They get caught by where the image came from and whether anyone you trust is also reporting it.
Why the visual telltales are losing
Three things changed in roughly the last eighteen months. The leading image generators fixed the obvious bugs (hands, eyes, short runs of text), so those tells are now unreliable. The hard case is no longer a fully synthetic picture but the partial fake: a real photograph with the background swapped, the lighting changed, or an unwanted person removed by an AI editor. And the consumer "AI image detector" web tools have not kept up with the generators; independent testing keeps finding them unreliable on current images, and they sometimes flag genuine photographs as AI-made. Treat them as a weak signal, not a verdict.
The practical consequence is that anyone confidently telling you "look for this telltale" in 2026 is fighting last year's war.
The three checks that actually work
Check the source first. An image posted by a real news outlet that has its own checks (Reuters, AP, the ABC, the major newspapers) is in a different category from one posted by an anonymous account on a social feed. The medium stopped being the proof a couple of years ago. The source is now the proof. This is the move covered in a recent question on what changes in a world where you cannot trust media at face value.
Reverse image search. Drop the image into Google Lens or TinEye and see where else it appears. If a striking "news" photo only exists on three accounts that all reposted the same anonymous original, that is information. If a celebrity photo shows up only on aggregator sites and not on the celebrity's own channels or any wire service, that is also information. TinEye is better at finding the same image after editing or recompression, because it matches a fuzzy fingerprint of the picture rather than exact bytes; Google Lens is better at giving context. Use both (a sketch of that fuzzy matching follows the third check). The phrase to remember: a striking image with no traceable origin is not yet evidence of anything.
Cross-check with what you already trust. If a photo claims a real event happened, would a news outlet you already trust be carrying it too? If only one outlet is carrying it and you have never heard of that outlet, that is the question to investigate, not the pixel-level analysis of the image.
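If it is not obvious how an engine can find "the same image after editing or recompression", the trick is perceptual hashing: a fingerprint that barely changes when the pixels barely change. Here is a minimal sketch of the idea using Python with the Pillow and ImageHash libraries; those libraries, the function name, and the file names are my illustrative choices, not anything the search engines publish.

```python
# Toy illustration of the fuzzy matching behind reverse image search.
# Requires: pip install Pillow ImageHash
from PIL import Image
import imagehash

def looks_like_same_photo(path_a: str, path_b: str, threshold: int = 8) -> bool:
    """Compare two image files by perceptual hash (pHash).

    Unlike a byte-for-byte checksum, a perceptual hash changes only
    slightly when the picture changes slightly, so a small Hamming
    distance suggests the same photo despite recompression or crops.
    """
    hash_a = imagehash.phash(Image.open(path_a))
    hash_b = imagehash.phash(Image.open(path_b))
    return (hash_a - hash_b) <= threshold  # subtraction gives Hamming distance

# Example: is the "new" viral photo just a recycled wire photo?
if looks_like_same_photo("viral_post.jpg", "wire_archive_2019.jpg"):
    print("Likely the same underlying photograph, reused.")
```

A real engine does this against billions of indexed fingerprints, which is the entire point: the match survives the edits that would break an exact checksum.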
What are C2PA and SynthID, in plain English?
Two parallel efforts, both genuinely improving, both worth understanding as confirmation rather than as a primary check.
C2PA (the Coalition for Content Provenance and Authenticity) is an industry standard for attaching a small piece of signed metadata to an image, video, or audio file: where it came from, when it was made, and what was done to it after capture. The badge that displays this is called Content Credentials. The Pixel 10, the Samsung Galaxy S25, and the Leica M11-P sign their photos at capture, which is the start of a chain you can actually trust. Adobe's tools sign images edited or generated in Photoshop. LinkedIn shows a small "CR" badge; TikTok and Cloudflare preserve the markers. Click the badge and you see the chain of edits.

The catch is that most social platforms still strip the metadata when they re-encode an image, so a signed photo often arrives with its credentials gone. Presence of the badge is a strong "yes, real, from this source" signal. Absence tells you almost nothing.
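If you want to check for credentials yourself rather than hoping a platform shows the badge, the C2PA project publishes an open-source command-line tool, c2patool, that prints a file's signed manifest. Here is a minimal sketch that wraps it from Python; it assumes c2patool is installed and on your PATH, that its default output is the manifest as JSON, and the file name is a placeholder.

```python
# Minimal sketch: read a file's Content Credentials with c2patool.
# Assumes the open-source c2patool CLI is installed and on your PATH,
# and that its default output is the manifest as JSON.
import json
import subprocess

def read_content_credentials(path: str) -> dict | None:
    """Return the C2PA manifest as a dict, or None if there is none."""
    result = subprocess.run(["c2patool", path], capture_output=True, text=True)
    if result.returncode != 0:
        # No manifest found, or it was stripped on re-upload.
        # Absence is "no information", not evidence of a fake.
        return None
    return json.loads(result.stdout)

manifest = read_content_credentials("photo.jpg")  # placeholder file name
if manifest is None:
    print("No Content Credentials. Fall back on source and reverse search.")
else:
    print(json.dumps(manifest, indent=2))  # inspect the chain of edits
```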
SynthID is a different thing: Google's invisible watermark, baked into images, video, audio, and text generated by Google's own AI tools (Gemini, Veo, Lyria). Think of it as a hidden signature in the pixels themselves rather than metadata that can be stripped. Google has a public SynthID Detector portal where you can upload a file and ask. The honest limitation: it only finds Google's watermark. Anything generated in a non-Google tool will come back unmarked even when it is unmistakably AI-made. The EU AI Act, with disclosure rules taking effect in August 2026, is pushing every major provider in this direction, but timelines vary.
Putting these together: provenance markers, when present, are strong signals. Their absence is a much weaker one.
When you really need to know
For social media doomscrolling, the three checks above are enough. For anything where being wrong matters (a story you are about to share that affects someone's reputation, an image attached to a financial or legal claim, a photo presented as evidence in a workplace investigation, a piece of content used in journalism or in court), treat the verification as part of the work and not an afterthought.
If the work is consequential and the image cannot be traced to a primary source you trust, the right answer is usually that the image cannot be safely used, not that you have failed to spot the AI.
How would I know I am being fooled?
The trust problem here has four shapes.

Missing provenance: most images on the open internet carry no signed credentials, and that makes them unverified, not fake; fall back on source and context.

Conflicting hits: if a reverse image search shows the same picture on a real wire service (older date) and a fake-news aggregator (newer date, different caption), the image is real but the story attached to it is the lie, which is a different problem from a generated image.

Stripped metadata: a photo can arrive without C2PA markers because the platform stripped them, not because they were never there; treat absence as no information rather than a strike against authenticity.

The hybrid case: a real photograph with an AI edit. None of the visual checks catch this cleanly. The only reliable test is whether the unedited original exists somewhere traceable. If it does, compare; if it does not, treat the image as a claim, not as evidence.
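For that last case, if you do trace a candidate original, you can get a first read on what changed without forensic software. A crude sketch using Python's Pillow library (an illustrative choice on my part, not a forensic tool): resize the suspect to match the original, diff the pixels, and see where the strong differences cluster. JPEG recompression alone adds faint noise everywhere, so the sketch thresholds that away first. The file names are placeholders.

```python
# Crude first pass at comparing a traced original against a suspect copy.
# Requires: pip install Pillow
from PIL import Image, ImageChops

def edited_region(original_path: str, suspect_path: str, noise_floor: int = 32):
    """Return a bounding box around strong pixel changes, or None.

    The suspect is resized to match the original so the diff reflects
    content, not dimensions. Faint global differences (typical of JPEG
    recompression) are thresholded away before the box is computed.
    """
    original = Image.open(original_path).convert("RGB")
    suspect = Image.open(suspect_path).convert("RGB").resize(original.size)
    diff = ImageChops.difference(original, suspect).convert("L")
    strong = diff.point(lambda p: 255 if p > noise_floor else 0)
    return strong.getbbox()  # None means no change above the noise floor

box = edited_region("wire_original.jpg", "viral_copy.jpg")  # placeholders
if box is None:
    print("No meaningful difference: the copy looks untouched.")
else:
    print(f"Strong changes inside {box}: look there for the edit.")
```

Note what this does and does not tell you: it localises a difference, but it cannot say which file is the doctored one. That still comes from provenance; the traceable copy wins.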
A simple test you can run
Next time a striking image comes across a feed, before you share it, do this once:
One, look at who posted it. If you do not know the source, stop and find out. Two, drop it into Google Lens or TinEye and see whether anyone with a name you recognise is also carrying it. Three, if it carries a Content Credentials badge, click it and read the chain. Four, if it claims a recent event, check whether a news outlet you already trust is reporting that event independently. If the image fails any of steps one, two, or four, treat it as a claim rather than a fact, regardless of how convincing it looks.
That whole sequence takes under a minute. It will catch the vast majority of fakes that ordinary people encounter, including the awkward case where a real photograph has been miscaptioned to mean something it does not.
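The rule in that test has a deliberate asymmetry: steps one, two, and four can each sink an image, while the badge in step three only ever counts in its favour. If it helps to see that spelled out, here is the test written as a toy Python function; every name in it is illustrative, and the logic simply mirrors the four steps above.

```python
# The one-minute test, written out as a toy triage function.
# Every name here is illustrative; the logic mirrors the four steps.
from dataclasses import dataclass

@dataclass
class ImageChecks:
    source_known: bool            # step one: do you know who posted it?
    reverse_search_hit: bool      # step two: does a named source also carry it?
    credentials_badge: bool       # step three: Content Credentials chain reads clean
    independently_reported: bool  # step four: a trusted outlet covers the event

def triage(c: ImageChecks) -> str:
    # Failing step one, two, or four demotes the image to a claim,
    # regardless of how convincing it looks.
    if not (c.source_known and c.reverse_search_hit and c.independently_reported):
        return "unverified claim: do not share as fact"
    # Step three is asymmetric: a badge strengthens the case,
    # but its absence is never a strike against the image.
    if c.credentials_badge:
        return "strong: traceable, corroborated, signed provenance"
    return "reasonable: traceable and corroborated, no badge"

print(triage(ImageChecks(source_known=False, reverse_search_hit=True,
                         credentials_badge=False, independently_reported=True)))
# -> unverified claim: do not share as fact
```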
What I'd avoid
I would avoid trusting consumer "AI image detector" tools as a primary verdict; they are unreliable on current generators and sometimes flag real photographs as AI-made, which is the worst kind of failure. I would avoid the reverse trap: assuming everything novel or striking is AI. That is the liar's dividend, where real footage gets dismissed because doubt has become free. And I would avoid posting an image to a fact-check audience without first running the simple test above; asking the question with the image attached just helps the image keep travelling.
The short version
You cannot reliably spot an AI image by looking at it in 2026, and trying to is a losing game. The three checks that work are source, reverse search, and contextual cross-check. Content Credentials and SynthID are real additions when they are present, but absence tells you almost nothing because most platforms strip the markers and most non-Google tools do not carry the watermark. For everyday use, the one-minute test above is the answer. For anything consequential, an image you cannot trace to a trusted source is not safe to use, even if it looks fine.
Got a question?
Send it through the feedback link. No signup, no list. I'll add it to the queue.