AI Scams and Deepfakes

This section is about what AI can do to you, and what you can do about it.

This is not a theoretical concern. It is a current, measurable, growing problem.

The Scale of the Problem in Australia

In 2025, Australians reported over 200,000 scams to the ACCC's Scamwatch service, with total reported losses of $335 million. That is reported losses -- the real figure is estimated to be significantly higher because many victims never report. People over 65 lost the most money of any age group: $89 million. Investment scams caused the most damage at $172 million in losses.

And the trend is getting worse, not better: while the number of reports dropped in 2025, the amount of money lost per report went up. Scams are becoming less frequent but more effective. Each one is harder to spot.

AI is a significant reason why.

What AI Has Changed

The same technology that generates helpful content also generates convincing fakes. The same voice synthesis that powers ElevenLabs narration can clone your voice from a 30-second sample. The same image generation that creates beautiful art creates fake identity documents. The same fluent text generation that writes your emails writes scam emails that no longer have the spelling mistakes and awkward phrasing we used to rely on to spot them.

You do not need to be paranoid. You do need to be aware.

What Is Now Possible

Voice cloning

AI can clone a voice from a short audio sample. A voicemail greeting, a video you posted on social media, a conference recording. The cloned voice can then say anything, in real time. Scammers are using this to call family members pretending to be someone they know. "Hi Mum, I'm in trouble, I need you to transfer money." The voice sounds real because it is built from a real voice sample. This is not hypothetical. Australian police have reported cases. It works because we instinctively trust a familiar voice, especially under emotional pressure.

Deepfake video

AI can generate video of a person saying and doing things they never said or did. The quality varies, but at its best it is convincing enough to fool people in short clips. This has been used for fraud (fake video calls from "executives" authorising payments), misinformation (fabricated political statements), and personal harassment. For most people, the immediate risk is not that someone will make a deepfake of you. It is that you will see a deepfake of someone else and believe it.

AI-generated phishing and scam messages

This is the biggest everyday impact. AI has eliminated the telltale signs of scam emails. No more broken English. No more bizarre formatting. AI-generated scam emails read like legitimate business correspondence. They reference real companies, use correct branding language, and construct plausible scenarios. The same applies to SMS scams, fake customer service chats, and fraudulent documents. AI can generate convincing invoices, contracts, and official-looking correspondence in seconds. The ACCC reported that phishing scam losses more than tripled in early 2025 compared to the year before, driven in part by scammers using more sophisticated tools to impersonate trusted organisations.

Fake images and documents

AI image generation can produce fake receipts, fake identification, fake product reviews with fabricated photos, and fake profile pictures for social engineering. If you are buying from an online marketplace, hiring someone based on their portfolio, or assessing whether a social media profile is real, be aware that images alone prove nothing.

Warning Signs

No single test catches every AI-generated fake. But these checks will catch most of them.

Urgency and emotion. Scams work by making you act before you think. Any message that demands immediate action, especially involving money, deserves a pause. "Your account will be closed in 24 hours." "I need help right now." "This offer expires today." Legitimate organisations rarely create this kind of pressure.

Unusual payment methods. Requests for gift cards, cryptocurrency, wire transfers to unfamiliar accounts, or payment through apps you do not normally use. These are almost always fraud, regardless of how convincing the rest of the message looks.

Unexpected contact from someone you know. If a family member, friend, or colleague contacts you through an unusual channel (a new phone number, a different email, a social media message instead of their usual method), verify through a channel you trust before acting. Call them on a number you already have. Ask them something only they would know.

Anything that does not feel right. Trust your instincts. If a phone call sounds like your son but something feels slightly off, hang up and call him directly on his regular number. If an email from your bank looks perfect but you were not expecting it, go to your bank's website directly instead of clicking any links.

Video quality cues. Current deepfake video often has subtle issues: unnatural blinking, slight lag between lip movement and audio, odd lighting on the face compared to the background, and distortion around the edges of the face during movement. These are getting harder to spot as the technology improves, but they still catch many fakes.
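For readers comfortable with a little code, the first two warning signs above can be sketched as a simple checklist. This is an illustrative heuristic, not a real detector: the phrase lists and the function name are assumptions made for this example, and a scam message can easily avoid these exact phrases.

```python
# Illustrative sketch of the "urgency" and "unusual payment" red flags above.
# The phrase lists are assumptions for demonstration only -- real scams
# will often avoid these exact wordings, so treat this as a teaching aid.

URGENCY_PHRASES = [
    "act now", "immediately", "within 24 hours", "expires today",
    "right now", "account will be closed",
]

PAYMENT_PHRASES = [
    "gift card", "cryptocurrency", "bitcoin", "wire transfer",
]

def red_flags(message: str) -> list[str]:
    """Return the warning signs found in a message's text."""
    text = message.lower()
    flags = []
    if any(p in text for p in URGENCY_PHRASES):
        flags.append("urgency or emotional pressure")
    if any(p in text for p in PAYMENT_PHRASES):
        flags.append("unusual payment method")
    return flags

example = "Your account will be closed in 24 hours. Pay with a gift card right now."
print(red_flags(example))  # -> ['urgency or emotional pressure', 'unusual payment method']
```

Note what the sketch cannot do: it only matches obvious phrases, which is exactly the kind of tell AI-written scams no longer leave. The human checks above still matter more.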

Using AI to Check Suspicious Content

Here is the useful inversion: AI is also one of the best tools for detecting AI-generated scams. You can use Claude or ChatGPT to analyse anything that looks suspicious.

"I received this email claiming to be from Australia Post about a missed delivery. Analyse it and tell me whether it looks legitimate or like a scam. Check the sender address, the language, any links mentioned, and anything that seems off. Here is the full email: [paste the email text]"
"I received a text message saying my Medicare details need updating and to click this link. Is this a legitimate message? What should I do?"
"Someone sent me an invoice for a service I do not remember ordering. I have uploaded the PDF. Does this look like a real invoice or a fraudulent one? What specifically looks suspicious?"

AI is good at this because it can cross-reference the language patterns, formatting, and tactics against known scam templates. It will not catch everything, but it is a fast first check before you click anything or send money.

Make this a habit. If something looks suspicious -- an email, a text, a voicemail, an invoice -- run it through Claude or ChatGPT before you respond. It takes thirty seconds and it could save you thousands of dollars.
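If you are technically inclined and want to make this check part of a workflow rather than a copy-and-paste habit, the same prompt can be sent programmatically. This is a minimal sketch assuming the official `anthropic` Python library and an `ANTHROPIC_API_KEY` environment variable; the prompt wording, function names, and model name are illustrative choices, not part of any official recipe.

```python
# Sketch: send a suspicious email to Claude for a scam check.
# Assumes the official `anthropic` package is installed and an
# ANTHROPIC_API_KEY environment variable is set.

def build_scam_check_prompt(email_text: str) -> str:
    """Wrap a suspicious email in the kind of analysis prompt shown above."""
    return (
        "Analyse the following email and tell me whether it looks "
        "legitimate or like a scam. Check the sender address, the "
        "language, any links mentioned, and anything that seems off.\n\n"
        f"Email:\n{email_text}"
    )

def check_email(email_text: str) -> str:
    """Ask Claude for a scam assessment of the email text."""
    import anthropic  # imported here so the prompt helper works without the package

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # illustrative model name
        max_tokens=500,
        messages=[{"role": "user", "content": build_scam_check_prompt(email_text)}],
    )
    return response.content[0].text
```

The same caveat applies as with the chat interface: treat the answer as a fast first check, not a verdict, and never click a link or transfer money on the strength of it alone.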

Protecting Your Family

The people most vulnerable to AI-enhanced scams are often the people least aware that the technology exists. If you have elderly parents, relatives who are not tech-savvy, or family members who are trusting by nature, have these conversations now.

The callback rule. Agree as a family that if anyone receives an urgent call asking for money -- even if the voice sounds exactly like a family member -- the answer is always "I will call you back on your normal number in five minutes." No exceptions. No matter how urgent it sounds. Make this agreement now, before you need it.

The verification question. Some families establish a code word or a verification question that only real family members would know. If you get a call from "your son" asking for emergency money, ask the code question. A voice clone will not know the answer.

Show them what is possible. If you have a relative who does not believe voice cloning is real, show them. Use ElevenLabs' free tier to clone your own voice from a short sample (with your own consent, obviously). Play the result for them. Hearing a convincing fake of a voice they know is more persuasive than any warning.

What AI Cannot Protect You From

AI can help you analyse and identify suspicious content after the fact. It cannot protect you in the moment. A convincing phone call from someone who sounds like your daughter and is crying will bypass any rational analysis because it targets your emotions, not your intellect.

The best protection is the callback rule above, combined with awareness that this technology exists. Slowing down remains the single most effective defence against fraud, AI-generated or otherwise.

Where to Report

If you or someone you know has been scammed, report it to Scamwatch at scamwatch.gov.au. Every report helps the ACCC's National Anti-Scam Centre identify patterns and shut down scam operations. You can also call Scamwatch on 1300 795 995. If money has already been transferred, contact your bank immediately -- the faster you act, the better the chance of recovering funds.

If your personal information has been compromised -- for example, if you have shared identity documents, tax file numbers, or banking details with a scammer -- contact IDCARE on 1800 595 160. IDCARE is Australia and New Zealand's national identity and cyber support service. They will help you develop a response plan to limit the damage and protect your identity going forward. The service is free.