
Why I'm building this site

I'm building this site because artificial intelligence is no longer a side topic for technical people, early adopters, or the professionally curious.

It is becoming part of ordinary life.

It already affects how people search, write, learn, compare products, handle paperwork, solve problems, study, organise work, create images, draft emails, and make decisions. AI is now being used widely enough in business and attracting enough capital that it cannot honestly be treated as fringe.

This does not mean AI is all good.

It means it is real, it is spreading, and it is starting to shape the ground people stand on.

Too much of the conversation swings between two bad positions. One is hype. The other is dismissal. One tells people AI will solve everything. The other tells them it is all rubbish, theft, cheating, laziness, or fraud with a glossy interface.

Neither position is good enough.

This site exists because people need a more useful response than that. They need help learning how to use AI well, where it helps, where it fails, where it lies, and where it can quietly make them worse if they use it badly.

That is the point.

Not worship.
Not panic.
Not slogans.

Practical skill.

My position, stated plainly

AI has upside and downside.

The upside is obvious enough. It can help people learn faster, get unstuck, compare options, explain things in plain language, draft rough material, test ideas, translate, summarise, and tackle tasks that would otherwise take much longer.

The downside is just as real. It can help flood the internet with junk. It can flatten judgement. It can make weak thinking look polished. It can help scammers. It can put pressure on jobs. It can undercut creative work. It can amplify misinformation. It can be folded into surveillance, manipulation, and war.

Both things are true at once.

That is why this site is not based on a simple claim that AI is wonderful, or a simple claim that AI is evil. Both of those are children's versions of the argument.

My actual view is harder-edged than that.

AI is now important enough that ignorance is becoming a liability.

Not because everyone needs to become an engineer. They do not.

But because people now need enough AI literacy to use it to their advantage, defend themselves against its abuse, and avoid being outpaced by people who learn how to work with it better than they do.

The safest position now is neither worship nor refusal.

It is competence.

Ignoring AI is not caution

I know people I respect who are quietly anti-AI or openly hostile to it. I understand some of their concerns. Some of those concerns are justified.

There is plenty to dislike.

There is cheap synthetic rubbish everywhere. There are legal and ethical concerns around data, ownership, consent, and labour. There are serious worries about what happens when systems become persuasive enough to sound authoritative even when they are wrong. There is good reason to be wary of overdependence.

But refusal to learn is still a mistake.

When a technology starts reshaping work, study, communication, administration, fraud, creativity, and public trust all at once, choosing not to understand it is not a serious defence. It is a self-inflicted blind spot.

That matters for adults, but it matters even more for their children.

Children do not need to be raised to trust AI blindly. In fact, that would be foolish. But they do need to grow up knowing how to question it, test it, direct it, and use it well. Otherwise they enter a world already shaped by these systems without the literacy to handle them.

That is not wisdom.

That is exposure.

AI is not just another calculator

People often compare AI to the calculator because the comparison is comforting. It suggests this is just another useful tool, one more bit of practical assistance.

The comparison is too small.

A calculator handled arithmetic. AI touches language, images, code, search, explanation, drafting, tutoring, planning, and information processing itself. It works across domains that used to feel separate. It reaches into office work, home life, education, media, customer service, design, software, and personal administration all at once.

That is why the old analogies are starting to break down.

This is closer to a general-purpose layer being added across large parts of modern life. Once adoption reaches the scale it has now, expectations, workflows, and competitive baselines start to change with it.

Once that happens, the question is no longer whether the technology exists.

The question becomes whether you know how to operate in a world where it does.

The labour market will change, and some of that change will hurt

One reason this argument matters is that AI is not just about convenience. It is about capability and economic pressure.

The labour market is not being wiped clean in one stroke, but it is being rearranged. Some workers will become more productive and more valuable because AI helps them. Some roles will be thinned out, broken apart, or devalued because parts of the work can now be automated or accelerated. The disruption will not fall evenly. It never does.

That is why "AI will create jobs" and "AI will destroy jobs" are both too blunt.

The truth is messier.

AI will create advantage for some people and pressure for others. It will help some firms, unsettle others, and force many workers to adapt whether they wanted this or not. The issue is not whether every job disappears. The issue is whether people are prepared for the shifts in skill, pace, and expectation.

Refusing to learn AI in that environment is not principled resistance. It may simply mean entering the next phase of work less prepared than the people beside you.

"AI slop" is real, but the phrase is often used lazily

There is a lot of garbage being produced with AI.

Bad writing. Bad images. Fake expertise. Derivative sludge. Empty content dressed up as usefulness. Entire acres of internet mulch.

So yes, "AI slop" exists.

But the phrase is often used as if it proves that AI itself is worthless. That does not follow.

Most slop is not evidence that the tool can only produce rubbish. It is evidence that bad users can now produce rubbish faster. Vague prompts, weak judgement, no checking, no taste, no standards, no editing, no domain knowledge: of course the result is poor.

Garbage in, garbage out still applies.

If anything, AI makes that rule harsher because it produces output at speed and with a surface polish that can fool people into thinking something half-baked is good enough.

The reverse is also true.

A capable user can use AI to learn, clarify, compare, draft, refine, question, and improve. A careless user can use the same tools to produce noise. The machine is not the whole story. The quality of thought behind the prompt matters. The care taken after the output matters. The standards of the user matter.

One of the less obvious benefits of helping more people become skilful with AI is that it may reduce the amount of slop in the world. Slop is usually a product of poor use: weak judgement, low effort, and no standards. When people learn how to prompt properly, refine outputs, question weak results, and bring their own taste and intelligence to the process, the quality rises. The more AI is used by people who care about clarity, accuracy, usefulness, and craft, the less room there is for lazy synthetic rubbish to dominate. Teaching people to use AI well is not only good for them individually. It is one of the few real antidotes to the flood of bad material these tools can produce.

One of the best ways to fight AI slop is not to run from AI, but to raise the standard of the people using it.

Real creators are already using AI well

One reason the public conversation about AI gets stuck is that too many people judge the whole field by its worst output. They look at generic images, dead-eyed music, bloated writing, and synthetic junk and conclude that AI can only make culture worse. I understand that reaction. A lot of what is being produced is poor.

But that is not the whole story.

There are already real writers, artists, musicians, designers and filmmakers using AI as part of a serious creative practice. Not to avoid skill, but to extend it. Not to replace judgement, but to give judgement more to work with. Not to make art without effort, but to explore structure, variation, texture, voice, arrangement, editing and iteration in new ways.

At the same time, many creators are rightly alarmed by the economic and ethical risks, including consent, compensation, imitation and market flooding. Both things are true. The field contains real promise and real damage.

The important point is that poor AI-assisted work does not prove the tool can only produce poor work. It often proves only that the person using it had little skill, little taste, or little care. The better creators become at using AI, the more obvious that distinction will become.

The problem is not that AI is touching writing, music, images and video. The problem is that most people have only seen what happens when it is used badly.

Why I take this seriously

My interest in AI is not theoretical.

I use it in my work assessing major IT bids and RFP responses against a framework I built to identify structural fragility. That framework was developed from years of experience and a large body of analysis drawn from original, authenticated sources.

AI helped me work through that material at scale, test patterns, and refine a way of spotting where bids are fragile, not just in isolated areas, but in the intersections between them.

That matters because the biggest problems in major IT bids often do not come from a single obvious risk. They come from combinations of weaknesses that become far more dangerous when they collide.

AI did not replace my judgement in that work. It helped me extend it.

That is one reason I take this technology seriously. Used badly, it produces noise. Used well, it can help uncover patterns, sharpen analysis, and make important judgement more informed.

Scams are getting better, not clumsier

One of the clearest reasons ordinary people need AI literacy is fraud.

Scam messages no longer need to be badly written. Fake calls no longer need to sound obviously fake. Fraudulent websites, emails, cloned voices, synthetic identities and persuasive messages can now be produced faster, more cheaply, and at greater scale than before.

That is not abstract.

It means the old clues are becoming less reliable. Bad spelling is no longer a dependable warning sign. Scam emails can sound polished. Fake calls can sound personal. Fraudulent messages can be tailored at scale.

This is one of the strongest practical cases for AI education. People need to know what modern deception looks like. They need to know how synthetic media works, how false confidence sounds, and how quickly generated material can be used to manipulate trust.

If the attack surface has changed, the public needs to change with it.

Creative work is under real pressure

Anyone speaking honestly about AI has to admit that the creative industries have genuine reasons to be alarmed.

Writers, illustrators, designers, musicians, editors, voice actors, photographers, translators, and media workers are not imagining the pressure. AI can help ordinary people make things they could not make before. It can also flood markets with cheaper imitations, weaken pricing power for human creators, and give firms an excuse to replace thoughtful craft with acceptable-enough output.

So this manifesto is not asking artists or designers to smile and accept being bulldozed by automation.

It is saying something else.

The reality of AI does not disappear because we dislike its consequences. The public still needs to understand the tools. Creators still need leverage, legal protection, recognition, and fairer rules. Both things can be true at once.

You can defend human creativity and still argue that people need AI literacy.

In fact, the case for literacy gets stronger, because the public needs enough understanding to tell the difference between useful assistance, cynical cost-cutting, and synthetic rubbish.

The Iran war made the point even harder to ignore

By March 2026, AI is not just a workplace issue, or a consumer issue, or a culture-war issue.

It is entangled with war.

Part of that is the information war that surrounds armed conflict itself. Synthetic media, manipulated clips, false context, generated imagery and AI-amplified propaganda muddy public understanding while events are still unfolding. That does not mean every contested piece of footage is fake. It means the environment in which people try to decide what is true has become more difficult, more polluted, and easier to manipulate.

That alone should be enough to end the lazy idea that AI is just a harmless consumer novelty.

There is more. AI is also being discussed globally in relation to autonomous weapons, military decision-making and battlefield information systems. Even where AI is not the decisive actor, it is increasingly part of the machinery around decision, targeting, propaganda, and perception.

That changes the moral and practical stakes.

It means AI literacy is not just about productivity anymore. It is also about civic reality. People need to understand how these systems shape what they see, what they believe, and how quickly falsehood can travel in moments that matter.

If AI can alter the information environment around war, then pretending it does not matter in everyday civilian life is nonsense.

The answer is not avoidance. It is mastery.

Every serious technology has a shadow. AI is no different.

In some ways, it may be worse than most because its reach is so broad. It touches work, trust, creativity, fraud, education, administration, and public discourse all at once.

That is exactly why the response cannot be passive fear.

The right response is capability.

Not mastery in the pompous sense. Most people do not need to become technical specialists.

I mean practical mastery.

Knowing how to ask better questions.
Knowing how to prompt clearly.
Knowing how to test an answer instead of swallowing it whole.
Knowing when AI is helping and when it is bluffing.
Knowing when privacy matters.
Knowing when a human expert matters more.
Knowing how to edit, verify, challenge, and redirect what the system gives back.

That is the line this site is built around.

Use AI, do not drift into being used by it.

Because that is what happens when people stay passive. They let the defaults shape them. They let the tools set the pace. They let faster, more capable users build the advantage while they mutter from the sidelines that the whole thing is ugly.

Sometimes it is ugly.

That changes nothing.

It is still here. It is still spreading. It is still changing the conditions of work and life.

What this site is trying to do

This site is meant to be useful.

It is not trying to turn ordinary people into AI zealots. God knows there are enough of those already.

It is trying to help them become capable.

Capable enough to use AI in real life.
Capable enough to avoid obvious traps.
Capable enough to improve the output instead of accepting the first draft.
Capable enough to spot hype, deception, and false authority.
Capable enough to get the benefits without surrendering their judgement.

That is the goal.

To make AI less mystical.
Less theatrical.
Less frightening.
Less dominant.

And more usable, more understandable, and more answerable to the person holding the keyboard.

Final position

So this is where I stand.

AI is not a fad.
It is not neutral in its effects.
It is not all good.
It is not all bad.
It is already changing work, fraud, creativity, trust, and public life.
It is already creating advantage for some and risk for others.
It is already close enough to ordinary life that refusal to engage with it is no longer a harmless personal preference.

It is becoming a form of literacy.

That does not mean people should kneel before it.

It means they should learn it.

Carefully.
Critically.
Without hype.
Without surrender.
Without pretending the downsides are not real.

Because the best defence against AI is not refusal.

It is competence.

And if AI is shaping jobs, scams, culture, and even the information environment around war, then ignorance is no longer caution.

It is exposure.

References

[1] Stanford HAI, AI Index Report 2025. Key figures referenced here include 78% of organisations reporting AI use in 2024, up from 55% in 2023, and US$33.9 billion in global private investment in generative AI in 2024. https://hai.stanford.edu/ai-index/2025-ai-index-report

[2] World Economic Forum, Future of Jobs Report 2025. Figures referenced here include 170 million jobs projected to be created and 92 million displaced by 2030. https://www.weforum.org/publications/the-future-of-jobs-report-2025/

[3] UNESCO, "Creators face projected global revenue losses of up to 24% by 2028, new UNESCO report shows," 4 March 2026. https://www.unesco.org/en/articles/creators-face-projected-global-revenue-losses-24-2028-new-unesco-report-shows

[4] Europol, EU Serious and Organised Crime Threat Assessment 2025. Referenced for warnings about AI accelerating romance scams, voice cloning, deepfakes, LLM-generated fraud scripts and other deceptive practices. https://www.europol.europa.eu/cms/sites/default/files/documents/EU-SOCTA-2025.pdf

[5] European Parliamentary Research Service, Scam calls in times of generative AI, 2025. Referenced for AI-enabled identity theft, phishing, investment scams, recruitment scams, deepfakes and voice cloning. https://www.europarl.europa.eu/RegData/etudes/ATAG/2025/777940/EPRS_ATA%282025%29777940_EN.pdf

[6] ABC News, ABC Verify reporting on misinformation during the Iran-Israel war, March 2026. Referenced for false or misleading war footage and the difficulty of verifying what is real during live conflict. https://www.abc.net.au/news/2026-03-05/abc-verify-misinformation-iran-israel-war/106415388

[7] United Nations Secretary-General statement, 24 September 2025, and related UN materials on lethal autonomous weapons systems operating without human control. https://press.un.org/en/2025/sgsm22830.doc.htm

[8] International Committee of the Red Cross, statements and submission on AI in the military domain, 2025. Referenced for risks in autonomous weapons, military decision-making, and information and communication systems. https://www.icrc.org/en/statement/we-cannot-let-AI-be-deployed-on-battlefield-without-oversight-and-regulation