Generate Videos From Nothing

Video generation is the newest AI frontier and is moving fast. You can now type a description or upload an image and get back a short video clip (typically 5 to 30 seconds), and the quality is improving month by month. Useful for social media content, visual concepts, and creative experimentation.

If you're hoping to use ChatGPT Plus for video: you can't, at the moment. OpenAI shut down the original Sora on 26 April 2026, so video generation is no longer included with any ChatGPT plan. The replacement, Sora 2, is rolling out invite-only and is not yet available in Australia. For now, the working options here are Runway, Kling, Veo (via Google AI Pro), and Pika — all four work fine in Australia.
Runway
What it is: AI video generation and editing platform with text-to-video and image-to-video
Best at: High-quality short video clips, creative visual effects, professional-grade output
Free tier: 125 one-time credits (enough for about 25 seconds of video)
First paid tier: Standard Plan, US$12/month (~A$19/month)
Ken's take: Runway is the most established AI video tool. The Gen-4.5 model (current as of 2026, and sitting on top of the Artificial Analysis Text-to-Video benchmark) produces genuinely impressive results. Browser-based, no software to install. The free tier is enough to experiment but runs out fast. Think of this as a creative tool for short clips, not a replacement for traditional video production.
Sign up: https://runwayml.com
Kling AI
What it is: AI video generator from Kuaishou with industry-leading video length (up to 3 minutes)
Best at: Longer AI-generated videos, generous free tier, built-in audio generation
Free tier: 66 credits per day (enough for one or two short videos daily; resets every 24 hours)
First paid tier: Standard Plan, US$7/month (~A$11/month)
Ken's take: Kling has the most generous free tier of any AI video tool. The daily credit refresh means you can experiment consistently without paying. Videos can be up to 3 minutes, which is significantly longer than competitors. Quality is good but can be inconsistent. Failed generations still consume credits, which is frustrating. Worth trying on the free tier before considering paying.
Sign up: https://klingai.com
Veo 3.1 (Google)
What it is: Google's video generation model, and the natural replacement for Sora, which OpenAI shut down on 26 April 2026.
Best at: Cost-effective serious video work, with audio generation built in. Veo 3.1 Lite (released 31 March 2026) sits at US$0.05/sec for 720p output via the API, making it the cheapest serious video model on the market. Fast and Standard tiers handle higher-quality work.
Free tier: Limited generations included with the free Gemini tier; Google AI Pro adds a usable monthly allowance.
First paid tier: Google AI Pro, US$19.99/month (~A$32/month) for around 90 videos a month, or pay-as-you-go via the API.
Ken's take: If you are already on Google AI Pro for Gemini, Veo is essentially free at the volumes most people need. The audio-included generation makes it more useful for social content than Runway or Kling, where you would normally source audio separately. Quality is on par with Kling and slightly behind Runway for cinematic work, but the price difference is significant.
Sign up: https://gemini.google.com
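To see what that per-second API pricing means in practice, here is a quick back-of-the-envelope sketch. It assumes the US$0.05/sec Lite rate quoted above; the function name and structure are my own, not part of any Google API.

```python
# Rough cost estimate for Veo 3.1 Lite API output,
# assuming the US$0.05/sec (720p) rate quoted above.
LITE_RATE_USD_PER_SEC = 0.05  # assumed rate; check current pricing before relying on it

def clip_cost_usd(seconds: int, rate: float = LITE_RATE_USD_PER_SEC) -> float:
    """Estimated API cost in US dollars for a clip of the given length."""
    return round(seconds * rate, 2)

print(clip_cost_usd(8))   # 0.4  (a typical short clip)
print(clip_cost_usd(30))  # 1.5  (the long end of current clip lengths)
```

Even at the 30-second end, a Lite clip costs well under the price of a coffee, which is why pay-as-you-go is worth considering if you only generate occasionally.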
Pika
What it is: AI video generation focused on stylised effects and creative visual experiments rather than cinematic realism. Pika 2.5 is the current model.
Best at: Short, playful, social-media-style clips with effects you cannot easily get from Runway or Kling. The Pikaffects library (Pikadditions, Pikaswaps and similar) is the differentiator.
Free tier: Daily credit allowance for short clips
First paid tier: Basic Plan, US$8/month (~A$13/month); Pro at US$76/month for bulk creation
Ken's take: Pika sits in a different niche to Runway and Kling. Where the others compete on cinematic realism, Pika leans into playful, stylised effects. If you make TikTok content or want quick, fun clips with creative twists, Pika is the right pick. For serious video work, quality lags behind Runway. The free tier is enough to test it. Worth knowing it exists, but not the first AI video tool I would tell a beginner to try.
Sign up: https://pika.art

Try this right now (free)

Open Kling AI (free account) and try: "A golden retriever running through a sunlit wheat field in slow motion, cinematic look." Or upload a photo of a landscape and ask it to animate it with gentle wind and moving clouds.

Edit Videos Smarter

This is different from generating videos from scratch. CapCut is a video editor (many of you may already use it) that has added AI features to make editing faster and easier. The standout is auto-captioning, which alone saves hours of work.

CapCut
What it is: Video editor with powerful AI features for automated editing, captions, and content creation
Best at: Auto-captioning (92-95% accuracy), AI avatars, text-to-speech, script-to-video assembly, background removal
Free tier: Includes auto captions, basic AI tools, AI avatars, 1080p export
First paid tier: CapCut Pro, US$19.99/month (~A$32/month); price nearly doubled in early 2026
Ken's take: The auto-caption feature alone makes CapCut worth knowing about. Upload a video, click one button, and get accurately timed subtitles. The free tier is genuinely generous. If you make any kind of video content for social media, CapCut should be in your toolkit. Note: CapCut is owned by ByteDance (the TikTok parent company), which is worth being aware of from a privacy perspective.
Sign up: https://capcut.com

Try this right now (free)

Record a 30-second video of yourself talking on your phone. Upload it to CapCut and use the auto-caption feature. See how accurate the transcription is and how quickly it generates styled subtitles.

Worked Example: Dogs Cooking Dinner

The best way to understand AI video prompting is to try something ridiculous. So let us make a video of dogs cooking dinner.

Your first instinct might be to type "dogs cooking dinner" and hit generate. You will get something back, but it will be vague and generic. AI video tools respond to detail the same way AI text tools do. The more specific you are, the better the result.

Here is a prompt that actually works in Kling AI:

"A golden retriever wearing a white chef's hat stands on its hind legs at a kitchen bench, stirring a pot with a wooden spoon. A beagle in a tiny apron sits nearby chopping vegetables with its paws. Warm kitchen lighting, steam rising from the pot, cinematic shallow depth of field."

Notice what that prompt does. It names specific dog breeds (not just "dogs"). It describes a specific action for each dog. It sets a scene with lighting and atmosphere. And it uses a camera term — "shallow depth of field" — that the AI understands.
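That structure (specific subjects, one action each, scene, camera language) is reusable for any prompt, not just this one. A minimal sketch of it as a template, if you like thinking in code; the function and field names are my own invention, not any tool's API:

```python
def build_video_prompt(subjects: list[tuple[str, str]], scene: str, camera: str) -> str:
    """Assemble a detailed video prompt from (subject, action) pairs,
    a scene/atmosphere description, and a camera note."""
    actions = ". ".join(f"{subject} {action}" for subject, action in subjects)
    return f"{actions}. {scene}, {camera}."

prompt = build_video_prompt(
    subjects=[
        ("A golden retriever wearing a white chef's hat", "stirs a pot with a wooden spoon"),
        ("A beagle in a tiny apron", "chops vegetables"),
    ],
    scene="Warm kitchen lighting, steam rising from the pot",
    camera="cinematic shallow depth of field",
)
print(prompt)
```

Swap out the subjects, scene, or camera note and you have a new prompt with the same proven shape.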

Will it be perfect? No. The dogs' paws will probably do something weird. The chopping will not look right. AI video still struggles with fine motor actions and realistic object manipulation. But the overall scene — dogs in a kitchen, wearing chef gear, apparently cooking — will come through clearly enough to make anyone smile.

Try this right now (free)

Open Kling AI with your free account and paste the prompt above. Generate it at standard quality first (uses fewer credits). Then try changing the breeds, the dish, or the kitchen style. A poodle making sushi. A border collie flipping pancakes. You will learn more about how these tools interpret prompts in ten minutes of playing around than you will from any tutorial.

See also: If you are interested in AI-generated music to pair with your videos, see Music. For converting speech to text, see Transcription. For AI-generated images to use as starting frames, see Image Generation.