Sora2 creates photorealistic videos of up to 15 seconds from any prompt, ready for you to upload to Instagram, TikTok, or Sora2’s own platform.
Sora2’s killer feature is making cameos of you from a few seconds of uploaded footage, so you can create clips of yourself flying spaceships or riding tigers or whatever outlandish thing you fancy.
It has generated an outcry over the inordinate amount of compute and power needed to float the app. A 10-second video costs around $0.50 to generate – vastly more than a ChatGPT query – and free-tier users can make 30 per day. OpenAI, people feel, will need to start making money soon, or the whole AI-fuelled tech stock bubble might burst.
Here is a technology tipped to help solve global warming, daily burning through enough energy to power cities – in order to make “internet slop” for our social media feeds.
That said, the results, especially of the self-cameo feature, can be breathtaking. And beyond social media, this technology will revolutionise marketing.
So when slop is this good, maybe it’s time to get your hands dirty.
I tried Sora2 and, within a minute, had created videos of myself turning into a dolphin and growing wings and flying away.
What blew me away first was that it could render my likeness so well from the 10 seconds of video I fed it. Second was its “world model” – its grasp of object permanence and physics.
Dream big
Here then is a thumbnail guide to what you can do with Sora2.
First, dream big. Sora2 can render intricate scenes with multiple characters, specific motions and detailed backgrounds. Because your prompts will need to be extensive, you might consider using the desktop app as well as the mobile one.
Second, a game-changer for creators: you can direct the camera. Specify camera movements like pans, tilts, zooms and tracking shots to give your clips a truly cinematic feel.
Sora2 can also take an existing video or image and transform it. Upload your media and then, with a prompt, change the entire style, setting or action.
When you prompt, try to articulate four things: Subject, Action, Setting, and Mood. Don’t say, “A dog on a street.” Say something more like: “A high-angle wide shot of a golden retriever [Subject] running playfully through a fountain [Action] on a sun-drenched cobblestone street in Rome [Setting]. The scene is joyful and vibrant, with shallow depth of field [Mood/Lighting].”
And don’t just describe a continuous action. Break it into “beats”. Instead of “a man walks,” try: “A man in a trenchcoat walks three steps, pauses under a flickering neon sign, looks up, then turns left.”
Describe the camera movements. Say things like, “close-up on,” and “over-the-shoulder shot,” or “dolly in slowly.”
Don’t forget the audio. Add details like “rain tapping on a metal roof,” or “distant traffic hum” to build a richer world.
What’s off-limits? Well, in response to early misuse – like viral deepfakes of historical figures saying offensive things – OpenAI now blocks the text-to-video generation of public figures.
You can’t use someone’s likeness without their permission – and be very leery of giving other people permission to use yours.
And you can’t use it to animate copyrighted characters, though of course, people are finding ways around this.
Sora2 is an incredibly powerful tool. For those willing to learn the craft, it represents a new frontier for digital creativity.
Joe Smith is Founder of the AI consultancy 2Sigma Consultants. He studied AI at Imperial College Business School and is researching AI’s effects on cognition at Chulalongkorn University. He is author of The Optimized Marketer, a book on how to use AI to promote your business and yourself. Contact joe@2Sigmaconsultants.com.