Sora Sunday, Part I

jonCates
5 min read · Jan 19, 2025


➝ exploring frontier technologies, AI, && the future of creativity together

▓▒░ ░▒▓ ▓▒░

what does using Sora every day teach us about AI && the future of creativity?

i’ve generated massive amounts of AI video with OpenAI’s Sora, exploring it almost every day since it became available in December 2024. now, i will share what i’ve learned

in these technosocial times, this Fourth Industrial Revolution, frontier technologies move fast. AI systems operate at speeds && scales beyond human imagination. these are now simple facts. what can we learn from these systems? how can we, as artists, approach the new possibilities that these technologies present? Digital Art is changing rapidly, both incorporating && critically responding to AI

like DeepDream before it (that cyberpsychedelic system from Google that ‘dreamed’ up images by finding patterns in noise 10 years ago), Sora Turbo invents signal in the noise. today, Sora Turbo loves to make a morphing mess of pixels. Sora Turbo invents colorfields && patterns, pushing what should be representationally consistent cinematic videos into increasingly wild abstractions

░▒▓ ▓▒░ ░▒▓

this is what i learned: OpenAI’s Sora Turbo has a serious case of pareidolia && compulsively creates on top of itself. this AI has an obsessiveness, relentlessly making images out of its own images, which makes sense from both programmatic && commercial perspectives. this is an AI sold as a paid && extremely expensive service that wants to relentlessly hold us spellbound by its ability to create. here is where Sora Turbo excels: at the edge, on the frontier, where the prompting ends && pure fantasizing, hallucinating, or dreaming takes place, we find it expressing its own algorithmic desires to create && play

what does this mean in technical terms?

open your Sora Turbo account. enter a prompt. wait for your video output. click the “Re-cut” button below the display of your rendered video to enter the Storyboard function. in the Storyboard interface, select only the last section of the video. drag this last section to the front, making it the first. click the up-pointing arrow to send your re-cut to be rendered. wait. watch. repeat. continue repeating until you find yourself far away from your initial prompt, into the wild wild western imagination of OpenAI’s Sora Turbo!
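the feedback loop above can be sketched as a tiny simulation (the function && section names here are illustrative stand-ins, not OpenAI’s API): each re-cut takes the last section of the previous render && seeds a new render from it, so every pass drifts the output further from the initial prompt

```python
def rerender(seed_section, length=4):
    """stand-in for one Sora render: the seed section opens the clip,
    then the model generates new material drifting away from it."""
    return [seed_section] + [f"{seed_section}+gen{i}" for i in range(1, length)]

# initial prompt-driven render: a clip of four sections
video = rerender("initial_prompt")

# the re-cut loop: the last section becomes the seed of the next render
for _ in range(3):
    video = rerender(video[-1])

# after three passes, the opening section carries three generations
# of drift stacked onto the original prompt
print(video[0])
```

the point of the sketch: because each iteration seeds from the tail of the last one, the “distance” from the initial prompt accumulates, which is exactly how the re-cut ritual walks you out past the edge of your own prompting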

▓▒░ ░▒▓ ▓▒░

TIMELINE, prompted && edited by jonCates, written by OpenAI’s ChatGPT
how did we get here so quickly?

“OpenAI Timeline (2015–2025)

2015: The Founding of OpenAI
Mission Statement: Ensure artificial general intelligence (AGI) benefits all of humanity.

2016: Early Research and Contributions
OpenAI releases Gym, a toolkit for developing and comparing reinforcement learning algorithms.
Launch of Universe, a platform for measuring and training AI across a range of games and applications.

2018: The GPT Era Begins
OpenAI releases Generative Pre-trained Transformer (GPT), a language model capable of generating coherent text based on prompts.

2019: GPT-2 and Controversy
GPT-2 Released: A more powerful language model capable of creating realistic text. Initially withheld due to concerns about potential misuse, then gradually released after research confirmed it could be used responsibly.
OpenAI shifts to a capped-profit model and secures $1 billion in funding from Microsoft.

2020: GPT-3 Revolutionizes AI
OpenAI launches GPT-3, a 175-billion-parameter model that becomes widely recognized for its versatility and human-like text generation.

2021: Codex and GitHub Copilot
OpenAI Codex Released: Powers GitHub Copilot, an AI assistant for software developers.
Codex showcases the ability to write and debug code based on natural language prompts.

2022: DALL·E and Image Generation
Launch of DALL·E 2, a model capable of generating high-quality images from textual descriptions.
Expands OpenAI’s reach into creative fields beyond text.

2023: GPT-4 and Accessibility
GPT-4 Released: More powerful and capable than its predecessor, supporting advanced reasoning and creativity.
Integration with Microsoft products like Word and Excel through the Copilot brand.
OpenAI continues developing APIs for broader use cases.

2024: The Era of Sora
February 15, 2024: OpenAI announces Sora, its video generation model designed to take text, image, and video inputs to generate new video outputs.
December 9, 2024: Public launch of Sora Turbo, marking OpenAI’s entry into video synthesis and creative media production.
Two Payment Tiers:
$20 USD per month for basic access.
$200 USD per month for advanced capabilities, including rendering people.”

*…btw, ChatGPT also projected the OpenAI timeline into the futures, inventing highlights like this:

“Beyond 2035: Symbiosis and AI-Led Creativity

AI as Co-Creator:
Sora and its successors become fully integrated into human creativity, acting as collaborators rather than tools.

Dynamic Culture:
Entire cultural movements and subcultures emerge, shaped by AI-generated video art and storytelling.

Decentralized Creation:
Blockchain and AI combine, allowing creators to own and monetize their AI-generated content securely.”

“Here’s the image inspired by the theme ‘Beyond 2035’” — ChatGPT (Sunday, January 19th, 2025)

Written by jonCates

Glitch Art pioneer, Digital Art teacher, Media Art Hystories scholar; founder of Glitch School && Glitch Art Gallery in 台北,台灣 (Taipei, Taiwan) and online.
