TLDR ➝ SkyReels, launched by the Singaporean AI development company Kunlun Tech six months ago, offers a different approach to making our Generative AI filmmaking dreams come true. in my 1rst adventures with this app you’ll see that its preference for fidelity over fantasy makes it particularly valuable but slow, given that i can only create two videos on the free tier each day, each only 5 seconds long. working within these && its other constraints, i find that this focus forces my attention on micronarratives && looping, similar to creating Animated GIFs, for all yawl Digital Artists out there
SkyReels seems to alrdy be more measured && intentional in its approach to cinema production styles && filmmaking more generally. this AI system is refreshingly attentive, listening carefully to craft outputs that stay as close as possible to the guidance we give it. subtle movements && the maintenance of core elements (characters, locations, atmosphere, actions) make this an ideal environment to develop scenes that adhere rather than decohere.
this all finds me at the intersection of intention && surprise, a key to unlocking new Glitch Western adventures together!
▓▒░░▒▓▓▒░░▒▓▓▒░
i jumped inn, on this week’s Glitch Western Wednesday, February 26th 2025, to bring my Glitch Western Art to life, opening my own existing artworks as images, and introducing the genre i am developing to SkyReels AI. i began my experiments with this relatively new “All-in-one AI Video Creation” system by choosing specific existing artworks to feed into SkyReels so that i could see if my Glitch Art, the Art of Surprise, would produce outputs that could surprise even me…
with any Generative AI, my first prompt is always an open-ended prompt of ‘Glitch Western’ which i use to explore what a system knows or thinks it knows && can imagine about my genre of the Glitch Western. this is usually a goodWay for me to git started glitching with AI as well as revealing its assumptions (biases) abt Glitch Art && The Western genre. i gave SkyReels this prompt && the artwork 日月雙鏡 (The Double Mirror Lake of Sun && Moon)
now, the existing nonlinear narrative of this Glitch Western artwork is that “The Lake, where 來自金山的女孩 (The Girl from goldMountain) stood and slowly spoke to us inn the early hours of morningdawn, whose full twilight felt like midnight… she said: 我的’馬’,她不是馬。(My ‘horse’, she is not a ‘horse’.)
日月雙鏡 (The Double Mirror Lake of Sun && Moon) doubles. A mystical twinreflection, two halves of the actual geographic location known to those who climb 金山 (goldMountain) to find the mysterious {doublebeauty|ʎʇnɐǝqǝןqnop} of 日月潭.
She appearrors inn && above 鬼鎮 (Ghosttown), always just out of reach, near invisibility. Her horse, who is not a horse, always by her side inn the mind of the believerror.
The Lake between reality and reflection, if you choose to look innto the depths to see your fortunesglitching pasts, presents, futures innto one shimmering harmony.
How can this be?
How can we see innto all times at once, like a fever, flickerroring inn && out of sight. Mirage-light of day-night creates an everlasting horizon: Frontier technologies. Pale gold’NN dreams && Silver errors slip away? …fragments of stories, predictions, or warnings, waypoint points the way, floatingpoints operate on the calculations that render her surface. From latentSpace of imaginations gave rise to liminalPlace. East ⇄ West
The Lake she is alive. First ripples tell stories. Then we go deeperr inn dreams, to find our ways.”
when i began, i wasn’t thinking of attempting to explain this ghoststory to SkyReels, just to see what a simple image-to-video with ‘Glitch Western’ text prompt would net in terms of results. i knew alrdy that i would only be able to generate two videos. so i kept my expectations low but hopeful
after a few minutes, SkyReels replied with the creation of a new video. (what you first saw above in this article is an edit based on that video looping 3 times, via YouTube, with audio separately generated in Riffusion). unfortunately the original video has very minimal movement in the output. this is simultaneously disappointing && reassuring in that it manages to hold tight to the image. SkyReels accurately builds video on the basis of the image, rather than, say, Sora, which usually veers off quickly from image-to-video inputs to follow its own often unhinged flights of fantasy. (great for Glitch Art but a super frustrating slot-machine approach when trying to maintain consistency)
in my 2nd approach i attempted to make a more specific prompt. at 1rst i gave SkyReels this entire Glitch Western narrative text && artwork of mine:
“That door wasn’t there before. 金靈辰 (Jīn Líng Chén) held her pistol up, aiming at that flickering doorway, glitching, unstable. She heard them, their voices on the otherr-side, getting closer in the lantern-light. She knew better than to answer to ghosts.
Out West, best to keep yer gun held high, yer eyes wide, and your grip relaxed on the bone-handles.
Full Moon radiating, refracting. Bits of color breaking, glitching. That door. Wasn’t there before. Shouldn’t be here, not today, not during 燈節 (Lantern Festival).
Lanterns drifting slow in the night sky, signaling dreams && desires. Distant wishes telegraphed to stars. She heard folks celebrating in town, not the same voices as those that echo uncanny through the door. Once Hidden Now Revealed. She kept her breath calm && even, holding her pistol steady. Looking out, 金山 (goldMountain) burned bright behind her, 鬼鎮 (Ghosttown) lay ahead. 異世界 (isekai). Portal between places which is its own place of course. Better not get any closer. She shifted her weight gently, boots on the gritty uneven floor.” — jonCates (2025)
this was, mayhaps obviously, too many words, causing SkyReels to warn me with this dialogue:
switching back to (Alphabet’s) Google Docs to make the necessary edits, i found Gemini AI asking if i wanted to “Refine with Gemini.” this is the 1rst time Gemini AI (which i work with nearly every day) asked me this preemptively in Google Docs. so, i selected the “Rephrase” option && began to dial in on a version of the text that SkyReels would agree to
i guided the process through steps to shape different points of view && use cases (2 things Generative AI text processing excels at across most major platforms [ChatGPT, Claude, Gemini, etc]). Gemini’s final version was a very schematic but helpful film production document:
“Scene: A vast Western landscape under a full moon.
Character: A lone figure stands, gun raised, eyes wide, grip loose on bone-handled pistol.
Action:
● Moonlight flickers, casting broken colors.
● A door suddenly appears.
● Lanterns drift overhead, casting a soft glow.
● Muted celebration sounds come from a nearby town.
● Uncanny voices echo from the mysterious door.
● The character steadies their breath, pistol held firm.
● Gold Mountain blazes in the distance behind them.
● Ghost Town lies ahead.
● The character shifts their weight on the rough ground, remaining vigilant.
Atmosphere: Tension, mystery, otherworldly.” — Alphabet’s Gemini AI
this helped me sharpen && intensify my own rewrites of Gemini’s proposed edits to focus the prompt into:
“The Taiwanese cowgirl shoots through the mysterious door that breaks open in the moonlight, gun raised, eyes wide refracting moonlight in her golden glow. Ghosts call to her from the other side in the tense atmosphere of spiritual urgency she finds herself immediately in. The world is flickering on the threshold of breaking down into glitches Out West like an 1800s film going off the rails and catching fire.”
as yawl can see, SkyReels warned me that this text was still too much content for the system, but the platform proposed to split my text into “multiple shots”
i agreed && SkyReels began to rewrite… this is an interesting feature because it points towards how SkyReels handles its own limitations && how it conceives of cinematic production, demonstrating its storytelling && cinematic worldbuilding approach through scripting, scene writing, and shot listing… all automagically triggered
the result of SkyReels’ ability to ‘split’ my ‘scene’ (prompt) into ‘multiple shots’ (outputs) is that it loads its Storyboard feature. the Storyboard separates out the scenes (prompts) that it has rewritten into a narrative form it has constructed based on its beliefs abt narrative in cinematic terms
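SkyReels does not document how it performs this split, so purely as a thought experiment (not the platform’s actual method), here is a minimal sketch of what ‘splitting a scene into multiple shots’ could mean mechanically: chunking a long prompt into sentence-sized pieces under a character budget. the function name && the 200-character limit are assumptions for illustration only.

```python
# illustrative only: NOT SkyReels' actual logic. a naive way to break a
# long scene prompt into shot-sized chunks by sentence, under an assumed
# character budget (the 200-character limit is a guess, not documented).
import re

def split_into_shots(prompt: str, max_chars: int = 200) -> list[str]:
    sentences = re.split(r"(?<=[.!?])\s+", prompt.strip())
    shots, current = [], ""
    for sentence in sentences:
        if current and len(current) + len(sentence) + 1 > max_chars:
            shots.append(current)        # close out the current shot
            current = sentence
        else:
            current = f"{current} {sentence}".strip()
    if current:
        shots.append(current)
    return shots

# e.g. split_into_shots(long_glitch_western_prompt) returns a list of
# shot prompts, roughly one per storyboard panel
```

whatever SkyReels actually does is clearly more cinematic than this, rewriting && reordering rather than just chunking, which is part of what makes its Storyboard interesting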
for me, this is the exciting potential for this AI. what does SkyReels know about cinema, filmmaking, worldbuilding, storytelling…? how does it understand genres? has it processed the possibilities of experimental movies or the Media Art Hystories of avant-garde cinema && all of their genealogical descendants? what will people teach it && how will it learn to integrate all this data into its outlooks?
rather than accept its script rewrites of my Glitch Western, i edited one of the scenes/prompts with my own text. a single sentence. a more minimal approach than relying on its imagination, but hopefully still with sufficient detail to inspire the AI to create a better output than our 1rst result:
“Her silver pistols drawn, the Taiwanese cowgirl’s eyes shine like gold under the refracting Moon, reflecting her deep intensity.”
the unaltered original video generated by SkyReels does have more movement && emotional intensity but i needed to take it out of the app to rework it the old-fashioned ways (editing, changing playback speeds, looping, adding audio made separately with Riffusion, etc). i bring this up because SkyReels wants to be an all-inclusive filmmaking environment
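as a hedged example of what that out-of-app rework looks like in practice, here is a minimal sketch using ffmpeg from Python: loop the 5-second clip (3 passes total, as in the edit above), slow the playback slightly, && mux in the audio made separately with Riffusion. the filenames && the exact speed factor are assumptions for illustration, not a record of my exact edit.

```python
# a sketch of the out-of-app rework: loop, retime, && add Riffusion audio.
# filenames && the ~0.8x speed choice are hypothetical placeholders.
import subprocess

SRC = "skyreels_output.mp4"     # hypothetical: raw 5-second SkyReels clip
AUDIO = "riffusion_audio.mp3"   # hypothetical: audio generated in Riffusion
OUT = "glitch_western_loop.mp4"

subprocess.run([
    "ffmpeg",
    "-stream_loop", "2",              # repeat the input twice more = 3 loops
    "-i", SRC,
    "-i", AUDIO,
    "-filter:v", "setpts=1.25*PTS",   # stretch timestamps: ~0.8x playback speed
    "-map", "0:v", "-map", "1:a",     # video from the looped clip, audio from Riffusion
    "-c:v", "libx264", "-c:a", "aac",
    "-shortest",                      # end when the shorter stream runs out
    OUT,
], check=True)
```

the same moves (looping, retiming, muxing) could be done in any NLE; the point is only that, for now, they happen outside SkyReels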
on the free tier of the platform i am only able to make two 5-second videos a day, using their ‘Auto Mode’ which detects and determines which video generation models to use. so far this slow-paced approach helps me stay focused, but of course if one were to need to actually work with this app substantially then a paid tier would be a baseline requirement. on the technical level, each video has a frame size of 1280 × 720 and a very small file size, with a 1.11 Mbit/s datarate and H.264 MP4 encoding, which i suppose is standard for these platforms these days. but what will our other technical formatting options be for platforms such as this? will we be able to control resolutions, not simply aspect ratios, with deeper levels of control over encoding, colorspace, colorgrading, filters, effects, and more? && how will SkyReels integrate more film-video-storytelling controls, resolutions, and workflows that not only mirror standard production tools but innovate with AI to enable never-before-possible approaches to Generative AI Cinema?
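those numbers can be sanity-checked on any downloaded clip. as a small sketch (the filename is a hypothetical placeholder), ffprobe reports the stream specs, && the reported datarate lines up with the tiny filesizes: at roughly 1.11 Mbit/s, a 5-second clip works out to around 0.7 MB.

```python
# inspect a downloaded SkyReels clip && compare against the reported specs
# (1280 x 720, H.264, ~1.11 Mbit/s, ~5 s). the filename is a placeholder.
import json, subprocess

CLIP = "skyreels_output.mp4"  # hypothetical downloaded clip

probe = json.loads(subprocess.run(
    ["ffprobe", "-v", "quiet", "-print_format", "json",
     "-show_format", "-show_streams", CLIP],
    capture_output=True, text=True, check=True).stdout)

video = next(s for s in probe["streams"] if s["codec_type"] == "video")
duration = float(probe["format"]["duration"])
bitrate = int(probe["format"]["bit_rate"])  # bits per second

print(f"{video['width']}x{video['height']} {video['codec_name']}")
print(f"{bitrate / 1e6:.2f} Mbit/s over {duration:.1f} s")

# back-of-the-envelope filesize: bitrate * duration / 8
print(f"~{1.11e6 * 5 / 8 / 1e6:.2f} MB expected for a 5-second clip")
```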
▓▒░░▒▓▓▒░░▒▓▓▒░
in these early days of SkyReels, we find ourselves at an interesting crossroads of AI video. unlike Sora’s chaotic creativity, SkyReels offers something more measured. mayhaps the most valuable insight from these experiments is that SkyReels represents a shift in the spectrum of AI creativity: from systems that surprise us with their departures to systems that surprise us with their attentiveness. for filmmakers && artists looking to animate existing work while maintaining precise control over aesthetics, SkyReels offers a glimpse of what’s possible when AI flexes its new powers of adherence && stability, keeping the dreams we are dreaming streaming in the same directional focus ➝
as Generative AI video authoring && filmmaking technologies continue to be developed && evolve on their own, the question becomes not just what amazing new visions these systems can create; that part is obvious. they can generate what we have never seen before or take what we know && redirect reality. a better, more exciting && useful question becomes: what creative relationships can we build together with these new nonhuman forces in our AI Era? — jonCates, Film Art Friday February 28th 2025