Sora’s shutdown shows we no longer own our creative tools

Those who were working with Sora suddenly found themselves without a tool. It is an episode that reveals an increasingly widespread condition: the precariousness of creative software in the age of artificial intelligence.

On March 24, 2026, OpenAI announced the shutdown of Sora, its video-generation model. It was a sudden decision, driven by computational costs, strategic priorities, and competitive pressure, and it points to something broader: how fragile creative tools built on artificial intelligence have become. The app had launched just six months earlier, with the fanfare that accompanies every move by Altman’s company; a few months before the shutdown, a deal had also been reached with Disney to bring characters from its catalogue onto the platform. Now Sora is gone, the agreement has collapsed, and Disney has responded with a terse, diplomatic statement. The revenue Sora generated no longer justified the computational resources it required. OpenAI decided that video generation was a luxury it could no longer afford.

More importantly, Sora was one of the first tools to truly astonish us, but it had already begun to show its limits. Those who work with these systems on a daily basis had known this for months: the videos it generated were less coherent, less controllable, and more expensive than those produced by competing models, particularly in Asia. OpenAI had aimed for a mass audience, hoping video generation would become a viral toy, but not everyone is eager to pay for AI-generated cat videos. I am, for the record, but professionals working with AI video also need tools that are flexible and efficient. Services that target professionals, such as Kuaishou’s Kling, are holding steady and growing, while those that chased a general audience have ended up with costly products and highly volatile users.

The tweet that announced Sora's closure

Geopolitics and models

Sora’s shutdown is part of a broader dynamic: the growing divergence between the West and China in video generation. In February, ByteDance released Seedance 2.0. Within hours, the internet was flooded with clips of remarkable quality — coherent motion, consistent characters, cinematic control of the frame. Disney issued a cease-and-desist letter over copyright infringement; SAG-AFTRA condemned the unauthorized use of its members’ likenesses; the Motion Picture Association spoke of structural violations. ByteDance responded with standard diplomatic statements and suspended the model’s international launch, leaving it accessible only within domestic Chinese apps.

Many of today’s most advanced video-generation models are developed by Chinese companies. China censors politically sensitive content but is more permissive regarding foreign copyright; the West, by contrast, protects intellectual property with a rigidity that risks slowing creators’ access to advanced tools, partly in order to safeguard large corporations like Disney. The result is that, for independent artists who want to experiment with video generation (without touching politically sensitive content), the Chinese ecosystem can be more accessible. While Hollywood sues ByteDance, China accumulates experience and talent.

It is within this context that my own experience as an artist and researcher working with generative AI becomes relevant — an experience I share with many others I am in contact with: filmmakers, artists, experimenters, all bound together by what I would describe as a condition of permanent precariousness, one that forces us to move constantly from one platform to the next.

As generative AI systems become infrastructure of creative and intellectual work, vendor dependence becomes a collective problem.

Over the past two years I have used Midjourney, Stable Diffusion, Flux, and Krea for images; Sora, Kling, Midjourney, Luma, Seedance, and Veo for video; ChatGPT, Claude, Gemini, and DeepSeek for text. I have frequently switched platforms in response to improvements, regressions, price changes, closures, restrictions, and unexpected openings. Each switch requires relearning an interface, adapting to new interaction logics, and rebuilding workflows from scratch.

Working without ownership

A model update can suddenly make possible what was impossible the day before — but it can also degrade what once worked. Decisions are made elsewhere, according to financial, geopolitical, and legal logics that have little to do with the needs of those who actually use these tools. Sora is a perfect example: it was shut down, among other reasons, due to computational costs and increasing competition. Those working with it simply discovered, one Tuesday afternoon, that their tool no longer existed.

Sora's interface

This volatility is amplified by geopolitics. Tensions between the United States and China directly affect which tools are accessible and where. Seedance 2.0, among the most advanced models for video generation, is not available in the West due to legal pressure around copyright. In this landscape, the only real form of ownership lies with open-source models. Stable Diffusion, Flux, Chinese open models from Alibaba, and many others can be downloaded, run locally on personal hardware, and modified. No one can take them away; no forced update can degrade them without your consent.
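
To make that concrete: the sketch below shows roughly what “running a model locally” can look like, using the open-source diffusers library to load a publicly downloadable Stable Diffusion checkpoint on one’s own machine. The specific checkpoint, prompt, and the assumption of a CUDA-capable GPU are illustrative choices, not a recommendation.

```python
# Minimal sketch: generating an image entirely on local hardware.
# Assumes the Hugging Face diffusers library and a CUDA GPU;
# the checkpoint and prompt are illustrative.
import torch
from diffusers import StableDiffusionPipeline

# Download the open weights once; afterwards they live on your own disk.
pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # no remote service is involved in generation

image = pipe("an abandoned film set at dusk, cinematic lighting").images[0]
image.save("frame.png")
```

Once the weights are on disk, no provider decision can withdraw or silently alter them; the costs are the hardware and the willingness to maintain the setup oneself.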

The problem is that this freedom comes at a high technical cost. Running a video-generation model locally requires expensive hardware and programming skills that an artist may not have — or want to acquire. And open-source models are currently less powerful than proprietary ones, especially in video, where the gap remains significant. Open source must be defended and supported — this is beyond dispute — but it would be dishonest to present it as a complete solution. For most creatives working with AI, the reality is a subscription to proprietary services whose terms can change at any moment.

Sora glitch. Screen from Reddit

The closest precedent is digital graphics. Photoshop moved from a one-time purchase to a subscription model; Creative Cloud constantly evolves, features appear and disappear, prices rise. The industry has been through this before, and while the transition generated widespread dissatisfaction, it was ultimately absorbed.

Often, these changes are improvements, sometimes significant ones. Progress in this field is extremely rapid, and users benefit from it continuously. But the direction of improvement is decided by companies, and does not always align with users’ needs. A model can become more powerful overall while becoming less suited to a specific use; it can gain in safety while losing flexibility; it can be optimized for a general audience and become less interesting for those who experiment.

Frame of a video generated with Sora

There is also the question of portability. Moving from one system to another is not seamless. Prompts that work on Midjourney do not work on Gemini; techniques that produce excellent results on Claude do not transfer directly to ChatGPT. Each platform has its own idiosyncrasies, strengths, and internal logics, and the knowledge accumulated in one does not always carry over to another.

Decisions are made elsewhere, based on financial, geopolitical and legal logics that have nothing to do with the creative needs of those who use the tool.

This does not mean that knowledge is lost — on the contrary, those who have mastered one system tend to learn the next more quickly, because they develop an intuition for the underlying logic of generation that transcends any single tool. But each migration carries a cost in time, frustration, and temporarily worse results, and the absence of interoperability standards means these costs fall entirely on the user.

Sam Altman, co-founder of OpenAI. Photo from Wikipedia

As generative AI systems become infrastructure for creative and intellectual work, dependence on providers becomes a collective problem. The analogy — still somewhat premature — is with electricity grids or telecommunications: resources that, at a certain point, became too important to be left entirely to private actors. What is to be done?

Rejecting these tools is a form of moralizing Luddism destined for irrelevance: if a tool is useful — and these are — and there are no real alternatives, people will use it. Nor does the nostalgic fantasy of technological self-sufficiency, which never truly existed, offer a solution: how many tools we consider “ours” would function without services like electricity? The truth is that we have never been independent. That is why the response must be collective. We need to demand public access to AI systems: publicly funded open models, accessible computational infrastructure, and interoperability standards that reduce the cost of switching. Europe, with all the limits of its technology policy, may be the place where this demand has the greatest chance of being articulated and heard, precisely because it has fewer dominant players to protect in the generative AI market and can therefore reason in terms of public interest rather than competitive advantage.

Frame of a video generated with Sora

Sora is dead, long live Sora. Its users migrate elsewhere, relearn, and adapt, or more likely had already abandoned it. But access to these tools is increasingly determined by factors such as economic resources and nationality, with the inevitable consequence of creating or perpetuating inequality.