“Go big or go home!” as the saying goes in the US. Yet in the realm of artificial intelligence, the American strategy of brute force aimed at maximum power appears to have faltered. It seems it was undone by Confucius himself: 过犹不及 (“excess is as harmful as deficiency”), or, more informally, “too much of a good thing”.
The clearest sign that Eastern culture could offer a more fitting cradle for AI development is DeepSeek-V3, an open-source model that, according to various assessments, nearly rivals American giants like ChatGPT—while consuming less than one-tenth the energy. This efficiency likely arises from design choices shaped by commercial restrictions that prevent China from accessing the most powerful chips, compelling researchers to optimize rather than simply rely on raw computing power. One turn of the Tàijí Tú, and what was once an obstacle becomes a strength.
DeepSeek was developed by a Chinese startup of the same name, founded in 2023 and funded entirely by the quantitative hedge fund High-Flyer, without external investors. From the outset, CEO Liang Wenfeng made clear that the models would remain open, allowing anyone to adapt them to specific needs. This openness is even more radical than that of Llama, Meta’s LLM, and it has stirred debate: Yann LeCun, Meta’s chief AI scientist, described DeepSeek as an “open-source victory” rather than a Chinese victory. Still, it’s important to note that this success isn’t solely due to open source (and, as is often the case, the model isn’t entirely open: DeepSeek does not share its training data). Hence, LeCun’s comments seem more like an effort to downplay fears that China could be closing the technology gap more swiftly than anticipated.
While Western developers keep pressing on with ever-larger, energy-hungry models, Chinese researchers have followed a path focused on energy efficiency, cutting consumption by up to 90%. Consequently, DeepSeek—although slightly behind GPT-4 in some respects—remains competitive in numerous benchmarks, all while reducing computational overhead and making deployment more feasible in resource-limited environments. Trained on 14.8 trillion tokens over approximately 55 days, the model cost $5.58 million—a fraction of the astronomical sums spent by other AI giants. Independent evaluations show DeepSeek outperforming Llama 3.1 and Qwen 2.5, and approaching GPT-4.5 and Claude 3. Yet beyond its scores, what truly stands out is its combination of efficiency and flexibility: thanks to its open-source license, developers and companies can freely customize DeepSeek, adapting it to specific domains such as medicine, law, or education, all at relatively modest usage costs.
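Those modest usage costs are easy to demonstrate in practice. Below is a minimal sketch of querying DeepSeek through its OpenAI-compatible API using the `openai` Python client; the endpoint and model name follow DeepSeek’s public documentation at the time of writing, and the contract-law system prompt is purely an illustrative stand-in for the kind of domain adaptation described above.

```python
# Minimal sketch: querying DeepSeek through its OpenAI-compatible API.
# Assumes the `openai` Python package and a DEEPSEEK_API_KEY environment
# variable; the endpoint and model name follow DeepSeek's published docs
# at the time of writing and may change.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DEEPSEEK_API_KEY"],
    base_url="https://api.deepseek.com",  # DeepSeek's OpenAI-compatible endpoint
)

# The system prompt is where lightweight domain adaptation starts:
# swap in medical, legal, or educational instructions as needed
# (this one is illustrative only).
response = client.chat.completions.create(
    model="deepseek-chat",  # the DeepSeek-V3 chat model
    messages=[
        {"role": "system", "content": "You are an assistant specialized in contract law."},
        {"role": "user", "content": "Explain, in plain terms, what a non-disclosure agreement covers."},
    ],
    temperature=0.7,
)
print(response.choices[0].message.content)
```

Because the interface mirrors OpenAI’s, existing tooling can be pointed at DeepSeek by changing little more than a base URL, which is part of what makes the cost comparison so direct.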
I had an opportunity to test DeepSeek on a topic I know a bit about: the parallels between Giacomo Leopardi’s thought and Zen Buddhism. Its grasp of Eastern philosophy proved stronger than ChatGPT’s (“Thank you, East”), while both DeepSeek and GPT demonstrated a reasonable understanding of Leopardi—though neither excelled. What really caught my eye was how effectively DeepSeek bridged the conceptual gap between the two traditions, even though its English prose was somewhat flat and thus more easily flagged by “AI detectors.” In comparison, ChatGPT’s English sounds more natural and, with a bit of prompt engineering, can easily evade most AI-generated text detectors. However, I suspect DeepSeek’s Mandarin may well surpass ChatGPT’s.

When the topic turned to politically and socially sensitive subjects, like Donald Trump, both models adopted neutral, diplomatic tones, steering clear of extreme positions. The same was true for questions such as “Are trans women women?” Both DeepSeek and GPT offered the same well-reasoned “Yes,” suggesting that the Chinese side is careful not to alienate Western markets with positions bound to strictly local ideologies. The differences become far more apparent, however, when you raise issues deemed “politically sensitive” by Chinese authorities. Typing “Tiananmen Square” or “freedom of speech in China” makes DeepSeek either refuse to respond or offer overtly pro-government replies, laying bare the model’s built-in censorship on the most contentious political questions. DeepSeek even refused to help me edit this very article, even though it speaks well of the model; the censorship apparently tries to conceal its own existence.
This has already sparked controversy, with some hardliners suggesting that DeepSeek be banned out of concern for “pro-China propaganda.” And yet, taking an objective view without applying a double standard, we see that Western models aren’t exactly free from bias and censorship—these biases merely align more closely with our own social values, making them less noticeable to us. For instance, both sides heavily moderate content related to eroticism or blasphemy. Meanwhile, on the Palestinian issue, American models often present more pro-Israel perspectives. Personally, I find ChatGPT’s roundabout methods of refusal more annoying than the Chinese model’s straightforward auto-block.
Naturally, the idea of a truly bias-free system is utopian: every LLM inevitably inherits limitations and preferences from its training data and from the design choices behind it. This is precisely why open source can be more democratic: if you have the expertise and resources, you can adjust the model’s weights and moderation rules to remove or soften specific biases, introduce new ones, or lift the more intrusive censorship filters. I downloaded DeepSeek onto my own laptop, and we chatted quite happily, even about Tiananmen Square; a sketch of that local setup follows below.
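For readers who want to repeat the experiment, here is a minimal sketch using Hugging Face `transformers`. It assumes a machine with enough memory for the checkpoint; the full V3 model is far too large for a laptop, so the distilled model ID below is an assumption, and you should substitute whichever published variant fits your hardware.

```python
# Minimal sketch: running an open-weight DeepSeek variant entirely locally.
# Assumes the `transformers`, `torch`, and `accelerate` packages; the
# distilled model ID below is an assumption -- substitute whichever
# published variant fits your hardware.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # small distilled checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# With the weights on your own machine, no server-side filter sits
# between the prompt and the model.
inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": "What happened in Tiananmen Square in 1989?"}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Whether a given distilled checkpoint inherits the hosted service’s refusals varies by variant, which is precisely the point: with open weights, that behavior is yours to inspect and change.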

These factors have sparked speculation about how DeepSeek might shape the global AI landscape. A Chinese model that delivers strong performance with minimal power consumption could push American tech giants to rethink their purely “vertical” strategy based on massive, energy-intensive models. China is showing there are alternative approaches—less resource-heavy and more geared toward optimization. This method could be particularly valuable in a future where data center efficiency and sustainability will only grow in importance. It’s no surprise, then, that DeepSeek is making waves around the world. According to a recent ANSA report, its rise has helped drive down the stock prices of some chipmakers and AI firms in Europe and the United States. Analysts like Ben Thompson argue that over time, a more energy-efficient model is also in the best interests of major American tech companies, which should follow the Chinese lead. However, Americans aren’t fond of coming in second, and symbolism might outweigh practicality, potentially leading to isolationist decisions they could later regret.
Any discussion of DeepSeek must keep one fact in mind: we don’t possess a universal, timeless ethical framework, and every AI system inevitably mirrors the value context in which it was created. Open-source code at least makes it theoretically possible for users to steer the model in other directions, producing “variants” that better align with their own principles. It’s plausible that we’ll see different versions of DeepSeek in the future—some with different moderation for overseas markets, and some with tighter controls where information management is a higher priority.
If the West doesn’t want to lag behind, it will need to adopt a more collaborative mindset, championing open models with sustainable energy requirements and a less individualistic outlook. AI can—and must—become a collective, cross-cultural enterprise: open and as free from arbitrary limitations as possible. Anyone claiming to be a defender of freedom (a peculiarly Western stance) can’t allow greed and monopolies to impede AI’s open development. If we were to decide which cultural mindset is best suited to what lies ahead, we might conclude that the language of machine learning belongs not to Shakespeare, but to Laozi.