cross-posted from: https://lemmy.sdf.org/post/42070306
In early 2025, the Chinese company DeepSeek launched a powerful LLM-based chatbot that quickly drew international attention. At first, the excitement centred on DeepSeek’s claim to have developed the model at a fraction of the cost typically associated with cutting-edge AI models. But the greater stir came shortly after, as online platforms and news articles were flooded with examples of DeepSeek’s responses: claiming that Taiwan is part of China, refusing to discuss events like the Tiananmen Square massacre, or deflecting questions about Xi Jinping.
[…]
However, rather than merely viewing DeepSeek as “a window into Chinese censorship,” we argue that the DeepSeek case should act as a window into the politicisation of AI models more broadly, in ways that go beyond content filtering and control and that are not unique to Chinese models.
Of Course It’s Censored
The fact that DeepSeek filters out politically sensitive responses is hardly surprising. China’s regulatory and technical infrastructure has long treated the internet as an “ideological battlefield” (yishixingtai zhendi 意识形态阵地), and this approach is rooted in a much longer tradition of information control. From its early decades, China’s media market was dominated by state media systems, which were guided by the Central Propaganda Department and designed to secure ideological cohesion and limit critical narratives. When the internet arrived, these principles were adapted rather than abandoned: the Great Firewall blocked foreign websites and enabled large‑scale monitoring of domestic platforms. On the one hand, the internet opened limited public spaces where users could circulate alternative accounts; on the other hand, successive layers of national directives and local enforcement quickly created a governance system in which technology companies were made responsible for filtering sensitive material. Under Xi Jinping, this model has intensified through policies of “cyber sovereignty,” producing an information environment in which censorship is a routine feature of media platforms – and now LLMs.
[…]
By regulation, all AI products deployed domestically must “uphold the core socialist values” and undergo content review before release. Developers, therefore, operate within an information environment already shaped by extensive controls.
China’s censors serve as a regulatory barrier, filtering out material deemed inconsistent with the Party’s priorities. In practice, this means that
(1) the local training data available to developers is already censored, as certain content is largely absent from domestic news, search engines, and social media;
(2) the model‑building process itself is conducted under compliance requirements; and
(3) real‑time mechanisms are embedded, ensuring that certain prompts trigger avoidance scripts or canned replies.
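To make point (3) concrete, the sketch below shows, in purely illustrative terms, how a prompt-level guardrail of this kind can work: a deployment wrapper checks incoming prompts against a blocklist and returns a canned reply before the underlying model is ever queried. The keywords, function names, and canned text are hypothetical assumptions for demonstration only; DeepSeek’s actual filtering implementation is not public.

```python
# Illustrative only: a hypothetical prompt-level guardrail of the kind
# described in point (3). Keywords, names, and the canned reply are
# invented for demonstration and do not reflect any real system's code.

BLOCKED_TOPICS = {"tiananmen", "taiwan independence"}  # hypothetical blocklist
CANNED_REPLY = "Let's talk about something else."      # hypothetical deflection text


def guarded_chat(prompt: str, model_fn) -> str:
    """Return a canned reply for blocked topics; otherwise call the model."""
    lowered = prompt.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return CANNED_REPLY
    return model_fn(prompt)


if __name__ == "__main__":
    # Stand-in for a real LLM call, just to show the control flow.
    fake_model = lambda p: f"[model answer to: {p}]"
    print(guarded_chat("What happened at Tiananmen Square in 1989?", fake_model))
    print(guarded_chat("Explain photosynthesis.", fake_model))
```

Even this toy example illustrates why such filtering is visible to end users: the deflection happens before generation, so sensitive prompts receive the same scripted reply regardless of how the model itself would have answered.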
[…]
While the Chinese case drew global scrutiny because of the CCP’s well-known involvement in the internet and digital technologies, it would be a mistake to assume that information bias in chatbots is unique to China or other non-democracies. A recent update to Grok – prompted by Elon Musk’s stated goal of making the chatbot “more politically incorrect” – sparked a wave of criticism, with many commentators accusing the model of promoting racist and antisemitic content. Meanwhile, Google’s chatbot, Gemini, faced backlash for generating images of the US Founding Fathers as Black men, widely seen as a result of the company’s overcorrection in its diversity and representation policy. In this sense, these models, too, are biased. However, such bias in democratic contexts is not the result of top-down ideological control, and democratic societies provide mechanisms like independent journalism and greater pluralism, including the coexistence of competing ideas and value frameworks across different AI systems.
[…]
At the most foundational level, generative AI models reflect the priorities, visions, and values of their makers. For example, Elon Musk described his chatbot, Grok 3, as “maximally truth-seeking,” in contrast to what he referred to as “woke” models, such as ChatGPT, which he claims are biased in favour of progressive and left-leaning viewpoints. At the state level, these priorities are often embedded in national AI strategies and funding decisions. Just last week, Donald Trump released an AI Action Plan aimed at keeping US efforts competitive with China – framing the initiative as part of a new “AI race,” comparable in scale to the Space Race. Days later, China introduced its own Action Plan on Global Governance of Artificial Intelligence, which emphasised international cooperation on technology development and regulation, and pledged to support AI adoption in developing countries, particularly across the Global South.
[…]
Conclusion
Focusing narrowly on output censorship misses the forest for the trees. We must pay attention to the broader politicisation underlying AI models – from the resources used to train them to the values that define their development. In a system where principles such as accountability, pluralism, and critical reflection are tightly constrained, it follows that a model built within it avoids sensitive topics and mirrors official narratives. DeepSeek exemplifies how language models internalise and reproduce the political logic of the systems that produce them. Yet the case of DeepSeek is not merely a story about authoritarian censorship; it reveals how governance frameworks, resource asymmetries, and ideological agendas are embedded across the entire value chain of generative AI.
[…]
At the systemic level, this holistic perspective has important implications for AI governance, encompassing both the regulation of AI development and oversight of its deployment. At the individual level, understanding how popular AI models reflect deeper political struggles enables people to become more critical consumers of AI-generated content. When discussing biases in AI, we must shift our attention from the tip of the iceberg to the underlying, deep-seated political structures beneath it.