
Chinese narratives around Anthropic highlight contradictions for the US


Bottom lines up front

TAIPEI—The dispute between the US artificial intelligence (AI) company Anthropic and the Department of Defense has garnered much attention in the Western press in the past few weeks. It has also been the subject of lively commentary in the People’s Republic of China (PRC). For one, there is no shortage of schadenfreude being directed toward Anthropic in PRC outlets: The company has been vocal in highlighting China’s abuses of its technology and has restricted Chinese firms from using its models on the grounds of preventing Chinese entities from advancing capabilities that might threaten US national security.

Given this, Chinese outlets noted with glee that Anthropic, which “has long been one of Silicon Valley’s most vocal proponents of peddling the ‘China AI threat narrative’ to Washington,” later faced US government restrictions on national security grounds. One Chinese outlet argued that this revealed the “chaos at the heart of US tech governance.” Perhaps the most uncomfortable PRC media critique of the Pentagon’s move against Anthropic is one that has long been lodged at PRC-based companies: that the trustworthiness of US AI systems is undermined when the government can compel access to them without restraint.

An analysis of Chinese articles across social media, as well as official and semi-official media, reveals several key themes that PRC observers of the US tech landscape have drawn from this episode.


First, a throughline in many of the PRC sources surveyed is that the conflict between Anthropic and the Department of Defense has laid bare some of the fundamental bargains that US AI companies have made as they have sought to strike a delicate balance: They seek to position themselves and their technologies as core to US national security while also trying to uphold high ethical standards with regard to the development and deployment of AI. In the view of many commentators, US policy has come to increasingly frame AI as a strategic national security capability. As they seek advantageous market position, favorable regulatory policy, and government partnerships, tech companies have argued that there is a need to protect and develop US AI capabilities against Chinese encroachment. As Chinese academician Gao Lingyun put it, the episode shows that “so-called ‘national security’ has become a political tool aimed at making enterprises serve its own interests.”

Much of the Chinese commentary on the Anthropic dispute aims to highlight the consequences of this framing. In the view of several commentators, US technology firms such as Anthropic promote national security narratives to demonstrate their strategic importance; however, those same narratives in turn strengthen the state’s claim to control the technology. These commentators argue that Anthropic in particular has embraced narratives that have contributed to the securitization of AI. For example, Anthropic cofounder Dario Amodei once said that selling high-end chips to China would be like “selling nuclear weapons to North Korea.”

In pushing these national security narratives, these analysts claim, companies such as Anthropic are now victims of their own success, as they are facing demands for full military access to their technologies. That a US firm is now facing a supply-chain risk designation—a designation previously applied only to firms located in countries considered foreign adversaries—illustrates for many analysts in China the fundamental truth that as national security categories expand, governments will seek to assert greater sovereignty over advanced technologies. As one commentator put it, the dispute shows that the US government is “redefining the boundaries between technology and power within its AI national security framework.” Another commentator similarly argues that the dispute “strips away the veil of so-called ‘technology neutrality,’” showing that as AI capabilities grow, governments will increasingly deploy state power to integrate these systems into military operations.

More broadly, Chinese commentary examined the growing structural tensions between state power, corporate ethics, and AI militarization. Many commentators argued that the incident reveals a fundamental incompatibility between designing AI to constrain its capacity to harm humans—as in Anthropic’s “Constitution” for its large language model Claude—and claiming that developing the same technology for military applications is a determining factor in the “race for AI dominance.” In modern warfare, they argue, AI has become essential to intelligence analysis, targeting, and decision cycles, making its development and deployment a matter of strategic necessity, with corporate safeguards subsumed under the will of the state. In other words, when technology enters “efficiency-driven state machinery,” corporate restrictions become unsustainable. Companies may choose whether to participate in defense programs, these commentators argue, but they cannot dictate how militaries employ advanced technologies.

Some commentators pointed to Anthropic’s February announcement of a change in its Responsible Scaling Policy, in which the company would no longer pause training on new models whenever capabilities reached predefined danger thresholds, as evidence that in a battle between company ethics and state priorities, the latter always wins. Firms such as OpenAI and Anthropic, which once shaped global digital platforms and had broad leeway to operate as they saw fit, now face increasing pressure to align with state security priorities or face penalties.

In a bit of irony coming from PRC commentators, several analysts argued that this securitized language allows governments to redefine risks and obligations depending on their own strategic priorities. According to researcher Gao Lingyun, when national security definitions become “arbitrarily defined,” they lose moral authority as policy justifications. This argument mirrors criticisms that the United States has long directed at Chinese technology firms. US policymakers frequently warn that Chinese companies are compelled to assist PRC government authorities, including military and intelligence services, under existing legal frameworks such as the National Intelligence Law and the Data Security Law. PRC analysts have turned this criticism back on the United States, asking how much trust can be placed in AI technologies if governments possess the legal authority to compel access to them (and to US citizen data to enable surveillance, if Anthropic’s accounting is accurate).

Ultimately, PRC commentary on the Anthropic-Pentagon dispute reveals how the Chinese political apparatus is seeking to frame the incident internally for Chinese audiences. It behooves the PRC to present the US AI governance ecosystem as chaotic and to heighten the perception of risk around US military use of AI. However, PRC commentary does highlight a real contradiction in US AI governance. If US AI firms promote trust, safety, and independence as core advantages over their competitors, how durable are those claims in the long run when national security authorities intervene? And how does this impact the competitiveness and trustworthiness of US systems writ large? As AI systems are becoming increasingly central to military and economic competition, the answer to these questions will shape global perceptions of technological trust and jurisdictional risk beyond this dispute and the context of US-China competition.


Source:

www.atlanticcouncil.org
