Hegseth warns Anthropic: the US military must be able to use its AI technology at will


US Secretary of Defense Pete Hegseth has given Anthropic's CEO until Friday to authorize unrestricted use of the company's artificial intelligence (AI) technology by the US military or risk losing its government contract, according to a source familiar with the matter.

Anthropic, the creator of the AI chatbot Claude, is the latest company in the sector to decline to provide its technology without restrictions to the US military's new internal network. Its CEO, Dario Amodei, has repeatedly expressed ethical concerns about unchecked government use of AI, particularly the dangers posed by fully autonomous armed drones and by AI-assisted mass surveillance that could track dissidents.

Defense officials have warned that they could designate Anthropic a supply-chain risk or invoke the Defense Production Act, which would give the military broader authority to use the company's products without its approval.

The information, first reported by Axios, fuels the debate on the role of AI in national security and concerns about its potential use in critical situations involving lethal force, sensitive information, or government surveillance. It also comes as Mr. Hegseth has pledged to eliminate what he calls “woke culture” within the armed forces.

“A powerful AI, capable of analyzing billions of conversations from millions of people, could assess public opinion, detect emerging centers of dissent, and nip them in the bud,” Mr. Amodei wrote in an article last month.

According to a source familiar with the matter, the meeting was cordial, but Mr. Amodei remained firm on two red lines that Anthropic will not cross: fully autonomous military targeting operations and domestic surveillance of American citizens.

The Pentagon objects to the ethical restrictions imposed by Anthropic because military operations require tools without built-in limitations, a senior Pentagon official explained. The official said the Pentagon issues only lawful orders and stressed that the lawful use of Anthropic's tools is the military's responsibility.

Last year, the Pentagon awarded defense contracts to four AI companies: Anthropic, Google, OpenAI, and Elon Musk's xAI, with each contract worth up to $200 million.

Anthropic was the first AI company to receive authorization to operate on classified military networks, where it collaborates with partners such as Palantir. According to a senior Pentagon official, Elon Musk's xAI, which operates the chatbot Grok, is also seeking approval for use in classified contexts.

The official indicated that other AI companies were “on the verge” of reaching this goal. SpaceX, Mr. Musk's space company, which recently merged with xAI, did not immediately respond to a request for comment on Tuesday.

In a speech at SpaceX in Texas in January, Mr. Hegseth said he rejects any AI model “that would prevent us from waging wars.”

He explained that his vision of military AI systems involves functioning “without ideological constraints limiting legitimate military applications,” adding that the Pentagon’s AI “will not be woke.”

Anthropic presents itself as more focused on safety.

In a statement released after Tuesday’s meeting, Anthropic announced that it had “engaged in good-faith discussions regarding its usage policy to ensure that Anthropic can continue to support the government’s national security mission reliably and responsibly based on the capabilities of its models.”

The standoff with the Pentagon is testing those intentions, according to Owen Daniels, deputy director of analysis and researcher at Georgetown University's Center for Security and Emerging Technology.

“Anthropic's competitors, including Meta, Google, and xAI, have expressed their willingness to comply with the department's directive on model use for all legal purposes,” Mr. Daniels said. He noted that the company's bargaining power is limited and that it risks losing influence over the department's adoption of AI.

In the context of the AI boom following the release of ChatGPT, Anthropic closely aligned itself with President Joe Biden’s Democratic administration by proposing to subject its AI systems to third-party review to guard against national security risks.

Mr. Amodei, the CEO, warned of potentially catastrophic dangers of AI while rejecting the label of “pessimist” regarding it. In an article published in January, he stated that “we are considerably closer to a real danger in 2026 than in 2023,” but that these risks must be addressed in a “realistic and pragmatic” manner.

Anthropic has clashed with the Trump administration.

Anthropic strongly criticized Nvidia, the chip manufacturer, for considering sales of artificial intelligence chips to China after Donald Trump proposed relaxing export controls. The AI company nonetheless remains a close partner of Nvidia.

The Trump administration and Anthropic have also clashed over a lobbying campaign concerning AI regulation in US states.

David Sacks, Mr. Trump's chief AI advisor, accused Anthropic of “implementing a sophisticated strategy of regulatory capture based on fear.” Mr. Sacks was responding on X to Jack Clark, a co-founder of Anthropic, who had discussed his attempt to reconcile technological optimism with a “justified fear” of the steady progression toward more powerful AI systems.

Anthropic hired several former Biden administration officials shortly after Mr. Trump's return to the White House, while striving to promote a bipartisan approach. The company recently added Chris Liddell, an official in Mr. Trump's first administration, to its board of directors.

The Pentagon's rapid adoption of AI underscores the need for greater congressional oversight or regulation of the technology, especially if it is used to monitor American citizens, said Amos Toh, senior advisor for the National Security and Liberty program at the Brennan Center for Justice at New York University.

“Legislation is not keeping pace with the evolution of this technology,” Mr. Toh wrote in a post on Bluesky. “But that doesn’t mean the Department of Defense has a blank check.”