As the war against Iran intensifies, another battle is being fought behind closed doors. Over the course of a few months, the artificial intelligence giant OpenAI has methodically infiltrated the gears of the American security apparatus, recruiting former Pentagon, National Security Council, and Congressional officials left and right. This strategy paid off: while Anthropic was abruptly sidelined for refusing certain military uses of its models, OpenAI seized the opportunity and secured strategic contracts. A reshaping of the military-technological complex is taking shape, where the line between civilian innovation and war machines is becoming increasingly blurred.
Long before public concerns about the use of artificial intelligence in war zones – notably its role in civilian casualties during the Iranian conflict – OpenAI was quietly pursuing a strategy to embed itself within the American national security apparatus, with a focus on the prospects offered by algorithmic warfare.
This strategy led to the recruitment of a dozen figures from positions of power, both Republicans and Democrats, with decades of experience in national security institutions. It also included a partnership with a major military contractor close to Donald Trump’s circle.
To make its colossal investments profitable, OpenAI is pulling a well-known lever of influence: the “revolving door” between the private and public sectors.
The effects of this strategy were not long in coming. Last month, one of these recruitments reportedly enabled OpenAI to secure a $200 million defense contract in just a few hours, at a time when the White House was sidelining the competing company Anthropic, which was reluctant to allow its technologies to be used for surveillance or automated weaponry.
Unprecedented Hiring Wave
Since the beginning of 2024, OpenAI has been ramping up its hiring of figures from across the political spectrum, drawn from Capitol Hill, the National Security Council, the Department of Defense, and other parts of the security establishment.
In February 2024, a month after the ban on military uses was lifted, the company recruited Katrina Mulligan as head of security partnerships. Her mission was to “structure agreements with the Department of Defense and clients in this sector.” She previously held positions in the Biden administration, working closely with high-ranking Pentagon officials.
She was part of the team advising the Assistant Secretary of Defense for Special Operations and Low-Intensity Conflict – a civilian post central to the American military architecture, as it oversees the United States Special Operations Command.
According to journalist Seth Harp, Mulligan’s superior in this position held “the highest position within the clandestine military structure established after September 11th to carry out assassinations and kidnappings around the world.” He believes these connections could be valuable if OpenAI seeks access to funding from the “black budget” of special operations – the classified funds allocated to the most sensitive activities. “There is a lot of money to be made in it,” he sums up.
Four months later, in June 2024, the company announced a new high-profile recruit: General Paul Nakasone joined its board of directors. Former director of the National Security Agency (NSA) and commander of US Cyber Command from 2018 to 2024, this four-star officer held two of the most strategic positions in the American security apparatus. OpenAI says his presence will “inform its critical security decisions.”
The dynamic continued throughout the summer. In August 2024, the company hired Morgan Dwyer and Benjamin Schwartz, both involved in the implementation of the CHIPS and Science Act of 2022 under the Biden administration – a program of multi-billion-dollar public grants to support semiconductor production in the US, including for military applications.
Officially recruited to lead the development of data centers and infrastructure, Morgan Dwyer and Benjamin Schwartz are nevertheless typical products of the American security apparatus. Dwyer, in particular, was the senior civilian official overseeing military research and cutting-edge technologies, including artificial intelligence. Schwartz, for his part, long advised the Office of the Secretary of Defense under the Obama administration, particularly on terrorism and South Asia policy.
In the same month of August 2024, OpenAI hired Sasha Baker to head its national security policy. A former adviser to Senator Elizabeth Warren who also served on the staff of Defense Secretary Ashton Carter, she held high-ranking positions in both the National Security Council and the Department of Defense under Joe Biden.
In the following months, two more members of the Biden administration joined the company: a former deputy spokesperson for the National Security Council and a former staffer from the Pentagon’s Indo-Pacific security office.
But this recruitment policy is not limited to the Democratic side. OpenAI is also working to consolidate its Republican connections.
In April 2024, the company hired Matt Rimkunas, former legislative director and deputy chief of staff to Senator Lindsey Graham, to lead its federal affairs. Shortly after, Meghan Dorn, also from Graham’s team, joined the company. Both are registered as lobbyists for OpenAI.
While their experience in the executive branch is limited, their proximity to one of Washington’s most interventionist senators – who is also a key figure in budget decisions – has undoubtedly helped strengthen the company’s influence.
Top-Level Support
In June 2025, OpenAI reached a new milestone by announcing one of its first major defense industry projects. The company revealed a partnership with Anduril, a defense technology company supported by Peter Thiel, to “strengthen defense systems protecting American and allied forces against drone attacks.”
Anduril has greatly benefited from its connections with Donald Trump’s entourage. Its founder, Palmer Luckey, is notably linked to former Attorney General nominee Matt Gaetz. Beyond the financial benefits, the partnership offers Sam Altman an opportunity to align himself with the conservative tech ecosystem that has been gaining ground in Silicon Valley since Trump’s return to the White House.
OpenAI’s integration into Washington’s security apparatus continues to accelerate in 2025. In June, the company secured a $200 million contract with the Department of Defense for the provision of AI capabilities “for combat operations and internal functions.”
The following month, the company continued its targeted recruitment efforts.
In July 2025, Joseph Larson was appointed “head of government affairs.” A former Anduril executive and former deputy head of digital and artificial intelligence issues at the Pentagon, Larson is praised for his role in promoting “responsible” AI for national security. In 2026, he received an industry award in the government-contracting sector. His networks quickly proved decisive for the company.
OpenAI also welcomed into its circle of advisers former California Senator Laphonza Butler, a member of the Senate’s homeland security committee. At the same time, the “revolving door” is turning in the other direction: the Pentagon recruited two OpenAI executives – Kevin Weil and Bob McGrew – as reserve officers in an army innovation unit, putting their technological expertise at the service of the military institution. Neither is required to recuse himself from contract discussions between OpenAI and the department.
In the following months, the company further strengthened its teams with Connie LaRossa, a former legislative relations manager at the Department of Homeland Security and the Pentagon. She also worked for Google, as well as for a weapons-manufacturing subsidiary of General Dynamics.
Simultaneously, OpenAI recruited a former Justice Department advisor on national security issues and a former communications official from the Pentagon’s research and engineering office.
In the Right Place at the Right Time
Despite this all-out offensive, OpenAI’s technology was not the Pentagon’s first choice when the war against Iran began: a major $200 million contract was initially awarded to Anthropic.
However, the situation quickly changed. Faced with Anthropic’s reluctance to see its models used for military purposes without sufficient guarantees, the Trump administration turned away from the company. OpenAI seized the opportunity and secured an equivalent contract.
The agreement was reportedly reached under the guidance of Joseph Larson, whom the Pentagon turned to when negotiations with Anthropic stalled.
The use of artificial intelligence in the war against Iran has also sparked controversy in the United States. The Department of Defense has notably refused to say whether these technologies were used in the February 28 bombing of a girls’ primary school in Iran, which killed 175 people – the majority of them children.
Originally published by our partner Jacobin under the title “OpenAI Is Bleeding Cash. Its Solution? Military Contracts.”