AI and the U.S. Military: A Delicate Balance Between Efficiency and Ethics

Illustration of a military officer interacting with an AI interface, with high-tech displays and military assets in the background, symbolizing the collaboration between AI and defense systems.

Top artificial intelligence developers, including OpenAI and Anthropic, are carefully navigating the complex task of providing their technologies to the United States military. Their goal is to enhance the efficiency of the Pentagon without enabling AI to take lethal actions independently.

Although these AI tools are not currently used as weapons, they are proving to be invaluable in helping the Department of Defense (DoD) gain a “significant advantage” in identifying, tracking, and evaluating threats, as Dr. Radha Plumb, the Pentagon’s Chief Digital and AI Officer, explained in a recent interview with TechCrunch.

“We’re obviously expanding our ability to accelerate the execution of the kill chain so that commanders can act swiftly to protect our forces,” Plumb said. The “kill chain” refers to the military’s process of detecting, tracking, and neutralizing threats, which involves an intricate system of sensors, platforms, and weapons. According to Plumb, generative AI is proving useful during the early stages of planning and strategizing within the kill chain.

The collaboration between the Pentagon and AI companies is a relatively new development. In 2024, OpenAI, Anthropic, and Meta relaxed their usage policies, allowing their AI systems to be utilized by U.S. intelligence and defense agencies. However, they continue to impose restrictions to prevent their AI technologies from being used to harm humans.

“We’ve been very clear on what we will and won’t allow their technologies to be used for,” Plumb said, emphasizing the Pentagon’s stance on responsible use.

This change has triggered a flurry of partnerships between AI developers and defense contractors. In November, for example, Meta teamed up with Lockheed Martin and Booz Allen to bring its Llama AI models to defense applications. That same month, Anthropic partnered with Palantir, and OpenAI followed suit with an Anduril deal in December. Meanwhile, Cohere has quietly been working with Palantir to deploy its models.

As generative AI continues to demonstrate its value to the Pentagon, the pressure may grow on Silicon Valley to ease its AI usage restrictions and open the door to further military applications.

“Generative AI is really useful for playing through different scenarios,” Plumb said. “It helps commanders think creatively about various response options and potential trade-offs when dealing with multiple threats.”

However, it remains unclear which AI technologies are currently being used by the Pentagon, as employing generative AI in the kill chain (even during the planning phase) could violate the usage policies of leading model developers. For instance, Anthropic prohibits the use of its models for systems that are intended to cause harm or result in human fatalities.

Life, Death, and AI Weapons

The debate over AI’s role in military decision-making has heated up recently, especially regarding whether AI should be allowed to make life-or-death decisions. Some argue that the U.S. military has already deployed weapons capable of operating autonomously.

Anduril’s CEO, Palmer Luckey, pointed out on X that the DoD has a long history of using autonomous weapons systems, such as the Close-In Weapon System (CIWS) turret, which can operate without human intervention.

Despite this, when TechCrunch inquired whether the Pentagon uses fully autonomous weapons—systems with no human involvement—Plumb firmly rejected the notion.

“No, is the short answer,” she said. “For reasons of both reliability and ethics, we will always have humans involved in decisions about the use of force, including in our weapon systems.”

The term “autonomy” remains contentious, sparking ongoing discussions in the tech industry about the point at which automated systems—like AI agents, self-driving cars, or autonomous weapons—can be considered truly independent.

Plumb argued that the idea of automated systems independently making life-or-death choices is overly simplistic and misrepresents the Pentagon’s approach. In reality, she said, human-machine collaboration in the military is a nuanced, multi-step process, with senior leaders involved in the key decisions at every stage.

“People often think of it like robots somewhere, and the ‘gonculator’ spits out a sheet of paper, and humans just check a box,” Plumb said. “That’s not how human-machine collaboration works, and it’s not an effective way to use AI systems.”

AI Safety in Military Partnerships

Silicon Valley’s involvement with the military hasn’t always been smooth, and protests have arisen in the past. In 2024, for instance, dozens of Amazon and Google employees were fired or arrested after protesting their companies’ military cloud contracts with Israel, deals signed under the codename “Project Nimbus.”

In contrast, the AI community’s reaction to these new Pentagon partnerships has been relatively muted. Some AI researchers, such as Anthropic’s Evan Hubinger, argue that working directly with the military is inevitable, and even necessary to ensure that AI technologies are used responsibly.

“If you take catastrophic risks from AI seriously, the U.S. government is a crucial player to engage with,” Hubinger wrote in a November post on the LessWrong forum. “Simply trying to block the U.S. government from using AI isn’t a viable approach. It’s not just about focusing on catastrophic risks but also preventing any misuse of your models.”
