The new ‘land grab’ for AI companies, from Meta to OpenAI, is military contracts
Silicon Valley AI companies have a new BFF: the U.S. Department of Defense.
The leading companies developing generative AI technology have spun up, deepened, or started to pursue relationships with the military in recent months, in some cases even revising or making exceptions to internal policies to remove roadblocks and restrictions on defense work.
Several agencies within the DoD, from the Air Force to various intelligence groups, are actively testing out use cases for AI models and tools from Meta, Google, OpenAI, Anthropic, and Mistral, along with tech from startups like Gladstone AI and Scale AI, several people with knowledge of the testing told Fortune.
It’s a remarkable turn of events for the internet companies, which until very recently treated defense work as if it were taboo, if not outright verboten. But with the cost to develop and run generative AI services already totaling hundreds of billions of dollars, and showing no signs of slowing, AI companies are feeling the pressure to show some returns on those massive investments. The DoD, with its essentially unlimited budget and long-standing interest in cutting-edge technology, suddenly doesn’t look so bad.
Although landing a contract with Defense can be tricky, with layers of certifications to receive and strict compliance standards to follow, “the rewards are significant” and the money can come in for years, said Erica Brescia, a managing partner at Redpoint Ventures who focuses on AI investing.
“DoD contracts provide substantial annual contract values, or ACVs, and create long-term opportunities for growth and market defensibility,” Brescia said.
Brescia added that going after Defense work has recently become more socially acceptable in tech circles. Not only are company leaders looking at the hundreds of millions of dollars in contracts that defense-focused startups like Palantir and Anduril are raking in, but the “changing political landscape” has made “pursuing defense as a primary market segment an increasingly attractive option for companies prepared to navigate longer sales cycles and handle complex deployments.”
An embrace of military work may indeed suit the political moment well, with a business-friendly Trump administration set to take office in January, and a cohort of hawkish Silicon Valley insiders, led by “First Buddy” Elon Musk, in the president-elect’s inner circle. Musk’s mandate in his official role as co-head of the new Department of Government Efficiency is to sharply curtail spending. But few expect the Pentagon’s budget to see serious cutbacks, particularly when it comes to AI at a time when the U.S. and China are locked in a battle for AI supremacy.
For now, much of the military’s work with generative AI appears to be small-scale projects and tests, but the potential for generative AI tech to become a fundamental aspect of computing in the future means the relationship between Silicon Valley and the Pentagon could be huge.
Defense uses of AI do not necessarily entail drone warfare or blowing things up. A lot of AI-specific work within the DoD is the more mundane activity that any office would gladly hand off to a capable technology. Data labeling, collection, and sorting are common uses of AI within the department, as is the use of the ChatGPT and Claude chatbots that most people can access online, but which require extra security when used by the DoD. Large language models could also prove handy for analyzing and searching classified information, aiding in government cybersecurity work, and providing better computer vision and autonomy for things like robotic tools, drones, and tanks.
Some tech companies do attempt to specifically avoid being involved in DoD projects that could be utilized in “the kill chain,” a military term referring to the structure of an attack on an enemy, a former official within the DoD told Fortune regarding companies that win procurement contracts. Such concerns sometimes dissipate, however, as millions, or billions, of dollars become available. “Once you get in, you want to expand,” the person added.
A changing set of rules
Some tech companies, like Palantir and Anduril, have for years made Defense uses and contracts the backbone of their entire business.
Within Silicon Valley’s established internet companies and some of its younger AI startups, however, military work was eschewed as the firms sought to recruit and retain left-leaning engineering talent. When Google acquired DeepMind in 2014, it reportedly committed never to use the startup’s technology for military purposes. And in 2018 Alphabet CEO Sundar Pichai faced an internal backlash over Google’s participation in Project Maven, a Pentagon drone warfare effort. While Google insisted its technology was used for only “non-offensive purposes” such as analyzing drone video footage, the employee outcry was loud enough that Pichai canceled a vacation to reassure staff and eventually promised Google would not develop its AI for weapons.
Google’s “AI principles” now stipulate that it will “not pursue… weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people,” nor for “surveillance violating internationally accepted norms.” But the policy leaves plenty of wiggle room and the company has explicitly said it will not swear off working with the military entirely.
The story is similar at other big AI players. Meta initially prohibited its Llama model from being used in military work, as did OpenAI, while Anthropic initially built its Claude model to be “harmless.” Now, all three have announced that such work with their models is fine, and they’re actively pursuing such uses. Sam Altman, who co-founded OpenAI on the principle of developing AI to “benefit humanity as a whole,” and who once said there were things he would “never do with the Department of Defense,” has since removed any commitment to such restrictions from the company’s usage policy.
One venture capitalist focused on investing in AI companies pointed to VC firm Andreessen Horowitz’s “American Dynamism” essay two years ago as a moment when avoidance of defense contracting started to shift. The essay explicitly said tech companies working on defense were working in support of America’s national interest.
“Executives started to think, ‘Oh, ok, defending America, working with the military, is good actually,’” the VC said.
The widespread post-pandemic layoffs at tech companies have also had a chilling effect on employee protests, giving tech employers more freedom to pursue military business.
The DoD has already paid out close to $1 billion in official contracts to AI companies in the last two years, according to a Fortune analysis. While details of such contracts are vague, they have been awarded to companies like Morsecorp, which is focused on autonomous vehicle technology, and a subsidiary of ASGN, a management and consulting company, to develop new AI prototypes.
Not all such contracts are made public. But any government procurement contract awarded to a major AI company would likely be worth tens of millions to hundreds of millions, if not billions, of dollars in revenue for those companies — and for their largest backers.
OpenAI’s largest investor is Microsoft, which recently said its Azure cloud service had been approved for DoD agencies to use OpenAI’s AI models for information at lower levels of security clearance – something that took years of investment in specialized infrastructure to achieve. Similarly, Anthropic’s largest backer is Amazon. Amazon Web Services is perhaps the single largest cloud provider to the DoD and has tens of billions of dollars worth of government contracts. For both companies, being able to add new AI services and tools to DoD offerings could prove valuable. Same goes for a company like Google, which also has secured valuable government contracts, and its Gemini AI model.
“They’re basically building the airplane while they’re flying it, so it’s a massive land grab,” one AI executive told Fortune, referring to more tech companies suddenly eager to have their AI tools and models in the hands of the DoD.
A “critical” technology for the DoD
The DoD counts AI among its 14 “critical technology areas,” as it holds “tremendous promise” and is “imperative to dominate future conflicts.”
About a year ago, the DoD officially created the Office of Strategic Capital, a new federal credit program in partnership with the Small Business Administration, to ensure that critical technologies like AI receive funding through direct loans. For fiscal 2024, the OSC made $984 million available that it intended to hand out to 10 companies focused on things like autonomous robotics and microelectronics fabrication, which typically includes AI chip fabrication. The DoD is investing another roughly $700 million in chip fabrication and the buildout of domestic semiconductor manufacturing, which is critical to the creation of AI chips.
Despite billions in investment within Defense and no signs of that slowing down, the AI executive admitted that most AI products today are simply “not that useful yet,” either for Defense or the public at large. But having them applied at scale in a government or defense environment could make them more useful, more quickly. “The military effectively created the Internet, too,” the executive noted. ARPANET, a key technological foundation of the modern Internet, was built within the DoD, as were now common technologies like radar and GPS systems.
Although a department like Defense wants useful products, it also famously sees its budget increase year after year, hitting just below $1 trillion in 2024. About half of that budget is awarded to companies that contract with the department.
“Honestly, yeah, they really love to blow money,” the executive said.
Additional reporting by Jeremy Kahn and Sharon Goldman.