
Meta’s military push is as much about the battle for open-source AI as it is about actual battles

Hello and welcome to Eye on AI! In this newsletter…Intel’s Gaudi disappointment…Prime Video gets AI…OpenAI and Anthropic hiring news…Sleep pays…and nuclear setbacks.

Meta wants to get the U.S. government using its AI—even the military.

The company said yesterday it had assembled a smorgasbord of partners for this effort, including consultancies like Accenture and Deloitte, cloud providers like Microsoft and Oracle, and defense contractors like Lockheed Martin and Palantir.

Policy chief Nick Clegg wrote in a blog post that Oracle was tweaking Meta’s Llama AI model to “synthesize aircraft maintenance documents so technicians can more quickly and accurately diagnose problems,” while Lockheed Martin is using it for code generation and data analysis. Scale AI, a defense contractor that happens to count Meta among its investors, is “fine-tuning Llama to support specific national security team missions, such as planning operations and identifying adversaries’ vulnerabilities.”

“As an American company, and one that owes its success in no small part to the entrepreneurial spirit and democratic values the United States upholds, Meta wants to play its part to support the safety, security, and economic prosperity of America—and of its closest allies too,” trilled the former British deputy prime minister.

But Clegg’s post wasn’t just about positioning Meta AI as the patriot’s choice. Perhaps more than anything else, it was an attempt to frame Meta’s version of open-source AI as the correct and desirable one.

Meta has always pitched Llama as “open source,” in the sense that it gives away not just access to the model but the model’s weights themselves—the learned parameters that let anyone run, modify, and fine-tune it—along with various other safety tools and resources.
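In practice, that openness means anyone who accepts Meta’s license terms can download the weights and adapt the model to their own data. As a rough illustration—assuming access to a Llama checkpoint via Hugging Face and the widely used transformers and peft libraries, which are one common route rather than anything Meta mandates—a lightweight fine-tune looks something like this:

```python
# A minimal sketch of what "open weights" enables: loading a Llama checkpoint
# and attaching small trainable LoRA adapters for fine-tuning. The model ID is
# a gated Hugging Face repo; you must accept Meta's license to download it.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_id = "meta-llama/Llama-2-13b-hf"  # gated: requires accepting Meta's license
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# LoRA trains a few million adapter weights instead of all 13 billion
# parameters—the usual way downstream teams customize an open-weights model.
lora = LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"])
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # shows how little of the model is trainable
```

Adapter-style customization of this kind is plausibly what partners like Oracle and Scale AI mean when they talk about “tweaking” and “fine-tuning” Llama for specific missions—though the announcement doesn’t specify their methods.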

Many in the traditional open-source software community have disagreed with Meta’s “open source” framing, mainly for two reasons: the company doesn’t disclose the training data it uses to create its Llama models, and it places restrictions on Llama’s use. Most pertinently in the context of Monday’s announcement, Llama’s license says it’s not supposed to be used in military applications.

The Open Source Initiative, which came up with the term “open source” and continues to act as its steward, recently issued a definition of open-source AI that clearly doesn’t apply to Llama for these reasons. Ditto the Linux Foundation, whose equally fresh definition isn’t exactly the same as the OSI’s, but still plainly demands information about training data, and the ability for anyone at all to reuse and improve the model.

Which is probably why Clegg’s post (which invokes “open source” 13 times in its body) proposes that Llama’s U.S. national security deployments “will not only support the prosperity and security of the United States, they will also help establish U.S. open source standards in the global race for AI leadership.” Per Clegg, a “global open source standard for AI models” is coming—think Android but for AI—and it “will form the foundation for AI development around the world and become embedded in technology, infrastructure and manufacturing, and global finance and e-commerce.”

If the U.S. drops the ball, Clegg suggests, China’s take on open-source AI will become that global standard.

However, the timing of this lobbying extravaganza is slightly awkward, as it comes just a few days after Reuters reported that Chinese military-linked researchers have used a year-old version of Llama as the basis for ChatBIT, a tool for processing intelligence and aiding operational decision-making. That is roughly what Meta is now letting military contractors do with Llama in the U.S.—except the Chinese researchers did it without Meta’s permission.

There are plenty of reasons to be skeptical about how big an impact Llama’s sinicization will actually have. Given the hectic pace of AI development, the version of Llama in question (a 13-billion-parameter model) is far from cutting-edge. Reuters says ChatBIT “was found to outperform some other AI models that were roughly 90% as capable as OpenAI’s powerful ChatGPT-4,” but it’s not clear what “capable” means here. It’s not even clear whether ChatBIT is actually being used.

“In the global competition on AI, the alleged role of a single, and outdated, version of an American open-source model is irrelevant when we know China is already investing more than $1 trillion to surpass the U.S. technologically, and Chinese tech companies are releasing their own open AI models as fast—or faster—than companies in the U.S.,” Meta said in a statement responding to the Reuters piece.

Not everyone is so convinced that the Llama-ChatBIT connection is irrelevant. The U.S. House Select Committee on the Chinese Communist Party made clear on X that it has taken note of the story. The chair of the House Committee on Foreign Affairs, Rep. Michael McCaul (R-TX), also tweeted that the CCP “exploiting U.S. AI applications like Meta’s Llama for military use” demonstrated the need for export controls (in the form of the ENFORCE Act bill) to “keep American AI out of China’s hands.”

Meta’s Monday announcement isn’t likely to have been a reaction to this episode—that would be a heck of a lot of partnerships to assemble in a couple of days—but it is also clearly motivated in part by the sort of reaction that followed the Reuters story.

There are live battles not only for the definition of “open-source AI,” but also for the concept’s survival in the face of the U.S.-China geopolitical struggle. And these two battles are connected. As the Linux Foundation explained in a 2021 whitepaper, open-source encryption software can fall foul of U.S. export restrictions—unless it’s made “publicly available without restrictions on its further dissemination.”

Meta certainly wouldn’t love to see the same logic applied to AI—but, in this case, it may be far more difficult to convince the U.S. that a truly open “open source” AI standard is in its national security interest.

More news below.

David Meyer
david.meyer@fortune.com
@superglaze

Request your invitation for the Fortune Global Forum in New York City on Nov. 11-12. Speakers include Honeywell CEO Vimal Kapur and Lumen CEO Kate Johnson, who will be discussing AI’s impact on work and the workforce. Qualtrics CEO Zig Serafin and Eric Kutcher, McKinsey’s senior partner and North America chair, will discuss how businesses can build the data pipelines and infrastructure they need to compete in the age of AI.

AI IN THE NEWS

Intel’s Gaudi disappointment. Intel CEO Pat Gelsinger admitted last week that the company won’t hit its $500 million revenue target for its Gaudi AI chips this year. Gelsinger: “The overall uptake of Gaudi has been slower than we anticipated as adoption rates were impacted by the product transition from Gaudi 2 to Gaudi 3 and software ease of use.” Considering that Intel was telling Wall Street about a $2 billion deal pipeline for Gaudi at the start of this year, before it lowered its expectations to that $500 million figure, this does not reflect well on the struggling company.

Prime Video gets AI. Amazon is adding an AI-powered feature called X-Ray Recaps to its Prime Video streaming service. The idea is to help viewers remember what happened in previous seasons of the shows they’re watching—or specific episodes, or even fragments of episodes—with guardrails supposedly protecting against spoilers.

OpenAI and Anthropic hiring news. Caitlin Kalinowski, who previously led Meta’s augmented-reality glasses project, is joining OpenAI to lead its robotics and consumer hardware efforts, TechCrunch reports. OpenAI has also hired serial entrepreneur Gabor Cselle, one of the cofounders of the defunct Twitter/X rival Pebble, to work on some kind of secret project. Meanwhile, Alex Rodrigues, the former cofounder and CEO of self-driving truck developer Embark, is joining Anthropic. Rodrigues posted on X that he will be working as an AI alignment researcher alongside recent OpenAI refugees Jan Leike and John Schulman.

FORTUNE ON AI

ChatGPT releases a search engine, an opening salvo in a brewing war with Google for dominance of the AI-powered internet —by Paolo Confino

The leading LLMs have accessibility blind spots, says data from startup Evinced—by Allie Garfinkle

Amazon’s CEO dropped a big hint about how a new AI version of Alexa is going to compete with chatbots like ChatGPT—by Jason Del Rey

Countries seeking to gain an edge in AI should pay close attention to India’s whole-of-society approach—by Arun Subramaniyan (Commentary)

AI CALENDAR

Oct. 28-30: Voice & AI, Arlington, Va.

Nov. 19-22: Microsoft Ignite, Chicago

Dec. 2-6: AWS re:Invent, Las Vegas

Dec. 8-12: Neural Information Processing Systems (NeurIPS) 2024, Vancouver, British Columbia

Dec. 9-10: Fortune Brainstorm AI, San Francisco

EYE ON AI RESEARCH

Sleep pays. A team of Google cybersecurity analysts has been coordinating with DeepMind on an LLM-powered agent called Big Sleep, which they say has found its first real-world vulnerability: an exploitable bug in the ubiquitous SQLite database engine.

Fortunately, the flaw was only present in a developer branch of the open-source engine, so users weren’t affected—SQLite’s developers fixed it as soon as Google made them aware of it. “Finding vulnerabilities in software before it’s even released, means that there’s no scope for attackers to compete: the vulnerabilities are fixed before attackers even have a chance to use them,” wrote Google’s researchers.

They stressed that these were experimental results and Big Sleep probably wouldn’t be able to outperform a well-targeted automated software testing tool (i.e., a fuzzer) just yet. However, they suggested that their approach could one day result in “an asymmetric advantage for defenders.”
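Google hasn’t published Big Sleep’s internals in detail, but as best one can tell from its write-up, the broad pattern is flag-then-confirm: have a model propose suspicious spots in recent code changes, then validate each candidate with a concrete reproducer before reporting anything. A toy sketch of that loop—purely illustrative, with the model call and the reproducer stubbed out, and no relation to Big Sleep’s actual code:

```python
# Toy illustration of a flag-then-confirm bug-hunting loop. Both helper
# functions below are hypothetical placeholders, not real APIs.

def llm_flag_suspicious(diff: str) -> list[str]:
    """Stand-in for an LLM call that reads a code change and proposes
    candidate vulnerabilities in plain language."""
    return ["possible out-of-bounds read in the changed query-planner code"]

def confirmed_by_reproducer(finding: str) -> bool:
    """Stand-in for generating a test input (e.g. a crafted SQL query) and
    running it against a sanitizer-instrumented build of the target."""
    return True

def triage(diff: str) -> list[str]:
    # Only report findings that survive concrete confirmation—the step that
    # separates a bug-finding agent from a model that merely speculates.
    return [f for f in llm_flag_suspicious(diff) if confirmed_by_reproducer(f)]

if __name__ == "__main__":
    for finding in triage("<recent developer-branch diff>"):
        print("confirmed:", finding)
```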

BRAIN FOOD

Nuclear setbacks. The Financial Times reports that Meta had to call off plans to build an AI data center next to a nuclear power plant somewhere in the U.S.—details remain scarce—because rare bees were discovered on the site.

There’s currently a big push to power AI data centers with nuclear energy, because of its 24/7 reliability, and because Big Tech has to square the circle of satisfying AI’s enormous power requirements without blowing its decarbonization commitments. However, setbacks abound.

In plans that appear similar to Meta’s, Amazon earlier this year bought a data center that’s co-located with the Susquehanna nuclear plant in Pennsylvania. But regulators on Friday rejected the plant owner’s plan to give Amazon all the power it wants from the station’s reactors—up to 960 megawatts, versus the 300 megawatts already allowed—because doing so could lead to price rises for other customers and perhaps affect grid reliability.

