Tuesday, November 19, 2024

At Web Summit last week, no sign of an AI slowdown—even if some wouldn’t mind one

Hello and welcome to Eye on AI. In this edition…no sign of an AI slowdown at Web Summit; work on Amazon’s new Alexa plagued by further technical issues; a general-purpose robot model; trying to bend Trump’s ear on AI policy.

Last week, I was at Web Summit in Lisbon, where AI was everywhere. There was a strange disconnect, however, between the mood at the conference, where so many companies were touting AI-powered products and features, and the tenor of last week’s AI news, much of which focused on reports that the companies building foundation models were seeing diminishing returns from ever larger models, as well as rampant speculation in some quarters that the AI hype cycle was about to end.

I moderated a center stage panel discussion on whether the AI bubble is about to burst, and I heard two very different, but not diametrically opposed, takes. (You can check it out on YouTube.) Bhavin Shah, the CEO of Moveworks, which offers big companies an AI-powered service that automatically answers employees’ IT questions, argued, as you might expect, not only that the bubble is not about to burst, but that it isn’t even clear there is a bubble.

AI is not like tulip bulbs or crypto

Sure, Shah said, the valuations for a few tech companies might be too high. But AI itself was very different from something like crypto or the metaverse or the tulip mania of the 17th century. Here was a technology that was having real impact on how the world’s largest companies operate—and it was only just getting going. He said it was only now, two years after the launch of ChatGPT, that many companies were finding AI use cases that would create real value.

Rather than being concerned that AI progress might be plateauing, Shah argued that companies were still exploring all the possible, transformative use cases for the AI that already exists today, and that the technology’s transformative effects were not predicated on further progress in LLM capabilities. In fact, he said, there was far too much focus on what the underlying LLMs could do and not nearly enough on how to build systems and workflows around LLMs and other kinds of AI models that, taken together, could deliver significant return on investment (ROI) for businesses.

The notion some people had that simply throwing an LLM at a problem would magically produce ROI was always naïve, Shah argued. Instead, it was always going to take systems architecture and engineering to create a process in which AI could deliver value.

AI’s environmental and social costs argue for a slowdown

Meanwhile, Sarah Myers West, the co-executive director of the AI Now Institute, argued not so much that the AI bubble is about to burst as that it might be better for all of us if it did. West argued that the world cannot afford a technology with the energy footprint, appetite for data, and unexamined biases of today’s generative AI systems. Seen in that light, a slowdown in AI progress at the frontier might not be a bad thing, since it might force companies to look for ways to make AI both more energy-efficient and more data-efficient.

West was skeptical that smaller, more efficient models would necessarily help. She said they might simply trigger the Jevons paradox, the economic phenomenon in which making a resource more efficient to use only increases overall consumption of that resource.
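
West’s worry is, at bottom, arithmetic, and a toy calculation makes it concrete. Here is a minimal sketch in Python, with every number invented purely for illustration (none of them come from West or from any study):

```python
# Jevons paradox, illustrated with made-up numbers: a per-query
# efficiency gain can still raise total energy use if cheaper
# queries induce enough extra demand.

energy_per_query_wh = 3.0     # hypothetical energy cost per AI query (Wh)
queries_per_day = 1_000_000   # hypothetical baseline demand

baseline_total_kwh = energy_per_query_wh * queries_per_day / 1000

# Suppose a smaller model cuts per-query energy by 70%...
efficient_energy_wh = energy_per_query_wh * 0.3
# ...but the cheaper, snappier service attracts 5x as many queries.
induced_queries = queries_per_day * 5

new_total_kwh = efficient_energy_wh * induced_queries / 1000

print(f"Baseline:  {baseline_total_kwh:,.0f} kWh/day")  # 3,000 kWh/day
print(f"Efficient: {new_total_kwh:,.0f} kWh/day")       # 4,500 kWh/day
# Consumption rises 50% despite the 70% per-query efficiency gain.
```

Whether the rebound effect is really that strong in practice is exactly the empirical question West is raising.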

As I mentioned last week, I think that for many companies trying to build applied AI solutions for specific industry verticals, a slowdown at the frontier of AI model development matters very little. Those companies are mostly bets that their teams can use current AI technology to build products that will find product-market fit. Or, at least, that’s how they should be valued. (Sure, there’s a bit of “AI pixie dust” in their valuations too, but those companies are valued mostly on what they can create using today’s AI models.)

Scaling laws do matter for the foundation model companies

But the companies whose whole business is creating foundation models (OpenAI, Anthropic, Cohere, and Mistral) are valued very much on the idea of getting to artificial general intelligence (AGI), a single AI system that is at least as capable as humans at most cognitive tasks. For these companies, diminishing returns from scaling LLMs do matter.

But even here, it’s important to note a few things. While returns from pre-training larger and larger AI models seem to be slowing, AI companies are only just starting to explore the returns from scaling up “test-time compute” (i.e., giving an AI model that runs some kind of search process over possible answers more time, or more computing resources, to conduct that search). That is what OpenAI’s o1 model does, and it is likely what future models from other AI labs will do too.
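
OpenAI has not published the details of how o1 spends its extra inference-time compute, but the general idea can be sketched as a simple best-of-N search: sample several candidate answers and keep the one a scoring function rates highest. Everything in this Python sketch (the generator, the scorer, and the choice of N) is a hypothetical stand-in, not OpenAI’s actual method:

```python
import random

def generate_candidate(prompt: str) -> str:
    """Stand-in for sampling one answer from a language model."""
    return f"candidate answer #{random.randint(0, 999)} to: {prompt}"

def score(prompt: str, answer: str) -> float:
    """Stand-in for a verifier or reward model that rates an answer."""
    return random.random()

def best_of_n(prompt: str, n: int) -> str:
    """Spend more test-time compute by sampling n candidates and
    keeping the highest-scoring one. Larger n means more compute
    per query and, with a good scorer, better answers."""
    candidates = [generate_candidate(prompt) for _ in range(n)]
    return max(candidates, key=lambda answer: score(prompt, answer))

print(best_of_n("What is 17 * 24?", n=1))   # cheap: one shot
print(best_of_n("What is 17 * 24?", n=32))  # 32x the inference compute
```

The key point is that n becomes a dial: you can buy better answers by spending more compute at inference time, without retraining the underlying model at all.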

Also, while OpenAI has always been most closely associated with LLMs and the “scale is all you need” hypothesis, most of these frontier labs have employed, and still employ, researchers with expertise in other flavors of deep learning. If progress from scale alone is slowing, that is likely to encourage them to push for a breakthrough using a slightly different method—search, reinforcement learning, or perhaps even a completely different, non-Transformer architecture.

Google DeepMind and Meta are also in a slightly different camp here, because those companies have huge advertising businesses that support their AI efforts. Their valuations are less directly tied to frontier AI development—especially if it seems like the whole field is slowing down.

It would be a different story if one lab were achieving results that Meta or Google could not replicate—which is what some people thought was happening when OpenAI leapt out ahead with the debut of ChatGPT. But since then, OpenAI has not managed to maintain a lead of more than three months for most new capabilities.

As for Nvidia, its GPUs are used for both training and inference (i.e. applying an AI model once it has been trained)—but it has optimized its most advanced chips for training. If scale stops yielding returns during training, Nvidia could potentially be vulnerable to a competitor with chips better optimized for inference. (For more on Nvidia, check out my feature on company CEO Jensen Huang that accompanied Fortune’s inaugural 100 Most Powerful People in Business list.)

With that, here’s more AI news.

Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn

Correction, Nov. 15: Due to erroneous information provided by Robin AI, last Tuesday’s edition of this newsletter incorrectly identified billionaire Michael Bloomberg’s family office Willets as an investor in the company’s “Series B+” round. Willets was not an investor.

Before we get to the news: If you want to learn more about what’s next in AI and how your company can derive ROI from the technology, join me in San Francisco on Dec. 9-10 for Fortune Brainstorm AI. We’ll hear about the future of Amazon Alexa from Rohit Prasad, the company’s senior vice president and head scientist, artificial general intelligence; we’ll learn about the future of generative AI search at Google from Liz Reid, Google’s vice president, search; and about the shape of AI to come from Christopher Young, Microsoft’s executive vice president of business development, strategy, and ventures; and we’ll hear from former San Francisco 49er Colin Kaepernick about his company Lumi and AI’s impact on the creator economy. You can view the agenda and apply to attend here. (And remember, if you write the code KAHN20 in the “Additional comments” section of the registration page, you’ll get 20% off the ticket price, a nice reward for being a loyal Eye on AI reader!)

AI IN THE NEWS

Amazon’s launch of a new AI-powered Alexa plagued by further technical issues. My Fortune colleague Jason Del Rey has obtained internal Amazon emails showing that staff working on the new version of Amazon Alexa have written to managers to warn that the product is not yet ready to launch. In particular, emails from earlier this month show that engineers worry that latency, or how long it takes the new Alexa to generate responses, makes the product potentially too frustrating for users to enjoy or to pay an additional subscription fee to use. Other emails indicate the new Alexa may not be compatible with older Amazon Echo smart speakers, and that staff worry the new Alexa won’t offer enough “skills,” or actions a user can perform through the digital voice assistant, to justify an increased price for the product. You can read Jason’s story here.

Anthropic is working with the U.S. government to test if its AI chatbot will leak nuclear secrets. That’s according to a story from Axios, which quotes the AI company as saying it has been working with the Department of Energy’s National Nuclear Security Administration since April to test whether its Claude 3 Sonnet and Claude 3.5 Sonnet models can be prompted to give responses that might help someone develop a nuclear weapon or figure out how to attack a nuclear facility. Neither Anthropic nor the government would reveal what the tests, which are classified, have found so far. But Axios points out that Anthropic’s work with the DOE on secret projects may pave the way for it to work with other U.S. national security agencies, and that several of the top AI companies have recently been interested in obtaining government contracts.

Nvidia’s struggling to overcome heating issues with Blackwell GPU racks. Unnamed Nvidia employees and customers told The Information that the company has faced problems keeping large racks of its latest Blackwell GPUs from overheating. The company has asked suppliers several times to redesign the racks, which house 72 of the powerful chips, and the issue may delay shipment of large numbers of GPU racks to some customers, although Michael Dell has said that his company has shipped some of the racks to Nvidia-backed cloud service provider CoreWeave. Blackwell had already been hit by a design flaw that delayed full production of the chip by a quarter. Nvidia declined to comment on the report.

OpenAI employees raise questions about gender diversity at the company. Several women at OpenAI have raised concerns about the company’s culture following the departures of chief technology officer Mira Murati and another senior female executive, Lilian Weng, The Information reported. A memo shared internally by a female research program manager and seen by the publication called for more visible promotion of women and nonbinary individuals already making significant contributions. The memo also highlights challenges in recruiting and retaining female and nonbinary technical talent, a problem exacerbated by Murati’s departure and her subsequent recruitment of former OpenAI staff to her new startup. OpenAI has since filled some leadership gaps with male co-leads, and its overall workforce and leadership remain predominantly male.

EYE ON AI RESEARCH

A foundation model for household robots. Robotics software startup Physical Intelligence, which recently raised $400 million in funding from Jeff Bezos, OpenAI, and others, has released a new foundation model for robotics. The idea is to do for robot control what LLMs did for language: create AI models that let any robot perform a host of basic motions and tasks in any environment.

In the past, robots often had to be trained specifically for a particular setting in which they would operate—either through actual experience in that setting, or through having their software brains learn in a simulated virtual environment that closely matched the real world setting into which they would be deployed. The robot could usually only perform one task or a limited range of tasks in that specific environment. And the software controlling the robot only worked for one specific robot model.

But the new model from Physical Intelligence, which it calls π0 (pi-zero), allows different kinds of robots to perform a whole range of household tasks, from loading and unloading a dishwasher to folding laundry, taking out the trash, and delicately handling eggs. What’s more, the model works across multiple types of robots: Physical Intelligence trained π0 on a huge dataset of eight different kinds of robots performing a multitude of tasks. The new model may help speed the adoption of robots not only in households but also in warehouses, factories, restaurants, and other work settings. You can read Physical Intelligence’s blog post here.
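
Physical Intelligence hasn’t released π0’s code, but the cross-embodiment idea, one policy conditioned on the robot’s body and a language instruction that emits actions sized to that robot, can be sketched roughly as follows. Every name and shape in this Python sketch is hypothetical, not the startup’s actual API:

```python
from dataclasses import dataclass
import numpy as np

# Hypothetical sketch of a cross-embodiment robot policy interface.
# Nothing here reflects Physical Intelligence's actual code or API.

@dataclass
class RobotSpec:
    name: str         # e.g. "single-arm", "bimanual", "mobile-manipulator"
    action_dim: int   # each robot body has its own action space size

class CrossEmbodimentPolicy:
    """One model serving many robot bodies: it conditions on the robot
    spec plus a language instruction and emits an action vector sized
    for that particular robot."""

    def act(self, spec: RobotSpec, image: np.ndarray, instruction: str) -> np.ndarray:
        # A real model would run a vision-language backbone here; we
        # return zeros just to show the shape of the interface.
        return np.zeros(spec.action_dim)

policy = CrossEmbodimentPolicy()
single_arm = RobotSpec("single-arm", action_dim=7)
bimanual = RobotSpec("bimanual", action_dim=14)
frame = np.zeros((224, 224, 3))  # placeholder camera image

print(policy.act(single_arm, frame, "fold the towel").shape)     # (7,)
print(policy.act(bimanual, frame, "load the dishwasher").shape)  # (14,)
```

The appeal is that, as with LLMs, one big model trained across many robot bodies could replace the per-robot, per-environment training described above.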

FORTUNE ON AI

How Mark Zuckerberg has fully rebuilt Meta around Llama —by Sharon Goldman

Exclusive: Perplexity’s CEO says his AI search engine is becoming a shopping assistant—but he can’t explain how products it recommends are chosen —by Jason Del Rey

Tesla jumps as Elon Musk’s ‘bet for the ages’ on Trump is seen paying off with federal self-driving rules —by Jason Ma

Commentary: AI will help us understand the very fabric of reality —by Demis Hassabis and James Manyka

AI CALENDAR

Nov. 19-22: Microsoft Ignite, Chicago

Nov. 20: Cerebral Valley AI Summit, San Francisco 

Nov. 21-22: Global AI Safety Summit, San Francisco

Dec. 2-6: AWS re:Invent, Las Vegas

Dec. 9-10: Fortune Brainstorm AI, San Francisco (register here)

Dec. 10-15: Neural Information Processing Systems (NeurIPS) 2024, Vancouver, British Columbia

Jan. 7-10: CES, Las Vegas

BRAIN FOOD

What is Trump going to do about AI? A lobbying group called BSA | The Software Alliance, which represents OpenAI, Microsoft, and other tech companies, is calling on President-elect Donald Trump to preserve some Biden Administration initiatives on AI. These include a national AI research pilot that Biden funded and a new framework developed by the U.S. Commerce Department to manage high-risk use cases of AI. The group also wants Trump’s administration to continue international collaboration on AI safety standards, enact a national privacy law, negotiate data transfer agreements with more countries, and coordinate U.S. export controls with allies. And it wants Trump to consider lifting Biden-era controls on the export of some computer hardware and software to China. You can read more about the lobbying effort in this Semafor story.

The tech industry group is highly unlikely to get its entire wish list. Trump has signaled he plans to repeal Biden’s Executive Order on AI, which resulted in the Commerce Department’s framework, the creation of the U.S. AI Safety Institute, and several other measures. And Trump is likely to be even more hawkish on trade with China than Biden was. But trying to figure out exactly what Trump will do on AI is difficult—as my colleague Sharon Goldman detailed in this excellent explainer. It may be that Trump winds up being more favorable to AI regulation and international cooperation on AI safety than many expect.
