
Anthropic CEO lays out A.I.'s short, medium, and long-term risks

Anxiety about the dangers of A.I. is a very 2023 problem, fanned by the rapid adoption of tools like text-to-image generators and lifelike chatbots.

The good news, for those prone to worry, is that you can organize your unease into three neat buckets: short-term A.I. risks, medium-term risks, and long-term risks. That’s the way that Dario Amodei, the cofounder and CEO of Anthropic, does it.

Amodei should know. In 2020 he left OpenAI, the maker of ChatGPT, to cofound Anthropic on the principle that large language models have the power to become exponentially more capable the more computing power is poured into them—and that as a result, these models must be designed from the ground up with safety in mind. In May 2023, the company raised $450 million in funding.

Speaking at the Fortune Brainstorm Tech conference in Deer Valley, Utah, on Monday, Amodei laid out his three-tiered fear model in response to a question by Fortune’s Jeremy Kahn about the existential risks posed by A.I. Here’s how Amodei worries about A.I.:

  • Short-term risks: The kinds of issues we’re facing today, “around things like bias and misinformation.”
  • Medium-term risks: “I think in a couple years as models get better at things like science, engineering, biology, you can just do very bad things with the models that you wouldn’t have been able to do without them.”
  • Long-term risks: “As we go into models that have the key property of agency—which means that they don’t just output text, but they can do things, whether it’s with a robot or on the internet—then I think we have to worry about them becoming too autonomous, and it being hard to stop or control what they do. And I think the extreme end of that is concerns about existential risk.” 

Large language models are incredibly versatile. They can be applied across a broad range of uses and scenarios—“most of them are good. But there’s some bad ones lurking in there and we have to find them and prevent them,” Amodei said.

We shouldn’t “freak out about” the existential long-term risk scenarios, he advised. “They’re not going to happen tomorrow, but as we continue on the AI exponential, we should understand that those risks are at the end of that exponential.”

But when asked by Kahn if he was ultimately an optimist or a pessimist about A.I., the Anthropic CEO offered an ambivalent response that will either be comforting or terrifying, depending on whether you’re a glass-half-full or half-empty type of person: “My guess is that things will go really well. But there’s a risk, maybe 10% or 20%, that this will go wrong, and it’s incumbent on us to make sure that doesn’t happen.”
