Friday, November 22, 2024

The week in AI: OpenAI attracts deep-pocketed rivals in Anthropic and Musk

Keeping up with an industry as fast-moving as AI is a tall order. So until an AI can do it for you, here’s a handy roundup of the last week’s stories in the world of machine learning, along with notable research and experiments we didn’t cover on their own.

The biggest news of the last week (we politely withdraw our Anthropic story from consideration) was the announcement of Bedrock, Amazon’s service that provides a way to build generative AI apps via pretrained models from startups including AI21 Labs, Anthropic and Stability AI. Currently available in “limited preview,” Bedrock also offers access to Titan FMs (foundation models), a family of AI models trained in-house by Amazon.
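
For a sense of the developer workflow Amazon is pitching, here’s a minimal sketch of calling a hosted foundation model through the AWS SDK for Python (boto3). Bedrock is still in limited preview, so treat the model ID, request shape and region below as assumptions for illustration rather than the final API.

```python
import json

import boto3

# Bedrock puts hosted foundation models behind a single runtime endpoint.
# The model ID and request body below are assumptions for illustration.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.invoke_model(
    modelId="amazon.titan-text-express-v1",  # hypothetical Titan text model
    contentType="application/json",
    accept="application/json",
    body=json.dumps({"inputText": "Summarize this week's AI news in one sentence."}),
)

print(json.loads(response["body"].read()))
```

The point is less the specific call than the shape of the offering: one managed API sitting in front of models from several providers, billed and scaled like any other AWS service.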

It makes perfect sense that Amazon would want to have a horse in the generative AI race. After all, the market for AI systems that create text, audio, speech and more could be worth more than $100 billion by 2030, according to Grand View Research.

But Amazon has a motive beyond nabbing a slice of a growing new market.

In a recent Motley Fool piece, Timothy Green presented compelling evidence that Amazon’s cloud business could be slowing. The company reported 27% year-over-year revenue growth for its cloud services in Q3 2022, but the uptick slowed to a mid-20% rate by the tail end of the quarter. Meanwhile, operating margin for Amazon’s cloud division was down 4 percentage points year over year in the same quarter, suggesting that Amazon expanded too quickly.

Amazon clearly has high hopes for Bedrock, going so far as to train the aforementioned in-house models ahead of the launch — which was likely not an insignificant investment. And lest anyone cast doubt on the company’s seriousness about generative AI, Amazon hasn’t put all of its eggs in one basket. This week, it made CodeWhisperer, its system that generates code from text prompts, free for individual developers.

So, will Amazon capture a meaningful piece of the generative AI space and, in the process, reinvigorate its cloud business? It’s a lot to hope for — especially considering the tech’s inherent risks. Time will tell, ultimately, as the dust settles in generative AI and competitors large and small emerge.

Here are the other AI headlines of note from the past few days:

  • The wide, wide world of AI regulation: Everyone seems to have their own ideas about how to regulate AI, and that means about 20 different frameworks across every major country and economic zone. Natasha gets deep into the nitty-gritty with this exhaustive (at present) list of regulatory frameworks, including outright bans like Italy’s on ChatGPT, and their potential effects on the AI industry in each jurisdiction. China is doing its own thing, though.
  • Musk takes on OpenAI: Not satisfied with dismantling Twitter, Elon Musk is reportedly planning to take on his erstwhile ally OpenAI, and is currently attempting to collect the money and people necessary to do so. The busy billionaire may tap the resources of his several companies to accelerate the work, but there’s good reason to be skeptical of this endeavor, Devin writes.
  • The elephant in the room: AI research startup Anthropic aims to raise as much as $5 billion over the next two years to take on rival OpenAI and enter over a dozen major industries, according to company documents obtained by TechCrunch. In the documents, Anthropic says that it plans to build a “frontier model” — tentatively called “Claude-Next” — 10 times more capable than today’s most powerful AI, but that this will require a billion dollars in spending over the next 18 months.
  • Build your own chatbot: An app called Poe will now let users make their own chatbots using prompts combined with an existing bot, like OpenAI’s ChatGPT, as the base. First launched publicly in February, Poe is the latest product from the Q&A site Quora, which has long provided web searchers with answers to the most Googled questions.
  • Beyond diffusion: Though the diffusion models used by popular tools like Midjourney and Stable Diffusion may seem like the best we’ve got, the next thing is always coming — and OpenAI might have hit on it with “consistency models,” which can already do simple tasks an order of magnitude faster than the likes of DALL-E, Devin reports.
  • A little town with AI: What would happen if you filled a virtual town with AIs and set them loose? Researchers at Stanford and Google sought to find out in a recent experiment involving ChatGPT. Their attempt to create believable “simulacra of human behavior” was successful, by all appearances — the 25 ChatGPT-powered AIs were convincingly, surprisingly human-like in their interactions.
Image Credits: Google / Stanford University (“Interactive Simulacra of Human Behavior”)

  • Generative AI in the enterprise: In a piece for TC+, Ron writes about how transformative technologies like ChatGPT could be if applied to the enterprise applications people use on a daily basis. He notes, though, that getting there will require creativity to design the new AI-powered interfaces in an elegant way, so that they don’t feel bolted on.

More machine learnings

Image Credits: Meta

Meta open-sourced a popular experiment that let people animate drawings of people, however crude they were. It’s one of those unexpected applications of the tech that is delightful yet totally trivial. Still, people liked it so much that Meta is letting the code run free so anyone can build it into something of their own.

Another Meta experiment, called Segment Anything, made a surprisingly large splash. LLMs are so hot right now that it’s easy to forget about computer vision entirely, let alone a specific part of the vision pipeline that most people don’t think about. But segmentation (identifying and outlining objects) is an incredibly important piece of any robot application, and as AI continues to infiltrate “the real world” it’s more important than ever that it can… well, segment anything.

Image Credits: Meta
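
To make “segment anything” concrete, here’s a minimal sketch using the open-source segment_anything package Meta released alongside the model. The checkpoint filename and image path are placeholders, and the fully automatic mask generator shown here is only one way to prompt the model (it also accepts points and boxes).

```python
import cv2

from segment_anything import SamAutomaticMaskGenerator, sam_model_registry

# Load a pretrained SAM checkpoint (the filename here is a placeholder).
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
mask_generator = SamAutomaticMaskGenerator(sam)

# Read an image as RGB and segment everything in it, with no prompts at all.
image = cv2.cvtColor(cv2.imread("street_scene.jpg"), cv2.COLOR_BGR2RGB)
masks = mask_generator.generate(image)

# Each result is a dict with a binary mask, bounding box, area and scores.
for m in sorted(masks, key=lambda m: m["area"], reverse=True)[:5]:
    print(m["bbox"], m["area"], round(m["predicted_iou"], 3))
```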

Professor Stuart Russell has graced the TechCrunch stage before, but our half-hour conversations only scratch the surface of the field. Fortunately the man routinely gives lectures and talks and classes on the topic, which due to his long familiarity with it are very grounded and interesting, even if they have provocative names like “How not to let AI destroy the world.”

You should check out this recent presentation, introduced by another TC friend, Ken Goldberg:

