Friday, November 22, 2024
Business

OpenAI debuts a speedier model for powering ChatGPT and more free features

OpenAI is launching a faster and cheaper version of the artificial intelligence model that underpins its chatbot, ChatGPT, as the startup works to hold on to its lead in an increasingly crowded market.

During a livestreamed event on Monday, OpenAI debuted GPT-4o. It’s an updated version of its GPT-4 model, which is now more than a year old. The new large language model, trained on vast amounts of data from the internet, will be better at handling text, audio and video in real-time. The updates will be available in the coming weeks.

Ask a question verbally, and the system can reply with an audio response in milliseconds, according to the company, allowing for a more fluid conversation. Likewise, if you feed the system an image prompt, it can respond with an image. 

The update will bring a number of features to free users that previously had been limited to those with a paid subscription to ChatGPT, such as the ability to search the web for answers to queries, speak to the chatbot and hear responses in various voices, and command it to store details that the chatbot can recall in the future.

The release of GPT-4o is poised to shake up the rapidly evolving AI landscape, where GPT-4 remains the gold standard. A growing number of startups and Big Tech companies, including Anthropic, Cohere and Alphabet Inc.’s Google, have recently pushed out AI models that they say match or surpass the performance of GPT-4 in certain benchmarks.

OpenAI’s announcement also comes the day before the Google I/O developer conference. Google, an early leader in the artificial intelligence space, is expected to use the event to unveil more AI updates after racing to keep pace with Microsoft Corp.-backed OpenAI.

Rather than relying on separate AI models to process voice, text and visual inputs, GPT-4o — the “o” stands for omni — combines all three into a single model, allowing it to be faster than its predecessor. 

But the new model hit some snags. The audio frequently cut out as the researchers spoke during their demo. The AI system also surprised the audience when, after coaching a researcher through the process of solving an algebra problem, it chimed in with a flirtatious-sounding voice: “Wow, that’s quite the outfit you’ve got on.”

OpenAI is beginning to roll out GPT-4o’s new text and image capabilities to some paying ChatGPT Plus and Team users today, and is offering those capabilities to enterprise users soon. The company will make the new version of its “voice mode” assistant available to ChatGPT Plus users in the coming weeks. 

As part of its updates, OpenAI said it’s also enabling anyone to access its GPT Store, which includes customized chatbots made by users. Previously, it was only available to paying customers.

Speculation about OpenAI’s next launch has become a Silicon Valley parlor game in recent weeks. A mysterious new chatbot caused a stir among AI watchers after it showed up on a benchmarking website and appeared to rival GPT-4’s performance. OpenAI Chief Executive Officer Sam Altman offered winking references to the chatbot on X, fueling rumors that his company was behind it. 

The company is working on a wide range of products, including voice technology and video software. OpenAI is also developing a search feature for ChatGPT, Bloomberg previously reported.

On Friday, the company quelled some of the feverish speculation by saying it wouldn’t imminently launch GPT-5, a much anticipated version of its model that some in the tech world expect to be radically more capable than current AI systems. It also said it wouldn’t unveil a new search product, a tool that could compete with Google. Google’s stock ticked higher on the news. 

But after the event wrapped, Altman was quick to keep the speculation going. “We’ll have more stuff to share soon,” he wrote on X. 
