Friday, November 22, 2024
Technology

Apple is reportedly experimenting with language-generating AI

If not for last week’s Silicon Valley Bank (SVB) collapse, almost every conversation in tech would be centered on AI and chatbots. In the last few days, Microsoft-backed OpenAI released a new language model called GPT-4, its competitor Anthropic released the Claude chatbot, Google said it is integrating AI into Workspace tools like Gmail and Docs, and Microsoft’s Bing has drawn attention with its chatbot-enabled search. The one name missing from the action? Apple.

Last month, the Cupertino-based company held an internal event that focused on AI and large language models. According to a report from The New York Times, many teams, including people working on Siri, are testing “language-generating concepts” regularly.

People (including me) have long complained about Siri not understanding their queries. Siri, like other assistants such as Alexa and Google Assistant, has struggled to understand the different accents and phonetics of people living in different parts of the world, even when they are speaking the same language.

The newfound fame of ChatGPT and text-based search has made it easier for people to interact with different AI models. But currently, the only way to chat with Apple’s AI assistant, Siri, is to enable a feature buried in the accessibility settings.

In an interview with the NYT, former Apple engineer John Burke, who worked on Siri, said that Apple’s assistant has evolved slowly because of “clunky code,” which made it harder to push even basic feature updates. He also said that Siri was built on a huge database of words and phrases, so whenever engineers needed to add features or phrasings, the whole database had to be rebuilt, a process that reportedly took up to six weeks.
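To see why that kind of design is brittle, here is a minimal, purely illustrative sketch in Swift of an assistant built on a fixed phrase-to-intent table. This is not Apple’s code, and the names (PhraseTableAssistant, Intent) are made up for the example; the point is only that every new wording has to be added by hand.

```swift
// Toy example of a phrase-lookup assistant. Not Apple's Siri architecture;
// just an illustration of why a fixed table of phrases is brittle.
import Foundation

// Which action the assistant thinks the user wants.
enum Intent {
    case setTimer(minutes: Int)
    case weather
    case unknown
}

struct PhraseTableAssistant {
    // Hand-maintained mapping from exact phrasings to intents.
    private let table: [String: Intent] = [
        "set a timer for five minutes": .setTimer(minutes: 5),
        "what's the weather like": .weather
    ]

    // Only queries that match a stored phrase verbatim are understood.
    func respond(to query: String) -> Intent {
        table[query.lowercased()] ?? .unknown
    }
}

let assistant = PhraseTableAssistant()
print(assistant.respond(to: "What's the weather like"))    // weather
print(assistant.respond(to: "Is it going to rain today?")) // unknown: phrasing not in the table
```

A query worded even slightly differently falls through to unknown, which is exactly the limitation a language-generating model is meant to remove by generalizing over wording instead of matching stored phrases.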

The NYT report didn’t specify whether Apple is building its own language models or plans to adopt an existing one. But just like Google and Microsoft, the Tim Cook-led company is unlikely to limit itself to a Siri-powered chatbot. And since Apple has long prided itself on being an ally of artists and creators, it would presumably want to apply advances in language models to those areas as well.

The company has been shipping AI-powered features for a while now, even if they are not apparent at first glance. These include better keyboard suggestions, computational photography, Face ID unlocking with a mask on, lifting objects from their backgrounds across the system, handwashing and crash detection on Apple Watch, and, most recently, the karaoke feature on Apple Music. But none of them is as in-your-face as a chatbot.

Apple has generally been quiet about its AI efforts. But in January, the company started a program offering authors AI-powered narration services to turn their books into audiobooks, an indication that the iPhone maker is already thinking about use cases for generative AI. I won’t be surprised if we hear more about these efforts at the Worldwide Developers Conference (WWDC) in a few months.

