Friday, November 22, 2024
Technology

The week in AI: Apple makes machine learning moves

Keeping up with an industry as fast moving as AI is a tall order. So until an AI can do it for you, here’s a handy roundup of the last week’s stories in the world of machine learning, along with notable research and experiments we didn’t cover on their own.

It could be said that last week, Apple very visibly, and with intention, threw its hat into the ultra-competitive AI race. It’s not that the company hadn’t signaled its investments in — and prioritization of — AI previously. But at its WWDC event, Apple made it abundantly clear that AI was behind many of the features in both its forthcoming hardware and software.

For instance, iOS 17, which is set to arrive later this year, can suggest recipes for similar dishes from an iPhone photo using computer vision. AI also powers Journal, a new interactive diary that makes personalized suggestions based on activities across other apps.

iOS 17 will also feature an upgraded autocorrect powered by an AI model that can more accurately predict the next words and phrases that a user might use. Over time, it’ll become tailored, learning a user’s most frequently used words — including swear words, entertainingly.
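
Apple hasn’t shared implementation details, but the core personalization idea, a model that gradually learns the words a given user actually types next, is easy to illustrate. Below is a toy Python sketch using simple bigram counts; the class, example phrases, and suggestions are invented for illustration, and Apple’s production model is certainly far more sophisticated.

```python
# Toy sketch only: a frequency-based next-word predictor that adapts to a user's
# own typing. Apple's actual autocorrect model is not public; this just illustrates
# the "learns your most frequently used words" behavior described above.
from collections import Counter, defaultdict

class NextWordPredictor:
    def __init__(self):
        # Maps a previous word to counts of the words the user typed after it.
        self.bigrams = defaultdict(Counter)

    def learn(self, text: str) -> None:
        """Update bigram counts from text the user has typed."""
        words = text.lower().split()
        for prev, nxt in zip(words, words[1:]):
            self.bigrams[prev][nxt] += 1

    def suggest(self, prev_word: str, k: int = 3) -> list[str]:
        """Return the k words this user most often types after prev_word."""
        return [word for word, _ in self.bigrams[prev_word.lower()].most_common(k)]

predictor = NextWordPredictor()
predictor.learn("ducking hell that meeting ran long")
predictor.learn("ducking autocorrect strikes again")
print(predictor.suggest("ducking"))  # e.g. ['hell', 'autocorrect']
```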

AI is central to Apple’s Vision Pro augmented reality headset, too — specifically FaceTime on the Vision Pro. Using machine learning, the Vision Pro can create a virtual avatar of the wearer, interpolating out a full range of facial contortions — down to the skin tension and muscle work.

A facial scan used to create a digital persona on the Vision Pro

Image Credits: Apple

It might not be generative AI, which is without a doubt the hottest subcategory of AI today. But Apple’s intention, it seems to me, was to mount a comeback of sorts — to show that it’s not one to be underestimated after years of floundering machine learning projects, from the underwhelming Siri to the self-driving car in production hell.

Projecting strength isn’t just a marketing ploy. Apple’s historical underperformance in AI has led to serious brain drain: The Information reported that talented machine learning scientists — including a team that had been working on the type of tech underlying OpenAI’s ChatGPT — left Apple for greener pastures.

Showing that it’s serious about AI by actually shipping AI-imbued products feels like a necessary move — and a benchmark some of Apple’s competitors have, in fact, failed to meet in the recent past. (Here’s looking at you, Meta.) By all appearances, Apple made inroads last week — even if it wasn’t particularly loud about it.

Here are the other AI headlines of note from the past few days:

  • Meta makes a music generator: Not to be outdone by Google, Meta has released its own AI-powered music generator — and, unlike Google, open sourced it. Called MusicGen, Meta’s music-generating tool can turn a text description into about 12 seconds of audio (a minimal usage sketch follows this list).
  • Regulators examine AI safety: Following the U.K. government’s announcement last week that it plans to host a “global” AI safety summit this fall, OpenAI, Google DeepMind and Anthropic have committed to provide “early or priority access” to their AI models to support research into evaluation and safety.
  • AI, meet cloud: Salesforce is launching a new suite of products aimed at bolstering its position in the ultra-competitive AI space. Called AI Cloud, the suite, which includes tools designed to deliver “enterprise ready” AI, is Salesforce’s latest cross-disciplinary attempt to augment its product portfolio with AI capabilities.
  • Testing text-to-video AI: TechCrunch went hands-on with Gen-2, Runway’s AI that generates short video clips from text. The verdict? There’s a long way to go before the tech comes close to generating film-quality footage.
  • More money for enterprise AI: In a sign that there’s plenty of cash to go around for generative AI startups, Cohere, which is developing an AI model ecosystem for the enterprise, last week announced that it raised $270 million as part of its Series C round.
  • No GPT-5 for you: OpenAI is still not training GPT-5, CEO Sam Altman said at a recent conference hosted by the Economic Times — months after the Microsoft-backed startup pledged not to work on the successor to GPT-4 “for some time,” following concerns from industry executives and academics about the rapid pace of advancement of its large language models.
  • AI writing assistant for WordPress: Automattic, the company behind WordPress.com and the main contributor to the open source WordPress project, launched an AI assistant for the popular content management system last Tuesday.
  • Instagram gains a chatbot: Instagram may be working on an AI chatbot, according to images leaked by app researcher Alessandro Paluzzi. According to the leaks, which reflect in-progress app developments that may or may not ship, these AI agents can answer questions or give advice.
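
Since Meta open sourced MusicGen through its audiocraft package, trying it locally takes only a few lines. Here’s a minimal sketch assuming audiocraft’s published interface; the prompt and output filenames are arbitrary, and checkpoint names and defaults may differ between releases.

```python
# Minimal MusicGen sketch using Meta's open source audiocraft package
# (pip install audiocraft). Checkpoint names and defaults may vary by version.
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

model = MusicGen.get_pretrained("small")   # smallest released checkpoint
model.set_generation_params(duration=12)   # roughly the clip length mentioned above

# Arbitrary example prompt; MusicGen accepts a batch of text descriptions.
descriptions = ["lo-fi hip hop beat with warm piano and vinyl crackle"]
wav = model.generate(descriptions)         # returns a batch of waveform tensors

for i, one_wav in enumerate(wav):
    # Write each clip to disk with loudness normalization (clip_0.wav, ...).
    audio_write(f"clip_{i}", one_wav.cpu(), model.sample_rate, strategy="loudness")
```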

Other machine learnings

If you’re curious how AI might affect science and research over the next few years, a team across six national labs authored a report, based on workshops conducted last year, about exactly that. One might be tempted to say that a report grounded in last year’s trends is already obsolete, given how fast things have moved since. But while ChatGPT has made huge waves in tech and consumer awareness, the truth is that it’s not particularly relevant to serious research. The larger-scale trends are, and they’re moving at a different pace. The 200-page report is definitely not a light read, but each section is helpfully divided into digestible pieces.

Elsewhere in the national lab ecosystem, Los Alamos researchers are hard at work advancing the field of memristors, which combine data storage and processing — much like our own neurons do. It’s a fundamentally different approach to computation, one that has yet to bear fruit outside the lab, but this new work appears to move the ball forward, at least.

AI’s facility with language analysis is on display in this report on police interactions with people they’ve pulled over. Natural language processing was used as one of several factors to identify linguistic patterns that predict escalation of stops — especially with Black men. The human and machine learning methods reinforce each other. (Read the paper here.)
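
The paper’s pipeline pairs NLP with human annotation and several non-linguistic factors; the snippet below is not that pipeline, just a heavily simplified illustration of mapping transcript language to an outcome label, with fabricated snippets and labels.

```python
# Illustration only: a generic text classifier over fabricated transcript snippets.
# This is NOT the published study's method or data; it only shows the general shape
# of predicting an outcome label from linguistic features.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

snippets = [
    "license and registration please",
    "step out of the vehicle right now",
    "do you know why I pulled you over",
    "get out of the car immediately",
]
escalated = [0, 1, 0, 1]  # fabricated labels, purely for illustration

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(snippets, escalated)
print(clf.predict(["please step out of the vehicle"]))
```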

Image Credits: Cyrille Verdon / Renaud Defrancesco BUREAU 141 / EPFL

DeepBreath is a model, trained on recordings of breathing from patients in Switzerland and Brazil, that its creators at EPFL claim can help identify respiratory conditions early. The plan is to deploy it in a device called the Pneumoscope through spinout company Onescope. We’ll probably follow up with them for more info on how the company is doing.

Another AI health advance comes from Purdue, where researchers have made software that approximates hyperspectral imagery with a smartphone camera, successfully tracking blood hemoglobin and other metrics. It’s an interesting technique: using the phone’s super-slow-mo mode, it gets a lot of information about every pixel in the image, giving a model enough data to extrapolate from. It could be a great way to get this kind of health information without special hardware.
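
The published method is more involved than this, but the general shape, a temporal signal per pixel feeding a model that predicts a health metric, can be sketched with synthetic data. Everything here (the frame counts, the Ridge regressor, the fake hemoglobin labels) is an illustrative assumption, not the Purdue team’s approach.

```python
# Conceptual sketch with synthetic data: per-pixel temporal signals from
# high-frame-rate video feed a simple regressor. Not the Purdue team's method.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Pretend we captured 240 super-slow-mo frames of a 32x32 patch of skin.
frames = rng.random((240, 32, 32)).astype(np.float32)

# Each pixel's brightness over time becomes one feature vector: (1024 pixels, 240 features).
per_pixel_signals = frames.reshape(240, -1).T

# Synthetic per-pixel "hemoglobin" labels stand in for ground truth from real hardware.
labels = per_pixel_signals @ rng.random(240) * 0.01

# A simple regressor maps each pixel's temporal signal to the metric of interest.
model = Ridge(alpha=1.0).fit(per_pixel_signals, labels)
print("Predicted proxy value for first pixel:", model.predict(per_pixel_signals[:1])[0])
```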

Image Credits: MIT

I wouldn’t trust an autopilot to take evasive maneuvers just yet, but MIT is inching the tech closer with research that helps AI avoid obstacles while maintaining a desirable flight path. Any old algorithm can propose wild changes to direction in order to not crash, but doing so while maintaining stability and not pulping anything inside is harder. The team managed to get a simulated jet to perform some Top Gun-like maneuvers autonomously and without losing stability. It’s harder than it sounds.

Last this week is Disney Research, which can always be counted on to show off something interesting that also just happens to apply to filmmaking or theme park operations. At CVPR, it showed off a powerful and versatile “facial landmark detection network” that can track facial movements continuously, using more arbitrary reference points. Motion capture already works without the little capture dots, but this should make it even higher quality — and more dignified for the actors.
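
Disney’s network isn’t publicly available, but for a feel of what dot-free, continuous landmark tracking looks like in practice, here’s a short sketch using Google’s MediaPipe Face Mesh, a different off-the-shelf model chosen purely for illustration.

```python
# Generic landmark-tracking sketch with MediaPipe Face Mesh -- not Disney Research's
# network, just an off-the-shelf illustration of continuous, dot-free tracking.
import cv2
import mediapipe as mp

face_mesh = mp.solutions.face_mesh.FaceMesh(static_image_mode=False, refine_landmarks=True)

cap = cv2.VideoCapture(0)  # webcam; swap in a video file path for recorded footage
for _ in range(100):       # process a fixed number of frames for this sketch
    ok, frame = cap.read()
    if not ok:
        break
    results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_face_landmarks:
        # Each detected face yields several hundred normalized (x, y, z) landmarks.
        landmarks = results.multi_face_landmarks[0].landmark
        print(f"tracked {len(landmarks)} landmarks this frame")
cap.release()
face_mesh.close()
```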

