
Meta open sources framework for generating sounds and music

The day is fast approaching when generative AI won’t only write text and create images in a convincingly human-like style, but compose music and sounds that pass for a professional’s work, too.

This morning, Meta announced AudioCraft, a framework to generate what it describes as “high-quality,” “realistic” audio and music from short text descriptions, or prompts. It’s not Meta’s first foray into audio generation — the tech giant open sourced an AI-powered music generator, MusicGen, in June — but Meta claims that it’s made advances that vastly improve the quality of AI-generated sounds, such as dogs barking, cars honking and footsteps on a wooden floor.

In a blog post shared with TechCrunch, Meta explains that the AudioCraft framework was designed to simplify the use of generative models for audio compared to prior work in the field (e.g. Riffusion, Dance Diffusion and OpenAI’s Jukebox). AudioCraft, the code for which is available in open source, provides a collection of sound and music generators plus compression algorithms that can be used to create and encode songs and audio without having to switch between different codebases.

AudioCraft contains three generative AI models: MusicGen, AudioGen and EnCodec.
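To give a sense of how those pieces fit together in a single codebase, here is a minimal sketch of text-to-music generation with the open-sourced audiocraft Python package. The checkpoint name (facebook/musicgen-small) and the audio_write helper follow the public repository's documented usage rather than anything in Meta's announcement, so treat them as assumptions.

    # Minimal sketch; model ID and helpers assume the public audiocraft repo's documented usage.
    from audiocraft.models import MusicGen
    from audiocraft.data.audio import audio_write

    model = MusicGen.get_pretrained("facebook/musicgen-small")  # smaller checkpoint for quick tests
    model.set_generation_params(duration=8)  # length of each generated clip, in seconds

    prompts = ["lo-fi hip hop beat with warm piano chords"]
    wavs = model.generate(prompts)  # one waveform per text prompt

    for idx, wav in enumerate(wavs):
        # Write each clip to disk with loudness normalization.
        audio_write(f"musicgen_sample_{idx}", wav.cpu(), model.sample_rate, strategy="loudness")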

MusicGen isn’t new. But Meta’s released the training code for it, enabling users to train the model on their own dataset of music.

That could raise major ethical and legal issues, considering MusicGen “learns” from existing music to produce similar effects — a fact with which not all artists or generative AI users are comfortable.

Increasingly, homemade tracks that use generative AI to conjure familiar sounds that can be passed off as authentic, or at least close enough, have been going viral. Music labels have been quick to flag them to streaming partners, citing intellectual property concerns — and they’ve generally been victorious. But there’s still a lack of clarity on whether “deepfake” music violates the copyright of artists, labels and other rights holders.

Meta makes it clear that the pretrained, out-of-the-box version of MusicGen was trained on “Meta-owned and specifically licensed music”: 20,000 hours of audio, comprising 400,000 recordings along with text descriptions and metadata, from the company’s own Meta Music Initiative Sound Collection, Shutterstock’s music library and Pond5, a large stock media library. And Meta removed vocals from the training data to prevent the model from replicating artists’ voices. But while the MusicGen terms of use discourage using the model for “out-of-scope” use cases beyond research, Meta doesn’t expressly prohibit any commercial applications.

AudioGen, the other audio-generating model contained in AudioCraft, focuses on generating environmental sounds and sound effects as opposed to music and melodies.

Like MusicGen, AudioGen is an autoregressive, transformer-based model rather than a diffusion model of the kind behind most modern image generators (see OpenAI’s DALL-E 2, Google’s Imagen and Stable Diffusion), which learn to gradually subtract noise from starting data made entirely of noise. Given a text prompt, AudioGen instead predicts a sequence of discrete, compressed audio tokens step by step, which a decoder then turns back into a waveform.

Given a text description of an acoustic scene, AudioGen can generate environmental sounds with “realistic recording conditions” and “complex scene content.” Or so Meta says — we weren’t given the chance to test AudioGen or listen to its samples ahead of the model’s release. According to a whitepaper published alongside AudioGen this morning, AudioGen can also generate speech from prompts in addition to music, reflecting the makeup of its diverse training data.
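Assuming the released AudioGen checkpoint works the way the audiocraft repository documents it (the facebook/audiogen-medium model ID below comes from that repo, not from Meta's blog post), generating a sound effect looks much like the MusicGen sketch above:

    # Hedged sketch; the checkpoint ID is an assumption based on the public repo.
    from audiocraft.models import AudioGen
    from audiocraft.data.audio import audio_write

    model = AudioGen.get_pretrained("facebook/audiogen-medium")
    model.set_generation_params(duration=5)  # seconds of audio per prompt

    prompts = ["a dog barking while cars honk in the distance",
               "footsteps on a wooden floor"]
    wavs = model.generate(prompts)

    for idx, wav in enumerate(wavs):
        audio_write(f"audiogen_sample_{idx}", wav.cpu(), model.sample_rate, strategy="loudness")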

In the whitepaper, Meta acknowledges that AudioCraft could be misused to deepfake a person’s voice. And, given AudioCraft’s generative music capabilities, the framework raises the same ethical questions as MusicGen. But, as with MusicGen, Meta isn’t placing many restrictions on how AudioCraft, or its training code, can be used, for better or worse.

The last of AudioCraft’s three models, EnCodec, is a neural audio codec rather than a generator; the version released here is an improvement over a previous Meta model and enables music generation with fewer artifacts. Meta claims that it more efficiently models audio sequences, capturing different levels of information in the waveforms of the training data to help craft novel audio.

“EnCodec is a lossy neural codec that was trained specifically to compress any kind of audio and reconstruct the original signal with high fidelity,” Meta explains in the blog post. “The different streams capture different levels of information of the audio waveform, allowing us to reconstruct the audio with high fidelity from all the streams.”
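To see what a lossy neural codec does in practice, here is a sketch that round-trips a clip through Meta’s standalone encodec package (released before AudioCraft); the function names and the 24 kHz checkpoint follow that package’s documented usage and are assumptions relative to this announcement.

    # Sketch: compress a clip to discrete codes and reconstruct it, assuming the
    # standalone encodec package's documented API (pip install encodec).
    import torch
    import torchaudio
    from encodec import EncodecModel
    from encodec.utils import convert_audio

    model = EncodecModel.encodec_model_24khz()
    model.set_target_bandwidth(6.0)  # kbps; lower bandwidth means stronger compression

    wav, sr = torchaudio.load("input.wav")
    wav = convert_audio(wav, sr, model.sample_rate, model.channels)
    wav = wav.unsqueeze(0)  # add a batch dimension

    with torch.no_grad():
        # Each frame is a (codes, scale) pair; the codes are the parallel streams
        # of discrete tokens referenced in Meta's description above.
        encoded_frames = model.encode(wav)
        reconstruction = model.decode(encoded_frames)

    torchaudio.save("reconstructed.wav", reconstruction.squeeze(0), model.sample_rate)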

So what’s one to make of AudioCraft? Meta emphasizes the potential upsides, unsurprisingly, like providing inspiration for musicians and helping people iterate on their compositions “in new ways.” But as the advent of image and text generators has shown us, there are drawbacks — and probably lawsuits — lurking in the shadows.

Consequences be damned, Meta says that it plans to keep investigating better controllability and ways to improve the performance of generative audio models, as well as ways to mitigate the limitations and biases of such models. On the subject of biases, MusicGen, Meta notes, doesn’t perform well on descriptions in languages other than English and musical styles and cultures that aren’t Western — owing to very obvious biases in its training data.

“Rather than keeping the work as an impenetrable black box, being open about how we develop these models and ensuring that they’re easy for people to use — whether it’s researchers or the music community as a whole — helps people understand what these models can do, understand what they can’t do and be empowered to actually use them,” Meta writes in the blog post. “Through the development of more advanced controls, we hope that such models can become useful to both music amateurs and professionals.”

