MAGNeT Small 10secs is an artificial-intelligence model capable of generating high-quality audio samples from text descriptions, whether music or ambient sounds. Unlike traditional approaches, it produces all four audio codebooks directly in a single pass using a non-autoregressive Transformer, without requiring semantic conditioning or model cascades. The model stands out for its simplicity and efficiency, offering a strong alternative for text-guided sound creation. It is aimed primarily at researchers and enthusiasts wishing to explore AI audio-generation capabilities, while opening up prospects for creative or scientific applications. Its performance is evaluated with objective criteria such as the Fréchet Audio Distance and the Kullback-Leibler divergence.
MAGNeT is a text-to-music and text-to-sound model capable of generating high-quality audio samples conditioned on text descriptions. It is a masked generative non-autoregressive Transformer trained over a 32kHz EnCodec tokenizer with 4 codebooks sampled at 50 Hz. Unlike prior work, MAGNeT requires neither semantic token conditioning nor model cascading, and it generates all 4 codebooks using a single non-autoregressive Transformer.
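As a back-of-the-envelope check of the token budget this setup implies, the sketch below computes the token grid a clip occupies. The constants are taken from the description above; the helper name is ours, not part of the Audiocraft API:

```python
# Assumed values from the model description: 32 kHz audio,
# 4 EnCodec codebooks, each sampled at 50 token frames per second.
SAMPLE_RATE = 32_000
NUM_CODEBOOKS = 4
FRAME_RATE = 50  # token frames per second, per codebook

def token_shape(duration_s: float) -> tuple[int, int]:
    """Return (codebooks, frames) for a clip of the given duration."""
    return NUM_CODEBOOKS, int(duration_s * FRAME_RATE)

print(token_shape(10.0))  # (4, 500): 500 frames per codebook for a 10 s clip
```

Since the model predicts all 4 codebook streams in a single non-autoregressive pass, a 10-second clip corresponds to a 4 x 500 token grid rather than a 2000-step autoregressive sequence.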
MAGNeT was published in Masked Audio Generation using a Single Non-Autoregressive Transformer by Alon Ziv, Itai Gat, Gael Le Lan, Tal Remez, Felix Kreuk, Alexandre Défossez, Jade Copet, Gabriel Synnaeve, Yossi Adi.
Six checkpoints are released:
- facebook/magnet-small-10secs
- facebook/magnet-medium-10secs
- facebook/magnet-small-30secs
- facebook/magnet-medium-30secs
- facebook/audio-magnet-small
- facebook/audio-magnet-medium
You can run MAGNeT locally through the original Audiocraft library:
First, install the audiocraft library:

```shell
pip install git+https://github.com/facebookresearch/audiocraft.git
```

Make sure to have ffmpeg installed:

```shell
apt-get install ffmpeg
```
```python
from audiocraft.models import MAGNeT
from audiocraft.data.audio import audio_write

model = MAGNeT.get_pretrained("facebook/magnet-small-10secs")
descriptions = ["happy rock", "energetic EDM"]
wav = model.generate(descriptions)  # generates 2 samples

for idx, one_wav in enumerate(wav):
    # Will save under {idx}.wav, with loudness normalization at -14 dB LUFS.
    audio_write(f'{idx}', one_wav.cpu(), model.sample_rate, strategy="loudness")
```
Organization developing the model: The FAIR team of Meta AI.
Model date: MAGNeT was trained between November 2023 and January 2024.
Model version: This is the version 1 of the model.
Model type: MAGNeT consists of an EnCodec model for audio tokenization and a non-autoregressive, Transformer-based language model for music modeling. The model comes in different sizes (300M and 1.5B parameters) and in two variants: a model trained for the text-to-music generation task and a model trained for text-to-audio generation.
Paper or resources for more information: More information can be found in the paper Masked Audio Generation using a Single Non-Autoregressive Transformer.
Citation details:
```bibtex
@misc{ziv2024masked,
    title={Masked Audio Generation using a Single Non-Autoregressive Transformer},
    author={Alon Ziv and Itai Gat and Gael Le Lan and Tal Remez and Felix Kreuk and Alexandre Défossez and Jade Copet and Gabriel Synnaeve and Yossi Adi},
    year={2024},
    eprint={2401.04577},
    archivePrefix={arXiv},
    primaryClass={cs.SD}
}
```
License: Code is released under MIT, model weights are released under CC-BY-NC 4.0.
Where to send questions or comments about the model: Questions and comments about MAGNeT can be sent by opening an issue on the project's GitHub repository.
Primary intended use: The primary use of MAGNeT is research on AI-based music and audio generation.
Primary intended users: The primary intended users of the model are researchers in audio, machine learning, and artificial intelligence, as well as amateurs seeking to better understand these models.
Out-of-scope use cases: The model should not be used on downstream applications without further risk evaluation and mitigation. The model should not be used to intentionally create or disseminate music pieces that create hostile or alienating environments for people. This includes generating music that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
Model performance measures: We used the following objective measures to evaluate the model on a standard music benchmark: the Fréchet Audio Distance, the Kullback-Leibler divergence over classifier label distributions, and a text-consistency score between the generated audio and the input prompt.
Additionally, we ran qualitative studies with human participants to evaluate the performance of the model.
More details on performance measures and human studies can be found in the paper.
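For readers unfamiliar with the Fréchet Audio Distance: it is the Fréchet (2-Wasserstein) distance between Gaussians fitted to embeddings of reference and generated audio. Below is a minimal NumPy/SciPy sketch of that computation, under the assumption that embeddings have already been extracted with a pretrained audio encoder (the extraction step is omitted, and the function name is ours):

```python
import numpy as np
from scipy import linalg

def frechet_audio_distance(emb_ref: np.ndarray, emb_gen: np.ndarray) -> float:
    """Fréchet distance between Gaussians fitted to two embedding sets.

    FAD = ||mu_r - mu_g||^2 + Tr(C_r + C_g - 2 * sqrtm(C_r @ C_g))
    """
    mu_r, mu_g = emb_ref.mean(axis=0), emb_gen.mean(axis=0)
    cov_r = np.cov(emb_ref, rowvar=False)
    cov_g = np.cov(emb_gen, rowvar=False)
    covmean = linalg.sqrtm(cov_r @ cov_g)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # discard numerical imaginary residue
    diff = mu_r - mu_g
    return float(diff @ diff + np.trace(cov_r + cov_g - 2.0 * covmean))

rng = np.random.default_rng(0)
a = rng.normal(size=(500, 8))          # stand-in for reference embeddings
fad_same = frechet_audio_distance(a, a)  # near zero for identical sets
```

Lower is better: identical embedding sets give a distance near zero, while a distribution shift in the generated audio inflates the mean and covariance terms.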
Decision thresholds: Not applicable.
The model was evaluated on the MusicCaps benchmark and on an in-domain held-out evaluation set, with no artist overlap with the training set.
The model was trained on licensed data using the following sources: the Meta Music Initiative Sound Collection, Shutterstock music collection and the Pond5 music collection. See the paper for more details about the training set and corresponding preprocessing.
Below are the objective metrics obtained on MusicCaps with the released model. Note that for the publicly released models, we used the state-of-the-art music source separation method, namely the open source Hybrid Transformer for Music Source Separation (HT-Demucs), in order to keep only instrumental tracks. This explains the difference in objective metrics with the models used in the paper.
| Model | Fréchet Audio Distance | KLD | Text Consistency |
|---|---|---|---|
| facebook/magnet-small-10secs | 4.22 | 1.11 | 0.28 |
| facebook/magnet-medium-10secs | 4.61 | 1.14 | 0.28 |
| facebook/magnet-small-30secs | 4.35 | 1.17 | 0.28 |
| facebook/magnet-medium-30secs | 4.63 | 1.20 | 0.28 |
More information can be found in the paper Masked Audio Generation using a Single Non-Autoregressive Transformer, in the Results section.
Data: The data sources used to train the model are created by music professionals and covered by legal agreements with the right holders. The model was trained on 16K hours of data; we believe that scaling the model on larger datasets can further improve its performance.
Mitigations: Tracks that include vocals were removed from the data sources using the corresponding tags, and then using a state-of-the-art music source separation method, namely the open-source Hybrid Transformer for Music Source Separation (HT-Demucs).
Limitations:
Biases: The source of data is potentially lacking diversity and all music cultures are not equally represented in the dataset. The model may not perform equally well on the wide variety of music genres that exists. The generated samples from the model will reflect the biases from the training data. Further work on this model should include methods for balanced and just representations of cultures, for example, by scaling the training data to be both diverse and inclusive.
Risks and harms: Biases and limitations of the model may lead to the generation of samples that may be considered biased, inappropriate, or offensive. We believe that providing the code to reproduce the research and train new models will help broaden their application to new and more representative data.
Use cases: Users must be aware of the biases, limitations and risks of the model. MAGNeT is a model developed for artificial intelligence research on music generation. As such, it should not be used for downstream applications without further investigation and mitigation of risks.
The audio-magnet models were trained on the following data sources: a subset of AudioSet (Gemmeke et al., 2017), BBC sound effects, AudioCaps (Kim et al., 2019), Clotho v2 (Drossos et al., 2020), VGG-Sound (Chen et al., 2020), FSD50K (Fonseca et al., 2021), Free To Use Sounds, Sonniss Game Effects, WeSoundEffects, Paramount Motion - Odeon Cinematic Sound Effects.
The audio-magnet models (sound effect generation) were evaluated on the AudioCaps benchmark.
Below are the objective metrics obtained with the released audio-magnet models on AudioCaps (consisting of 10-second long samples).
| Model | Fréchet Audio Distance | KLD |
|---|---|---|
| facebook/audio-magnet-small | 3.21 | 1.42 |
| facebook/audio-magnet-medium | 2.32 | 1.64 |
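The KLD columns above compare label distributions produced by an audio classifier on reference versus generated audio. A minimal sketch of the underlying divergence for categorical distributions follows; the classifier itself is omitted and the function name is ours:

```python
import math

def kl_divergence(p: list[float], q: list[float], eps: float = 1e-10) -> float:
    """KL(p || q) for categorical distributions, e.g. audio-tagger outputs.

    eps guards against log(0) when a class receives zero probability.
    """
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

p = [0.7, 0.2, 0.1]          # stand-in label distribution (reference audio)
q = [0.1, 0.2, 0.7]          # stand-in label distribution (generated audio)
kl_same = kl_divergence(p, p)  # 0 for identical distributions
```

As with FAD, lower is better: a generated sample whose predicted labels match those of the reference contributes a divergence near zero.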