Stable Audio Open Small is a generative AI model capable of producing stereo audio clips of up to 11 seconds at 44.1 kHz, from English text descriptions alone. It stands out for its ability to generate varied, realistic sounds suited to creative uses such as music production, ambient soundscapes, and audio effects. Unlike other tools, it relies on a latent diffusion architecture tuned for a balance between quality and speed. Its training on openly licensed data (CC0, CC BY) allows for more flexible use, although commercial use requires a dedicated license. The model positions itself as an accessible option for creators who want to automate audio generation without deep technical expertise.
Please note: For commercial use, please refer to https://stability.ai/license
Stable Audio Open Small generates variable-length (up to 11s) stereo audio at 44.1kHz from text prompts. It comprises three components: an autoencoder that compresses waveforms into a manageable sequence length, a T5-based text embedding for text conditioning, and a transformer-based diffusion (DiT) model that operates in the latent space of the autoencoder.
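For intuition about the latent approach, here is a purely illustrative sketch of the shape bookkeeping: a toy strided-convolution encoder (not the released autoencoder, whose layers, channel counts, and compression ratio differ) compresses an 11-second stereo waveform into a much shorter latent sequence of the kind the DiT denoises before the decoder maps latents back to audio.
import torch
import torch.nn as nn
# Hypothetical stand-in encoder for illustration only, NOT the released architecture
toy_encoder = nn.Sequential(
    nn.Conv1d(2, 32, kernel_size=8, stride=4, padding=2),
    nn.GELU(),
    nn.Conv1d(32, 64, kernel_size=8, stride=4, padding=2),
)
waveform = torch.randn(1, 2, 44100 * 11)    # 11 s of stereo audio at 44.1 kHz
latent = toy_encoder(waveform)              # diffusion operates on sequences like this
print(waveform.shape, "->", latent.shape)   # time dimension shrinks by ~16x in this toy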
This model is made to be used with the stable-audio-tools library for inference, for example:
import torch
import torchaudio
from einops import rearrange
from stable_audio_tools import get_pretrained_model
from stable_audio_tools.inference.generation import generate_diffusion_cond
device = "cuda" if torch.cuda.is_available() else "cpu"
# Download model
model, model_config = get_pretrained_model("stabilityai/stable-audio-open-small")
sample_rate = model_config["sample_rate"]
sample_size = model_config["sample_size"]
model = model.to(device)
# Set up text and timing conditioning
conditioning = [{
    "prompt": "128 BPM tech house drum loop",
    "seconds_total": 11
}]
# Generate stereo audio
output = generate_diffusion_cond(
    model,
    steps=8,
    cfg_scale=1.0,
    conditioning=conditioning,
    sample_size=sample_size,
    sampler_type="pingpong",
    device=device
)
# Rearrange audio batch to a single sequence
output = rearrange(output, "b d n -> d (b n)")
# Peak normalize, clip, convert to int16, and save to file
output = output.to(torch.float32).div(torch.max(torch.abs(output))).clamp(-1, 1).mul(32767).to(torch.int16).cpu()
torchaudio.save("output.wav", output, sample_rate)
Stable Audio Open Small is a latent diffusion model based on a transformer architecture. To further optimize this model for maximum performance on Arm CPUs, you can follow the step-by-step deployment guide in the Arm Learning Path.
Our dataset consists of 486492 audio recordings, of which 472618 are from Freesound and 13874 are from the Free Music Archive (FMA). All audio files are licensed under CC0, CC BY, or CC Sampling+. The Freesound and Free Music Archive datasets were both used to train the autoencoder. The DiT was trained solely on the Freesound dataset. We use a publicly available pre-trained T5 model (t5-base) for text conditioning.
Attribution for all audio recordings used to train Stable Audio Open Small can be found on our attribution page.
We conducted an in-depth analysis to ensure no unauthorized copyrighted music was present in our training data before we began training.
To that end, we first identified music samples in Freesound using the PANNs music classifier based on AudioSet classes. The identified music samples had at least 30 seconds of music that was predicted to belong to a music-related class with a threshold of 0.15 (PANNs output probabilities range from 0 to 1). This threshold was determined by classifying known music examples from FMA and ensuring no false negatives were present.
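As an illustration of this filtering step, below is a hedged sketch of the windowed thresholding logic. The classify_window callable and the 10-second window length are assumptions for the example; the actual pipeline used the PANNs tagger over AudioSet classes, as described above.
import numpy as np
WINDOW_SECONDS = 10        # assumed analysis window for this sketch
MUSIC_THRESHOLD = 0.15     # probability threshold described above
MIN_MUSIC_SECONDS = 30     # minimum duration predicted as music
def is_music_sample(audio, sample_rate, classify_window):
    # classify_window is a hypothetical callable returning the maximum probability
    # over music-related AudioSet classes for one window (e.g. backed by PANNs)
    window = WINDOW_SECONDS * sample_rate
    music_seconds = 0
    for start in range(0, len(audio) - window + 1, window):
        if classify_window(audio[start:start + window]) >= MUSIC_THRESHOLD:
            music_seconds += WINDOW_SECONDS
    return music_seconds >= MIN_MUSIC_SECONDS
# Toy usage with a dummy classifier standing in for the real tagger
rng = np.random.default_rng(0)
dummy_audio = rng.standard_normal(32000 * 60)            # 60 s of noise at 32 kHz
dummy_classifier = lambda w: float(rng.uniform(0, 1))    # stand-in probabilities
print(is_music_sample(dummy_audio, 32000, dummy_classifier))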
The identified music samples were sent to Audible Magic’s identification services, a trusted content detection company, to ensure the absence of copyrighted music. Audible Magic flagged suspected copyrighted music, which we subsequently removed before training on the dataset. The majority of the removed content was field recordings in which copyrighted music was playing in the background. Following this procedure, we were left with 266324 CC0, 194840 CC-BY, and 11454 CC Sampling+ audio recordings.
We also conducted an in-depth analysis to ensure no copyrighted content was present in FMA's subset. In this case, the procedure was slightly different because the FMA subset consists of music signals. We did a metadata search against a large database of copyrighted music (https://www.kaggle.com/datasets/maharshipandya/-spotify-tracks-dataset) and flagged any potential match. The flagged content was reviewed individually by humans. After this process, we ended up with 8967 CC-BY and 4907 CC0 tracks.
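A minimal sketch of that metadata search is below, assuming two pandas DataFrames with "title" and "artist" columns (one for the FMA subset, one for the reference catalogue of copyrighted tracks). The column names and the normalization rule are assumptions for illustration; as noted above, flagged matches were reviewed individually by humans rather than removed automatically.
import pandas as pd
def normalize(text):
    # Crude normalization for matching; the real process relied on human review
    return "".join(ch for ch in text.lower() if ch.isalnum() or ch.isspace()).strip()
def flag_potential_matches(fma_tracks, reference):
    # Both frames are assumed to have "title" and "artist" columns
    fma = fma_tracks.assign(key=fma_tracks["title"].map(normalize) + "|" + fma_tracks["artist"].map(normalize))
    ref = reference.assign(key=reference["title"].map(normalize) + "|" + reference["artist"].map(normalize))
    return fma[fma["key"].isin(set(ref["key"]))].drop(columns="key")
# Toy usage with made-up rows
fma_tracks = pd.DataFrame({"title": ["Sunrise Loop", "Known Hit"], "artist": ["someone", "Famous Artist"]})
reference = pd.DataFrame({"title": ["Known Hit"], "artist": ["Famous Artist"]})
print(flag_potential_matches(fma_tracks, reference))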
The primary use of Stable Audio Open Small is research and experimentation on AI-based music and audio generation.
The model should not be used in downstream applications without further risk evaluation and mitigation. The model should not be used to intentionally create or disseminate audio or music pieces that create hostile or alienating environments for people.
The data sources potentially lack diversity, and not all cultures are equally represented in the dataset. The model may not perform equally well across the wide variety of music genres and sound effects that exist. The generated samples will reflect the biases of the training data.