mHuBERT-147 is a compact, competitive multilingual speech model designed to cover 147 languages, trained on 90,000 hours of open-license audio data. Unlike traditional HuBERT models, it uses discrete speech units optimized via faiss indexing, which improves its efficiency and accuracy. The model is a strong foundation for tasks such as automatic speech recognition (ASR), speech translation (ST), and speech synthesis (TTS), and reaches state-of-the-art scores on benchmarks such as ML-SUPERB. Use cases include multilingual transcription, linguistic analysis, and voice assistance in multicultural environments. What sets it apart is its ability to handle a wide range of languages with a lightweight architecture, while incorporating adaptive training mechanisms for better generalization.
This repository contains the best mHuBERT-147 pre-trained model.
MODEL DETAILS: 3rd iteration, K=1000, HuBERT base architecture (95M parameters), 147 languages.
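The snippet below is a minimal usage sketch for extracting frame-level features with this checkpoint, assuming the standard Transformers HuBERT classes apply to it; the silent placeholder waveform is an illustrative assumption, not part of this card.

```python
# Minimal sketch (assumption: the standard Transformers HuBERT API applies here).
import numpy as np
import torch
from transformers import AutoFeatureExtractor, HubertModel

extractor = AutoFeatureExtractor.from_pretrained("utter-project/mHuBERT-147")
model = HubertModel.from_pretrained("utter-project/mHuBERT-147")
model.eval()

# 16 kHz mono waveform; one second of silence as a placeholder input.
waveform = np.zeros(16000, dtype=np.float32)
inputs = extractor(waveform, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    features = model(**inputs).last_hidden_state  # (batch, frames, hidden)
print(features.shape)  # hidden size is 768 for the base architecture
```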
mHuBERT-147 is a family of compact and competitive multilingual HuBERT models trained on 90K hours of open-license data in 147 languages. Unlike traditional HuBERT models, mHuBERT-147 models are trained using faiss IVF discrete speech units. Training employs two-level up-sampling, at the language and data-source levels. See our paper for more information.
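As a rough illustration of what faiss IVF discrete units look like in practice, the sketch below trains an IVF index with K=1000 centroids and assigns each feature frame to its nearest centroid; the 39-dimensional MFCC-like random features are placeholders, and this is not the authors' actual training pipeline.

```python
# Hedged sketch: deriving K=1000 discrete speech units with a faiss IVF index.
# The random features stand in for real frame-level representations.
import numpy as np
import faiss

d, K = 39, 1000    # MFCC-like feature dimension (assumption), number of units
feats = np.random.rand(100_000, d).astype("float32")

quantizer = faiss.IndexFlatL2(d)              # holds the K centroids
index = faiss.IndexIVFFlat(quantizer, d, K)
index.train(feats)                            # k-means-style centroid training

# Nearest centroid per frame -> sequence of discrete unit IDs (pseudo-labels).
_, unit_ids = quantizer.search(feats, 1)
units = unit_ids.ravel()
print(units[:10])
```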
This repository contains:
Related Models:
The manifest list is available here. Please note that CommonVoice removal requests were received after training, which means that some of the listed files are no longer available.
The Fairseq fork contains the scripts for training with multilingual batching and two-level up-sampling.
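For intuition only, here is a hedged sketch of two-level temperature up-sampling: probabilities are first computed across languages, then across data sources within each language. The hour counts and the alpha value are invented for the example and do not come from the paper.

```python
# Hedged sketch of two-level up-sampling (languages, then data sources).
from collections import defaultdict

def upsample_probs(hours, alpha=0.5):
    """Temperature sampling: p_i proportional to hours_i ** alpha (alpha assumed)."""
    total = sum(h ** alpha for h in hours.values())
    return {k: (h ** alpha) / total for k, h in hours.items()}

# Hypothetical per-(language, source) hour counts.
hours = {("en", "CommonVoice"): 2000.0, ("en", "VoxPopuli"): 500.0,
         ("sw", "CommonVoice"): 50.0}

# Level 1: up-sample across languages.
lang_hours = defaultdict(float)
for (lang, _), h in hours.items():
    lang_hours[lang] += h
p_lang = upsample_probs(lang_hours)

# Level 2: up-sample across sources within each language, then combine.
p_final = {}
for lang, p in p_lang.items():
    sources = {src: h for (l, src), h in hours.items() if l == lang}
    for src, ps in upsample_probs(sources).items():
        p_final[(lang, src)] = p * ps
print(p_final)  # sampling probability per (language, source) pair
```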
mHuBERT-147 reaches second and first position on the ML-SUPERB 10min and 1h leaderboards, respectively. We achieve new SOTA scores for three LID tasks. See our paper for more information.

Datasets: For ASR/ST/TTS datasets, only the train set is used.
Languages present that are not indexed by Hugging Face: Asturian (ast), Basaa (bas), Cebuano (ceb), Central Kurdish/Sorani (ckb), Hakha Chin (cnh), Hawaiian (haw), Upper Sorbian (hsb), Kabyle (kab), Moksha (mdf), Meadow Mari (mhr), Hill Mari (mrj), Erzya (myv), Taiwanese Hokkien (nan-tw), Sursilvan (rm-sursilv), Vallader (rm-vallader), Sakha (sah), Santali (sat), Scots (sco), Saraiki (skr), Tigre (tig), Tok Pisin (tpi), Akuapem Twi (tw-akuapem), Asante Twi (tw-asante), Votic (vot), Waray (war), Cantonese (yue).
To enable research on training dynamics, the intermediate checkpoints for the three iterations are made available under the CC-BY-NC-SA-4.0 license via a protected link.
@inproceedings{boito2024mhubert,
  author={Boito, Marcely Zanon and Iyer, Vivek and Lagos, Nikolaos and Besacier, Laurent and Calapodescu, Ioan},
  title={{mHuBERT-147: A Compact Multilingual HuBERT Model}},
  year={2024},
  booktitle={Interspeech 2024},
}
This is an output of the European Project UTTER (Unified Transcription and Translation for Extended Reality) funded by European Union’s Horizon Europe Research and Innovation programme under grant agreement number 101070631.
For more information please visit https://he-utter.eu/
NAVER LABS Europe: https://europe.naverlabs.com/