mHuBERT-147 is a compact, high-performance multilingual speech representation model pre-trained on 90,000 hours of open-license audio covering 147 languages. Unlike traditional HuBERT models, it derives its discrete speech unit targets from a faiss IVF index, improving both training efficiency and target quality. Its representations support downstream tasks such as automatic speech recognition (ASR), speech translation (ST), and text-to-speech synthesis (TTS), and achieve state-of-the-art results on benchmarks such as ML-SUPERB. Use cases include multilingual transcription, linguistic analysis, and voice assistants in multilingual settings. What sets it apart is its coverage of a wide range of languages with a lightweight architecture.
This repository contains the best mHuBERT-147 pre-trained model.
MODEL DETAILS: 3rd iteration, K=1000, HuBERT base architecture (95M parameters), 147 languages.
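For quick feature extraction, here is a minimal sketch of loading the Hugging Face checkpoint. The checkpoint id and the default feature-extractor settings are assumptions (the repository may ship its own preprocessor config); the base model outputs 768-dimensional frame-level features.

```python
import torch
from transformers import HubertModel, Wav2Vec2FeatureExtractor

model_id = "utter-project/mHuBERT-147"  # assumed checkpoint id for this repo
model = HubertModel.from_pretrained(model_id)
model.eval()

# HuBERT base expects raw 16 kHz mono waveforms; a default extractor is
# constructed here in case the repo does not ship a preprocessor config.
feature_extractor = Wav2Vec2FeatureExtractor(
    feature_size=1, sampling_rate=16000, padding_value=0.0, do_normalize=True
)

waveform = torch.zeros(16000)  # 1 s of silence standing in for real audio
inputs = feature_extractor(
    waveform.numpy(), sampling_rate=16000, return_tensors="pt"
)

with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # (batch, frames, 768)
print(hidden.shape)
```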
mHuBERT-147 models are compact and competitive multilingual HuBERT models trained on 90K hours of open-license data in 147 languages. Unlike traditional HuBERTs, mHuBERT-147 models are trained using faiss IVF discrete speech units as targets. Training employs two-level up-sampling over languages and data sources. See our paper for more information.
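To illustrate how discrete units can be assigned with a faiss IVF index, here is a hedged sketch, not the project's actual pipeline: the index filename and nprobe value are placeholders, and random features stand in for real HuBERT frame features. Each frame is mapped to its nearest centroid, yielding a unit id in [0, K) with K=1000 as noted above.

```python
import faiss
import numpy as np

# Hypothetical filename; the real index is trained on HuBERT features.
index = faiss.read_index("mhubert147_ivf.index")
index.nprobe = 8  # number of IVF cells probed per query (assumed value)

# Frame-level features for one utterance: (num_frames, feature_dim).
features = np.random.randn(250, 768).astype("float32")

# Nearest-centroid search turns each frame into a discrete speech unit.
_, unit_ids = index.search(features, 1)
units = unit_ids[:, 0]  # unit sequence, values in [0, K)
print(units[:10])
```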
The manifest list is available here. Please note that there have been CommonVoice removal requests since training, which means that some of the listed files are no longer available.
The Fairseq fork contains the scripts for training with multilingual batching and two-level up-sampling; a sketch of the up-sampling idea follows.
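As a rough illustration of two-level up-sampling, the sketch below samples a language first and then a data source within that language, using temperature-scaled probabilities. The exponent value and the toy hour counts are assumptions; the paper defines the exact scheme.

```python
import numpy as np

def upsampling_probs(hours, alpha=0.5):
    """Temperature-scaled sampling probabilities from per-dataset hours."""
    scaled = np.asarray(hours, dtype=np.float64) ** alpha
    return scaled / scaled.sum()

rng = np.random.default_rng(0)

# Level 1: pick a language, up-sampling low-resource ones (toy hour counts).
lang_hours = {"en": 30000.0, "fr": 5000.0, "sw": 50.0}
lang = rng.choice(list(lang_hours), p=upsampling_probs(list(lang_hours.values())))

# Level 2: pick a data source within that language (toy hour counts).
src_hours = {"commonvoice": 800.0, "mls": 4000.0, "voxpopuli": 200.0}
src = rng.choice(list(src_hours), p=upsampling_probs(list(src_hours.values())))

print(lang, src)
```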
mHuBERT-147 reaches second and first position on the ML-SUPERB 10min and 1h leaderboards, respectively, and achieves new SOTA scores on three LID tasks. See our paper for more information.

Datasets: for ASR/ST/TTS datasets, only the train set is used.
Languages present that are not indexed by Hugging Face: Asturian (ast), Basaa (bas), Cebuano (ceb), Central Kurdish/Sorani (ckb), Hakha Chin (cnh), Hawaiian (haw), Upper Sorbian (hsb), Kabyle (kab), Moksha (mdf), Meadow Mari (mhr), Hill Mari (mrj), Erzya (myv), Taiwanese Hokkien (nan-tw), Sursilvan (rm-sursilv), Vallader (rm-vallader), Sakha (sah), Santali (sat), Scots (sco), Saraiki (skr), Tigre (tig), Tok Pisin (tpi), Akuapem Twi (tw-akuapem), Asante Twi (tw-asante), Votic (vot), Waray (war), Cantonese (yue).
To enable research on training dynamics, the intermediate checkpoints of the three training iterations are made available under the CC-BY-NC-SA-4.0 license via a protected link.
@inproceedings{boito2024mhubert,
  author={Boito, Marcely Zanon and Iyer, Vivek and Lagos, Nikolaos and Besacier, Laurent and Calapodescu, Ioan},
  title={{mHuBERT-147: A Compact Multilingual HuBERT Model}},
  year={2024},
  booktitle={Interspeech 2024},
}
This is an output of the European Project UTTER (Unified Transcription and Translation for Extended Reality) funded by European Union’s Horizon Europe Research and Innovation programme under grant agreement number 101070631.
For more information please visit https://he-utter.eu/
NAVER LABS Europe: https://europe.naverlabs.com/