by Marvis-AI
Marvis TTS 100M v0.2 is a conversational text-to-speech model that generates speech in real time from text, with a natural fluidity suited to interactive exchanges. Its optimized architecture produces a continuous, coherent audio stream without the choppy artifacts typical of conventional solutions, while remaining lightweight enough (300 MB quantized) to run efficiently on mobile and consumer devices such as iPhones, iPads, and Macs. The model supports multiple languages, including English, French, and German, and can process sequences that interleave text and audio, offering flexibility for a wide range of applications. Its main strengths are voice cloning from audio samples, easy integration into accessibility and content-creation tools, and minimal latency for live interaction. Unlike solutions that segment text with hand-written rules, Marvis analyzes the full context of the input for more human-like intonation. Ideal for voice assistants, personalized narration, or automated media, it stands out for its balance of performance, sound quality, and respect for hardware constraints.
Marvis is a cutting-edge conversational speech model designed to enable real-time streaming text-to-speech synthesis. Built with efficiency and accessibility in mind, Marvis addresses the growing need for high-quality, real-time voice synthesis that can run on consumer devices such as Apple Silicon, iPhones, iPads, Macs and others.
Currently optimized for English, French, and German.
Real-time audio streaming:

```shell
pip install -U mlx-audio
mlx_audio.tts.generate --model Marvis-AI/marvis-tts-100m-v0.2 --stream \
  --text "Marvis TTS is a new text-to-speech model that provides fast streaming on edge devices."
```
Voice cloning:

```shell
mlx_audio.tts.generate --model Marvis-AI/marvis-tts-100m-v0.2 --stream \
  --text "Marvis TTS is a new text-to-speech model that provides fast streaming on edge devices." \
  --ref_audio ./conversational_a.wav
```
You can pass any reference audio to clone a voice from, or select a sample audio file from here.
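The "streaming" in the commands above means audio plays while later frames are still being generated, rather than waiting for the full utterance. The sketch below illustrates that producer/consumer pattern in plain Python with a stand-in generator; `fake_generate_frames` and `stream_playback` are hypothetical names, not part of the mlx-audio API. The frame size assumes the Mimi codec's 12.5 Hz frame rate on 24 kHz audio (80 ms, i.e. 1920 samples per frame).

```python
import queue
import threading

SAMPLE_RATE = 24_000
FRAME_SECONDS = 0.080  # Mimi codec frames cover 80 ms of audio (12.5 Hz frame rate)

def fake_generate_frames(text, n_frames=8):
    """Hypothetical stand-in for the model: yields PCM chunks one frame at a time."""
    for _ in range(n_frames):
        yield [0.0] * int(SAMPLE_RATE * FRAME_SECONDS)  # 1920 silent samples

def stream_playback(text):
    """Overlap generation and playback: each frame is queued as soon as it is
    ready, so playback can begin after the first 80 ms frame instead of after
    the whole utterance."""
    q = queue.Queue()
    played = []

    def producer():
        for frame in fake_generate_frames(text):
            q.put(frame)
        q.put(None)  # sentinel: generation finished

    t = threading.Thread(target=producer)
    t.start()
    while (frame := q.get()) is not None:
        played.append(frame)  # a real player would hand this chunk to the audio device
    t.join()
    return played

frames = stream_playback("hello")
```

With this structure, end-to-end latency is dominated by the time to produce the first frame, which is why a small model and a low-rate codec matter for interactive use.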
Marvis is built on the Sesame CSM-1B (Conversational Speech Model) architecture, a multimodal transformer that operates directly on Residual Vector Quantization (RVQ) tokens and uses Kyutai's Mimi codec. The architecture enables end-to-end training while maintaining low-latency generation, and employs a dual-transformer approach:
- Multimodal Backbone (100M parameters): Processes interleaved text and audio sequences to model the zeroth codebook level, providing semantic understanding and context.
- Audio Decoder (60M parameters): A smaller, specialized transformer that models the remaining 31 codebook levels to reconstruct high-quality speech from the backbone's representations.
Unlike models that require text chunking based on regex patterns, Marvis processes entire text sequences contextually, resulting in more natural speech flow and intonation.
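The dual-transformer generation loop described above can be sketched in plain Python. This is an illustrative toy, not the actual model: `backbone_step`, `decoder_step`, the codebook size, and the feedback scheme are all simplified assumptions. It only shows the control flow: per frame, the backbone emits the zeroth (semantic) RVQ token, the smaller decoder fills in the remaining 31 acoustic levels, and each completed 32-level frame can be sent to the codec immediately, which is what makes streaming possible.

```python
import random

N_CODEBOOKS = 32       # 1 semantic level (backbone) + 31 acoustic levels (decoder)
CODEBOOK_SIZE = 2048   # illustrative vocabulary size per RVQ level

def backbone_step(context):
    """Hypothetical backbone: maps the interleaved text/audio context to the
    zeroth-codebook token plus a hidden state handed to the decoder."""
    token0 = random.randrange(CODEBOOK_SIZE)
    hidden = hash(tuple(context)) % 1000  # stand-in for a hidden vector
    return token0, hidden

def decoder_step(token0, hidden):
    """Hypothetical audio decoder: fills in the remaining 31 RVQ levels for
    one frame, conditioned on the backbone's output."""
    rng = random.Random(hidden ^ token0)
    return [rng.randrange(CODEBOOK_SIZE) for _ in range(N_CODEBOOKS - 1)]

def generate_frames(text_tokens, n_frames):
    """Autoregressive loop: each frame's full 32-level RVQ code is complete
    before the next frame starts, so frames can stream to the codec as they
    are produced."""
    context = list(text_tokens)
    frames = []
    for _ in range(n_frames):
        token0, hidden = backbone_step(context)
        frames.append([token0] + decoder_step(token0, hidden))
        context.append(token0)  # simplified: only the semantic level feeds back here
    return frames

frames = generate_frames([1, 2, 3], n_frames=4)
```

The split matters for latency: the large backbone runs once per frame, while the cheap decoder handles the 31 remaining levels, keeping per-frame compute low.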
If you use Marvis in your research or applications, please cite:
```bibtex
@misc{marvis-tts-2025,
  title={Marvis-TTS: Efficient Real-time Voice Cloning with Streaming Speech Synthesis},
  author={Prince Canuma and Lucas Newman},
  year={2025}
}
```
Special thanks to Sesame and Kyutai for their groundbreaking open-source contributions that inspired our work, and to the broader open-source community for their unwavering support and collaboration.
Version: 0.2
Release Date: October 20, 2025
Creators: Prince Canuma & Lucas Newman