by forkjoin-ai
VibeVoice 1.5B is a language model converted to GGUF format so it can run locally on edge devices such as browsers and smartphones. It targets real-time text comprehension and generation under tight memory and latency budgets. Typical use cases include voice assistants, automatic transcription, and other applications that need natural, low-latency interaction. Because it runs without a cloud backend, it offers both privacy and speed, and its lightweight footprint makes it well suited to embedded deployments.
Forkjoin.ai conversion of VibeVoice/VibeVoice-1.5B to GGUF format for edge deployment.
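Before loading, a downloaded GGUF file can be sanity-checked by reading its fixed binary header. A minimal sketch, assuming the GGUF v3 layout (little-endian: 4-byte magic `GGUF`, uint32 version, uint64 tensor count, uint64 metadata key/value count); the helper name is ours:

```python
import struct

def read_gguf_header(data: bytes) -> dict:
    """Parse the fixed-size GGUF header from the first 24 bytes of a file.

    Layout (little-endian): 4-byte magic b"GGUF", uint32 version,
    uint64 tensor_count, uint64 metadata_kv_count.
    """
    if len(data) < 24:
        raise ValueError("not enough bytes for a GGUF header")
    magic = data[:4]
    if magic != b"GGUF":
        raise ValueError(f"bad magic {magic!r}: not a GGUF file")
    version, tensor_count, kv_count = struct.unpack_from("<IQQ", data, 4)
    return {"version": version, "tensors": tensor_count, "metadata_kvs": kv_count}

# Example with a synthetic header (version 3, 2 tensors, 5 metadata entries);
# in practice pass open("vibevoice-1.5b-gguf.gguf", "rb").read(24) instead.
header = b"GGUF" + struct.pack("<IQQ", 3, 2, 5)
print(read_gguf_header(header))
```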
Run with llama.cpp:
./llama-cli -m vibevoice-1.5b-gguf.gguf -p "Your prompt here" -n 256
Or use with Ollama. Create a Modelfile:
FROM ./vibevoice-1.5b-gguf.gguf
Then register and run the model:
ollama create vibevoice-1.5b-gguf -f Modelfile
ollama run vibevoice-1.5b-gguf
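Once `ollama run` works, the same model can also be queried programmatically through Ollama's local REST API (default endpoint `http://localhost:11434/api/generate`). A sketch that builds the request body; the actual POST is left commented out so the snippet does not require a running server:

```python
import json

def build_generate_request(model: str, prompt: str, stream: bool = False) -> str:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return json.dumps({"model": model, "prompt": prompt, "stream": stream})

body = build_generate_request("vibevoice-1.5b-gguf", "Your prompt here")
print(body)

# To actually send it (requires a running Ollama server):
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:11434/api/generate",
#     data=body.encode(),
#     headers={"Content-Type": "application/json"})
# print(urllib.request.urlopen(req).read().decode())
```

With `stream=False` the server returns a single JSON object containing the full response instead of a stream of partial chunks.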
Forkjoin.ai runs AI models at the edge: in-browser, on-device, with zero cloud cost. These converted models power real-time inference, speech recognition, and natural language capabilities.
All conversions are optimized for edge deployment within browser and mobile memory constraints.
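Those memory constraints can be sanity-checked with simple arithmetic: weight storage is roughly parameter count times bits per weight. A sketch for the 1.5B parameter count; the bits-per-weight figures are approximations (e.g. Q8_0 and Q4_0 use 32-weight blocks with an fp16 scale, adding 0.5 bits per weight on average):

```python
def weight_memory_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate weight storage in gigabytes (1 GB = 1e9 bytes)."""
    return n_params * bits_per_weight / 8 / 1e9

# Rough footprint of a 1.5B-parameter model at common GGUF precisions.
for name, bits in [("F16", 16.0), ("Q8_0", 8.5), ("Q4_0", 4.5)]:
    print(f"{name}: ~{weight_memory_gb(1.5e9, bits):.2f} GB")
```

This is weight storage only; KV cache and activations add to the total at inference time.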
License: Apache 2.0 (follows the upstream model license)