by litert-community
The Qwen2.5 1.5B Instruct model is an optimized version of the Qwen2.5 model, specifically designed to run efficiently on mobile devices such as Android and iOS. It enables on-device language inference tasks without relying on an external server, ensuring improved privacy and reduced latency. Its core capabilities include text generation, query comprehension, and conversational question answering. This model stands out for its lightweight design and performance optimized for mobile environments, while still delivering high-quality responses for a variety of use cases, such as virtual assistants, educational applications, or productivity tools.
This page provides several variants of Qwen/Qwen2.5-1.5B-Instruct that are ready for deployment on Android using the LiteRT (formerly TFLite) stack, the MediaPipe LLM Inference API, and LiteRT-LM.
Disclaimer: The target deployment surface for the LiteRT models is Android/iOS/Web and the stack has been optimized for performance on these targets. Trying out the system in Colab is an easier way to familiarize yourself with the LiteRT stack, with the caveat that the performance (memory and latency) on Colab could be much worse than on a local device.
1. Download or build the app from GitHub.
2. Install the app from Google Play.
3. Follow the instructions in the app.
To build the demo app from source, please follow the instructions from the GitHub repository.
Note that all benchmark stats were measured on a Samsung S25 Ultra with multiple prefill signatures enabled.
| Backend | Quantization scheme | Context length | Prefill (tokens/sec) | Decode (tokens/sec) | Time-to-first-token (sec) | Model size (MB) | Peak RSS memory (MB) | GPU memory (RSS, MB) |
|---|---|---|---|---|---|---|---|---|
| CPU | fp32 (baseline) | 1280 | 49.50 | 10.00 | 21.25 | 6182 | 6254 | N/A |
| CPU | dynamic_int8 | 1280 | 297.58 | 34.25 | 3.71 | 1598 | 1997 | N/A |
| CPU | dynamic_int8 | 4096 | 162.72 | 26.06 | 6.57 | 1598 | 2216 | N/A |
| GPU | dynamic_int8 | 1280 | 1667.75 | 30.88 | 3.63 | 1598 | 1846 | 1505 |
| GPU | dynamic_int8 | 4096 | 933.45 | 27.30 | 4.77 | 1598 | 1869 | 1505 |
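The prefill and decode rates above can be combined into a back-of-the-envelope estimate of end-to-end response time. The formula and the token counts below are illustrative assumptions for reading the table, not part of the benchmark itself:

```python
def estimate_latency(prompt_tokens: int, output_tokens: int,
                     prefill_tps: float, decode_tps: float) -> float:
    """Rough end-to-end generation time in seconds:
    time to ingest the prompt (prefill) plus time to emit the output (decode).
    Ignores fixed startup overhead, so it slightly underestimates real latency.
    """
    return prompt_tokens / prefill_tps + output_tokens / decode_tps

# Example: CPU dynamic_int8 at 1280 context (297.58 prefill, 34.25 decode tok/s),
# with an assumed 1024-token prompt and 256-token reply.
t = estimate_latency(prompt_tokens=1024, output_tokens=256,
                     prefill_tps=297.58, decode_tps=34.25)
print(round(t, 2))  # ≈ 10.92 seconds
```

This also shows why the GPU backend helps most for long prompts: its much higher prefill rate shrinks the first term (and hence time-to-first-token), while decode rates are similar across backends.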