DeepSeek R1 Distill Qwen 1.5B is a lightweight language model optimized for efficient execution on mobile devices and embedded systems. It generates fluent, context-aware text, making it well suited to applications that need responsive, on-device AI without a round trip to a remote server. Primary use cases include embedded conversational assistants, writing-assistance tools, and automated response systems on smartphones and tablets. What sets it apart is its ability to deliver strong performance on mobile hardware by exploiting GPUs and dedicated accelerators, while staying compact enough to simplify deployment.
This model card provides a few variants of deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B that are ready for deployment on Android using the LiteRT (formerly TFLite) stack, the MediaPipe LLM Inference API, and LiteRT-LM.
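As an illustration, here is a minimal Kotlin sketch of running one of these variants through the MediaPipe LLM Inference API on Android. The model path, file name, and `maxTokens` value are placeholder assumptions, not values from this card, and the `tasks-genai` dependency version should be checked against the current release.

```kotlin
// Requires the MediaPipe GenAI dependency in build.gradle(.kts), e.g.:
//   implementation("com.google.mediapipe:tasks-genai:0.10.14")  // check the current version
import android.content.Context
import com.google.mediapipe.tasks.genai.llminference.LlmInference

fun runDeepSeekPrompt(context: Context, prompt: String): String {
    // The path and file name below are placeholders: push the downloaded model
    // bundle to the device first, e.g. with `adb push` to /data/local/tmp/llm/.
    val options = LlmInference.LlmInferenceOptions.builder()
        .setModelPath("/data/local/tmp/llm/deepseek_r1_distill_qwen_1p5b.task")
        .setMaxTokens(1024) // combined prompt + response token budget; assumed value
        .build()

    // Creating the engine loads the model; reuse a single instance across prompts.
    val llm = LlmInference.createFromOptions(context, options)
    return llm.generateResponse(prompt)
}
```

For interactive apps, the API also offers `generateResponseAsync` to stream partial results instead of blocking until the full response is ready.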
Disclaimer: The target deployment surfaces for the LiteRT models are Android, iOS, and Web, and the stack has been optimized for performance on those targets. Trying the models in Colab is an easy way to familiarize yourself with the LiteRT stack, with the caveat that performance (memory and latency) on Colab can be much worse than on a local device.
To try the model on Android:

* Download or build the app from GitHub, or install it from Google Play.
* Follow the instructions in the app.
* To build the demo app from source, follow the instructions in the GitHub repository.
Note that all benchmark stats were measured on a Samsung S24 Ultra with a KV cache size of 1280 and multiple prefill signatures enabled.
| Backend | Quantization | Context Length | Prefill (tokens/sec) | Decode (tokens/sec) | Time-to-first-token (sec) | Model size (MB) | Peak RSS Memory (MB) | GPU Memory (MB) |
|---|---|---|---|---|---|---|---|---|
| CPU | dynamic_int8 | 4096 | 166.50 | 26.35 | 6.41 | 1831.43 | 2221 | N/A |
| GPU | dynamic_int8 | 4096 | 927.54 | 26.98 | 5.46 | 1831.43 | 2096 | 1659 |
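The CPU and GPU rows above reflect which backend the runtime is asked to use. With the MediaPipe LLM Inference API, this can be requested when building the options; the sketch below assumes the `setPreferredBackend` option available in recent `tasks-genai` releases (verify against your version), and reuses the placeholder model path from the earlier example.

```kotlin
import com.google.mediapipe.tasks.genai.llminference.LlmInference

// Ask the runtime to prefer the GPU backend; actual behavior depends on
// device support. setPreferredBackend exists in recent tasks-genai releases,
// so verify it against the version you depend on.
val gpuOptions = LlmInference.LlmInferenceOptions.builder()
    .setModelPath("/data/local/tmp/llm/deepseek_r1_distill_qwen_1p5b.task")
    .setPreferredBackend(LlmInference.Backend.GPU)
    .build()
```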