by LiquidAI
The LFM2.5 1.2B Instruct GGUF model is a lightweight language model designed to run directly on local devices and optimized for edge deployment. It combines an enhanced hybrid architecture with extended training and reinforcement learning, delivering strong performance while remaining resource-efficient. The model handles text comprehension and generation well, responding accurately and consistently to complex instructions. It is particularly well-suited to applications that require low latency and data privacy, such as embedded personal assistants or local analytics tools. Its ability to run on modest hardware sets it apart from cloud-based models while keeping the user experience smooth and responsive.
LFM2.5 is a new family of hybrid models designed for on-device deployment. It builds on the LFM2 architecture with extended pre-training and reinforcement learning.
Find more details in the original model card: https://huggingface.co/LiquidAI/LFM2.5-1.2B-Instruct
Example usage with llama.cpp:

```shell
llama-cli -hf LiquidAI/LFM2.5-1.2B-Instruct-GGUF
```
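Beyond the CLI, the same GGUF weights can be loaded from a script. Below is a minimal sketch using the llama-cpp-python bindings (an assumption — the model card itself only shows llama.cpp); `Llama.from_pretrained` fetches a matching GGUF file from the Hub, and the `filename` glob and `n_ctx` value here are illustrative choices, not values from the card:

```python
# Sketch: chat with the GGUF model via llama-cpp-python (pip install llama-cpp-python).
# Assumptions: the package is installed and the repo contains a Q4_K_M quant file.
from llama_cpp import Llama

# Download a quantized GGUF file from the Hub and load it locally.
llm = Llama.from_pretrained(
    repo_id="LiquidAI/LFM2.5-1.2B-Instruct-GGUF",
    filename="*Q4_K_M*",  # hypothetical quant pattern; pick the file that fits your hardware
    n_ctx=4096,           # illustrative context length
)

# Run a single chat turn using the model's built-in chat template.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what GGUF is in one sentence."}],
)
print(out["choices"][0]["message"]["content"])
```

The first call downloads the weights once and caches them, so subsequent runs stay fully local — in line with the low-latency, privacy-focused use cases described above.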