by MaziyarPanahi
Phi-4-mini-instruct-GGUF is a compact, high-performance language model packaged to run locally on personal machines. It understands and generates text fluently, handling tasks such as writing, question answering, and conversational assistance without relying on the cloud. The GGUF packaging is lightweight and compatible with a wide range of tools, making it well suited for developers and users who want a private, efficient solution. The model balances capability and simplicity, offering a credible alternative to larger models while remaining usable on modest hardware, for uses ranging from task automation to content creation.
MaziyarPanahi/Phi-4-mini-instruct-GGUF contains GGUF format model files for microsoft/Phi-4-mini-instruct.
GGUF is a format introduced by the llama.cpp team on August 21st, 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
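To illustrate what a GGUF file looks like on disk, here is a minimal sketch of its fixed-size header as defined in the llama.cpp project's GGUF specification: a 4-byte `GGUF` magic, a `uint32` version, a `uint64` tensor count, and a `uint64` metadata key/value count, all little-endian. The `parse_gguf_header` helper and the synthetic byte string are illustrative, not part of any library.

```python
import struct

def parse_gguf_header(data: bytes) -> dict:
    # Fixed-size GGUF header per the llama.cpp spec:
    # 4s = magic b"GGUF", I = uint32 version,
    # Q = uint64 tensor count, Q = uint64 metadata KV count (little-endian).
    magic, version, n_tensors, n_kv = struct.unpack("<4sIQQ", data[:24])
    if magic != b"GGUF":
        raise ValueError("not a GGUF file")
    return {"version": version, "tensors": n_tensors, "metadata_kv": n_kv}

# Synthetic header for demonstration only (not a real model file):
sample = struct.pack("<4sIQQ", b"GGUF", 3, 2, 5)
print(parse_gguf_header(sample))  # {'version': 3, 'tensors': 2, 'metadata_kv': 5}
```

After this header, a real GGUF file continues with the metadata key/value pairs (model architecture, tokenizer, quantization details) followed by the tensor data, which is why a single `.gguf` file is self-contained and loadable by any compatible runtime.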
Here is an incomplete list of clients and libraries that are known to support GGUF:

- llama.cpp, the source project for GGUF
- llama-cpp-python, Python bindings for llama.cpp
- LM Studio
- Ollama
- text-generation-webui
- KoboldCpp
- GPT4All
- LocalAI
🙏 Special thanks to Georgi Gerganov and the whole team working on llama.cpp for making all of this possible.