by MaziyarPanahi
The phi-4 GGUF model is Microsoft's phi-4 converted to the GGUF format for efficient local inference. Designed to run on consumer-grade hardware, it delivers fast interactions through optimizations such as quantization and GPU offloading. Its core capabilities include text generation, contextual understanding, and answering complex questions, making it suitable for applications such as conversational assistance, writing, and document analysis. Its lightweight design and compatibility with many open-source tools make it a high-performance, privacy-preserving alternative to cloud-hosted models.
MaziyarPanahi/phi-4-GGUF contains GGUF format model files for microsoft/phi-4.
GGUF is a format introduced by the llama.cpp team on August 21st, 2023, as a replacement for GGML, which llama.cpp no longer supports.
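As a quick illustration (not part of the original card), a GGUF file starts with a small fixed header: the magic bytes `GGUF`, a little-endian `uint32` version, a `uint64` tensor count, and a `uint64` metadata key-value count. A minimal sketch of reading that header in Python:

```python
import struct

GGUF_MAGIC = b"GGUF"

def read_gguf_header(data: bytes) -> dict:
    """Parse the fixed-size GGUF header from the first 24 bytes of a file."""
    if data[:4] != GGUF_MAGIC:
        raise ValueError("not a GGUF file")
    # Little-endian: uint32 version, uint64 tensor count, uint64 metadata KV count.
    version, tensor_count, kv_count = struct.unpack_from("<IQQ", data, 4)
    return {"version": version, "tensor_count": tensor_count, "metadata_kv_count": kv_count}

# Synthetic header for demonstration (version 3, 2 tensors, 5 metadata pairs);
# with a real file you would pass the first 24 bytes of e.g. a phi-4 quant.
demo = GGUF_MAGIC + struct.pack("<IQQ", 3, 2, 5)
print(read_gguf_header(demo))
```

This is only a header sniffer; the full format (metadata values, tensor info, alignment) is defined in the llama.cpp GGUF specification.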
Here is an incomplete list of clients and libraries that are known to support GGUF:

- llama.cpp, the source project for GGUF, offering a CLI and a server
- llama-cpp-python, a Python library with GPU acceleration and OpenAI-compatible API server support
- LM Studio, an easy-to-use local GUI for Windows, macOS, and Linux
- text-generation-webui, a widely used web UI with many features and extensions
- KoboldCpp, a fully featured web UI with GPU acceleration, well suited for storytelling
- GPT4All, a free, open-source local chat GUI
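For example, one of those clients, llama-cpp-python, can load these files directly. A minimal sketch (the quant file name below is illustrative; use whichever quant you downloaded from this repo, and note this requires `pip install llama-cpp-python` plus the model file on disk):

```python
from llama_cpp import Llama

llm = Llama(
    model_path="./phi-4.Q4_K_M.gguf",  # illustrative path; any quant from this repo works
    n_ctx=4096,        # context window size
    n_gpu_layers=-1,   # offload all layers to the GPU if one is available; use 0 for CPU-only
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize GGUF in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```

Smaller quants (e.g. Q4 variants) trade some quality for lower memory use, which is the usual reason to pick one over the full-precision weights.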
🙏 Special thanks to Georgi Gerganov and the whole team working on llama.cpp for making all of this possible.