The GLM-5 NVFP4 model is a quantized version of the GLM-5 language model, an autoregressive transformer optimized for efficient inference. Designed for deployment on NVIDIA GPU-accelerated systems, it delivers improved inference speed and energy efficiency while remaining available for both commercial and non-commercial use. Its core capabilities include text generation, contextual understanding over long sequences (up to 200,000 tokens), and integration into applications such as conversational agents, retrieval-augmented generation (RAG) systems, and chatbots. The model is compatible with inference frameworks such as vLLM and SGLang and is optimized for the NVIDIA Blackwell hardware architecture. Aimed at developers seeking ready-to-use solutions, GLM-5 NVFP4 simplifies deployment while offering flexibility across use cases, from virtual assistants to advanced AI pipelines.
The NVIDIA GLM-5 NVFP4 model is the quantized version of ZAI’s GLM-5 model, an auto-regressive language model that uses an optimized transformer architecture. For more information, please check here. The NVIDIA GLM-5 NVFP4 model is quantized with Model Optimizer.
This model is ready for commercial/non-commercial use.
This model is not owned or developed by NVIDIA. This model has been developed and built to a third-party’s requirements for this application and use case; see link to Non-NVIDIA (GLM-5) Model Card from ZAI.
NVIDIA Model Optimizer: https://github.com/NVIDIA/Model-Optimizer
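As a rough illustration of how Model Optimizer produces a checkpoint like this one, the sketch below applies modelopt's post-training quantization API to a Hugging Face model. This is a hedged example, not the exact recipe used here: the repo id `zai-org/GLM-5`, the `NVFP4_DEFAULT_CFG` config name, and the single-sample calibration loop are assumptions for illustration; consult the Model-Optimizer repository for the current API.

```python
# Hedged sketch (not the exact recipe used for this checkpoint): post-training
# NVFP4 quantization with NVIDIA Model Optimizer's PyTorch API.
import modelopt.torch.quantization as mtq
from transformers import AutoModelForCausalLM, AutoTokenizer

# "zai-org/GLM-5" is an assumed repo id for the base model.
model = AutoModelForCausalLM.from_pretrained("zai-org/GLM-5", torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained("zai-org/GLM-5")

def forward_loop(m):
    # Calibration pass; this card reports cnn_dailymail and
    # Nemotron-Post-Training-Dataset-v2 as the calibration data.
    inputs = tokenizer("A short calibration sample.", return_tensors="pt")
    m(**inputs)

# NVFP4_DEFAULT_CFG (assumed name) quantizes weights and activations to NVFP4.
model = mtq.quantize(model, mtq.NVFP4_DEFAULT_CFG, forward_loop)
```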
Deployment Geography: Global
Use Case: Developers looking to deploy off-the-shelf, pre-quantized models in AI agent systems, chatbots, RAG systems, and other AI-powered applications.
Release Date: Hugging Face, 03/16/2026 via https://huggingface.co/nvidia/GLM-5-NVFP4
Architecture Type: Transformers
Network Architecture: GLM-5
Number of Model Parameters: 744B in total and 40B activated
Input Type(s): Text
Input Format(s): String
Input Parameters: One-Dimensional (1D)
Other Properties Related to Input: Context length up to 200K
Output Type(s): Text
Output Format: String
Output Parameters: 1D (One-Dimensional): Sequences
Other Properties Related to Output: N/A
Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA’s hardware (e.g. GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions.
Supported Runtime Engine(s): vLLM, SGLang
Supported Hardware Microarchitecture Compatibility: NVIDIA Blackwell
Preferred Operating System(s): Linux
The integration of foundation and fine-tuned models into AI systems requires additional testing using use-case-specific data to ensure safe and effective deployment. Following the V-model methodology, iterative testing and validation at both unit and system levels are essential to mitigate risks, meet technical and functional requirements, and ensure compliance with safety and ethical standards before deployment.
The model version is NVFP4 1.0, quantized with nvidia-modelopt v0.42.0.
Calibration Dataset:
** Link: cnn_dailymail, Nemotron-Post-Training-Dataset-v2
** Data Collection Method by dataset: Automated
** Labeling Method by dataset: Automated
** Properties: The cnn_dailymail dataset is an English-language dataset containing just over 300k unique news articles written by journalists at CNN and the Daily Mail.

Training Dataset:
** Data Modality: Undisclosed
** Data Collection Method by dataset: Undisclosed
** Labeling Method by dataset: Undisclosed
** Properties: Undisclosed

Testing Dataset:
** Data Collection Method by dataset: Undisclosed
** Labeling Method by dataset: Undisclosed
** Properties: Undisclosed

Evaluation Dataset:
** Data Collection Method by dataset: Hybrid: Human, Automated
** Labeling Method by dataset: Hybrid: Human, Automated
** Properties: We evaluated the model on benchmarks including GPQA, a dataset of 448 multiple-choice questions written by domain experts in biology, physics, and chemistry.
Acceleration Engine: vLLM, SGLang
Test Hardware: B300
This model was obtained by quantizing the weights and activations of GLM-5 to the NVFP4 data type, ready for inference with vLLM and SGLang. Only the weights and activations of the linear operators within the MoE transformer blocks are quantized.
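To see which operators were quantized, you can inspect the quantization metadata shipped with the checkpoint. A minimal sketch follows, assuming the ModelOpt-exported checkpoint includes an `hf_quant_config.json` file (the file name is an assumption based on typical ModelOpt Hugging Face exports; adjust if the repo differs):

```python
# Hedged sketch: download and print the checkpoint's quantization metadata.
import json
from huggingface_hub import hf_hub_download

# "hf_quant_config.json" is an assumed file name for ModelOpt HF exports.
path = hf_hub_download(repo_id="nvidia/GLM-5-NVFP4", filename="hf_quant_config.json")
with open(path) as f:
    quant_config = json.load(f)

# Expect the quantization algorithm (NVFP4) and the modules excluded from
# quantization (e.g. non-MoE linears) to be described here.
print(json.dumps(quant_config, indent=2))
```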
To serve this checkpoint with vLLM, you can start the Docker container vllm/vllm-openai:latest and run the sample command below:
vllm serve nvidia/GLM-5-NVFP4 --tensor-parallel-size 8 --trust-remote-code --enable-auto-tool-choice --tool-call-parser glm47 --reasoning-parser glm45 --enable-chunked-prefill --max-num-batched-tokens 131072 --gpu-memory-utilization 0.80
To serve this checkpoint with SGLang, you can start the Docker container lmsysorg/sglang:nightly-dev-cu13-20260305-33c92732 and run the sample command below (if the nightly image becomes unavailable, use lmsysorg/sglang:latest):
python3 -m sglang.launch_server --model nvidia/GLM-5-NVFP4 --tensor-parallel-size 8 --quantization modelopt_fp4 --tool-call-parser glm47 --reasoning-parser glm45 --trust-remote-code --chunked-prefill-size 131072 --mem-fraction-static 0.80
If you would like to enable expert parallelism when launching the SGLang endpoint, please build the Docker image with the provided Dockerfile.
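Both serve commands expose an OpenAI-compatible API. Below is a minimal client sketch, assuming vLLM's default port 8000 (SGLang defaults to 30000) and the openai Python package:

```python
# Hedged sketch: query the served checkpoint through its OpenAI-compatible API.
from openai import OpenAI

# Base URL assumes vLLM's default port; use http://localhost:30000/v1 for SGLang.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="nvidia/GLM-5-NVFP4",
    messages=[{"role": "user", "content": "Explain NVFP4 quantization in one sentence."}],
    # Sampling settings match those reported for the accuracy benchmarks below.
    temperature=1.0,
    top_p=0.95,
)
print(response.choices[0].message.content)
```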
The accuracy benchmark results are presented in the table below:
| Precision | MMLU Pro | GPQA Diamond | SciCode | IFBench | HLE |
| --- | --- | --- | --- | --- | --- |
| FP8 | 0.858 | 0.862 | 0.488 | 0.717 | 0.274 |
| NVFP4 | 0.861 | 0.855 | 0.478 | 0.712 | 0.275 |
Baseline: GLM-5-FP8. Benchmarked with temperature=1.0, top_p=0.95, and a maximum of 131072 tokens.
The base model was trained on data that contains toxic language and societal biases originally crawled from the internet. The model may therefore amplify those biases and return toxic responses, especially when prompted with toxic prompts. It may also generate answers that are inaccurate, omit key information, or include irrelevant or redundant text, producing socially unacceptable or undesirable content even if the prompt itself contains nothing explicitly offensive.
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.
Please report model quality, risk, security vulnerabilities or NVIDIA AI Concerns here.