
Llama 3.1 8B Instruct FP8

by nvidia

Open source · 563k downloads · 34 likes

1.9 (34 reviews) · Chat · API & Local
About

The Llama 3.1 8B Instruct FP8 model is an optimized, quantized version of the Llama 3.1 8B Instruct model, designed for efficient inference. It is an autoregressive language model built on a transformer architecture, capable of fluent, context-aware text understanding and generation. FP8 quantization roughly halves disk and GPU memory requirements while improving throughput, with an approximate 1.3x speedup on NVIDIA H100 GPUs. Its key capabilities include text generation, language comprehension, logical reasoning, and answering complex questions, while maintaining high accuracy across diverse tasks. The model is well suited to commercial and non-commercial deployments that require energy-efficient, optimized inference. It is compatible with frameworks such as TensorRT-LLM and vLLM and supports long sequences up to 128,000 tokens, making it a good fit for applications such as chatbots, document analysis, and automated assistance.
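As a rough check of the memory claim, here is a back-of-the-envelope weights-only calculation (a sketch; real footprints also include the KV cache and activation overhead):

Python
# Approximate weight memory for an 8B-parameter model.
params = 8e9
bf16_gb = params * 2 / 1e9  # 2 bytes per parameter in BF16
fp8_gb = params * 1 / 1e9   # 1 byte per parameter in FP8
print(f"BF16 weights: ~{bf16_gb:.0f} GB, FP8 weights: ~{fp8_gb:.0f} GB")
# -> BF16 weights: ~16 GB, FP8 weights: ~8 GB, i.e. roughly a 50% reduction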

Documentation

Model Overview

Description:

The NVIDIA Llama 3.1 8B Instruct FP8 model is a quantized version of Meta's Llama 3.1 8B Instruct model, an auto-regressive language model that uses an optimized transformer architecture. For more information, please check here. The NVIDIA Llama 3.1 8B Instruct FP8 model is quantized with TensorRT Model Optimizer.

This model is ready for commercial and non-commercial use.

Third-Party Community Consideration

This model is not owned or developed by NVIDIA. This model has been developed and built to a third-party’s requirements for this application and use case; see link to Non-NVIDIA (Meta-Llama-3.1-8B-Instruct) Model Card.

License/Terms of Use:

  • nvidia-open-model-license
  • llama3.1

Model Architecture:

Architecture Type: Transformers
Network Architecture: Llama3.1

Input:

Input Type(s): Text
Input Format(s): String
Input Parameters: Sequences
Other Properties Related to Input: Context length up to 128K

Output:

Output Type(s): Text
Output Format: String
Output Parameters: Sequences
Other Properties Related to Output: N/A

Software Integration:

Supported Runtime Engine(s):

  • TensorRT-LLM
  • vLLM

Supported Hardware Microarchitecture Compatibility:

  • NVIDIA Blackwell
  • NVIDIA Hopper
  • NVIDIA Lovelace

Preferred Operating System(s):

  • Linux

Model Version(s):

The model is quantized with nvidia-modelopt v0.27.0

Datasets:

  • Calibration Dataset: cnn_dailymail
  • Evaluation Dataset: MMLU

Inference:

Engine: TensorRT-LLM or vLLM
Test Hardware: H100

Post Training Quantization

This model was obtained by quantizing the weights and activations of Meta-Llama-3.1-8B-Instruct to the FP8 data type, ready for inference with TensorRT-LLM and vLLM. Only the weights and activations of the linear operators within the transformer blocks are quantized. This optimization reduces the number of bits per parameter from 16 to 8, reducing disk size and GPU memory requirements by approximately 50%. On H100, this yields a ~1.3x speedup.
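For reference, FP8 post-training quantization with TensorRT Model Optimizer generally follows the pattern below (a minimal sketch, not NVIDIA's exact recipe for this checkpoint; calibration_texts is an assumed iterable of sample strings):

Python
import modelopt.torch.quantization as mtq
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the original BF16 model (illustrative starting point).
model_id = "meta-llama/Llama-3.1-8B-Instruct"
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="cuda")
tokenizer = AutoTokenizer.from_pretrained(model_id)

def forward_loop(model):
    # Run calibration batches (the card cites cnn_dailymail) so that
    # activation ranges are observed before the FP8 scales are fixed.
    for text in calibration_texts:  # assumed: a list of sample strings
        inputs = tokenizer(text, return_tensors="pt").to(model.device)
        model(**inputs)

# Quantize the weights and activations of the linear operators to FP8.
model = mtq.quantize(model, mtq.FP8_DEFAULT_CFG, forward_loop)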

Usage

To deploy the quantized checkpoint with TensorRT-LLM, follow the sample commands below from within the TensorRT-LLM GitHub repo:

  • Checkpoint conversion:
Sh
python examples/llama/convert_checkpoint.py --model_dir Llama-3.1-8B-Instruct-FP8 --output_dir /ckpt --use_fp8
  • Build engines:
Sh
trtllm-build --checkpoint_dir /ckpt --output_dir /engine
  • Throughput evaluation:

Please refer to the TensorRT-LLM benchmarking documentation for details.
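Once the engine is built, a quick smoke test can be run with the example runner from the TensorRT-LLM repo (a sketch; flag names follow the repo's examples/run.py and may vary between releases):

Sh
python examples/run.py --engine_dir /engine --tokenizer_dir Llama-3.1-8B-Instruct-FP8 --max_output_len 64 --input_text "What is FP8 quantization?"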

Evaluation

Precision | MMLU | GSM8K (CoT) | ARC Challenge | IFEVAL | TPS
BF16      | 69.4 | 84.5        | 83.4          | 80.4   |  8,579.93
FP8       | 68.7 | 83.1        | 83.3          | 81.8   | 11,062.90

We benchmarked with tensorrt-llm v0.13 on 8 H100 GPUs, using batch size 1024 with in-flight batching enabled for the throughput (TPS) measurements. FP8 achieved a ~1.3x speedup over BF16.
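As a quick sanity check, the quoted speedup follows directly from the TPS column of the table above:

Python
# Throughput ratio, FP8 vs. BF16, from the evaluation table.
print(f"{11062.90 / 8579.93:.2f}x")  # -> 1.29x, i.e. the ~1.3x speedup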

Deploy with vLLM

To deploy the quantized checkpoint with vLLM, follow the instructions below:

  1. Install vLLM following the directions here.
  2. To use a Model Optimizer PTQ checkpoint with vLLM, pass the quantization="modelopt" flag when initializing the LLM engine, as in the example below.

Example deployment on H100:

Python
from vllm import LLM, SamplingParams

model_id = "nvidia/Llama-3.1-8B-Instruct-FP8"
sampling_params = SamplingParams(temperature=0.8, top_p=0.9)

prompts = [
    "Hello, my name is",
    "The president of the United States is",
    "The capital of France is",
    "The future of AI is",
]

# quantization="modelopt" tells vLLM to load this Model Optimizer FP8 checkpoint.
llm = LLM(model=model_id, quantization="modelopt")
outputs = llm.generate(prompts, sampling_params)

# Print the outputs.
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")

This model can also be deployed behind an OpenAI-compatible server via the vLLM backend. Instructions here.
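As a minimal sketch of that setup (assuming a recent vLLM with the vllm serve entry point; the port and prompt are illustrative):

Python
# Server side (shell): vllm serve nvidia/Llama-3.1-8B-Instruct-FP8 --quantization modelopt
# Client side: query the OpenAI-compatible endpoint with the openai package.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="nvidia/Llama-3.1-8B-Instruct-FP8",
    messages=[{"role": "user", "content": "Summarize FP8 quantization in one sentence."}],
)
print(response.choices[0].message.content)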

Capabilities & Tags

transformers · safetensors · llama · text-generation · conversational · text-generation-inference · endpoints_compatible
Specifications

  • Category: Chat
  • Access: API & Local
  • License: Open Source
  • Pricing: Open Source
  • Parameters: 8B
  • Rating: 1.9
