

Apriel 5B Instruct

by ServiceNow-AI

Open source · 119k downloads · 53 likes

Rating: 2.2 (53 reviews) · Chat · API & Local

About

Apriel 5B Instruct is a versatile, high-performance AI model designed to excel across a wide range of tasks, thanks to an architecture optimized for efficiency and speed. It stands out for its ability to follow complex instructions, reason in a structured manner, and generate responses that are safe and aligned with user expectations, all while maintaining low latency. Primary use cases include programming assistance, content generation and summarization, logical problem-solving, and creative writing, with a strong emphasis on safety and ethics in interactions. Unlike more specialized models, it strikes a balance between generality and precision, making it suitable for both developers and non-technical users. Thorough training and alignment mechanisms make it reliable for natural dialogue and multi-step tasks while keeping it accessible for large-scale deployment.

Documentation

Apriel-5B

/ˈɑː.pri.əl/

Table of Contents

  1. Model Summary
  2. Evaluation
  3. Intended Use
  4. Limitations
  5. Security and Responsible Use
  6. License
  7. Citation

Model Summary

Apriel is a family of models built for versatility, offering high throughput and efficiency across a wide range of tasks.

Apriel-5B-Base

Apriel-5B-base is a decoder-only transformer trained on 4.5T+ tokens of data. It is the first release in the Apriel model family, designed to support research on foundation models. Apriel-5B-base achieves strong performance across common benchmarks for models under 5B parameters.

Apriel-5B-Instruct

Apriel-5B-Instruct is built on top of Apriel-5B-base using continual pretraining (CPT), supervised finetuning (SFT), and post-training alignment with DPO and RLVR.

Both CPT and SFT stages involved training multiple domain-biased variants with overlapping datasets (e.g., instruction, code, math). These were then merged to form a more general-purpose model before alignment. The final model is aligned for instruction following, reasoning, and safety-aware dialogue.
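The card does not specify the exact merging recipe for the domain-biased variants. As a purely illustrative sketch, one common baseline is uniform parameter averaging (a "model soup"); here plain Python dicts of floats stand in for checkpoint state dicts, whereas a real merge would average torch tensors of matching shapes.

```python
def merge_checkpoints(state_dicts):
    """Uniformly average parameters across checkpoint state dicts.

    Toy stand-in: each state dict maps parameter names to lists of floats.
    A real merge would operate on torch tensors with matching shapes.
    """
    n = len(state_dicts)
    merged = {}
    for name in state_dicts[0]:
        # Element-wise mean of this parameter across all variants.
        merged[name] = [
            sum(sd[name][i] for sd in state_dicts) / n
            for i in range(len(state_dicts[0][name]))
        ]
    return merged

# Three hypothetical domain-biased "variants" of a single 3-weight layer.
instruct_variant = {"w": [1.0, 2.0, 3.0]}
code_variant = {"w": [3.0, 2.0, 1.0]}
math_variant = {"w": [2.0, 2.0, 2.0]}
print(merge_checkpoints([instruct_variant, code_variant, math_variant]))
# → {'w': [2.0, 2.0, 2.0]}
```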

[Figure: average downstream benchmark score (y-axis) vs. throughput (x-axis) for Apriel-5B and comparable models. Throughput was measured using vLLM with batch size 8, 256 input tokens, and 32 output tokens.]

How to Use

Bash
pip install transformers

Running the Base model

Python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "ServiceNow-AI/Apriel-5B-Base"
device = "cuda"  # or "cpu"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, torch_dtype=torch.bfloat16).to(device)

inputs = tokenizer.encode("Snow is", return_tensors="pt").to(device)
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
Python
>>> print(f"Memory footprint: {model.get_memory_footprint() / 1e6:.2f} MB")
Memory footprint: 9664.14 MB

Running the Instruct model

Python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "ServiceNow-AI/Apriel-5B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
device = "cuda" if torch.cuda.is_available() else "cpu"

model = AutoModelForCausalLM.from_pretrained(
    checkpoint, 
    torch_dtype=torch.bfloat16 if device == "cuda" else torch.float32
).to(device)

messages = [
    {"role": "system", "content": "You are a helpful AI assistant that provides accurate and concise information."},
    {"role": "user", "content": "Tell me about artificial intelligence"}
]

input_text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(input_text, return_tensors="pt").to(device)

generation_params = {
    "max_new_tokens": 512,
    "temperature": 0.2,
    "top_p": 0.9,
    "do_sample": True
}

outputs = model.generate(**inputs, **generation_params)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)

Chat Template

Text
<|system|>
System message here (optional)
<|end|>
<|user|>
User message here
<|end|>
<|assistant|>
Assistant response here
<|end|>

If no system message is provided, the model inserts a blank system prompt to maintain format structure. The model supports structured interaction patterns, including tool calling and reasoning steps for more advanced workflows.
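For illustration, the template above can be reproduced by hand. This is a hedged sketch of the documented format only; in practice, `tokenizer.apply_chat_template` should be used so the tokenizer's own special tokens and any template updates are applied.

```python
def apply_apriel_template(messages):
    """Render a message list into the Apriel chat format shown above.

    Inserts a blank system prompt when none is provided, mirroring the
    documented behavior, and appends the assistant header so the model
    continues from there.
    """
    if not messages or messages[0]["role"] != "system":
        messages = [{"role": "system", "content": ""}] + list(messages)
    parts = [f"<|{m['role']}|>\n{m['content']}\n<|end|>" for m in messages]
    parts.append("<|assistant|>\n")
    return "\n".join(parts)

prompt = apply_apriel_template([{"role": "user", "content": "Hi"}])
print(prompt)
# <|system|>
#
# <|end|>
# <|user|>
# Hi
# <|end|>
# <|assistant|>
```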

Evaluation

Evaluations were conducted using lm-eval-harness and evalchemy.

Apriel-5B-Base

| Task Name | Apriel-5B-Base | OLMo-2-1124-7B | Llama-3.1-8B | Mistral-Nemo-Base-2407 |
|---|---|---|---|---|
| Average | 58.7 | 58.71 | 61.72 | 66.01 |
| ARC Challenge | 56.7 | 62.7 | 58.2 | 62.9 |
| ARC Easy | 82.4 | 86.0 | 85.7 | 86.7 |
| MMMLU | 44.5 | 35.3 | 47.4 | 54.7 |
| Global MMLU | 57.4 | 52.4 | 61.1 | 68.4 |
| GSM8k | 64.2 | 63.2 | 54.8 | 58.5 |
| HellaSwag | 74.4 | 80.5 | 78.8 | 82.7 |
| MUSR | 39.1 | 39.6 | 38.0 | 39.9 |
| MBPP | 27.6 | 22.4 | 46.0 | 54.6 |
| MMLU | 61.3 | 63.9 | 66.0 | 69.6 |
| PIQA | 78.9 | 81.1 | 81.2 | 82.1 |

Apriel-5B-Instruct

| Task Name | Apriel-5B-Instruct | OLMo-2-1124-7B-Instruct | Llama-3.1-8B-Instruct | Mistral-Nemo-Instruct-2407 |
|---|---|---|---|---|
| Average | 49.64 | 43.91 | 52.60 | 48.63 |
| ARC Challenge | 59.04 | 61.45 | 64.25 | 66.38 |
| GSM8k | 80.36 | 79.68 | 82.63 | 77.63 |
| Hellaswag | 74.52 | 80.21 | 78.43 | 81.71 |
| BBH | 39.82 | 39.95 | 50.86 | 50.06 |
| GPQA | 28.36 | 27.85 | 29.19 | 29.45 |
| IF Eval | 80.78 | 72.64 | 79.67 | 62.85 |
| MMLU Pro | 29.19 | 26.57 | 37.74 | 35.09 |
| MUSR | 36.77 | 34.39 | 38.36 | 39.02 |
| MBPP | 45.80 | 28.00 | 59.00 | 57.60 |
| TruthfulQA | 56.09 | 56.46 | 55.05 | 57.69 |
| Winogrande | 62.35 | 65.35 | 67.01 | 70.01 |
| Minerva Math | 39.80 | 9.96 | 36.72 | 21.46 |
| MATH500 | 53.00 | 31.4 | 45.80 | 34.40 |
| AMC23 | 29.00 | 16.4 | 21.00 | 11.50 |
| MixEval Hard | 29.70 | 28.40 | 43.30 | 34.60 |
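Each Average row is the unweighted mean of the per-task scores in its column, which can be sanity-checked directly; for example, for the Apriel-5B-Instruct column:

```python
# Per-task Apriel-5B-Instruct scores from the table above.
apriel_instruct_scores = {
    "ARC Challenge": 59.04, "GSM8k": 80.36, "Hellaswag": 74.52,
    "BBH": 39.82, "GPQA": 28.36, "IF Eval": 80.78, "MMLU Pro": 29.19,
    "MUSR": 36.77, "MBPP": 45.80, "TruthfulQA": 56.09,
    "Winogrande": 62.35, "Minerva Math": 39.80, "MATH500": 53.00,
    "AMC23": 29.00, "MixEval Hard": 29.70,
}

average = sum(apriel_instruct_scores.values()) / len(apriel_instruct_scores)
print(f"{average:.2f}")  # → 49.64, matching the Average row
```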

Intended Use

The Apriel family of models is designed for a variety of general-purpose instruction tasks, including:

  • Question answering and information retrieval
  • Content generation and summarization
  • Code assistance and generation
  • Logical reasoning and multi-step tasks
  • Creative writing and ideation

They are not intended for use in safety-critical applications without human oversight or in scenarios requiring guaranteed factual accuracy.

Limitations

  • Factual accuracy: May produce incorrect, misleading, or outdated content. Outputs should be verified before use in critical contexts.
  • Bias: May reflect societal, cultural, or systemic biases present in training data.
  • Ethics: Do not use the model to produce harmful, unlawful, or unethical content.
  • Language: Strongest performance is in English. Output quality may degrade in underrepresented languages.
  • Critical use: Not suitable for medical, legal, financial, or other high-risk applications without safeguards.

Security and Responsible Use

Security Responsibilities:
Deployers and users are strongly encouraged to align their security practices with established frameworks and regulatory guidelines such as the EU AI Act and the NIST AI Risk Management Framework (RMF).

Guidelines for Deployers:

  • Regularly conduct robustness assessments to identify and mitigate adversarial inputs.
  • Implement validation and filtering processes to prevent harmful or biased outputs.
  • Continuously perform data privacy checks to guard against unintended data leaks.
  • Document and communicate the model's limitations, intended usage, and known security risks to all end-users.
  • Schedule periodic security reviews and updates to address emerging threats and vulnerabilities.
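As one deliberately minimal illustration of the output validation and filtering guideline, a deployer might post-process generations against a deployment-specific blocklist before returning them. The blocklist terms below are hypothetical; production systems should layer trained safety classifiers and PII detectors on top of (or instead of) simple keyword matching.

```python
# Hypothetical deployment-specific terms; not part of the model card.
BLOCKLIST = {"examplesecret", "internal-api-key"}

def filter_output(text: str) -> str:
    """Withhold a model generation if it matches a blocklisted term.

    A toy sketch of output filtering; real deployments would use
    dedicated safety classifiers rather than keyword matching.
    """
    lowered = text.lower()
    for term in BLOCKLIST:
        if term in lowered:
            return "[output withheld: matched deployment safety filter]"
    return text

print(filter_output("The weather is sunny."))          # passes through
print(filter_output("Here is the internal-api-key."))  # withheld
```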

Guidelines for Users:

  • Follow established security policies and usage guidelines provided by deployers.
  • Protect and manage sensitive information when interacting with the model.
  • Report anomalies, suspicious behavior, or unsafe outputs to deployers or developers.
  • Maintain human oversight and apply judgment to mitigate potential security or ethical risks during interactions.

Disclaimer:
Users accept responsibility for securely deploying, managing, and using this open-source LLM. The model is provided "as-is," without explicit or implied warranty regarding security or fitness for any specific application or environment.

Pretraining

Model

  • Architecture: Transformer decoder with grouped-query attention and YARN rotary embeddings
  • Tokens: 4.5T
  • Precision: bfloat16
  • Knowledge cutoff: April 2024

Hardware

  • Compute: 480 × H100 GPUs
  • GPU-hours: ~91,000 H100-hours
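Assuming all 480 GPUs ran concurrently, the GPU-hours figure implies roughly eight days of wall-clock training time:

```python
# Back-of-the-envelope check of the hardware figures above.
gpu_hours = 91_000
num_gpus = 480

wall_clock_hours = gpu_hours / num_gpus
print(f"{wall_clock_hours:.0f} hours ≈ {wall_clock_hours / 24:.1f} days")
# → 190 hours ≈ 7.9 days
```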

Software

  • Training stack: Fast-LLM

License

MIT

Citation

Bibtex
@misc{Apriel-small-language-models,
    author = {Slam labs team},
    title = {{Apriel - a Family of performant small language models}},
    howpublished = {https://huggingface.co/ServiceNow-AI/Apriel-5B-Instruct},
    publisher = {SLAM - ServiceNow Language Models Lab},
    year = {2025}
}
Capabilities & Tags

transformers · safetensors · apriel · text-generation · conversational · custom_code · en

Specifications

  • Category: Chat
  • Access: API & Local
  • License: Open Source
  • Pricing: Open Source
  • Parameters: 5B
  • Rating: 2.2
