AI Explorer

Find and compare the best artificial intelligence tools for your projects.


© 2026 AI Explorer · All rights reserved.


Granite 4.0 Micro Base

by ibm-granite

Open source · 163k downloads · 40 likes

2.0 (40 reviews) · Chat · API & Local
About

Granite-4.0-Micro-Base is a generative language model designed to handle a wide variety of text-to-text generation tasks, with particular attention to long contexts. Developed by IBM, it excels notably at code completion thanks to its Fill-in-the-Middle (FIM) capability, which allows it to insert code between existing segments. Trained on a vast multilingual corpus of 15 trillion tokens, it supports 12 languages and can be fine-tuned for others, offering great flexibility of use. Its main use cases include text summarization, classification, information extraction, question-answering systems, and long-form content generation. What sets it apart is its robustness across varied benchmarks and its ability to serve as a base for building specialized models, while maintaining a balanced trade-off between efficiency and versatility.

Documentation


Granite-4.0-Micro-Base

Model Summary: Granite-4.0-Micro-Base is a decoder-only, long-context language model designed for a wide range of text-to-text generation tasks. It also supports Fill-in-the-Middle (FIM) code completion through the use of specialized prefix and suffix tokens. The model is trained from scratch on approximately 15 trillion tokens following a four-stage training strategy: 10 trillion tokens in the first stage, 2 trillion in the second, another 2 trillion in the third, and 0.5 trillion in the final stage.
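The summary above mentions FIM code completion via specialized prefix and suffix tokens. A minimal sketch of how such a prompt is typically assembled follows; the sentinel token names here are illustrative assumptions, not tokens confirmed by this card, and the actual strings should be read from the model's tokenizer vocabulary.

```python
# NOTE: these sentinel names are assumptions for illustration; the real
# FIM tokens must be looked up in the model's tokenizer vocabulary.
FIM_PREFIX = "<fim_prefix>"
FIM_SUFFIX = "<fim_suffix>"
FIM_MIDDLE = "<fim_middle>"

def build_fim_prompt(before: str, after: str) -> str:
    """Arrange the code before and after the gap so the model
    generates the missing middle after the FIM_MIDDLE sentinel."""
    return f"{FIM_PREFIX}{before}{FIM_SUFFIX}{after}{FIM_MIDDLE}"

prompt = build_fim_prompt(
    before="def add(a, b):\n    ",
    after="\n    return result\n",
)
print(prompt)
```

The text the model generates after the middle sentinel is the completion to splice between `before` and `after`.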

  • Developers: Granite Team, IBM
  • HF Collection: Granite 4.0 Language Models HF Collection
  • GitHub Repository: ibm-granite/granite-4.0-language-models
  • Website: Granite Docs
  • Release Date: October 2nd, 2025
  • License: Apache 2.0

Supported Languages: English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, and Chinese. Users may finetune Granite 4.0 models for languages beyond these twelve.

Intended Use: Prominent use cases of LLMs in text-to-text generation include summarization, text classification, extraction, question answering, code completion (including FIM), and long-context generation tasks. All Granite Base models can handle these tasks, as they were trained on a large amount of data from various domains. Moreover, they can serve as a baseline for creating specialized models for specific application scenarios.

Generation: This is a simple example of how to use the Granite-4.0-Micro-Base model.

Install the following libraries:

Shell
pip install torch torchvision torchaudio
pip install accelerate
pip install transformers

Then, copy the code snippet below to run the example.

Python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# fall back to CPU when no GPU is available
device = "cuda" if torch.cuda.is_available() else "cpu"

model_path = "ibm-granite/granite-4.0-micro-base"

tokenizer = AutoTokenizer.from_pretrained(model_path)
# drop device_map if running on CPU
model = AutoModelForCausalLM.from_pretrained(model_path, device_map=device)
model.eval()
# change input text as desired
input_text = "The capital of France is"
# tokenize the text
input_tokens = tokenizer(input_text, return_tensors="pt").to(device)
# generate output tokens
output = model.generate(**input_tokens, max_length=10)
# decode output tokens into text
output = tokenizer.batch_decode(output)
# print output
print(output[0])

Expected output:

Shell
The capital of France is Paris.

Evaluation Results:

Benchmarks | Metric | Micro Dense | H Micro Dense | H Tiny MoE | H Small MoE
General Tasks
MMLU | 5-shot | 66.47 | 67.43 | 68.90 | 75.85
MMLU-Pro | 5-shot, CoT | 37.16 | 34.03 | 35.47 | 48.94
BBH | 3-shot, CoT | 63.84 | 57.65 | 59.67 | 75.84
AGI EVAL | 3-shot | 54.32 | 54.59 | 53.69 | 62.05
DROP | 5-shot | 66.04 | 67.44 | 64.92 | 74.69
Math Tasks
GSM8K | 8-shot | 72.93 | 63.76 | 72.55 | 82.11
Minerva Math | 4-shot | 38 | 39.7 | 40.34 | 46.28
Code Tasks
HumanEval [StarCoder Prompt] | pass@1 | 76.19 | 73.72 | 77.59 | 83.66
HumanEval | pass@1 | 59.76 | 70.73 | 71.34 | 76.22
HumanEval+ | pass@1 | 54.27 | 67.07 | 64.02 | 69.51
MBPP | pass@1 | 81.48 | 74.87 | 81.48 | 83.07
MBPP+ | pass@1 | 68.25 | 63.23 | 68.78 | 70.37
Multilingual Tasks
MMMLU | 5-shot | 56.59 | 58.5 | 62.77 | 71.18
INCLUDE | 5-shot | 51.77 | 52.16 | 53.78 | 66.04
MGSM | 8-shot | 58.48 | 47.04 | 54.64 | 65.2
Multilingual benchmarks and the included languages:

Benchmarks | # Langs | Languages
MMMLU | 11 | ar, de, en, es, fr, ja, ko, pt, zh, bn, hi
INCLUDE | 14 | hi, bn, ta, te, ar, de, es, fr, it, ja, ko, nl, pt, zh
MGSM | 5 | en, es, fr, ja, zh
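The code rows above report pass@1. For reference, pass@k is commonly computed with the unbiased estimator introduced alongside HumanEval: given n generations per problem of which c pass the tests, it estimates the probability that at least one of k drawn samples is correct. A minimal sketch:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: probability that at least one of k
    samples drawn from n generations (c of them correct) passes."""
    if n - c < k:
        # too few failures to fill k draws without a correct sample
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# with 10 generations and 6 correct, pass@1 reduces to c/n = 0.6
print(pass_at_k(10, 6, 1))
```

For k = 1 the estimator reduces to the fraction of correct generations, which is what the table's pass@1 columns report.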

Model Architecture:

Granite-4.0-Micro-Base is based on a decoder-only dense transformer architecture. Core components of this architecture are: GQA, RoPE, MLP with SwiGLU, RMSNorm, and shared input/output embeddings.
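To make the position-embedding component concrete, here is a minimal pure-Python sketch of rotary position embeddings (RoPE) applied to one head vector; the base frequency of 10000 is the conventional default and an assumption here, not a value stated in this card.

```python
import math

def rope(x: list[float], position: int, base: float = 10000.0) -> list[float]:
    """Rotate consecutive pairs (x[2i], x[2i+1]) by position * base**(-2i/d),
    encoding the token's position directly in the query/key vector."""
    d = len(x)
    out = []
    for i in range(0, d, 2):
        theta = position * base ** (-i / d)
        cos_t, sin_t = math.cos(theta), math.sin(theta)
        x1, x2 = x[i], x[i + 1]
        out += [x1 * cos_t - x2 * sin_t, x1 * sin_t + x2 * cos_t]
    return out

# position 0 applies a zero rotation and leaves the vector unchanged
print(rope([1.0, 0.0, 1.0, 0.0], position=0))
```

Because each pair undergoes a pure rotation, vector norms are preserved and the dot product between two rotated vectors depends only on the relative distance between their positions.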

Model | Micro Dense | H Micro Dense | H Tiny MoE | H Small MoE
Embedding size | 2560 | 2048 | 1536 | 4096
Number of layers | 40 attention | 4 attention / 36 Mamba2 | 4 attention / 36 Mamba2 | 4 attention / 36 Mamba2
Attention head size | 64 | 64 | 128 | 128
Number of attention heads | 40 | 32 | 12 | 32
Number of KV heads | 8 | 8 | 4 | 8
Mamba2 state size | - | 128 | 128 | 128
Number of Mamba2 heads | - | 64 | 48 | 128
MLP / Shared expert hidden size | 8192 | 8192 | 1024 | 1536
Num. experts | - | - | 64 | 72
Num. active experts | - | - | 6 | 10
Expert hidden size | - | - | 512 | 768
MLP activation | SwiGLU | SwiGLU | SwiGLU | SwiGLU
Sequence length | 128K | 128K | 128K | 128K
Position embedding | RoPE | NoPE | NoPE | NoPE
# Parameters | 3B | 3B | 7B | 32B
# Active parameters | 3B | 3B | 1B | 9B

Training Data: This model is trained on a mix of open source and proprietary data following a four-stage training strategy.

Stage | Characteristics | Micro Dense | H Micro Dense | H Tiny MoE | H Small MoE
I | General mixture of training data; warmup and power scheduler for learning rate. | 10 | 10 | 15 | 15
II | General mixture of training data with higher percentages of code and math; power scheduler for learning rate. | 2 | 5 | 5 | 5
III | High-quality training data; exponential decay of learning rate. | 2 | 2 | 2 | 2
IV | High-quality training data; linear decay to zero for learning rate. | 0.5 | 0.5 | 0.5 | 0.5

Token counts are in trillions.
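The learning-rate schedules named in the stages above can be sketched as simple functions of the step count; the constants and exponents here are illustrative assumptions, since the card does not give the actual hyperparameters.

```python
def power_lr(step: int, base_lr: float = 3e-4, alpha: float = -0.5) -> float:
    """Power scheduler (stages I-II, sketch): lr follows a power law in step."""
    return base_lr * (step + 1) ** alpha

def exp_decay_lr(step: int, start_lr: float, end_lr: float, total: int) -> float:
    """Exponential decay (stage III): start_lr -> end_lr over `total` steps."""
    return start_lr * (end_lr / start_lr) ** (step / total)

def linear_to_zero_lr(step: int, start_lr: float, total: int) -> float:
    """Linear decay to zero (stage IV)."""
    return start_lr * (1.0 - step / total)

# by the last step of stage IV the learning rate reaches exactly zero
print(linear_to_zero_lr(step=1000, start_lr=1e-4, total=1000))
```

A power-law schedule decays slowly enough to keep training stable across very long token budgets, while the final linear-to-zero phase anneals the model on the highest-quality data.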

Infrastructure: We trained the Granite 4.0 language models on an NVIDIA GB200 NVL72 cluster hosted by CoreWeave. Intra-rack communication occurs via the 72-GPU NVLink domain, and a non-blocking, full Fat-Tree NDR 400 Gb/s InfiniBand network provides inter-rack communication. This cluster provides a scalable and efficient infrastructure for training our models over thousands of GPUs.

Ethical Considerations and Limitations: The use of Large Language Models involves risks and ethical considerations that people must be aware of, including but not limited to: bias and fairness, misinformation, and autonomous decision-making. The Granite-4.0-Micro-Base model is no exception in this regard. Even though this model is suited for multiple generative AI tasks, it has not undergone any safety alignment, and therefore it may produce problematic outputs. Additionally, it remains uncertain whether smaller models might exhibit increased susceptibility to hallucination in generation scenarios by copying text verbatim from the training dataset, due to their reduced sizes and memorization capacities. This aspect is currently an active area of research, and we anticipate more rigorous exploration, comprehension, and mitigations in this domain. Regarding ethics, a latent risk associated with all Large Language Models is their malicious utilization. We urge the community to use the Granite-4.0-Micro-Base model with ethical intentions and in a responsible way.

Resources

  • ⭐️ Learn about the latest updates with Granite: https://www.ibm.com/granite
  • 📄 Get started with tutorials, best practices, and prompt engineering advice: https://www.ibm.com/granite/docs/
  • 💡 Learn about the latest Granite learning resources: https://github.com/ibm-granite-community/