
NVIDIA Nemotron 3 Nano 30B A3B Base BF16

by nvidia

Open source · 114k downloads · 122 likes

2.6 (122 reviews) · Chat · API & Local
About

NVIDIA Nemotron 3 Nano 30B A3B Base BF16 is a base large language model (LLM) developed by NVIDIA, designed to serve as a solid starting point for instruction fine-tuning. Trained to predict the next token, it is strong at understanding and generating text in multiple languages, covering a wide range of use cases such as natural language processing, machine translation, and content creation. The model stands out for its open approach, with accessible weights, training data, and recipes that encourage innovation and commercial adaptation. It is particularly well suited to developers and researchers who want to customize a capable model for specific applications. Thanks to its open license and optimized performance, it is a flexible and efficient option for bringing advanced AI capabilities into a variety of projects.

Documentation

NVIDIA-Nemotron-3-Nano-30B-A3B-Base-BF16

Chat · Pre-Training Datasets · Homepage · License

Model Overview

Model Developer: NVIDIA Corporation

Model Dates:

September 2025 - December 2025

Data Freshness:

The pre-training data has a cutoff date of June 25, 2025.

Description

NVIDIA-Nemotron-3-Nano-30B-A3B-Base-BF16 is a base large language model (LLM) trained from scratch by NVIDIA, with the next token prediction loss. It provides a good starting point for instruction fine-tuning.

This model is ready for commercial use.

What is Nemotron?

NVIDIA Nemotron™ is a family of open models with open weights, training data, and recipes, delivering leading efficiency and accuracy for building specialized AI agents.

License/Terms of Use

GOVERNING TERMS: Use of this model is governed by the NVIDIA Nemotron Open Model License Agreement.

Base Benchmark Evaluations

We evaluated our model on the following benchmarks:

Task | Qwen3 30B-A3B-Base | NVIDIA-Nemotron-3-Nano-30B-A3B-Base-BF16
General Knowledge
MMLU (5-shot, acc) | 81.07 | 78.56
MMLU-Pro (5-shot, CoT EM) | 61.71 | 65.05
AGIEval-En (3/5-shot, CoT acc) | 63.12 | 68.32
Code
HumanEval (0-shot) | 70.73 | 78.05
MBPP-Sanitized (3-shot) | 73.15 | 75.49
Math
GSM8K (8-shot, acc) | 89.01 | 92.34
MATH (4-shot, acc) | 61.14 | 82.88
MATH-500 (4-shot, avg@32) | 55.08 | 78.63
Commonsense Understanding
ARC-Challenge (25-shot, acc_norm) | 94.45 | 91.89
HellaSwag (10-shot, acc_norm) | 83.14 | 85.56
OpenBookQA (0-shot, acc_norm) | 44.80 | 46.20
PIQA (0-shot, acc_norm) | 81.01 | 84.33
WinoGrande (5-shot, acc) | 78.22 | 79.64
Reading Comprehension
RACE (0-shot, acc) | 90.05 | 88.04
Multilingual
MMLU Global Lite (5-shot, avg acc) | 76.84 | 74.47
MGSM (8-shot, avg acc) | 82.53 | 83.00
Long Context
RULER (64K, 0-shot, acc) | 63.55 | 87.50
RULER (128K, 0-shot, acc) | 60.69 | 82.92
RULER (256K, 0-shot, acc) | Not Supported | 75.44
RULER (512K, 0-shot, acc) | Not Supported | 70.56

All evaluation results were collected via the NeMo Evaluator SDK and LM Evaluation Harness. The open-source LM Evaluation Harness container packaged via NVIDIA's NeMo Evaluator SDK and used for these evaluations can be found here. A reproducibility tutorial along with all configs can be found in the NeMo Evaluator SDK examples.
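For a rough local spot-check with the open-source LM Evaluation Harness, a minimal sketch follows; the task name, few-shot setting, and model arguments are illustrative and may not match the exact configs used for the numbers above, for which the NeMo Evaluator SDK container and configs remain the reference.

Python
# Illustrative only: a local LM Evaluation Harness run; the official numbers above
# were produced with the NeMo Evaluator SDK container and its published configs.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args=(
        "pretrained=nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-Base-BF16,"
        "dtype=bfloat16,trust_remote_code=True"
    ),
    tasks=["mmlu"],    # one of the benchmarks reported above
    num_fewshot=5,     # matches the 5-shot MMLU setting in the table
    batch_size=8,
)
print(results["results"])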

Deployment Geography: Global

Use Case

This model is intended for developers and researchers building instruction-following LLMs.

Supported languages include: English, Spanish, French, German, Japanese, Italian, Chinese, Arabic, Hebrew, Hindi, Korean, Czech, Danish, Dutch, Finnish, Polish, Portuguese, Thai, Swedish, and Russian.

Release Date:

December 15, 2025 via Hugging Face

Reference(s)

  • NVIDIA Nemotron 3 model family on Hugging Face
  • NVIDIA Nemotron 2 model family on Hugging Face
  • NVIDIA Nemotron 3 White Paper

Model Architecture

  • Architecture Type: Mamba2-Transformer Hybrid Mixture of Experts (MoE)

  • Network Architecture: Nemotron Hybrid MoE

  • Number of model parameters: 30B

Training Methodology

Stage 1: Pre-Training

  • NVIDIA-Nemotron-3-Nano-30B-A3B-Base-BF16 model was pre-trained using crawled and synthetic code, math, science, and general knowledge data. All datasets are disclosed in the Training, Testing, and Evaluation Datasets section of this document. Major portions of the pre-training corpus are released in the Nemotron-Pre-Training-Datasets collection.
  • Software used for pre-training: Megatron-LM

The end-to-end training recipe is available in the NVIDIA Nemotron Developer Repository. Evaluation results can be replicated using the NeMo Evaluator SDK. More details on the datasets and synthetic data generation methods can be found in the technical report NVIDIA Nemotron 3 Nano.

Input

  • Input Type(s): Text

  • Input Format(s): String

  • Input Parameters: One-Dimensional (1D): Sequences

  • Maximum input size: 128K tokens

  • Other Properties Related to Input: Supported languages include: English, Spanish, French, German, Japanese, Italian, Chinese, Arabic, Hebrew, Hindi, Korean, Czech, Danish, Dutch, Finnish, Polish, Portuguese, Thai, Swedish, and Russian.

Output

  • Output Type(s): Text

  • Output Format: String

  • Output Parameters: One-Dimensional (1D): Sequences

  • Maximum output size: 128K tokens
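As a concrete illustration of the input contract above (text in, a one-dimensional sequence of token ids, capped at roughly 128K tokens), here is a minimal sketch; the 128,000-token budget below is a simple reading of the "128K" limit, not an exact figure from the model configuration.

Python
# Sketch: encode text into a 1D token-id sequence and enforce an approximate
# 128K-token input budget (one interpretation of the "128K tokens" limit above).
from transformers import AutoTokenizer

MAX_INPUT_TOKENS = 128_000

tokenizer = AutoTokenizer.from_pretrained(
    "nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-Base-BF16"
)

def encode_for_model(text: str):
    """Return a 1D tensor of token ids, rejecting inputs over the budget."""
    ids = tokenizer(text, return_tensors="pt").input_ids[0]  # shape: (seq_len,)
    if ids.shape[0] > MAX_INPUT_TOKENS:
        raise ValueError(f"Input is {ids.shape[0]} tokens, over the {MAX_INPUT_TOKENS} budget")
    return ids

print(encode_for_model("Bonjour tout le monde !").shape)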

Our AI models are designed and optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA's hardware (e.g. GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions.

Software Integration

  • Runtime Engine(s): NeMo 25.11.01
  • Supported Hardware Microarchitecture Compatibility: NVIDIA H100-80GB, NVIDIA A100
  • Operating System(s): Linux

The integration of foundation and fine-tuned models into AI systems requires additional testing using use-case-specific data to ensure safe and effective deployment. Following the V-model methodology, iterative testing and validation at both unit and system levels are essential to mitigate risks, meet technical and functional requirements, and ensure compliance with safety and ethical standards before deployment.

Use it with Transformers

The snippet below shows how to use this model with Hugging Face Transformers (tested on version 4.57.3). We recommend using NeMo Framework 25.11.01 to ensure all required libraries are available.

Please note that the model supports up to a 1M context size, although the default context size in the Hugging Face configuration is 256k due to higher VRAM requirements.

Python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-Base-BF16"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto"
)

# Tokenize the prompt and move it to the same device as the model weights.
prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Greedy decoding (do_sample=False), stopping at EOS or after 32 new tokens.
outputs = model.generate(
    **inputs,
    max_new_tokens=32,
    do_sample=False,
    eos_token_id=tokenizer.eos_token_id
)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
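Because this is a base checkpoint intended as a starting point for instruction fine-tuning, a common next step is to compute the next-token-prediction loss on your own text. The sketch below reuses the model and tokenizer loaded above; the example string is arbitrary, and a real fine-tuning loop (data pipeline, optimizer, and likely parameter-efficient methods for a 30B model) is out of scope here.

Python
# Next-token-prediction loss on a single example, reusing `model` and `tokenizer`
# from the snippet above. Passing labels=input_ids makes Transformers shift the
# labels internally and return the mean cross-entropy over next-token predictions.
batch = tokenizer("Paris is the capital of France.", return_tensors="pt").to(model.device)
outputs = model(**batch, labels=batch["input_ids"])
print(float(outputs.loss))  # a fine-tuning loop would backpropagate this loss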

Model Version(s)

  • v1.0

Training, Testing, and Evaluation Datasets

Data Modality: Text
The total size: 10,648,823,153,919 Tokens
Total number of datasets: 141
Dataset partition: Training [100%], testing [0%], validation [0%]
Time period for training data collection: 2013 to May 1, 2025
Time period for testing data collection: 2013 to May 1, 2025
Time period for validation data collection: 2013 to May 1, 2025
Data Collection Method by dataset: Hybrid: Automated, Human, Synthetic

NVIDIA-Nemotron-3-Nano-30B-A3B-Base-BF16 is pre-trained on a large corpus of high-quality curated and synthetically-generated data. It is trained on English as well as 19 other natural languages and 43 programming languages. Our sources cover a variety of document types such as webpages, dialogue, articles, and other written materials. The corpus spans domains including legal, math, science, finance, and more. We also include a small portion of question-answering and alignment-style data to improve model accuracy. The model was trained for approximately 25 trillion tokens.

Alongside the model, we release our final pre-training data, as outlined in this section. For ease of analysis, an ungated sample set is available. For the remaining code, math, and multilingual data, gating and approval are required, and the dataset is permissively licensed for model training purposes.

More details on the datasets and synthetic data generation methods can be found in the technical report NVIDIA Nemotron 3 Nano.

Public dataset

Dataset | Collection Period
GSM8K | 4/23/2025
CC-NEWS | 4/23/2025
Common Crawl | 4/23/2025
Wikimedia | 4/23/2025
Bespoke-Stratos-17k | 4/23/2025
tigerbot-kaggle-leetcodesolutions-en-2k | 4/23/2025
glaive-function-calling-v2 | 4/23/2025
APIGen Function-Calling | 4/23/2025
LMSYS-Chat-1M | 4/23/2025
Open Textbook Library - CC BY-SA & GNU subset and OpenStax - CC BY-SA subset | 4/23/2025
Advanced Reasoning Benchmark, tigerbot-kaggle-leetcodesolutions-en-2k, PRM800K, and SciBench | 4/23/2025
FineWeb-2 | 4/23/2025
Court Listener | Legacy Download
peS2o | Legacy Download
OpenWebMath | Legacy Download
BioRxiv | Legacy Download
PMC Open Access Subset | Legacy Download
OpenWebText2 | Legacy Download
Stack Exchange Data Dump | Legacy Download
PubMed Abstracts | Legacy Download
NIH ExPorter | Legacy Download
arXiv | Legacy Download
BigScience Workshop Datasets | Legacy Download
Reddit Dataset | Legacy Download
SEC's Electronic Data Gathering, Analysis, and Retrieval (EDGAR) | Legacy Download
Advanced Mathematical Problem Solving | Legacy Download
MathPile | Legacy Download
NuminaMath CoT | Legacy Download
PMC Article | Legacy Download
FLAN | Legacy Download
Advanced Reasoning Benchmark | Legacy Download
SciBench | Legacy Download
WikiTableQuestions | Legacy Download
FinQA | Legacy Download
Riddles | Legacy Download
Problems in Elementary Mathematics for Home Study | Legacy Download
MedMCQA | Legacy Download
Cosmos QA | Legacy Download
MCTest | Legacy Download
AI2's Reasoning Challenge | Legacy Download
OpenBookQA | Legacy Download
MMLU Auxiliary Train | Legacy Download
social-chemestry-101 | Legacy Download
Moral Stories | Legacy Download
The Common Pile v0.1 | Legacy Download
FineMath | Legacy Download
MegaMath | Legacy Download
MegaMath | Legacy Download
MultiverseMathHard | 10/2/2025
News Commentary | 10/2/2025
Essential-Web | 10/2/2025
finepdfs | 10/2/2025
HotpotQA | 10/2/2025
SQuAD2.0 | 10/2/2025
NLTK Words Lists | 10/2/2025

Private Non-publicly Accessible Datasets of Third Parties

  • Global Regulation
  • TAUS Translation Memory
  • Scale HLE
  • HackerRank Coding

Private Non-publicly Accessible Datasets by NVIDIA

  • Machine Translation of STEM data using Qwen2.5-14B-Instruct

Crawled and Scraped from Online Sources by NVIDIA

The English Common Crawl data was downloaded from the Common Crawl Foundation (see their FAQ for details on their crawling) and includes the snapshots CC-MAIN-2013-20 through CC-MAIN-2025-13. The data was subsequently deduplicated and filtered in various ways described in the Nemotron-CC paper. Additionally, we extracted data for fifteen languages from the following three Common Crawl snapshots: CC-MAIN-2024-51, CC-MAIN-2025-08, CC-MAIN-2025-18. The fifteen languages included were Arabic, Chinese, Danish, Dutch, French, German, Italian, Japanese, Korean, Polish, Portuguese, Russian, Spanish, Swedish, and Thai. As we did not have reliable multilingual model-based quality classifiers available, we applied just heuristic filtering instead—similar to what we did for lower quality English data in the Nemotron-CC pipeline, but selectively removing some filters for some languages that did not work well. Deduplication was done in the same way as for Nemotron-CC.

The GitHub Crawl was collected using the GitHub REST API and the Amazon S3 API. Each crawl was operated in accordance with the rate limits set by its respective source, either GitHub or S3. We collect raw source code and subsequently remove any having a license which does not exist in our permissive-license set (for additional details, refer to the technical report).
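The filtering code itself is not published here; as a rough sketch of the allowlist-style license filtering described above, with a hypothetical license set and record format:

Python
# Hypothetical sketch of permissive-license allowlist filtering for crawled code.
# The actual license set and record schema used for the GitHub Crawl are not
# published in this card; the values below are illustrative only.
PERMISSIVE_LICENSES = {"mit", "apache-2.0", "bsd-2-clause", "bsd-3-clause", "isc"}

def keep_file(record: dict) -> bool:
    """Keep a crawled source file only if its detected license is allowlisted."""
    return (record.get("license") or "").lower() in PERMISSIVE_LICENSES

crawled = [
    {"path": "repo_a/main.py", "license": "MIT"},
    {"path": "repo_b/core.c", "license": "GPL-3.0"},
]
print([r["path"] for r in crawled if keep_file(r)])  # -> ['repo_a/main.py']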

Dataset | Modality | Dataset Size | Collection Period | Collecting Organisation
English Common Crawl | Text | 3.36T | 4/8/2025 | NVIDIA Advanced Deep Learning Research
English Common Crawl 1.1 | Text | Not disclosed | 10/2/2025 | NVIDIA Advanced Deep Learning Research
Multilingual Common Crawl | Text | 812.7B | 5/1/2025 | NVIDIA Advanced Deep Learning Research
GitHub Crawl | Text | 747.4B | 4/29/2025 | NVIDIA Advanced Deep Learning Research

NVIDIA-Sourced Synthetic Datasets

Dataset | Modality | Dataset Size | Seed Dataset | Model(s) used for generation
Synthetic Art of Problem Solving from DeepSeek-R1 | Text | 40B | Art of Problem Solving; American Mathematics Competitions 8; American Mathematics Competitions 10 | DeepSeek-R1
Synthetic Moral Stories and Social Chemistry from Mixtral-8x22B-v0.1 | Text | 327M | social-chemestry-101; Moral Stories | Mixtral-8x22B-v0.1
Synthetic Social Sciences seeded with OpenStax from DeepSeek-V3, Mixtral-8x22B-v0.1, and Qwen2.5-72B | Text | 83.6M | OpenStax - CC BY-SA subset | DeepSeek-V3; Mixtral-8x22B-v0.1; Qwen2.5-72B
Synthetic Health Sciences seeded with OpenStax from DeepSeek-V3, Mixtral-8x22B-v0.1, and Qwen2.5-72B | Text | 9.7M | OpenStax - CC BY-SA subset | DeepSeek-V3; Mixtral-8x22B-v0.1; Qwen2.5-72B
Synthetic STEM seeded with OpenStax, Open Textbook Library, and GSM8K from DeepSeek-R1, DeepSeek-V3, DeepSeek-V3-0324, and Qwen2.5-72B | Text | 175M | OpenStax - CC BY-SA subset; GSM8K; Open Textbook Library - CC BY-SA & GNU subset | DeepSeek-R1, DeepSeek-V3; DeepSeek-V3-0324; Qwen2.5-72B
Nemotron-PrismMath | Text | 4.6B | Big-Math-RL-Verified; OpenR1-Math-220k | Qwen2.5-0.5B-instruct, Qwen2.5-72B-Instruct; DeepSeek-R1-Distill-Qwen-32B
Synthetic Question Answering Data from Papers and Permissible Books from Qwen2.5-72B-Instruct | Text | 350M | arXiv; National Institutes of Health ExPorter; BioRxiv; PMC Article; USPTO Backgrounds; peS2o; Global Regulation; CORE; PG-19; DOAB CC BY & CC BY-SA subset; NDLTD | Qwen2.5-72B-Instruct
Synthetic Rephrased Math Data from Common Crawl from phi-4 | Text | 73B | Common Crawl | phi-4
Synthetic Math Data from Common Crawl 4plus | Text | 52.3B | Common Crawl | phi-4
Synthetic Math Data from Common Crawl 3 | Text | 80.9B | Common Crawl | phi-4
Synthetic AGIEval seeded with AQUA-RAT, LogiQA, and AR-LSAT from DeepSeek-V3 and DeepSeek-V3-0324 | Text | 4.0B | AQUA-RAT; LogiQA; AR-LSAT | DeepSeek-V3; DeepSeek-V3-0324
Synthetic AGIEval seeded with AQUA-RAT, LogiQA, and AR-LSAT from Qwen3-30B-A3B | Text | 4.2B | AQUA-RAT; LogiQA; AR-LSAT | Qwen3-30B-A3B
Synthetic Art of Problem Solving from Qwen2.5-32B-Instruct, Qwen2.5-Math-72B, Qwen2.5-Math-7B, and Qwen2.5-72B-Instruct | Text | | Art of Problem Solving; American Mathematics Competitions 8; American Mathematics Competitions 10; GSM8K; PRM800K | Qwen2.5-32B-Instruct; Qwen2.5-Math-72B; Qwen2.5-Math-7B; Qwen2.5-72B-Instruct
Synthetic MMLU Auxiliary Train from DeepSeek-R1 | Text | 0.5B | MMLU Auxiliary Train | DeepSeek-R1
Synthetic Long Context Continued Post-Training Data from Papers and Permissible Books from Qwen2.5-72B-Instruct | Text | | arXiv; National Institutes of Health ExPorter; BioRxiv; PMC Article; USPTO Backgrounds; peS2o; Global Regulation; CORE; PG-19; DOAB CC BY & CC BY-SA subset; NDLTD | Qwen2.5-72B-Instruct
Synthetic Common Crawl from Qwen3-30B-A3B and Mistral-Nemo-12B-Instruct | Text | 415.8B | Common Crawl | Qwen3-30B-A3B; Mistral-NeMo-12B-Instruct
Synthetic Multilingual Data from Common Crawl from Qwen3-30B-A3B | Text | | Common Crawl | Qwen3-30B-A3B
Synthetic Multilingual Data from Wikimedia from Qwen3-30B-A3B | Text | | Wikimedia | Qwen3-30B-A3B
Synthetic Math Data from Wikimedia from Nemotron-4-340B-Instruct | Text | | - | Nemotron-4-340B-Instruct
Synthetic Common Crawl Code from phi-4 | Text | 427.9B | Common Crawl | phi-4
Synthetic Scientific Coding from Qwen3-235B-A22B | Text | 1.2B | Wikimedia | Qwen3-235B-A22B
Tool Calling Data | Text | 26.2B | | Qwen3-235B-A22B-2507; gpt-oss-120b
Synthetic Essential-Web from QwQ-32B | Text | 28.1B | Essential-Web | QwQ-32B
Translated Synthetic Crawl | Text | 389.9B | Common Crawl | Qwen3-30B-A3B
Translated Synthetic Wikipedia | Text | 7.9B | Wikimedia | Qwen3-30B-A3B
Synthetic Long Context from Qwen3-235B-A22B-Instruct-2507 | Text | Undisclosed | CORE; PG-19; DOAB CC BY & CC BY-SA subset; NDLTD | Qwen3-235B-A22B-Instruct-2507
Synthetic Search STEM OPENQ from DeepSeek-R1-0528 | Text | Undisclosed | - | DeepSeek-R1-0528
Synthetic MCQ from Qwen2.5-32B-Instruct and DeepSeek-R1-0528 | Text | Undisclosed | - | Qwen2.5-32B-Instruct; DeepSeek-R1-0528
Synthetic Offline Search MCQA HLE from DeepSeek-R1-0528 | Text | Undisclosed | - | DeepSeek-R1-0528
Synthetic Offline Search MCQA GPQA from Qwen3-235B-A22B and DeepSeek-R1-0528 | Text | Undisclosed | - | Qwen3-235B-A22B; DeepSeek-R1-0528
Synthetic Human Preference from QwQ-32B, Qwen3-30B-A3B, Qwen3-235B-A22B, Qwen3-235B-A22B-Instruct-2507, Mistral-Small-3.1-24B-Instruct-2503, Mistral-Small-3.2-24B-Instruct-2506, MiniMax-M1-80k, MiniMax-M1-40k, Kimi-K2-Instruct, DeepSeek-V3-0324, DeepSeek-R1-0528 | Text | Undisclosed | - | QwQ-32B; Qwen3-30B-A3B; Qwen3-235B-A22B; Qwen3-235B-A22B-Instruct-2507; Mistral-Small-3.1-24B-Instruct-2503; Mistral-Small-3.2-24B-Instruct-2506; MiniMax-M1-80k; MiniMax-M1-40k; Kimi-K2-Instruct; DeepSeek-V3-0324; DeepSeek-R1-0528
Synthetic Code from Qwen3-32B | Text | Undisclosed | English Common Crawl; English Common Crawl 1.1 | Qwen3-32B
Synthetic OpenCodeReasoning from DeepSeek-R1 | Text | Undisclosed | OpenCodeReasoning | DeepSeek-R1
Synthetic LIMO from DeepSeek-R1-0528 | Text | Undisclosed | LIMO | DeepSeek-R1-0528
Synthetic SCP from DeepSeek-R1-0528 | Text | Undisclosed | SCP-116K | DeepSeek-R1-0528
Synthetic Stack Exchange from DeepSeek-R1-0528 | Text | Undisclosed | Stack Exchange | DeepSeek-R1-0528
Synthetic Common Crawl from Qwen3-30B-A3B | Text | Undisclosed | Common Crawl | Qwen3-30B-A3B
Synthetic Wikipedia from Qwen3-30B-A3B | Text | Undisclosed | Wikimedia | Qwen3-30B-A3B
Synthetic Essential-Web from Qwen3-30B-A3B and Qwen3-235B-A22B-Thinking-2507 | Text | Undisclosed | Essential-Web | Qwen3-30B-A3B; Qwen3-235B-A22B-Thinking-2507
Synthetic Textbook Math from Qwen3-30B-A3B, Qwen3-235B-A22B, phi-4 | Text | Undisclosed | Common Crawl; FineMath | Qwen3-30B-A3B; Qwen3-235B-A22B; phi-4
Synthetic Math and Code from DeepSeek-R1 and DeepSeek-R1-0528 | Text | Undisclosed | Magicoder-Evol-Instruct-110K; opc-sft-stage2; TACO; OpenCodeReasoning; OpenMathReasoning; NuminaMath CoT | DeepSeek-R1; DeepSeek-R1-0528

Training Dataset

Dataset | # of Tokens in Nemotron Nano 2 | # of Tokens in Nemotron 3 Nano
English Common Crawl | 3,360,110,334,818 | 3,456,523,212,210
English Synthetic CC | 1,949,464,641,123 | 4,340,740,677,920
Crawl++ | 360,389,153,262 | 360,389,153,262
Math | 124,606,230,663 | 154,217,502,165
Synthetic Math | 73,007,767,155 | 73,007,767,155
Code | 747,409,228,724 | 1,043,856,922,136
Synthetic Code | 175,067,553,293 | 453,117,917,176
Common Crawl Code | 0 | 263,072,374,097
English Wiki | 17,349,266,926 | 17,349,266,926
Synthetic Wiki | 0 | 7,850,648,552
Books | 0 | 0
Papers | 191,586,493,365 | 191,586,493,365
PDF-to-text | 141,096,578,533 | 141,096,578,533
Code SFT | 60,025,726,817 | 102,863,752,325
STEM SFT | 272,680,426,295 | 359,826,214,274
General SFT | 6,057,478,645 | 6,057,478,645
Tool-Calling SFT | 0 | 26,244,716,867
Multilingual | 2,172,261,909,350 | 1,743,892,490,859
Synthetic multilingual | 997,710,364,950 | 595,140,661,135
Total | 10,648,823,153,919 | 13,336,833,827,602

We use a considerable amount of synthetic data: out of 10.6 trillion tokens, 3,534,013,958,278 tokens (roughly a third) are synthetically generated.

We extracted data for fifteen languages from the following three Common Crawl snapshots: CC-MAIN-2024-51, CC-MAIN-2025-08, CC-MAIN-2025-18. The fifteen languages included were Arabic, Chinese, Danish, Dutch, French, German, Italian, Japanese, Korean, Polish, Portuguese, Russian, Spanish, Swedish, and Thai. As we did not have reliable multilingual model-based quality classifiers available, we applied just heuristic filtering instead—similar to what we did for lower quality English data in the Nemotron-CC pipeline, but selectively removing some filters for some languages that did not work well. Deduplication was done in the same way as for Nemotron-CC. Additionally, we used data from Wikipedia and FineWeb-2 (Penedo et al., 2025) for these fifteen languages.

Language | Total Tokens
Arabic | 118,056,362,726
Danish | 117,747,321,618
German | 146,613,691,781
Spanish | 469,156,575,409
French | 139,982,002,289
Italian | 298,858,370,174
Japanese | 682,755,693,336
Korean | 127,099,747,538
Dutch | 89,041,592,681
Polish | 105,356,493,147
Portuguese | 243,249,275,089
Russian | 185,314,014,057
Swedish | 74,954,953,299
Thai | 160,778,944,467
Chinese | 211,007,236,689

We collect a total of 922,476,782,017 tokens of code in 43 different languages.

Language | Tokens
Assembly | 750,628,764
C | 42,657,300,868
C# | 56,153,329,307
C++ | 67,773,701,658
CommonLisp | 263,234,672
CSS | 38,848,760,035
Cuda | 400,222,993
Dart | 3,816,960,470
Dockerfile | 474,958,084
Fortran | 1,105,049,387
Go | 8,332,419,480
Haskell | 1,294,613,669
HTML | 69,082,117,487
Java | 131,440,465,822
JavaScript | 75,573,420,861
JSON | 15,366,881,241
Julia | 621,046,949
JupyterNotebook | 2,241,893,197
Lua | 4,146,420,802
Makefile | 12,640,010,879
Markdown | 64,796,743,311
Mathematica | 320,504,225
OmniversePython | 26,946,093
Pascal | 1,625,013,876
Perl | 1,575,314,434
PHP | 61,575,339,005
Python | 126,916,727,384
R | 19,811,381,935
reStructuredText | 1,779,876,391
Ruby | 6,446,962,615
Rust | 4,438,640,533
Scala | 3,343,959,154
Shell | 18,758,779,250
SQL | 23,205,633,085
Swift | 5,976,714,881
SystemVerilog | 233,056,185
TeX | 7,347,157,527
TypeScript | 15,657,838,582
Verilog | 811,884,369
VHDL | 648,401,444
VisualBasic.NET | 1,005,680,881
XML | 12,616,779,741
YAML | 10,574,010,491

Evaluation Dataset

  • Data Collection Method by dataset: Hybrid: Human, Synthetic
  • Labeling Method by dataset: Hybrid: Automated, Human, Synthetic

Inference

  • Engines: HF, vLLM, TRT-LLM

  • Test Hardware: NVIDIA A100 80GB, H100 80GB
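As a hedged example of one of the engines listed above, the sketch below serves the checkpoint with vLLM's offline API; depending on the vLLM release, this hybrid Mamba/MoE architecture may require a recent version, and the exact flags are not taken from this card.

Python
# Sketch: offline generation with vLLM (one of the listed inference engines).
# Version support for this hybrid Mamba/MoE architecture is an assumption here.
from vllm import LLM, SamplingParams

llm = LLM(
    model="nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-Base-BF16",
    dtype="bfloat16",
    trust_remote_code=True,
)
params = SamplingParams(temperature=0.0, max_tokens=32)
outputs = llm.generate(["The capital of France is"], params)
print(outputs[0].outputs[0].text)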

Ethical Considerations

NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our Trustworthy AI terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.

We advise against circumvention of any provided safety guardrails contained in the Model without a substantially similar guardrail appropriate for your use case. For more details: Safety & Security.

For more detailed information on ethical considerations for this model, please see the Model Card++ Bias, Explainability, and Privacy Subcards.

Please report security vulnerabilities or NVIDIA AI Concerns here.

Citation

BibTeX
@misc{nvidia_nemotron_nano_v3_2025,
  title  = {{Nemotron 3 Nano}: Open, Efficient Mixture-of-Experts Hybrid {Mamba}-{Transformer} Model for {Agentic} Reasoning},
  author = {{NVIDIA}},
  year   = {2025},
  url    = {https://arxiv.org/abs/2512.20848},
  note   = {Technical report}
}