by Qwen
Qwen3 8B Base is an advanced language model designed to understand and generate text with high precision. Trained on a corpus of 36 trillion tokens across 119 languages, it excels in diverse domains such as reasoning, programming, science, and general knowledge. Its optimized architecture, incorporating techniques like *qk layernorm* and a three-phase training process, enables it to handle long contexts of up to 32k tokens while maintaining enhanced stability and performance. Ideal for applications requiring deep language comprehension, it stands out for its versatility and adaptability to complex tasks. Whether for content automation, technical assistance, or multilingual analysis, Qwen3 8B Base provides a robust foundation for intelligent, high-performance solutions.
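As a rough illustration of the *qk layernorm* technique mentioned above, the sketch below normalizes queries and keys before the attention dot product, which bounds the attention logits and helps training stability. This is a minimal NumPy toy, not the actual Qwen3 implementation (which applies a learnable RMSNorm per attention head inside a full transformer); shapes and the omitted gain parameter are simplifying assumptions.

```python
import numpy as np

def rms_norm(x, eps=1e-6):
    # RMSNorm: rescale each vector to unit root-mean-square.
    # The learnable gain used in real models is omitted for brevity.
    return x / np.sqrt(np.mean(x ** 2, axis=-1, keepdims=True) + eps)

def qk_norm_attention(q, k, v):
    # QK-norm: normalize queries and keys *before* the dot product,
    # so attention logits stay bounded regardless of activation scale.
    q, k = rms_norm(q), rms_norm(k)
    scores = q @ k.T / np.sqrt(q.shape[-1])
    # Numerically stable softmax over the key axis.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v
```

Because the normalized q and k vectors have unit RMS, the pre-softmax scores cannot blow up even at very long context lengths, which is the stability property the description above alludes to.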
Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Building upon extensive advancements in training data, model architecture, and optimization techniques, Qwen3 delivers the following key improvements over the previously released Qwen2.5:
Qwen3-8B-Base has the following features:
For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our blog, GitHub, and Documentation.
The code for Qwen3 has been merged into the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.51.0`, you will encounter the following error:

```
KeyError: 'qwen3'
```
Detailed evaluation results are reported in this 📑 blog.
If you find our work helpful, feel free to cite us.
```
@misc{qwen3technicalreport,
      title={Qwen3 Technical Report},
      author={Qwen Team},
      year={2025},
      eprint={2505.09388},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2505.09388},
}
```