Qwen3-1.7B-Base, by Qwen
The Qwen3-1.7B-Base model is a streamlined member of the Qwen3 family, designed to deliver advanced natural language processing capabilities. Trained on a multilingual corpus of 36 trillion tokens spanning 119 languages, it performs well across diverse domains such as programming, science, logical reasoning, and long-form text comprehension. Through an optimized architecture and a three-stage training process, it balances stability and performance while handling extended contexts of up to 32,768 tokens. Suited to applications requiring nuanced language understanding, it stands out for its efficiency and versatility while remaining accessible for lightweight deployments.
Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Building upon extensive advancements in training data, model architecture, and optimization techniques, Qwen3 delivers key improvements over the previously released Qwen2.5.
Qwen3-1.7B-Base is a pretrained causal language model with 1.7B parameters and support for context lengths of up to 32,768 tokens.
For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our blog, GitHub, and Documentation.
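As a rough guide to hardware requirements (a back-of-the-envelope estimate, not an official figure from the links above), the memory footprint of the weights alone is simply parameter count times bytes per parameter:

```python
def weight_memory_gib(n_params: float, bytes_per_param: float) -> float:
    """Approximate size of the model weights alone, in GiB.

    Ignores the KV cache, activations, and framework overhead, all of
    which add to this at inference time (especially at long contexts).
    """
    return n_params * bytes_per_param / 1024**3

# For a 1.7B-parameter model:
bf16_gib = weight_memory_gib(1.7e9, 2)  # ~3.2 GiB in bfloat16 (2 bytes/param)
int8_gib = weight_memory_gib(1.7e9, 1)  # ~1.6 GiB with 8-bit quantization
```

Actual requirements will be somewhat higher; see the linked documentation for measured figures.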
The code for Qwen3 is included in the latest Hugging Face transformers release, and we advise you to use the latest version of transformers.
With transformers<4.51.0, you will encounter the following error:
KeyError: 'qwen3'
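To fail fast with a clearer message than the KeyError above, you can check the installed transformers version before loading the model. The helper below is a minimal sketch: it compares only the leading numeric components of a version string and ignores pre-release suffixes, so a full solution would use a proper version-parsing library.

```python
def meets_min_version(installed: str, required: str = "4.51.0") -> bool:
    """Return True if `installed` is at least `required`.

    Compares only leading numeric components, so "4.51.0.dev0"
    is treated as (4, 51, 0). Simplified on purpose.
    """
    def to_tuple(version: str) -> tuple:
        parts = []
        for piece in version.split("."):
            if piece.isdigit():
                parts.append(int(piece))
            else:
                break  # stop at the first non-numeric component
        return tuple(parts)

    return to_tuple(installed) >= to_tuple(required)

# transformers 4.50.x is too old for Qwen3 and raises KeyError: 'qwen3';
# in practice you would pass transformers.__version__ as `installed`.
```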
Detailed evaluation results are reported in this 📑 blog.
If you find our work helpful, feel free to cite us:
@misc{qwen3technicalreport,
title={Qwen3 Technical Report},
author={Qwen Team},
year={2025},
eprint={2505.09388},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2505.09388},
}