

Qwen2.5 Math 1.5B

by Qwen

Open source · 426k downloads · 105 likes

2.5 (105 reviews) · Chat · API & Local
About

The Qwen2.5 Math 1.5B model is an artificial intelligence specialized in solving mathematical problems in English and Chinese. It combines two reasoning approaches: *Chain-of-Thought* (CoT) for detailed explanations and *Tool-integrated Reasoning* (TIR) for precise calculations and symbolic manipulations, such as solving equations or computing matrices. Its performance surpasses that of its predecessor, particularly on demanding mathematical benchmarks, while remaining accessible for basic tasks or few-shot inference. Designed primarily for educational, scientific, or technical applications requiring mathematical rigor, it stands out for its linguistic versatility and ability to handle complex reasoning. Its lightweight format (1.5 billion parameters) makes it suitable for resource-constrained environments while providing a solid foundation for further adaptations or fine-tuning.

Documentation

Qwen2.5-Math-1.5B

[!Warning]

🚨 Qwen2.5-Math mainly supports solving English and Chinese math problems through CoT and TIR. We do not recommend using this series of models for other tasks.

Introduction

In August 2024, we released the first series of mathematical LLMs in our Qwen family, Qwen2-Math. A month later, we upgraded it and open-sourced the Qwen2.5-Math series, including the base models Qwen2.5-Math-1.5B/7B/72B, the instruction-tuned models Qwen2.5-Math-1.5B/7B/72B-Instruct, and the mathematical reward model Qwen2.5-Math-RM-72B.

Unlike the Qwen2-Math series, which only supports Chain-of-Thought (CoT) for English math problems, the Qwen2.5-Math series supports both CoT and Tool-integrated Reasoning (TIR) for math problems in both Chinese and English. On Chinese and English mathematics benchmarks with CoT, the Qwen2.5-Math models achieve significant performance improvements over the Qwen2-Math models.

While CoT plays a vital role in enhancing the reasoning capabilities of LLMs, it faces challenges in achieving computational accuracy and handling complex mathematical or algorithmic reasoning tasks, such as finding the roots of a quadratic equation or computing the eigenvalues of a matrix. TIR can further improve the model's proficiency in precise computation, symbolic manipulation, and algorithmic manipulation. Qwen2.5-Math-1.5B/7B/72B-Instruct achieve 79.7, 85.3, and 87.8 respectively on the MATH benchmark using TIR.
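To make the CoT/TIR distinction concrete, the snippet below shows the kind of Python a TIR-style model would emit and execute for the two tasks mentioned above, rather than approximating them in natural language. This is an illustration of the idea, not output from the model itself; the use of SymPy as the tool is our assumption.

```python
# Illustrative only: the sort of tool call a TIR pipeline executes to get
# exact answers for symbolic tasks.
import sympy as sp

# Roots of a quadratic: x^2 - 5x + 6 = 0
x = sp.symbols("x")
roots = sp.solve(sp.Eq(x**2 - 5*x + 6, 0), x)
print(roots)  # [2, 3]

# Eigenvalues of a symmetric matrix (eigenvals() maps eigenvalue -> multiplicity)
M = sp.Matrix([[2, 1], [1, 2]])
print(sorted(M.eigenvals()))  # [1, 3]
```

The model's role under TIR is to write such code and interpret its output, delegating the arithmetic itself to an interpreter.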

Model Details

For more details, please refer to our blog post and GitHub repo.

Requirements

  • transformers>=4.37.0 for Qwen2.5-Math models. The latest version is recommended.

[!Warning]

🚨 This is required because transformers has included the Qwen2 model code since version 4.37.0.

For requirements on GPU memory and the respective throughput, see similar results of Qwen2 here.
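A quick way to confirm the installed transformers meets the >=4.37.0 requirement before downloading any weights; the `meets_requirement` helper is our own minimal sketch (for production use, `packaging.version.parse` is the more robust choice):

```python
# Verify the transformers>=4.37.0 requirement before loading Qwen2.5-Math.
def meets_requirement(installed: str, required: str = "4.37.0") -> bool:
    """Compare dotted version strings numerically ("4.37.0.dev0" -> (4, 37, 0))."""
    def parts(v: str):
        out = []
        for p in v.split("."):
            if not p.isdigit():
                break
            out.append(int(p))
        return tuple(out)
    return parts(installed) >= parts(required)

try:
    import transformers
    if not meets_requirement(transformers.__version__):
        print(f"transformers {transformers.__version__} is too old; "
              "run: pip install -U 'transformers>=4.37.0'")
except ImportError:
    print("transformers is not installed; run: pip install 'transformers>=4.37.0'")
```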

Quick Start

[!Important]

Qwen2.5-Math-1.5B-Instruct is an instruction model for chatting;

Qwen2.5-Math-1.5B is a base model typically used for completion and few-shot inference, serving as a better starting point for fine-tuning.
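Since this page omits the actual quick-start code, here is a minimal sketch of few-shot completion with the base model via the transformers library. The model ID follows the Hugging Face convention; the `few_shot_prompt` helper and its example shots are our own illustration, not part of the official usage.

```python
def few_shot_prompt(problem: str) -> str:
    """Base models are steered by continuation examples, not chat turns."""
    examples = [("What is 12 * 13?", "156")]
    shots = "".join(f"Problem: {q}\nAnswer: {a}\n\n" for q, a in examples)
    return f"{shots}Problem: {problem}\nAnswer:"

def solve(problem: str, model_name: str = "Qwen/Qwen2.5-Math-1.5B") -> str:
    """Greedy few-shot completion. Downloads ~3 GB of weights on first run;
    device_map="auto" needs the accelerate package (drop it to load on CPU)."""
    from transformers import AutoModelForCausalLM, AutoTokenizer  # >=4.37.0
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(
        model_name, torch_dtype="auto", device_map="auto"
    )
    inputs = tokenizer(few_shot_prompt(problem), return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=128, do_sample=False)
    # Decode only the newly generated tokens, not the prompt.
    return tokenizer.decode(out[0][inputs["input_ids"].shape[1]:],
                            skip_special_tokens=True)

if __name__ == "__main__":
    print(solve("Find the roots of x^2 - 5x + 6 = 0."))
```

For the -Instruct chat variant, the documented path is `tokenizer.apply_chat_template` on a list of chat messages instead of a raw completion prompt.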

Citation

If you find our work helpful, feel free to give us a citation.

```bibtex
@article{yang2024qwen25mathtechnicalreportmathematical,
  title={Qwen2.5-Math Technical Report: Toward Mathematical Expert Model via Self-Improvement},
  author={An Yang and Beichen Zhang and Binyuan Hui and Bofei Gao and Bowen Yu and Chengpeng Li and Dayiheng Liu and Jianhong Tu and Jingren Zhou and Junyang Lin and Keming Lu and Mingfeng Xue and Runji Lin and Tianyu Liu and Xingzhang Ren and Zhenru Zhang},
  journal={arXiv preprint arXiv:2409.12122},
  year={2024}
}
```
Capabilities & Tags
transformers · safetensors · qwen2 · text-generation · conversational · en · text-generation-inference · endpoints_compatible
Specifications

  • Category: Chat
  • Access: API & Local
  • License: Open Source
  • Pricing: Open Source
  • Parameters: 1.5B

Rating: 2.5
