
Qwen2.5 0.5B unsloth bnb 4bit

by unsloth

Open source · 97k downloads · 3 likes

Rating: 0.8 (3 reviews) · Chat · API & Local
About

The Qwen2.5 0.5B model optimized by Unsloth in 4-bit format is a lightweight yet capable version of the Qwen2.5 language model, designed for efficiency in both memory usage and compute. Through Unsloth's dynamic 4-bit quantization, it achieves better accuracy than standard 4-bit quantization while significantly reducing memory footprint and resource demands. With a compact 0.5 billion parameters, this base model handles text generation, structured-data comprehension, and formatted outputs such as JSON, and supports contexts up to 32,768 tokens. It is particularly well suited to developers who want to fine-tune or adapt a small model for specialized tasks such as long-form text generation or tabular data analysis. Coverage of more than 29 languages makes it usable in international projects. What sets it apart is the balance between performance and accessibility, enabling smooth operation even on modest hardware.
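As a rough illustration, loading the pre-quantized checkpoint follows the usual transformers pattern. This is a minimal sketch, assuming the Hugging Face repo id unsloth/Qwen2.5-0.5B-bnb-4bit and an environment with bitsandbytes and accelerate installed:

```python
# Minimal sketch: load the pre-quantized 4-bit checkpoint with transformers.
# Assumptions: repo id "unsloth/Qwen2.5-0.5B-bnb-4bit"; bitsandbytes and
# accelerate installed (needed for bnb checkpoints and device_map="auto").
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "unsloth/Qwen2.5-0.5B-bnb-4bit"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Base-model usage is plain text completion, not chat.
inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```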

Documentation

See our collection for versions of Qwen2.5 including 4-bit formats.

Unsloth's Dynamic 4-bit Quants are selectively quantized, greatly improving accuracy over standard 4-bit quantization.
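For contrast, the standard baseline applies one uniform 4-bit scheme to every layer. A sketch of that baseline using bitsandbytes NF4 quantization on the upstream checkpoint (the repo id Qwen/Qwen2.5-0.5B is an assumption, referring to the official base model):

```python
# Sketch of the standard (non-dynamic) 4-bit baseline: uniform NF4 quantization
# of every layer via bitsandbytes, applied to the upstream base checkpoint.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-0.5B",  # assumed upstream repo id
    quantization_config=bnb_config,
    device_map="auto",
)
```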

Finetune LLMs 2-5x faster with 70% less memory via Unsloth!

We have a free Google Colab Tesla T4 notebook for Qwen2.5 (7B) here: https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen2.5_(7B)-Alpaca.ipynb

✨ Finetune for Free

All notebooks are beginner friendly! Add your dataset, click "Run All", and you'll get a finetuned model (trained up to 2x faster) that can be exported to GGUF, served with vLLM, or uploaded to Hugging Face. A minimal Unsloth setup is sketched after the table and notes below.

| Unsloth supports | Free Notebooks | Performance | Memory use |
|---|---|---|---|
| Llama-3.2 (3B) | ▶️ Start on Colab | 2.4x faster | 58% less |
| Llama-3.2 (11B vision) | ▶️ Start on Colab | 2x faster | 60% less |
| Qwen2 VL (7B) | ▶️ Start on Colab | 1.8x faster | 60% less |
| Qwen2.5 (7B) | ▶️ Start on Colab | 2x faster | 60% less |
| Llama-3.1 (8B) | ▶️ Start on Colab | 2.4x faster | 58% less |
| Phi-3.5 (mini) | ▶️ Start on Colab | 2x faster | 50% less |
| Gemma 2 (9B) | ▶️ Start on Colab | 2.4x faster | 58% less |
| Mistral (7B) | ▶️ Start on Colab | 2.2x faster | 62% less |

  • This Llama 3.2 conversational notebook is useful for ShareGPT ChatML / Vicuna templates.
  • This text completion notebook is for raw text. This DPO notebook replicates Zephyr.
  • * Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster.
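The setup the notebooks automate boils down to a few calls. This is a hedged sketch in the same style; the repo id and LoRA hyperparameters are illustrative assumptions, not fixed recommendations:

```python
# Hedged sketch of an Unsloth fine-tuning setup in the style of the notebooks
# above. Repo id and LoRA hyperparameters are illustrative assumptions.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen2.5-0.5B-bnb-4bit",  # assumed repo id
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small fraction of the weights is trained;
# the resulting model plugs into a standard trl SFTTrainer loop.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
)
```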

Qwen2.5

Introduction

Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2:

  • Significantly more knowledge and greatly improved capabilities in coding and mathematics, thanks to our specialized expert models in these domains.
  • Significant improvements in instruction following, generating long texts (over 8K tokens), understanding structured data (e.g., tables), and generating structured outputs, especially JSON. More resilient to diverse system prompts, improving role-play implementation and condition-setting for chatbots.
  • Long-context support up to 128K tokens, with generation of up to 8K tokens.
  • Multilingual support for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.

This repo contains the base 0.5B Qwen2.5 model, which has the following features:

  • Type: Causal Language Models
  • Training Stage: Pretraining
  • Architecture: transformers with RoPE, SwiGLU, RMSNorm, Attention QKV bias and tied word embeddings
  • Number of Parameters: 0.49B
  • Number of Parameters (Non-Embedding): 0.36B
  • Number of Layers: 24
  • Number of Attention Heads (GQA): 14 for Q and 2 for KV
  • Context Length: Full 32,768 tokens
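These figures can be read straight off the model config. A small sketch, again assuming the unsloth/Qwen2.5-0.5B-bnb-4bit repo id:

```python
# Sketch: confirm the architecture numbers listed above from the model config
# (repo id assumed as before).
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("unsloth/Qwen2.5-0.5B-bnb-4bit")
print(cfg.num_hidden_layers)        # 24 layers
print(cfg.num_attention_heads)      # 14 query heads
print(cfg.num_key_value_heads)      # 2 key/value heads (GQA)
print(cfg.max_position_embeddings)  # 32768-token context
```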

We do not recommend using base language models for conversations. Instead, apply post-training to this model, e.g., SFT, RLHF, or continued pretraining.

For more details, please refer to our blog, GitHub, and Documentation.

Requirements

The code for Qwen2.5 is included in the latest Hugging Face transformers, and we advise you to use the latest version of transformers.

With transformers<4.37.0, you will encounter the following error:

```text
KeyError: 'qwen2'
```
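One way to fail fast is a version guard at import time. A small sketch, using the packaging helper that ships as a transformers dependency:

```python
# Quick guard against the KeyError above: require transformers >= 4.37.0.
import transformers
from packaging import version

assert version.parse(transformers.__version__) >= version.parse("4.37.0"), (
    "Qwen2.5 requires transformers >= 4.37.0; older versions raise KeyError: 'qwen2'"
)
```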

Evaluation & Performance

Detailed evaluation results are reported in this 📑 blog.

For requirements on GPU memory and the respective throughput, see results here.

Citation

If you find our work helpful, feel free to cite us.

```bibtex
@misc{qwen2.5,
    title  = {Qwen2.5: A Party of Foundation Models},
    url    = {https://qwenlm.github.io/blog/qwen2.5/},
    author = {Qwen Team},
    month  = {September},
    year   = {2024}
}

@article{qwen2,
    title   = {Qwen2 Technical Report},
    author  = {An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
    journal = {arXiv preprint arXiv:2407.10671},
    year    = {2024}
}
```
Capabilities & Tags

transformers · safetensors · qwen2 · text-generation · unsloth · qwen · zho · eng · fra · spa
Specifications

  • Category: Chat
  • Access: API & Local
  • License: Open Source
  • Pricing: Open Source
  • Parameters: 0.5B
  • Rating: 0.8

Try Qwen2.5 0.5B unsloth bnb 4bit

Access the model directly