by unsloth
The Mistral 7B v0.3 model in its 4-bit quantized (bitsandbytes) form is an optimized build of Mistral 7B designed for more efficient fine-tuning and deployment. It enables training on task-specific data with roughly 70% less memory and up to a 5x speedup compared to conventional fine-tuning, while remaining accessible to beginners. Beginner-friendly notebooks simplify customization for use cases such as conversational assistants, text generation, and model alignment with techniques like DPO. The model stands out for its fast execution and lightweight footprint while retaining the base model's quality, making it well suited to local deployment and resource-constrained hardware.
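The memory savings from 4-bit quantization can be sanity-checked with back-of-the-envelope arithmetic (weights only; the quoted 70% figure covers full training memory, including activations and optimizer state, so the numbers here are illustrative, not unsloth's measurement):

```python
# fp16 stores each weight in 2 bytes; 4-bit storage uses 0.5 bytes per weight
# (ignoring the small overhead of quantization constants).
params = 7_000_000_000  # ~7B parameters

fp16_gib = params * 2.0 / 1024**3
int4_gib = params * 0.5 / 1024**3

print(f"fp16 weights : {fp16_gib:.1f} GiB")   # ~13.0 GiB
print(f"4-bit weights: {int4_gib:.1f} GiB")   # ~3.3 GiB
print(f"reduction    : {1 - int4_gib / fp16_gib:.0%}")  # 75%
```

In practice the weights-only reduction is about 75%; extra buffers needed during training bring the overall saving closer to the 70% quoted above.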
We have a Google Colab Tesla T4 notebook for Mistral 7B v0.3 here: https://colab.research.google.com/drive/1_yNCks4BTD5zOnjozppphh5GzMFaMKq_?usp=sharing
For conversational ShareGPT-style data using Mistral 7B v0.3 Instruct: https://colab.research.google.com/drive/15F1xyn8497_dUbxZP4zWmPZ3PJx1Oymv?usp=sharing
All notebooks are beginner friendly! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF or vLLM, or uploaded to Hugging Face.
| Unsloth supports | Free Notebooks | Performance | Memory use |
|---|---|---|---|
| Llama-3.2 (3B) | ▶️ Start on Colab | 2.4x faster | 58% less |
| Llama-3.2 (11B vision) | ▶️ Start on Colab | 2x faster | 60% less |
| Llama-3.1 (8B) | ▶️ Start on Colab | 2.4x faster | 58% less |
| Qwen2 VL (7B) | ▶️ Start on Colab | 1.8x faster | 60% less |
| Qwen2.5 (7B) | ▶️ Start on Colab | 2x faster | 60% less |
| Phi-3.5 (mini) | ▶️ Start on Colab | 2x faster | 50% less |
| Gemma 2 (9B) | ▶️ Start on Colab | 2.4x faster | 58% less |
| Mistral (7B) | ▶️ Start on Colab | 2.2x faster | 62% less |
| DPO - Zephyr | ▶️ Start on Colab | 1.9x faster | 19% less |
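The notebooks above boil down to a few lines of unsloth code. A minimal sketch of loading this pre-quantized model and attaching LoRA adapters (requires a CUDA GPU and the `unsloth` package; the LoRA hyperparameters shown are common illustrative defaults, not necessarily the notebooks' exact values):

```python
from unsloth import FastLanguageModel

# Load the pre-quantized 4-bit checkpoint directly from Hugging Face.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/mistral-7b-v0.3-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small fraction of weights is trained,
# which is where the memory and speed savings come from.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
)
```

From here the model can be passed to a standard TRL `SFTTrainer` with your dataset, as the Colab notebooks do.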