AI Explorer

Find and compare the best artificial intelligence tools for your projects.



Qwen3.5 4B Claude 4.6 Opus Reasoning Distilled GGUF

by Jackrong

Open source · 301k downloads · 108 likes

Rating: 2.5 (108 reviews) · Chat · API & Local
About

The Qwen3.5 4B Claude 4.6 Opus Reasoning Distilled model is a fine-tuned version of Qwen3.5-4B, designed specifically to excel at structured reasoning and complex analytical tasks. Through advanced distillation of the reasoning capabilities of Claude's Opus 4.6, it takes a methodical approach, decomposing problems into clear logical steps enclosed in `<think>` tags before delivering precise, nuanced answers. Its targeted training on varied reasoning data, spanning domains such as science, mathematics, and instruction following, lets it handle queries with greater efficiency and depth while minimizing redundancy in its thinking process. The model stands out for combining performance with accessibility, offering a robust option for applications that require detailed analysis and structured problem solving. Ideal for developers, researchers, or users looking to integrate high-quality reasoning into their projects, it is a versatile tool for use cases ranging from technical assistance to advanced education.

Documentation

🌟 Qwen3.5-4B-Claude-4.6-Opus-Reasoning-Distilled

🔥 Update (April 5): I’ve released the complete training notebook, codebase, and a comprehensive PDF guide to help beginners and enthusiasts understand and reproduce this model's fine-tuning process.

❤️ Special thanks to the Unsloth open-source library and @KyleHessling1 for their support.

📚 Resources & Guides

👉 GitHub Repository: Jackrong-llm-finetuning-guide. Visit the repo to dive into the codebase and reproduce the results locally or on Colab.

📥 Core Technical Document

🔗 Qwopus3.5-27b Complete Fine-Tuning Guide (PDF)

  • The Full Pipeline: A step-by-step walkthrough—from downloading the base model and unifying heterogeneous data, to configuring trainer hyperparameters and publishing to Hugging Face.
  • Beginner Friendly: Includes an introductory guide to getting started with Google Colab and Unsloth.
  • Feedback welcome! If you spot any areas for improvement, please let me know and I will update it promptly.

A Note: My goal isn't just to detail a workflow, but to demystify LLM training. Beyond the social media hype, fine-tuning isn't an unattainable ritual—often, all you need is a Google account, a standard laptop, and relentless curiosity.

No one starts as an expert, but every expert was once brave enough to begin.

All training and testing for this project were self-funded. If you find this model or guide helpful, a Star ⭐️ on GitHub would be the greatest encouragement. Thank you! 🙏

> [!NOTE]
> The Claude series model optimizations are named under the Qwopus3.5 series, with the latest version being 🌟 Qwopus3.5-v3.


📢 Announcement

Update: This model has been further enhanced with additional reasoning data distilled from Qwen3.5-27B.

The new training data introduces higher-quality reasoning trajectories across domains such as science, instruction-following, and mathematics.

Part of the data comes from Jackrong/Qwen3.5-reasoning-700x, a curated dataset designed to improve structured step-by-step reasoning and reasoning diversity.


💡 Model Introduction

Qwen3.5-4B-Claude-4.6-Opus-Reasoning-Distilled is a highly capable reasoning model fine-tuned on top of the Qwen3.5-4B dense architecture. The model's core design goal is to leverage state-of-the-art Chain-of-Thought (CoT) distillation sourced primarily from Claude-4.6 Opus interactions.

Through Supervised Fine-Tuning (SFT) focusing specifically on structured reasoning logic, this model excels in breaking down complex user problems, planning step-by-step methodologies within strictly formatted <think> tags, and ultimately delivering precise, nuanced solutions.
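This strict output contract makes post-processing straightforward. As a minimal stdlib sketch (the function name and example strings below are illustrative, not from the model card), the reasoning trace and final answer can be separated like so:

```python
import re

def split_reasoning(response: str) -> tuple[str, str]:
    """Split a model response into its <think> reasoning trace and the
    final answer, per the <think>{reasoning}</think>\n{answer} format."""
    match = re.match(r"\s*<think>(.*?)</think>\s*(.*)", response, re.DOTALL)
    if match is None:
        # No reasoning block found: treat the whole text as the answer.
        return "", response.strip()
    reasoning, answer = match.groups()
    return reasoning.strip(), answer.strip()

reply = "<think>\n1. Identify the objective.\n2. Plan the steps.\n</think>\nThe answer is 42."
thought, answer = split_reasoning(reply)
print(answer)  # -> The answer is 42.
```

The non-greedy `(.*?)` stops at the first closing tag, so a stray `</think>` in the answer body is left untouched.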

🗺️ Training Pipeline Overview

```text
Base Model (Qwen3.5-4B)
 │
 ▼
Supervised Fine-Tuning (SFT) + LoRA
(Response-Only Training masked on "<|im_start|>assistant\n<think>")
 │
 ▼
Final Model (text-only): Qwen3.5-4B-Claude-4.6-Opus-Reasoning-Distilled
```
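The LoRA step in this pipeline trains a small low-rank delta on top of the frozen base weights rather than updating the full matrices. A toy numerical sketch of the idea (plain Python with tiny dimensions; this is an illustration of the math, not the actual Unsloth implementation):

```python
def matmul(a, b):
    """Naive matrix multiply for small toy matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

# Frozen base weight W (2x2) and trainable low-rank factors A (2x1), B (1x2).
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[1.0], [2.0]]          # down-projection, rank r = 1
B = [[0.5, 0.5]]            # up-projection
alpha, r = 2.0, 1

# Effective weight after merging: W' = W + (alpha / r) * A @ B
delta = matmul(A, B)
W_eff = [[W[i][j] + (alpha / r) * delta[i][j] for j in range(2)]
         for i in range(2)]
print(W_eff)  # -> [[2.0, 1.0], [2.0, 3.0]]
```

Only A and B receive gradients during SFT, which is why a 4B model can be fine-tuned on a single Colab GPU.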

🧠 Example of Learned Reasoning Scaffold

The model includes targeted optimizations addressing Qwen3.5’s tendency toward excessive transitional or repetitive reasoning on simple queries. Through deep distillation and structural imitation of Claude-4.6-Opus reasoning chains, the model adopts a more efficient structured thinking pattern:
“Let me analyze this request carefully: 1..2..3...”.
This streamlined reasoning paradigm significantly reduces redundant cognitive loops while preserving deep analytical capacity, resulting in substantially improved inference efficiency.

```text
Let me analyze this request carefully:

1. Identify the core objective of the problem.
2. Break the task into clearly defined subcomponents.
3. Evaluate constraints and edge cases.
4. Formulate a step-by-step solution plan.
5. Execute the reasoning sequentially and verify consistency.
...
```

🔹 Supervised Fine-Tuning (SFT)

  • Objective: To inject high-density reasoning logic and establish a strict format for problem-solving involving an internal thinking state prior to outputting the final response.
  • Method: We utilized Unsloth for highly efficient memory and compute optimization. A critical component of this stage is the train_on_responses_only strategy, masking instructions so the loss is purely calculated over the generation of the <think> sequences and the subsequent solutions.
  • Format Enforcement: All training samples were systematically normalized so the model strictly abides by the structure <think> {internal reasoning} </think>\n {final answer}.
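The train_on_responses_only strategy amounts to replacing every label up to and including the assistant response header with the ignore index, so cross-entropy loss covers only the <think> trace and the answer. A simplified sketch over string tokens (Unsloth operates on real tokenizer IDs; names and the toy example here are illustrative):

```python
IGNORE_INDEX = -100  # label value skipped by cross-entropy loss
MARKER = "<|im_start|>assistant\n"

def mask_prompt_labels(tokens: list[str], labels: list[int]) -> list[int]:
    """Set labels to IGNORE_INDEX for every token up to and including
    the assistant marker, so loss is computed only over the model's
    <think> reasoning and final answer."""
    masked = list(labels)
    cut = tokens.index(MARKER) + 1 if MARKER in tokens else 0
    for i in range(cut):
        masked[i] = IGNORE_INDEX
    return masked

tokens = ["<|im_start|>user\n", "What is 2+2?", "<|im_start|>assistant\n",
          "<think>", "2 + 2 = 4", "</think>", "4"]
labels = [10, 11, 12, 13, 14, 15, 16]   # toy token ids
print(mask_prompt_labels(tokens, labels))
# -> [-100, -100, -100, 13, 14, 15, 16]
```

If the marker is absent (e.g. a malformed sample), this sketch trains on everything; in practice such samples would be filtered out during normalization.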

📈 Training Loss Curve

The training loss showed a strong and healthy downward trend throughout the run, demonstrating effective knowledge distillation. Starting from an initial loss of 0.74356, the model converged steadily to a final loss of 0.23984 — indicating the model successfully internalized the structured <think> reasoning patterns from the Claude 4.6 Opus teacher data.

📚 All Datasets Used

The dataset consists of high-quality, filtered reasoning distillation data:

| Dataset Name | Description / Purpose |
| --- | --- |
| nohurry/Opus-4.6-Reasoning-3000x-filtered | Provides comprehensive Claude 4.6 Opus reasoning trajectories. |
| TeichAI/claude-4.5-opus-high-reasoning-250x | Injects high-intensity, structured reasoning instances. |
| Jackrong/Qwen3.5-reasoning-700x | Additional curated reasoning samples designed to strengthen structured step-by-step problem solving and improve reasoning diversity. |

🌟 Benchmark

| Benchmark | Baseline (4B) | Distilled (4B) |
| --- | --- | --- |
| GPQA Diamond (0-shot) | 33.82 | 38.88 |
| AI2 ARC-Challenge (25-shot) | 64.59 | 66.38 |

These evaluation results were originally reported by khitsly. Evaluation was conducted using the EleutherAI lm-evaluation-harness with 8-bit inference and temperature 0. Higher scores indicate better performance.

🌟 Core Skills & Capabilities

  1. Modular & Structured Thinking: Inheriting traits from Opus-level reasoning, the model parses the prompt confidently and lays out a sequential, outlined plan in its <think> block, rather than falling into exploratory "trial-and-error" self-doubt.
  2. Extended Context Support: Fine-tuned with a 16,384-token context window, allowing complex multi-step reasoning traces to fit gracefully within memory limits.

⚠️ Limitations & Intended Use

  • Hallucination Risk: While reasoning is strong, the model remains an autoregressive LLM; facts stated during the thinking sequence may occasionally be hallucinated, especially when verifying real-world events.
  • Intended Scenario: Best suited for offline analytical tasks, coding, math, and heavy logic-dependent prompting where the user needs to transparently follow the AI's internal logic.
  • This model is a test version intended solely for learning, demonstration, academic research, and technical exploration.

🙏 Acknowledgements

Significant thanks to the Unsloth AI team for making rapid fine-tuning of large language models accessible. We also thank the Qwen team and the open-source community developers producing exceptional distilled datasets (nohurry and TeichAI).

Specifications

  • Category: Chat
  • Access: API & Local
  • License: Open Source
  • Pricing: Open Source
  • Parameters: 4B
  • Rating: 2.5
