by LuffyTheFox
Open source · 86k downloads · 99 likes
The Qwen3.5 9B Claude 4.6 Opus Uncensored Distilled GGUF model is an optimized version of Qwen3.5-9B, built to excel at structured reasoning and complex problem solving. Through advanced distillation of Claude 4.6 Opus's reasoning capabilities, it takes a methodical approach, decomposing questions into clearly defined logical steps wrapped in `<think>` tags before delivering a precise, nuanced answer. The model stands out for its handling of varied domains such as science, mathematics, and instruction following, while minimizing redundancy and improving inference efficiency. Ideal for users seeking in-depth analytical assistance without content restrictions, it is particularly well suited to tasks that demand thorough reasoning or detailed planning. Its "no-refusal" approach and training on high-quality data make it a powerful tool for demanding professionals, researchers, and enthusiasts.
Thinking is disabled by default in this model via a modified chat template baked into the GGUF. To enable thinking, change the variable {%- set enable_thinking = False %} to True in the chat template.
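As a sketch, the toggle is just a text edit to the template (in LM Studio you would make this change in the model's chat-template settings; the snippet below only illustrates the string substitution, and the template body shown is a placeholder, not the real baked-in template):

```python
# Sketch: flipping the thinking toggle in the chat template text.
# Only the enable_thinking line is quoted from this card; the rest
# of the template string is an illustrative placeholder.
template = "{%- set enable_thinking = False %}\n{# ... rest of template ... #}"

# Enable thinking by switching the flag from False to True.
patched = template.replace(
    "{%- set enable_thinking = False %}",
    "{%- set enable_thinking = True %}",
)

print(patched.splitlines()[0])
```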
I extracted the uncensored tensors made by HauhauCS with this script: https://pastebin.com/1qKgR3za and merged them into the Jackrong distilled checkpoint.
For best model performance, use the following settings in LM Studio:
Temperature: 0.7
Top K Sampling: 20
Presence Penalty: 1.5
Top P Sampling: 0.8
Min P Sampling: 0
Seed: 3407 or 42
And this system prompt: https://pastebin.com/pU25DVnB
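If you run the GGUF outside LM Studio, the same settings map onto common sampler parameters. A minimal sketch, where the keyword names follow the llama-cpp-python `create_chat_completion` convention (an assumption; adjust for your runtime):

```python
# Recommended sampler settings from above, expressed as keyword arguments.
# Parameter names assume a llama-cpp-python-style API; verify against
# your inference runtime before use.
SAMPLER_SETTINGS = {
    "temperature": 0.7,
    "top_k": 20,
    "top_p": 0.8,
    "min_p": 0.0,
    "presence_penalty": 1.5,
    "seed": 3407,  # or 42
}

def chat_kwargs(messages, **overrides):
    """Merge the recommended defaults with any per-call overrides."""
    kwargs = {**SAMPLER_SETTINGS, **overrides}
    kwargs["messages"] = messages
    return kwargs

# Example call: llm.create_chat_completion(**chat_kwargs(messages, seed=42))
```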
Update: This model has been further enhanced with additional reasoning data distilled from Qwen3.5-27B.
The new training data introduces higher-quality reasoning trajectories across domains such as science, instruction-following, and mathematics.
Part of the data comes from Jackrong/Qwen3.5-reasoning-700x, a curated dataset designed to improve structured step-by-step reasoning and reasoning diversity.

Qwen3.5-9B-Claude-4.6-Opus-Reasoning-Distilled is a highly capable reasoning model fine-tuned on top of the Qwen3.5-9B dense architecture. It is built around state-of-the-art Chain-of-Thought (CoT) distillation, sourced primarily from Claude-4.6 Opus interactions.
Through Supervised Fine-Tuning (SFT) focusing specifically on structured reasoning logic, this model excels in breaking down complex user problems, planning step-by-step methodologies within strictly formatted <think> tags, and ultimately delivering precise, nuanced solutions.
Base Model (Qwen3.5-9B)
│
▼
Supervised Fine-Tuning (SFT) + LoRA
(Response-Only Training masked on "<|im_start|>assistant\n<think>")
│
▼
Final Model Text-only (Qwen3.5-9B-Claude-4.6-Opus-Reasoning-Distilled)
The model includes targeted optimizations addressing Qwen3.5’s tendency toward excessive transitional or repetitive reasoning on simple queries. Through deep distillation and structural imitation of Claude-4.6-Opus reasoning chains, the model adopts a more efficient structured thinking pattern:
“Let me analyze this request carefully: 1..2..3...”.
This streamlined reasoning paradigm significantly reduces redundant cognitive loops while preserving deep analytical capacity, resulting in substantially improved inference efficiency.
Let me analyze this request carefully:
1. Identify the core objective of the problem.
2. Break the task into clearly defined subcomponents.
3. Evaluate constraints and edge cases.
4. Formulate a step-by-step solution plan.
5. Execute the reasoning sequentially and verify consistency.
...
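Downstream code usually wants the reasoning and the final answer as separate strings. A minimal sketch of splitting the `<think> … </think>` block out of a completion (the regex approach is my assumption about how you consume the output, not part of the model itself):

```python
import re

# Match one <think>...</think> block, including trailing whitespace.
THINK_RE = re.compile(r"<think>(.*?)</think>\s*", re.DOTALL)

def split_reasoning(completion: str) -> tuple[str, str]:
    """Return (reasoning, final_answer); reasoning is '' if no block found."""
    match = THINK_RE.search(completion)
    if match is None:
        return "", completion.strip()
    reasoning = match.group(1).strip()
    answer = THINK_RE.sub("", completion, count=1).strip()
    return reasoning, answer

reasoning, answer = split_reasoning(
    "<think>1. Identify the core objective.</think>\nThe answer is 42."
)
```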
Training used a train_on_responses_only strategy, masking the instruction tokens so the loss is computed purely over the generation of the <think> sequences and the subsequent solutions: <think> {internal reasoning} </think>\n {final answer}. The training loss showed a strong, healthy downward trend throughout the run, demonstrating effective knowledge distillation. Starting from an initial loss of 0.5138, the model converged steadily to a final loss of 0.35786, indicating that the model successfully internalized the structured <think> reasoning patterns from the Claude 4.6 Opus teacher data.
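Response-only training amounts to label masking: every token before the assistant response gets the ignore label so it contributes no loss. A generic illustration of the idea (not the exact Unsloth implementation):

```python
IGNORE_INDEX = -100  # the standard "ignore" label for cross-entropy loss

def mask_prompt_labels(token_ids, response_start):
    """Copy token_ids into labels, masking everything before the response
    (i.e. the instruction/prompt tokens) with IGNORE_INDEX so the loss is
    computed only over the <think> block and the final answer."""
    return [
        IGNORE_INDEX if i < response_start else tok
        for i, tok in enumerate(token_ids)
    ]

# Toy example: tokens 0-3 are the prompt, tokens 4+ are the assistant response.
labels = mask_prompt_labels([11, 12, 13, 14, 21, 22, 23], response_start=4)
```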
The dataset consists of high-quality, filtered reasoning distillation data:
| Dataset Name | Description / Purpose |
|---|---|
| nohurry/Opus-4.6-Reasoning-3000x-filtered | Provides comprehensive Claude 4.6 Opus reasoning trajectories. |
| TeichAI/claude-4.5-opus-high-reasoning-250x | Injects high-intensity, structured reasoning instances. |
| Jackrong/Qwen3.5-reasoning-700x | Additional curated reasoning samples designed to strengthen structured step-by-step problem solving and improve reasoning diversity. |
The model works through its <think> block sequentially rather than through exploratory "trial-and-error" self-doubt.

Many thanks to the Unsloth AI team for making rapid fine-tuning of large LLMs accessible. Thanks also to the Qwen team and to the open-source community developers producing exceptional distilled datasets (nohurry and TeichAI).