Qwen3-Coder-Next-AWQ-4bit is an optimized, 4-bit quantized version of Qwen3-Coder-Next, built specifically for coding agents and local development. With only 3 billion parameters activated out of 80 billion total, it delivers performance comparable to much heavier models while remaining extremely resource-efficient. It stands out for its long-horizon reasoning, complex tool use, and resilience to execution failures, which makes it well suited to dynamic, autonomous programming tasks. With a 256K-token context length and flexible integration with development environments (IDEs, CLIs), it fits easily into existing workflows such as Claude Code or Qwen Code. Its hybrid design, combining gated attention with a mixture of experts (MoE), lets it handle varied projects while minimizing memory footprint, making it a strong choice for developers who want performance and autonomy without heavy hardware requirements.
Model Size: 80B total parameters, 3B activated | Quantization: 4-bit AWQ | VRAM: ~45GB
This is a 4-bit AWQ quantized version of Qwen3-Coder-Next, an open-weight language model designed specifically for coding agents and local development.
This model was quantized using llm-compressor with AWQ (Activation-aware Weight Quantization).
| Property | Value |
|---|---|
| Base Model | Qwen/Qwen3-Coder-Next |
| Quantization Method | AWQ |
| Quantization Tool | llm-compressor |
| Calibration Dataset | nvidia/Llama-Nemotron-Post-Training-Dataset |
| Bits | 4 |
| Group Size | 32 |
| Symmetric | Yes |
| Strategy | Group |
| Observer | MSE |
| Format | pack-quantized |
| Quant Method | compressed-tensors |
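To make the table's knobs concrete, here is a minimal pure-Python sketch of symmetric group-wise 4-bit quantization with group size 32. This is an illustration only, not how llm-compressor implements it; in particular, the MSE observer is replaced here by a simple abs-max scale.

```python
# Illustrative sketch of symmetric 4-bit group quantization (group size 32).
# Not the llm-compressor internals; scale selection here is abs-max, not MSE.
def quantize_group(weights, bits=4):
    # symmetric signed 4-bit range is [-8, 7]; scale maps the largest
    # magnitude in the group onto the top representable level
    qmax = 2 ** (bits - 1) - 1                    # 7 for 4-bit
    scale = max(abs(w) for w in weights) / qmax   # assumes a non-zero group
    q = [max(-qmax - 1, min(qmax, round(w / scale))) for w in weights]
    return q, scale

def dequantize_group(q, scale):
    return [v * scale for v in q]

# one group of 32 weights with alternating sign
group = [(-1) ** i * (i / 31) for i in range(32)]
q, scale = quantize_group(group)
recon = dequantize_group(q, scale)
max_err = max(abs(a - b) for a, b in zip(group, recon))
```

In the real pipeline, the per-group scale is chosen by an MSE observer rather than abs-max, and the integer values are stored in the `pack-quantized` format.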
| Type | Size |
|---|---|
| Original (BF16) | ~151 GB |
| Quantized (4-bit) | ~45 GB |
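A rough back-of-envelope check on those sizes shows why the 4-bit model is ~45 GB rather than a naive 151/4 ≈ 38 GB: a slice of the parameters stays at 16-bit precision. The 95% quantized fraction below is an assumption for illustration, not a published figure.

```python
# Back-of-envelope size estimate; the quantized fraction is an assumption.
total_params = 80e9                          # 80B parameters
bf16_gb = total_params * 2 / 1e9             # 2 bytes/param -> ~160 GB raw

quantized_frac = 0.95                        # assumed share of 4-bit weights
quant_gb = (total_params * quantized_frac * 0.5        # 4 bits = 0.5 byte
            + total_params * (1 - quantized_frac) * 2  # rest kept at 16-bit
            ) / 1e9
print(f"BF16: ~{bf16_gb:.0f} GB, 4-bit mix: ~{quant_gb:.0f} GB")
```

Per-group scales (one per 32 weights) and packing metadata shift the result by a few GB; the actual checkpoint lands near the ~45 GB reported above.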
To preserve model quality, the following components are kept at higher precision:
- Embeddings (`model.embed_tokens`)
- LM head (`lm_head`)
- Normalization layers (`*norm*`, `*RMSNorm*`, `*input_layernorm`, `*post_attention_layernorm`)
- Full attention (`self_attn.q_proj`, `self_attn.k_proj`, `self_attn.v_proj`, `self_attn.o_proj`)
- Linear attention (`linear_attn.in_proj_qkvz`, `linear_attn.in_proj_ba`, `linear_attn.out_proj`, `linear_attn.norm`, `linear_attn.conv1d`, `linear_attn.A_log`, `linear_attn.dt_bias`)
- Expert router gates (`mlp.gate`, `shared_expert_gate`)
- Shared experts (`shared_expert.gate_proj`, `shared_expert.up_proj`, `shared_expert.down_proj`)
- Multi-token prediction head (`mtp.*`)

The quantization uses duo scaling with the following activation smoothing mappings:

- `post_attention_layernorm` → `mlp.experts.*.gate_proj`, `mlp.experts.*.up_proj`
- `mlp.experts.*.up_proj` → `mlp.experts.*.down_proj`

Qwen3-Coder-Next is a mixture-of-experts causal language model with 80B total parameters (3B activated) and a native context length of 256K tokens.
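The intuition behind these smoothing mappings can be shown with a toy identity: dividing the activations by a per-channel scale while multiplying the matching weight rows by the same scale leaves the layer output unchanged, which is what lets AWQ shift quantization difficulty between the mapped modules. This is a pure-Python sketch, not the library's code, and the scales below are made up.

```python
# AWQ-style smoothing intuition: x @ w == (x / s) @ (s * w) per channel.
def matmul(x, w):                 # x: [n], w: [n][m]
    return [sum(x[i] * w[i][j] for i in range(len(x)))
            for j in range(len(w[0]))]

x = [4.0, 0.5, 2.0]                               # activations
w = [[1.0, -2.0], [3.0, 0.5], [-1.0, 2.0]]        # weights
s = [2.0, 0.25, 1.0]                              # hypothetical per-channel scales

x_smooth = [xi / si for xi, si in zip(x, s)]      # flatten activation outliers
w_smooth = [[wij * si for wij in row] for row, si in zip(w, s)]

y = matmul(x, w)
y_smooth = matmul(x_smooth, w_smooth)             # identical output
```

The mappings above just record which module's output provides the activations (e.g. `post_attention_layernorm`) and which downstream projections absorb the compensating scale.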
NOTE: This model supports only non-thinking mode and does not generate `<think></think>` blocks in its output. Specifying `enable_thinking=False` is no longer required.
For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to the blog, GitHub, and Documentation.
We advise you to use the latest version of `transformers`. The following code snippet illustrates how to use the model to generate content from a given input.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "bullpoint/Qwen3-Coder-Next-AWQ-4bit"

# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

# prepare the model input
prompt = "Write a quick sort algorithm."
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# conduct text completion
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=65536
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
content = tokenizer.decode(output_ids, skip_special_tokens=True)

print("content:", content)
```
Note: If you encounter out-of-memory (OOM) issues, consider reducing the context length to a shorter value, such as 32,768.
For local use, applications such as Ollama, LM Studio, MLX-LM, llama.cpp, and KTransformers also support Qwen3.
For deployment, you can use the latest sglang or vllm to create an OpenAI-compatible API endpoint.
SGLang is a fast serving framework for large language models and vision-language models. It can be used to launch a server with an OpenAI-compatible API.
`sglang>=v0.5.8` is required for Qwen3-Coder-Next, which can be installed using:

```shell
pip install 'sglang[all]>=v0.5.8'
```
See its documentation for more details.
The following command creates an API endpoint at http://localhost:30000/v1 with a maximum context length of 256K tokens, using tensor parallelism across 2 GPUs:

```shell
python -m sglang.launch_server --model bullpoint/Qwen3-Coder-Next-AWQ-4bit --port 30000 --tp-size 2 --tool-call-parser qwen3_coder
```
> [!NOTE]
> The default context length is 256K. Consider reducing the context length to a smaller value, e.g., 32768, if the server fails to start.
vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs. It can be used to launch a server with an OpenAI-compatible API.
`vllm>=0.15.0` is required for Qwen3-Coder-Next, which can be installed using:

```shell
pip install 'vllm>=0.15.0'
```
See its documentation for more details.
The following command creates an API endpoint at http://localhost:8000/v1 with a maximum context length of 256K tokens, using tensor parallelism across 2 GPUs:

```shell
vllm serve bullpoint/Qwen3-Coder-Next-AWQ-4bit --port 8000 --tensor-parallel-size 2 --enable-auto-tool-choice --tool-call-parser qwen3_coder
```
> [!NOTE]
> The default context length is 256K. Consider reducing the context length to a smaller value, e.g., 32768, if the server fails to start.
Qwen3-Coder-Next excels at tool calling. You can define or use any tools as in the following example.
```python
# Your tool implementation; the parameter name must match the schema's "input_num"
def square_the_number(input_num: float) -> float:
    return input_num ** 2

# Define tools
tools = [
    {
        "type": "function",
        "function": {
            "name": "square_the_number",
            "description": "Output the square of the number.",
            "parameters": {
                "type": "object",
                "required": ["input_num"],
                "properties": {
                    "input_num": {
                        "type": "number",
                        "description": "input_num is a number that will be squared"
                    }
                },
            }
        }
    }
]
```
```python
from openai import OpenAI

# Point the client at the local OpenAI-compatible endpoint
client = OpenAI(
    base_url="http://localhost:8000/v1",  # api_base
    api_key="EMPTY"
)

messages = [{"role": "user", "content": "square the number 1024"}]

completion = client.chat.completions.create(
    messages=messages,
    model="bullpoint/Qwen3-Coder-Next-AWQ-4bit",  # served model name from `vllm serve`
    max_tokens=65536,
    tools=tools,
)
print(completion.choices[0])
```
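When the model actually returns a tool call, the client still has to execute it and send the result back. Here is a minimal sketch of that dispatch step, using a hand-built stand-in for `completion.choices[0].message.tool_calls[0].function` so it runs without a live server:

```python
import json

def square_the_number(input_num: float) -> float:
    return input_num ** 2

# registry mapping tool names to local implementations
TOOLS = {"square_the_number": square_the_number}

# stand-in with the same shape the server returns: a function name plus a
# JSON-encoded arguments string (hypothetical payload, not a real response)
fake_call = {"name": "square_the_number",
             "arguments": json.dumps({"input_num": 1024})}

fn = TOOLS[fake_call["name"]]
args = json.loads(fake_call["arguments"])
result = fn(**args)

# package the result as a "tool" role message for the follow-up request
tool_message = {"role": "tool", "content": str(result)}
```

In a real loop you would append `tool_message` to `messages` (along with the tool call id the server returned) and call `client.chat.completions.create` again so the model can produce its final answer.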
To achieve optimal performance, we recommend the following sampling parameters: `temperature=1.0`, `top_p=0.95`, `top_k=40`.
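For intuition, these parameters compose like this: temperature rescales the logits, `top_k` keeps the 40 most probable tokens, and `top_p` then keeps the smallest set whose cumulative probability reaches 0.95. A toy pure-Python sketch of that pipeline (serving engines implement this far more efficiently):

```python
import math

def sample_filter(logits, temperature=1.0, top_k=40, top_p=0.95):
    # temperature rescales the logits before softmax
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(l - m) for l in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # top-k: keep only the k most probable token indices
    order = sorted(range(len(probs)), key=lambda i: -probs[i])[:top_k]
    # top-p: keep the smallest prefix whose cumulative mass reaches top_p
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:
            break
    # renormalize over the surviving tokens
    mass = sum(probs[i] for i in kept)
    return {i: probs[i] / mass for i in kept}

# 4-token toy vocabulary: only the two likeliest tokens survive nucleus filtering
dist = sample_filter([5.0, 3.0, 1.0, -2.0])
```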
If you find our work helpful, feel free to cite it.
```bibtex
@techreport{qwen_qwen3_coder_next_tech_report,
  title  = {Qwen3-Coder-Next Technical Report},
  author = {{Qwen Team}},
  url    = {https://github.com/QwenLM/Qwen3-Coder/blob/main/qwen3_coder_next_tech_report.pdf},
  note   = {Accessed: 2025}
}
```