Qwen2.5 7B Instruct 1M is an advanced language model specialized in processing extremely long contexts, handling up to 1 million tokens. Designed to excel at tasks that require deep understanding of large documents, it combines an optimized architecture with attention mechanisms adapted to maintain high performance, even on sequences exceeding 256,000 tokens. Its core capabilities include analysis of long texts, generation of accurate answers from massive inputs, and efficient handling of long-range information, while remaining highly responsive on shorter queries. Well suited to applications such as legal document analysis, technical report summarization, and knowledge-base management, it stands out for its balance of power and efficiency, made possible by dedicated hardware and software optimizations.
Qwen2.5-1M is the long-context version of the Qwen2.5 series models, supporting a context length of up to 1M tokens. Compared to the Qwen2.5 128K version, Qwen2.5-1M demonstrates significantly improved performance in handling long-context tasks while maintaining its capability in short tasks.
The model has the following features:
For more details, please refer to our blog, GitHub, Technical Report, and Documentation.
The code for Qwen2.5 has been merged into the latest Hugging Face transformers, and we advise you to use the latest version of transformers.
With transformers<4.37.0, you will encounter the following error:
KeyError: 'qwen2'
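As an optional sanity check, you can verify the installed transformers version before loading the model; the 4.37.0 threshold below comes from the error mentioned above:
# Optional sketch: fail early if transformers is too old for the qwen2 architecture.
import transformers
from packaging import version

if version.parse(transformers.__version__) < version.parse("4.37.0"):
    raise RuntimeError(
        f"transformers {transformers.__version__} is too old for Qwen2.5; "
        "please upgrade to the latest version."
    )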
The following code snippet shows how to use apply_chat_template to load the tokenizer and model, and how to generate content.
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen2.5-7B-Instruct-1M"
# Load the model with automatic dtype selection and device placement
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
# Keep only the newly generated tokens, dropping the prompt tokens
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
To enhance processing accuracy and efficiency for long sequences, we have developed an advanced inference framework based on vLLM, incorporating sparse attention and length extrapolation. This approach significantly improves model generation performance for sequences exceeding 256K tokens and achieves a 3 to 7 times speedup for sequences up to 1M tokens.
Here we provide step-by-step instructions for deploying the Qwen2.5-1M models with our framework.
To achieve the best performance, we recommend using GPUs with Ampere or Hopper architecture, which support optimized kernels.
Ensure your system meets the following requirements:
VRAM Requirements:
If your GPUs do not have sufficient VRAM, you can still use Qwen2.5-1M for shorter tasks.
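To check whether your GPUs match the architecture recommendation above, here is a quick sketch using PyTorch (assuming CUDA is available); Ampere GPUs report compute capability 8.x and Hopper 9.x:
# Print the name and compute capability of each visible GPU.
import torch

for i in range(torch.cuda.device_count()):
    name = torch.cuda.get_device_name(i)
    major, minor = torch.cuda.get_device_capability(i)
    print(f"GPU {i}: {name} (compute capability {major}.{minor})")
    if major < 8:
        print("  Note: pre-Ampere GPU; the optimized kernels may not be supported.")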
For now, you need to clone the vLLM repository from our custom branch and install it manually. We are working on getting our branch merged into the main vLLM project.
git clone -b dev/dual-chunk-attn git@github.com:QwenLM/vllm.git
cd vllm
pip install -e . -v
vLLM supports offline inference as well as launching an OpenAI-like server.
Example of Offline Inference
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams
# Initialize the tokenizer
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B-Instruct-1M")
# Pass the default decoding hyperparameters of Qwen2.5-7B-Instruct
# max_tokens sets the maximum generation length.
sampling_params = SamplingParams(temperature=0.7, top_p=0.8, repetition_penalty=1.05, max_tokens=512)
# Input the model name or path. See below for parameter explanation (after the example of openai-like server).
llm = LLM(model="Qwen/Qwen2.5-7B-Instruct-1M",
tensor_parallel_size=4,
max_model_len=1010000,
enable_chunked_prefill=True,
max_num_batched_tokens=131072,
enforce_eager=True,
# quantization="fp8", # Enabling FP8 quantization for model weights can reduce memory usage.
)
# Prepare your prompts
prompt = "Tell me something about large language models."
messages = [
{"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
# generate outputs
outputs = llm.generate([text], sampling_params)
# Print the outputs.
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
Example of OpenAI-like Server
vllm serve Qwen/Qwen2.5-7B-Instruct-1M \
--tensor-parallel-size 4 \
--max-model-len 1010000 \
--enable-chunked-prefill --max-num-batched-tokens 131072 \
--enforce-eager \
--max-num-seqs 1
# --quantization fp8 # Enabling FP8 quantization for model weights can reduce memory usage.
Then you can use curl or Python to interact with the deployed model.
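For instance, a minimal Python sketch using the openai client package, assuming the server above is reachable at http://localhost:8000/v1 (the API key is a placeholder, since vLLM does not require one by default):
# Query the OpenAI-compatible endpoint started by `vllm serve`.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
completion = client.chat.completions.create(
    model="Qwen/Qwen2.5-7B-Instruct-1M",
    messages=[
        {"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
        {"role": "user", "content": "Tell me something about large language models."},
    ],
    temperature=0.7,
    top_p=0.8,
    max_tokens=512,
)
print(completion.choices[0].message.content)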
Parameter Explanations:
--tensor-parallel-size: Set to the number of GPUs you are using; spreading the model across more GPUs also leaves more VRAM for the KV cache.
--max-model-len: Defines the maximum input sequence length the engine will accept. Reduce this value if you encounter out-of-memory errors.
--max-num-batched-tokens: Sets the chunk size used by chunked prefill. Smaller values reduce activation memory usage but may slow down inference.
--max-num-seqs: Limits the number of sequences processed concurrently.
You can also refer to our Documentation for usage of vLLM.
Encountering the error: "The model's max sequence length (xxxxx) is larger than the maximum number of tokens that can be stored in the KV cache."
The VRAM reserved for the KV cache is insufficient. Consider reducing the max_model_len or increasing the tensor_parallel_size. Alternatively, you can reduce max_num_batched_tokens, although this may significantly slow down inference.
Encountering the error: "torch.OutOfMemoryError: CUDA out of memory."
The VRAM reserved for activation weights is insufficient. You can try setting gpu_memory_utilization to 0.85 or lower, but be aware that this might reduce the VRAM available for the KV cache.
Encountering the error: "Input prompt (xxxxx tokens) + lookahead slots (0) is too long and exceeds the capacity of the block manager."
The input is too lengthy. Consider using a shorter sequence or increasing the max_model_len.
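Putting the mitigations above together, here is a hedged sketch of a more conservative offline configuration for GPUs with less VRAM; the specific values are illustrative, not recommendations from this card:
# Illustrative lower-memory settings: shorter context, smaller prefill chunks,
# and a lower gpu_memory_utilization to leave headroom for activations.
from vllm import LLM

llm_small = LLM(
    model="Qwen/Qwen2.5-7B-Instruct-1M",
    tensor_parallel_size=2,        # match the number of available GPUs
    max_model_len=262144,          # reduce if the KV cache does not fit
    enable_chunked_prefill=True,
    max_num_batched_tokens=32768,  # smaller chunks lower activation memory but slow prefill
    gpu_memory_utilization=0.85,
    enforce_eager=True,
)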
Detailed evaluation results are reported in this 📑 blog and our technical report.
If you find our work helpful, feel free to cite our work.
@misc{qwen2.5-1m,
title = {Qwen2.5-1M: Deploy Your Own Qwen with Context Length up to 1M Tokens},
url = {https://qwenlm.github.io/blog/qwen2.5-1m/},
author = {Qwen Team},
month = {January},
year = {2025}
}
@article{qwen2.5,
title={Qwen2.5-1M Technical Report},
author={An Yang and Bowen Yu and Chengyuan Li and Dayiheng Liu and Fei Huang and Haoyan Huang and Jiandong Jiang and Jianhong Tu and Jianwei Zhang and Jingren Zhou and Junyang Lin and Kai Dang and Kexin Yang and Le Yu and Mei Li and Minmin Sun and Qin Zhu and Rui Men and Tao He and Weijia Xu and Wenbiao Yin and Wenyuan Yu and Xiafei Qiu and Xingzhang Ren and Xinlong Yang and Yong Li and Zhiying Xu and Zipeng Zhang},
journal={arXiv preprint arXiv:2501.15383},
year={2025}
}