Qwen2 0.5B Instruct is a lightweight, instruction-tuned language model from the Qwen2 family. Designed to compete with the best open-source models, it handles a broad range of tasks such as text understanding and generation, logical reasoning, mathematical problem solving, and coding. Its training combines large-scale pretraining on diverse data with advanced alignment techniques to improve relevance and accuracy. Well suited to applications that need a capable yet resource-efficient model, it fits both consumer and professional use cases. What sets it apart is its balance of efficiency and versatility, offering a competitive and accessible alternative to proprietary models.
Qwen2 is the new series of Qwen large language models. For Qwen2, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters, including a Mixture-of-Experts model. This repo contains the instruction-tuned 0.5B Qwen2 model.
Compared with state-of-the-art open-source language models, including the previously released Qwen1.5, Qwen2 has generally surpassed most open-source models and demonstrated competitiveness against proprietary models across a series of benchmarks targeting language understanding, language generation, multilingual capability, coding, mathematics, reasoning, and more.
For more details, please refer to our blog, GitHub, and Documentation.
Qwen2 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group-query attention, etc. Additionally, we have an improved tokenizer adaptive to multiple natural languages and code.
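If you want to confirm these architecture details for a given checkpoint, the model config exposes them. A minimal sketch, assuming the standard `transformers` `Qwen2Config` attribute names (the printed values depend on the checkpoint):

```python
from transformers import AutoConfig

# Load only the configuration, not the weights
config = AutoConfig.from_pretrained("Qwen/Qwen2-0.5B-Instruct")

print(config.hidden_act)           # "silu" (the gated SwiGLU variant)
print(config.num_attention_heads)  # number of query heads
print(config.num_key_value_heads)  # fewer KV heads than query heads => grouped-query attention
```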
We pretrained the models with a large amount of data, and we post-trained the models with both supervised finetuning and direct preference optimization.
The code for Qwen2 has been included in the latest Hugging Face transformers, and we advise you to install transformers>=4.37.0; otherwise you might encounter the following error:
```
KeyError: 'qwen2'
```
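A quick way to verify your installed version before loading the model; a small sketch (`packaging` ships as a dependency of transformers):

```python
from packaging import version
import transformers

# Qwen2 support landed in transformers 4.37.0; older versions raise KeyError: 'qwen2'
assert version.parse(transformers.__version__) >= version.parse("4.37.0"), \
    "Please upgrade: pip install 'transformers>=4.37.0'"
```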
Below is a code snippet showing how to use apply_chat_template to load the tokenizer and model and generate content.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

# Load the model in its native precision and place it on available devices
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2-0.5B-Instruct",
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-0.5B-Instruct")

prompt = "Give me a short introduction to large language models."
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": prompt}
]

# Render the chat messages into the model's expected prompt format
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)

generated_ids = model.generate(
    model_inputs.input_ids,
    max_new_tokens=512
)
# Strip the prompt tokens so only the newly generated tokens remain
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
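For interactive use you may prefer to stream tokens as they are generated rather than waiting for the full completion. A minimal sketch using the `TextStreamer` utility from transformers, reusing `model`, `tokenizer`, and `model_inputs` from the snippet above:

```python
from transformers import TextStreamer

# Prints tokens to stdout as they are generated; skip_prompt hides the echoed input
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
_ = model.generate(
    model_inputs.input_ids,
    max_new_tokens=512,
    streamer=streamer
)
```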
We briefly compare Qwen2-0.5B-Instruct with Qwen1.5-0.5B-Chat (and, for reference, Qwen2-1.5B-Instruct with Qwen1.5-1.8B-Chat). The results are as follows:
| Datasets | Qwen1.5-0.5B-Chat | Qwen2-0.5B-Instruct | Qwen1.5-1.8B-Chat | Qwen2-1.5B-Instruct |
|---|---|---|---|---|
| MMLU | 35.0 | 37.9 | 43.7 | 52.4 |
| HumanEval | 9.1 | 17.1 | 25.0 | 37.8 |
| GSM8K | 11.3 | 40.1 | 35.3 | 61.6 |
| C-Eval | 37.2 | 45.2 | 55.3 | 63.8 |
| IFEval (Prompt Strict-Acc.) | 14.6 | 20.0 | 16.8 | 29.0 |
If you find our work helpful, feel free to give us a cite.
```
@article{qwen2,
  title={Qwen2 Technical Report},
  year={2024}
}
```