by Qwen
Qwen1.5 7B is an advanced language model based on a Transformer architecture, designed to understand and generate text fluently and naturally. Available in several sizes, the model stands out for its improved performance, notably in the dialogue-optimized variants, and supports a context of up to 32,000 tokens, which makes it well suited to long conversations or analyses. It excels at multilingual processing, covering several natural languages as well as code, thanks to an adapted tokenizer. Its flexible lineup, including chat and alignment variants, allows it to be used in a wide range of applications such as virtual assistants, content generation, or text analysis. What makes it particularly effective is its balance between power and accessibility, offering a capable solution without requiring complex configurations.
Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data. In comparison with the previously released Qwen, the improvements include:

- multiple model sizes, each released with a base model and an aligned chat model;
- significant performance improvement in the chat models;
- multilingual support of both base and chat models;
- stable support of 32K context length for models of all sizes;
- no need of trust_remote_code.

For more details, please refer to our blog post and GitHub repo.
Qwen1.5 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, grouped query attention, a mixture of sliding window attention and full attention, etc. Additionally, we have an improved tokenizer adaptive to multiple natural languages and code. In the beta version, we temporarily did not include GQA (except for 32B) or the mixture of SWA and full attention.
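To see how these architectural choices surface in the released checkpoint, the short sketch below reads the configuration shipped with the repository (no weights are downloaded). The comments describe the 7B beta as characterized above and should be treated as illustrative rather than authoritative for other sizes:

```python
# Minimal sketch: inspect architecture hyperparameters from the checkpoint config.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("Qwen/Qwen1.5-7B")

print(config.model_type)               # "qwen2" -- Qwen1.5 is the beta of Qwen2
print(config.hidden_act)               # SwiGLU-style activation ("silu")
print(config.num_attention_heads)      # number of attention heads
print(config.num_key_value_heads)      # equals num_attention_heads here: no GQA in the 7B beta
print(config.max_position_embeddings)  # supported context length
```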
The code of Qwen1.5 has been merged into the latest Hugging Face transformers, and we advise you to install transformers>=4.37.0; otherwise, you might encounter the following error:
KeyError: 'qwen2'.
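As a quick check that the installed transformers version recognizes the qwen2 architecture, the sketch below loads the tokenizer and weights for this repository. The dtype and device settings are illustrative assumptions, not required values:

```python
# Minimal sketch: load the base model with transformers>=4.37.0.
# Assumes torch and accelerate are installed; dtype/device settings are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen1.5-7B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # keep the dtype stored in the checkpoint
    device_map="auto",    # requires accelerate; spreads weights across available devices
)

# With an older transformers install, the call above fails with KeyError: 'qwen2'.
print(f"Loaded {model.config.model_type} with {model.num_parameters() / 1e9:.1f}B parameters")
```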
We do not advise using the base language model directly for text generation. Instead, apply post-training, e.g., SFT, RLHF, or continued pretraining, to this model, as sketched below.
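As one illustration of the SFT path, the sketch below fine-tunes the base model with the standard transformers Trainer on a hypothetical JSONL file sft_data.jsonl containing a "text" field. The file name, sequence length, and hyperparameters are placeholder assumptions, not recommendations:

```python
# Minimal SFT sketch on top of the base model (hypothetical data file and hyperparameters).
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_id = "Qwen/Qwen1.5-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.pad_token or tokenizer.eos_token  # ensure a pad token is set
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

# Hypothetical dataset: one JSON object per line, e.g. {"text": "..."}.
dataset = load_dataset("json", data_files="sft_data.jsonl", split="train")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=2048)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="qwen1.5-7b-sft",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        learning_rate=1e-5,
        bf16=True,
        logging_steps=10,
    ),
    train_dataset=tokenized,
    # Causal-LM collator: pads batches and builds labels from the input ids.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```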
If you find our work helpful, feel free to cite it.
@article{qwen,
title={Qwen Technical Report},
author={Jinze Bai and Shuai Bai and Yunfei Chu and Zeyu Cui and Kai Dang and Xiaodong Deng and Yang Fan and Wenbin Ge and Yu Han and Fei Huang and Binyuan Hui and Luo Ji and Mei Li and Junyang Lin and Runji Lin and Dayiheng Liu and Gao Liu and Chengqiang Lu and Keming Lu and Jianxin Ma and Rui Men and Xingzhang Ren and Xuancheng Ren and Chuanqi Tan and Sinan Tan and Jianhong Tu and Peng Wang and Shijie Wang and Wei Wang and Shengguang Wu and Benfeng Xu and Jin Xu and An Yang and Hao Yang and Jian Yang and Shusheng Yang and Yang Yao and Bowen Yu and Hongyi Yuan and Zheng Yuan and Jianwei Zhang and Xingxuan Zhang and Yichang Zhang and Zhenru Zhang and Chang Zhou and Jingren Zhou and Xiaohuan Zhou and Tianhang Zhu},
journal={arXiv preprint arXiv:2309.16609},
year={2023}
}