
FLM 2 52B Instruct 2407

by CofeAI

Open source · 20k downloads · 11 likes

Rating: 1.3 (11 reviews) · Embedding · API & Local
About

The FLM 2 52B Instruct 2407 model is an optimized and specialized version of the FLM-2 model, designed to excel in Chinese instruction-based tasks. With a GPT-style architecture and technical enhancements such as RoPE and SwiGLU, it delivers superior performance relative to its size, sometimes outperforming larger models. Its training relies on a rigorous selection of data, which enhances its ability to provide precise and contextually relevant responses. Intended for diverse applications, it stands out for its efficiency and capacity to handle complex domains like logical reasoning, mathematics, or open-ended questions. This model serves as a versatile tool for developers and researchers, particularly in contexts requiring a deep understanding of Chinese.

Documentation

Introduction

FLM-2 (aka Tele-FLM) is our open-source large language model series. The FLM-2 series demonstrates superior performance at its scale, and sometimes surpasses larger models. The currently released versions include [Tele-FLM](https://huggingface.co/CofeAI/Tele-FLM) and [Tele-FLM-1T](https://huggingface.co/CofeAI/Tele-FLM-1T). These models feature a stable, efficient pre-training paradigm and enhanced factual judgment capabilities. This repo contains the instruction-tuned 52B Tele-FLM model, which we have named FLM-2-52B-Instruct.

Model Details

FLM-2-52B-Instruct utilizes the standard GPT-style decoder-only transformer architecture with a few adjustments:

  • Rotary Positional Embedding (RoPE)
  • RMSNorm for normalization
  • SwiGLU for activation function
  • Linear bias disabled
  • Embedding and language model head untied
  • Input and output multiplier
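These adjustments are common in modern decoder-only transformers. As an illustrative sketch only (small hypothetical dimensions, not the model's actual configuration), a bias-free SwiGLU feed-forward block in PyTorch can be written as:

```python
import torch
import torch.nn as nn

class SwiGLU(nn.Module):
    """Sketch of a SwiGLU feed-forward block with linear bias disabled.

    Hypothetical illustration; layer names and sizes are not taken from
    the released model code.
    """

    def __init__(self, hidden_size: int, ffn_hidden_size: int):
        super().__init__()
        # Three bias-free projections: gate, up, and down.
        self.w_gate = nn.Linear(hidden_size, ffn_hidden_size, bias=False)
        self.w_up = nn.Linear(hidden_size, ffn_hidden_size, bias=False)
        self.w_down = nn.Linear(ffn_hidden_size, hidden_size, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Swish(x W_gate) elementwise-multiplied by (x W_up), projected back.
        return self.w_down(nn.functional.silu(self.w_gate(x)) * self.w_up(x))
```

The gated form is why SwiGLU layers carry three weight matrices instead of the two in a plain GELU MLP.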
| Models | layer number | attention heads | hidden size | ffn hidden size | vocab size | params count |
|---|---|---|---|---|---|---|
| FLM-2-52B-Instruct-2407 | 64 | 64 | 8,192 | 21,824 | 80,000 | 52.85 B |

Training details

Unlike conventional fine-tuning methods, we employed an innovative and cost-effective fine-tuning approach. Through specialized screening techniques, we meticulously selected 30,735 samples from a large corpus of fine-tuning data. This refined dataset facilitated the fine-tuning process and yielded promising results.
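The screening technique itself is not detailed here, so the following is only a generic sketch of score-based sample selection: rank a corpus with some quality score and keep the top k samples. The `score_fn` (here, response length as a toy stand-in) is entirely hypothetical.

```python
def select_samples(corpus, score_fn, k=30735):
    """Keep the k highest-scoring samples from a fine-tuning corpus.

    Generic illustration of score-and-filter data selection; the actual
    criteria used for FLM-2-52B-Instruct are not described in this card.
    """
    return sorted(corpus, key=score_fn, reverse=True)[:k]

# Toy usage with response length as a stand-in quality proxy.
corpus = [{"prompt": "p", "response": "r" * n} for n in range(100)]
top = select_samples(corpus, lambda s: len(s["response"]), k=10)
```

Real pipelines typically combine several such scores (length, deduplication, model-based quality ratings) rather than a single heuristic.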

Quickstart

The following minimal example shows how to load the tokenizer and model and generate a response.

Python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# trust_remote_code is required because TeleFLM ships custom model code.
tokenizer = AutoTokenizer.from_pretrained('CofeAI/FLM-2-52B-Instruct-2407', trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained('CofeAI/FLM-2-52B-Instruct-2407', torch_dtype=torch.bfloat16, low_cpu_mem_usage=True, device_map="auto", trust_remote_code=True)

# Multi-turn chat history; the last user turn asks
# "What are the must-see attractions in Beijing?"
history = [
    {"role": "user", "content": "你好"},        # "Hello"
    {"role": "assistant", "content": "你好"},   # "Hello"
    {"role": "user", "content": "北京有哪些必去的景点?"}
]
inputs = tokenizer.apply_chat_template(history, return_tensors='pt').to(model.device)
response = model.generate(inputs, max_new_tokens=128, repetition_penalty=1.03)
print(tokenizer.decode(response.cpu()[0], skip_special_tokens=True))
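Note that `generate` returns the prompt tokens followed by the newly generated tokens, so decoding the full sequence echoes the conversation back. To print only the model's reply, slice off the prompt length first. A minimal stand-alone illustration with made-up token ids:

```python
# Stand-in token-id lists (plain Python; real code would slice tensors
# the same way with response[0, inputs.shape[1]:]).
prompt_ids = [101, 102, 103]          # hypothetical prompt token ids
output_ids = [101, 102, 103, 7, 8]    # generate() echoes the prompt first

new_ids = output_ids[len(prompt_ids):]  # keep only newly generated tokens
```

Decoding `new_ids` instead of `output_ids` yields just the assistant's answer.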

Evaluation

We evaluate the alignment performance of FLM-2-52B-Instruct-2407 in Chinese across various domains utilizing AlignBench. AlignBench is a comprehensive and multidimensional evaluation benchmark designed to assess Chinese large language models’ alignment performance. It encompasses 8 categories with a total of 683 question-answer pairs, covering areas such as fundamental language ability (Fund.), Chinese advanced understanding (Chi.), open-ended questions (Open.), writing ability (Writ.), logical reasoning (Logi.), mathematics (Math.), task-oriented role playing (Role.), and professional knowledge (Pro.).

| Models | Overall | Math. | Logi. | Fund. | Chi. | Open. | Writ. | Role. | Pro. |
|---|---|---|---|---|---|---|---|---|---|
| gpt-4-1106-preview | 7.58 | 7.39 | 6.83 | 7.69 | 7.07 | 8.66 | 8.23 | 8.08 | 8.55 |
| gpt-4-0613 | 6.83 | 6.33 | 5.15 | 7.16 | 6.76 | 7.26 | 7.31 | 7.48 | 7.56 |
| gpt-3.5-turbo-0613 | 5.68 | 4.90 | 4.79 | 6.01 | 5.60 | 6.97 | 7.27 | 6.98 | 6.29 |
| chatglm-turbo | 6.36 | 4.88 | 5.09 | 7.50 | 7.03 | 8.45 | 8.05 | 7.67 | 7.70 |
| FLM-2-52B-Instruct-2407 | 6.23 | 3.79 | 5.15 | 7.69 | 7.86 | 8.45 | 8.17 | 7.88 | 7.85 |

Acknowledgements

This work was supported by the National Science and Technology Major Project (No. 2022ZD0116314).

Citation

If you find our work helpful, please consider citing it.

BibTeX
@article{tele-flm-1t,
  author       = {Xiang Li and Yiqun Yao and Xin Jiang and Xuezhi Fang and Chao Wang and Xinzhang Liu and Zihan Wang and Yu Zhao and Xin Wang and Yuyao Huang and Shuangyong Song and Yongxiang Li and Zheng Zhang and Bo Zhao and Aixin Sun and Yequan Wang and Zhongjiang He and Zhongyuan Wang and Xuelong Li and Tiejun Huang},
  title        = {52B to 1T: Lessons Learned via Tele-FLM Series},
  journal      = {CoRR},
  volume       = {abs/2407.02783},
  year         = {2024},
  url          = {https://doi.org/10.48550/arXiv.2407.02783},
  doi          = {10.48550/ARXIV.2407.02783},
  eprinttype   = {arXiv},
  eprint       = {2407.02783},
}

@article{tele-flm-2024,
  author       = {Xiang Li and Yiqun Yao and Xin Jiang and Xuezhi Fang and Chao Wang and Xinzhang Liu and Zihan Wang and Yu Zhao and Xin Wang and Yuyao Huang and Shuangyong Song and Yongxiang Li and Zheng Zhang and Bo Zhao and Aixin Sun and Yequan Wang and Zhongjiang He and Zhongyuan Wang and Xuelong Li and Tiejun Huang},
  title        = {Tele-FLM Technical Report},
  journal      = {CoRR},
  volume       = {abs/2404.16645},
  year         = {2024},
  url          = {https://doi.org/10.48550/arXiv.2404.16645},
  doi          = {10.48550/ARXIV.2404.16645},
  eprinttype   = {arXiv},
  eprint       = {2404.16645},
}
Capabilities & Tags
transformers · pytorch · TeleFLM · feature-extraction · custom_code
Specifications

  • Category: Embedding
  • Access: API & Local
  • License: Open Source
  • Pricing: Open Source
  • Parameters: 52B