by baichuan-inc
Baichuan-M2-32B is an AI model specialized for the medical domain, designed to excel at clinical reasoning and patient-interaction tasks. It integrates an innovative large-scale verification system, combining patient simulators with multi-dimensional validation mechanisms to ensure accurate, reliable medical answers. Through targeted training on real clinical cases and a multi-stage reinforcement learning approach, it reproduces medical reasoning close to that of a professional while retaining strong general capabilities. The model stands out for its performance on medical benchmarks such as HealthBench, where it outperforms most open-source models and rivals advanced proprietary solutions. Intended primarily for medical education, health consultation, and clinical decision support, it must nevertheless be used with caution and under professional supervision, as it does not replace formal medical diagnosis or treatment.
This repository contains the model presented in Baichuan-M2: Scaling Medical Capability with Large Verifier System.
Baichuan-M2-32B is Baichuan AI's medical-enhanced reasoning model, the second medical model released by Baichuan. Designed for real-world medical reasoning tasks, this model builds upon Qwen2.5-32B with an innovative Large Verifier System. Through domain-specific fine-tuning on real-world medical questions, it achieves breakthrough medical performance while maintaining strong general capabilities.
Model Features:
Baichuan-M2 incorporates three core technical innovations: First, through the Large Verifier System, it combines medical scenario characteristics to design a comprehensive medical verification framework, including patient simulators and multi-dimensional verification mechanisms; second, through medical domain adaptation enhancement via Mid-Training, it achieves lightweight and efficient medical domain adaptation while preserving general capabilities; finally, it employs a multi-stage reinforcement learning strategy, decomposing complex RL tasks into hierarchical training stages to progressively enhance the model's medical knowledge, reasoning, and patient interaction capabilities.
Core Highlights:
| Model Name | HealthBench | HealthBench-Hard | HealthBench-Consensus |
|---|---|---|---|
| Baichuan-M2 | 60.1 | 34.7 | 91.5 |
| gpt-oss-120b | 57.6 | 30 | 90 |
| Qwen3-235B-A22B-Thinking-2507 | 55.2 | 25.9 | 90.6 |
| Deepseek-R1-0528 | 53.6 | 22.6 | 91.5 |
| GLM-4.5 | 47.8 | 18.7 | 85.3 |
| Kimi-K2 | 43 | 10.7 | 90.9 |
| gpt-oss-20b | 42.5 | 10.8 | 82.6 |
General-capability benchmarks (vs. Qwen3-32B in thinking mode):

| Benchmark | Baichuan-M2-32B | Qwen3-32B (Thinking) |
|---|---|---|
| AIME24 | 83.4 | 81.4 |
| AIME25 | 72.9 | 72.9 |
| Arena-Hard-v2.0 | 45.8 | 44.5 |
| CFBench | 77.6 | 75.7 |
| WritingBench | 8.56 | 7.90 |
Note: AIME uses max_tokens=64k, others use 32k; temperature=0.6 for all tests.
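The evaluation settings above can be expressed as a small sampling-configuration helper. A minimal sketch for reproducing the reported setup (the function name and dict layout are illustrative, not from an official eval harness):

```python
# Settings from the benchmark note: temperature=0.6 for all tests;
# AIME gets a 64k token budget, all other benchmarks 32k.

def sampling_config(benchmark: str) -> dict:
    """Return generation kwargs matching the reported evaluation setup."""
    max_tokens = 64 * 1024 if benchmark.startswith("AIME") else 32 * 1024
    return {"temperature": 0.6, "max_tokens": max_tokens}

print(sampling_config("AIME24"))   # 64k budget
print(sampling_config("CFBench"))  # 32k budget
```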
📗 Technical Blog: Blog - Baichuan-M2
📑 Technical Report: Arxiv - Baichuan-M2
```python
# 1. Load model and tokenizer
from transformers import AutoTokenizer, AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "baichuan-inc/Baichuan-M2-32B",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained("baichuan-inc/Baichuan-M2-32B")

# 2. Input prompt text
prompt = "Got a big swelling after a bug bite. Need help reducing it."

# 3. Encode the input text for the model
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    thinking_mode='on'  # on/off/auto
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# 4. Generate text
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=4096
)
# Strip the prompt tokens, keeping only the newly generated ids
output_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
][0].tolist()

# 5. Parse the thinking content
try:
    # rindex finding 151668 (</think>)
    index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
    index = 0
thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")
print("thinking content:", thinking_content)
print("content:", content)
```
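The `</think>`-splitting logic in step 5 can be factored into a small standalone helper. A minimal sketch, assuming the same token id (151668 for `</think>`) as in the snippet above; the helper name is illustrative:

```python
THINK_END_ID = 151668  # token id of </think> in the Baichuan-M2 tokenizer

def split_thinking(output_ids: list[int]) -> tuple[list[int], list[int]]:
    """Split generated ids into (thinking segment incl. </think>, final answer)."""
    try:
        # Cut just past the LAST occurrence of </think>, found by scanning
        # the reversed sequence.
        cut = len(output_ids) - output_ids[::-1].index(THINK_END_ID)
    except ValueError:
        cut = 0  # no thinking segment was emitted
    return output_ids[:cut], output_ids[cut:]
```

Decoding each half with `skip_special_tokens=True`, as above, then yields the thinking trace and the user-visible answer separately.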
For deployment, you can use sglang>=0.4.6.post1 or vllm>=0.9.0 to create an OpenAI-compatible API endpoint:

```shell
# SGLang
python -m sglang.launch_server --model-path baichuan-inc/Baichuan-M2-32B --reasoning-parser qwen3

# vLLM
vllm serve baichuan-inc/Baichuan-M2-32B --reasoning-parser qwen3
```
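Once a server is running, it can be queried like any OpenAI-compatible endpoint. A minimal sketch using only the standard library; the base URL is an assumption (vLLM defaults to port 8000, SGLang to 30000):

```python
import json
from urllib import request

def build_chat_payload(prompt: str) -> dict:
    """Build an OpenAI-style /v1/chat/completions request body."""
    return {
        "model": "baichuan-inc/Baichuan-M2-32B",
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.6,
    }

def chat(prompt: str, base_url: str = "http://localhost:8000") -> str:
    """POST the payload and return the assistant reply (needs a running server)."""
    req = request.Request(
        base_url + "/v1/chat/completions",
        data=json.dumps(build_chat_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

payload = build_chat_payload("Got a big swelling after a bug bite. Need help reducing it.")
print(json.dumps(payload, indent=2))
```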
To launch SGLang with EAGLE3 speculative decoding, using the accompanying draft model:

```shell
python3 -m sglang.launch_server \
  --model Baichuan-M2-32B \
  --speculative-algorithm EAGLE3 \
  --speculative-draft-model-path Baichuan-M2-32B/draft \
  --speculative-num-steps 6 \
  --speculative-eagle-topk 10 \
  --speculative-num-draft-tokens 32 \
  --mem-fraction 0.9 \
  --cuda-graph-max-bs 2 \
  --reasoning-parser qwen3 \
  --dtype bfloat16
```
Licensed under the Apache License 2.0. Research and commercial use permitted.
Empowering Healthcare with AI, Making Health Accessible to All