

GLM 4.7 Flash REAP 23B A3B

by cerebras

Open source · 138k downloads · 69 likes

Rating: 2.3 (69 reviews) · Chat · API & Local
About

GLM-4.7-Flash-REAP-23B-A3B is an optimized, lightened version of the GLM-4.7-Flash model, designed to cut its size by 25% while retaining near-identical performance. Using the REAP (Router-weighted Expert Activation Pruning) method, it selectively removes redundant experts from its Mixture-of-Experts architecture, shrinking its memory footprint without altering its core capabilities. The model excels at complex tasks such as code generation, running autonomous agents, repository-scale code understanding, and function calling, while remaining compatible with standard tooling such as vLLM. Well suited to resource-constrained environments, it offers a capable and economical alternative to heavier models without requiring any software changes. This makes it a strong fit for local deployments, academic research, and industrial applications that need a balance of efficiency and power.

Documentation

𓌳 REAP the Experts: Why Pruning Prevails for One-Shot MoE Compression

GLM-4.7-Flash-REAP-23B-A3B

✨ Highlights

Introducing GLM-4.7-Flash-REAP-23B-A3B, a memory-efficient compressed variant of GLM-4.7-Flash that maintains near-identical performance while being 25% lighter.

This model was created using REAP (Router-weighted Expert Activation Pruning), a novel expert pruning method that selectively removes redundant experts while preserving the router's independent control over remaining experts. Key features include:

  • Near-Lossless Performance: Maintains almost identical accuracy on code generation, agentic coding, and function calling tasks compared to the full 30B model
  • 25% Memory Reduction: Compressed from 30B to 23B parameters, significantly lowering deployment costs and memory requirements
  • Preserved Capabilities: Retains all core functionalities including code generation, agentic workflows, repository-scale understanding, and function calling
  • Drop-in Compatibility: Works with vanilla vLLM - no source modifications or custom patches required
  • Optimized for Real-World Use: Particularly effective for resource-constrained environments, local deployments, and academic research

📋 Model Overview

GLM-4.7-Flash-REAP-23B-A3B has the following specifications:

  • Base Model: GLM-4.7-Flash
  • Compression Method: REAP (Router-weighted Expert Activation Pruning)
  • Compression Ratio: 25% expert pruning
  • Type: Sparse Mixture-of-Experts (SMoE) Causal Language Model
  • Number of Parameters: 23B total, 3B activated per token
  • Number of Layers: 47
  • Number of Attention Heads: 20 for QKV
  • Number of Experts: 48 (uniformly pruned from 64)
  • Number of Activated Experts: 4 per token
  • Context Length: 202,752 tokens
  • License: MIT
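The sparse-MoE figures above (48 experts, 4 activated per token) can be illustrated with a minimal top-k routing sketch. This is a toy illustration with made-up shapes and weights, not the model's actual routing code:

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_EXPERTS = 48   # experts remaining after 25% pruning (64 -> 48)
TOP_K = 4          # experts activated per token
HIDDEN = 64        # toy hidden size, for illustration only

def route(token_hidden, router_weights, top_k=TOP_K):
    """Return indices and normalized gate values of the top-k experts."""
    logits = token_hidden @ router_weights            # (NUM_EXPERTS,)
    top = np.argsort(logits)[-top_k:][::-1]           # highest-scoring experts
    gates = np.exp(logits[top] - logits[top].max())
    gates /= gates.sum()                              # softmax over the selected experts
    return top, gates

token = rng.standard_normal(HIDDEN)
W_router = rng.standard_normal((HIDDEN, NUM_EXPERTS))
experts, gates = route(token, W_router)

print(len(experts))            # 4
print(round(gates.sum(), 6))   # 1.0
```

Only the 4 selected experts run for a given token, which is why the model activates roughly 3B of its 23B parameters per token.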

📊 Evaluations

| Benchmark | GLM-4.7-Flash | GLM-4.7-Flash-REAP-23B-A3B |
| --- | --- | --- |
| Compression | — | 25% |
| **Coding** | | |
| HumanEval | 94.5 | 95.1 |
| HumanEval+ | 89.0 | 89.0 |

🟩 This checkpoint maintains almost identical performance while being 25% lighter.

For more details on the evaluation setup, refer to the REAP arXiv preprint.


🚀 Deployment

You can deploy the model directly with the latest vLLM (which supports GLM-4.7-Flash); no source modifications or custom patches are required.

```bash
vllm serve cerebras/GLM-4.7-Flash-REAP-23B-A3B \
    --tensor-parallel-size 4 \
    --reasoning-parser glm45 \
    --tool-call-parser glm47 \
    --enable-auto-tool-choice
```

If you run out of memory when serving this model, you may need to set a lower value for the --max-num-seqs flag (e.g. 64).
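Once the server is up, requests go through vLLM's OpenAI-compatible API (by default at http://localhost:8000/v1/chat/completions). The sketch below builds a chat-completions request body with a tool definition, since the serve command enables automatic tool choice; the `run_git` tool is a hypothetical example, not part of the model card:

```python
import json

# Request body for vLLM's OpenAI-compatible chat completions endpoint.
# The tool definition below is a made-up illustration of the schema.
payload = {
    "model": "cerebras/GLM-4.7-Flash-REAP-23B-A3B",
    "messages": [
        {"role": "user", "content": "List the files changed in the last commit."}
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "run_git",
                "description": "Run a read-only git command",
                "parameters": {
                    "type": "object",
                    "properties": {"args": {"type": "string"}},
                    "required": ["args"],
                },
            },
        }
    ],
    "tool_choice": "auto",
}

print(json.dumps(payload, indent=2))
```

With --tool-call-parser glm47 and --enable-auto-tool-choice set, the server parses the model's tool-call output into the structured `tool_calls` field of the response.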

🧩 Model Creation

This checkpoint was created by applying the REAP (Router-weighted Expert Activation Pruning) method uniformly across all Mixture-of-Experts (MoE) blocks of GLM-4.7, with a 25% pruning rate.

How REAP Works

REAP selects experts to prune based on a novel saliency criterion that considers both:

  • Router gate values: How frequently and strongly the router activates each expert
  • Expert activation norms: The magnitude of each expert's output contributions

This dual consideration ensures that experts contributing minimally to the layer's output are pruned, while preserving those that play critical roles in the model's computations.
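The saliency criterion above can be sketched numerically: score each expert by its average router-weighted output magnitude over calibration tokens, then prune the lowest-scoring 25%. The statistics here are random stand-ins for real calibration data, and the scoring is a simplified reading of the REAP criterion, not the paper's exact implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

NUM_EXPERTS = 64     # experts per MoE block before pruning
PRUNE_RATIO = 0.25   # 25% pruning rate -> 48 experts remain
TOKENS = 1000        # toy number of calibration tokens

# Stand-ins for calibration statistics:
# gate[t, j] = router gate value of expert j on token t
# norm[t, j] = magnitude of expert j's output on token t
gate = rng.random((TOKENS, NUM_EXPERTS))
norm = rng.random((TOKENS, NUM_EXPERTS))

# REAP-style saliency: average router-weighted activation magnitude.
saliency = (gate * norm).mean(axis=0)

# Remove the lowest-saliency experts; keep the rest untouched,
# so the router retains independent control over the survivors.
n_prune = int(NUM_EXPERTS * PRUNE_RATIO)
pruned = np.argsort(saliency)[:n_prune]
kept = np.setdiff1d(np.arange(NUM_EXPERTS), pruned)

print(len(kept))  # 48
```

Because pruning only drops whole experts (rather than merging them), the surviving experts and the router weights that address them are left unchanged, which is what makes the compression one-shot.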

Key Advantages

  • One-Shot Compression: No fine-tuning required after pruning - the model is immediately ready for deployment
  • Preserved Router Control: Unlike expert merging methods, REAP maintains the router's independent, input-dependent control over remaining experts, avoiding "functional subspace collapse"
  • Generative Task Superiority: REAP significantly outperforms expert merging approaches on generative benchmarks (code generation, creative writing, mathematical reasoning) while maintaining competitive performance on discriminative tasks

Calibration

The model was calibrated using a diverse mixture of domain-specific datasets including:

  • Code generation samples (evol-codealpaca)
  • Function calling examples (xlam-function-calling)
  • Agentic multi-turn trajectories (SWE-smith-trajectories)

📚 For more details, refer to the following resources:

  • 🧾 arXiv Preprint
  • 🧾 REAP Blog
  • 💻 REAP Codebase (GitHub)

⚖️ License

This model is derived from zai-org/GLM-4.7-Flash and distributed under the MIT license.


🧾 Citation

If you use this checkpoint, please cite the REAP paper:

```bibtex
@article{lasby-reap,
  title={REAP the Experts: Why Pruning Prevails for One-Shot MoE Compression},
  author={Lasby, Mike and Lazarevich, Ivan and Sinnadurai, Nish and Lie, Sean and Ioannou, Yani and Thangarasa, Vithursan},
  journal={arXiv preprint arXiv:2510.13999},
  year={2025}
}
```