
DeepSeek V3.2 Exp

by deepseek-ai

Open source · 190k downloads · 983 likes

Rating: 3.7 (983 reviews) · Chat · API & Local
About

DeepSeek V3.2 Exp is an experimental version of the DeepSeek model, designed as an intermediate step toward a new architecture. It introduces a major innovation: *DeepSeek Sparse Attention*, a sparse attention mechanism that optimizes computational efficiency when processing long textual contexts without compromising response quality. The model maintains performance comparable to its predecessors while significantly reducing computational costs, both for training and inference. Ideal for applications requiring advanced handling of long sequences, such as analyzing extended documents or prolonged conversations, it stands out for its pioneering approach to sparse attention. This version reflects DeepSeek’s commitment to pushing the boundaries of more efficient transformer architectures.

Documentation

DeepSeek-V3.2-Exp


Introduction

We are excited to announce the official release of DeepSeek-V3.2-Exp, an experimental version of our model. As an intermediate step toward our next-generation architecture, V3.2-Exp builds upon V3.1-Terminus by introducing DeepSeek Sparse Attention—a sparse attention mechanism designed to explore and validate optimizations for training and inference efficiency in long-context scenarios.

This experimental release represents our ongoing research into more efficient transformer architectures, particularly focusing on improving computational efficiency when processing extended text sequences.

  • DeepSeek Sparse Attention (DSA) achieves fine-grained sparse attention for the first time, delivering substantial improvements in long-context training and inference efficiency while maintaining virtually identical model output quality.

  • To rigorously evaluate the impact of introducing sparse attention, we deliberately aligned the training configurations of DeepSeek-V3.2-Exp with V3.1-Terminus. Across public benchmarks in various domains, DeepSeek-V3.2-Exp demonstrates performance on par with V3.1-Terminus.

| Benchmark | DeepSeek-V3.1-Terminus | DeepSeek-V3.2-Exp |
| --- | --- | --- |
| **Reasoning Mode w/o Tool Use** | | |
| MMLU-Pro | 85.0 | 85.0 |
| GPQA-Diamond | 80.7 | 79.9 |
| Humanity's Last Exam | 21.7 | 19.8 |
| LiveCodeBench | 74.9 | 74.1 |
| AIME 2025 | 88.4 | 89.3 |
| HMMT 2025 | 86.1 | 83.6 |
| Codeforces | 2046 | 2121 |
| Aider-Polyglot | 76.1 | 74.5 |
| **Agentic Tool Use** | | |
| BrowseComp | 38.5 | 40.1 |
| BrowseComp-zh | 45.0 | 47.9 |
| SimpleQA | 96.8 | 97.1 |
| SWE Verified | 68.4 | 67.8 |
| SWE-bench Multilingual | 57.8 | 57.9 |
| Terminal-bench | 36.7 | 37.7 |
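To make the mechanism concrete, here is a toy sketch of indexer-then-top-k sparse attention in plain NumPy. The low-rank indexer projections (`q_idx`, `K_idx`), the shapes, and the top-k size are illustrative assumptions for exposition, not DeepSeek's actual DSA kernels:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def sparse_attention(q, K, V, q_idx, K_idx, k_top):
    """Attend to only the k_top cached tokens ranked by a cheap indexer.

    q: (d,) query; K, V: (n, d) key/value cache.
    q_idx: (r,) and K_idx: (n, r) are low-rank indexer projections
    with r << d, so ranking all n tokens costs O(n*r) while full
    attention is computed only over the k_top survivors.
    """
    scores = K_idx @ q_idx                       # (n,) indexer logits
    keep = np.argsort(scores)[-k_top:]           # indices of the top-k tokens
    logits = K[keep] @ q / np.sqrt(K.shape[1])   # dense attention on the subset
    return softmax(logits) @ V[keep]
```

With `k_top` equal to the full cache length this reduces to ordinary dense attention; shrinking `k_top` trades a bounded amount of attention mass for per-query cost that grows with `k_top` rather than with the full context length.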

Update

  • 2025.11.17: We have identified that previous versions of the inference demo code contained an implementation discrepancy in Rotary Position Embedding (RoPE) within the indexer module, potentially leading to degraded model performance. Specifically, the input tensor to RoPE in the indexer module requires a non-interleaved layout, whereas RoPE in the MLA module expects an interleaved layout. This issue has now been resolved. Please refer to the updated version of the inference demo code and take note of this implementation detail.
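The distinction the note describes, non-interleaved (split-half) versus interleaved RoPE layouts, can be sketched as follows. This is an illustrative NumPy reconstruction of the two conventions, not the demo code itself; function names and shapes are my own:

```python
import numpy as np

def rope_noninterleaved(x, pos, base=10000.0):
    # Split-half layout: dimension i pairs with dimension i + d/2.
    d = x.shape[-1]
    theta = pos * base ** (-np.arange(d // 2) / (d // 2))
    cos, sin = np.cos(theta), np.sin(theta)
    x1, x2 = x[..., : d // 2], x[..., d // 2 :]
    return np.concatenate([x1 * cos - x2 * sin, x1 * sin + x2 * cos], axis=-1)

def rope_interleaved(x, pos, base=10000.0):
    # Interleaved layout: dimensions (0,1), (2,3), ... form rotation pairs.
    d = x.shape[-1]
    theta = pos * base ** (-np.arange(d // 2) / (d // 2))
    cos, sin = np.cos(theta), np.sin(theta)
    x1, x2 = x[..., 0::2], x[..., 1::2]
    out = np.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out
```

The two layouts differ only by a fixed permutation of the feature dimension, which is why feeding one layout into a kernel expecting the other silently degrades quality instead of crashing.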

How to Run Locally

HuggingFace

We provide updated inference demo code in the inference folder to help the community quickly get started with our model and understand its architectural details.

First, convert the Hugging Face model weights to the format required by our inference demo. Set MP to match your available GPU count:

Bash
cd inference
export EXPERTS=256
export MP=8  # set to the number of GPUs available
python convert.py --hf-ckpt-path ${HF_CKPT_PATH} --save-path ${SAVE_PATH} --n-experts ${EXPERTS} --model-parallel ${MP}

Launch the interactive chat interface and start exploring DeepSeek's capabilities:

Bash
export CONFIG=config_671B_v3.2.json
torchrun --nproc-per-node ${MP} generate.py --ckpt-path ${SAVE_PATH} --config ${CONFIG} --interactive

SGLang

Installation with Docker

Bash
# H200
docker pull lmsysorg/sglang:dsv32

# MI350
docker pull lmsysorg/sglang:dsv32-rocm

# NPUs
docker pull lmsysorg/sglang:dsv32-a2
docker pull lmsysorg/sglang:dsv32-a3

Launch Command

Bash
python -m sglang.launch_server --model deepseek-ai/DeepSeek-V3.2-Exp --tp 8 --dp 8 --enable-dp-attention
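Once launched, the server speaks an OpenAI-compatible chat API (by default on port 30000). A minimal request body, assuming default settings; the prompt and sampling parameters below are placeholders:

```python
import json

# Body for POST http://localhost:30000/v1/chat/completions on the server
# launched above; sending it requires that server to be running.
payload = {
    "model": "deepseek-ai/DeepSeek-V3.2-Exp",
    "messages": [{"role": "user", "content": "Explain sparse attention briefly."}],
    "max_tokens": 128,
    "temperature": 0.7,
}
body = json.dumps(payload)
```

The serialized `body` can then be sent with any HTTP client, e.g. `curl -H 'Content-Type: application/json' -d "$body" http://localhost:30000/v1/chat/completions`.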

vLLM

vLLM provides day-0 support of DeepSeek-V3.2-Exp. See the recipes for up-to-date details.

Open-Source Kernels

For kernels that prioritize readability and research-oriented design, please refer to TileLang.

For high-performance CUDA kernels, indexer logit kernels (including paged versions) are available in DeepGEMM. Sparse attention kernels are released in FlashMLA.

License

This repository and the model weights are licensed under the MIT License.

Citation

BibTeX
@misc{deepseekai2024deepseekv32,
      title={DeepSeek-V3.2-Exp: Boosting Long-Context Efficiency with DeepSeek Sparse Attention}, 
      author={DeepSeek-AI},
      year={2025},
}

Contact

If you have any questions, please raise an issue or contact us at [email protected].

Capabilities & Tags

transformers · safetensors · deepseek_v32 · text-generation · conversational · endpoints_compatible · fp8

Specifications

Category: Chat · Access: API & Local · License: Open Source · Pricing: Open Source · Rating: 3.7
