DeepSeek · Released December 1, 2025

DeepSeek: DeepSeek V3.2

DeepSeek-V3.2 is a large language model designed to harmonize high computational efficiency with strong reasoning and agentic tool-use performance. It introduces DeepSeek Sparse Attention (DSA), a fine-grained sparse attention mechanism that reduces training and inference cost while preserving quality in long-context scenarios. A scalable reinforcement learning post-training framework further improves reasoning, with reported performance in the GPT-5 class, and the model has demonstrated gold-medal results on the 2025 IMO and IOI. V3.2 also uses a large-scale agentic task synthesis pipeline to better integrate reasoning into tool-use settings, boosting compliance and generalization in interactive environments. Users can control reasoning behaviour via the `enabled` boolean of the `reasoning` parameter. [Learn more in our docs](https://openrouter.ai/docs/use-cases/reasoning-tokens#enable-reasoning-with-default-config)
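
For concreteness, here is a minimal sketch of what toggling reasoning might look like in an OpenAI-compatible request to OpenRouter; the model slug and the exact shape of the `reasoning` object are assumptions, so check the linked docs before relying on them.

```python
import requests

# Minimal sketch: toggling reasoning for DeepSeek V3.2 via an OpenAI-compatible
# chat completions request. The model slug and the exact shape of the
# `reasoning` object are assumptions; see the linked OpenRouter docs.
response = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": "Bearer <OPENROUTER_API_KEY>"},
    json={
        "model": "deepseek/deepseek-v3.2",  # assumed slug
        "messages": [{"role": "user", "content": "Plan a three-step research agent loop."}],
        "reasoning": {"enabled": True},  # set to False to skip reasoning tokens
    },
    timeout=60,
)
print(response.json()["choices"][0]["message"]["content"])
```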

Tool use · Reasoning

Why it stands out

164K-token context window handles longer documents and multi-turn conversations without truncation.
Combines tool use with reasoning — a strong baseline for agentic and multi-step workflows.
$0.25/M input makes it practical for always-on agents, batch processing, or high-volume classification (see the cost sketch after this list).
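
As a rough sense of what that price means in practice, here is a back-of-the-envelope sketch; the request volume and tokens-per-request figures are illustrative assumptions, and output-token pricing is not included.

```python
# Back-of-the-envelope input-side cost at $0.25 per million input tokens.
# Request volume and tokens-per-request are illustrative assumptions;
# output-token pricing is not included.
INPUT_PRICE_PER_MILLION = 0.25  # USD, from the listing above

requests_per_day = 10_000
tokens_per_request = 2_000
daily_input_tokens = requests_per_day * tokens_per_request  # 20,000,000 tokens

daily_input_cost = daily_input_tokens / 1_000_000 * INPUT_PRICE_PER_MILLION
print(f"~${daily_input_cost:.2f}/day in input tokens")  # ~$5.00/day
```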

What to watch

Text-only input — image or audio workflows require a separate model in the pipeline.
No benchmark score currently tracked — evaluate using task-specific testing alongside pricing and capability data.

Release timeline

Tracked events for DeepSeek: DeepSeek V3.2.


Release · December 1, 2025

DeepSeek: DeepSeek V3.2 entered the tracked catalog



Recent changes

Launch · Dec 1

DeepSeek launched DeepSeek: DeepSeek V3.2
