Qwen · Released April 28, 2025 · Synced April 19, 2026

Qwen: Qwen3 235B A22B

Qwen3-235B-A22B is a 235B-parameter mixture-of-experts (MoE) model developed by Qwen, activating 22B parameters per forward pass. It supports seamless switching between a "thinking" mode for complex reasoning, math, and code tasks, and a "non-thinking" mode for efficient general conversation.

Tool use · Reasoning

Why it stands out

131K-token context window handles longer documents and multi-turn conversations without truncation.
Combines tool use with reasoning — a strong baseline for agentic and multi-step workflows.
$0.455 per million input tokens makes it practical for always-on agents, batch processing, or high-volume classification.
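At that rate, input cost scales linearly with token volume. A minimal sketch of the arithmetic; the rate comes from the list above, the daily volume is purely illustrative:

```python
# Estimate input-token cost at a flat per-million-token rate.
# The $0.455/M figure is taken from the listing above; the volume is made up.
INPUT_RATE_PER_M = 0.455  # USD per 1M input tokens


def input_cost(tokens: int, rate_per_m: float = INPUT_RATE_PER_M) -> float:
    """Return the USD cost of processing `tokens` input tokens."""
    return tokens / 1_000_000 * rate_per_m


# e.g. a batch job pushing 500M input tokens per day
daily = input_cost(500_000_000)
print(f"${daily:.2f}/day")  # 500 * 0.455 = $227.50/day
```

Output costs (not listed here) are billed separately and are typically higher per token, so a real budget estimate should account for both sides.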

What to watch

Text-only input — image or audio workflows require a separate model in the pipeline.
No benchmark score currently tracked — evaluate using task-specific testing alongside pricing and capability data.

Release timeline

Tracked events for Qwen: Qwen3 235B A22B.


Release

Qwen: Qwen3 235B A22B entered the tracked catalog

April 28, 2025

Qwen3-235B-A22B is a 235B parameter mixture-of-experts (MoE) model developed by Qwen, activating 22B parameters per forward pass. It supports seamless switching between a "thinking" mode for complex reasoning, math, and code tasks, and a "non-thinking" mode for general conversational efficiency. The model demonstrates strong reasoning ability, multilingual support (100+ languages and dialects), advanced instruction-following, and agent tool-calling capabilities. It natively handles a 32K token context window and extends up to 131K tokens using YaRN-based scaling.
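The thinking/non-thinking switch is typically exposed as a chat-template flag rather than a separate model. A minimal sketch of building a request payload for an OpenAI-compatible endpoint; the `chat_template_kwargs` field follows the pattern some serving stacks (e.g. vLLM) use to pass `enable_thinking` through to the template, and the model identifier is assumed, so check your provider's docs before relying on either:

```python
# Sketch: toggling Qwen3's "thinking" vs "non-thinking" mode via a request
# payload for an OpenAI-compatible chat endpoint. The model name and the
# `chat_template_kwargs` passthrough are assumptions, not a confirmed API.
def build_request(prompt: str, thinking: bool) -> dict:
    """Build a chat-completions payload with the thinking mode toggled."""
    return {
        "model": "qwen/qwen3-235b-a22b",
        "messages": [{"role": "user", "content": prompt}],
        # Reasoning-heavy tasks: leave thinking on. Latency-sensitive chat:
        # switch it off to use the "non-thinking" fast path.
        "chat_template_kwargs": {"enable_thinking": thinking},
    }


fast = build_request("Summarize this ticket.", thinking=False)
print(fast["chat_template_kwargs"])  # {'enable_thinking': False}
```

Note that the 131K figure is the YaRN-extended ceiling; contexts beyond the native 32K window generally require the serving stack to have YaRN rope scaling enabled in the model configuration.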



Recent changes

Launch · Apr 28

Qwen launched Qwen: Qwen3 235B A22B
