Qwen · Released July 31, 2025 · Synced Apr 19, 2026

Qwen: Qwen3 Coder 30B A3B Instruct

Qwen3-Coder-30B-A3B-Instruct is a 30.5B-parameter Mixture-of-Experts (MoE) model with 128 experts (8 active per forward pass), designed for advanced code generation, repository-scale understanding, and agentic tool use.


Why it stands out

160K-token context window handles longer documents and multi-turn conversations without truncation.
Tool use support makes it viable for function-calling and agentic pipelines.
$0.07/M input makes it practical for always-on agents, batch processing, or high-volume classification.
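Since the model integrates with OpenAI-compatible tool-use formats, a function-calling request is just a chat-completions payload with a `tools` array. A minimal sketch of assembling one, where the `get_file_tree` tool and the model slug are illustrative placeholders, not values from this page:

```python
# Sketch: an OpenAI-compatible chat request carrying one function tool.
# The model slug and the get_file_tree tool are hypothetical examples.
import json


def build_tool_request(user_prompt: str) -> dict:
    """Assemble a chat-completions payload with a single function tool."""
    return {
        "model": "qwen/qwen3-coder-30b-a3b-instruct",  # assumed slug
        "messages": [{"role": "user", "content": user_prompt}],
        "tools": [
            {
                "type": "function",
                "function": {
                    "name": "get_file_tree",  # hypothetical tool
                    "description": "List files under a repository path.",
                    "parameters": {
                        "type": "object",
                        "properties": {"path": {"type": "string"}},
                        "required": ["path"],
                    },
                },
            }
        ],
    }


payload = build_tool_request("Show me the files under src/")
print(json.dumps(payload, indent=2))
```

The same payload shape works against any OpenAI-compatible endpoint; only the base URL and API key differ between providers.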

What to watch

Text-only input: image or audio workflows require a separate model in the pipeline.
No benchmark score is currently tracked; evaluate with task-specific testing alongside pricing and capability data.

Release timeline

Tracked events for Qwen: Qwen3 Coder 30B A3B Instruct.


Release

Qwen: Qwen3 Coder 30B A3B Instruct entered the tracked catalog

July 31, 2025

Qwen3-Coder-30B-A3B-Instruct is a 30.5B-parameter Mixture-of-Experts (MoE) model with 128 experts (8 active per forward pass), designed for advanced code generation, repository-scale understanding, and agentic tool use. Built on the Qwen3 architecture, it supports a native context length of 256K tokens (extendable to 1M with YaRN) and performs strongly in tasks involving function calls, browser use, and structured code completion. This model is optimized for instruction-following without "thinking mode", and integrates well with OpenAI-compatible tool-use formats.
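At the listed $0.07 per million input tokens, repository-scale prompts stay cheap even near the context limit. A back-of-envelope sketch, where the 200K-token prompt size is an illustrative figure (well inside the 256K native window), not a measured value:

```python
# Back-of-envelope input cost at the listed $0.07 per million input
# tokens. Token counts below are illustrative, not measured.
def input_cost_usd(input_tokens: int, rate_per_million: float = 0.07) -> float:
    """Cost in USD for a given number of input tokens."""
    return input_tokens / 1_000_000 * rate_per_million


# A 200K-token repository-scale prompt, within the 256K native window:
print(f"${input_cost_usd(200_000):.4f}")  # $0.0140
```

Even a full 256K-token prompt comes in under two cents of input cost, which is what makes always-on agent loops over large codebases practical.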



Recent changes

Launch · Jul 31

Qwen launched Qwen: Qwen3 Coder 30B A3B Instruct
