Models

Qwen3 235B A22B (Non-reasoning)

Qwen3 235B A22B is a large-scale Mixture-of-Experts (MoE) language model from Alibaba's Qwen series, with 235 billion total parameters of which only 22 billion are activated per token. This non-reasoning variant is optimized for general-purpose tasks, offering strong multilingual capability, solid coding proficiency, and efficient inference thanks to its sparse MoE architecture.
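To illustrate why only a fraction of the total parameters run for any given token, below is a minimal sketch of top-k expert routing, the mechanism behind sparse MoE layers. The expert count, hidden size, and top_k value are illustrative assumptions, not Qwen3's actual configuration.

```python
# Minimal sketch of Mixture-of-Experts top-k routing (illustrative only;
# expert count, hidden size, and top_k are NOT Qwen3's real configuration).
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

class MoELayer:
    def __init__(self, d_model=8, n_experts=16, top_k=2, seed=0):
        rng = np.random.default_rng(seed)
        self.router = rng.normal(size=(d_model, n_experts))   # gating weights
        self.experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]
        self.top_k = top_k

    def forward(self, x):
        # The router scores every expert, but only the top_k experts run,
        # so only a fraction of the layer's parameters are activated per token.
        gate = softmax(x @ self.router)
        chosen = np.argsort(gate)[-self.top_k:]
        weights = gate[chosen] / gate[chosen].sum()
        return sum(w * (x @ self.experts[i]) for w, i in zip(weights, chosen))

layer = MoELayer()
y = layer.forward(np.ones(8))
print(y.shape)  # (8,) -- computed using only 2 of the 16 experts
```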

Coding · Reasoning · Fast · Long context · Multimodal
Input / 1M tokens: $0.45
Output / 1M tokens: $1.80
Output tokens/s: 69.11
Time to first token: 1.2 s
Supported plans: 2
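At these rates, per-request cost scales linearly with token counts. A minimal sketch of the arithmetic, using hypothetical token counts:

```python
# Rough per-request cost at the listed rates (token counts are hypothetical).
input_tokens, output_tokens = 10_000, 2_000
cost = input_tokens / 1e6 * 0.45 + output_tokens / 1e6 * 1.80
print(f"${cost:.4f}")  # $0.0081
```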

Benchmark history

Evaluations (15), all measured May 14, 2026

TAU2: 0.27
Terminal-Bench Hard: 0.06
LCR: 0
IFBench: 0.37
AIME 25: 0.24
AIME: 0.33
MATH 500: 0.9
SciCode: 0.3
LiveCodeBench: 0.34
HLE: 0.05
GPQA: 0.61
MMLU-Pro: 0.76
Artificial Analysis Math Index: 23.7
Artificial Analysis Coding Index: 14
Artificial Analysis Intelligence Index: 17

Plan availability

Products and plans that support this model: 1

Discussion
