Qwen3 235B A22B 2507 Instruct

Qwen3 235B A22B is a large-scale Mixture-of-Experts (MoE) language model from Alibaba's Qwen series. It has 235 billion total parameters, of which 22 billion are activated per token, and is designed for strong instruction following, complex reasoning, and multilingual tasks.
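
Qwen models are commonly served behind OpenAI-compatible endpoints, though this page does not name a specific provider. The sketch below shows what a request would look like under that assumption; the base URL and model ID are placeholders, not values taken from this page.

```python
# Minimal sketch: calling the model through an OpenAI-compatible
# chat completions API. base_url and model are assumptions --
# substitute the actual values from your provider.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.example.com/v1",  # hypothetical endpoint
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="qwen3-235b-a22b-2507-instruct",  # hypothetical model ID
    messages=[
        {"role": "user", "content": "Explain MoE activation in two sentences."},
    ],
    max_tokens=256,
)
print(response.choices[0].message.content)
```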

Tags: Reasoning · Coding · Long context
Input price / 1M tokens: $0.20
Output price / 1M tokens: $0.825
Output throughput: 68.67 tokens/s
Time to first token: 1.25 s
Supported plans: 0
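
As a rough check on the figures above: per-request cost is input_tokens × input price + output_tokens × output price (prices are per million tokens), and end-to-end latency is approximately the first-token time plus output_tokens / throughput. A minimal sketch using the published numbers:

```python
# Back-of-the-envelope cost and latency from the published figures.
# Formulas only; actual bills and timings vary by provider and load.
INPUT_PRICE = 0.20 / 1_000_000    # $ per input token
OUTPUT_PRICE = 0.825 / 1_000_000  # $ per output token
TPS = 68.67                       # output tokens per second
TTFT = 1.25                       # seconds to first token

def estimate(input_tokens: int, output_tokens: int) -> tuple[float, float]:
    """Return (estimated cost in dollars, estimated latency in seconds)."""
    cost = input_tokens * INPUT_PRICE + output_tokens * OUTPUT_PRICE
    latency = TTFT + output_tokens / TPS
    return cost, latency

# e.g. a 2,000-token prompt producing a 500-token answer:
cost, latency = estimate(2_000, 500)
print(f"~${cost:.4f}, ~{latency:.1f}s")  # ~$0.0008, ~8.5s
```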

Benchmark history

Evaluations (15), all measured May 14, 2026:

TAU2: 0.33
Terminal-Bench Hard: 0.15
LCR: 0.31
IFBench: 0.46
AIME 25: 0.72
AIME: 0.72
MATH-500: 0.98
SciCode: 0.36
LiveCodeBench: 0.52
HLE: 0.11
GPQA: 0.75
MMLU-Pro: 0.83
Artificial Analysis Math Index: 71.7
Artificial Analysis Coding Index: 22.1
Artificial Analysis Intelligence Index: 25

Plan availability

No products or plans have been linked to this model yet.
