TAU2
Score: 0.33 (measured May 14, 2026)
Qwen3 235B A22B is a large-scale Mixture-of-Experts (MoE) language model from Alibaba's Qwen series. It features 235 billion total parameters with 22 billion activated per token, designed for strong instruction following, complex reasoning, and multilingual tasks.
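The gap between total and activated parameters comes from sparse expert routing: a gating network scores all experts per token, but only the top-k are run, so only a small fraction of the weights is exercised on any forward pass. A toy sketch of top-k gating (not Qwen3's actual implementation; the expert counts below are illustrative, not the model's real hyperparameters):

```python
import numpy as np

def top_k_routing(logits, k):
    # Pick the k highest-scoring experts for this token and
    # renormalize their gate weights with a softmax.
    idx = np.argsort(logits)[::-1][:k]
    w = np.exp(logits[idx] - logits[idx].max())
    return idx, w / w.sum()

# Illustrative config: many experts, few active per token.
n_experts, k = 128, 8
rng = np.random.default_rng(0)
gate_logits = rng.standard_normal(n_experts)  # one token's gate scores

experts, weights = top_k_routing(gate_logits, k)
print(f"active experts: {len(experts)}/{n_experts} "
      f"({k / n_experts:.1%} of expert capacity per token)")
```

This is why the per-token compute tracks the 22B activated parameters rather than the 235B total: the dense (non-expert) layers always run, but each MoE layer evaluates only the routed subset.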
Benchmark history
[Score history chart — recorded values (labels not preserved in extract): 0.33, 0.15, 0.31, 0.46, 0.72, 0.72, 0.98, 0.36, 0.52, 0.11, 0.75, 0.83, 71.7, 22.1, 25]