TAU2
Measured May 14, 2026
Score: 0.86
This is a high-efficiency Mixture-of-Experts (MoE) model from the Qwen3.5 series, with 35 billion total parameters of which only 3 billion are activated per inference. It is the non-reasoning variant, optimized for fast responses and suited to general-purpose tasks.
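The 35B-total / 3B-active split comes from sparse expert routing: for each token, a router scores all experts but only the top-k actually run, so most parameters stay idle. A minimal sketch of top-k routing follows; the shapes, expert count, and weights are illustrative only, not this model's actual configuration.

```python
import numpy as np

def moe_forward(x, gate_w, experts, k=2):
    """Sparse MoE forward pass: the router scores every expert, but only
    the top-k experts execute, so most parameters are inactive per token."""
    logits = x @ gate_w                          # router score per expert
    topk = np.argsort(logits)[-k:]               # indices of the k best experts
    weights = np.exp(logits[topk] - logits[topk].max())
    weights /= weights.sum()                     # softmax over chosen experts only
    return sum(w * experts[i](x) for w, i in zip(weights, topk))

# Toy setup: 4 experts, each a trivial "FFN" that scales its input.
# `calls` records which experts actually ran, demonstrating the sparsity.
calls = []
experts = [lambda x, i=i: (calls.append(i), x * (i + 1))[1] for i in range(4)]
gate_w = np.diag([0.0, 1.0, 2.0, 3.0])           # router favors later experts
out = moe_forward(np.ones(4), gate_w, experts, k=2)
```

With `k=2`, only experts 2 and 3 execute; the other two contribute no compute, which is how a 35B-parameter model can activate only a few billion parameters per token.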
Benchmark history
[Score chart residue: 0.86, 0.11, 0.55, 0.44, 0.29, 0.13, 0.82, 16.8, 30.7 — the benchmark labels for these values were not preserved.]