TAU2
Score: 0.27
Qwen3 235B A22B is a large-scale Mixture-of-Experts (MoE) language model from Alibaba's Qwen series, with 235 billion total parameters of which roughly 22 billion are activated per token. This non-reasoning variant is optimized for general-purpose tasks, offering strong multilingual capabilities, coding proficiency, and efficient inference thanks to its sparse MoE architecture.
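The efficiency claim comes down to simple arithmetic: only a small fraction of the total parameters participate in each forward pass. A minimal sketch, using the 235B/22B figures from the text:

```python
# Active-parameter fraction of the Qwen3 235B A22B MoE configuration.
# Figures are taken from the description above: 235B total parameters,
# roughly 22B activated per token.
TOTAL_B = 235   # total parameters, in billions
ACTIVE_B = 22   # activated parameters per token, in billions

fraction = ACTIVE_B / TOTAL_B
print(f"~{fraction:.1%} of parameters are active per token")  # ~9.4%
```

So per-token compute is closer to that of a ~22B dense model than a 235B one, which is the source of the "efficient performance" noted above.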
Benchmark history
(Interactive score chart; the individual benchmark names for each score were not preserved in this capture.)
Plan availability
China Unicom Cloud's Coding Plan is a subscription service for AI-powered coding, providing access to multiple large language models via an API for use with mainstream AI programming tools. It offers tiered plans with monthly request quotas.
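Gateways of this kind typically expose an OpenAI-compatible chat-completions endpoint. The sketch below only assembles such a request; the base URL, model identifier, and API key are placeholders for illustration, not documented China Unicom Cloud values.

```python
import json

# Hypothetical sketch of a chat-completions request to a coding-plan gateway.
# BASE_URL, API_KEY, and the model name are placeholders (assumptions),
# not actual China Unicom Cloud endpoints or identifiers.
BASE_URL = "https://example-gateway.invalid/v1/chat/completions"
API_KEY = "sk-your-key-here"

def build_request(prompt: str, model: str = "qwen3-235b-a22b") -> dict:
    """Assemble URL, headers, and JSON payload for one completion call."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    headers = {
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    }
    return {"url": BASE_URL, "headers": headers, "json": payload}

req = build_request("Write a Python function that reverses a string.")
print(json.dumps(req["json"], indent=2))
```

Actually sending the request (e.g. with `requests.post`) would consume one call against the plan's monthly quota, which is why the tiers are sized by request count.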
