Models

Alibaba

QwQ 32B

QwQ 32B is a 32-billion parameter language model from Alibaba, designed to deliver strong reasoning and coding capabilities. It offers a balanced performance-to-cost ratio, making it suitable for a wide range of general-purpose and specialized tasks.

Tags: Coding, Reasoning, Cheap
Input / 1M tokens       $0.66
Output / 1M tokens      $1.00
Output tokens/s         31.39
First-token seconds     0.46 s
Supported plans         4
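Given the listed rates, the price of a single request and a rough end-to-end latency can be estimated directly. The sketch below is illustrative only: the constants are copied from the figures above, and the latency model is a simple assumption (first-token time plus tokens divided by the steady decode rate), not something the listing itself specifies.

```python
# Rates copied from the listing above (QwQ 32B).
INPUT_PER_M = 0.66    # USD per 1M input tokens
OUTPUT_PER_M = 1.00   # USD per 1M output tokens
TOKENS_PER_S = 31.39  # steady output rate, tokens/second
FIRST_TOKEN_S = 0.46  # time to first token, seconds

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request at the listed per-million rates."""
    return (input_tokens * INPUT_PER_M + output_tokens * OUTPUT_PER_M) / 1_000_000

def request_latency(output_tokens: int) -> float:
    """Rough latency estimate: first-token delay + decode time (assumed linear)."""
    return FIRST_TOKEN_S + output_tokens / TOKENS_PER_S

# Example: a 2,000-token prompt with a 500-token completion.
print(f"${request_cost(2_000, 500):.6f}")   # → $0.001820
print(f"{request_latency(500):.2f} s")      # → 16.39 s
```

At these prices a typical chat-sized request costs a fraction of a cent, which is consistent with the "Cheap" tag above.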

Benchmark history

Evaluations: 12 (all measured May 14, 2026)

Benchmark                                  Score
LCR                                        0.25
IFBench                                    0.39
AIME 25                                    0.29
AIME                                       0.78
MATH 500                                   0.96
SciCode                                    0.36
LiveCodeBench                              0.63
HLE                                        0.08
GPQA                                       0.59
MMLU Pro                                   0.76
Artificial Analysis Math Index             29
Artificial Analysis Intelligence Index     19.7

Plan availability

Products and plans that support this model: 2
