
Qwen3.5 122B A10B (Reasoning)

Qwen3.5 122B A10B is a large-scale Mixture-of-Experts (MoE) model from Alibaba's Qwen series, optimized for complex reasoning tasks. It has 122 billion total parameters, of which 10 billion are active per token, balancing high performance with computational efficiency. The model supports a long context window and is particularly strong at code generation and logical analysis.

Coding · Reasoning · Long context
Input / 1M tokens:     $0.40
Output / 1M tokens:    $3.20
Output tokens/s:       153.85
Time to first token:   1.02 s
Supported plans:       4
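
From the listed rates, a rough per-request cost and latency can be worked out directly: cost is tokens divided by one million times the per-million price, and end-to-end latency is roughly time to first token plus output tokens divided by throughput. The sketch below is a minimal illustration using only the figures above; the function names are illustrative, not part of any SDK.

```python
# Rough cost/latency estimate for one request to Qwen3.5 122B A10B,
# using the listed rates. Names here are illustrative only.

INPUT_PRICE_PER_M = 0.40      # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 3.20     # USD per 1M output tokens
OUTPUT_TOKENS_PER_S = 153.85  # measured generation throughput
TTFT_S = 1.02                 # measured time to first token

def estimate_cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Cost = tokens / 1e6 * price-per-million, summed over input and output."""
    return (input_tokens / 1e6) * INPUT_PRICE_PER_M \
         + (output_tokens / 1e6) * OUTPUT_PRICE_PER_M

def estimate_latency_s(output_tokens: int) -> float:
    """End-to-end latency ~ time to first token + generation time."""
    return TTFT_S + output_tokens / OUTPUT_TOKENS_PER_S

# Example: a 20k-token prompt producing a 2k-token answer.
print(f"cost:    ${estimate_cost_usd(20_000, 2_000):.4f}")  # ~$0.0144
print(f"latency: {estimate_latency_s(2_000):.1f} s")        # ~14.0 s
```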

Benchmark history

Evaluations: 9 (all measured May 14, 2026)

Benchmark                                   Score
TAU2                                         0.94
Terminal-Bench Hard                          0.31
LCR                                          0.67
IFBench                                      0.76
SciCode                                      0.42
HLE                                          0.23
GPQA                                         0.86
Artificial Analysis Coding Index             34.7
Artificial Analysis Intelligence Index       41.6

Plan availability

Products and plans that support this model (1):

Apertis Coding Plan

Apertis Coding Plan is a subscription-based AI coding service providing unified access to 30+ AI models (GPT-5.4, Claude Opus 4.6, Gemini 3.1 Pro, and more) through a single API key. Designed for developers using coding agents like Claude Code, Cursor, Cline, and OpenCode, it offers predictable monthly pricing, free prompt caching, auto-failover, and quota-based billing across OpenAI, Anthropic, Google, and other providers.
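
Since the plan routes many providers behind one API key, clients would typically point an OpenAI-compatible SDK at the service's endpoint. The sketch below assumes an OpenAI-compatible Chat Completions API; the base URL and the model identifier are placeholders, not documented Apertis values.

```python
# Minimal sketch: calling Qwen3.5 122B A10B through an OpenAI-compatible
# endpoint with a single API key. Base URL and model ID are assumptions,
# not documented Apertis values.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.apertis.example/v1",  # hypothetical endpoint
    api_key=os.environ["APERTIS_API_KEY"],      # the plan's single key
)

resp = client.chat.completions.create(
    model="qwen3.5-122b-a10b",  # hypothetical model identifier
    messages=[
        {"role": "user", "content": "Review this function for off-by-one errors."},
    ],
)
print(resp.choices[0].message.content)
```

The same single-key setup is what lets coding agents such as Claude Code, Cursor, Cline, and OpenCode switch between the plan's 30+ models without per-provider credentials.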
