Models

Ling 2.6 Flash

Ling 2.6 Flash is a fast-response AI model from InclusionAI, designed for quick, efficient performance across a range of tasks. It likely emphasizes low latency and high throughput while retaining core reasoning capabilities.

Fast · Reasoning

Input: $0.10 / 1M tokens
Output: $0.30 / 1M tokens
Output speed: 215.39 tokens/s
Time to first token: 1.19 s
Supported plans: 0
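
At these rates, per-request cost and rough latency are easy to estimate. A minimal sketch using the figures listed above; the request sizes in the example are hypothetical, and real latency varies with load and context length:

```python
# Listed figures for Ling 2.6 Flash (from the stats above)
INPUT_USD_PER_M = 0.10       # $ per 1M input tokens
OUTPUT_USD_PER_M = 0.30      # $ per 1M output tokens
OUTPUT_TOKENS_PER_S = 215.39
FIRST_TOKEN_S = 1.19

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of a single request at the listed rates."""
    return (input_tokens * INPUT_USD_PER_M
            + output_tokens * OUTPUT_USD_PER_M) / 1e6

def est_latency_s(output_tokens: int) -> float:
    """Rough end-to-end latency: time to first token plus decode time."""
    return FIRST_TOKEN_S + output_tokens / OUTPUT_TOKENS_PER_S

# Hypothetical request: 8,000 input tokens, 1,000 output tokens
print(f"${request_cost(8_000, 1_000):.6f}")  # $0.001100
print(f"{est_latency_s(1_000):.2f} s")       # ≈ 5.83 s
```

Because output tokens cost 3× input tokens here, trimming verbose completions moves the bill more than trimming prompts of the same size.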

Benchmark history

Evaluations (9), all measured May 14, 2026:

Benchmark                               Score
TAU2                                    0.86
Terminal-Bench Hard                     0.21
LCR                                     0.25
IFBench                                 0.57
SciCode                                 0.27
HLE                                     0.06
GPQA                                    0.59
Artificial Analysis Coding Index        23.2
Artificial Analysis Intelligence Index  26.2

Plan availability

Products and plans that support this model: 0
No products or plans have been linked to this model yet.
