Sarvam 105B (high)

Sarvam 105B is a 105-billion-parameter large language model aimed at demanding workloads, optimized for strong reasoning and long-context handling.

Capabilities: Reasoning, Long context

Input price: $0.00 / 1M tokens
Output price: $0.00 / 1M tokens
Output speed: 145.38 tokens/s
Time to first token: 1.26 s
Supported plans: 0
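The throughput figures above can be combined into a rough end-to-end latency estimate: time to first token plus decode time for the requested number of output tokens. A minimal sketch (the function name and the 500-token example are illustrative, not part of the listing):

```python
def estimated_latency_s(output_tokens: int,
                        ttft_s: float = 1.26,
                        tokens_per_s: float = 145.38) -> float:
    """Rough end-to-end latency: time to first token plus decode time.

    Assumes steady decode throughput; real latency varies with load
    and prompt length.
    """
    return ttft_s + output_tokens / tokens_per_s

# Example: a 500-token completion takes roughly 4.7 s end to end.
print(round(estimated_latency_s(500), 2))
```

This ignores network overhead and prompt-processing time beyond the first token, so treat it as a lower bound.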

Benchmark history

Evaluations (9), all measured May 14, 2026:

TAU2: 0.47
Terminal-Bench Hard: 0.02
LCR: 0
IFBench: 0.34
SciCode: 0.26
HLE: 0.1
GPQA: 0.74
Artificial Analysis Coding Index: 9.8
Artificial Analysis Intelligence Index: 18.2

Plan availability

No products or plans have been linked to this model yet.
