TAU2
Score: 0.47 (measured May 14, 2026)
Sarvam 105B is a large language model with 105 billion parameters, optimized for strong reasoning and long-context tasks.
Benchmark history
Scores: 0.47, 0.02, 0, 0.34, 0.26, 0.1, 0.74, 9.8, 18.2