Sarvam 105B (high)
9 evaluations · Sarvam
Sarvam 105B is a large language model with 105 billion parameters, optimized for strong reasoning and long-context tasks.
Input / 1M tokens
$0.00
Output / 1M tokens
$0.00