Models

DeepSeek R1 Distill Llama 70B

A distilled version of the DeepSeek R1 reasoning model, built on the Llama 70B architecture. It inherits strong reasoning and chain-of-thought capabilities from the R1 series while being optimized for efficiency. The model excels at complex problem-solving, code generation, and tasks requiring logical deduction.

Coding · Reasoning · Fast · Cheap
Input / 1M tokens       $0.70
Output / 1M tokens      $1.05
Output tokens/s         44.91
Time to first token     0.34 s
Supported plans         4
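A quick sketch of how the figures above combine into per-request cost and wall-clock time. The token counts in the example are hypothetical; the prices, throughput, and first-token latency are taken from the stats listed here.

```python
# Rough cost and latency estimate from the listed figures.
INPUT_PRICE = 0.70    # USD per 1M input tokens
OUTPUT_PRICE = 1.05   # USD per 1M output tokens
TOKENS_PER_S = 44.91  # output throughput (tokens/s)
TTFT_S = 0.34         # time to first token (seconds)

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request at the listed per-million-token rates."""
    return (input_tokens * INPUT_PRICE + output_tokens * OUTPUT_PRICE) / 1_000_000

def generation_time(output_tokens: int) -> float:
    """Approximate wall-clock seconds: first-token latency plus decode time."""
    return TTFT_S + output_tokens / TOKENS_PER_S

# Example: a hypothetical 10,000-token prompt with a 2,000-token reply.
print(f"cost: ${request_cost(10_000, 2_000):.4f}")  # 0.007 + 0.0021 = $0.0091
print(f"time: {generation_time(2_000):.1f} s")
```

Note that decode time dominates for long replies, so the first-token latency matters mainly for short, interactive responses.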

Benchmark history

Evaluations: 15 (all measured May 14, 2026)

Benchmark                                 Score
TAU2                                      0.22
Terminal-Bench Hard                       0.02
LCR                                       0.11
IFBench                                   0.28
AIME 25                                   0.54
AIME                                      0.67
MATH 500                                  0.93
SciCode                                   0.31
LiveCodeBench                             0.27
HLE                                       0.06
GPQA                                      0.40
MMLU Pro                                  0.80
Artificial Analysis Math Index            53.7
Artificial Analysis Coding Index          11.4
Artificial Analysis Intelligence Index    16

Plan availability

Products and plans that support this model: 2
