Models

Mistral

Ministral 3 14B

Ministral 3 14B is a mid-sized, efficient language model from Mistral AI, optimized to balance speed, cost, and reasoning quality. It is well-suited to general-purpose tasks, coding assistance, and applications that require responsive performance.

Tags: Fast, Cheap, Reasoning, Coding

Input price / 1M tokens:   $0.20
Output price / 1M tokens:  $0.20
Output speed:              107.3 tokens/s
Time to first token:       0.39 s
Supported plans:           0
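The pricing and throughput figures above can be turned into a quick back-of-the-envelope estimate for a single request. This is a minimal sketch, not an official calculator; it assumes cost is linear in token counts and that generation runs at the published steady-state rate after the first token.

```python
# Rough per-request cost and latency estimate for Ministral 3 14B,
# using the figures published on this page (assumptions: $0.20/1M
# input tokens, $0.20/1M output tokens, 107.3 output tokens/s,
# 0.39 s to first token).

INPUT_PRICE_PER_M = 0.20     # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 0.20    # USD per 1M output tokens
OUTPUT_TOKENS_PER_S = 107.3  # steady-state generation speed
FIRST_TOKEN_S = 0.39         # time to first token

def estimate(input_tokens: int, output_tokens: int) -> tuple[float, float]:
    """Return (cost_usd, latency_s) for one request."""
    cost = (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000
    latency = FIRST_TOKEN_S + output_tokens / OUTPUT_TOKENS_PER_S
    return cost, latency

cost, latency = estimate(input_tokens=2_000, output_tokens=500)
print(f"~${cost:.6f}, ~{latency:.1f} s")  # ~$0.000500, ~5.1 s
```

At these prices a 2,000-in / 500-out request costs well under a tenth of a cent, with latency dominated by generation time rather than the first-token delay.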

Benchmark history

Evaluations: 13
All benchmarks measured May 14, 2026.

Benchmark                                  Score
TAU2                                       0.27
Terminal-Bench Hard                        0.05
LCR                                        0.22
IFBench                                    0.32
AIME 25                                    0.30
SciCode                                    0.24
LiveCodeBench                              0.35
HLE                                        0.05
GPQA                                       0.57
MMLU-Pro                                   0.69
Artificial Analysis Math Index             30
Artificial Analysis Coding Index           10.9
Artificial Analysis Intelligence Index     16

Plan availability

Products and plans that support this model: 0
No products or plans have been linked to this model yet.
