Mistral

Mistral develops high-performance open and commercial LLMs such as Mixtral and Mistral Large, focusing on efficiency and openness.

Products: 0
Models: 33
Available: 0
Benchmarks: 15

Region: France
Updated: May 14, 2026

Product coverage

Products from this provider: 0

No products have been linked to this provider yet.

Model coverage

Models from this provider: 33
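Each model card below lists an input price per 1M tokens, output throughput (tokens/s), and time to first token. These three figures can be combined into rough cost and latency estimates for a request. A minimal sketch (the helper names are illustrative, not an official API; output-token pricing is omitted because the cards list only input price):

```python
def request_cost_usd(input_tokens: int, price_per_million: float) -> float:
    """Prompt cost at the listed 'Input / 1M tokens' price."""
    return input_tokens / 1_000_000 * price_per_million

def latency_estimate_s(output_tokens: int, tokens_per_s: float,
                       first_token_s: float) -> float:
    """Rough end-to-end latency: time to first token plus generation time."""
    return first_token_s + output_tokens / tokens_per_s

# Example using Devstral Medium's listed figures:
# $0.40 / 1M input tokens, 83.48 tokens/s, 0.51 s to first token.
cost = request_cost_usd(50_000, 0.40)           # -> 0.02 (USD)
latency = latency_estimate_s(500, 83.48, 0.51)  # -> ~6.5 s
print(f"${cost:.4f}, ~{latency:.1f}s")
```

Real-world latency also depends on prompt length, server load, and streaming behavior, so treat these as back-of-envelope numbers for comparing models, not as guarantees.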

Mistral

Devstral 2

Devstral 2 is a high-performance model from Mistral AI, optimized for software development tasks. It excels at code generation, debugging, and complex reasoning, offering a strong balance of speed and cost-efficiency for developers.

Tags: Coding, Reasoning, Fast, Cheap

Input / 1M tokens: $0.00
Output tokens/s: 57.61
First-token seconds: 0.55s
Artificial Analysis Intelligence Index: 22

Mistral

Devstral Medium

Devstral Medium is a mid-sized model from Mistral's Devstral family, specifically optimized for code generation and software engineering tasks. It balances strong coding and reasoning capabilities with efficient performance and cost-effectiveness.

Tags: Coding, Reasoning, Fast, Cheap, Long context

Input / 1M tokens: $0.40
Output tokens/s: 83.48
First-token seconds: 0.51s
Artificial Analysis Intelligence Index: 18.7

Mistral

Devstral Small (Jul '25)

Devstral Small is a lightweight, fast model from Mistral optimized for software engineering tasks like code generation, debugging, and understanding. It is designed for rapid iteration and cost-effective deployment in development workflows.

Tags: Coding, Fast, Cheap, Long context

Input / 1M tokens: $0.10
Output tokens/s: 219.13
First-token seconds: 0.4s
Artificial Analysis Intelligence Index: 15.2

Mistral

Devstral Small (May '25)

Devstral Small is a compact, efficient model from Mistral's Devstral family, optimized for fast and cost-effective code generation and reasoning tasks. Released in May 2025, it is designed for developer workflows, offering a balance of performance and low latency.

Tags: Coding, Reasoning, Fast, Cheap

Input / 1M tokens: $0.00
Artificial Analysis Intelligence Index: 18

Mistral

Devstral Small 2

Devstral Small 2 is a lightweight, code-focused language model from Mistral's Devstral series. It is optimized for efficient code generation, understanding, and editing tasks, making it suitable for resource-constrained environments and rapid development workflows.

Tags: Coding, Fast, Cheap

Input / 1M tokens: $0.00
Output tokens/s: 59.36
First-token seconds: 0.49s
Artificial Analysis Intelligence Index: 19.5

Mistral

Magistral Medium 1

Magistral Medium 1 is a balanced, mid-sized language model from Mistral AI, optimized for strong reasoning and code generation capabilities. It offers a good trade-off between performance and efficiency for a wide range of tasks.

Tags: Coding, Reasoning, Fast, Cheap

Input / 1M tokens: $0.00
Artificial Analysis Intelligence Index: 18.8

Mistral

Magistral Medium 1.2

Magistral Medium 1.2 is a mid-sized, general-purpose language model from Mistral AI. It is designed to offer a strong balance between performance, speed, and cost-effectiveness for a wide range of tasks.

Tags: Reasoning, Fast, Cheap, Coding

Input / 1M tokens: $2.00
Output tokens/s: 40.98
First-token seconds: 0.52s
Artificial Analysis Intelligence Index: 27.1

Mistral

Magistral Small 1

A lightweight, efficient model from Mistral's Magistral series, optimized for fast response times and low cost. Suitable for general-purpose tasks where speed and affordability are priorities.

Tags: Fast, Cheap

Input / 1M tokens: $0.00
Artificial Analysis Intelligence Index: 16.8

Mistral

Magistral Small 1.2

Magistral Small 1.2 is a compact and efficient language model from Mistral AI, optimized for fast inference and low-cost deployment. It is well-suited for applications requiring quick responses and resource efficiency, such as edge computing or high-throughput services.

Tags: Reasoning, Coding, Fast, Cheap

Input / 1M tokens: $0.50
Output tokens/s: 107.88
First-token seconds: 0.38s
Artificial Analysis Intelligence Index: 18.2

Mistral

Ministral 3 14B

Ministral 3 14B is a mid-sized, efficient language model from Mistral AI, optimized for a balance of speed, cost, and strong reasoning capabilities. It is well-suited for general-purpose tasks, coding assistance, and applications requiring responsive performance.

Tags: Fast, Cheap, Reasoning, Coding

Input / 1M tokens: $0.20
Output tokens/s: 107.3
First-token seconds: 0.39s
Artificial Analysis Intelligence Index: 16

Mistral

Ministral 3 3B

Ministral 3 3B is a highly efficient, lightweight language model from Mistral AI, optimized for fast inference and low-cost deployment. It is designed for edge devices and applications requiring rapid response times with minimal computational resources.

Tags: Fast, Cheap

Input / 1M tokens: $0.10
Output tokens/s: 276.11
First-token seconds: 0.35s
Artificial Analysis Intelligence Index: 11.2

Mistral

Ministral 3 8B

Ministral 3 8B is a lightweight, efficient language model from Mistral AI. It is designed for fast inference and low-cost deployment while maintaining strong reasoning capabilities. This model is suitable for applications requiring quick responses and efficient resource utilization.

Tags: Reasoning, Fast, Cheap, Long context

Input / 1M tokens: $0.15
Output tokens/s: 119.9
First-token seconds: 0.38s
Artificial Analysis Intelligence Index: 14.8

Mistral

Mistral 7B Instruct

Mistral 7B Instruct is a 7-billion-parameter language model optimized for instruction-following and conversational tasks. It offers a strong balance of performance and efficiency, featuring a 32K token context window and robust reasoning capabilities for its size.

Tags: Reasoning, Fast, Cheap, Long context

Input / 1M tokens: $0.20
Output tokens/s: 109.98
First-token seconds: 0.37s
Artificial Analysis Intelligence Index: 7.4

Mistral

Mistral Large (Feb '24)

Mistral Large (Feb '24) is Mistral AI's flagship model, featuring strong reasoning capabilities, support for a long context window, and the ability to handle multimodal inputs. It excels at code generation and complex task processing.

Tags: Coding, Reasoning, Fast, Long context, Multimodal

Input / 1M tokens: $4.00
Artificial Analysis Intelligence Index: 9.9

Mistral

Mistral Large 2 (Jul '24)

Mistral Large 2 is Mistral AI's flagship model, offering top-tier performance in reasoning, multilingual understanding, and code generation. It supports a 128k token context window and is optimized for complex tasks requiring high accuracy and instruction following.

Tags: Reasoning, Coding, Long context

Input / 1M tokens: $2.00
Artificial Analysis Intelligence Index: 13

Mistral

Mistral Large 2 (Nov '24)

Mistral Large 2 is Mistral AI's flagship model, featuring a 128k context window and strong multilingual capabilities. It excels at complex reasoning, code generation, and supports function calling for advanced agentic workflows.

Tags: Coding, Reasoning, Long context, Multimodal

Input / 1M tokens: $2.00
Output tokens/s: 29.77
First-token seconds: 1.07s
Artificial Analysis Intelligence Index: 15.1

Mistral

Mistral Large 3

This is Mistral's latest flagship model, featuring strong reasoning and multilingual capabilities with support for a long context window.

Tags: Reasoning, Long context, Coding

Input / 1M tokens: $0.50
Output tokens/s: 56.45
First-token seconds: 0.59s
Artificial Analysis Intelligence Index: 22.8

Mistral

Mistral Medium

Mistral Medium is a mid-sized language model from Mistral AI, designed to offer a strong balance between performance and cost. It is well-suited for general-purpose tasks, including reasoning and coding, providing a capable and efficient solution.

Tags: Reasoning, Coding, Fast, Cheap

Input / 1M tokens: $2.75
Output tokens/s: 75.52
First-token seconds: 0.49s
Artificial Analysis Intelligence Index: 9

Mistral

Mistral Medium 3

Mistral Medium 3 is a balanced, mid-sized model from Mistral AI, designed to offer strong performance across reasoning, coding, and multimodal tasks while maintaining efficiency. It aims to provide a good trade-off between capability, speed, and cost for a wide range of applications.

Tags: Coding, Reasoning, Fast, Cheap, Multimodal

Input / 1M tokens: $0.40
Output tokens/s: 44.4
First-token seconds: 0.51s
Artificial Analysis Intelligence Index: 18.8

Mistral

Mistral Medium 3.1

Mistral Medium 3.1 is the latest version in the Mistral Medium series, featuring a 32K context window and strong multilingual support. It excels in reasoning, coding, and multilingual tasks, offering a balanced performance profile for complex applications.

Tags: Coding, Reasoning, Long context

Input / 1M tokens: $0.40
Output tokens/s: 86.89
First-token seconds: 0.5s
Artificial Analysis Intelligence Index: 21.3

Mistral

Mistral Medium 3.5

Mistral Medium 3.5 is an open-weight multimodal model from Mistral AI, optimized for coding, reasoning, and agentic use cases. It is designed for efficient performance and released under an open license.

Input / 1M tokens: $1.50
Output tokens/s: 154.34
First-token seconds: 0.82s
Artificial Analysis Intelligence Index: 39.2

Mistral

Mistral Nemo Instruct 2407

Mistral Nemo Instruct is an instruction-tuned model from Mistral, released in July 2024.

Tags: Coding, Reasoning, Fast, Cheap

Mistral

Mistral Saba

Mistral Saba is a model from Mistral AI. Based on the provider's general focus, it likely emphasizes efficient performance and strong reasoning capabilities. Specific details on its architecture or unique features are not publicly confirmed.

Input / 1M tokens: $0.00
Artificial Analysis Intelligence Index: 12.1

Mistral

Mistral Small (Feb '24)

Mistral Small (Feb '24) is a compact and efficient language model from Mistral AI, optimized for fast inference and low-cost deployment. It delivers strong performance on general tasks, coding, and reasoning while maintaining a small footprint suitable for edge or high-throughput applications.

Tags: Fast, Cheap, Coding, Reasoning

Input / 1M tokens: $1.00
Output tokens/s: 155.11
First-token seconds: 0.5s
Artificial Analysis Intelligence Index: 9

Mistral

Mistral Small (Sep '24)

Mistral Small (Sep '24) is a compact and efficient language model from Mistral AI, optimized for fast inference and low-cost deployment. It delivers strong performance on general tasks, coding, and reasoning while maintaining a small footprint suitable for edge or high-throughput applications.

Tags: Fast, Cheap, Reasoning, Coding

Input / 1M tokens: $0.20
Output tokens/s: 157.76
First-token seconds: 0.54s
Artificial Analysis Intelligence Index: 10.2

Mistral

Mistral Small 3

Mistral Small 3 is a compact and efficient language model from Mistral AI, optimized for fast inference and low-cost deployment. It is well-suited for applications requiring quick responses and cost-effective operation, while maintaining solid general reasoning capabilities.

Tags: Fast, Cheap, Reasoning

Input / 1M tokens: $0.075
Output tokens/s: 160.93
First-token seconds: 0.52s
Artificial Analysis Intelligence Index: 12.7

Mistral

Mistral Small 3.1

Mistral Small 3.1 is a compact and efficient language model from Mistral AI, optimized for fast inference and cost-effective deployment. It delivers strong performance in reasoning and coding tasks while maintaining low latency.

Tags: Reasoning, Fast, Cheap

Input / 1M tokens: $0.105
Output tokens/s: 161.35
First-token seconds: 0.51s
Artificial Analysis Intelligence Index: 14.5

Mistral

Mistral Small 3.2

Mistral Small 3.2 is a lightweight yet high-performance language model from the Mistral Small family. It features a 128K token context window, strong reasoning and coding capabilities, and is optimized for fast, cost-effective inference in production environments.

Tags: Coding, Reasoning, Fast, Cheap, Long context

Input / 1M tokens: $0.087
Output tokens/s: 118.82
First-token seconds: 0.42s
Artificial Analysis Intelligence Index: 15.1

Mistral

Mistral Small 4 (Non-reasoning)

Mistral Small 4 (Non-reasoning) is a fast and cost-effective language model optimized for low-latency, high-throughput tasks. It is designed for general-purpose use cases where speed and efficiency are prioritized over complex, multi-step reasoning.

Tags: Fast, Cheap, Coding, Long context

Input / 1M tokens: $0.15
Output tokens/s: 160.42
First-token seconds: 0.52s
Artificial Analysis Intelligence Index: 18.6

Mistral

Mistral Small 4 (Reasoning)

Mistral Small 4 is the latest iteration in the Mistral Small series, optimized for reasoning tasks. It delivers enhanced complex reasoning capabilities while maintaining the series' characteristic efficiency and low cost.

Tags: Reasoning, Fast, Cheap, Long context

Input / 1M tokens: $0.15
Output tokens/s: 163.26
First-token seconds: 0.58s
Artificial Analysis Intelligence Index: 27.8

Mistral

Mixtral 8x22B Instruct

A large-scale Mixture-of-Experts (MoE) model composed of 8 experts, each with 22B parameters, enabling efficient inference through sparse activation. It excels at instruction following, dialogue, and complex reasoning tasks while maintaining a relatively low inference cost.

Tags: Reasoning, Cheap, Long context

Input / 1M tokens: $0.00
Artificial Analysis Intelligence Index: 9.8

Mistral

Mixtral 8x7B Instruct

An open-source instruction-tuned model based on a Mixture-of-Experts (MoE) architecture. It excels at complex reasoning tasks, supports long-context processing, and offers fast response times with high cost-efficiency.

Tags: Reasoning, Fast, Cheap, Long context

Input / 1M tokens: $0.45
Artificial Analysis Intelligence Index: 7.7

Mistral

Pixtral Large

Pixtral Large is a multimodal model from Mistral AI, designed to process and understand both text and images. It is part of the Pixtral family, focusing on strong reasoning and visual comprehension capabilities.

Tags: Multimodal, Reasoning, Coding

Input / 1M tokens: $2.00
Output tokens/s: 55.74
First-token seconds: 0.49s
Artificial Analysis Intelligence Index: 14
