Models

Mistral

Mixtral 8x22B Instruct

Mixtral 8x22B Instruct is a large sparse Mixture-of-Experts (MoE) model built from 8 experts of roughly 22B parameters each. A router activates only a subset of experts per token, so about 39B of its ~141B total parameters are used for any given token, keeping inference cost well below that of a comparably sized dense model. It is tuned for instruction following, dialogue, and complex reasoning tasks.
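
To make the sparse-activation claim above concrete, here is a minimal NumPy sketch of top-2 gating over 8 experts, the routing scheme Mixtral's architecture uses. All names, shapes, and the ReLU feed-forward experts are illustrative assumptions, not the actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

n_experts, d_model, d_ff = 8, 64, 256
x = rng.standard_normal(d_model)             # one token's hidden state

# Router: a linear layer scoring each expert for this token.
W_gate = rng.standard_normal((n_experts, d_model))
logits = W_gate @ x

# Keep only the top-2 experts; renormalize their weights with a softmax.
top2 = np.argsort(logits)[-2:]
weights = np.exp(logits[top2] - logits[top2].max())
weights /= weights.sum()

# Each expert is a small feed-forward net. Only the two selected experts
# run, which is why per-token compute is far below the total parameter count.
experts_W1 = rng.standard_normal((n_experts, d_ff, d_model))
experts_W2 = rng.standard_normal((n_experts, d_model, d_ff))

def expert(i, h):
    return experts_W2[i] @ np.maximum(experts_W1[i] @ h, 0.0)  # ReLU FFN

y = sum(w * expert(i, x) for w, i in zip(weights, top2))
print(y.shape)  # (64,)
```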

Tags: Reasoning · Cheap · Long context

Pricing
Input: $0.00 / 1M tokens
Output: $0.00 / 1M tokens
Supported plans: 0
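
Assuming the listed per-million-token rates apply linearly, the cost arithmetic is a simple scaling, sketched below. The function name and token counts are illustrative; the $0.00 rates are taken from the table above.

```python
INPUT_PER_M = 0.00   # USD per 1M input tokens (from the pricing table)
OUTPUT_PER_M = 0.00  # USD per 1M output tokens (from the pricing table)

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Linear cost model: tokens scaled to millions times the listed rate."""
    return (input_tokens / 1e6) * INPUT_PER_M + (output_tokens / 1e6) * OUTPUT_PER_M

print(f"${request_cost(120_000, 8_000):.4f}")  # $0.0000 at the listed rates
```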

Benchmark history

Evaluations: 8 (all measured May 14, 2026)

Benchmark                                 Score
AIME                                      0.00
MATH-500                                  0.54
SciCode                                   0.19
LiveCodeBench                             0.15
Humanity's Last Exam (HLE)                0.04
GPQA                                      0.33
MMLU-Pro                                  0.54
Artificial Analysis Intelligence Index    9.8

Benchmark scores are accuracies on a 0–1 scale; the Artificial Analysis Intelligence Index is a composite reported on a 0–100 scale.

Plan availability

Products and plans that support this model: 0
No products or plans have been linked to this model yet.
