
Nemotron Cascade 2 30B A3B

NVIDIA, United States

Nemotron Cascade 2 30B A3B is a large language model from NVIDIA's Nemotron family, featuring a Mixture-of-Experts (MoE) architecture with 30 billion total parameters and 3 billion active parameters per token. This design enables efficient, high-speed inference while maintaining strong performance on coding and reasoning tasks.
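To see why only 3B of 30B parameters are touched per token, the MoE idea can be sketched as top-k expert routing. This is a minimal toy illustration, not NVIDIA's implementation: the expert count, hidden size, and top-k value below are made-up small numbers, and the real model's router and expert layers are far more elaborate.

```python
import numpy as np

# Toy Mixture-of-Experts layer: many experts exist (total parameters),
# but each token is routed to only the top-k of them (active parameters).
rng = np.random.default_rng(0)

N_EXPERTS = 8   # total experts in the layer (toy value)
TOP_K = 2       # experts activated per token (toy value)
D_MODEL = 16    # hidden size (toy value)

# Each expert is a small feed-forward weight matrix; the router scores experts.
experts = [rng.standard_normal((D_MODEL, D_MODEL)) for _ in range(N_EXPERTS)]
router = rng.standard_normal((D_MODEL, N_EXPERTS))

def moe_forward(x):
    """Route a single token vector x through its top-k experts."""
    logits = x @ router                # (N_EXPERTS,) routing scores
    top = np.argsort(logits)[-TOP_K:]  # indices of the k highest-scoring experts
    # Softmax over the selected experts' scores only.
    w = np.exp(logits[top] - logits[top].max())
    w /= w.sum()
    # Weighted sum of the chosen experts' outputs; the unchosen experts'
    # weights are never read, which is what makes inference cheap.
    return sum(wi * (x @ experts[i]) for wi, i in zip(w, top))

token = rng.standard_normal(D_MODEL)
out = moe_forward(token)

total_params = sum(e.size for e in experts)       # all expert weights
active_params = TOP_K * D_MODEL * D_MODEL         # weights actually used
print(out.shape, total_params, active_params)     # active is a small fraction of total
```

With these toy sizes, 512 of 2048 expert weights are used per token; the same routing principle is what lets a 30B-parameter model run at the cost of roughly 3B active parameters.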

Tags: Coding, Reasoning, Fast, Cheap

Input / 1M tokens: $0.00
Output / 1M tokens: $0.00
Supported plans: 0

Benchmark history

Evaluations: 9 (all measured May 14, 2026)

TAU2: 0.53
Terminal-Bench Hard: 0.21
LCR: 0.34
IFBench: 0.8
SciCode: 0.35
HLE: 0.11
GPQA: 0.76
Artificial Analysis Coding Index: 25.8
Artificial Analysis Intelligence Index: 28.4

Plan availability

Products and plans that support this model: 0
No products or plans have been linked to this model yet.
