Providers
DeepSeek
DeepSeek develops high-performance open and proprietary LLMs such as DeepSeek-V2 and DeepSeek-Coder, with strong capabilities in coding and reasoning tasks.

Products: 0
Models: 31
Available: 0
Benchmarks: 15

Region

China

Updated

May 14, 2026

Product coverage

Products from this provider

0
No products have been linked to this provider yet.

Model coverage

Models from this provider

31

DeepSeek Coder V2

DeepSeek Coder V2 Lite Instruct

DeepSeek Coder V2 Lite Instruct is a lightweight, efficient version of the Coder V2 model, optimized for fast code generation and instruction following. It retains strong coding and reasoning capabilities while being more resource-efficient.

Coding · Reasoning · Fast · Cheap

Input / 1M tokens

$0.00

Artificial Analysis Intelligence Index

8.5

DeepSeek

DeepSeek LLM 67B Chat (V1)

DeepSeek LLM 67B Chat (V1) is a 67-billion parameter large language model developed by DeepSeek. It is designed for general-purpose conversational AI tasks, demonstrating strong instruction-following and dialogue capabilities as part of the DeepSeek LLM series.

Reasoning

Input / 1M tokens

$0.00

Artificial Analysis Intelligence Index

8.4

DeepSeek R1

DeepSeek R1 (Jan '25)

DeepSeek R1 is a reasoning-focused language model from DeepSeek, optimized for complex tasks in mathematics, coding, and logic. It utilizes chain-of-thought reasoning to solve problems step-by-step and is available as an open-weight model.

Reasoning · Coding · Fast · Cheap

Input / 1M tokens

$1.68

Artificial Analysis Intelligence Index

18.8
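
The chain-of-thought models above are typically called through an OpenAI-compatible chat-completions interface. As a minimal sketch, assuming a `deepseek-reasoner` model id and the standard request shape (neither is confirmed by this listing), the request body could be built like this:

```python
# Sketch: building a chat-completion request body for an OpenAI-compatible
# endpoint. The model id "deepseek-reasoner" is an assumption for
# illustration, not taken from this page.

def build_chat_request(model: str, prompt: str, temperature: float = 0.0) -> dict:
    """Return the JSON body for a /chat/completions-style call."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }

request = build_chat_request("deepseek-reasoner", "Prove that 17 is prime.")
```

The body would then be POSTed with an API key; only the payload construction is shown here, so the sketch runs without network access.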

DeepSeek R1

DeepSeek R1 0528 (May '25)

DeepSeek R1 is a reasoning-focused model from the R1 series, optimized for complex tasks requiring step-by-step thinking. It excels in mathematics, coding, and logical reasoning by leveraging advanced reinforcement learning and chain-of-thought methodologies.

Reasoning · Coding · Cheap

Input / 1M tokens

$1.35

Artificial Analysis Intelligence Index

27.1

DeepSeek R1

DeepSeek R1 0528 Qwen3 8B

A distilled reasoning model from DeepSeek, based on the Qwen3 8B architecture. It is optimized for mathematical and code reasoning tasks, offering strong performance in a lightweight and efficient package.

Reasoning · Long context

Input / 1M tokens

$0.00

Artificial Analysis Intelligence Index

16.4

DeepSeek R1

DeepSeek R1 Distill Llama 70B

A distilled version of the DeepSeek R1 reasoning model, built on the Llama 70B architecture. It inherits strong reasoning and chain-of-thought capabilities from the R1 series while being optimized for efficiency. The model excels at complex problem-solving, code generation, and tasks requiring logical deduction.

Coding · Reasoning · Fast · Cheap

Input / 1M tokens

$0.70

Output tokens/s

44.91

First-token seconds

0.34s

Artificial Analysis Intelligence Index

16

DeepSeek R1

DeepSeek R1 Distill Llama 8B

A distilled 8B parameter model from the DeepSeek R1 series, optimized for fast and efficient reasoning tasks. It inherits strong reasoning capabilities from larger R1 models while being lightweight and cost-effective.

Reasoning · Fast · Cheap

Input / 1M tokens

$0.00

Artificial Analysis Intelligence Index

12.1

DeepSeek R1

DeepSeek R1 Distill Qwen 1.5B

A distilled version of the DeepSeek R1 reasoning model, based on the Qwen 1.5B architecture. It inherits strong reasoning capabilities from the larger R1 model while being significantly smaller and faster, making it suitable for edge deployment and low-latency applications.

Reasoning · Fast · Cheap

Input / 1M tokens

$0.00

Artificial Analysis Intelligence Index

9.1

DeepSeek R1

DeepSeek R1 Distill Qwen 14B

A distilled version of the DeepSeek R1 reasoning model, built upon the Qwen 14B architecture. It aims to deliver strong reasoning and problem-solving capabilities in a more compact and efficient form factor.

Reasoning · Fast · Cheap

Input / 1M tokens

$0.00

Artificial Analysis Intelligence Index

15.8

DeepSeek R1

DeepSeek R1 Distill Qwen 32B

A distilled version of the DeepSeek R1 reasoning model, built on the Qwen 32B architecture. It inherits strong chain-of-thought reasoning capabilities from the larger R1 model while offering faster inference speeds and lower computational costs. This model is optimized for efficient deployment without sacrificing core reasoning performance.

Reasoning · Fast · Cheap

Input / 1M tokens

$0.00

Artificial Analysis Intelligence Index

17.2

DeepSeek V3

DeepSeek V3 (Dec '24)

DeepSeek V3 is a powerful and cost-effective large language model released in December 2024. It excels in complex reasoning, coding, and long-context understanding (up to 128K tokens), while maintaining high inference speed and low operational costs.

Coding · Reasoning · Fast · Cheap · Long context

Input / 1M tokens

$0.40

Artificial Analysis Intelligence Index

16.5
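
The per-1M-token prices on these cards translate directly into request costs. As a small sketch using the $0.40 / 1M input rate listed for this model (output pricing, not shown on this page, would be added the same way):

```python
# Sketch: estimating input cost from the per-million-token prices
# listed on this page.

def input_cost(tokens: int, usd_per_million: float) -> float:
    """Cost in USD for `tokens` input tokens at a given per-1M rate."""
    return tokens * usd_per_million / 1_000_000

# e.g. a full 128K-token context at the $0.40 / 1M input rate above
cost = input_cost(128_000, 0.40)  # → 0.0512 USD
```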

DeepSeek V3

DeepSeek V3 0324

DeepSeek V3 0324 is DeepSeek's flagship model, featuring strong reasoning and coding capabilities. It offers a high-performance, cost-effective solution for complex tasks.

Coding · Reasoning · Fast · Cheap

Input / 1M tokens

$1.20

Artificial Analysis Intelligence Index

22.3

DeepSeek V3.1

DeepSeek V3.1 (Non-reasoning)

DeepSeek V3.1 is a high-performance, non-reasoning variant of the V3 series, optimized for fast response times and cost-efficiency. It is designed for general-purpose tasks where rapid inference and low latency are prioritized over complex chain-of-thought reasoning.

Fast · Cheap

Input / 1M tokens

$0.555

Artificial Analysis Intelligence Index

28.1

DeepSeek V3.1

DeepSeek V3.1 (Reasoning)

DeepSeek V3.1 (Reasoning) is a specialized variant of the V3.1 model optimized for complex reasoning tasks. It incorporates an enhanced thinking process to deliver more accurate and logical solutions for problems requiring multi-step analysis.

Reasoning · Coding · Cheap

Input / 1M tokens

$0.59

Artificial Analysis Intelligence Index

27.7

DeepSeek V3.1

DeepSeek V3.1 Terminus (Non-reasoning)

DeepSeek V3.1 Terminus is a non-reasoning variant of the V3.1 model, optimized for high-speed response and cost-efficiency. It is designed for tasks where rapid output and low latency are prioritized over complex chain-of-thought reasoning.

Coding · Fast · Cheap · Long context

Input / 1M tokens

$0.27

Artificial Analysis Intelligence Index

28.5

DeepSeek V3.1

DeepSeek V3.1 Terminus (Reasoning)

DeepSeek V3.1 Terminus (Reasoning) is a specialized variant of the V3.1 model optimized for complex reasoning tasks. It likely incorporates advanced chain-of-thought or thinking mechanisms to enhance performance on logic, analysis, and problem-solving challenges.

Reasoning · Coding · Long context

Input / 1M tokens

$1.64

Artificial Analysis Intelligence Index

33.9

DeepSeek V3.2

DeepSeek V3.2 (Non-reasoning)

DeepSeek V3.2 (Non-reasoning) is a general-purpose large language model optimized for fast response and low cost. It excels at code generation and processing long contexts, making it suitable for a wide range of non-reasoning tasks.

Coding · Fast · Cheap · Long context

Input / 1M tokens

$0.50

Artificial Analysis Intelligence Index

32.1

DeepSeek V3.2

DeepSeek V3.2 (Reasoning)

DeepSeek V3.2 (Reasoning) is a large language model optimized for complex reasoning tasks. It features an enhanced thinking mode or chain-of-thought capability, excelling in multi-step logical deduction, mathematical problem-solving, and code generation.

Reasoning · Coding · Long context

Input / 1M tokens

$0.30

Artificial Analysis Intelligence Index

41.7

DeepSeek V3.2

DeepSeek V3.2 Exp (Non-reasoning)

This is an experimental version of the DeepSeek V3 series, optimized for general-purpose tasks and fast response times. It maintains strong code generation and long-context processing capabilities while operating in a non-reasoning mode for quicker outputs.

Coding · Fast · Cheap · Long context

Input / 1M tokens

$0.275

Artificial Analysis Intelligence Index

28.4

DeepSeek V3.2

DeepSeek V3.2 Exp (Reasoning)

An experimental reasoning-focused variant of the DeepSeek V3.2 model, optimized for complex logical deduction and chain-of-thought tasks. It likely maintains strong coding and general capabilities while enhancing performance on problems requiring multi-step reasoning.

Reasoning · Coding · Fast · Cheap

Input / 1M tokens

$0.275

Artificial Analysis Intelligence Index

32.9

DeepSeek V3.2

DeepSeek V3.2 Speciale

A specialized variant of the DeepSeek V3.2 architecture, optimized for enhanced performance in code generation and complex reasoning tasks. It maintains the series' hallmark of high cost-efficiency and rapid response times.

Coding · Reasoning · Fast · Cheap

Input / 1M tokens

$0.00

Artificial Analysis Intelligence Index

29.4

DeepSeek V4 Flash

DeepSeek V4 Flash (Non-reasoning)

A fast and efficient model from the DeepSeek V4 family, optimized for low-latency responses and general tasks. It excels in code generation and instruction following, but is not designed for complex reasoning or chain-of-thought tasks.

Fast · Coding

Input / 1M tokens

$0.14

Output tokens/s

91.21

First-token seconds

0.83s

Artificial Analysis Intelligence Index

36.5

DeepSeek V4 Flash

DeepSeek V4 Flash (Reasoning, High Effort)

This is the Flash version of the DeepSeek V4 series, optimized for reasoning tasks with a high-effort mode to enhance complex problem-solving. It delivers enhanced reasoning performance while maintaining fast response speeds.

Reasoning · Fast · Cheap

Input / 1M tokens

$0.14

Artificial Analysis Intelligence Index

46

DeepSeek V4 Flash

DeepSeek V4 Flash (Reasoning, Max Effort)

DeepSeek V4 Flash is a fast-response model optimized for reasoning tasks. It is designed to deliver high-quality reasoning outputs with maximum computational effort while maintaining low latency.

Fast · Reasoning

Input / 1M tokens

$0.14

Output tokens/s

81.62

First-token seconds

0.83s

Artificial Analysis Intelligence Index

46.5

DeepSeek V4 Pro

DeepSeek V4 Pro (Non-reasoning)

This is the non-reasoning variant of the DeepSeek V4 Pro model. It retains the core capabilities of the V4 Pro series, such as strong coding and general knowledge, but is optimized for faster response times and lower cost by omitting the extended reasoning or chain-of-thought process. It is suitable for applications requiring quick, cost-effective responses.

Coding · Fast · Cheap · Multimodal

Input / 1M tokens

$1.74

Output tokens/s

31.21

First-token seconds

1.1s

Artificial Analysis Intelligence Index

39.3

DeepSeek V4 Pro

DeepSeek V4 Pro (Reasoning, High Effort)

A high-effort reasoning model from the DeepSeek V4 series, optimized for complex problem-solving and deep analytical tasks. It likely employs extended thinking or chain-of-thought processes to tackle challenging queries in coding, mathematics, and logic.

Coding · Reasoning

Input / 1M tokens

$1.74

Output tokens/s

31.01

First-token seconds

1.21s

Artificial Analysis Intelligence Index

49.8

DeepSeek V4 Pro

DeepSeek V4 Pro (Reasoning, Max Effort)

DeepSeek V4 Pro is a high-compute reasoning model optimized for complex logical, mathematical, and coding tasks. It represents the 'Max Effort' variant, utilizing significant computational resources to maximize accuracy and depth in its reasoning chains.

Reasoning

Input / 1M tokens

$1.74

Output tokens/s

31.3

First-token seconds

1.2s

Artificial Analysis Intelligence Index

51.5

DeepSeek Coder V2

DeepSeek-Coder-V2

DeepSeek-Coder-V2 is a powerful code generation model designed for complex programming tasks. It excels in code completion, generation, and understanding across multiple languages, featuring strong reasoning capabilities and support for long-context windows.

Coding · Reasoning · Long context · Fast · Cheap

Input / 1M tokens

$0.00

Artificial Analysis Intelligence Index

10.6

DeepSeek V2

DeepSeek-V2-Chat

DeepSeek-V2-Chat is an open-source chat model built on a Mixture-of-Experts (MoE) architecture with 236B total parameters but only 21B activated per token, achieving high efficiency. It supports a 128K context window and demonstrates strong performance in coding, mathematics, and multilingual tasks.

Long context · Coding · Reasoning · Cheap

Input / 1M tokens

$0.00

Artificial Analysis Intelligence Index

9.1
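
The MoE efficiency claim on this card can be checked with a line of arithmetic: 21B activated parameters out of 236B total means only a small fraction of the weights participate in each forward pass.

```python
# Sketch: the active-parameter fraction implied by the card above
# (236B total parameters, 21B activated per token).

total_params = 236e9
active_params = 21e9
active_fraction = active_params / total_params
# roughly 8.9% of the weights are used per token
```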

DeepSeek V2.5

DeepSeek-V2.5

DeepSeek-V2.5 is an advanced Mixture-of-Experts (MoE) large language model, representing an evolution of the V2 series. It is optimized for strong reasoning and coding capabilities while maintaining efficiency. The model supports a long context window.

Coding · Reasoning · Fast · Cheap · Long context

Input / 1M tokens

$0.00

Artificial Analysis Intelligence Index

12.3

DeepSeek V2.5

DeepSeek-V2.5 (Dec '24)

DeepSeek-V2.5 is a high-performance Mixture-of-Experts (MoE) model optimized for coding and reasoning tasks. It offers a strong balance of capability and cost-efficiency, supporting long context windows for complex applications.

Coding · Reasoning · Fast · Cheap · Long context

Input / 1M tokens

$0.00

Artificial Analysis Intelligence Index

12.5
