
Inception: Mercury 2

Context: 128k tokens
Lowest price: $0.25 per 1M tokens
Providers: 1 available

Price Comparison

Provider: OpenRouter (lowest price)
Input / Output: $0.25 / $0.75 per 1M tokens
Latency: ...
Status: Verified

About This Model

Mercury 2 is an extremely fast reasoning LLM and the first reasoning diffusion LLM (dLLM). Instead of generating tokens sequentially, Mercury 2 produces and refines multiple tokens in parallel, achieving over 1,000 tokens/sec on standard GPUs. Mercury 2 is 5x+ faster than leading speed-optimized LLMs l...
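To make the parallel-decoding idea concrete, here is a toy sketch of the general draft-and-refine loop used by diffusion LLMs: start from a fully masked sequence and update every position in parallel each step, re-masking low-confidence positions. This is an illustration of the dLLM concept only, not Mercury 2's actual algorithm; `toy_model` is a hypothetical stand-in for the real network.

```python
# Toy illustration of parallel draft-and-refine (diffusion-style) decoding.
# NOT Mercury 2's actual algorithm: `toy_model` is a hypothetical stand-in
# that proposes a token and a confidence score for each position.

MASK = "<mask>"

def toy_model(tokens):
    """Stand-in predictor: returns a (token, confidence) proposal per position."""
    proposals = []
    for i, tok in enumerate(tokens):
        if tok == MASK:
            # Hypothetical fill; a real dLLM would predict from context.
            proposals.append((f"word{i}", 0.6))
        else:
            proposals.append((tok, 1.0))
    return proposals

def diffusion_decode(length, steps=4, threshold=0.5):
    """Start all-masked, refine every position in parallel each step,
    committing only proposals whose confidence clears the threshold."""
    tokens = [MASK] * length
    for _ in range(steps):
        proposals = toy_model(tokens)
        tokens = [tok if conf >= threshold else MASK
                  for tok, conf in proposals]
        if MASK not in tokens:
            break
    return tokens

print(diffusion_decode(5))  # every position is filled in parallel, not left-to-right
```

Because whole blocks of tokens are committed per refinement step instead of one token per forward pass, this style of decoding is what enables the throughput figures quoted above.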

Quick Start
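Since OpenRouter is the listed provider, a minimal sketch of calling the model through OpenRouter's OpenAI-compatible chat completions endpoint might look like the following. The model slug `inception/mercury-2` is an assumption; check the provider listing for the exact identifier, and set `OPENROUTER_API_KEY` in your environment.

```python
# Minimal sketch: call Mercury 2 via OpenRouter's OpenAI-compatible
# chat completions endpoint using only the standard library.
# The model slug "inception/mercury-2" is an assumption; verify it
# against the provider listing before use.
import json
import os
import urllib.request

API_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_request(prompt, model="inception/mercury-2"):
    """Assemble the headers and JSON body for a chat completion call."""
    headers = {
        "Authorization": f"Bearer {os.environ.get('OPENROUTER_API_KEY', '')}",
        "Content-Type": "application/json",
    }
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return headers, body

if __name__ == "__main__":
    headers, body = build_request("Summarize diffusion LLM decoding in one sentence.")
    req = urllib.request.Request(
        API_URL, data=json.dumps(body).encode("utf-8"), headers=headers
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        print(json.load(resp)["choices"][0]["message"]["content"])
```

The request shape is the standard OpenAI-style `messages` array, so existing OpenAI-compatible clients can be pointed at the OpenRouter base URL instead of hand-rolling the HTTP call.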