Is DeepSeek V4 available through API model IDs?
DeepSeek's API docs list deepseek-v4-flash and deepseek-v4-pro, with both OpenAI-format and Anthropic-format base URLs. Always re-check official docs before production launch.
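As a minimal sketch of what an OpenAI-format call to these model IDs looks like, the snippet below assembles the URL, headers, and JSON body without sending anything. The base URL and `/chat/completions` path are assumptions here; verify both against the official docs before use.

```python
import json

# Assumed OpenAI-format base URL; re-check the official docs before production.
BASE_URL = "https://api.deepseek.com"

def build_chat_request(model: str, user_message: str, api_key: str) -> dict:
    """Assemble URL, headers, and body for an OpenAI-format chat request."""
    return {
        "url": f"{BASE_URL}/chat/completions",
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": user_message}],
        }),
    }

req = build_chat_request("deepseek-v4-flash", "Hello", "sk-...")
print(req["url"])
```

The same payload shape works for `deepseek-v4-pro`; only the `model` field changes.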
FAQ
Clear answers for Flash-first pricing, OpenClaw adaptation, API migration, aliases, model comparison, and site independence.
What does DeepSeek V4 Pro cost?
The current official V4 Pro rates are $0.145/M for cache-hit input, $1.74/M for cache-miss input, and $3.48/M for output. The $12/M output price belongs to Gemini 3.1 Pro Preview, not DeepSeek V4 Pro.
Why is Flash the route to promote first?
Flash should be the default for production and OpenClaw traffic because its cost story is stronger: $0.14/M input, $0.028/M cache-hit input, and $0.28/M output. Pro remains useful, but as an escalation model rather than the default.
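Using the per-million-token rates quoted above, a small cost calculator makes the Flash-versus-Pro gap concrete. The prices are copied from this FAQ; re-check them against the official pricing page before relying on the numbers.

```python
# Per-million-token prices (USD) as quoted in this FAQ; verify before use.
PRICES = {
    "deepseek-v4-flash": {"hit": 0.028, "miss": 0.14, "out": 0.28},
    "deepseek-v4-pro":   {"hit": 0.145, "miss": 1.74, "out": 3.48},
}

def request_cost(model: str, hit_tokens: int, miss_tokens: int, out_tokens: int) -> float:
    """Cost in USD for one request, split by cache-hit/miss input and output tokens."""
    p = PRICES[model]
    return (hit_tokens * p["hit"] + miss_tokens * p["miss"] + out_tokens * p["out"]) / 1_000_000

# Example mix: 100k cached input, 20k fresh input, 10k output tokens.
flash = request_cost("deepseek-v4-flash", 100_000, 20_000, 10_000)
pro = request_cost("deepseek-v4-pro", 100_000, 20_000, 10_000)
print(f"flash ${flash:.4f} vs pro ${pro:.4f}")
```

On this illustrative mix, Pro comes out roughly an order of magnitude more expensive per request, which is the arithmetic behind the Flash-first recommendation.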
What does OpenClaw adaptation mean here?
It means the site gives OpenClaw a dedicated model-routing story: routine agent turns use deepseek-v4-flash, prompt and tool scaffolds stay stable to maximize cache hits, and deepseek-v4-pro is invoked only after clear escalation triggers.
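The Flash-default, Pro-on-escalation pattern can be sketched as a small routing function. The specific trigger names (`retries`, `needs_long_reasoning`) are hypothetical; real escalation criteria would be product-specific.

```python
DEFAULT_MODEL = "deepseek-v4-flash"
ESCALATION_MODEL = "deepseek-v4-pro"

def pick_model(turn: dict) -> str:
    """Route routine agent turns to Flash; escalate to Pro only on clear signals.

    Trigger fields here are illustrative, not part of any real API.
    """
    if turn.get("retries", 0) >= 2:        # Flash has already failed repeatedly
        return ESCALATION_MODEL
    if turn.get("needs_long_reasoning"):   # caller flagged an unusually hard task
        return ESCALATION_MODEL
    return DEFAULT_MODEL

print(pick_model({}))                      # routine turn stays on Flash
print(pick_model({"retries": 2}))          # repeated failure escalates to Pro
```

Keeping the default branch on Flash, with stable prompt scaffolds, is what preserves the cache-hit pricing the previous answers describe.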
What happens to the older model aliases?
The pricing page says older model names will eventually be deprecated; for now they map to V4 Flash compatibility modes. New integrations should configure the V4 model IDs directly.
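A defensive way to handle that migration window is an alias-resolution shim that warns whenever a deprecated name is used. The legacy names below are purely illustrative, since the FAQ does not enumerate them.

```python
import warnings

# Hypothetical legacy aliases; the actual deprecated names are not listed in this FAQ.
LEGACY_ALIASES = {
    "deepseek-chat-legacy": "deepseek-v4-flash",
}

def resolve_model_id(requested: str) -> str:
    """Map a deprecated alias to its V4 target, warning so callers migrate."""
    if requested in LEGACY_ALIASES:
        target = LEGACY_ALIASES[requested]
        warnings.warn(
            f"{requested} is deprecated; configure {target} directly",
            DeprecationWarning,
        )
        return target
    return requested
```

New integrations that pass V4 IDs directly flow through unchanged, while legacy callers keep working but get nudged to update.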
Does this site sell API access?
No. This is an information and comparison site; it keeps pricing and model comparison separate from any purchasable inventory or commercial claim.
How do the major model families compare?
They map to different production roles: DeepSeek for cost-first routing, GPT for a broad frontier baseline, Claude for enterprise review, Gemini for multimodal and Google-ecosystem workflows, and Grok for realtime and xAI workflows.