What Is Grok 4.3? xAI’s 1M-Context Model (May 2026)
Grok 4.3 is xAI’s flagship API model as of April 30, 2026 — a reasoning model with a 1-million-token context window, native video input, real-time X data access, and aggressive pricing that resets the cost model for agentic apps.
Last verified: May 11, 2026
Quick facts
| Property | Value |
|---|---|
| Vendor | xAI |
| Full API rollout | April 30, 2026 |
| OCI availability | ~May 8, 2026 |
| Context window | 1,000,000 tokens |
| Knowledge cutoff | December 2025 |
| Input price | $1.25 per 1M tokens |
| Output price | $2.50 per 1M tokens |
| Output token limit | None (unlimited) |
| Native video input | Yes |
| Real-time X data | Yes |
| Open weights | No |
| Model id (xAI) | grok-4.3 |
| Model id (OCI) | xai.grok-4.3 |
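A quick way to reason about that 1M-token window: estimate whether a corpus fits before sending it. The ~4 characters-per-token ratio below is a common rough heuristic for English text, not an xAI-published figure — use the model's real tokenizer for exact counts.

```python
# Rough check of whether a text corpus fits in Grok 4.3's 1M-token window.
# CHARS_PER_TOKEN is a generic English-text heuristic, not an xAI number.

CONTEXT_WINDOW = 1_000_000  # tokens, per the table above
CHARS_PER_TOKEN = 4         # rough heuristic; verify with a real tokenizer

def fits_in_context(texts: list[str], reserve_for_output: int = 0) -> bool:
    """Estimate whether the combined texts fit in the context window."""
    est_tokens = sum(len(t) for t in texts) // CHARS_PER_TOKEN
    return est_tokens + reserve_for_output <= CONTEXT_WINDOW

# A ~3.2 MB corpus is roughly 800K estimated tokens — inside the window.
print(fits_in_context(["x" * 3_200_000]))  # True
```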
What’s new vs Grok 4.20
Grok 4.3 is positioned as a major architectural and pricing upgrade over Grok 4.20.
1. 1M-token context (up from 256K). Long-context refactors, multi-document analysis, full-codebase audits — all become tractable on a single call.
2. ~40% input price cut. Input dropped to $1.25 per million tokens. Combined with the larger context, this materially changes the unit economics of long-context agentic apps.
3. Native video input. First time for the Grok family. The model can take video frames directly as input — useful for video understanding agents, multimodal investigations, and content analysis pipelines.
4. Better reasoning architecture. Improved performance on advanced logic, math, scientific analysis, and multi-step investigations. xAI describes Grok 4.3 as a reasoning model suited for “accuracy-critical tasks.”
5. Stronger agentic tool use. Instruction-following and tool calling are tighter — important for production agent loops.
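The pricing change is easy to make concrete. A minimal sketch of per-call cost at the list prices quoted above ($1.25 input / $2.50 output per million tokens):

```python
# Back-of-envelope cost of a single Grok 4.3 call at list pricing.

INPUT_PER_M = 1.25   # USD per 1M input tokens
OUTPUT_PER_M = 2.50  # USD per 1M output tokens

def call_cost(input_tokens: int, output_tokens: int) -> float:
    """USD cost of one call at list pricing."""
    return (input_tokens * INPUT_PER_M + output_tokens * OUTPUT_PER_M) / 1_000_000

# A full-context call: 1M tokens in, 8K tokens out.
print(round(call_cost(1_000_000, 8_000), 4))  # 1.27
```

So a maxed-out 1M-input call with a typical-length response costs on the order of a dollar, which is what makes whole-repo and multi-document workloads plausible to run routinely.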
Benchmarks
Grok 4.3 doesn’t lead the SWE-bench leaderboards (Claude Opus 4.7 and GPT-5.5 do), but it’s competitive across the board:
- Artificial Analysis Coding Index: 41.0 (better than 89% of compared models)
- Long-context retention past 128K: Among the top performers (alongside GPT-5.5)
- Reasoning benchmarks: Strong on math, logic, multi-step analysis
- Outperforms GPT-5.1 on private legal and financial benchmarks (per third-party evals)
The model isn’t trying to win SWE-bench — it’s trying to be a credible third frontier option at a fraction of the price.
Unique capabilities
Real-time X (Twitter) data. Grok 4.3 can pull live posts, trends, and replies from X as part of a query. For news monitoring, social listening, and current-events agents, this is genuinely differentiated — no other frontier model has this.
Native video input. Pass a video file and Grok 4.3 processes it directly. Useful for video understanding, surveillance analytics, content moderation pipelines, video QA.
Unlimited output tokens. No hard cap per response — useful for long-form generation, full-document refactors, full-codebase explanations.
1M-token context at the lowest frontier price. Combine 1M context with $1.25 input pricing and the cost of “load this whole repo and answer questions” drops to roughly a quarter of Opus 4.7’s input price (and a tenth of its output price).
Pricing in context
| Model | Input/1M | Output/1M |
|---|---|---|
| Grok 4.3 | $1.25 | $2.50 |
| DeepSeek V4-Pro | $1.74 | $3.48 |
| DeepSeek V4-Flash | $0.14 | $0.28 |
| Gemini 3.1 Pro | mid | mid |
| GPT-5.5 | mid | mid |
| Claude Opus 4.7 | $5 | $25 |
Among closed-weights frontier models, Grok 4.3 is the cheapest. The DeepSeek V4 variants undercut it on raw price, but they are open-weights models from a Chinese provider — a different procurement story.
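To make the table comparable, here is the same long-context call (1M tokens in, 8K out) priced across the models with concrete figures above. Gemini 3.1 Pro and GPT-5.5 are omitted because the table lists no exact prices for them.

```python
# Cost of one long-context call (1M input, 8K output tokens) per model,
# using the (input $/1M, output $/1M) list prices from the table above.

PRICES = {
    "Grok 4.3": (1.25, 2.50),
    "DeepSeek V4-Pro": (1.74, 3.48),
    "DeepSeek V4-Flash": (0.14, 0.28),
    "Claude Opus 4.7": (5.00, 25.00),
}

def cost(model: str, in_tok: int, out_tok: int) -> float:
    i, o = PRICES[model]
    return (in_tok * i + out_tok * o) / 1_000_000

for model in PRICES:
    print(f"{model}: ${cost(model, 1_000_000, 8_000):.2f}")
```

On these numbers, the same call costs about $1.27 on Grok 4.3 versus $5.20 on Opus 4.7; only DeepSeek V4-Flash, an open-weights model, comes in cheaper.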
Where to use it
1. xAI API directly — api.x.ai, model grok-4.3. Best for direct integration.
2. Oracle Cloud Infrastructure Generative AI — model xai.grok-4.3. Available about a week after public release (~May 8). Best for OCI customers and enterprise procurement.
3. Grok consumer apps on X — bundled in the Grok product on x.com and the X mobile apps. End-user facing.
Not currently available on Amazon Bedrock or Google Cloud Vertex AI.
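For the direct-API route, a minimal request body might look like the sketch below. It assumes the chat-completions-style schema xAI’s API has historically exposed at api.x.ai; the field names and endpoint path are illustrative assumptions, not verified against Grok 4.3’s docs.

```python
import json

# Illustrative request body for the xAI API (assumed chat-completions
# schema). Only the model id "grok-4.3" comes from this article; the
# rest of the shape is an assumption to be checked against xAI's docs.

payload = {
    "model": "grok-4.3",  # xAI model id from the table above
    "messages": [
        {"role": "system", "content": "You are a concise research assistant."},
        {"role": "user", "content": "Summarize today's top three trends on X."},
    ],
    "stream": False,
}

# POST this as JSON to the api.x.ai chat endpoint with an
# "Authorization: Bearer <XAI_API_KEY>" header (e.g. via requests or curl).
body = json.dumps(payload)
print(json.loads(body)["model"])  # grok-4.3
```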
When to pick Grok 4.3
Pick Grok 4.3 when:
- Real-time X data is part of the workflow.
- Native video input matters.
- 1M-token context at the lowest frontier price is the deciding factor.
- You want a credible third option besides Anthropic and OpenAI.
- Cost matters more than the absolute top SWE-bench score.
Don’t pick Grok 4.3 when:
- You need open weights or self-hosting (use DeepSeek V4-Pro).
- You need the top SWE-bench Verified score (use Claude Opus 4.7).
- Terminal-Bench performance is critical (use GPT-5.5).
- You need MCP-Atlas top performance (use Claude Opus 4.7).
What to watch next
- Grok 5 — rumored for later 2026.
- Bedrock / Vertex AI availability — would meaningfully expand procurement options.
- Independent benchmarks maturing for the 1M-context performance claim.
- Pricing wars — DeepSeek V4 ran a 75%-off promo through May 5; xAI may respond.
Related reading
- Grok 4.3 vs Claude Opus 4.7 vs GPT-5.5 coding
- Grok 4.3 vs DeepSeek V4-Pro pricing
- What is Grok 4.20
- Grok 4.20 vs Grok 5 — what we know
Last verified: May 11, 2026 — sources: xAI Grok 4.3 docs, Oracle Cloud Grok 4.3 docs, RoboRhythms release coverage, WinZheng analysis, DataStudios characteristics breakdown, ApiYi release notes.