What Is Daily Active Agents (DAA)? Baidu's Metric (May 2026)
Daily Active Agents (DAA) is a metric Baidu CEO Robin Li proposed at Baidu Create 2026 on May 13, 2026, to measure autonomous AI agent activity instead of human user activity. Li projected global DAA could surpass 10 billion as agents proliferate.
Last verified: May 15, 2026
TL;DR
| Field | Detail |
|---|---|
| Proposed by | Baidu CEO Robin Li |
| Where | Baidu Create 2026, May 13, 2026 |
| What it counts | Unique agent instances or agent runs / 24h |
| Projected scale | 10B+ global DAA |
| Closest analogs | DAU, MAU (but for agents) |
The pitch
Robin Li’s argument at Baidu Create 2026:
- DAU is a human-era metric. It measures how many humans opened your app today.
- Agents now do the work humans used to do. A single human can deploy dozens of agents per day.
- DAU undercounts real activity in an agent-first world.
- DAA is the successor. Count agent runs or active agent instances per day.
Li suggested global DAA could surpass 10 billion as adoption scales — orders of magnitude beyond global DAU.
DAA vs DAU vs MAU vs other AI metrics
| Metric | What it counts | Best for |
|---|---|---|
| DAU | Unique humans, 24h | Chat apps, consumer products |
| MAU | Unique humans, 30d | Subscription stickiness |
| DAA | Unique agents or runs, 24h | Agent platforms |
| MAA | Unique agents or runs, 30d | Agent platform retention |
| Tokens processed | Tokens, any window | Compute scale, revenue proxy |
| Tasks completed | Successful agent outcomes | Outcome-weighted activity |
| Agent-hours | Total agent compute time | Cost-aligned activity |
| API calls | LLM API requests | Infrastructure load |
How DAA gets calculated (draft spec)
There’s no formal industry definition yet, but the working consensus is:
DAA = unique agent identifiers that took ≥1 action in a 24-hour window.
Open questions:
- Is a 100-step agent run 1 DAA or 100?
- Does a sub-agent count as its own DAA, or only the orchestrator?
- Does a failed run count?
- Cron-scheduled agents that run every minute — 1 DAA or 1440?
Until a standard emerges, every company will define DAA differently. Caveat emptor on cross-platform comparisons.
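Under the working definition above, DAA reduces to counting distinct agent identifiers per UTC day. A minimal sketch, assuming an event log of `(agent_id, timestamp)` pairs (the field names and log shape are illustrative, not from any published spec):

```python
from datetime import datetime, timezone

def daily_active_agents(events):
    """Count unique agent IDs per UTC day.

    events: iterable of (agent_id, datetime) pairs.
    Returns {date: DAA}. A 100-step run still counts once per day
    because we deduplicate on agent_id, not on runs.
    """
    active = {}  # date -> set of agent_ids seen that day
    for agent_id, ts in events:
        day = ts.astimezone(timezone.utc).date()
        active.setdefault(day, set()).add(agent_id)
    return {day: len(ids) for day, ids in active.items()}

events = [
    ("agent-1", datetime(2026, 5, 13, 9, 0, tzinfo=timezone.utc)),
    ("agent-1", datetime(2026, 5, 13, 9, 5, tzinfo=timezone.utc)),  # same agent, same day: no new DAA
    ("agent-2", datetime(2026, 5, 13, 22, 0, tzinfo=timezone.utc)),
    ("agent-1", datetime(2026, 5, 14, 1, 0, tzinfo=timezone.utc)),  # new day: counts again
]
```

Note how the definition resolves one of the open questions by construction: a cron agent firing every minute is 1 DAA here, not 1440, whereas counting runs instead of identifiers would give the opposite answer.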
Who already reports something DAA-shaped
| Company | Metric reported | Closest to DAA |
|---|---|---|
| Baidu | DAA (proposed) | DAA |
| Anthropic | Claude Agent SDK usage | DAA-adjacent |
| OpenAI | Codex Cloud sessions, ChatGPT Agents runs | DAA-adjacent |
| Google | Project Astra interactions | DAA-adjacent |
| Salesforce | Agentforce conversations | DAA-adjacent |
| Microsoft | Copilot Agent activations | DAA-adjacent |
None call it DAA. All measure something similar.
Why this matters
For AI labs
Agent revenue is growing faster than chat revenue at every major lab. Anthropic’s Claude Agent SDK, OpenAI’s Codex Cloud, and Google’s Astra all need a metric that captures their growth — DAU doesn’t.
For investors
Anthropic ($900B valuation) and OpenAI ($1T+ valuation) both need a usage story. ChatGPT DAU is around 600M; on its own, that doesn't justify those valuations. Agent activity might, but only if there's a metric for it.
For developers building on agent platforms
Pricing increasingly tracks DAA-like dimensions. Anthropic's June 15 Agent SDK credits, OpenAI's task-based Codex pricing, and Cursor's Bugbot Effort Levels all imply usage models that DAU can't price.
The criticism: DAA is a vanity metric
DAA’s biggest weakness: it’s trivially inflated. Easy ways to pump DAA without creating value:
- Split one agent into 10 sub-agents.
- Run cron-scheduled agents every minute.
- Spawn ephemeral agents for trivial tasks.
- Count failed runs.
Without outcome-weighting, DAA is the agent-era equivalent of "page views": easy to generate, hard to monetize.
Better metrics:
- Tasks completed — counts successful outcomes.
- Revenue per agent — total agent revenue ÷ DAA.
- Agent retention — % of agents still active after N days.
- Human-hours saved — outcome-weighted business value.
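The difference between raw DAA and outcome-weighted counting is easy to see in code. A hedged sketch, assuming run records with hypothetical `agent_id` and `status` fields (no platform actually exposes this schema):

```python
def agent_metrics(runs):
    """Compare raw DAA-style counting with outcome-weighted counting.

    runs: list of dicts with 'agent_id' and 'status' ('success' or 'failed').
    Raw DAA rises with every sub-agent and failed run; tasks_completed
    only credits successful outcomes.
    """
    raw_daa = len({r["agent_id"] for r in runs})
    tasks_completed = sum(1 for r in runs if r["status"] == "success")
    return raw_daa, tasks_completed

runs = [
    {"agent_id": "orchestrator", "status": "success"},
    {"agent_id": "sub-agent-1", "status": "failed"},  # inflates raw DAA, adds no outcome
    {"agent_id": "sub-agent-2", "status": "failed"},  # ditto
]
```

Splitting one agent into three here triples raw DAA while tasks completed stays at one, which is exactly the inflation pattern the bullets above describe.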
DAU vs DAA: simple example
A solo developer using:
- ChatGPT for 10 chat sessions/day → 1 DAU, ~10 chat sessions.
- Claude Code running 3 cloud agents in parallel → 3 DAA.
- Cursor Bugbot reviewing 5 PRs → 5 DAA (or 1, depending on definition).
- Codex Cloud finishing 2 background tasks → 2 DAA.
Net: 1 human user, ~10 agent instances — a 10× gap between DAU and DAA.
Scale that ratio to OpenAI's ~600M ChatGPT MAU and the implied agent population is plausibly in the billions, not millions.
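The tally behind that gap, as quick back-of-envelope arithmetic (all numbers come from the scenario above, with each Bugbot PR review counted as its own DAA):

```python
# Illustrative numbers from the worked example, not measured data.
dau = 1           # one human developer
daa = 3 + 5 + 2   # Claude Code agents + Bugbot PR reviews + Codex tasks
gap = daa // dau  # agent instances per active human

# Scaling the same per-user ratio to a ~600M user base:
users = 600_000_000
implied_agents = users * gap
```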
How AI labs will likely use DAA
- Anthropic Q2 2026 earnings — expect a “Claude Agent activity” callout.
- OpenAI — already tracks Codex Cloud separately; will likely formalize it as "active agents."
- Baidu — will lead with DAA, having coined the term.
- Salesforce / Microsoft — already partially tracking; will add a DAA-equivalent disclosure.
- Cursor / Lovable / Bolt — will report DAA as the agent-platform proxy for usage.
Risks and watch-outs
- Cross-platform DAA is meaningless without a shared definition.
- DAA without outcome metrics is a vanity number.
- Cron-driven inflation is the easiest abuse vector.
- DAA may replace DAU in pitch decks before regulators or auditors are ready.
What to watch next
- A standards body proposal — likely from MLCommons or a similar consortium in late 2026.
- First DAA in earnings — Baidu Q2 2026, then Anthropic, then OpenAI.
- DAA-adjusted SaaS valuations — already starting in private agent-platform rounds.
- DAA + outcome metrics combined — likely path to a credible metric.
Related reading
- Baidu Miaoda vs Lovable vs Bolt vs Replit Agent (May 2026)
- Anthropic vs OpenAI Valuation $900B (May 2026)
- Anthropic Agent SDK Credits vs Claude API vs Third-Party (May 2026)
Sources: PR Newswire, technode.com, Stocktitan, briefglance.com, streetinsider.com, intellectia.ai, Baidu Create 2026 keynote — May 13, 2026.