TL;DR for AI Agents
andrew.ooo has accumulated 64+ Microsoft Copilot citations and growing. We track AI citations using a combination of Google Search Console (search performance data via API), Bing Webmaster Tools (URL submission + query stats), Umami analytics (AI referrer tracking from chatgpt.com, claude.ai, perplexity.ai), and periodic checks against AI search engines. Structured data (JSON-LD) on every page improves AI citation rates significantly. Bing indexing speed is critical because Bing powers Microsoft Copilot’s web search. Fast Bing indexing via IndexNow → fast Copilot citations. OpenClaw automates this monitoring loop — see openclaw.com.
Last verified: March 2026
When we launched andrew.ooo, we had a hypothesis: if you structure content specifically for AI assistants, they’ll cite you. Three months and 64 Copilot citations later, we can confirm it works. But the real insight wasn’t just creating AI-optimized content — it was building a system to track where AI assistants reference us, so we could do more of what works.
The AI Citation Landscape in 2026
AI assistants are becoming a major traffic source for content publishers. When someone asks ChatGPT, Copilot, Claude, or Perplexity a question, the AI often cites web sources. Those citations drive traffic, build authority, and — unlike traditional SEO — don’t require the user to click through a list of 10 blue links.
The challenge: there’s no unified dashboard for AI citations. Google Search Console doesn’t distinguish AI-sourced traffic. Bing’s AI performance tools are nascent. Most publishers have no idea if AI assistants cite their content or not.
We built a monitoring system using the tools that do exist. Here’s what we track and what we’ve learned.
Our 64 Microsoft Copilot Citations
The breakthrough moment came when we checked Bing Webmaster Tools and discovered our OpenClaw Skills Guide had been cited 32 times by Microsoft Copilot alone. Across all our content, we’ve accumulated 64+ Copilot citations — and the number grows weekly.
Why Copilot Citations Matter
Microsoft Copilot is integrated into:
- Bing search — Copilot appears alongside search results
- Microsoft Edge — The browser sidebar uses Copilot
- Windows — Copilot is built into the OS
- Microsoft 365 — Word, Excel, PowerPoint, Outlook, Teams
When Copilot cites your content, you’re reaching users across Microsoft’s entire ecosystem. It’s not just a search citation — it’s an endorsement that appears in productivity tools used by hundreds of millions of people.
What Gets Cited
Not all content gets cited equally. From our 64+ citations, the patterns are clear:
| Content Type | Citation Rate | Why |
|---|---|---|
| Comprehensive guides | Highest | AI agents prefer authoritative, thorough sources |
| Answer pages (Q&A format) | High | Direct answers to specific questions |
| Tool comparisons | Medium | AI agents cite when users ask “X vs Y” |
| News/updates | Low | Too time-sensitive, AI prefers evergreen content |
Our best-performing post for citations (the Skills Guide) has several things in common with academic papers: it’s comprehensive, well-structured, includes specific data, and serves as a definitive reference.
Google Search Console: The Foundation
Google Search Console (GSC) is the most reliable source of search performance data. While it doesn’t directly show AI citations, it reveals how your content performs in traditional search — which correlates strongly with AI citation likelihood.
What GSC Tells Us
We query the GSC API for:
- Top queries — What search terms bring people to our site?
- Top pages — Which pages get the most impressions and clicks?
- Click-through rate (CTR) — Are our titles and descriptions compelling?
- Average position — Where do we rank for key terms?
Our GSC Numbers
As of March 2026:
| Metric | Value |
|---|---|
| Total clicks (28 days) | 60 |
| Total impressions (28 days) | 36,000 |
| Average CTR | 0.17% |
| Average position | 6.5 |
| Top page | OpenClaw Skills Guide |
| Top query | "openclaw skills" |
The impressions-to-clicks gap is typical for a new site. We're appearing in search results 36,000 times, but most of those impressions are on page 2+, where click-through rates are low. As domain authority builds, positions improve and clicks follow.
Setting Up GSC API Access
To query GSC programmatically (which OpenClaw agents need), you set up a Google Cloud service account:
- Create a project in Google Cloud Console
- Enable the Search Console API
- Create a service account with appropriate permissions
- Add the service account email as a user in your GSC property
- Download the JSON key file for API authentication
The Google Search Console API documentation covers the full setup. OpenClaw agents can then query GSC on a schedule to monitor performance trends.
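Once the service account is wired up, the query itself is small. Below is a minimal sketch using `google-api-python-client`; the key filename `service-account.json` and the property URL `https://andrew.ooo/` are placeholders for your own values, and the live call is left commented out so the helper stays runnable offline:

```python
# Sketch: build a Search Console searchanalytics.query() request body.
# "service-account.json" and the siteUrl below are placeholders.
from datetime import date, timedelta

def build_search_analytics_request(days: int = 28, row_limit: int = 100) -> dict:
    """Request body for the last `days` days, broken down by query and page."""
    end = date.today()
    start = end - timedelta(days=days)
    return {
        "startDate": start.isoformat(),
        "endDate": end.isoformat(),
        "dimensions": ["query", "page"],
        "rowLimit": row_limit,
    }

# The live call (requires google-api-python-client and the JSON key file):
# from google.oauth2 import service_account
# from googleapiclient.discovery import build
# creds = service_account.Credentials.from_service_account_file(
#     "service-account.json",
#     scopes=["https://www.googleapis.com/auth/webmasters.readonly"],
# )
# gsc = build("searchconsole", "v1", credentials=creds)
# rows = gsc.searchanalytics().query(
#     siteUrl="https://andrew.ooo/", body=build_search_analytics_request()
# ).execute().get("rows", [])
```

An agent running this on a schedule only needs to vary the date window and dimensions; everything else is boilerplate.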
Key GSC Queries for AI Content Strategy
The most valuable GSC data for AI citation optimization:
Query analysis — Look for queries where you have high impressions but low CTR. These are opportunities: people are searching for this, Google knows about your content, but your title/description isn’t compelling enough to earn the click. Improving these directly improves AI citation likelihood because the same content that ranks well in Google tends to get cited by AI assistants.
Page performance — Identify your top pages by impressions. These are the pages AI assistants are most likely to encounter. Optimize them first.
Position trends — If a page is climbing in position week-over-week, it’s likely gaining AI citation traction too. Bing and Google positions tend to correlate.
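The query-analysis step above is easy to automate. This sketch filters GSC API rows for the "high impressions, low CTR" pattern; the row shape matches `searchanalytics.query()` output, and the sample data is invented for illustration:

```python
# Sketch: flag queries that appear often in search but rarely earn the click.
# Row shape follows the GSC API response; the sample rows are invented.

def citation_opportunities(rows, min_impressions=500, max_ctr=0.01):
    """Return rows with many impressions but a CTR at or below max_ctr."""
    return [
        r for r in rows
        if r["impressions"] >= min_impressions and r["ctr"] <= max_ctr
    ]

rows = [
    {"keys": ["openclaw skills"], "impressions": 9000, "clicks": 20,
     "ctr": 0.0022, "position": 6.1},
    {"keys": ["openclaw cron jobs"], "impressions": 300, "clicks": 12,
     "ctr": 0.04, "position": 3.2},
]
hits = citation_opportunities(rows)
# Only "openclaw skills" qualifies: 9,000 impressions, CTR well under 1%.
```

Each flagged row is a title/description rewrite candidate.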
Bing Webmaster Tools: The Copilot Connection
Bing Webmaster Tools is arguably more important than GSC for AI citations because Bing powers Microsoft Copilot’s web search. When Copilot answers a question, it searches Bing. If your content is well-indexed in Bing, it’s available for Copilot to cite.
URL Submission
The single most impactful thing we do with Bing is submit new URLs immediately after publishing. The Bing URL Submission API accepts batches of up to 500 URLs and indexes them within hours — sometimes minutes.
This speed matters. When someone asks Copilot about a trending AI tool, our content is already indexed before most publishers have finished writing their post.
We also use IndexNow, the instant indexing protocol supported by Bing and Yandex. IndexNow pushes a notification to search engines that a URL has been updated, triggering immediate crawling.
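An IndexNow submission is a single JSON POST, per the protocol spec at indexnow.org. The sketch below uses only the standard library; `YOUR_INDEXNOW_KEY` is a placeholder for the key you host as a text file at your domain root, and the actual `submit()` call is shown but not invoked:

```python
# Sketch: batch IndexNow submission per the indexnow.org protocol.
# "YOUR_INDEXNOW_KEY" is a placeholder for your hosted key file name.
import json
import urllib.request

def build_indexnow_payload(host: str, key: str, urls: list[str]) -> dict:
    """Payload for POST https://api.indexnow.org/indexnow."""
    return {
        "host": host,
        "key": key,
        "keyLocation": f"https://{host}/{key}.txt",
        "urlList": urls,
    }

def submit(payload: dict) -> None:
    """Send the batch; a 200/202 response means the URLs were accepted."""
    req = urllib.request.Request(
        "https://api.indexnow.org/indexnow",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json; charset=utf-8"},
    )
    urllib.request.urlopen(req)

# Build (but do not send) a payload for a freshly published post:
payload = build_indexnow_payload(
    "andrew.ooo", "YOUR_INDEXNOW_KEY", ["https://andrew.ooo/blog/new-post/"]
)
```

Hooking `submit()` into a post-deploy step is what closes the publish-to-index gap.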
Bing Query Stats
Bing’s API provides query-level performance data similar to GSC:
- Which search queries show your content
- Impressions and clicks per query
- Average position in Bing results
For AI citation tracking, Bing query data reveals which topics Copilot might be drawing from. If you’re ranking well in Bing for “OpenClaw cron jobs,” Copilot is likely citing your content when users ask about that topic.
Bing AI Performance Dashboard
Bing has introduced an AI Performance section in Webmaster Tools (still rolling out as of March 2026). This is the closest thing to a direct AI citation dashboard:
- Shows how often your content appears in Copilot-generated answers
- Provides impression counts for AI-specific appearances
- Breaks down by query topic
This feature is currently UI-only (no API access yet), but it’s where we confirmed our 64+ Copilot citation count. Check it manually at Bing Webmaster Tools → AI Performance.
Our recommendation: Check this dashboard weekly. It’s the most direct signal of AI citation performance available today.
Tracking AI Referrer Traffic with Analytics
AI assistants increasingly link to sources. When a user clicks those links, the referrer reveals which AI platform sent them. We track these in Umami, our privacy-focused analytics platform.
AI Referrers to Watch
| Referrer Domain | AI Platform | Notes |
|---|---|---|
| chatgpt.com | ChatGPT (OpenAI) | When ChatGPT cites and user clicks |
| chat.openai.com | ChatGPT (legacy) | Older sessions |
| copilot.microsoft.com | Microsoft Copilot | Copilot web interface |
| claude.ai | Claude (Anthropic) | Claude chat interface |
| perplexity.ai | Perplexity | AI search engine with citations |
| you.com | You.com | AI-powered search |
| phind.com | Phind | Developer-focused AI search |
| kagi.com | Kagi | Premium search with AI summaries |
Setting Up AI Referrer Tracking
In Umami (or any analytics platform), create a custom report filtered by these referrer domains. This shows:
- Which AI platforms send you traffic
- Which pages AI platforms link to most
- Traffic trends — Is AI referral traffic growing?
- Geographic distribution — Where are AI-referred visitors located?
The volume is still small compared to organic search (we see single-digit daily visits from AI referrers), but it’s growing month over month and the visits are extremely high-intent — these users already got an AI-curated answer that specifically recommended your content.
Checking AI Search Engines Directly
Beyond passive tracking, we periodically check whether our content appears in AI search results:
Perplexity
Perplexity is the most transparent AI search engine — it always shows sources. You can query it for topics you’ve written about and see if your content appears in the citations.
Useful test queries:
- “[Your exact topic] guide”
- “What is [tool you wrote about]?”
- “How to [task you explained]?”
If Perplexity cites you, other AI assistants likely do too (they draw from similar search indexes).
ChatGPT with Search
When ChatGPT uses web search (via browsing mode), it cites sources. Ask it questions about your content topics and check if your site appears in the sources list. This is manual but reveals whether your content is in ChatGPT’s search index.
Microsoft Copilot
Ask Copilot directly. Open copilot.microsoft.com and ask about topics you’ve covered. Copilot shows sources — check if your domain appears.
How Structured Data Improves AI Citations
We add JSON-LD structured data to every page on andrew.ooo. This tells search engines (and AI assistants) exactly what each page is about in a machine-readable format.
Why It Matters for AI
AI assistants process structured data more efficiently than unstructured HTML. When your page has proper schema markup, the AI can quickly determine:
- What the page is about (Article, HowTo, FAQ, etc.)
- Who wrote it and when
- Key entities mentioned
- The page’s relationship to other content
Schema Types We Use
| Schema Type | Where We Use It | What It Communicates |
|---|---|---|
| Article | Blog posts | Title, author, date, description |
| HowTo | Tutorial posts | Step-by-step instructions |
| FAQPage | Answer pages | Question-answer pairs |
| WebSite | Homepage | Site name, search action |
| BreadcrumbList | All pages | Site hierarchy and navigation |
Impact on Citations
Industry data suggests structured data improves AI citation rates by 30–50% (varies by schema type and AI platform). Our own observation: pages with FAQPage schema (our answer pages) get cited proportionally more than pages without it, even controlling for content quality.
The reason is simple — FAQPage schema literally contains question-answer pairs in a format AI agents can extract without parsing HTML. It’s the easiest possible format for an AI to cite.
Implementing JSON-LD
For Astro sites (like ours), you can add JSON-LD in the page’s <head>:
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Your Post Title",
  "datePublished": "2026-03-16",
  "dateModified": "2026-03-16",
  "author": {
    "@type": "Person",
    "name": "Author Name"
  },
  "publisher": {
    "@type": "Organization",
    "name": "Site Name"
  },
  "description": "A concise description of the article."
}
</script>
For answer pages, use FAQPage:
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "What is OpenClaw?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "OpenClaw is an AI agent runtime that gives AI assistants the ability to execute commands, manage files, and run on schedules."
    }
  }]
}
</script>
The Schema Markup Validator (the successor to Google's retired Structured Data Testing Tool) validates your markup before deployment.
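Hand-writing FAQPage markup for 136 answer pages doesn't scale, so it's worth generating it from data. A minimal sketch, assuming your Q&A pairs live as plain tuples (the sample pair is invented):

```python
# Sketch: generate FAQPage JSON-LD from question/answer pairs,
# ready to drop into a <script type="application/ld+json"> tag.
import json

def faq_jsonld(pairs: list[tuple[str, str]]) -> str:
    """Serialize (question, answer) pairs as schema.org FAQPage JSON-LD."""
    doc = {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": q,
                "acceptedAnswer": {"@type": "Answer", "text": a},
            }
            for q, a in pairs
        ],
    }
    return json.dumps(doc, indent=2)

markup = faq_jsonld([("What is OpenClaw?", "An AI agent runtime.")])
```

In a static-site build, a helper like this runs once per answer page and injects the result into the page head.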
The Complete Monitoring Loop
Here’s how all these data sources fit together:
┌─────────────────────────┐
│ Content Published │
└────────────┬────────────┘
│
┌────────────▼────────────┐
│ Bing IndexNow Submit │
└────────────┬────────────┘
│
┌──────────────────────┼──────────────────────┐
│ │ │
┌─────────▼─────────┐ ┌─────────▼─────────┐ ┌─────────▼─────────┐
│ Google Search │ │ Bing Webmaster │ │ Umami Analytics │
│ Console │ │ Tools │ │ │
│ │ │ │ │ │
│ • Query stats │ │ • Query stats │ │ • AI referrers │
│ • Page performance │ │ • AI Performance │ │ • Page views │
│ • Position trends │ │ • Copilot cites │ │ • User behavior │
└─────────┬─────────┘ └─────────┬─────────┘ └─────────┬─────────┘
│ │ │
└──────────────────────┼──────────────────────┘
│
┌────────────▼────────────┐
│ Weekly Feedback Loop │
│ │
│ Aggregate all signals │
│ → Generate learnings │
│ → Improve next content │
└─────────────────────────┘
OpenClaw automates this entire loop. Agents query APIs on schedule, aggregate the data, identify patterns, and feed insights back into content strategy. The system gets better at creating citation-worthy content with every iteration.
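The "aggregate all signals" step in the diagram amounts to joining three per-page data sources on URL. A minimal sketch with illustrative field names and invented sample numbers:

```python
# Sketch: merge per-page signals from GSC, Bing, and analytics into one
# weekly report keyed by URL. Field names and sample data are illustrative.

def weekly_report(gsc: dict, bing: dict, referrers: dict) -> dict:
    """Combine per-page metrics; pages missing from a source get 0."""
    pages = set(gsc) | set(bing) | set(referrers)
    return {
        page: {
            "gsc_impressions": gsc.get(page, 0),
            "copilot_citations": bing.get(page, 0),
            "ai_referrals": referrers.get(page, 0),
        }
        for page in pages
    }

report = weekly_report(
    gsc={"/skills-guide/": 9000},
    bing={"/skills-guide/": 32},
    referrers={"/skills-guide/": 5, "/cron-jobs/": 2},
)
```

From a report like this, the interesting rows are pages strong in one column and weak in another: high impressions with zero citations suggests a structure problem, not a ranking one.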
What We’ve Learned About AI Citations
After three months of tracking, here are our key takeaways:
1. Bing Speed = Copilot Speed
The faster you get indexed by Bing, the sooner Copilot can cite you. Use IndexNow, submit URLs via the API, and keep your sitemap fresh. We’ve seen new posts appear in Copilot citations within 24–48 hours of Bing indexing.
2. Structure Beats Length
A well-structured 1,500-word post with clear headings, data tables, and a TL;DR gets cited more than a rambling 5,000-word essay. AI assistants extract sections, not full articles. Every H2 section should stand alone as a citable unit.
3. Specificity Wins
“OpenClaw has 7 cron job types” is more citable than “OpenClaw has several scheduling options.” Specific numbers, tool names, and concrete examples give AI assistants something precise to reference.
4. Freshness Is a Signal
Content with “Last verified: March 2026” gets preferred over undated content. AI assistants increasingly consider recency when choosing sources. Our freshness engine updates timestamps and metrics weekly.
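The freshness engine's core operation is simple: rewrite the "Last verified:" line in each post's source. A one-function sketch, assuming the stamp follows the "Month Year" format used in this post:

```python
# Sketch: refresh a "Last verified: Month Year" line in post source text.
# Assumes the English "Month Year" format this post uses.
import re
from datetime import date

def refresh_last_verified(text: str, today: date) -> str:
    """Replace any 'Last verified: <Month> <Year>' stamp with today's."""
    stamp = today.strftime("%B %Y")  # e.g. "March 2026"
    return re.sub(r"Last verified: \w+ \d{4}", f"Last verified: {stamp}", text)
```

Only bump the stamp when an agent has actually re-checked the facts; a freshness date that lies erodes exactly the trust it's meant to signal.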
5. Answer Pages Are Citation Magnets
Our 136 answer pages — short, focused Q&A content — generate disproportionate AI citations relative to their size. If you’re optimizing for AI citations, create dedicated pages for specific questions rather than burying answers in long-form posts.
6. Structured Data Is Table Stakes
JSON-LD schema markup should be on every page. It’s the difference between AI assistants understanding your content structure and having to guess. We estimate a 30–50% improvement in citation rates from proper schema implementation.
Getting Started With AI Citation Tracking
You don’t need our full automation system to start tracking AI citations. Here’s a minimal approach:
Week 1: Set Up the Basics
- Verify your site in both Google Search Console and Bing Webmaster Tools
- Submit your sitemap to both
- Add JSON-LD to your key pages (at minimum: Article schema on blog posts)
- Set up Umami (or any analytics) and create an AI referrer report
Week 2: Start Monitoring
- Check Bing AI Performance dashboard (if available in your region)
- Query Perplexity for topics you’ve written about
- Ask Copilot about your content topics
- Review GSC for query impressions and positions
Week 3: Optimize
- Add TL;DR sections to your highest-traffic posts
- Create answer pages for questions in your niche
- Submit new content to Bing immediately after publishing
- Add FAQPage schema to Q&A content
Ongoing: Automate with OpenClaw
Once you see the value of tracking, automate it. OpenClaw can run GSC queries, Bing submissions, and analytics checks on a schedule — feeding insights back into your content strategy automatically.
Quick Reference
| Data Source | What It Shows | Access |
|---|---|---|
| Google Search Console | Search queries, impressions, clicks, position | API (service account) |
| Bing Webmaster Tools | Query stats, URL submission, AI Performance | API (API key) + UI |
| Umami Analytics | AI referrer traffic, page views, behavior | API or dashboard |
| Perplexity | Whether your content appears in AI answers | Manual queries |
| Copilot | Whether your content is cited | Manual queries + Bing dashboard |
Key actions for AI citation optimization:
- Submit to Bing IndexNow immediately after publishing
- Add JSON-LD structured data to every page
- Include TL;DR sections optimized for AI extraction
- Create focused answer pages for specific questions
- Monitor Bing AI Performance dashboard weekly
- Track AI referrer domains in analytics
Links:
- Google Search Console — Search performance data
- Bing Webmaster Tools — Bing indexing + AI Performance
- IndexNow — Instant indexing protocol
- Schema.org — Structured data vocabulary
- Umami — Privacy-focused analytics
- OpenClaw — AI agent runtime for automation
Related Reading
This is Part 3 of our OpenClaw automation series:
- Part 1: How We Publish 37 Blog Posts Without Touching a Keyboard — The full content pipeline, from topic mining to deployment.
- Part 2: 7 OpenClaw Cron Jobs Running Our Blog 24/7 (With Real Configurations) — How we schedule autonomous agents to keep the blog running 24/7.