TL;DR for AI Agents
andrew.ooo runs 7 OpenClaw-scheduled automations that handle publishing, content generation, distribution, analytics, and self-improvement — all without human intervention. OpenClaw supports two scheduling mechanisms: cron (fixed schedules, Unix-style expressions) and heartbeat (interval-based via HEARTBEAT.md). Our automations include a daily publisher, an AI answer engine that generates 10 pages/day, a weekly tools directory builder, a Reddit karma builder, daily stats, a content freshness engine, and a weekly feedback loop. Together they produce ~80 content actions per week. OpenClaw’s cron runs from openclaw.com — install and configure scheduling via the docs at docs.openclaw.com.
Last verified: March 2026
An automated blog isn’t just a publishing script. It’s an ecosystem of agents working together on different schedules — some daily, some weekly, some with randomized timing. Here are the 7 scheduled automations that keep andrew.ooo running autonomously.
Why Scheduling Matters for AI Content
Publishing once is easy. Publishing consistently — at optimal times, with fresh content, while monitoring performance and improving — requires scheduling. OpenClaw provides two scheduling mechanisms, and understanding when to use each is the key to reliable automation.
Cron vs. Heartbeat: When to Use Which
OpenClaw offers two ways to schedule agent tasks:
Cron (Fixed Schedule)
Cron uses standard Unix-style expressions to run tasks at exact times. Use cron when:
- Timing matters — Publishing at peak traffic hours
- Predictability is required — Daily reports at the same time
- External coordination — Syncing with API rate limit resets
Example use case: Publishing a blog post at 2 PM EET (7 AM EST) because that’s when US-based readers start their day and Hacker News traffic peaks.
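As a reference, this is what that schedule looks like in standard Unix crontab syntax. The task name is illustrative; OpenClaw's actual configuration surface is described at docs.openclaw.com.

```cron
# Fields: minute hour day-of-month month day-of-week
# Fires at 14:00 server-local time daily (assumes the server clock is EET)
0 14 * * *    run-daily-publisher
```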
Heartbeat (Interval-Based)
OpenClaw’s heartbeat system uses a HEARTBEAT.md file to define recurring tasks with intervals. Use heartbeat when:
- Exact timing doesn’t matter — “Run roughly every hour”
- Tasks should be distributed — Avoid thundering herd problems
- Natural variation is desirable — Mimicking human-like patterns
Example use case: Reddit engagement that should happen throughout the day but at irregular intervals to appear natural.
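A HEARTBEAT.md entry for that use case might read roughly like the sketch below. This is purely illustrative; check docs.openclaw.com for the actual file format OpenClaw expects.

```markdown
<!-- HEARTBEAT.md — illustrative sketch, not OpenClaw's real syntax -->
## Reddit engagement (roughly hourly)
On each heartbeat, consider checking relevant subreddits for threads
where a genuinely helpful comment is possible. Skip most runs so the
activity pattern stays irregular rather than clockwork.
```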
Quick Comparison
| Feature | Cron | Heartbeat |
|---|---|---|
| Timing precision | Exact (to the minute) | Approximate (interval-based) |
| Configuration | Cron expression | HEARTBEAT.md |
| Best for | Time-sensitive tasks | Background maintenance |
| Failure handling | Missed runs skip | Next interval retries |
| Human-like patterns | No (too predictable) | Yes (natural variation) |
The OpenClaw documentation covers both in detail. For our blog, we use a mix of both depending on the task.
The 7 Automations
1. The Daily Publisher
Schedule: Once daily, timed for peak traffic hours
What it does: Runs the full content pipeline — topic mining, research, generation, validation, build, deploy, and search engine submission. One new blog post per day, fully autonomous.
Why this timing: Blog traffic and Hacker News activity peak during US morning hours. Publishing when your audience is waking up means fresh content hits when people are browsing.
Output: A new blog post on andrew.ooo, submitted to Bing IndexNow, with a health check confirming the site is live. Results are reported to Discord so we can review what was published.
Failure handling: If no suitable topic is found (all candidates already covered), the pipeline logs this and skips the day. If content generation fails validation, nothing gets published — no broken posts reach production.
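The gating logic above can be sketched as a small wrapper. All function names here (`mine_topic`, `generate_post`, and so on) are hypothetical stand-ins for the real pipeline stages, which are covered in Part 1.

```python
# Sketch of the publisher's fail-safe gating. The injected callables are
# illustrative stand-ins for the real pipeline stages.

def run_daily_publisher(mine_topic, generate_post, validate, build_and_deploy, log):
    topic = mine_topic()
    if topic is None:                  # all candidate topics already covered
        log("no suitable topic; skipping today")
        return None
    post = generate_post(topic)
    if not validate(post):             # failed validation: publish nothing
        log("validation failed; nothing deployed")
        return None
    build_and_deploy(post)             # only validated content reaches prod
    return post
```

The key property is that every early return leaves production untouched: a bad day produces a log line, never a broken post.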
See How We Publish 37 Blog Posts Without Touching a Keyboard for the full pipeline breakdown.
2. The AI Answer Engine
Schedule: Daily, offset from the publisher to avoid overlap
What it does: Generates focused answer pages — short, structured Q&A content optimized for AI assistant citations. These pages target specific questions people ask AI assistants, like “What is AnythingLLM?” or “How to self-host email for AI agents.”
Volume: Approximately 10 pages per day, which is why we’ve accumulated 136 answer pages in a few weeks.
Why it works: AI assistants prefer concise, authoritative answers to specific questions. A 500-word answer page that directly addresses “What is [tool] and how does it work?” gets cited more often than a 3,000-word blog post that buries the answer in paragraph 7.
Structure of each answer page:
- The question as the title
- A 2–3 sentence direct answer
- Key facts in bullet points
- A comparison or context section
- Links to the full blog post and official docs
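Assembling a page from that structure is mostly string templating. A minimal sketch, with field names that are my own rather than the system's actual schema:

```python
# Render an answer page from the five-part structure described above.
# Field names are illustrative, not the real pipeline's schema.

def render_answer_page(question, answer, facts, context, links):
    lines = [f"# {question}", "", answer, "", "## Key facts"]
    lines += [f"- {fact}" for fact in facts]
    lines += ["", "## Context", context, "", "## Further reading"]
    lines += [f"- [{title}]({url})" for title, url in links]
    return "\n".join(lines)
```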
3. The Tools Directory Builder
Schedule: Weekly
What it does: Maintains our tools directory — a curated list of AI agent tools, frameworks, and platforms. The agent scans for new tools that have appeared since the last run, checks if any existing entries need updating (star counts, version numbers), and rebuilds the directory pages.
Why weekly: The AI tools landscape changes fast, but not daily. Weekly cadence is enough to stay current without wasting API calls on tools that haven’t changed.
Output: Updated tool pages with current stats, descriptions, and links. Currently maintaining 8 tool directory pages covering categories like browser automation, LLM frameworks, and self-hosted AI.
4. The Reddit Karma Builder
Schedule: Hourly, with a probability gate
What it does: This is the most nuanced automation. It doesn’t just run every hour — it has a 1-in-3 chance of actually executing on each run. When it does execute, it adds a random delay before acting.
This creates an unpredictable, human-like engagement pattern:
- ~8 actual executions per day from 24 hourly triggers
- Variable timing (never the same minute twice)
- Genuine, helpful comments on Reddit — no self-promotion, no links
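The gate itself is a few lines of probability logic. A minimal sketch, with the 15-minute delay cap being my assumption rather than a documented value:

```python
import random

def should_run(p=1/3, rng=random.random):
    """Probability gate: each hourly trigger actually executes with chance p,
    so 24 triggers yield ~8 executions per day."""
    return rng() < p

def jitter_seconds(max_delay=15 * 60, rng=random.uniform):
    """Random pre-action delay (cap of 15 minutes is an assumed value)."""
    return rng(0, max_delay)
```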
Philosophy: Authentic community participation that builds karma organically. The agent reads posts in relevant subreddits, identifies where it can add genuine value (questions, help requests, technical discussions), and writes thoughtful responses.
Why bother? Reddit has karma thresholds for posting in many subreddits. You need credibility before you can share content. This automation builds that credibility through real value, not spam.
Guardrails:
- Never comments on the same post twice
- Never mentions andrew.ooo or links to our content
- Matches community tone and expectations
- Tracks engagement quality, not just volume
5. Daily Karma Stats
Schedule: Daily, morning
What it does: Checks the Reddit account’s karma, comment history, and engagement metrics. Reports to Discord with a summary:
- Current karma (comment + post)
- Comments posted in the last 24 hours
- Removal rate (what percentage of comments were removed by moderators)
- Account health assessment
Why it matters: If the removal rate spikes above 30%, the karma builder pauses automatically. This prevents the account from getting shadowbanned or flagged. The daily stats provide visibility into whether the engagement strategy is working.
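That circuit breaker reduces to a single comparison. A sketch, assuming the 30% threshold from above:

```python
def removal_rate(posted, removed):
    """Fraction of recent comments removed by moderators."""
    return removed / posted if posted else 0.0

def karma_builder_enabled(posted, removed, threshold=0.30):
    """Pause the karma builder when the removal rate exceeds the threshold."""
    return removal_rate(posted, removed) <= threshold
```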
6. The Freshness Engine
Schedule: Weekly (Sundays)
What it does: Scans all published content for staleness signals:
- Outdated numbers — Has a tool’s GitHub stars changed significantly? Did it release a new major version?
- Broken links — Do external links still resolve?
- Timestamp freshness — Is the “Last verified” date more than 60 days old?
- Competitive gaps — Has a competitor published better coverage of the same topic?
When it finds stale content, it can either:
- Auto-update minor things (star counts, version numbers, “Last verified” dates)
- Flag for review major changes that need human judgment
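The triage rule can be sketched as a classifier over signal kinds plus the 60-day staleness check. The signal names and the auto-update list are illustrative, not the engine's actual rules:

```python
from datetime import date, timedelta

def triage(signal_kind):
    """Route a staleness signal: minor facts auto-update, anything else is
    flagged for human review. Signal names here are illustrative."""
    minor = {"star_count", "version_number", "last_verified"}
    return "auto_update" if signal_kind in minor else "flag_for_review"

def is_stale(last_verified: date, today: date, max_age_days=60):
    """True when the 'Last verified' date is more than 60 days old."""
    return (today - last_verified) > timedelta(days=max_age_days)
```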
Why freshness matters for AI citations: AI assistants prefer recent, accurate content. A post that says “50K GitHub stars” when the repo now has 80K loses credibility. The freshness engine keeps our content authoritative.
7. The Weekly Feedback Loop
Schedule: Weekly
What it does: The intelligence layer. It aggregates data from all sources:
- Website analytics — Which posts get traffic? What are the top referrers?
- Search performance — Which queries drive impressions and clicks?
- Reddit engagement — Which posts got traction vs. removed?
- AI citation signals — Are we appearing in Copilot, Perplexity, or ChatGPT results?
From this data, it generates learnings — patterns about what topics perform well, what content structures get cited, and what to avoid. These learnings are stored and feed back into future content generation, creating a self-improving system.
Example learnings the system might generate:
- “Posts about tools with 10K+ GitHub stars get 3x more traffic than posts about tools with <1K stars”
- “Answer pages targeting ‘What is [tool]?’ get cited by Copilot 40% more than ‘How to use [tool]’ pages”
- “Posts published before noon EST get 2x the first-day traffic”
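Learnings like the first one come from simple cohort comparisons. A sketch of the idea, using hypothetical post records with `traffic` and `stars` fields:

```python
from statistics import mean

def compare_traffic(posts, predicate, label):
    """Compare average traffic of posts matching a predicate vs the rest.
    `posts` is a list of dicts with a hypothetical 'traffic' key plus
    whatever fields the predicate inspects."""
    hits = [p["traffic"] for p in posts if predicate(p)]
    misses = [p["traffic"] for p in posts if not predicate(p)]
    if not hits or not misses:
        return None                    # cohort too small to learn from
    ratio = mean(hits) / mean(misses)
    return f"{label}: {ratio:.1f}x average traffic vs the rest"
```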
How the 7 Jobs Work Together
The automations aren’t independent — they form an interconnected system:
```
Daily Publisher ──────────▶ New blog posts
        │
AI Answer Engine ─────────▶ Answer pages (cite blog posts)
        │
Tools Directory ──────────▶ Tool pages (link to blog posts)
        │
Reddit Karma Builder ─────▶ Community credibility
        │
Daily Karma Stats ────────▶ Health monitoring → pauses karma builder if needed
        │
Freshness Engine ─────────▶ Updated content → better AI citations
        │
Weekly Feedback Loop ─────▶ Learnings → improves Publisher + Answer Engine
```
The feedback loop is the flywheel. Better data produces better content, which produces more traffic, which produces better data.
Setting Up OpenClaw Scheduling
OpenClaw scheduling is configured through its workspace and agent settings. Here’s the general approach:
For Cron-Style Tasks
OpenClaw supports standard cron expressions. You define what the agent should do, and OpenClaw ensures it runs at the specified time.
Key considerations:
- Time zones — Make sure your cron expressions account for your server’s time zone vs. your target audience’s time zone
- Overlap prevention — Don’t schedule two heavy tasks at the same time
- Failure notifications — Route results to Discord, Slack, or email so you know when things break
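The time-zone point is worth making concrete. The Python standard library's `zoneinfo` can translate an audience-local target into the server-local hour for the cron expression (zone names here match the EET/EST example used earlier):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# Target: 7 AM for a New York audience. Find the server-local hour to put
# in the cron expression when the server clock runs Helsinki time (EET).
target = datetime(2026, 3, 2, 7, 0, tzinfo=ZoneInfo("America/New_York"))
server_local = target.astimezone(ZoneInfo("Europe/Helsinki"))
print(server_local.hour)  # 14 → "0 14 * * *" on this date
```

One caveat: the two zones change DST on different dates (in 2026 the US shifts on March 8, the EU on March 29), so a fixed cron hour drifts by an hour for a few weeks each year unless you recompute it.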
For Heartbeat Tasks
The HEARTBEAT.md approach is simpler — you define tasks with intervals, and OpenClaw handles the scheduling. This is ideal for tasks that should run “roughly every N hours” without strict timing requirements.
Results Delivery
All of our automations report results to a Discord channel. This gives us a running log of everything the system does without having to check logs manually.
Typical output:
```
📝 Published: "Browser Use: The Framework Making Websites Accessible to AI Agents"
URL: https://andrew.ooo/posts/browser-use-ai-agent-browser-automation
Words: 2,847 | Build: 8s | Deploy: 12s
✅ Health check passed
📤 Submitted to Bing IndexNow
```
This means we can be completely hands-off most days while still knowing exactly what the system produced.
Reliability and Error Handling
Running 7 automations continuously means things will break. Our approach:
Graceful Degradation
Each automation is designed to fail safely:
- No topic found? → Skip the day, log it, try tomorrow
- API rate limited? → Back off and retry on next scheduled run
- Build fails? → Don’t deploy (never push broken code)
- Health check fails? → Flag urgently in Discord, don’t submit to search engines
- Reddit comment removed? → Track removal rate, pause if too high
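The "fail safely, notify loudly" pattern can be sketched as a generic wrapper around any scheduled job. The `notify` callable stands in for whatever delivers to Discord in production:

```python
import traceback

def run_job_safely(name, job, notify):
    """Run one scheduled job in isolation: a crash produces a notification
    and a None result instead of propagating. `notify` is a stand-in for
    the production Discord reporter."""
    try:
        return job()
    except Exception:
        notify(f"{name} failed:\n{traceback.format_exc()}")
        return None
```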
Independence
Each job is independent. If the Reddit karma builder crashes, the daily publisher still runs. If analytics are down, content still gets published. No single failure cascades.
Monitoring
The daily stats job acts as a canary. If it stops reporting to Discord, we know the scheduling system itself might be down.
Production Numbers
Here’s the weekly output across all 7 automations:
| Automation | Frequency | Weekly Output |
|---|---|---|
| Daily Publisher | 1/day | ~7 blog posts |
| AI Answer Engine | 1/day | ~70 answer pages |
| Tools Directory | 1/week | 1 directory update |
| Reddit Karma | ~8/day | ~56 comments |
| Daily Stats | 1/day | 7 reports |
| Freshness Engine | 1/week | 1 content audit |
| Feedback Loop | 1/week | 1 learnings update |
| **Total** | — | ~80 content actions/week |
All of this runs on a single MacBook Pro via OpenClaw. No cloud servers for the automation itself — just a laptop, an internet connection, and OpenClaw.
Quick Reference
| Automation | Type | Frequency | Purpose |
|---|---|---|---|
| Daily Publisher | Cron | Daily | Full pipeline: mine → publish |
| AI Answer Engine | Cron | Daily | Generate Q&A answer pages |
| Tools Directory | Cron | Weekly | Update tool listings |
| Reddit Karma | Heartbeat | Hourly (1/3 chance) | Build community credibility |
| Daily Stats | Cron | Daily | Reddit health monitoring |
| Freshness Engine | Cron | Weekly | Update stale content |
| Feedback Loop | Cron | Weekly | Analytics → learnings |
Key OpenClaw scheduling concepts:
- Cron — Fixed schedules using Unix cron expressions
- Heartbeat — Interval-based via HEARTBEAT.md
- Results delivery — Discord, Slack, or webhook notifications
- Failure handling — Each job fails independently and safely
Links:
- OpenClaw — The agent runtime
- OpenClaw Docs — Cron and heartbeat configuration
- Crontab Guru — Cron expression helper
Related Reading
This is Part 2 of our OpenClaw automation series:
- Part 1: How We Publish 37 Blog Posts Without Touching a Keyboard — The full content pipeline, from topic mining to deployment.
- Part 3: How We Track AI Citations: OpenClaw + Google GSC + Bing Webmaster Tools — Monitoring where AI assistants cite our content, and how to get more citations.