TL;DR

Goose is a free, open-source AI agent built by Block (Jack Dorsey’s company) that runs entirely on your machine. It’s not just a coding assistant — it handles research, automation, data analysis, and multi-step workflows through a composable extension system. Key highlights:

  • 38,000+ GitHub stars, Apache 2.0 license, 400+ contributors, now under the Linux Foundation’s Agentic AI Foundation (AAIF)
  • Any LLM — Anthropic, OpenAI, Google, Ollama, OpenRouter, Azure, Bedrock, and 15+ more providers
  • Desktop app + CLI + API — native apps for macOS, Linux, and Windows, built in Rust
  • 70+ MCP extensions — connect to GitHub, Slack, PostgreSQL, Jira, and dozens more tools via the Model Context Protocol
  • Recipes — reusable YAML workflow definitions you can share, version-control, and run in CI
  • Subagents — spawn parallel workers for code review, research, or file processing
  • Security built in — prompt injection detection, tool permissions, sandbox mode, adversary reviewer
  • ACP support — works as an ACP server for Zed, JetBrains, and VS Code; can use Claude Code and Codex as providers

Install in seconds: curl -fsSL https://github.com/aaif-goose/goose/releases/download/stable/download_cli.sh | bash

Think of it as the Swiss Army knife of AI agents — model-agnostic, extensible, and completely free.


What Is Goose and Why Should You Care?

Goose started life as an internal tool at Block. Their engineering teams needed something that could automate entire workflows — running tests, debugging failures, modifying code across files — not just autocomplete suggestions. Rather than license a closed-source tool, they built their own and open-sourced it under Apache 2.0 in January 2025.

Since then, it’s grown from a niche internal project to one of the most-starred AI agent repos on GitHub. The move to the Linux Foundation’s Agentic AI Foundation (AAIF) in early 2026 cemented its position as a vendor-neutral, community-governed project.

What sets Goose apart from Claude Code, Cursor, or OpenAI Codex:

  1. Model freedom — You’re not locked into one provider. Use Claude for complex architecture, GPT-4o for quick tasks, or a local Llama model through Ollama for air-gapped environments
  2. Beyond code — While most AI coding tools focus narrowly on programming, Goose handles research, writing, data analysis, and general automation
  3. Zero cost — The agent itself is free. You only pay for API calls to your chosen LLM provider (or nothing if you run local models)
  4. Deep MCP integration — With 70+ documented extensions, Goose can interact with virtually any tool in your stack

How Goose Actually Works

When you give Goose a task, it doesn’t just generate code and hand it back. It follows an agentic loop:

  1. Plan — Break the task into steps
  2. Execute — Read files, run shell commands, edit code, call MCP tools
  3. Observe — Check the results of each action
  4. Iterate — If something fails (a test breaks, a build errors), adjust and retry

This loop continues until the task is complete or Goose needs your input.
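The loop above can be sketched in miniature. This is a toy illustration, not Goose's implementation; the "task" here is just making every number in a list even, so planning, observation, and iteration are trivial to follow:

```python
# Toy sketch of the plan-execute-observe-iterate loop. Not Goose's actual
# code: the "task" is making every number in a list even, so each step
# and each observation is easy to trace.

def plan(numbers):
    # 1. Plan: one step per number that still needs fixing
    return [i for i, n in enumerate(numbers) if n % 2 != 0]

def execute(numbers, step):
    # 2. Execute: apply the fix for one step
    numbers[step] += 1

def agentic_loop(numbers, max_iterations=10):
    for _ in range(max_iterations):
        steps = plan(numbers)        # 1. Plan against the current state
        if not steps:                # 3. Observe: nothing left to fix
            return numbers
        for step in steps:
            execute(numbers, step)   # 2. Execute each step
        # 4. Iterate: loop back and re-plan against the new state
    raise RuntimeError("needs user input")  # hand control back to the user

print(agentic_loop([1, 2, 3]))  # → [2, 2, 4]
```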

Installation and First Run

Getting started takes about two minutes:

# macOS (Homebrew)
brew install goose

# Any platform (CLI)
curl -fsSL https://github.com/aaif-goose/goose/releases/download/stable/download_cli.sh | bash

# Or download the desktop app from goose-docs.ai

Configure your LLM provider:

# Set up with Claude (or any supported provider)
goose configure

# Or set environment variables directly
export ANTHROPIC_API_KEY="your-key-here"

Start a session:

# Interactive session
goose session

# Or give it a direct task
goose run "Fix the failing tests in src/auth/"

The first interaction feels responsive — Goose starts planning within seconds. The desktop app provides a GUI where you can watch it work through tasks in real time, which is genuinely useful for understanding what the agent is doing.


MCP Extensions: Goose’s Killer Feature

The Model Context Protocol (MCP) is the open standard for connecting AI agents to tools and data sources. Goose was one of its earliest adopters and has arguably the deepest integration in the ecosystem.

An MCP extension gives Goose access to a specific capability. Some examples:

# Example: Configure GitHub and PostgreSQL extensions
extensions:
  - name: github
    type: mcp
    config:
      token: ${GITHUB_TOKEN}
  - name: postgres
    type: mcp
    config:
      connection_string: ${DATABASE_URL}
  - name: slack
    type: mcp
    config:
      token: ${SLACK_BOT_TOKEN}

With these configured, you can say things like:

  • “Check the open PRs on our repo, review the code changes, and post summaries to #engineering in Slack”
  • “Query the production database for users who signed up last week and generate a retention report”
  • “Create a GitHub issue for each failing test, with the error message and suggested fix”

The 70+ documented extensions cover the major developer tools: GitHub, GitLab, Jira, Linear, Slack, Discord, PostgreSQL, MongoDB, AWS, GCP, Docker, Kubernetes, and more. The community is adding new ones regularly.
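Under the Model Context Protocol, each of those tool invocations is a JSON-RPC 2.0 request on the wire. Here's a minimal sketch of the message shape — the tool name `create_issue` and its arguments are hypothetical and depend on the extension's own schema:

```python
import json

# Sketch of an MCP tool invocation as a JSON-RPC 2.0 request.
# The tool name "create_issue" and its arguments are hypothetical;
# each extension defines its own tool names and argument schemas.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "create_issue",
        "arguments": {
            "repo": "acme/web",
            "title": "Fix failing auth test",
        },
    },
}
print(json.dumps(request, indent=2))
```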

MCP Apps: Interactive UIs Inside Goose

MCP Apps, a newer feature, let extensions render interactive UIs — buttons, forms, data visualizations — directly inside Goose Desktop. This turns Goose from a text-only agent into something closer to an AI-powered IDE for any workflow:

# Install an MCP app
goose extensions install mcp-app-dashboard

# The app renders inside the Goose Desktop interface
goose run --recipe dashboard-daily

Recipes: Reusable Workflow Automation

Recipes are YAML files that define multi-step workflows. Think of them as GitHub Actions for AI agent tasks:

# recipe: fix-and-test.yaml
name: Fix Failing Tests
description: Find failing tests, fix them, verify, commit
steps:
  - run: "npm test 2>&1 | head -50"
    capture: test_output
  - prompt: |
      The test output is:
      ${test_output}
      Fix each failing test. After fixing, re-run to confirm.
  - run: "npm test"
    expect: exit_code_0
  - run: "git add -A && git commit -m 'fix: resolve failing tests'"

You trigger a recipe with a single command:

goose run --recipe fix-and-test.yaml

The community has published hundreds of recipes covering everything from dependency updates to full CI pipeline debugging. Some popular ones:

  • code-review — Review a PR with specific style guidelines
  • refactor-module — Refactor a module while maintaining test coverage
  • onboard-repo — Analyze a new codebase and generate documentation
  • debug-ci — Fetch CI logs, identify failures, propose fixes

Writing custom recipes well takes a few days to learn, not minutes. The YAML syntax is straightforward, but chaining Goose actions effectively requires experimentation.


Subagents: Parallel Execution

One of Goose’s more powerful features is subagent spawning. You can kick off independent workers that run in parallel:

# In a Goose session
> Spawn 3 subagents:
  1. Review the auth module for security issues
  2. Update all deprecated API calls in src/api/
  3. Generate API documentation for the public endpoints

Each subagent gets its own context and runs independently. The main conversation stays clean while parallel work happens in the background. Results are collected and presented when complete.

This is particularly useful for:

  • Running multiple code reviews simultaneously
  • Processing large sets of files in parallel
  • Researching multiple topics at once
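The fan-out/collect pattern is easy to picture with ordinary parallel code. This is a conceptual sketch only, using Python's thread pool rather than Goose's actual subagent machinery; `run_subagent` is a stand-in for an independent worker:

```python
# Conceptual sketch of the subagent fan-out/collect pattern: each worker
# runs independently, and results are gathered when all complete.
# Illustration only — this is not Goose's subagent API.
from concurrent.futures import ThreadPoolExecutor

def run_subagent(task):
    # Stand-in for an independent agent working on one task
    return f"done: {task}"

tasks = [
    "Review the auth module for security issues",
    "Update deprecated API calls in src/api/",
    "Generate API documentation for public endpoints",
]

with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(run_subagent, tasks))  # parallel fan-out

for r in results:  # results collected once all workers finish
    print(r)
```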

Security: Not an Afterthought

Goose takes security seriously — partly because Block’s own security team identified a prompt injection vulnerability in early 2026 and used it as a learning opportunity to harden the entire system:

  • Recipe visualization — See exactly what a recipe will do before running it
  • Unicode character stripping — Prevents hidden instructions in prompts
  • Tool permission controls — Whitelist which tools Goose can access
  • Sandbox mode — Restrict file system and network access
  • Adversary reviewer — A secondary AI that monitors for malicious prompts or unsafe actions

The practical takeaway: never enable auto-approve mode in critical environments. Always review what Goose is doing, especially when loading recipes from external sources.
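The Unicode-stripping defense is the simplest of these to reason about. Here is a simplified sketch of the idea — not Goose's actual filter — dropping format-category characters (zero-width spaces, bidi controls) that can hide instructions inside an otherwise innocent prompt:

```python
import unicodedata

# Sketch of the Unicode-stripping idea: remove format-category ("Cf")
# characters such as zero-width spaces (U+200B) and bidi overrides
# (U+202E) that can smuggle hidden instructions into a prompt.
# A simplified illustration, not Goose's actual filter.
def strip_hidden(text):
    return "".join(ch for ch in text if unicodedata.category(ch) != "Cf")

clean = strip_hidden("run tests\u200b\u202e now")
print(clean)  # → run tests now
```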


Goose vs Claude Code vs Cursor vs Codex

The comparison isn’t as simple as “which writes better code.” These tools make fundamentally different trade-offs:

| Feature | Goose | Claude Code | Cursor | OpenAI Codex |
| --- | --- | --- | --- | --- |
| Price | Free (+ API costs) | $20/mo (Max) | $20/mo Pro | API-based |
| License | Apache 2.0 | Proprietary | Proprietary | Proprietary |
| LLM Support | Any (15+ providers) | Claude only | Multiple | OpenAI only |
| Interface | Desktop + CLI + API | Terminal CLI | VS Code GUI | Web + CLI |
| MCP Extensions | 70+ native | Yes (growing) | Limited | No |
| Recipes/Workflows | Yes (YAML) | No | No | No |
| Subagents | Yes | Yes | No | Limited |
| Local Models | Yes (Ollama) | No | No | No |
| Offline Capable | Yes (with local LLM) | No | No | No |

When to choose Goose:

  • You want model flexibility (switch providers without changing workflow)
  • You need air-gapped or privacy-focused operation with local models
  • You want reusable workflow automation via recipes
  • You need deep MCP tool integration beyond just coding
  • Budget is a concern — the agent itself is free

When to choose Claude Code or Cursor:

  • You want the best possible code quality out of the box (Claude models are hard to beat for coding)
  • You prefer a polished, opinionated experience over flexibility
  • You don’t need workflow automation or model switching

Honest Limitations

Goose is impressive, but it’s not perfect:

  1. Model quality varies dramatically — With Claude Opus or GPT-4o, results are comparable to Claude Code. With local models like Llama 3 70B, it handles simple tasks but falls apart on multi-file coordination. Smaller local models (7B-13B) aren’t useful for real engineering work
  2. Recipe learning curve — Writing effective custom recipes takes days of experimentation, not minutes
  3. Desktop app stability — The desktop app occasionally freezes during long sessions. The CLI is more reliable for heavy workloads
  4. Context window management — On complex tasks spanning many files, Goose can lose track of earlier context. This is a limitation of the underlying LLMs, but tools like Cursor handle it more gracefully with their IDE integration
  5. API costs add up — With a top-tier cloud model, you’ll spend $5-20/month for moderate usage. At that point, the cost advantage over Claude Code’s $20/month flat rate is minimal
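On point 5, a back-of-envelope estimate shows how moderate agentic use lands in that range. The per-token rates and daily token counts below are illustrative assumptions, not any provider's actual pricing:

```python
# Back-of-envelope API cost estimate. The rates below are illustrative
# placeholders, not any provider's current pricing.
input_rate = 3.00    # $ per 1M input tokens (assumed)
output_rate = 15.00  # $ per 1M output tokens (assumed)

# Agentic sessions skew heavily toward input: the agent re-reads files
# and tool results on every turn, while emitting relatively little text.
daily_input_tokens = 200_000
daily_output_tokens = 20_000

daily = (daily_input_tokens / 1e6) * input_rate \
      + (daily_output_tokens / 1e6) * output_rate
print(f"~${daily:.2f}/day, ~${daily * 22:.2f}/month (22 working days)")
# → ~$0.90/day, ~$19.80/month (22 working days)
```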

Community Sentiment

The community reaction has been overwhelmingly positive, with criticism focused on rough edges rather than fundamental issues:

Reddit (r/LocalLLaMA): “The MCP integration is what sold me. I can connect it to everything — my monitoring stack, my databases, even my home automation. No other AI agent does this as well.” — discussion thread

Reddit (r/AI_Agents): “The fact that Block actually uses this internally gives me confidence it won’t be abandoned. Too many open-source AI tools are weekend projects that die after the initial hype.” — 128 upvotes

Hacker News: Multiple front-page discussions, with the community particularly impressed by the AAIF move to the Linux Foundation — ensuring long-term vendor neutrality.

GitHub: 400+ contributors, active issue tracker (a sign people are actually using it), and a public 2026 roadmap in GitHub Discussions.


Who Should Use Goose?

Yes, if you:

  • Want a free, open-source AI agent with no vendor lock-in
  • Need to connect AI to your existing toolchain (databases, issue trackers, monitoring)
  • Work in environments where data privacy matters (air-gapped, local models)
  • Want reusable workflow automation you can share with your team
  • Like having control over which LLM powers your agent

No, if you:

  • Want the absolute best code generation quality (Claude Code with Opus is still king)
  • Prefer a polished GUI experience (Cursor wins here)
  • Don’t want to manage API keys and model configuration
  • Need something that “just works” with minimal setup

Getting Started Today

# 1. Install
brew install goose
# or
curl -fsSL https://github.com/aaif-goose/goose/releases/download/stable/download_cli.sh | bash

# 2. Configure your provider
goose configure

# 3. Start coding
goose session

# 4. Or run a community recipe
goose run --recipe code-review --input "Review the latest PR"


FAQ

Is Goose really free?

Yes. The agent itself is 100% free and open-source under Apache 2.0. You only pay for API calls to your chosen LLM provider. If you run a local model through Ollama, total cost is zero. With cloud models, expect $5-20/month for moderate usage.

Can I use Goose completely offline?

Yes. Point it at a local Ollama instance and Goose runs entirely on your machine with no internet connection required. Quality depends heavily on the local model — larger models (70B+) work reasonably well for simple tasks; smaller models struggle with multi-file coordination.
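At the HTTP level, "pointing Goose at a local Ollama instance" means talking to Ollama's API on localhost:11434. Here is a sketch of what a generation request looks like, shown as a payload without actually sending it (the model name is whatever you've pulled locally):

```python
import json

# Sketch of the HTTP request shape behind a local Ollama setup.
# Ollama serves an API on localhost:11434; a one-shot generation
# request looks roughly like this. Built for illustration, not sent.
endpoint = "http://localhost:11434/api/generate"
payload = {
    "model": "llama3",       # any model you've pulled locally
    "prompt": "Summarize this diff ...",
    "stream": False,         # return one response instead of a stream
}
print(endpoint)
print(json.dumps(payload))
```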

How does Goose compare to Claude Code for coding?

With Claude models, the code quality is comparable. Claude Code has the advantage of tighter model integration and a more polished experience. Goose wins on flexibility (any model, MCP extensions, recipes) and cost (no subscription). The right choice depends on whether you value flexibility or polish.

What happened with the security vulnerability?

In early 2026, Block’s security team identified a prompt injection attack vector through malicious recipes. They disclosed it transparently and shipped multiple fixes: recipe visualization, Unicode stripping, permission controls, and an adversary reviewer mode. The incident actually increased community trust because of how openly it was handled.

Can I use my existing Claude Code or Cursor subscription with Goose?

Yes. Through ACP (Agent Client Protocol), Goose can use your existing Claude, ChatGPT, or Gemini subscriptions as providers. This means you don’t need separate API keys — your existing subscription handles the model access.

Is Goose suitable for enterprise use?

Yes. The Apache 2.0 license allows unrestricted commercial use. The Linux Foundation governance ensures long-term stability. Block uses it internally across their engineering teams. Custom distributions let enterprises preconfigure providers, extensions, and branding for their specific needs.