TL;DR
DeerFlow 2.0 (Deep Exploration and Efficient Research Flow) is ByteDance’s open-source SuperAgent harness that orchestrates specialized sub-agents, sandboxed code execution, and long-term memory. Key highlights:
- 39,400+ GitHub stars in roughly 30 days after release
- Execution-first: Runs code inside Docker containers — not just suggestions
- Sub-agent architecture: Researcher, Coder, and Reporter agents with scoped contexts
- Multi-model support: OpenAI, Anthropic, Gemini, DeepSeek, Doubao, Kimi, and more
- Sandboxed execution: Local processes, Docker containers, or Kubernetes pods
- Skills system: Markdown-based skills for research, reports, slide decks, web apps, data pipelines
- MIT licensed: Fully open-source, ground-up v2 rewrite sharing no code with v1
Install with: `git clone https://github.com/bytedance/deer-flow && make config && make docker-start`
What Is DeerFlow 2.0?
Most AI coding tools help you write code. DeerFlow 2.0 helps you run entire workflows.
Released on February 27, 2026 by ByteDance, DeerFlow 2.0 is a complete ground-up rewrite of the original DeerFlow project — sharing literally zero code with v1. Where the first version was a research-oriented prototype, v2 is a production-grade SuperAgent harness that decomposes complex tasks into sub-tasks, assigns them to specialized agents, executes code in sandboxed containers, and maintains memory across sessions.
Think of DeerFlow not as an AI assistant that sits in your editor, but as an autonomous orchestrator that can:
- Research a topic across the web, synthesize findings into a report, and generate a slide deck
- Build a complete web application from a natural language description
- Construct and execute data pipelines with real code running in Docker
- Generate images and videos using integrated model APIs
- Coordinate across Telegram, Slack, or Feishu for team notifications
The name says it all: Deep Exploration and Efficient Research Flow. It’s built on LangGraph and LangChain, leveraging their graph-based orchestration to manage complex, multi-step agent workflows.
Why It’s Trending NOW
DeerFlow 2.0 hit #1 on GitHub Trending within 24 hours of its release. Within 30 days, it accumulated over 39,400 stars — a pace that puts it among the fastest-growing AI repositories of 2026.
Several factors converged to make this happen:
The Execution-First Paradigm Shift
The AI agent space has been moving from “suggestion” to “execution” throughout 2025-2026. Tools like Claude Code, Cursor, and GitHub Copilot have pushed the boundary from autocomplete to autonomous coding. But DeerFlow occupies a different niche entirely — it’s not about real-time coding assistance. It’s about long-running autonomous tasks that might take minutes or hours to complete.
When you ask DeerFlow to “build a dashboard that visualizes our sales data,” it doesn’t just write the code and hand it to you. It spins up a Docker container, installs dependencies, runs the code, tests the output, and gives you a working application. This execution-first approach resonated deeply with developers who were tired of copy-pasting AI suggestions into terminals.
ByteDance’s AI Credibility
ByteDance has been building significant AI infrastructure internally — from recommendation engines powering TikTok to their Doubao (豆包) model family. DeerFlow 2.0 represents their most visible contribution to the open-source AI tooling ecosystem, and developers took notice. The project arriving with ByteDance’s engineering resources behind it signaled that this wasn’t a weekend hackathon project.
Perfect Timing
The release landed during a period of intense interest in AI agent frameworks. Developers were actively evaluating CrewAI, AutoGen, LangGraph, and other orchestration tools. DeerFlow 2.0 entered the conversation as a batteries-included solution that combined orchestration, execution, memory, and a skills system into a single package.
Key Features
Sub-Agent Architecture
DeerFlow decomposes tasks using three specialized sub-agents, each with carefully scoped contexts:
Researcher — Gathers information from web searches, documentation, APIs, and files. The Researcher agent has access to search tools and web fetching capabilities, and its context is scoped to information gathering. It doesn’t write code or generate final outputs.
Coder — Writes and executes code in sandboxed environments. The Coder agent receives structured context from the Researcher and task decomposer, then produces working code. Crucially, it doesn’t just generate code — it executes it in a sandbox and iterates until the output meets the requirements.
Reporter — Synthesizes results from Researcher and Coder into human-readable outputs: reports, summaries, slide decks, documentation. The Reporter agent handles formatting, visualization, and final delivery.
This separation of concerns means each agent operates with a focused context window rather than trying to be everything at once — a core principle of effective context engineering.
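The role separation above can be sketched in plain Python. This is a minimal illustration of the scoped-tools idea, not DeerFlow's actual API — the class, agent, and tool names here are all hypothetical (the real implementation is built on LangGraph):

```python
# Minimal sketch of scoped sub-agents. All names are hypothetical --
# DeerFlow's real implementation is built on LangGraph.
from dataclasses import dataclass, field

@dataclass
class SubAgent:
    name: str
    allowed_tools: set[str] = field(default_factory=set)

    def can_use(self, tool: str) -> bool:
        # An agent may only invoke tools inside its scoped context.
        return tool in self.allowed_tools

researcher = SubAgent("researcher", {"web_search", "fetch_url", "read_file"})
coder = SubAgent("coder", {"write_file", "run_sandbox"})
reporter = SubAgent("reporter", {"render_markdown", "render_slides"})

# The Researcher gathers information but never executes code:
assert researcher.can_use("web_search")
assert not researcher.can_use("run_sandbox")
```

The point of the sketch: each agent's capability set is an allow-list, so a prompt-injected "run this script" instruction reaching the Researcher has no execution tool to invoke.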
Sandboxed Code Execution
DeerFlow offers three sandbox modes for code execution:
- Local Process — Runs code directly on the host machine. Fast but less isolated. Suitable for trusted, low-risk operations.
- Docker Containers — The recommended mode. Each execution spins up an isolated container with OS-level security via seccomp profiles and cgroups. Dependencies are installed fresh, and the container is torn down after execution.
- Kubernetes Pods — For production and team deployments. Enables resource quotas, network policies, and integration with existing cluster infrastructure.
```yaml
# Example sandbox configuration
sandbox:
  mode: docker
  image: python:3.12-slim
  timeout: 300
  memory_limit: 2g
  network: restricted
```
The security model is serious — Docker isolation with seccomp and cgroups means that even if an AI-generated script goes rogue, it’s contained within the sandbox. This is a meaningful improvement over agents that execute code directly on your machine.
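Conceptually, a sandbox configuration like the one above maps onto `docker run` isolation flags. The sketch below shows that translation — it is illustrative only, not DeerFlow's actual sandbox code, and the flag choices (e.g. `--network=none` for "restricted") are assumptions:

```python
# Illustrative translation of a sandbox config into docker run flags.
# This mapping is a sketch; DeerFlow's real sandbox layer is more involved.
def docker_command(config: dict) -> list[str]:
    cmd = ["docker", "run", "--rm"]
    cmd.append(f"--memory={config['memory_limit']}")   # cgroup memory cap
    if config.get("network") == "restricted":
        cmd.append("--network=none")                   # cut off network access
    # Syscall filtering via a seccomp profile (path is hypothetical):
    cmd.append("--security-opt=seccomp=default.json")
    cmd.append(config["image"])
    return cmd

cmd = docker_command({
    "mode": "docker",
    "image": "python:3.12-slim",
    "timeout": 300,
    "memory_limit": "2g",
    "network": "restricted",
})
print(" ".join(cmd))
```

Each layer here is independent: cgroups bound resource consumption, the network flag removes exfiltration paths, and seccomp restricts which system calls the contained process may make at all.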
Skills System
DeerFlow’s skills are Markdown-based configuration files that define reusable capabilities. Think of them as structured prompt templates combined with tool configurations and execution parameters.
Built-in skills include:
- Research: Deep web research with source synthesis
- Report Generation: Structured reports from research findings
- Slide Decks: Presentation generation with formatting
- Web Apps: Full-stack application scaffolding and deployment
- Data Pipelines: ETL workflows with execution and validation
- Image Generation: Integration with image models
- Video Generation: Integration with video models
Custom skills are straightforward to create — write a Markdown file describing the skill’s purpose, required inputs, execution steps, and expected outputs. DeerFlow’s orchestrator uses this to plan and execute the workflow.
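A custom skill file might look something like the following. This is a hypothetical example to show the shape — the exact schema and section names are defined by the project, so consult DeerFlow's skill documentation before writing your own:

```markdown
# Skill: changelog-summary

## Purpose
Summarize a repository's recent commit history into a changelog section.

## Inputs
- repo_url: Git repository to analyze
- since: date or tag marking the start of the range

## Steps
1. Researcher: fetch the commit log for the given range.
2. Coder: group commits by type (feat, fix, docs) with a short script.
3. Reporter: render the grouped commits as a Markdown changelog.

## Output
A Markdown changelog section grouped by change type.
```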
Long-Term Memory
DeerFlow maintains a persistent memory system across sessions. This isn’t just conversation history — it includes:
- Task outcomes: What worked, what failed, and why
- User preferences: How you like results formatted, which models you prefer
- Domain knowledge: Facts and context accumulated across research sessions
- Skill refinements: Learned adjustments to skill execution patterns
Memory enables DeerFlow to improve over time and avoid repeating mistakes across sessions.
Multi-Model Support
DeerFlow doesn’t lock you into a single model provider. Supported models include:
- OpenAI (GPT-4o, GPT-5, o-series)
- Anthropic (Claude, via Claude Code OAuth)
- Google (Gemini models)
- DeepSeek (v3.2 and later)
- ByteDance (Doubao-Seed-2.0-Code)
- Moonshot (Kimi 2.5)
- Codex CLI integration
The project’s recommended models are Doubao-Seed-2.0-Code, DeepSeek v3.2, and Kimi 2.5 — all of which perform well on agentic coding and research tasks. You can mix and match models across sub-agents, using a cheaper model for research and a more capable model for code generation.
Architecture & How It Works
DeerFlow’s architecture is built on LangGraph, which provides the graph-based orchestration layer for managing complex agent workflows.
Task Decomposition Flow
When you submit a request to DeerFlow, here’s what happens:
```text
User Request
        ↓
┌─────────────────────┐
│   Task Decomposer   │  ← Breaks request into sub-tasks
└─────────────────────┘
        ↓
┌─────────────────────┐
│    Orchestrator     │  ← Plans execution order, assigns agents
│     (LangGraph)     │
└─────────────────────┘
    ↓       ↓       ↓
┌──────────┐ ┌──────────┐ ┌──────────┐
│Researcher│ │  Coder   │ │ Reporter │
│  Agent   │ │  Agent   │ │  Agent   │
└──────────┘ └──────────┘ └──────────┘
    ↓       ↓       ↓
┌─────────────────────┐
│  Result Aggregator  │  ← Combines outputs, validates
└─────────────────────┘
        ↓
┌─────────────────────┐
│   Memory & Output   │  ← Stores learnings, delivers result
└─────────────────────┘
```
The LangGraph orchestrator manages state transitions, handles failures and retries, and ensures that agents receive only the context they need. This is context engineering in practice — rather than dumping everything into a single massive prompt, DeerFlow scopes each agent’s context to its specific role.
Context Engineering
Each sub-agent receives a tailored context window:
- Researcher gets: the user’s query, search results, relevant memory, skill instructions
- Coder gets: structured requirements from the decomposer, relevant code snippets, sandbox configuration, execution feedback loops
- Reporter gets: Researcher findings, Coder outputs, formatting preferences, template instructions
This scoped approach means DeerFlow can handle complex multi-step tasks without hitting context window limits, even with smaller models.
Getting Started
Prerequisites
- Node.js 22+
- Python 3.12
- pnpm (package manager)
- Docker (for sandboxed execution)
Installation
```bash
# Clone the repository
git clone https://github.com/bytedance/deer-flow.git
cd deer-flow

# Run the configuration wizard
make config

# Start with Docker (recommended)
make docker-start
```
The `make config` step walks you through model selection, API key configuration, and sandbox preferences. It generates a `.env` file and the necessary YAML configuration.
Configuration
DeerFlow’s configuration lives in YAML files. Here’s a minimal example:
```yaml
# config.yaml
models:
  default: deepseek-v3.2
  coder: doubao-seed-2.0-code
  researcher: deepseek-v3.2
  reporter: kimi-2.5

sandbox:
  mode: docker
  timeout: 300

memory:
  enabled: true
  backend: sqlite

server:
  port: 2026
  host: localhost
```
First Run
Once started, DeerFlow runs on `localhost:2026` with a web-based interface:
```bash
# After make docker-start, open your browser
open http://localhost:2026
```
You can also interact via the API:
```bash
# Submit a task via API
curl -X POST http://localhost:2026/api/tasks \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "Research the top 5 JavaScript frameworks in 2026 and create a comparison report",
    "skills": ["research", "report"]
  }'
```
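The same request can be made from Python using only the standard library. The endpoint and payload shape mirror the curl example above; the JSON response handling is an assumption, since the article doesn't document the response schema:

```python
# Submit a task to a locally running DeerFlow instance.
# Endpoint and payload mirror the curl example; response shape is an assumption.
import json
import urllib.request

def build_payload(prompt: str, skills: list[str]) -> dict:
    return {"prompt": prompt, "skills": skills}

def submit_task(prompt: str, skills: list[str],
                base_url: str = "http://localhost:2026") -> dict:
    data = json.dumps(build_payload(prompt, skills)).encode()
    req = urllib.request.Request(
        f"{base_url}/api/tasks",
        data=data,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

payload = build_payload(
    "Research the top 5 JavaScript frameworks in 2026 and create a comparison report",
    ["research", "report"],
)
print(json.dumps(payload))
```

A thin wrapper like this is enough to drive DeerFlow from scripts or CI jobs rather than the web UI.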
IM Integrations
DeerFlow integrates with messaging platforms for team workflows:
- Telegram: Bot integration for submitting tasks and receiving results
- Slack: Slash commands and webhook-based notifications
- Feishu: Native integration for ByteDance’s enterprise messenger (popular in Asian markets)
Real-World Use Cases
Deep Research & Report Generation
DeerFlow’s original purpose — and still its strongest use case. Submit a research query and get a structured report with sources:
```text
Task: "Analyze the competitive landscape of AI code editors in Q1 2026.
Include market share estimates, pricing models, and developer sentiment."
```
DeerFlow will:
1. Researcher agent searches across multiple sources
2. Researcher synthesizes findings into structured data
3. Reporter agent generates a formatted report with citations
4. Output: A comprehensive Markdown or PDF report
Data Pipeline Construction
Ask DeerFlow to build and execute a data processing pipeline:
```text
Task: "Download the last 30 days of HackerNews top stories,
extract titles and scores, analyze sentiment,
and create a visualization dashboard."
```
DeerFlow will:
1. Coder writes a scraping script
2. Executes in Docker sandbox
3. Processes data with pandas
4. Generates charts with matplotlib/plotly
5. Outputs a static HTML dashboard
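A representative slice of what the Coder agent might generate for the analysis step looks like the following. This is an illustration, not actual DeerFlow output, and the story records are fabricated sample inputs:

```python
# The kind of aggregation step the Coder agent might produce for the
# HackerNews task above. The input records are hypothetical samples.
from statistics import mean

def summarize_stories(stories: list[dict]) -> dict:
    scores = [s["score"] for s in stories]
    return {
        "count": len(stories),
        "mean_score": round(mean(scores), 1),
        "top": max(stories, key=lambda s: s["score"])["title"],
    }

sample = [
    {"title": "Show HN: A tiny sandbox", "score": 412},
    {"title": "Why containers leak", "score": 187},
    {"title": "Ask HN: Agent frameworks?", "score": 96},
]
print(summarize_stories(sample))
# → {'count': 3, 'mean_score': 231.7, 'top': 'Show HN: A tiny sandbox'}
```

Because such scripts run inside the Docker sandbox, a bug (or a hostile dependency pulled in during scraping) stays contained rather than touching the host.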
Web Application Scaffolding
DeerFlow can generate and run complete web applications:
```text
Task: "Build a simple expense tracker with a React frontend
and Express backend. Include user authentication."
```
DeerFlow will:
1. Researcher checks current best practices
2. Coder scaffolds the full-stack app
3. Executes and tests in Docker
4. Reporter documents the API and setup instructions
Slide Deck Generation
Need a presentation? DeerFlow can research a topic and produce slides:
```text
Task: "Create a 15-slide presentation on the state of
open-source AI agents for our team's tech talk."
```
DeerFlow will:
1. Research current landscape
2. Structure findings into slide content
3. Generate formatted slides (Markdown-based or HTML)
4. Include relevant diagrams and comparisons
Community Reactions
Reddit (r/LocalLLaMA)
The r/LocalLLaMA community — often the first to evaluate new open-source AI tools — picked up DeerFlow quickly. A highly-upvoted thread titled “DeerFlow is OSS now: LLM + Langchain + tools” generated significant discussion.
Developers praised the execution-first approach and the sandbox security model. Several commenters noted the practical value of Docker-based isolation compared to agents that run code directly on the host. The multi-model support was also highlighted positively — the ability to use local models via DeepSeek or other providers appealed to the privacy-conscious crowd.
Skeptics raised questions about ByteDance’s long-term commitment to the project and whether the LangChain/LangGraph dependency would become a maintenance burden.
VentureBeat Coverage
VentureBeat covered DeerFlow 2.0 with a balanced assessment, noting that it’s “not a consumer product” and “requires Docker/YAML knowledge.” Their coverage emphasized the enterprise potential — the Kubernetes sandbox mode and IM integrations suggest ByteDance is positioning DeerFlow for team and organizational use cases, not individual developers looking for a Copilot replacement.
Developer Community (DEV, MarkTechPost, SitePoint)
Technical publications focused on the architecture. DEV Community posts walked through the LangGraph-based orchestration, while MarkTechPost highlighted the skills system as a differentiator. SitePoint’s coverage focused on practical getting-started tutorials. DeepLearning.ai featured DeerFlow in their newsletter, bringing attention from the ML research community.
The consensus across outlets: DeerFlow 2.0 is impressive technically, but its complexity puts it in a different category than consumer-friendly tools. It’s for developers and teams who want fine-grained control over their agent workflows.
Honest Limitations
Not Consumer-Friendly
Let’s be direct: DeerFlow 2.0 is not something you install in two minutes and start using. You need:
- Docker knowledge (or at minimum, comfort with `make docker-start`)
- YAML configuration literacy
- Understanding of API keys and model providers
- Comfort with localhost web applications
If you’re looking for something that “just works” like Cursor or GitHub Copilot, DeerFlow is not that. It’s infrastructure for building and running agent workflows, not a polished consumer product.
Docker Is Effectively Required
While the local process sandbox mode exists, the Docker sandbox is the recommended and most-tested mode. This means:
- Docker Desktop or Docker Engine must be installed and running
- Container images need to be pulled (initial startup is slow)
- Resource usage can be significant — each execution spins up a container
- On macOS, Docker Desktop’s VM adds overhead
For developers already using Docker daily, this is fine. For those who haven’t touched containers, it’s a barrier.
ByteDance Ownership — The Geopolitical Factor
This is worth addressing honestly. ByteDance is a Chinese technology company, and some organizations — particularly in government, defense, and regulated industries — have policies restricting the use of software from certain jurisdictions.
DeerFlow is MIT licensed and fully open-source, which means the code is auditable. However, organizational policies may still apply. If you’re in a regulated environment, check your compliance requirements before adopting DeerFlow.
For individual developers and most private companies, the MIT license and open-source nature mitigate most concerns. The code is on GitHub, it’s auditable, and you can fork it.
Local Model Performance
While DeerFlow supports multiple model providers, the quality of results varies significantly by model. The recommended models (Doubao-Seed-2.0-Code, DeepSeek v3.2, Kimi 2.5) perform well, but smaller or less capable local models may produce poor results in the multi-agent workflow. The orchestrator is only as good as the models powering its agents.
LangChain/LangGraph Dependency
Building on LangGraph provides powerful orchestration but also ties DeerFlow to LangChain’s ecosystem and release cadence. If LangChain makes breaking changes, DeerFlow inherits that upgrade burden. Some developers in the community have expressed preference for frameworks with fewer dependencies.
DeerFlow vs Alternatives
DeerFlow vs Claude Code
Different tools for different jobs. Claude Code is a real-time coding assistant that sits in your terminal and helps you write, debug, and refactor code interactively. DeerFlow is an autonomous orchestrator for long-running tasks.
| Aspect | DeerFlow 2.0 | Claude Code |
|---|---|---|
| Primary use | Long-running autonomous tasks | Real-time interactive coding |
| Execution model | Sandboxed containers | Direct terminal execution |
| Agent architecture | Multi-agent (Researcher, Coder, Reporter) | Single agent with tools |
| Setup complexity | High (Docker, YAML, config) | Low (npm install) |
| Model flexibility | Multi-model, any provider | Anthropic models |
| Best for | Research, reports, pipelines | Code writing, debugging |
DeerFlow vs Cursor
Cursor is an AI-powered IDE. DeerFlow is an autonomous agent harness. Cursor excels at interactive code editing with AI assistance. DeerFlow excels at delegating entire workflows to AI agents. They don’t really compete — you might use Cursor to write code and DeerFlow to run research or build pipelines.
DeerFlow vs CrewAI
Both are multi-agent orchestration frameworks, but they differ in philosophy:
- CrewAI focuses on role-based agent collaboration with a simpler API
- DeerFlow provides a more opinionated, batteries-included solution with built-in sandbox execution, IM integrations, and a skills system
CrewAI is lighter and more flexible. DeerFlow is more complete but heavier.
DeerFlow vs LangGraph Standalone
DeerFlow is built on top of LangGraph. Using LangGraph directly gives you maximum flexibility but requires building everything yourself — agents, tools, memory, execution environments, UIs. DeerFlow provides all of that out of the box. Think of DeerFlow as a pre-built application on the LangGraph platform.
Who Should Use DeerFlow 2.0
Use DeerFlow if you:
- Need to automate complex, multi-step research and analysis workflows
- Want sandboxed code execution with real Docker isolation
- Are comfortable with Docker, YAML, and self-hosted tools
- Need multi-model flexibility across different providers
- Want to integrate AI agents with team messaging (Slack, Telegram, Feishu)
- Are building internal tools or workflows that require autonomous execution
- Want an open-source solution you can audit, fork, and customize
Skip DeerFlow if you:
- Want a simple AI coding assistant for daily development (use Claude Code or Cursor)
- Don’t want to manage Docker infrastructure
- Need something that works out of the box with zero configuration
- Are in a regulated environment with restrictions on ByteDance software
- Primarily need real-time, interactive coding assistance
- Are looking for IDE integration
FAQ
Is DeerFlow 2.0 free?
Yes. DeerFlow 2.0 is MIT licensed and completely free to use, modify, and distribute. However, you’ll need API keys for the LLM providers you choose (OpenAI, Anthropic, etc.), which may have their own costs. If you use open-source models like DeepSeek locally, the only cost is your compute.
How is DeerFlow 2.0 different from DeerFlow v1?
DeerFlow 2.0 is a complete ground-up rewrite that shares no code with v1. The original DeerFlow was a research prototype. V2 is a production-grade SuperAgent harness with a new architecture built on LangGraph, a skills system, sandboxed execution, IM integrations, and multi-model support. If you used v1, treat v2 as an entirely new project.
Is it safe to run AI-generated code in DeerFlow?
DeerFlow’s Docker sandbox mode provides OS-level isolation using seccomp profiles and cgroups. This means AI-generated code runs in an isolated container with restricted system calls and resource limits. It’s significantly safer than running AI-generated code directly on your machine. For maximum security in production environments, use the Kubernetes sandbox mode with network policies.
Can I use DeerFlow with local models?
Yes. DeerFlow supports any model provider with an OpenAI-compatible API. You can run local models via Ollama, vLLM, or similar inference servers and point DeerFlow at them. However, agentic workflows are demanding — smaller local models may produce significantly worse results than the recommended models (Doubao-Seed-2.0-Code, DeepSeek v3.2, Kimi 2.5).
Does DeerFlow work on Windows?
DeerFlow requires Docker and runs on localhost:2026. It’s primarily developed and tested on Linux and macOS. Windows users can run it via WSL2 with Docker Desktop. Native Windows support is not a primary focus of the project.
How does DeerFlow compare to AutoGPT or AgentGPT?
DeerFlow is more focused and production-oriented. AutoGPT and AgentGPT pioneered the autonomous agent concept but often suffered from runaway loops and unreliable execution. DeerFlow’s structured sub-agent architecture (Researcher, Coder, Reporter) with scoped contexts and sandboxed execution provides more predictable, reliable results. The LangGraph-based orchestration also offers better state management and error handling than earlier autonomous agent frameworks.
Get started today:
```bash
git clone https://github.com/bytedance/deer-flow.git
cd deer-flow
make config
make docker-start
# Open http://localhost:2026
```
Resources: