
DeerFlow Backend

DeerFlow is a LangGraph-based AI super agent with sandbox execution, persistent memory, and extensible tool integration. The backend enables AI agents to execute code, browse the web, manage files, delegate tasks to subagents, and retain context across conversations - all in isolated, per-thread environments.


Architecture

┌──────────────────────────────────────┐
│           Nginx (Port 2026)          │
│         Unified reverse proxy        │
└───────┬──────────────────┬───────────┘
        │                  │
        │ /api/langgraph/* │ /api/* (other)
        ▼                  ▼
┌────────────────────┐  ┌────────────────────────┐
│  LangGraph Server  │  │   Gateway API (8001)   │
│    (Port 2024)     │  │      FastAPI REST      │
│                    │  │                        │
│ ┌────────────────┐ │  │  Models, MCP, Skills,  │
│ │   Lead Agent   │ │  │   Memory, Uploads,     │
│ │ ┌──────────┐   │ │  │   Artifacts            │
│ │ │Middleware│   │ │  └────────────────────────┘
│ │ │  Chain   │   │ │
│ │ └──────────┘   │ │
│ │ ┌──────────┐   │ │
│ │ │  Tools   │   │ │
│ │ └──────────┘   │ │
│ │ ┌──────────┐   │ │
│ │ │Subagents │   │ │
│ │ └──────────┘   │ │
│ └────────────────┘ │
└────────────────────┘

Request Routing (via Nginx):

  • /api/langgraph/* → LangGraph Server - agent interactions, threads, streaming
  • /api/* (other) → Gateway API - models, MCP, skills, memory, artifacts, uploads, thread-local cleanup
  • / (non-API) → Frontend - Next.js web interface

Core Components

Lead Agent

The single LangGraph agent (lead_agent) is the runtime entry point, created via make_lead_agent(config). It combines:

  • Dynamic model selection with thinking and vision support
  • Middleware chain for cross-cutting concerns (9 middlewares)
  • Tool system with sandbox, MCP, community, and built-in tools
  • Subagent delegation for parallel task execution
  • System prompt with skills injection, memory context, and working directory guidance

Middleware Chain

Middlewares execute in strict order, each handling a specific concern:

  1. ThreadDataMiddleware: Creates per-thread isolated directories (workspace, uploads, outputs)
  2. UploadsMiddleware: Injects newly uploaded files into conversation context
  3. SandboxMiddleware: Acquires sandbox environment for code execution
  4. SummarizationMiddleware: Reduces context when approaching token limits (optional)
  5. TodoListMiddleware: Tracks multi-step tasks in plan mode (optional)
  6. TitleMiddleware: Auto-generates conversation titles after first exchange
  7. MemoryMiddleware: Queues conversations for async memory extraction
  8. ViewImageMiddleware: Injects image data for vision-capable models (conditional)
  9. ClarificationMiddleware: Intercepts clarification requests and interrupts execution (must be last)
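The strict ordering above can be sketched as plain function composition: each middleware wraps the next handler, and composing in reverse makes the first entry run first. The names here (`make_tagging_middleware`, `build_chain`) are illustrative stand-ins, not DeerFlow's actual middleware API:

```python
from typing import Callable

Handler = Callable[[dict], dict]
Middleware = Callable[[Handler], Handler]

def make_tagging_middleware(name: str) -> Middleware:
    """Records the order in which each middleware runs."""
    def middleware(next_handler: Handler) -> Handler:
        def handler(state: dict) -> dict:
            state.setdefault("trace", []).append(name)
            return next_handler(state)
        return handler
    return middleware

def build_chain(middlewares: list[Middleware], final: Handler) -> Handler:
    # Compose in reverse so middlewares[0] is the outermost wrapper
    # and therefore runs first.
    handler = final
    for mw in reversed(middlewares):
        handler = mw(handler)
    return handler

chain = build_chain(
    [make_tagging_middleware(n) for n in ("thread_data", "uploads", "sandbox")],
    final=lambda state: state,
)
print(chain({})["trace"])  # ['thread_data', 'uploads', 'sandbox']
```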

Sandbox System

Per-thread isolated execution with virtual path translation:

  • Abstract interface: execute_command, read_file, write_file, list_dir
  • Providers: LocalSandboxProvider (filesystem) and AioSandboxProvider (Docker, in community/)
  • Virtual paths: /mnt/user-data/{workspace,uploads,outputs} → thread-specific physical directories
  • Skills path: /mnt/skills → deer-flow/skills/ directory
  • Skills loading: Recursively discovers nested SKILL.md files under skills/{public,custom} and preserves nested container paths
  • File-write safety: str_replace serializes read-modify-write per (sandbox.id, path) so isolated sandboxes keep concurrency even when virtual paths match
  • Tools: bash, ls, read_file, write_file, str_replace (bash is disabled by default when using LocalSandboxProvider; use AioSandboxProvider for isolated shell access)
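The per-(sandbox.id, path) serialization above can be sketched with a lock registry keyed on both values, so two sandboxes writing the same virtual path never block each other. The `str_replace` function and in-memory `files` dict here are simplified stand-ins for the real sandbox tools:

```python
import threading
from collections import defaultdict

# One lock per (sandbox_id, path): writes to the same file in the same
# sandbox are serialized, while identical virtual paths in *different*
# sandboxes stay fully concurrent.
_write_locks: dict[tuple[str, str], threading.Lock] = defaultdict(threading.Lock)
_registry_lock = threading.Lock()

def str_replace(sandbox_id: str, path: str, old: str, new: str, files: dict) -> None:
    with _registry_lock:  # guard the lock registry itself
        lock = _write_locks[(sandbox_id, path)]
    with lock:  # read-modify-write is atomic per (sandbox, path)
        content = files.get(path, "")
        files[path] = content.replace(old, new, 1)

# Same virtual path, different sandboxes: independent locks, no conflict.
files_a, files_b = {"/mnt/out.txt": "x=0"}, {"/mnt/out.txt": "x=0"}
str_replace("sandbox-a", "/mnt/out.txt", "0", "1", files_a)
str_replace("sandbox-b", "/mnt/out.txt", "0", "2", files_b)
print(files_a["/mnt/out.txt"], files_b["/mnt/out.txt"])  # x=1 x=2
```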

Subagent System

Async task delegation with concurrent execution:

  • Built-in agents: general-purpose (full toolset) and bash (command specialist, exposed only when shell access is available)
  • Concurrency: Max 3 subagents per turn, 15-minute timeout
  • Execution: Background thread pools with status tracking and SSE events
  • Flow: Agent calls task() tool → executor runs subagent in background → polls for completion → returns result
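The concurrency cap and timeout above can be sketched with a bounded thread pool; `run_subagent` and `delegate` are hypothetical stand-ins for the real executor, which also tracks status and emits SSE events:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

MAX_SUBAGENTS = 3          # per-turn concurrency cap described above
TIMEOUT_SECONDS = 15 * 60  # 15-minute timeout

def run_subagent(task: str) -> str:
    # Stand-in for an actual subagent run.
    return f"done: {task}"

def delegate(tasks: list[str]) -> list[str]:
    results = []
    with ThreadPoolExecutor(max_workers=MAX_SUBAGENTS) as pool:
        futures = [pool.submit(run_subagent, t) for t in tasks]
        # as_completed polls for completion, raising if the deadline passes.
        for future in as_completed(futures, timeout=TIMEOUT_SECONDS):
            results.append(future.result())
    return sorted(results)

print(delegate(["research", "summarize"]))
```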

Memory System

LLM-powered persistent context retention across conversations:

  • Automatic extraction: Analyzes conversations for user context, facts, and preferences
  • Structured storage: User context (work, personal, top-of-mind), history, and confidence-scored facts
  • Debounced updates: Batches updates to minimize LLM calls (configurable wait time)
  • System prompt injection: Top facts + context injected into agent prompts
  • Storage: JSON file with mtime-based cache invalidation
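The mtime-based cache invalidation could look roughly like this sketch: re-read the JSON file only when its modification time changes. `MemoryStore` is an illustrative name, not DeerFlow's actual class:

```python
import json
import os
import tempfile

class MemoryStore:
    """Reload the JSON memory file only when its mtime changes."""

    def __init__(self, path: str):
        self.path = path
        self._mtime = None
        self._cache = None

    def load(self) -> dict:
        mtime = os.path.getmtime(self.path)
        if mtime != self._mtime:  # file changed since last read
            with open(self.path) as f:
                self._cache = json.load(f)
            self._mtime = mtime
        return self._cache

with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump({"facts": []}, f)

store = MemoryStore(f.name)
print(store.load())  # {'facts': []}
```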

Tool Ecosystem

  • Sandbox: bash, ls, read_file, write_file, str_replace
  • Built-in: present_files, ask_clarification, view_image, task (subagent)
  • Community: Tavily (web search), Jina AI (web fetch), Firecrawl (scraping), DuckDuckGo (image search)
  • MCP: Any Model Context Protocol server (stdio, SSE, HTTP transports)
  • Skills: Domain-specific workflows injected via system prompt

Gateway API

FastAPI application providing REST endpoints for frontend integration:

  • GET /api/models - List available LLM models
  • GET/PUT /api/mcp/config - Manage MCP server configurations
  • GET/PUT /api/skills - List and manage skills
  • POST /api/skills/install - Install a skill from a .skill archive
  • GET /api/memory - Retrieve memory data
  • POST /api/memory/reload - Force memory reload
  • GET /api/memory/config - Memory configuration
  • GET /api/memory/status - Combined config + data
  • POST /api/threads/{id}/uploads - Upload files (auto-converts PDF/PPT/Excel/Word to Markdown, rejects directory paths)
  • GET /api/threads/{id}/uploads/list - List uploaded files
  • DELETE /api/threads/{id} - Delete DeerFlow-managed local thread data after LangGraph thread deletion; unexpected failures are logged server-side and return a generic 500 detail
  • GET /api/threads/{id}/artifacts/{path} - Serve generated artifacts

IM Channels

The IM bridge supports Feishu, Slack, and Telegram. Slack and Telegram still use the final runs.wait() response path, while Feishu now streams through runs.stream(["messages-tuple", "values"]) and updates a single in-thread card in place.

For Feishu card updates, DeerFlow stores the running card's message_id per inbound message and patches that same card until the run finishes, preserving the existing OK / DONE reaction flow.
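A minimal sketch of that per-message card tracking, with `send_card`/`patch_card` as stand-ins for the real Feishu API calls: the first streamed chunk creates the card, and every later chunk patches the same card in place.

```python
class CardTracker:
    """Maps each inbound message_id to the card that represents its run."""

    def __init__(self):
        self._cards: dict[str, str] = {}  # inbound message_id -> card message_id

    def on_chunk(self, inbound_id: str, text: str, send_card, patch_card) -> str:
        card_id = self._cards.get(inbound_id)
        if card_id is None:
            card_id = send_card(text)       # first chunk: create the card
            self._cards[inbound_id] = card_id
        else:
            patch_card(card_id, text)       # later chunks: patch in place
        return card_id

patched = []
tracker = CardTracker()
first = tracker.on_chunk("msg-1", "thinking...",
                         send_card=lambda t: "card-1",
                         patch_card=lambda c, t: patched.append((c, t)))
second = tracker.on_chunk("msg-1", "answer",
                          send_card=lambda t: "card-2",
                          patch_card=lambda c, t: patched.append((c, t)))
print(first, second, patched)  # card-1 card-1 [('card-1', 'answer')]
```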


Quick Start

Prerequisites

  • Python 3.12+
  • uv package manager
  • API keys for your chosen LLM provider

Installation

cd deer-flow

# Copy configuration files
cp config.example.yaml config.yaml

# Install backend dependencies
cd backend
make install

Configuration

Edit config.yaml in the project root:

models:
  - name: gpt-4o
    display_name: GPT-4o
    use: langchain_openai:ChatOpenAI
    model: gpt-4o
    api_key: $OPENAI_API_KEY
    supports_thinking: false
    supports_vision: true
  - name: gpt-5-responses
    display_name: GPT-5 (Responses API)
    use: langchain_openai:ChatOpenAI
    model: gpt-5
    api_key: $OPENAI_API_KEY
    use_responses_api: true
    output_version: responses/v1
    supports_vision: true

Set your API keys:

export OPENAI_API_KEY="your-api-key-here"

Running

Full Application (from project root):

make dev # Starts LangGraph + Gateway + Frontend + Nginx

Access at: http://localhost:2026

Backend Only (from backend directory):

# Terminal 1: LangGraph server
make dev

# Terminal 2: Gateway API
make gateway

Direct access: LangGraph at http://localhost:2024, Gateway at http://localhost:8001


Project Structure

backend/
├── src/
│   ├── agents/                # Agent system
│   │   ├── lead_agent/        # Main agent (factory, prompts)
│   │   ├── middlewares/       # 9 middleware components
│   │   ├── memory/            # Memory extraction & storage
│   │   └── thread_state.py    # ThreadState schema
│   ├── gateway/               # FastAPI Gateway API
│   │   ├── app.py             # Application setup
│   │   └── routers/           # 6 route modules
│   ├── sandbox/               # Sandbox execution
│   │   ├── local/             # Local filesystem provider
│   │   ├── sandbox.py         # Abstract interface
│   │   ├── tools.py           # bash, ls, read/write/str_replace
│   │   └── middleware.py      # Sandbox lifecycle
│   ├── subagents/             # Subagent delegation
│   │   ├── builtins/          # general-purpose, bash agents
│   │   ├── executor.py        # Background execution engine
│   │   └── registry.py        # Agent registry
│   ├── tools/builtins/        # Built-in tools
│   ├── mcp/                   # MCP protocol integration
│   ├── models/                # Model factory
│   ├── skills/                # Skill discovery & loading
│   ├── config/                # Configuration system
│   ├── community/             # Community tools & providers
│   ├── reflection/            # Dynamic module loading
│   └── utils/                 # Utilities
├── docs/                      # Documentation
├── tests/                     # Test suite
├── langgraph.json             # LangGraph server configuration
├── pyproject.toml             # Python dependencies
├── Makefile                   # Development commands
└── Dockerfile                 # Container build

Configuration

Main Configuration (config.yaml)

Place in project root. Config values starting with $ resolve as environment variables.
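The `$`-prefix resolution might look roughly like this sketch; the `resolve` helper is hypothetical, not DeerFlow's actual implementation:

```python
import os

def resolve(value):
    """Resolve '$NAME' config values from the environment, recursively."""
    if isinstance(value, str) and value.startswith("$"):
        return os.environ.get(value[1:], "")
    if isinstance(value, dict):
        return {k: resolve(v) for k, v in value.items()}
    if isinstance(value, list):
        return [resolve(v) for v in value]
    return value

os.environ["OPENAI_API_KEY"] = "sk-demo"
config = {"models": [{"name": "gpt-4o", "api_key": "$OPENAI_API_KEY"}]}
print(resolve(config)["models"][0]["api_key"])  # sk-demo
```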

Key sections:

  • models - LLM configurations with class paths, API keys, thinking/vision flags
  • tools - Tool definitions with module paths and groups
  • tool_groups - Logical tool groupings
  • sandbox - Execution environment provider
  • skills - Skills directory paths
  • title - Auto-title generation settings
  • summarization - Context summarization settings
  • subagents - Subagent system (enabled/disabled)
  • memory - Memory system settings (enabled, storage, debounce, facts limits)

Provider note:

  • models[*].use references provider classes by module path (for example langchain_openai:ChatOpenAI).
  • If a provider module is missing, DeerFlow now returns an actionable error with install guidance (for example uv add langchain-google-genai).
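That actionable error could be produced along these lines; `load_provider` and the package-name heuristic are an illustrative sketch, not the actual factory code:

```python
import importlib

def load_provider(use: str):
    """Load a 'module:Class' provider path; fail with install guidance."""
    module_path, _, class_name = use.partition(":")
    try:
        module = importlib.import_module(module_path)
    except ModuleNotFoundError:
        # Heuristic: PyPI package names usually use dashes, not underscores.
        hint = module_path.replace("_", "-")
        raise RuntimeError(
            f"Provider module '{module_path}' is not installed. "
            f"Try: uv add {hint}"
        ) from None
    return getattr(module, class_name)

try:
    load_provider("langchain_nonexistent:ChatNope")
except RuntimeError as e:
    err = str(e)
print(err)
```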

Extensions Configuration (extensions_config.json)

MCP servers and skill states in a single file:

{
  "mcpServers": {
    "github": {
      "enabled": true,
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {"GITHUB_TOKEN": "$GITHUB_TOKEN"}
    },
    "secure-http": {
      "enabled": true,
      "type": "http",
      "url": "https://api.example.com/mcp",
      "oauth": {
        "enabled": true,
        "token_url": "https://auth.example.com/oauth/token",
        "grant_type": "client_credentials",
        "client_id": "$MCP_OAUTH_CLIENT_ID",
        "client_secret": "$MCP_OAUTH_CLIENT_SECRET"
      }
    }
  },
  "skills": {
    "pdf-processing": {"enabled": true}
  }
}

Environment Variables

  • DEER_FLOW_CONFIG_PATH - Override config.yaml location
  • DEER_FLOW_EXTENSIONS_CONFIG_PATH - Override extensions_config.json location
  • Model API keys: OPENAI_API_KEY, ANTHROPIC_API_KEY, DEEPSEEK_API_KEY, etc.
  • Tool API keys: TAVILY_API_KEY, GITHUB_TOKEN, etc.

LangSmith Tracing

DeerFlow has built-in LangSmith integration for observability. When enabled, all LLM calls, agent runs, tool executions, and middleware processing are traced and visible in the LangSmith dashboard.

Setup:

  1. Sign up at smith.langchain.com and create a project.
  2. Add the following to your .env file in the project root:
LANGSMITH_TRACING=true
LANGSMITH_ENDPOINT=https://api.smith.langchain.com
LANGSMITH_API_KEY=lsv2_pt_xxxxxxxxxxxxxxxx
LANGSMITH_PROJECT=xxx

Legacy variables: The LANGCHAIN_TRACING_V2, LANGCHAIN_API_KEY, LANGCHAIN_PROJECT, and LANGCHAIN_ENDPOINT variables are also supported for backward compatibility. LANGSMITH_* variables take precedence when both are set.
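The precedence rule can be sketched as a simple fallback lookup; `tracing_setting` and the alias table are hypothetical helpers, not DeerFlow's actual code:

```python
import os

# LANGSMITH_* wins; the legacy LANGCHAIN_* variable is the fallback.
_ALIASES = {
    "LANGSMITH_API_KEY": "LANGCHAIN_API_KEY",
    "LANGSMITH_PROJECT": "LANGCHAIN_PROJECT",
    "LANGSMITH_ENDPOINT": "LANGCHAIN_ENDPOINT",
}

def tracing_setting(name: str):
    return os.environ.get(name) or os.environ.get(_ALIASES.get(name, ""))

os.environ["LANGCHAIN_PROJECT"] = "legacy-project"
os.environ["LANGSMITH_PROJECT"] = "new-project"
print(tracing_setting("LANGSMITH_PROJECT"))  # new-project
```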

Langfuse Tracing

DeerFlow also supports Langfuse observability for LangChain-compatible runs.

Add the following to your .env file:

LANGFUSE_TRACING=true
LANGFUSE_PUBLIC_KEY=pk-lf-xxxxxxxxxxxxxxxx
LANGFUSE_SECRET_KEY=sk-lf-xxxxxxxxxxxxxxxx
LANGFUSE_BASE_URL=https://cloud.langfuse.com

If you are using a self-hosted Langfuse deployment, set LANGFUSE_BASE_URL to your Langfuse host.

Dual Provider Behavior

If both LangSmith and Langfuse are enabled, DeerFlow initializes and attaches both callbacks so the same run data is reported to both systems.

If a provider is explicitly enabled but required credentials are missing, or the provider callback cannot be initialized, DeerFlow raises an error when tracing is initialized during model creation instead of silently disabling tracing.

Docker: In docker-compose.yaml, tracing is disabled by default (LANGSMITH_TRACING=false). Set LANGSMITH_TRACING=true and/or LANGFUSE_TRACING=true in your .env, together with the required credentials, to enable tracing in containerized deployments.


Development

Commands

make install    # Install dependencies
make dev        # Run LangGraph server (port 2024)
make gateway    # Run Gateway API (port 8001)
make lint       # Run linter (ruff)
make format     # Format code (ruff)

Code Style

  • Linter/Formatter: ruff
  • Line length: 240 characters
  • Python: 3.12+ with type hints
  • Quotes: Double quotes
  • Indentation: 4 spaces

Testing

uv run pytest

Technology Stack

  • LangGraph (1.0.6+) - Agent framework and multi-agent orchestration
  • LangChain (1.2.3+) - LLM abstractions and tool system
  • FastAPI (0.115.0+) - Gateway REST API
  • langchain-mcp-adapters - Model Context Protocol support
  • agent-sandbox - Sandboxed code execution
  • markitdown - Multi-format document conversion
  • tavily-python / firecrawl-py - Web search and scraping

Documentation


License

See the LICENSE file in the project root.

Contributing

See CONTRIBUTING.md for contribution guidelines.