
OpenClaw Harness

Multi-Agent Orchestration Framework for long-running autonomous AI agents

Python 3.10+ License: MIT CI Coverage

Overview

OpenClaw Harness is a production-grade framework for orchestrating multi-agent AI workflows. Built on research about long-running autonomous agents, it provides:

  • Auto-planning — Decompose goals into tasks using rules, LLM, or hybrid methods
  • Multi-Agent orchestration — Sequential, parallel, or DAG-based execution
  • Tool registry — 11 built-in tools (shell, file, Python, grep, search, etc.)
  • HITL approval gates — Human-in-the-loop task approval with risk levels
  • State machines — Validated state transitions for sessions, tasks, and approvals
  • RAG / Vector memory — Zero-dependency TF-IDF semantic search and knowledge injection
  • Multi-provider LLM — DashScope, OpenAI, Anthropic, any OpenAI-compatible endpoint
  • Rate limiting — Token bucket RPM/TPM limits with 429 auto-retry
  • Cost monitoring — Multi-model pricing, budget alerts, usage tracking
  • Docker sandbox — Isolated code execution with resource limits
  • Web dashboard — Flask-based UI with real-time state, auth, and trace export
  • Structured config — Unified YAML/JSON/ENV configuration with priority resolution

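The "zero-dependency TF-IDF semantic search" in the list above can be approximated in plain Python. The class below is an illustrative sketch of the technique only, not the framework's actual `vector_memory` API; all names here are hypothetical.

```python
import math
from collections import Counter

class TfIdfMemory:
    """Minimal TF-IDF store with cosine-similarity search (illustrative sketch)."""

    def __init__(self):
        self.docs = []             # raw texts
        self.term_counts = []      # term Counter per document
        self.doc_freq = Counter()  # how many documents contain each term

    def add(self, text):
        terms = Counter(text.lower().split())
        self.docs.append(text)
        self.term_counts.append(terms)
        for term in terms:
            self.doc_freq[term] += 1

    def _vector(self, terms):
        # TF * smoothed IDF for each term in the text
        n = len(self.docs)
        total = sum(terms.values())
        return {
            t: (count / total) * math.log((1 + n) / (1 + self.doc_freq[t]))
            for t, count in terms.items()
        }

    def search(self, query, top_k=3):
        qvec = self._vector(Counter(query.lower().split()))
        scored = []
        for text, terms in zip(self.docs, self.term_counts):
            dvec = self._vector(terms)
            dot = sum(qvec.get(t, 0.0) * w for t, w in dvec.items())
            norm = (math.sqrt(sum(v * v for v in qvec.values()))
                    * math.sqrt(sum(v * v for v in dvec.values())))
            scored.append((dot / norm if norm else 0.0, text))
        scored.sort(reverse=True)
        return [text for _, text in scored[:top_k]]
```

Because it is pure stdlib, a store like this works anywhere the harness runs, at the cost of missing the semantic generalization a learned embedding model would give.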
Installation

From CNB PyPI Registry

```shell
pip install -i https://pypi.cnb.cool/ifree/harness/-/packages/simple openclaw-harness
```

From Source

```shell
git clone https://cnb.cool/ifree/harness.git
cd harness
pip install -e .
```

Quick Start

```shell
# Initialize a project
harness init ./my-project "Build a web scraper with data analysis"

# Auto-generate a task plan
harness plan ./my-project --goal "Build a web scraper" --method hybrid

# Check project status
harness status ./my-project

# Launch the dashboard
harness dashboard ./my-project --port 5001
```

Python API

```python
import asyncio

from harness.scripts.multi_orchestrator import MultiHarnessOrchestrator, AgentRole
from harness.scripts.harness_config import HarnessConfig

# Define your team
roles = [
    AgentRole(name="Researcher", goal="Requirements analysis", expertise=["Market research", "Competitive analysis"]),
    AgentRole(name="Architect", goal="System design", expertise=["Architecture design", "Technology selection"]),
    AgentRole(name="Developer", goal="Code implementation", expertise=["Python", "React"]),
    AgentRole(name="Tester", goal="Quality assurance", expertise=["Testing", "Code review"]),
]

# Create orchestrator
config = HarnessConfig.default()
orchestrator = MultiHarnessOrchestrator.from_config(config, roles=roles)

# Run (orchestrator.run is a coroutine, so drive it with asyncio)
result = asyncio.run(orchestrator.run(goal="Build a REST API"))
```

Architecture

```
harness/
├── harness/scripts/
│   ├── orchestrator.py        # Single-agent orchestrator
│   ├── multi_orchestrator.py  # Multi-agent orchestrator
│   ├── auto_planner.py        # Auto task decomposition
│   ├── tool_registry.py       # Tool registry (11 built-in tools)
│   ├── group_chat.py          # Multi-role group chat
│   ├── rate_limiter.py        # Rate limiting (token bucket + 429 retry)
│   ├── llm_providers.py       # Multi-provider LLM abstraction
│   ├── vector_memory.py       # Vector memory / RAG (TF-IDF)
│   ├── workflow.py            # State machine + DAG workflow graphs
│   ├── dependency_manager.py  # Dependency management / critical path
│   ├── cost_monitor.py        # Cost monitoring + budget alerts
│   ├── docker_sandbox.py      # Docker sandbox execution
│   ├── dashboard.py           # Flask web dashboard (with auth)
│   ├── harness_config.py      # Unified configuration management
│   ├── tracer.py              # Distributed tracing
│   └── file_utils.py          # Atomic JSON writes
└── harness/cli/
    └── __init__.py            # CLI entry point
```

Features

Multi-Provider LLM Support

```python
from harness.scripts.llm_providers import ProviderRegistry

# Auto-detect from environment
provider = ProviderRegistry.auto_detect()

# Explicit provider
from harness.scripts.llm_providers import DashScopeProvider, OpenAIProvider
provider = DashScopeProvider(api_key="sk-...", model="qwen3.6-plus")
```

Rate Limiting

```python
from harness.scripts.rate_limiter import RateLimiter

limiter = RateLimiter(rpm=60, tpm=100_000, max_retries=3)

# Use with LLM client
client = LLMClient(provider=provider, rate_limiter=limiter)
```
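The token-bucket principle behind `RateLimiter` is simple: a bucket of `capacity` tokens refills at a steady `rate`, and a request proceeds only if enough tokens remain. The standalone sketch below illustrates that mechanism; it is not the framework's implementation, and the class and parameter names are hypothetical.

```python
import time

class TokenBucket:
    """Token bucket: `capacity` tokens, refilled at `rate` tokens/second (sketch)."""

    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def try_acquire(self, n=1):
        # Refill proportionally to elapsed time, capped at capacity
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= n:
            self.tokens -= n
            return True
        return False

# 60 requests per minute -> refill 1 token/second, allow bursts of 5
bucket = TokenBucket(rate=1.0, capacity=5)
```

An RPM limit maps to `rate = rpm / 60` with token cost 1 per request; a TPM limit uses the same bucket with the request's token count as `n`. On a 429 response, a caller would typically back off and retry rather than consume more tokens.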

DAG-Based Workflows

```python
from harness.scripts.workflow import WorkflowGraph, create_dag_workflow

graph = create_dag_workflow(
    nodes=["research", "design", "implement", "test"],
    edges=[
        ("research", "design"),
        ("design", "implement"),
        ("implement", "test"),
    ],
)
```
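A DAG workflow like the one above is ultimately executed in a dependency-respecting order, which any scheduler can derive with a topological sort. The function below is an illustrative sketch using Kahn's algorithm, independent of the framework's own `workflow` internals.

```python
from collections import deque

def topo_order(nodes, edges):
    """Return one valid execution order for a DAG via Kahn's algorithm."""
    indeg = {n: 0 for n in nodes}
    adj = {n: [] for n in nodes}
    for src, dst in edges:
        adj[src].append(dst)
        indeg[dst] += 1

    # Start with all nodes that have no unmet dependencies
    queue = deque(n for n in nodes if indeg[n] == 0)
    order = []
    while queue:
        n = queue.popleft()
        order.append(n)
        for m in adj[n]:
            indeg[m] -= 1
            if indeg[m] == 0:
                queue.append(m)

    if len(order) != len(nodes):
        raise ValueError("cycle detected: not a DAG")
    return order
```

In a real orchestrator, every node popped in the same "ready" wave has all dependencies satisfied and can run concurrently, which is what makes DAG execution more parallel than a purely sequential pipeline.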

Configuration

Create harness-config.yaml:

```yaml
project:
  name: my-api
  goal: Build a REST API

llm:
  provider: dashscope
  model: qwen3.6-plus

orchestrator:
  execution_mode: hybrid
  max_concurrent: 4

rate_limit:
  enabled: true
  rpm: 60
  tpm: 100000

cost:
  daily_budget_usd: 10.0
  alert_threshold_pct: 80
```

Load it:

```python
from harness.scripts.harness_config import HarnessConfig

config = HarnessConfig.load("harness-config.yaml")
```
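"Priority resolution" across ENV, file, and defaults typically means environment variables override the config file, which overrides built-in defaults. The helper below sketches that idea; the precedence order, the `HARNESS_` prefix, and the function name are all assumptions for illustration, not the framework's documented behavior.

```python
import os

def resolve(key, file_cfg, defaults, env_prefix="HARNESS_"):
    """Resolve one config value with assumed ENV > file > defaults precedence."""
    # rate_limit.rpm -> HARNESS_RATE_LIMIT_RPM
    env_key = env_prefix + key.upper().replace(".", "_")
    if env_key in os.environ:
        return os.environ[env_key]
    if key in file_cfg:
        return file_cfg[key]
    return defaults.get(key)

defaults = {"rate_limit.rpm": 30}
file_cfg = {"rate_limit.rpm": 60}   # as loaded from harness-config.yaml
os.environ["HARNESS_RATE_LIMIT_RPM"] = "120"  # wins over both
```

Note that environment values arrive as strings, so a real loader would also coerce them to the expected type.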

Testing

```shell
pip install -e ".[dev]"
pytest tests/ -q
```

1262+ tests, 92% coverage.

License

MIT
