# OpenClaw Harness

Multi-agent orchestration framework for long-running autonomous AI agents.

OpenClaw Harness is a production-grade framework for orchestrating multi-agent AI workflows. Built on research into long-running autonomous agents, it provides automatic task planning, multi-role group chat, DAG workflow graphs, rate limiting, cost monitoring, sandboxed execution, and a web dashboard.
## Installation

```bash
pip install -i https://pypi.cnb.cool/ifree/harness/-/packages/simple openclaw-harness
```

Or from source:

```bash
git clone https://cnb.cool/ifree/harness.git
cd harness
pip install -e .
```
## Quick Start (CLI)

```bash
# Initialize a project
harness init ./my-project "Build a web scraper with data analysis"

# Auto-generate a task plan
harness plan ./my-project --goal "Build a web scraper" --method hybrid

# Check project status
harness status ./my-project

# Launch the dashboard
harness dashboard ./my-project --port 5001
```
## Multi-Agent Orchestration (Python)

```python
import asyncio

from harness.scripts.multi_orchestrator import MultiHarnessOrchestrator, AgentRole
from harness.scripts.harness_config import HarnessConfig

# Define your team
roles = [
    AgentRole(name="Researcher", goal="Requirements analysis", expertise=["Market research", "Competitor analysis"]),
    AgentRole(name="Architect", goal="System design", expertise=["Architecture design", "Technology selection"]),
    AgentRole(name="Developer", goal="Code implementation", expertise=["Python", "React"]),
    AgentRole(name="Tester", goal="Quality assurance", expertise=["Testing", "Code review"]),
]

# Create orchestrator
config = HarnessConfig.default()
orchestrator = MultiHarnessOrchestrator.from_config(config, roles=roles)

# Run (orchestrator.run is a coroutine)
result = asyncio.run(orchestrator.run(goal="Build a REST API"))
```
## Project Structure

```
harness/
├── harness/scripts/
│   ├── orchestrator.py        # Single-agent orchestrator
│   ├── multi_orchestrator.py  # Multi-agent orchestrator
│   ├── auto_planner.py        # Auto task decomposition
│   ├── tool_registry.py       # Tool registry (11 built-in tools)
│   ├── group_chat.py          # Multi-role group chat
│   ├── rate_limiter.py        # Rate limiting (token bucket + 429 retry)
│   ├── llm_providers.py       # Multi-provider LLM abstraction
│   ├── vector_memory.py       # Vector memory / RAG (TF-IDF)
│   ├── workflow.py            # State machine + DAG workflow graphs
│   ├── dependency_manager.py  # Dependency management / critical path
│   ├── cost_monitor.py        # Cost monitoring + budget alerts
│   ├── docker_sandbox.py      # Docker sandbox execution
│   ├── dashboard.py           # Flask web dashboard (with auth)
│   ├── harness_config.py      # Unified configuration management
│   ├── tracer.py              # Distributed tracing
│   └── file_utils.py          # Atomic JSON writes
└── harness/cli/
    └── __init__.py            # CLI entry point
```
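The `dependency_manager` component handles dependency management and critical-path analysis over task graphs. As an illustration of the underlying idea (not the library's actual API), the critical path of a DAG whose nodes carry durations can be found with a longest-path pass over a topological order:

```python
from collections import defaultdict

def critical_path(durations, edges):
    """Longest path through a DAG where each node has a duration.

    durations: {task: hours}; edges: [(before, after), ...].
    Returns (total_duration, path)."""
    succ = defaultdict(list)
    indeg = {t: 0 for t in durations}
    for a, b in edges:
        succ[a].append(b)
        indeg[b] += 1
    # Kahn's algorithm for a topological order
    order, queue = [], [t for t, d in indeg.items() if d == 0]
    while queue:
        t = queue.pop()
        order.append(t)
        for b in succ[t]:
            indeg[b] -= 1
            if indeg[b] == 0:
                queue.append(b)
    # Latest finish time per task, tracking the predecessor that set it
    finish = {t: durations[t] for t in durations}
    prev = {}
    for t in order:
        for b in succ[t]:
            if finish[t] + durations[b] > finish[b]:
                finish[b] = finish[t] + durations[b]
                prev[b] = t
    end = max(finish, key=finish.get)
    path = [end]
    while path[-1] in prev:
        path.append(prev[path[-1]])
    return finish[end], path[::-1]

total, path = critical_path(
    {"research": 2, "design": 3, "implement": 5, "test": 2},
    [("research", "design"), ("design", "implement"), ("implement", "test")],
)
# total == 12, path == ["research", "design", "implement", "test"]
```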
## LLM Providers

```python
from harness.scripts.llm_providers import ProviderRegistry

# Auto-detect from environment
provider = ProviderRegistry.auto_detect()

# Explicit provider
from harness.scripts.llm_providers import DashScopeProvider, OpenAIProvider

provider = DashScopeProvider(api_key="sk-...", model="qwen3.6-plus")
```
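Auto-detection in a multi-provider abstraction typically means checking well-known API-key environment variables in priority order. A minimal sketch of that pattern, with hypothetical names (this is not harness's internal implementation):

```python
import os

class _Registry:
    """Illustrative provider registry: maps an environment variable
    to a provider factory and returns the first match."""
    _providers = []  # list of (env_var, factory)

    @classmethod
    def register(cls, env_var, factory):
        cls._providers.append((env_var, factory))

    @classmethod
    def auto_detect(cls, env=None):
        env = os.environ if env is None else env
        for var, factory in cls._providers:
            key = env.get(var)
            if key:
                return factory(key)
        raise RuntimeError("no provider API key found in environment")

# Registration order defines detection priority
_Registry.register("DASHSCOPE_API_KEY", lambda k: ("dashscope", k))
_Registry.register("OPENAI_API_KEY", lambda k: ("openai", k))

provider = _Registry.auto_detect(env={"OPENAI_API_KEY": "sk-test"})
# provider == ("openai", "sk-test")
```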
## Rate Limiting

```python
from harness.scripts.rate_limiter import RateLimiter

limiter = RateLimiter(rpm=60, tpm=100_000, max_retries=3)

# Use with LLM client
client = LLMClient(provider=provider, rate_limiter=limiter)
```
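The limiter is described as a token bucket with 429 retry. A self-contained sketch of the token-bucket half, with hypothetical names (harness's `RateLimiter` also tracks TPM and retries on HTTP 429, which this omits):

```python
import time

class TokenBucket:
    """Minimal token bucket: `rpm` request tokens refill evenly over a minute.
    Illustrative only, not the harness implementation."""
    def __init__(self, rpm, now=time.monotonic):
        self.capacity = rpm
        self.tokens = float(rpm)
        self.rate = rpm / 60.0  # tokens added per second
        self.now = now
        self.last = now()

    def try_acquire(self):
        t = self.now()
        # Refill proportionally to elapsed time, capped at capacity
        self.tokens = min(self.capacity, self.tokens + (t - self.last) * self.rate)
        self.last = t
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

clock = iter([0.0, 0.0, 0.0, 1.0]).__next__  # fake clock for demonstration
bucket = TokenBucket(rpm=2, now=clock)
results = [bucket.try_acquire(), bucket.try_acquire(), bucket.try_acquire()]
# results == [True, True, False]: the third call finds the bucket empty
```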
## Workflows

```python
from harness.scripts.workflow import WorkflowGraph, create_dag_workflow

graph = create_dag_workflow(
    nodes=["research", "design", "implement", "test"],
    edges=[
        ("research", "design"),
        ("design", "implement"),
        ("implement", "test"),
    ],
)
```
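A DAG workflow runs each node only after its dependencies finish, and independent nodes can run concurrently (cf. `max_concurrent` in the config below). As an illustration of the scheduling idea, not harness's actual scheduler, nodes can be grouped into "waves" that are safe to execute in parallel:

```python
from collections import defaultdict

def execution_waves(nodes, edges):
    """Group DAG nodes into waves: every node in a wave has all of its
    dependencies satisfied by earlier waves. Illustrative sketch only."""
    deps = defaultdict(set)
    for a, b in edges:
        deps[b].add(a)
    done, waves = set(), []
    remaining = set(nodes)
    while remaining:
        wave = sorted(n for n in remaining if deps[n] <= done)
        if not wave:
            raise ValueError("cycle detected in workflow graph")
        waves.append(wave)
        done |= set(wave)
        remaining -= set(wave)
    return waves

# A hypothetical "docs" node is added to show two nodes sharing a wave
waves = execution_waves(
    ["research", "design", "implement", "test", "docs"],
    [("research", "design"), ("design", "implement"),
     ("implement", "test"), ("design", "docs")],
)
# waves == [["research"], ["design"], ["docs", "implement"], ["test"]]
```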
## Configuration

Create `harness-config.yaml`:

```yaml
project:
  name: my-api
  goal: Build a REST API
llm:
  provider: dashscope
  model: qwen3.6-plus
orchestrator:
  execution_mode: hybrid
  max_concurrent: 4
rate_limit:
  enabled: true
  rpm: 60
  tpm: 100000
cost:
  daily_budget_usd: 10.0
  alert_threshold_pct: 80
```
Load it:

```python
from harness.scripts.harness_config import HarnessConfig

config = HarnessConfig.load("harness-config.yaml")
```
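The `cost` section drives budget alerts: with `daily_budget_usd: 10.0` and `alert_threshold_pct: 80`, an alert fires once spend reaches $8.00. A minimal sketch of that logic under these assumptions (the class and method names here are hypothetical, not harness's `cost_monitor` API):

```python
class BudgetMonitor:
    """Illustrative budget tracker: accumulates spend and reports
    when the alert threshold or the budget itself is crossed."""
    def __init__(self, daily_budget_usd, alert_threshold_pct):
        self.budget = daily_budget_usd
        self.threshold = daily_budget_usd * alert_threshold_pct / 100.0
        self.spent = 0.0

    def record(self, cost_usd):
        self.spent += cost_usd
        if self.spent >= self.budget:
            return "over_budget"
        if self.spent >= self.threshold:
            return "alert"
        return "ok"

monitor = BudgetMonitor(daily_budget_usd=10.0, alert_threshold_pct=80)
statuses = [monitor.record(3.0), monitor.record(5.5), monitor.record(2.0)]
# statuses == ["ok", "alert", "over_budget"]: spend goes 3.0 -> 8.5 -> 10.5
```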
## Development

```bash
pip install -e ".[dev]"
pytest tests/ -q
```

1262+ tests, 92% coverage.
## License

MIT