
OpenHarness: Open Agent Harness

English · 简体中文

OpenHarness delivers core lightweight agent infrastructure: tool-use, skills, memory, and multi-agent coordination.

Join the community and contribute to the Harness for open agent development.

Quick Start Architecture Tools Tests License


One Command (oh) to Launch OpenHarness and Unlock All Agent Harnesses.

Supports CLI agent integration including OpenClaw, nanobot, Cursor, and more.

OpenHarness Terminal Demo

How Agent Harness Works


✨ OpenHarness's Key Harness Features

🔄 Agent Loop

• Streaming Tool-Call Cycle

• API Retry with Exponential Backoff

• Parallel Tool Execution

• Token Counting & Cost Tracking

🔧 Harness Toolkit

• 43 Tools (File, Shell, Search, Web, MCP)

• On-Demand Skill Loading (.md)

• Plugin Ecosystem (Skills + Hooks + Agents)

• Compatible with anthropics/skills & plugins

🧠 Context & Memory

• CLAUDE.md Discovery & Injection

• Context Compression (Auto-Compact)

• MEMORY.md Persistent Memory

• Session Resume & History

🛡️ Governance

• Multi-Level Permission Modes

• Path-Level & Command Rules

• PreToolUse / PostToolUse Hooks

• Interactive Approval Dialogs

🤝 Swarm Coordination

• Subagent Spawning & Delegation

• Team Registry & Task Management

• Background Task Lifecycle

• ClawTeam Integration (Roadmap)


🤔 What is an Agent Harness?

An Agent Harness is the complete infrastructure that wraps around an LLM to make it a functional agent. The model provides intelligence; the harness provides hands, eyes, memory, and safety boundaries.

Harness = Tools + Knowledge + Observation + Action + Permissions

OpenHarness is an open-source Python implementation designed for researchers, builders, and the community:

  • Understand how production AI agents work under the hood
  • Experiment with cutting-edge tools, skills, and agent coordination patterns
  • Extend the harness with custom plugins, providers, and domain knowledge
  • Build specialized agents on top of proven architecture

📰 What's New

  • 2026-04-06 🚀 v0.1.2 — Unified setup flows and ohmo personal-agent app:
    • oh setup now guides provider selection as workflows instead of exposing raw auth/provider internals
    • Compatible API setup is now profile-scoped, so Anthropic/OpenAI-compatible endpoints can keep separate keys
    • ohmo ships as a packaged app with ~/.ohmo workspace, gateway, bootstrap prompts, and channel config flow
  • 2026-04-01 🎨 v0.1.0 — Initial OpenHarness open-source release featuring the complete Harness architecture.

Start here: Quick Start · Provider Compatibility · Showcase · Contributing · Changelog


🚀 Quick Start

One-Click Install

The fastest way to get started — a single command handles OS detection, dependency checks, and installation:

```bash
curl -fsSL https://raw.githubusercontent.com/HKUDS/OpenHarness/main/scripts/install.sh | bash
```

Options:

| Flag | Description |
|------|-------------|
| `--from-source` | Clone from GitHub and install in editable mode (`pip install -e .`) |
| `--with-channels` | Also install IM channel dependencies (slack-sdk, python-telegram-bot, discord.py) |
```bash
# Install from source (for contributors / latest code)
curl -fsSL https://raw.githubusercontent.com/HKUDS/OpenHarness/main/scripts/install.sh | bash -s -- --from-source

# Install with IM channel support
curl -fsSL https://raw.githubusercontent.com/HKUDS/OpenHarness/main/scripts/install.sh | bash -s -- --with-channels

# Or run locally after cloning
bash scripts/install.sh --from-source --with-channels
```

The script will:

  1. Detect your OS (Linux / macOS / WSL)
  2. Verify Python ≥ 3.10 and Node.js ≥ 18
  3. Install OpenHarness via pip
  4. Set up the React TUI (npm install) if Node.js is available
  5. Create ~/.openharness/ config directory
  6. Confirm with oh --version
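If you want to pre-check your environment before piping the script, step 2's version gate can be reproduced in a few lines of Python. This is a convenience sketch, not part of the installer itself:

```python
# Pre-check the Python and Node.js versions the installer verifies.
import shutil
import subprocess
import sys

def node_major(version_str: str) -> int:
    """Parse the major version from `node --version` output like 'v18.19.0'."""
    return int(version_str.lstrip("v").split(".")[0])

print("Python >= 3.10:", sys.version_info >= (3, 10))

node = shutil.which("node")
if node:
    out = subprocess.run([node, "--version"], capture_output=True, text=True)
    print("Node >= 18:", node_major(out.stdout.strip()) >= 18)
else:
    print("Node.js not found (optional; only needed for the React TUI)")
```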

Prerequisites

  • Python 3.10+ and uv
  • Node.js 18+ (optional, for the React terminal UI)
  • An LLM API key

One-Command Demo

```bash
ANTHROPIC_API_KEY=your_key uv run oh -p "Inspect this repository and list the top 3 refactors"
```

Install & Run

```bash
# Clone and install
git clone https://github.com/HKUDS/OpenHarness.git
cd OpenHarness
uv sync --extra dev

# Example: use Kimi as the backend
export ANTHROPIC_BASE_URL=https://api.moonshot.cn/anthropic
export ANTHROPIC_API_KEY=your_kimi_api_key
export ANTHROPIC_MODEL=kimi-k2.5

# Launch
oh          # if venv is activated
uv run oh   # without activating venv
```

Configure A Workflow

Use the unified setup flow instead of wiring up auth → provider → model by hand:

```bash
uv run oh setup
```

oh setup walks through:

  1. Choose a workflow:
    • Anthropic-Compatible API
    • Claude Subscription
    • OpenAI-Compatible API
    • Codex Subscription
    • GitHub Copilot
  2. For compatible API families, choose a concrete backend preset
  3. If needed, authenticate the selected workflow
  4. Pick or confirm the model
  5. Save and activate the profile

Compatible API families currently guide you through presets such as:

  • Anthropic-Compatible API:
    • Claude official
    • Moonshot / Kimi
    • Zhipu / GLM
    • MiniMax
  • OpenAI-Compatible API:
    • OpenAI official
    • OpenRouter

Arbitrary compatible endpoints are still supported through advanced profile commands:

```bash
oh provider add my-endpoint \
  --label "My Endpoint" \
  --provider anthropic \
  --api-format anthropic \
  --auth-source anthropic_api_key \
  --model my-model \
  --base-url https://example.com/anthropic
```

OpenHarness stores API-key-backed compatible profiles with profile-scoped credentials when appropriate, so different compatible endpoints do not have to share one global key.

OpenHarness Landing Screen

Non-Interactive Mode (Pipes & Scripts)

```bash
# Single prompt → stdout
oh -p "Explain this codebase"

# JSON output for programmatic use
oh -p "List all functions in main.py" --output-format json

# Stream JSON events in real-time
oh -p "Fix the bug" --output-format stream-json
```
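In automation, stream-json output is typically consumed line by line, one JSON event per line. A minimal consumer sketch, assuming an illustrative `{"type": ..., "text": ...}` event shape; check the real stream for the actual schema:

```python
# Consume JSON-lines output, e.g. piped from:
#   oh -p "Fix the bug" --output-format stream-json | python consume.py
import io
import json
import sys

def consume(stream):
    """Collect the text of every 'text' event from a JSON-lines stream."""
    texts = []
    for line in stream:
        line = line.strip()
        if not line:
            continue
        event = json.loads(line)
        if event.get("type") == "text":
            texts.append(event.get("text", ""))
    return texts

if __name__ == "__main__":
    # Demo input standing in for the real pipe (sys.stdin in practice)
    demo = io.StringIO('{"type": "text", "text": "hello"}\n'
                       '{"type": "tool_use", "name": "Read"}\n')
    print(consume(demo))  # ['hello']
```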

🔌 Provider Compatibility

OpenHarness treats providers as workflows backed by named profiles. In day-to-day use, prefer:

```bash
oh setup
oh provider list
oh provider use <profile>
```

Built-in Workflows

| Workflow | What it is | Typical backends |
|----------|------------|------------------|
| Anthropic-Compatible API | Anthropic-style request format | Claude official, Kimi, GLM, MiniMax, internal Anthropic-compatible gateways |
| Claude Subscription | Claude CLI subscription bridge | Local `~/.claude/.credentials.json` |
| OpenAI-Compatible API | OpenAI-style request format | OpenAI official, OpenRouter, DashScope, DeepSeek, SiliconFlow, Groq, Ollama, GitHub Models |
| Codex Subscription | Codex CLI subscription bridge | Local `~/.codex/auth.json` |
| GitHub Copilot | Copilot OAuth workflow | GitHub Copilot device-flow login |

Compatible API Families

Anthropic-Compatible API

Typical examples:

| Backend | Base URL | Example models |
|---------|----------|----------------|
| Claude official | https://api.anthropic.com | claude-sonnet-4-6, claude-opus-4-6 |
| Moonshot / Kimi | https://api.moonshot.cn/anthropic | kimi-k2.5 |
| Zhipu / GLM | custom Anthropic-compatible endpoint | glm-4.5 |
| MiniMax | custom Anthropic-compatible endpoint | minimax-m1 |

OpenAI-Compatible API

Any provider implementing the OpenAI /v1/chat/completions style API works:

| Backend | Base URL | Example models |
|---------|----------|----------------|
| OpenAI | https://api.openai.com/v1 | gpt-5.4, gpt-4.1 |
| OpenRouter | https://openrouter.ai/api/v1 | provider-specific |
| Alibaba DashScope | https://dashscope.aliyuncs.com/compatible-mode/v1 | qwen3.5-flash, qwen3-max, deepseek-r1 |
| DeepSeek | https://api.deepseek.com | deepseek-chat, deepseek-reasoner |
| GitHub Models | https://models.inference.ai.azure.com | gpt-4o, Meta-Llama-3.1-405B-Instruct |
| SiliconFlow | https://api.siliconflow.cn/v1 | deepseek-ai/DeepSeek-V3 |
| Groq | https://api.groq.com/openai/v1 | llama-3.3-70b-versatile |
| Ollama (local) | http://localhost:11434/v1 | any local model |

Advanced Profile Management

```bash
# List saved workflows
oh provider list

# Switch the active workflow
oh provider use codex

# Add your own compatible endpoint
oh provider add my-endpoint \
  --label "My Endpoint" \
  --provider openai \
  --api-format openai \
  --auth-source openai_api_key \
  --model my-model \
  --base-url https://example.com/v1
```

For custom compatible endpoints, OpenHarness can bind credentials per profile instead of forcing every Anthropic-compatible or OpenAI-compatible backend to share the same API key.

GitHub Copilot Format (--api-format copilot)

Use your existing GitHub Copilot subscription as the LLM backend. Authentication uses GitHub's OAuth device flow — no API keys needed.

```bash
# One-time login (opens browser for GitHub authorization)
oh auth copilot-login

# Then launch with Copilot as the provider
uv run oh --api-format copilot

# Or via environment variable
export OPENHARNESS_API_FORMAT=copilot
uv run oh

# Check auth status
oh auth status

# Remove stored credentials
oh auth copilot-logout
```
| Feature | Details |
|---------|---------|
| Auth method | GitHub OAuth device flow (no API key needed) |
| Token management | Automatic refresh of short-lived session tokens |
| Enterprise | Supports GitHub Enterprise via `--github-domain` flag |
| Models | Uses Copilot's default model selection |
| API | OpenAI-compatible chat completions under the hood |

🏗️ Harness Architecture

OpenHarness implements the core Agent Harness pattern with the following subsystems:

```
openharness/
  engine/       # 🧠 Agent Loop — query → stream → tool-call → loop
  tools/        # 🔧 43 Tools — file I/O, shell, search, web, MCP
  skills/       # 📚 Knowledge — on-demand skill loading (.md files)
  plugins/      # 🔌 Extensions — commands, hooks, agents, MCP servers
  permissions/  # 🛡️ Safety — multi-level modes, path rules, command deny
  hooks/        # ⚡ Lifecycle — PreToolUse/PostToolUse event hooks
  commands/     # 💬 54 Commands — /help, /commit, /plan, /resume, ...
  mcp/          # 🌐 MCP — Model Context Protocol client
  memory/       # 🧠 Memory — persistent cross-session knowledge
  tasks/        # 📋 Tasks — background task management
  coordinator/  # 🤝 Multi-Agent — subagent spawning, team coordination
  prompts/      # 📝 Context — system prompt assembly, CLAUDE.md, skills
  config/       # ⚙️ Settings — multi-layer config, migrations
  ui/           # 🖥️ React TUI — backend protocol + frontend
```

The Agent Loop

The heart of the harness. One loop, endlessly composable:

```python
while True:
    response = await api.stream(messages, tools)
    if response.stop_reason != "tool_use":
        break  # Model is done
    for tool_call in response.tool_uses:
        # Permission check → Hook → Execute → Hook → Result
        result = await harness.execute_tool(tool_call)
    messages.append(tool_results)
    # Loop continues — model sees results, decides next action
```

The model decides what to do. The harness handles how — safely, efficiently, with full observability.
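The loop can be sketched as a runnable toy, with a scripted fake model standing in for the streaming API. The class and method names here are illustrative, not OpenHarness internals:

```python
import asyncio
from dataclasses import dataclass, field

@dataclass
class FakeResponse:
    stop_reason: str
    tool_uses: list = field(default_factory=list)

class FakeModel:
    """Emits one tool call, then stops once a tool result is in the history."""
    async def stream(self, messages, tools):
        if any(m.get("role") == "tool" for m in messages):
            return FakeResponse(stop_reason="end_turn")
        return FakeResponse(stop_reason="tool_use",
                            tool_uses=[{"name": "echo", "input": "hi"}])

async def execute_tool(call):
    # A real harness wraps this point with permission checks and hooks.
    return {"role": "tool", "content": f"echo: {call['input']}"}

async def agent_loop():
    api = FakeModel()
    messages = [{"role": "user", "content": "say hi"}]
    rounds = 0
    while True:
        response = await api.stream(messages, tools=["echo"])
        if response.stop_reason != "tool_use":
            break  # model is done
        for call in response.tool_uses:
            messages.append(await execute_tool(call))
        rounds += 1
    return rounds, messages

rounds, messages = asyncio.run(agent_loop())
print(rounds)  # 1
```

Swapping `FakeModel` for a real streaming client is all the loop itself needs; everything else in the harness hangs off the `execute_tool` boundary.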

Harness Flow


✨ Features

🔧 Tools (43+)

| Category | Tools | Description |
|----------|-------|-------------|
| File I/O | Bash, Read, Write, Edit, Glob, Grep | Core file operations with permission checks |
| Search | WebFetch, WebSearch, ToolSearch, LSP | Web and code search capabilities |
| Notebook | NotebookEdit | Jupyter notebook cell editing |
| Agent | Agent, SendMessage, TeamCreate/Delete | Subagent spawning and coordination |
| Task | TaskCreate/Get/List/Update/Stop/Output | Background task management |
| MCP | MCPTool, ListMcpResources, ReadMcpResource | Model Context Protocol integration |
| Mode | EnterPlanMode, ExitPlanMode, Worktree | Workflow mode switching |
| Schedule | CronCreate/List/Delete, RemoteTrigger | Scheduled and remote execution |
| Meta | Skill, Config, Brief, Sleep, AskUser | Knowledge loading, configuration, interaction |

Every tool has:

  • Pydantic input validation — structured, type-safe inputs
  • Self-describing JSON Schema — models understand tools automatically
  • Permission integration — checked before every execution
  • Hook support — PreToolUse/PostToolUse lifecycle events
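The self-describing schema comes from Pydantic: a model's JSON Schema can be generated directly with `model_json_schema()`. `MyToolInput` below is a hypothetical input model for illustration, not one shipped with OpenHarness:

```python
from pydantic import BaseModel, Field

class MyToolInput(BaseModel):
    query: str = Field(description="Search query")
    max_results: int = Field(default=10, ge=1, description="Result cap")

# The schema dict is what gets advertised to the model as the tool's input spec.
schema = MyToolInput.model_json_schema()
print(schema["required"])                                # ['query']
print(schema["properties"]["query"]["description"])      # Search query
```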

📚 Skills System

Skills are on-demand knowledge — loaded only when the model needs them:

```
Available Skills:
- commit: Create clean, well-structured git commits
- review: Review code for bugs, security issues, and quality
- debug: Diagnose and fix bugs systematically
- plan: Design an implementation plan before coding
- test: Write and run tests for code
- simplify: Refactor code to be simpler and more maintainable
- pdf: PDF processing with pypdf (from anthropics/skills)
- xlsx: Excel operations (from anthropics/skills)
- ... 40+ more
```

Compatible with anthropics/skills — just copy .md files to ~/.openharness/skills/.

🔌 Plugin System

Compatible with claude-code plugins. Tested with 12 official plugins:

| Plugin | Type | What it does |
|--------|------|--------------|
| commit-commands | Commands | Git commit, push, PR workflows |
| security-guidance | Hooks | Security warnings on file edits |
| hookify | Commands + Agents | Create custom behavior hooks |
| feature-dev | Commands | Feature development workflow |
| code-review | Agents | Multi-agent PR review |
| pr-review-toolkit | Agents | Specialized PR review agents |
```bash
# Manage plugins
oh plugin list
oh plugin install <source>
oh plugin enable <name>
```

🤝 Ecosystem Workflows

OpenHarness is useful as a lightweight harness layer around Claude-style tooling conventions:

  • OpenClaw-oriented workflows can reuse Markdown-first knowledge and command-driven collaboration patterns.
  • Claude-style plugins and skills stay portable because OpenHarness keeps those formats familiar.
  • ClawTeam-style multi-agent work maps well onto the built-in team, task, and background execution primitives.

For concrete usage examples, see docs/SHOWCASE.md.

🛡️ Permissions

Multi-level safety with fine-grained control:

| Mode | Behavior | Use Case |
|------|----------|----------|
| Default | Ask before write/execute | Daily development |
| Auto | Allow everything | Sandboxed environments |
| Plan Mode | Block all writes | Large refactors, review first |

Path-level rules in settings.json:

```json
{
  "permission": {
    "mode": "default",
    "path_rules": [{"pattern": "/etc/*", "allow": false}],
    "denied_commands": ["rm -rf /", "DROP TABLE *"]
  }
}
```
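A minimal sketch of how such glob-based path rules can be evaluated, assuming first-match-wins semantics with an allow-by-default fallback; the actual permission engine may resolve rules differently:

```python
from fnmatch import fnmatch

# Rules mirror the settings.json shape above.
RULES = [{"pattern": "/etc/*", "allow": False}]

def is_path_allowed(path: str, rules=RULES) -> bool:
    """Return the first matching rule's verdict, else allow."""
    for rule in rules:
        if fnmatch(path, rule["pattern"]):
            return rule["allow"]
    return True  # no rule matched: fall back to the mode's default

print(is_path_allowed("/etc/passwd"))    # False
print(is_path_allowed("/home/me/x.py"))  # True
```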

🖥️ Terminal UI

React/Ink TUI with full interactive experience:

  • Command picker: Type / → arrow keys to select → Enter
  • Permission dialog: Interactive y/n with tool details
  • Mode switcher: /permissions → select from list
  • Session resume: /resume → pick from history
  • Animated spinner: Real-time feedback during tool execution
  • Keyboard shortcuts: Shown at the bottom, context-aware

📡 CLI

```
oh [OPTIONS] COMMAND [ARGS]

Session:      -c/--continue, -r/--resume, -n/--name
Model:        -m/--model, --effort, --max-turns
Output:       -p/--print, --output-format text|json|stream-json
Permissions:  --permission-mode, --dangerously-skip-permissions
Context:      -s/--system-prompt, --append-system-prompt, --settings
Advanced:     -d/--debug, --mcp-config, --bare

Subcommands:  oh setup | oh provider | oh auth | oh mcp | oh plugin
```

🧑‍💼 ohmo Personal Agent

ohmo is a personal-agent app built on top of OpenHarness. It is packaged alongside oh, with its own workspace and gateway:

```bash
# Initialize personal workspace
ohmo init

# Configure gateway channels and pick a provider profile
ohmo config

# Run the personal agent
ohmo

# Run the gateway in foreground
ohmo gateway run

# Check or restart the gateway
ohmo gateway status
ohmo gateway restart
```

Key concepts:

  • ~/.ohmo/: personal workspace root
  • soul.md: long-term agent personality and behavior
  • identity.md: who ohmo is
  • user.md: user profile and preferences
  • BOOTSTRAP.md: first-run landing ritual
  • memory/: personal memory
  • gateway.json: selected provider profile and channel configuration

ohmo config uses the same workflow language as oh setup, so you can point the personal-agent gateway at:

  • Anthropic-Compatible API
  • Claude Subscription
  • OpenAI-Compatible API
  • Codex Subscription
  • GitHub Copilot

ohmo init creates the home workspace once. After that, use ohmo config to update provider and channel settings; if the gateway is already running, the config flow can restart it for you.

Currently ohmo init / ohmo config can guide channel setup for:

  • Telegram
  • Slack
  • Discord
  • Feishu

📊 Test Results

| Suite | Tests | Status |
|-------|-------|--------|
| Unit + Integration | 114 | ✅ All passing |
| CLI Flags E2E | 6 | ✅ Real model calls |
| Harness Features E2E | 9 | ✅ Retry, skills, parallel, permissions |
| React TUI E2E | 3 | ✅ Welcome, conversation, status |
| TUI Interactions E2E | 4 | ✅ Commands, permissions, shortcuts |
| Real Skills + Plugins | 12 | ✅ anthropics/skills + claude-code/plugins |
```bash
# Run all tests
uv run pytest -q                             # 114 unit/integration
python scripts/test_harness_features.py      # Harness E2E
python scripts/test_real_skills_plugins.py   # Real plugins E2E
```

🔧 Extending OpenHarness

Add a Custom Tool

```python
from pydantic import BaseModel, Field
from openharness.tools.base import BaseTool, ToolExecutionContext, ToolResult

class MyToolInput(BaseModel):
    query: str = Field(description="Search query")

class MyTool(BaseTool):
    name = "my_tool"
    description = "Does something useful"
    input_model = MyToolInput

    async def execute(self, arguments: MyToolInput, context: ToolExecutionContext) -> ToolResult:
        return ToolResult(output=f"Result for: {arguments.query}")
```

Add a Custom Skill

Create ~/.openharness/skills/my-skill.md:

```markdown
---
name: my-skill
description: Expert guidance for my specific domain
---

# My Skill

## When to use

Use when the user asks about [your domain].

## Workflow

1. Step one
2. Step two
...
```

Add a Plugin

Create .openharness/plugins/my-plugin/.claude-plugin/plugin.json:

```json
{
  "name": "my-plugin",
  "version": "1.0.0",
  "description": "My custom plugin"
}
```

Add commands in commands/*.md, hooks in hooks/hooks.json, agents in agents/*.md.
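The layout above can be scaffolded with a short script. The directory names follow the README; the command and hook content below are placeholders, not real OpenHarness definitions:

```python
import json
from pathlib import Path

root = Path(".openharness/plugins/my-plugin")

# Create the plugin skeleton: manifest dir plus commands/hooks/agents.
(root / ".claude-plugin").mkdir(parents=True, exist_ok=True)
for sub in ("commands", "hooks", "agents"):
    (root / sub).mkdir(exist_ok=True)

manifest = {"name": "my-plugin", "version": "1.0.0",
            "description": "My custom plugin"}
(root / ".claude-plugin" / "plugin.json").write_text(json.dumps(manifest, indent=2))

# Placeholder command and hook files to fill in later.
(root / "commands" / "hello.md").write_text("# /hello\nSay hello.\n")
(root / "hooks" / "hooks.json").write_text("{}\n")

print(sorted(p.name for p in root.iterdir()))
# ['.claude-plugin', 'agents', 'commands', 'hooks']
```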


🌍 Showcase

OpenHarness is most useful when treated as a small, inspectable harness you can adapt to a real workflow:

  • Repo coding assistant for reading code, patching files, and running checks locally.
  • Headless scripting tool for json and stream-json output in automation flows.
  • Plugin and skill testbed for experimenting with Claude-style extensions.
  • Multi-agent prototype harness for task delegation and background execution.
  • Provider comparison sandbox across Anthropic-compatible backends.

See docs/SHOWCASE.md for short, reproducible examples.


🤝 Contributing

OpenHarness is a community-driven research project. We welcome contributions in:

| Area | Examples |
|------|----------|
| Tools | New tool implementations for specific domains |
| Skills | Domain knowledge .md files (finance, science, DevOps...) |
| Plugins | Workflow plugins with commands, hooks, agents |
| Providers | Support for more LLM backends (OpenAI, Ollama, etc.) |
| Multi-Agent | Coordination protocols, team patterns |
| Testing | E2E scenarios, edge cases, benchmarks |
| Documentation | Architecture guides, tutorials, translations |
```bash
# Development setup
git clone https://github.com/HKUDS/OpenHarness.git
cd OpenHarness
uv sync --extra dev
uv run pytest -q   # Verify everything works
```



📄 License

MIT — see LICENSE.


OpenHarness
Oh my Harness!
The model is the agent. The code is the harness.

Thanks for visiting ✨ OpenHarness!
