FastClaw

FastClaw is a high-performance AI agent framework with multi-agent orchestration, multi-channel integration, and an extensible plugin, tool, and skill system.

Features

  • Multi-Agent Orchestration: Coordinate multiple specialized agents with different roles
  • Multi-Channel Support: 8+ channel adapters (Telegram, Slack, Discord, Feishu, DingTalk, QQ, WeCom, WhatsApp)
  • Real LLM Integration: Support for OpenAI, Anthropic, Ollama, and DeepSeek
  • Memory System: JSONL session storage + Markdown long-term memory + SQLite vector search
  • Tools & Skills: Dynamic tool registration with sandbox execution and extensible skill system
  • MCP Protocol: Model Context Protocol support for extensible tool integration
  • Plugin System: Modular plugin architecture for custom extensions
  • Audit Logging: Comprehensive audit trail for all operations
  • Error Handling: Advanced error handling with retries and graceful degradation
  • CLI Interface: Command-line interface for management and testing
  • Workspace Management: Multi-workspace support for isolated environments

Architecture

FastClaw implements a comprehensive agent framework:

  • Gateway: WebSocket control plane for real-time communication
  • Agent Runtime: Embedded agent loop with inference-execution cycle
  • Multi-Agent Orchestrator: Coordinate multiple specialized agents
  • LLM Integration: Multiple LLM providers with unified interface
  • Memory System: JSONL session storage + Markdown long-term memory + SQLite vector index
  • Tools & Skills: Dynamic tool registration with sandbox execution
  • MCP Support: Model Context Protocol for extensible tool integration
  • Plugin System: Modular plugin architecture
  • Audit Logger: Comprehensive operation logging
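The heart of the architecture above is the agent runtime's inference-execution cycle: the model is queried, any tool call it requests is executed, and the result is fed back until the model produces a final answer. A minimal sketch of that loop, using a hypothetical `llm` callable and `tools` mapping rather than FastClaw's actual API:

```python
# Minimal sketch of an inference-execution agent loop. The `llm` callable
# and `tools` mapping are illustrative stand-ins, not FastClaw's real API.

def run_agent(llm, tools, user_message, max_steps=5):
    """Alternate between model inference and tool execution until the
    model produces a final answer or the step budget is exhausted."""
    history = [{"role": "user", "content": user_message}]
    for _ in range(max_steps):
        reply = llm(history)                          # inference step
        if reply.get("tool_call") is None:            # no tool requested: done
            return reply["content"]
        call = reply["tool_call"]
        result = tools[call["name"]](**call["args"])  # execution step
        history.append({"role": "tool", "content": str(result)})
    return "step budget exhausted"
```

The step budget guards against a model that keeps requesting tools indefinitely; production loops typically also handle unknown tool names and execution errors.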

Project Structure

fastclaw/
├── core/                    # Core framework
│   ├── gateway.py           # WebSocket gateway
│   ├── agent.py             # Agent runtime & loop
│   ├── llm_client.py        # LLM client integration
│   ├── tools.py             # Tool registry
│   ├── multi_agent.py       # Multi-agent orchestrator
│   ├── workspace.py         # Workspace management
│   ├── audit.py             # Audit logging
│   ├── error_handler.py     # Error handling
│   ├── plugin.py            # Plugin system
│   ├── cli.py               # CLI interface
│   ├── llm/                 # LLM adapters
│   │   ├── base.py
│   │   ├── openai_adapter.py
│   │   ├── anthropic_adapter.py
│   │   └── ollama_adapter.py
│   ├── mcp/                 # MCP protocol
│   │   ├── manager.py
│   │   └── protocol.py
│   └── skills/              # Skills system
│       ├── base.py
│       ├── manager.py
│       └── registry.py
├── adapters/                # Channel adapters
│   ├── base.py
│   ├── telegram.py
│   ├── slack.py
│   ├── discord.py
│   ├── feishu.py
│   ├── dingtalk.py
│   ├── qq.py
│   ├── wecom.py
│   ├── whatsapp.py
│   └── webhook.py
├── storage/                 # Storage implementations
│   ├── session_store.py
│   └── memory_store.py
├── workspace/               # Agent workspace (markdown files)
│   ├── AGENTS.md
│   ├── SOUL.md
│   ├── TOOLS.md
│   └── MEMORY.md
├── state/                   # Runtime state
│   ├── sessions/
│   └── memory/
├── ui/                      # Web interface
│   └── index.html
├── main.py                  # Entry point
├── config.yaml              # Configuration file
├── ARCHITECTURE.md          # Architecture documentation
├── ROADMAP.md               # Project roadmap
└── requirements.txt         # Python dependencies

Quick Start

# Create virtual environment
python3 -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# Set environment variables
export DEEPSEEK_API_KEY="your_api_key_here"
export DEEPSEEK_BASE_URL="https://api.deepseek.com/v1"

# Start the server
python3 main.py

# Access the web interface
# Open http://localhost:8000 in your browser

API Endpoints

  • POST /api/v1/chat - OpenAI-compatible HTTP API
  • WS /api/ws - WebSocket endpoint for real-time communication
  • GET /api/v1/tools - List available tools
  • GET /api/v1/sessions - List active sessions
  • GET / - Web interface
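Since POST /api/v1/chat is OpenAI-compatible, a request body follows the Chat Completions convention. A sketch of building one in Python; the exact set of optional fields the server accepts (e.g. streaming flags) is an assumption:

```python
# Sketch of a request body for the OpenAI-compatible POST /api/v1/chat
# endpoint. Field names follow the OpenAI Chat Completions convention.
import json

def build_chat_request(prompt, model="deepseek-chat"):
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

# To send it against a running server:
#   curl -X POST http://localhost:8000/api/v1/chat \
#        -H "Content-Type: application/json" \
#        -d "$(python3 -c 'print(...)')"
payload = json.dumps(build_chat_request("Hello, FastClaw!"))
```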

Configuration

Edit config.yaml to configure:

LLM Configuration

llm:
  provider: "openai"  # openai | anthropic | ollama
  openai:
    api_key: "${DEEPSEEK_API_KEY}"
    base_url: "${DEEPSEEK_BASE_URL}"
    model: "deepseek-chat"

Multi-Agent Configuration

agents:
  - id: "coordinator"
    role: "coordinator"
    model: "deepseek-chat"
    enabled: true

Channel Configuration

channels:
  telegram:
    enabled: false
    bot_token: "${TELEGRAM_BOT_TOKEN}"
  slack:
    enabled: false
    bot_token: "${SLACK_BOT_TOKEN}"
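The `${VAR}` placeholders in these config sections are resolved from environment variables. A sketch of how such substitution can be implemented; the real loader's behavior for missing variables or default values is an assumption:

```python
# Sketch of ${VAR} substitution as implied by the config file; how FastClaw
# actually handles missing variables is an assumption (empty string here).
import os
import re

_VAR = re.compile(r"\$\{([A-Z0-9_]+)\}")

def expand_env(value):
    """Recursively replace ${VAR} placeholders with environment values."""
    if isinstance(value, dict):
        return {k: expand_env(v) for k, v in value.items()}
    if isinstance(value, list):
        return [expand_env(v) for v in value]
    if isinstance(value, str):
        return _VAR.sub(lambda m: os.environ.get(m.group(1), ""), value)
    return value
```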

Supported LLM Providers

  • OpenAI: GPT-4, GPT-3.5
  • DeepSeek: DeepSeek-V3, DeepSeek-Coder
  • Anthropic: Claude 3 Opus, Sonnet, Haiku
  • Ollama: Local models (Llama2, Mistral, etc.)
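All four providers sit behind one interface, as the adapters under core/llm/ suggest. A sketch of what that unified interface might look like; the class and method names here are illustrative, not FastClaw's actual API:

```python
# Illustrative sketch of a unified LLM provider interface, in the spirit of
# the adapters under core/llm/; names are assumptions, not the real API.
from abc import ABC, abstractmethod

class LLMAdapter(ABC):
    @abstractmethod
    def chat(self, messages: list) -> str:
        """Send an OpenAI-style message list; return the reply text."""

class EchoAdapter(LLMAdapter):
    """Stand-in for tests; a real adapter calls OpenAI/Anthropic/Ollama."""
    def chat(self, messages):
        return messages[-1]["content"]
```

With this shape, the runtime depends only on `LLMAdapter`, so switching providers is a config change rather than a code change.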

Supported Channels

  • Telegram: Full bot API support
  • Slack: App integration
  • Discord: Bot with slash commands
  • Feishu: Enterprise bot
  • DingTalk: Enterprise bot
  • QQ: OneBot protocol
  • WeCom: Enterprise bot
  • WhatsApp: Business API
  • Webhook: Generic webhook adapter
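A common pattern behind adapters like these is normalizing each platform's inbound payload into one event shape before it reaches the agent. A hypothetical sketch; the field names and the Telegram update layout shown are assumptions, not FastClaw's actual adapter contract:

```python
# Hypothetical normalization of channel payloads into one event shape;
# field names are assumptions, not FastClaw's actual adapter contract.
from dataclasses import dataclass

@dataclass
class InboundMessage:
    channel: str   # "telegram", "slack", ...
    user_id: str
    text: str

def from_telegram(update: dict) -> InboundMessage:
    """Map a Telegram-style update dict onto the common event shape."""
    msg = update["message"]
    return InboundMessage("telegram", str(msg["from"]["id"]), msg["text"])
```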

Built-in Tools

  • bash: Execute shell commands (with approval)
  • file_read: Read file contents
  • file_write: Write file contents
  • file_list: List directory contents
  • calculator: Evaluate mathematical expressions
  • memory_add: Add to long-term memory
  • memory_search: Search memory
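Tools like these are registered dynamically at startup. A minimal sketch of a decorator-based registry, a common way to implement this; FastClaw's actual registry API in core/tools.py may differ:

```python
# Minimal sketch of dynamic tool registration via a decorator; FastClaw's
# actual registry API may differ.
TOOLS = {}

def tool(name):
    """Register a function under `name` so the agent can call it by name."""
    def register(fn):
        TOOLS[name] = fn
        return fn
    return register

@tool("calculator")
def calculator(expression: str) -> float:
    # Illustrative only: a real sandboxed tool would use a proper expression
    # parser, not eval(); builtins are stripped here to narrow the surface.
    return eval(expression, {"__builtins__": {}}, {})
```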

CLI Usage

# List all CLI commands
python3 -m core.cli --help

# Start agent with specific configuration
python3 -m core.cli start --config config.yaml

# List available tools
python3 -m core.cli tools list

# Run a test query
python3 -m core.cli query "Hello, FastClaw!"

Documentation

Complete Documentation

Core Documentation

Language Versions

  • README_ZH.md - Simplified Chinese README

License

MIT License
