@fastagent/co2 is a local CLI gateway that translates between OpenAI-compatible and Claude-compatible protocols, so existing clients can talk to the other upstream without changing their request protocol. It runs as a local HTTP service and supports two modes: o2c (OpenAI request -> Claude upstream) and c2o (Claude request -> OpenAI upstream).
```
npm install -g @fastagent/co2
```

Or run it directly without installing:

```
npx @fastagent/co2 start --config ./co2.config.json
```
`co2.config.json`:

```json
{
  "server": {
    "host": "127.0.0.1",
    "port": 8000,
    "mode": "openai-to-claude",
    "logLevel": "info"
  },
  "providers": {
    "openai": {
      "apiKey": "OPENAI_API_KEY_PLACEHOLDER",
      "baseUrl": "https://api.openai.com/v1",
      "defaultHeaders": {
        "user-agent": "co2-cli/0.3.2"
      }
    },
    "anthropic": {
      "apiKey": "ANTHROPIC_API_KEY_PLACEHOLDER",
      "baseUrl": "https://api.anthropic.com",
      "version": "2023-06-01",
      "defaultHeaders": {
        "user-agent": "co2-cli/0.3.2"
      }
    }
  },
  "routing": {
    "defaultOpenAIModel": "gpt-5.4",
    "defaultClaudeModel": "claude-opus-4.6",
    "openAIReasoningEffort": "high",
    "openAIUpstreamApi": "responses",
    "openAIParallelToolCalls": true,
    "claudeOutputEffort": "high",
    "skipInboundFields": {
      "claudeMessages": ["context_management"],
      "openAIResponses": [],
      "openAIChatCompletions": []
    }
  },
  "modelMap": {
    "claude-opus-4.6": "gpt-5.4",
    "gpt-5.4": "claude-opus-4.6"
  }
}
```
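For comparison, a minimal `claude-to-openai` config targeting a chat-completions-only upstream could look like this (a sketch built only from the fields shown above; values are placeholders):

```json
{
  "server": {
    "host": "127.0.0.1",
    "port": 8000,
    "mode": "claude-to-openai",
    "logLevel": "info"
  },
  "providers": {
    "openai": {
      "apiKey": "OPENAI_API_KEY_PLACEHOLDER",
      "baseUrl": "https://api.openai.com/v1"
    }
  },
  "routing": {
    "defaultOpenAIModel": "gpt-5.4",
    "openAIUpstreamApi": "chat-completions"
  }
}
```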
Notes:

- The example includes both `openai` and `anthropic` providers so the full config shape is visible in one place.
- In `openai-to-claude` / o2c mode, only `providers.anthropic` is used; `providers.openai` can be omitted without affecting startup or request handling.
- In `claude-to-openai` / c2o mode, only `providers.openai` is used; `providers.anthropic` can be omitted without affecting startup or request handling.
- `routing.defaultClaudeModel` and `routing.claudeOutputEffort` only affect o2c. `routing.defaultOpenAIModel`, `routing.openAIReasoningEffort`, `routing.openAIUpstreamApi`, and `routing.openAIParallelToolCalls` only affect c2o.
- `routing.openAIUpstreamApi` explicitly selects the OpenAI upstream contract for c2o: the default is `responses`; if your OpenAI-compatible upstream only exposes `POST /v1/chat/completions`, set it to `chat-completions`. This is the switch you want when routing Claude Code through c2o into a chat-completions-only upstream.
- `routing.openAIParallelToolCalls` is a global hard override for c2o tool requests: if omitted, co2 does not send `parallel_tool_calls`; if set to `true` or `false`, co2 forwards that exact value to OpenAI, and only when tools are present.
- `routing.skipInboundFields` lets you explicitly drop known top-level request fields before validation, so you can keep a local gateway working with newer SDK/client fields without waiting for a new co2 release.

`skipInboundFields` is split by inbound protocol:

- `claudeMessages`: applies to `POST /v1/messages`
- `openAIResponses`: applies to `POST /v1/responses`
- `openAIChatCompletions`: applies to `POST /v1/chat/completions`

Behavior:

- If a listed field appears on an inbound request, co2 removes it at the boundary, logs a warn, and continues processing the request.
- Fields not covered by the skip lists (for example an unknown field like `thinkingg`) still fail validation.

Common examples:

- Newer clients send `context_management` on some c2o `/v1/messages` requests. Add it to `skipInboundFields.claudeMessages` to keep those requests working.
- If a new client field breaks validation on `/v1/responses` or `/v1/chat/completions`, add that exact field name to the corresponding skip list instead of waiting for a new release.

Start the gateway with:

```
co2 start --config ./co2.config.json
```
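The skip-list behavior can be sketched in a few lines. This is Python for illustration only — co2 itself is a Node.js CLI, and `strip_skipped_fields` is a hypothetical name mirroring `routing.skipInboundFields`:

```python
import logging

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("co2-sketch")

# Mirrors routing.skipInboundFields from the config above.
SKIP_INBOUND_FIELDS = {
    "claudeMessages": ["context_management"],
    "openAIResponses": [],
    "openAIChatCompletions": [],
}

def strip_skipped_fields(protocol: str, body: dict) -> dict:
    """Drop configured top-level fields before validation, warning per dropped field."""
    cleaned = dict(body)
    for field in SKIP_INBOUND_FIELDS.get(protocol, []):
        if field in cleaned:
            cleaned.pop(field)
            log.warning("dropped inbound field %r on %s request", field, protocol)
    return cleaned
```

Any field not in the list is passed through untouched, so normal validation still applies to it.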
Exposed endpoints depend on `server.mode`:

- `openai-to-claude`: `POST /v1/chat/completions`, `POST /v1/responses`
- `claude-to-openai`: `POST /v1/messages`

Mode naming:

- o2c = OpenAI request -> Claude upstream
- c2o = Claude request -> OpenAI upstream

Each mode translates incoming protocol -> upstream protocol; the response protocol stays aligned with the incoming side by default.

| What protocol your client speaks | What upstream you want | Mode to use |
|---|---|---|
| OpenAI chat/completions / responses | Claude | o2c |
| Claude messages | OpenAI | c2o |
Common cases:

- Clients that speak the OpenAI protocol usually use o2c.
- Clients that speak Claude `/v1/messages` usually use c2o.

Example o2c request:

```
curl http://127.0.0.1:8000/v1/responses \
  -H 'Content-Type: application/json' \
  -d '{
    "model": "gpt-5.4",
    "input": [
      {
        "role": "user",
        "content": [
          { "type": "input_text", "text": "Hello" }
        ]
      }
    ],
    "instructions": "You are concise."
  }'
```
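Under the hood, o2c rewrites a request like the one above into a Claude `/v1/messages` payload. A simplified, text-only sketch of that mapping (Python for illustration; `responses_to_claude` and the defaults shown are hypothetical — the real gateway applies `modelMap` and its routing defaults):

```python
def responses_to_claude(body: dict) -> dict:
    """Sketch: map an OpenAI Responses request to a Claude /v1/messages payload."""
    messages = []
    for item in body.get("input", []):
        parts = []
        for block in item.get("content", []):
            if block.get("type") == "input_text":
                parts.append({"type": "text", "text": block["text"]})
        messages.append({"role": item["role"], "content": parts})
    claude = {
        "model": "claude-opus-4.6",  # chosen by modelMap / routing.defaultClaudeModel
        "max_tokens": 4096,          # placeholder; the gateway picks its own default
        "messages": messages,
    }
    if body.get("instructions"):
        # Responses-style instructions become the top-level Claude system prompt.
        claude["system"] = body["instructions"]
    return claude
```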
After changing server.mode to claude-to-openai:
```
curl http://127.0.0.1:8000/v1/messages \
  -H 'Content-Type: application/json' \
  -d '{
    "model": "claude-opus-4.6",
    "max_tokens": 128,
    "messages": [
      { "role": "user", "content": "Hello" }
    ]
  }'
```
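The reverse mapping in c2o can be sketched similarly (illustrative only; the function name and the token-limit field mapping are assumptions, not the gateway's documented behavior):

```python
def claude_to_chat_completions(body: dict) -> dict:
    """Sketch: map a Claude /v1/messages request to an OpenAI chat.completions payload."""
    messages = []
    if "system" in body:
        # Claude's top-level system prompt becomes a system message.
        messages.append({"role": "system", "content": body["system"]})
    for msg in body["messages"]:
        content = msg["content"]
        if isinstance(content, list):
            # Flatten text blocks for the simple text-only case.
            content = "".join(b["text"] for b in content if b.get("type") == "text")
        messages.append({"role": msg["role"], "content": content})
    return {
        "model": "gpt-5.4",                        # chosen by modelMap / routing defaults
        "max_completion_tokens": body.get("max_tokens"),  # assumed mapping
        "messages": messages,
    }
```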
co2 currently supports image input translation on all request paths:

- o2c `/v1/responses`
- o2c `/v1/chat/completions`
- c2o `/v1/messages`

Supported image sources:

- `https://...` / `http://...` URLs
- `data:image/...;base64,...` data URLs

Other sources are not supported in V1. When an image block or multimodal control field cannot be mapped safely, co2 drops just that field/block, logs a warn, and continues only if the message still contains supported content.
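The drop-and-continue rule can be sketched for the o2c `/v1/responses` shape (illustrative only; `filter_image_blocks` is a hypothetical name, and the real gateway logs a warn per dropped block):

```python
SUPPORTED_PREFIXES = ("https://", "http://", "data:image/")

def filter_image_blocks(blocks: list[dict]) -> list[dict]:
    """Keep text blocks and images with a supported source; drop everything else."""
    kept = []
    for block in blocks:
        if block.get("type") == "input_text":
            kept.append(block)
        elif (block.get("type") == "input_image"
              and str(block.get("image_url", "")).startswith(SUPPORTED_PREFIXES)):
            kept.append(block)
        # Anything else is dropped (the gateway would log a warn here).
    if not kept:
        # No supported content left: the request cannot continue.
        raise ValueError("message has no supported content after filtering")
    return kept
```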
o2c `/v1/responses` with an image URL:

```
curl http://127.0.0.1:8000/v1/responses \
  -H 'Content-Type: application/json' \
  -d '{
    "model": "gpt-5.4",
    "input": [
      {
        "role": "user",
        "content": [
          { "type": "input_text", "text": "Describe this image" },
          { "type": "input_image", "image_url": "https://example.com/a.png" }
        ]
      }
    ]
  }'
```
o2c `/v1/chat/completions` with a base64 data URL:

```
curl http://127.0.0.1:8000/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{
    "model": "gpt-5.4",
    "messages": [
      {
        "role": "user",
        "content": [
          { "type": "text", "text": "Describe this image" },
          { "type": "image_url", "image_url": { "url": "data:image/png;base64,QUFBQQ==" } }
        ]
      }
    ]
  }'
```
c2o `/v1/messages` with a base64 image block:

```
curl http://127.0.0.1:8000/v1/messages \
  -H 'Content-Type: application/json' \
  -d '{
    "model": "claude-opus-4.6",
    "max_tokens": 128,
    "messages": [
      {
        "role": "user",
        "content": [
          {
            "type": "image",
            "source": {
              "type": "base64",
              "media_type": "image/png",
              "data": "QUFBQQ=="
            }
          },
          { "type": "text", "text": "Describe this image" }
        ]
      }
    ]
  }'
```
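For c2o, a Claude base64 image block maps naturally onto the chat-completions data-URL form. A minimal sketch (the function name is hypothetical):

```python
def claude_image_to_openai(block: dict) -> dict:
    """Sketch: Claude base64 image block -> chat.completions image_url content part."""
    src = block["source"]
    if src["type"] != "base64":
        raise ValueError("only base64 sources handled in this sketch")
    # Reassemble the pieces into a data URL the OpenAI protocol understands.
    data_url = f"data:{src['media_type']};base64,{src['data']}"
    return {"type": "image_url", "image_url": {"url": data_url}}
```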
Notes:

- `ANTHROPIC_AUTH_TOKEN` is accepted as a compatibility alias, but if it exists together with `ANTHROPIC_API_KEY`, they must be identical.
- Node.js `>= 20.19.0` is required.