Technical Documentation · 2026

OpenAICode Proxy

A local proxy that bridges Claude Code with OpenAI models: translating requests, routing traffic, and maintaining compatibility between two distinct API ecosystems.

01

The Bridge Between Worlds

If you've ever wondered how to make two systems speak to each other when they were never designed to communicate, you're not alone. This is the story of a translator—a local proxy that lets Claude Code interface with OpenAI models.

At its heart, this project solves a simple problem: Claude Code speaks Anthropic's language, but sometimes you need it to work with OpenAI's infrastructure. Enter LiteLLM and nginx, working in concert to bridge that gap.

Client: Claude Code → Entry point: nginx (:3999) → Translator: LiteLLM (:4000) → Provider: OpenAI API
Fig. 1: The data flow architecture showing how requests traverse from Claude Code through nginx and LiteLLM to reach OpenAI's API.
§
02

The Architecture of Translation

Picture a request leaving Claude Code. It doesn't know where it's really going—it thinks it's talking to Anthropic. But nginx intercepts that request on port 3999, examines it, and makes a decision.

Most requests flow straight to LiteLLM on port 4000, where the real magic happens. LiteLLM takes the Anthropic-formatted request, translates it to OpenAI's format, sends it off, then translates the response back. Claude Code never knows the difference.
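To make that concrete, here is a minimal sketch of the kind of reshaping involved. It is illustrative only, not LiteLLM's actual code: it assumes text-only messages and ignores streaming, tool use, and the many other fields a real request carries.

```python
# Illustrative only: a simplified view of the Anthropic -> OpenAI reshaping
# that the translator performs. Real requests also carry tools, streaming
# flags, and structured content blocks that are omitted here.

MODEL_MAP = {
    "claude-sonnet-4-5-20250929": "gpt-4.1-2025-04-14",  # see Fig. 3
}

def anthropic_to_openai(body: dict) -> dict:
    """Convert an Anthropic Messages API body into an OpenAI chat.completions body."""
    messages = []
    # Anthropic carries the system prompt as a top-level field;
    # OpenAI expects it as the first chat message.
    if body.get("system"):
        messages.append({"role": "system", "content": body["system"]})
    for msg in body.get("messages", []):
        content = msg["content"]
        # Anthropic content may be a list of blocks; flatten the text blocks.
        if isinstance(content, list):
            content = "".join(b.get("text", "") for b in content if b.get("type") == "text")
        messages.append({"role": msg["role"], "content": content})
    return {
        "model": MODEL_MAP.get(body["model"], body["model"]),
        "messages": messages,
        "max_tokens": body.get("max_tokens", 1024),
    }
```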

But there's a subtle complexity here. Claude Code also sends telemetry—logging events, usage data. These requests get routed to a separate interceptor service. It's a Flask application that quietly logs everything, keeping the main translation pipeline clean.

Client → nginx (:3999) → /v1/* → LiteLLM (:4000) → OpenAI
Client → nginx (:3999) → /api/event_logging/* → Interceptor (:3998) → interceptor.log
Fig. 2: Request routing through nginx—API calls flow to LiteLLM while telemetry is directed to the interceptor service.
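The interceptor needs very little code. The sketch below is a hypothetical reconstruction based only on what Fig. 2 shows: a Flask app listening on port 3998 that accepts the /api/event_logging/* traffic nginx forwards and appends it to interceptor.log.

```python
# Hypothetical reconstruction of the telemetry interceptor: a minimal Flask
# app that accepts whatever nginx forwards and appends it to interceptor.log.
import json
import logging

from flask import Flask, request

app = Flask(__name__)
logging.basicConfig(filename="interceptor.log", level=logging.INFO)

@app.route("/api/event_logging/<path:subpath>", methods=["GET", "POST"])
def log_event(subpath):
    # Record the path and body of every telemetry call, then acknowledge it
    # so Claude Code never sees an error from the logging side channel.
    payload = request.get_json(silent=True) or request.data.decode("utf-8", "replace")
    logging.info(json.dumps({"path": subpath, "payload": payload}, default=str))
    return {"status": "ok"}, 200

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=3998)
```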
§
03

Speaking in Tongues

The model mapping is perhaps the most elegant part of this system. When you ask for Claude Sonnet, you get GPT-4.1. Request Claude Opus, and GPT-5.2 answers. Claude Haiku becomes GPT-5.

This isn't just string replacement—it's a complete format translation. Request structures differ, response formats vary, even the way errors are reported changes between APIs. LiteLLM handles all of this invisibly.

MODEL TRANSLATION TABLE
INPUT (Claude)                    OUTPUT (OpenAI)
claude-sonnet-4-5-20250929        gpt-4.1-2025-04-14
claude-opus-4-5-20251101          gpt-5.2-2025-12-11
claude-haiku-4-5-20251001         gpt-5-2025-08-07
Fig. 3: Model mapping diagram showing how Claude model identifiers translate to their OpenAI equivalents.
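The response makes the same trip in reverse. As an illustration of what "response formats vary" means in practice, here is a simplified sketch of reshaping an OpenAI chat completion into the Anthropic Messages shape Claude Code expects; like the request example above, it is a sketch, not LiteLLM's implementation.

```python
# Illustrative only: reshaping an OpenAI chat.completions response into the
# Anthropic Messages response shape that Claude Code expects.
def openai_to_anthropic(resp: dict, requested_model: str) -> dict:
    choice = resp["choices"][0]
    usage = resp.get("usage", {})
    return {
        "id": resp["id"],
        "type": "message",
        "role": "assistant",
        # Anthropic responses carry a list of content blocks rather than a string.
        "content": [{"type": "text", "text": choice["message"]["content"] or ""}],
        # One way to keep the client unaware of the swap: echo back the
        # model name it originally asked for.
        "model": requested_model,
        "stop_reason": "end_turn" if choice.get("finish_reason") == "stop" else "max_tokens",
        "usage": {
            "input_tokens": usage.get("prompt_tokens", 0),
            "output_tokens": usage.get("completion_tokens", 0),
        },
    }
```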
§
04

The Three Pillars

Three services work together inside a single Docker container. nginx sits at the edge, the first thing to touch incoming requests. It's fast, it's reliable, and it's been routing web traffic since 2004.

Behind nginx, LiteLLM does the heavy lifting. It's purpose-built for exactly this kind of API translation work. And tucked away in a corner, the interceptor quietly logs telemetry events to a local file.

Each component has a single responsibility. nginx routes. LiteLLM translates. The interceptor logs. This separation keeps the system maintainable and debuggable.

Fig. 4: Isometric view of the three-component architecture: nginx (port 3999), LiteLLM (port 4000), and the interceptor (port 3998).
§
05

Getting Started

The setup is deliberately simple. Pull the container from GitHub's registry, pass in your OpenAI API key as an environment variable, and map port 3999 to your host. That's it.

For Claude Code to cooperate, you'll need to point it at localhost:3999 and give it a dummy API key. The word 'dummy' is literal here—the proxy expects it. Your real OpenAI credentials live safely in the container's environment.

1. Set environment: OPENAI_API_KEY
2. Pull & run:
   $ docker pull ghcr.io/davezaxh/openai-code:latest
   $ docker run -d -p 3999:3999 -e OPENAI_API_KEY=$KEY ...
3. Configure Claude Code: localhost:3999
Fig. 5: The setup flow: environment configuration, container deployment, and Claude Code integration.
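Claude Code itself is configured through its own settings, but any Anthropic-compatible client can be used to verify that the proxy is answering. The snippet below uses the official anthropic Python SDK and assumes the container is already listening on localhost:3999; "dummy" is the literal placeholder key described above.

```python
# Quick sanity check against the proxy, using the Anthropic Python SDK
# (pip install anthropic). The real OpenAI key stays inside the container;
# the client only ever sends the literal placeholder key "dummy".
import anthropic

client = anthropic.Anthropic(
    base_url="http://localhost:3999",
    api_key="dummy",
)

response = client.messages.create(
    model="claude-sonnet-4-5-20250929",   # remapped by the proxy (see Fig. 3)
    max_tokens=128,
    messages=[{"role": "user", "content": "Reply with a single word: pong"}],
)

print(response.content[0].text)
```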
§
06

When Things Go Wrong

Debugging distributed systems is an art. When the container won't start, check the logs. When the port is busy, find the culprit with lsof. When connections are refused, verify the container is actually running.

The most common issue? Model not found errors. These happen when the model name in your request doesn't exactly match what's defined in config.yaml. Spelling matters. Case matters. The hyphen versus underscore distinction matters.

TROUBLESHOOTING DECISION TREE
Container won't start → docker logs openai-code
Port busy → lsof -i :3999
Model not found → check config.yaml
Fig. 6: Troubleshooting decision tree for common proxy issues and their resolution paths.
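When a model-not-found error appears, the quickest check is to list exactly which names the proxy knows about. The script below assumes the usual LiteLLM layout for config.yaml, a top-level model_list whose entries each carry a model_name; if your file is organized differently, adjust the keys.

```python
# List the model names the proxy will accept, assuming the usual LiteLLM
# config.yaml layout (a top-level model_list whose entries carry model_name).
# Requires PyYAML: pip install pyyaml
import yaml

with open("config.yaml") as fh:
    config = yaml.safe_load(fh)

names = [entry.get("model_name") for entry in config.get("model_list", [])]

print("Models the proxy accepts (must match your request exactly):")
for name in names:
    print(f"  {name}")
```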