OpenAI-Compatible API Proxy

A production-ready proxy that accepts standard OpenAI API requests, maps model names to upstream equivalents, and forwards them to a configurable backend — with full streaming support.

Base URL

https://tmpmanueluhn65--c80791de310a11f1a33142dde27851f2.web.val.run

Endpoints

| Method | Path | Description |
| --- | --- | --- |
| GET | /api/v1/models | List available models (OpenAI format) |
| POST | /api/v1/chat/completions | Chat completions (streaming & non-streaming) |
| OPTIONS | * | CORS preflight |
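Dispatch over these three routes can be sketched as below. This is an illustrative outline, not the proxy's actual source; the handler labels are placeholders.

```typescript
// Route an incoming request to one of the proxy's endpoints.
// OPTIONS is matched first for any path, per the "OPTIONS *" row.
function route(method: string, path: string): string {
  if (method === "OPTIONS") return "cors-preflight";
  if (method === "GET" && path === "/api/v1/models") return "list-models";
  if (method === "POST" && path === "/api/v1/chat/completions") return "chat-completions";
  return "not-found";
}
```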

Model Mapping

| Alias | Upstream Model |
| --- | --- |
| gpt-5 | chat-gpt |
| gpt-5-turbo | chat-gpt |
| gpt-4o | chat-gpt |
| gpt-4-turbo | chat-gpt |
| claude-4.6-sonnet | anthropic-claude-sonnet-4-5 |
| gemini-2.5-pro | google-gemini-2-5-pro |
| (unknown) | chat-gpt |
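The mapping above amounts to a lookup table with a default. A minimal sketch (the table contents come from this README; the function name is illustrative):

```typescript
// Alias → upstream model, per the Model Mapping table.
const MODEL_MAP: Record<string, string> = {
  "gpt-5": "chat-gpt",
  "gpt-5-turbo": "chat-gpt",
  "gpt-4o": "chat-gpt",
  "gpt-4-turbo": "chat-gpt",
  "claude-4.6-sonnet": "anthropic-claude-sonnet-4-5",
  "gemini-2.5-pro": "google-gemini-2-5-pro",
};

// Unknown aliases fall back to chat-gpt, per the "(unknown)" row.
function mapModel(alias: string): string {
  return MODEL_MAP[alias] ?? "chat-gpt";
}
```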

Usage

List models

```shell
curl https://tmpmanueluhn65--c80791de310a11f1a33142dde27851f2.web.val.run/api/v1/models
```

Chat completion (non-streaming)

```shell
curl -X POST https://tmpmanueluhn65--c80791de310a11f1a33142dde27851f2.web.val.run/api/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_UPSTREAM_TOKEN" \
  -d '{
    "model": "gpt-5",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```

Chat completion (streaming)

```shell
curl -X POST https://tmpmanueluhn65--c80791de310a11f1a33142dde27851f2.web.val.run/api/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_UPSTREAM_TOKEN" \
  -d '{
    "model": "claude-4.6-sonnet",
    "messages": [{"role": "user", "content": "Hello!"}],
    "stream": true
  }'
```

Architecture

(Mermaid architecture diagram omitted.)

Auth

Pass your upstream API token as a Bearer token. The proxy validates the header format and forwards the token to the upstream provider. No tokens are stored.
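The header-format check described above can be sketched as follows. This is a hedged illustration of the idea, not the proxy's actual code; the function name and regex are assumptions.

```typescript
// Extract the token from an "Authorization: Bearer <token>" header.
// Returns the raw token to forward upstream, or null if the header
// is missing or malformed. The token itself is never stored.
function extractBearerToken(authHeader: string | null): string | null {
  if (!authHeader) return null;
  const match = authHeader.match(/^Bearer\s+(\S+)$/);
  return match ? match[1] : null;
}
```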

Streaming Strategy

When stream: true, the proxy:

  1. Fetches the full response from upstream with stream: false
  2. Splits the content into ~4-character chunks
  3. Emits SSE events: first chunk (role), content chunks, finish chunk, [DONE]
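The chunking and event-framing steps above can be sketched like this. The ~4-character split and the `data: … [DONE]` framing come from this README; the exact payload fields shown are the standard OpenAI chat-completion-chunk shape and are illustrative, not lifted from the proxy's source.

```typescript
// Step 2: split the full upstream content into ~4-character chunks.
function chunkContent(content: string, size = 4): string[] {
  const chunks: string[] = [];
  for (let i = 0; i < content.length; i += size) {
    chunks.push(content.slice(i, i + size));
  }
  return chunks;
}

// Step 3: frame one chunk as an OpenAI-style SSE data event.
function sseEvent(
  delta: Record<string, unknown>,
  finishReason: string | null,
  id: string,
  model: string,
): string {
  const payload = {
    id,
    object: "chat.completion.chunk",
    created: Math.floor(Date.now() / 1000),
    model,
    choices: [{ index: 0, delta, finish_reason: finishReason }],
  };
  return `data: ${JSON.stringify(payload)}\n\n`;
}
```

The stream would then emit `sseEvent({ role: "assistant" }, null, …)` first, one `sseEvent({ content: chunk }, null, …)` per chunk, a final `sseEvent({}, "stop", …)`, and `data: [DONE]\n\n`.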