API Reference
ClosedRouter is a fully OpenAI-compatible proxy. Point your existing SDK or HTTP client at our endpoint and route requests to every major AI provider through a single API key.
Quickstart
Get an API key
Sign up at accounts.closedrouter.net (invite-only beta). Your API key is generated automatically.
Make your first request
curl https://api.closedrouter.net/v1/chat/completions \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "openai/gpt-4o-mini",
"messages": [
{"role": "user", "content": "Hello! What model am I talking to?"}
]
}'

Response:

{
"id": "chatcmpl-abc123",
"object": "chat.completion",
"model": "openai/gpt-4o-mini",
"choices": [{
"message": {
"role": "assistant",
"content": "You're talking to GPT-4o-mini, routed through ClosedRouter!"
},
"finish_reason": "stop"
}],
"usage": {
"prompt_tokens": 12,
"completion_tokens": 14,
"total_tokens": 26
}
}

Swap providers instantly
Change the model field to route to any provider. Same request shape, different model:
# Google Gemini
curl https://api.closedrouter.net/v1/chat/completions \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{"model": "google/gemini-2.5-flash", "messages": [{"role": "user", "content": "Hi"}]}'
# xAI Grok
curl https://api.closedrouter.net/v1/chat/completions \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{"model": "x-ai/grok-3", "messages": [{"role": "user", "content": "Hi"}]}'
# Zhipu GLM
curl https://api.closedrouter.net/v1/chat/completions \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{"model": "z-ai/glm-5.1", "messages": [{"role": "user", "content": "Hi"}]}' Authentication
Include your API key in the Authorization header as a Bearer token:
Authorization: Bearer sk-xxxxxxxxxxxxxxxx

Models
Models are identified by provider/model-name. Browse all available models at closedrouter.net/models.
| Model ID | Provider | Context |
|---|---|---|
| openai/gpt-4o-mini | OpenAI | 128K |
| google/gemini-2.5-flash | Google | 1M |
| x-ai/grok-3 | xAI | 131K |
| z-ai/glm-5.1 | Zhipu | 128K |
| deepseek/deepseek-chat-v3.1 | DeepSeek | 128K |
| moonshotai/kimi-k2.5 | Moonshot | 128K |
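Since every model ID follows the provider/model-name convention, client code can recover the provider by splitting on the first slash. A small sketch (the helper below is illustrative, not part of any SDK):

```python
def split_model_id(model_id: str) -> tuple[str, str]:
    """Split a ClosedRouter model ID into (provider, model name)."""
    provider, _, name = model_id.partition("/")
    return provider, name

print(split_model_id("openai/gpt-4o-mini"))  # ('openai', 'gpt-4o-mini')
```

Using partition rather than split keeps model names containing further slashes intact.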
List models
curl https://api.closedrouter.net/v1/models \
-H "Authorization: Bearer YOUR_API_KEY" Chat completions
POST /v1/chat/completions
Drop-in replacement for the OpenAI chat completions endpoint. Supports all standard parameters.
| Parameter | Type | Description |
|---|---|---|
| model | string | Required. Model ID in provider/model format. |
| messages | array | Required. Array of message objects with role and content. |
| temperature | number | Sampling temperature (0-2). Default: 1. |
| max_tokens | integer | Max tokens in the response. |
| top_p | number | Nucleus sampling threshold. |
| stream | boolean | Stream partial results via SSE. Default: false. |
| stop | string/array | Stop sequences. |
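Combining several of the optional parameters, the JSON body for a single request might look like this (a sketch; the specific values are illustrative, and Python is used here only to build and inspect the payload):

```python
import json

# Request body for POST /v1/chat/completions, combining optional parameters.
payload = {
    "model": "openai/gpt-4o-mini",
    "messages": [{"role": "user", "content": "List three colors."}],
    "temperature": 0.2,  # lower values make sampling less random (range 0-2)
    "max_tokens": 50,    # upper bound on the completion length
    "stop": ["\n\n"],    # cut generation off at the first blank line
    "stream": False,     # set True to receive SSE chunks instead
}
print(json.dumps(payload, indent=2))
```

The same dictionary can be passed as keyword arguments to an OpenAI-compatible SDK's chat.completions.create call.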
Streaming
Set stream: true to receive partial results as server-sent events:
curl https://api.closedrouter.net/v1/chat/completions \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "openai/gpt-4o-mini",
"stream": true,
"messages": [{"role": "user", "content": "Count to 5"}]
}'

data: {"id":"chatcmpl-abc","object":"chat.completion.chunk","choices":[{"delta":{"content":"1,"},"index":0}]}
data: {"id":"chatcmpl-abc","object":"chat.completion.chunk","choices":[{"delta":{"content":"2,"},"index":0}]}
data: {"id":"chatcmpl-abc","object":"chat.completion.chunk","choices":[{"delta":{"content":"3,"},"index":0}]}
data: [DONE]

Using the OpenAI SDK
Set the base_url to ClosedRouter. Everything else stays the same.
Python
from openai import OpenAI
client = OpenAI(
base_url="https://api.closedrouter.net/v1",
api_key="YOUR_API_KEY"
)
response = client.chat.completions.create(
model="openai/gpt-4o-mini",
messages=[{"role": "user", "content": "Hello!"}]
)
print(response.choices[0].message.content)

TypeScript / Node
import OpenAI from "openai";
const client = new OpenAI({
baseURL: "https://api.closedrouter.net/v1",
apiKey: "YOUR_API_KEY",
});
const response = await client.chat.completions.create({
model: "openai/gpt-4o-mini",
messages: [{ role: "user", content: "Hello!" }],
});
console.log(response.choices[0].message.content);

For AI agents and tools
ClosedRouter works with any tool that accepts an OpenAI-compatible endpoint:
OPENAI_BASE_URL=https://api.closedrouter.net/v1
OPENAI_API_KEY=YOUR_API_KEY

Errors
| Status | Code | Meaning |
|---|---|---|
| 401 | authentication_error | Invalid or missing API key. |
| 403 | forbidden | Account does not have access to this model. |
| 404 | model_not_found | The requested model does not exist. |
| 429 | rate_limit_exceeded | Too many requests. Retry after the interval indicated by the x-ratelimit-reset header. |
| 500 | upstream_error | The upstream provider returned an error. |
| 502 | bad_gateway | Could not reach the upstream provider. |
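Because the error table mirrors plain HTTP semantics, a thin retry policy can key off the status code alone. The sketch below is an assumption about which statuses are worth retrying (rate limiting and transient upstream failures), not part of ClosedRouter:

```python
# Transient conditions: rate limiting, upstream errors, unreachable provider.
RETRYABLE_STATUSES = {429, 500, 502}

def should_retry(status: int) -> bool:
    """Return True when a request is worth retrying with backoff."""
    return status in RETRYABLE_STATUSES

# Client-side mistakes (bad key, unknown model) should fail fast instead.
print(should_retry(429), should_retry(401))  # True False
```

Pairing this with exponential backoff keeps clients well behaved when an upstream provider degrades.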
Rate limits
During the closed beta, the following rate limits apply:
| Limit | Value |
|---|---|
| Requests per minute | 60 |
| Tokens per minute | 200,000 |
| Max context length | Model-dependent |
Rate limit headers are included in every response:
x-ratelimit-limit: 60
x-ratelimit-remaining: 58
x-ratelimit-reset: 30
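These headers make client-side throttling straightforward: when x-ratelimit-remaining reaches zero, wait out the reset interval before sending the next request. A minimal sketch, assuming x-ratelimit-reset is a number of seconds (the helper function is illustrative):

```python
import time

def wait_if_throttled(headers: dict[str, str]) -> float:
    """Sleep until the rate limit window resets; return the seconds slept."""
    remaining = int(headers.get("x-ratelimit-remaining", "1"))
    reset = float(headers.get("x-ratelimit-reset", "0"))
    if remaining <= 0:
        time.sleep(reset)
        return reset
    return 0.0

# Mirroring the example headers above: 58 requests left, so no wait is needed.
print(wait_if_throttled({"x-ratelimit-limit": "60",
                         "x-ratelimit-remaining": "58",
                         "x-ratelimit-reset": "30"}))  # 0.0
```

In a real client this check would run after every response, using the headers returned by the HTTP library.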