OneRouter provides a unified API that gives you access to hundreds of AI models through a single endpoint, while automatically handling fallbacks and selecting the most cost-effective options. Get started with just a few lines of code using your preferred SDK or framework.
To start using OneRouter, create an account and get your API key. After that, explore our API reference for more details, or jump straight into the first example below.
from openai import OpenAI

client = OpenAI(
    base_url="https://llm.onerouter.pro/v1",
    api_key="<API_KEY>",
)

completion = client.chat.completions.create(
    model="claude-3-5-sonnet@20240620",
    messages=[
        {
            "role": "user",
            "content": "What is the meaning of life?"
        }
    ]
)

print(completion.choices[0].message.content)
import OpenAI from 'openai';

const openai = new OpenAI({
  baseURL: 'https://llm.onerouter.pro/v1',
  apiKey: '<API_KEY>',
});

async function main() {
  const completion = await openai.chat.completions.create({
    model: 'claude-3-5-sonnet@20240620',
    messages: [
      {
        role: 'user',
        content: 'What is the meaning of life?',
      },
    ],
  });

  console.log(completion.choices[0].message);
}

main();
import requests
import json

response = requests.post(
    url="https://llm.onerouter.pro/v1/chat/completions",
    headers={
        "Authorization": "Bearer <API_KEY>",
        "Content-Type": "application/json"
    },
    data=json.dumps({
        "model": "claude-3-5-sonnet@20240620",
        "messages": [
            {
                "role": "user",
                "content": "What is the meaning of life?"
            }
        ]
    })
)

print(response.json()["choices"][0]["message"]["content"])
fetch('https://llm.onerouter.pro/v1/chat/completions', {
  method: 'POST',
  headers: {
    Authorization: 'Bearer <API_KEY>',
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    model: 'claude-3-5-sonnet@20240620',
    messages: [
      {
        role: 'user',
        content: 'What is the meaning of life?',
      },
    ],
  }),
})
  .then((res) => res.json())
  .then((data) => console.log(data.choices[0].message.content));
curl https://llm.onerouter.pro/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $API_KEY" \
  -d '{
    "model": "claude-3-5-sonnet@20240620",
    "messages": [
      {
        "role": "user",
        "content": "What is the meaning of life?"
      }
    ]
  }'
The API also supports streaming.
For information about using third-party SDKs and frameworks with OneRouter, please see our frameworks documentation.
OneRouter provides a unified API to access all the major LLM models on the market. It also lets you aggregate your billing in one place and track all of your usage with our analytics.
OneRouter passes through the pricing of the underlying providers while pooling their uptime: you pay the same rate you’d get from each provider directly, with a unified API and automatic fallbacks for much better uptime.
To get started, create an account and add credits on the Credits page. Credits are simply deposits on OneRouter that you use for LLM inference. When you use the API or chat interface, we deduct the request cost from your credits. Each model and provider has a different price per million tokens.
Once you have credits, you can create API keys and start using the API. You can read our quickstart guide for code samples and more.
The best way to get support is to submit an issue.
Each model’s pricing is displayed per million tokens, usually with different rates for prompt and completion tokens. Some models also charge per request, per image, or for reasoning tokens. All of these details are visible on the Logs tab.
When you make a request to OneRouter, we receive the total number of tokens processed by the provider, calculate the corresponding cost, and deduct it from your credits. You can review your complete usage history on the Activity page.
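As a quick illustration, here is how a request’s cost falls out of the token counts. The per-million-token prices below are hypothetical placeholders, not OneRouter’s actual rates; check the Models tab for real pricing.

# Hypothetical per-million-token prices; see the Models tab for real rates.
PROMPT_PRICE_PER_M = 3.00       # USD per 1M prompt tokens (assumed)
COMPLETION_PRICE_PER_M = 15.00  # USD per 1M completion tokens (assumed)

def request_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Cost in USD deducted from your credits for one request."""
    return (
        prompt_tokens * PROMPT_PRICE_PER_M
        + completion_tokens * COMPLETION_PRICE_PER_M
    ) / 1_000_000

# e.g. a request with 1,200 prompt tokens and 350 completion tokens
print(f"${request_cost(1200, 350):.6f}")  # -> $0.008850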
You can also add the usage: {include: true} parameter to your chat request to receive usage information in the response.
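For example, with the raw HTTP API the parameter goes in the request body. This is a minimal sketch; the shape of the returned usage object (prompt_tokens, completion_tokens, and so on) is assumed to follow the OpenAI-style schema.

import requests

response = requests.post(
    url="https://llm.onerouter.pro/v1/chat/completions",
    headers={
        "Authorization": "Bearer <API_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "claude-3-5-sonnet@20240620",
        "messages": [{"role": "user", "content": "Hello!"}],
        "usage": {"include": True},  # ask for usage details in the response
    },
)

# Assumed OpenAI-style usage object, e.g. prompt_tokens / completion_tokens
print(response.json()["usage"])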
We offer discounts ranging from 20% to 80%, depending on the underlying provider’s pricing.
OneRouter charges a $0.35 + 5% fee when you purchase credits; for example, a $100 credit purchase costs $105.35 in total. We pass through the pricing of the underlying model providers without any markup, so you pay the same rate as you would directly with the provider.
For model pricing details, see the Models tab. For the cost of each individual request, see the Logs tab.
Regardless of whether a cached result is used or a new prompt is processed, billing follows the Prompt Cache rate defined in our pricing documentation. This applies to every request, since Prompt Cache is always active in OneRouter.

OneRouter provides access to a wide variety of LLM models, including frontier models from major AI labs.
For a complete list of models, visit the Models tab or fetch the list through the models API.
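A minimal sketch of fetching the model list. It assumes the endpoint is served at https://llm.onerouter.pro/v1/models alongside the chat completions endpoint from the quickstart, and that it returns an OpenAI-style list; the API reference also mentions /api/v1/models, so check the docs for the exact path and schema.

import requests

response = requests.get(
    url="https://llm.onerouter.pro/v1/models",  # assumed path; see API docs
    headers={"Authorization": "Bearer <API_KEY>"},
)

# Assumed OpenAI-style list schema: {"data": [{"id": ...}, ...]}
for model in response.json()["data"]:
    print(model["id"])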
We work on adding models as quickly as we can. We often have partnerships with the labs releasing them, so we can make new models available as soon as they launch.
If there is a model missing that you’d like OneRouter to support, feel free to open an issue.
If you would like to contact us, the best place to reach us is over email.
If a provider returns an error, OneRouter will automatically fall back to the next provider. This happens transparently to the user and makes production apps much more resilient.
OneRouter supports three authentication methods; see our API documentation for details.
OneRouter implements the OpenAI API specification for /completions and /chat/completions endpoints, allowing you to use any model with the same request/response format.
Additional endpoints like /api/v1/models are also available. See our API documentation for detailed specifications.
The API supports text and images. Images can be passed as URLs or as base64-encoded data. PDF and other file types are coming soon.
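For example, an image can be included in a message using the OpenAI-style content-parts format, which OneRouter’s OpenAI-compatible endpoint should accept; the image URL below is a placeholder.

from openai import OpenAI

client = OpenAI(
    base_url="https://llm.onerouter.pro/v1",
    api_key="<API_KEY>",
)

completion = client.chat.completions.create(
    model="claude-3-5-sonnet@20240620",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is in this image?"},
                {
                    "type": "image_url",
                    # A URL or a base64 data URI; this one is a placeholder.
                    "image_url": {"url": "https://example.com/photo.jpg"},
                },
            ],
        }
    ],
)

print(completion.choices[0].message.content)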
Streaming uses server-sent events (SSE) for real-time token delivery.
Set stream: true in your request to enable streaming responses.
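A minimal streaming sketch using the OpenAI SDK, which consumes the SSE stream and yields incremental deltas:

from openai import OpenAI

client = OpenAI(
    base_url="https://llm.onerouter.pro/v1",
    api_key="<API_KEY>",
)

stream = client.chat.completions.create(
    model="claude-3-5-sonnet@20240620",
    messages=[{"role": "user", "content": "What is the meaning of life?"}],
    stream=True,  # enables server-sent events under the hood
)

# Print tokens as they arrive
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)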
OneRouter is a drop-in replacement for OpenAI. Therefore, any SDKs that support OpenAI by default also support OneRouter. Check out our docs for more details.
Yes! You can send text, images, PDFs, and audio in the same request. The model will process all inputs together.
Yes. Prompt Cache is enabled by default in OneRouter for all API calls. This means that whenever you send a request, OneRouter will attempt to use the cached prompt/response if applicable.
At this time, Prompt Cache is permanently enabled in OneRouter and cannot be turned off. The design ensures consistent performance optimization and uniform billing.
Please see our Terms of Service and Privacy Policy.
We log basic request metadata (timestamps, model used, token counts). Prompts and completions are not logged by default: we do zero logging of your prompts and completions, even if an error occurs.
OneRouter is a proxy that forwards your requests to the model provider for completion. We work with all providers to ensure, where possible, that prompts and completions are not logged or used for training. Providers that do log, or whose policy we have been unable to confirm, will not be routed to unless the model training toggle is switched on in your privacy settings.
OneRouter uses a credit system where the base currency is US dollars.
All of the pricing on our site and API is denoted in dollars. Users can top up their balance manually.
Per our terms, we reserve the right to expire unused credits one year after purchase.
If you paid using Stripe, there is occasionally an issue with the Stripe integration and credits can be delayed in showing up on your account. Please allow up to one hour. If your credits still have not appeared after an hour, contact us by email and we will look into it.
The Activity page allows you to view your historic usage and filter it by model, provider, and API key.
We also provide a Logs page with live information about your account balance and remaining credits.
OneRouter does not currently offer volume discounts, but you can reach out to us over email if you think you have an exceptional use case.
We accept all major credit cards, Alipay, PayPal, and WeChat Pay. If there are any payment methods you would like us to support, please open an issue.
Our activity dashboard provides real-time usage metrics. If you would like any specific reports or metrics please contact us.
The best way to reach us is to submit a new issue or email us.
Video modality support is coming soon! We’re working on adding video processing capabilities to expand our multimodal offerings.