400+ models, one endpoint.
Model IDs follow the OpenRouter naming convention: `provider/model-name`.
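Because every ID is a single `provider/model-name` string, it can be split programmatically. A minimal sketch (an illustrative helper, not part of any SDK):

```python
# Split an OpenRouter-style model ID into its provider and
# model-name parts. partition() splits at the first "/".
def parse_model_id(model_id: str) -> tuple[str, str]:
    provider, _, name = model_id.partition("/")
    return provider, name

provider, name = parse_model_id("anthropic/claude-sonnet-4-6")
# provider == "anthropic", name == "claude-sonnet-4-6"
```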
```python
# Format: provider/model-name
model = "openai/gpt-4o"
model = "anthropic/claude-sonnet-4-6"
model = "meta-llama/llama-3.1-70b-instruct"
```
| Model ID | Provider | Notes |
|---|---|---|
| openai/gpt-4o | OpenAI | Best all-round, multimodal |
| openai/gpt-4o-mini | OpenAI | Fast & cheap, great for simple tasks |
| anthropic/claude-sonnet-4-6 | Anthropic | Top reasoning & coding |
| anthropic/claude-haiku-4-5-20251001 | Anthropic | Fast & cost-efficient |
| google/gemini-pro-1.5 | Google | Long context (1M tokens) |
| google/gemini-flash-1.5 | Google | Very fast, low cost |
| mistralai/mistral-large | Mistral | Strong EU-based model |
| mistralai/mistral-7b-instruct | Mistral | Lightweight, open-source |
| meta-llama/llama-3.1-70b-instruct | Meta | Powerful open-source |
| meta-llama/llama-3.1-8b-instruct | Meta | Fastest open-source option |
| deepseek/deepseek-chat | DeepSeek | Excellent price/perf ratio |
| qwen/qwen-2.5-72b-instruct | Alibaba | Multilingual, strong on code |
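The table pairs each provider's flagship with a faster, cheaper sibling, so a common pattern is to choose the model per request by cost tier. A minimal sketch; the tier names and the mapping are illustrative choices, not a feature of the API:

```python
# Map a cost tier to a model ID from the table above.
# Tier names and assignments are illustrative assumptions.
MODEL_TIERS = {
    "cheap": "openai/gpt-4o-mini",
    "balanced": "openai/gpt-4o",
    "reasoning": "anthropic/claude-sonnet-4-6",
}

def pick_model(tier: str) -> str:
    # Fall back to the balanced tier for unknown labels.
    return MODEL_TIERS.get(tier, MODEL_TIERS["balanced"])
```

The chosen ID is then passed as the `model` argument of a normal chat completion call.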
```python
# Smart routing — picks the best model for the task
response = client.chat.completions.create(
    model="x420/auto",
    messages=[{"role": "user", "content": "Summarize this document..."}],
)
```

All models available on OpenRouter are accessible. Use the OpenRouter model explorer to browse the full catalog, then pass the model ID directly.
```python
# List all available models via the API
models = client.models.list()
for model in models:
    print(model.id)
```
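Since IDs are prefixed with their provider, the listed catalog can be narrowed by provider with a simple string filter. A sketch over a static sample list, standing in for the IDs returned by `client.models.list()`:

```python
# Keep only the model IDs belonging to one provider, relying on
# the provider/model-name convention. Sample IDs are stand-ins
# for the live catalog.
def models_for_provider(model_ids: list[str], provider: str) -> list[str]:
    prefix = provider + "/"
    return [m for m in model_ids if m.startswith(prefix)]

catalog = [
    "openai/gpt-4o",
    "openai/gpt-4o-mini",
    "meta-llama/llama-3.1-70b-instruct",
]
print(models_for_provider(catalog, "openai"))
# ['openai/gpt-4o', 'openai/gpt-4o-mini']
```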