
Models

400+ models, one endpoint.
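Every snippet on this page assumes an OpenAI-compatible client pointed at the X420.ai endpoint. The base URL and environment variable name below are placeholders, not confirmed values; check your dashboard for the real ones.

```python
import os

from openai import OpenAI

# Placeholder base URL and key variable — substitute the values
# from your X420.ai dashboard.
client = OpenAI(
    base_url="https://api.x420.ai/v1",
    api_key=os.environ["X420_API_KEY"],
)
```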

How to specify a model

Model IDs follow the OpenRouter naming convention: provider/model-name.

# Format: provider/model-name
model = "openai/gpt-4o"
model = "anthropic/claude-sonnet-4-6"
model = "meta-llama/llama-3.1-70b-instruct"
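Because every ID follows the same provider/model-name shape, the provider can be recovered with a single split. The helper below is illustrative, not part of any SDK:

```python
def split_model_id(model_id: str) -> tuple[str, str]:
    """Split an OpenRouter-style model ID into (provider, model name)."""
    provider, _, name = model_id.partition("/")
    return provider, name

print(split_model_id("openai/gpt-4o"))  # → ('openai', 'gpt-4o')
```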

Popular models

| Model ID | Provider | Notes |
| --- | --- | --- |
| openai/gpt-4o | OpenAI | Best all-round, multimodal |
| openai/gpt-4o-mini | OpenAI | Fast & cheap, great for simple tasks |
| anthropic/claude-sonnet-4-6 | Anthropic | Top reasoning & coding |
| anthropic/claude-haiku-4-5-20251001 | Anthropic | Fast & cost-efficient |
| google/gemini-pro-1.5 | Google | Long context (1M tokens) |
| google/gemini-flash-1.5 | Google | Very fast, low cost |
| mistralai/mistral-large | Mistral | Strong EU-based model |
| mistralai/mistral-7b-instruct | Mistral | Lightweight, open-source |
| meta-llama/llama-3.1-70b-instruct | Meta | Powerful open-source |
| meta-llama/llama-3.1-8b-instruct | Meta | Fastest open-source option |
| deepseek/deepseek-chat | DeepSeek | Excellent price/perf ratio |
| qwen/qwen-2.5-72b-instruct | Alibaba | Multilingual, strong on code |

Smart routing (beta)

Set model to x420/auto and X420.ai will route each request to the most cost-efficient model that meets the quality bar for the task. Smart routing is currently in beta.

# Smart routing — picks best model for the task
response = client.chat.completions.create(
    model="x420/auto",
    messages=[{"role": "user", "content": "Summarize this document..."}],
)

Full model list

All models available on OpenRouter are accessible. Use the OpenRouter model explorer to browse the full catalog, then pass the model ID directly.

# List all available models via the API
models = client.models.list()
for model in models:
    print(model.id)
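
The listing above combines with the provider/model-name convention to summarize the catalog by provider. The sketch below runs on a hard-coded sample; in practice the IDs would come from client.models.list():

```python
from collections import Counter

# Sample IDs — in practice, collect model.id from client.models.list().
model_ids = [
    "openai/gpt-4o",
    "openai/gpt-4o-mini",
    "anthropic/claude-sonnet-4-6",
    "meta-llama/llama-3.1-70b-instruct",
]

# Count models per provider by taking the prefix before the first "/".
models_per_provider = Counter(mid.split("/", 1)[0] for mid in model_ids)
print(models_per_provider.most_common())
```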