# Installation
ContextRouter is distributed as a Python package with optional extras for different providers. This guide covers installation options and initial configuration.
## Requirements
- Python 3.13 or higher
- pip, uv, or another Python package manager
- At least one LLM provider (Vertex AI, OpenAI, or local Ollama)
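The Python floor above can be checked programmatically before you install anything; a minimal sketch (the 3.13 requirement comes from this guide):

```python
import sys

# The guide requires Python 3.13+; this mirrors that check at runtime.
# Adjust the tuple if your deployment targets a different floor.
REQUIRED = (3, 13)

def meets_requirement(version_info=sys.version_info, required=REQUIRED) -> bool:
    # sys.version_info compares element-wise against a plain tuple.
    return version_info >= required

print(meets_requirement((3, 13, 0)))   # True
print(meets_requirement((3, 12, 9)))   # False
```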
## Basic Installation
Install the core package:
```shell
pip install contextrouter
```

This gives you the framework with minimal dependencies. You’ll need to add extras for specific providers.
## Installation with Extras
ContextRouter uses optional dependencies to keep the base package lightweight. Install only what you need:
```shell
# Everything (recommended for development)
pip install contextrouter[all]

# Provider bundles
pip install contextrouter[vertex]            # Google Vertex AI (LLM + Search)
pip install contextrouter[storage]           # PostgreSQL + Google Cloud Storage
pip install contextrouter[models-openai]     # OpenAI + compatible APIs
pip install contextrouter[models-anthropic]  # Anthropic Claude
pip install contextrouter[hf-transformers]   # Local HuggingFace models
pip install contextrouter[observability]     # Langfuse + OpenTelemetry

# Combinations
pip install contextrouter[vertex,storage,observability]
```
## Using uv (Recommended)

uv is a fast, modern Python package manager that we recommend:
```shell
# Install uv if you haven't
curl -LsSf https://astral.sh/uv/install.sh | sh

# Install ContextRouter
uv pip install contextrouter[all]
```
## Development Installation

For contributing or local development:
```shell
git clone https://github.com/ContextRouter/contextrouter.git
cd contextrouter

# Create virtual environment
python -m venv .venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate

# Install in development mode with all extras
pip install -e ".[dev,all]"

# Or with uv
uv pip install -e ".[dev,all]"
```
## Verify Installation

Check that ContextRouter is installed correctly:
```shell
# Check version
python -c "import contextrouter; print(contextrouter.__version__)"

# Or use the CLI
contextrouter --version
```
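If the import fails, it helps to distinguish "not installed" from "installed but broken". A small sketch using the stdlib's `importlib.metadata` (this assumes the distribution name matches the package name, as it does here):

```python
from importlib import metadata

# Returns the installed version of a distribution, or None if absent.
# "contextrouter" is the distribution from this guide; any name works.
def installed_version(dist_name: str):
    try:
        return metadata.version(dist_name)
    except metadata.PackageNotFoundError:
        return None

print(installed_version("definitely-not-a-real-dist"))  # None
```

If this returns a version string but `import contextrouter` still fails, the problem is the environment (wrong interpreter or venv), not the install.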
## Environment Configuration

ContextRouter reads configuration from multiple sources, highest priority first:
1. Runtime settings (passed directly to functions)
2. Environment variables
3. `settings.toml` file
4. Default values
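The precedence above can be pictured as a dictionary merge in which later (higher-priority) sources win. This is an illustration of the rule, not ContextRouter's actual implementation, and the values are made up:

```python
# Lowest-priority source first; each later dict overrides the earlier
# ones, so runtime settings beat env vars, which beat settings.toml,
# which beat the defaults.
defaults      = {"default_llm": "vertex/gemini-2.0-flash", "port": 5432}
toml_settings = {"port": 5433}
env_vars      = {"default_llm": "openai/gpt-4o-mini"}
runtime       = {}

config = {**defaults, **toml_settings, **env_vars, **runtime}
print(config["default_llm"])  # openai/gpt-4o-mini  (env var wins)
print(config["port"])         # 5433                (toml wins over default)
```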
### Option 1: Environment Variables
Create a `.env` file in your project root:
```shell
# Google Vertex AI
VERTEX_PROJECT_ID=your-gcp-project
VERTEX_LOCATION=us-central1

# OpenAI (if using)
OPENAI_API_KEY=sk-...

# Anthropic (if using)
ANTHROPIC_API_KEY=sk-ant-...

# Local models (if using Ollama)
LOCAL_OLLAMA_BASE_URL=http://localhost:11434/v1

# PostgreSQL (if using)
POSTGRES_HOST=localhost
POSTGRES_DATABASE=contextrouter
POSTGRES_USER=postgres
POSTGRES_PASSWORD=your-password
```
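Under the hood, a `.env` loader simply parses `KEY=VALUE` lines into the process environment, skipping comments and blanks. A miniature sketch of that mechanism (ContextRouter and libraries like python-dotenv do this for you):

```python
import io
import os

# A tiny .env sample, inlined for illustration.
env_text = """\
# Google Vertex AI
VERTEX_PROJECT_ID=your-gcp-project
VERTEX_LOCATION=us-central1
"""

for line in io.StringIO(env_text):
    line = line.strip()
    if line and not line.startswith("#"):
        # Split on the first '=' only; values may contain '='.
        key, _, value = line.partition("=")
        os.environ.setdefault(key, value)

print(os.environ["VERTEX_LOCATION"])  # us-central1
```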
### Option 2: Configuration File

Create a `settings.toml` file:
```toml
[models]
default_llm = "vertex/gemini-2.0-flash"
default_embeddings = "vertex/text-embedding-004"

[vertex]
project_id = "your-gcp-project"
location = "us-central1"

[postgres]
host = "localhost"
port = 5432
database = "contextrouter"
user = "postgres"
password = "${POSTGRES_PASSWORD}"  # Can reference env vars

[rag]
provider = "postgres"
reranking_enabled = true
```
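The `${POSTGRES_PASSWORD}` reference in the TOML above is filled in from the environment. The stdlib's `string.Template` demonstrates the same substitution mechanism (a sketch of the idea; ContextRouter's own resolver may differ in details):

```python
import os
from string import Template

# Normally the password is set outside the program (shell, .env, CI).
os.environ["POSTGRES_PASSWORD"] = "s3cret"

raw = 'password = "${POSTGRES_PASSWORD}"'
print(Template(raw).substitute(os.environ))  # password = "s3cret"
```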
## Loading Configuration

ContextRouter automatically detects and loads your configuration:
```python
from contextrouter.core import get_core_config

# Automatically finds .env and settings.toml in the current directory
config = get_core_config()

# Or specify paths explicitly
config = get_core_config(
    env_path="./custom.env",
    toml_path="./custom-settings.toml",
)
```
## Provider-Specific Setup

### Google Vertex AI
- Create a GCP project with Vertex AI enabled
- Set up authentication:

  ```shell
  # Option 1: Application Default Credentials
  gcloud auth application-default login

  # Option 2: Service account
  export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account.json
  ```

- Configure ContextRouter:
  ```toml
  [vertex]
  project_id = "your-project-id"
  location = "us-central1"
  ```
### OpenAI

- Get an API key from platform.openai.com
- Set the environment variable:

  ```shell
  export OPENAI_API_KEY=sk-...
  ```
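A quick sanity check that the key is actually visible to your Python process; the `sk-` prefix is OpenAI's current key convention, and the value below is a placeholder:

```python
import os

# Placeholder so the snippet runs standalone; in practice the key
# comes from your shell or .env file.
os.environ.setdefault("OPENAI_API_KEY", "sk-placeholder")

key = os.environ.get("OPENAI_API_KEY", "")
print(bool(key) and key.startswith("sk-"))  # True
```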
### Local Models (Ollama)

- Install and start Ollama:

  ```shell
  ollama serve
  ollama pull llama3.2
  ```

- Configure ContextRouter:

  ```shell
  export LOCAL_OLLAMA_BASE_URL=http://localhost:11434/v1
  ```
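Ollama's OpenAI-compatible server exposes a `/v1/models` endpoint on that base URL, which makes for a quick reachability probe (this only succeeds with `ollama serve` running; otherwise it prints a diagnostic instead of raising):

```python
import urllib.request

# Probe the endpoint configured above. URLError is a subclass of
# OSError, so a refused connection lands in the except branch.
url = "http://localhost:11434/v1/models"
try:
    with urllib.request.urlopen(url, timeout=2) as resp:
        print("Ollama reachable, HTTP", resp.status)
except OSError as exc:
    print("Ollama not reachable:", exc)
```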
## Next Steps

With ContextRouter installed, move on to:
- Quick Start — Build your first agent
- Models — Configure your LLM provider in detail
- Configuration Reference — Full settings documentation