
Installation

ContextRouter is distributed as a Python package with optional extras for different providers. This guide covers installation options and initial configuration.

Requirements

  • Python 3.13 or higher
  • pip, uv, or another Python package manager
  • At least one LLM provider (Vertex AI, OpenAI, or local Ollama)
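
Before installing, you can confirm that your interpreter meets the version floor:

import sys

# ContextRouter requires Python 3.13 or newer.
assert sys.version_info >= (3, 13), f"Python 3.13+ required, found {sys.version}"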

Basic Installation

Install the core package:

pip install contextrouter

This gives you the framework with minimal dependencies. You’ll need to add extras for specific providers.

Installation with Extras

ContextRouter uses optional dependencies to keep the base package lightweight. Install only what you need:

# Everything (recommended for development)
pip install "contextrouter[all]"
# Provider bundles (quote the extras so shells like zsh don't treat the brackets as glob patterns)
pip install "contextrouter[vertex]"            # Google Vertex AI (LLM + Search)
pip install "contextrouter[storage]"           # PostgreSQL + Google Cloud Storage
pip install "contextrouter[models-openai]"     # OpenAI + compatible APIs
pip install "contextrouter[models-anthropic]"  # Anthropic Claude
pip install "contextrouter[hf-transformers]"   # Local HuggingFace models
pip install "contextrouter[observability]"     # Langfuse + OpenTelemetry
# Combinations
pip install "contextrouter[vertex,storage,observability]"

uv is a fast, modern Python package manager that we recommend:

# Install uv if you haven't
curl -LsSf https://astral.sh/uv/install.sh | sh
# Install ContextRouter
uv pip install "contextrouter[all]"

Development Installation

For contributing or local development:

git clone https://github.com/ContextRouter/contextrouter.git
cd contextrouter
# Create virtual environment
python -m venv .venv
source .venv/bin/activate # On Windows: .venv\Scripts\activate
# Install in development mode with all extras
pip install -e ".[dev,all]"
# Or with uv
uv pip install -e ".[dev,all]"

Verify Installation

Check that ContextRouter is installed correctly:

# Check version
python -c "import contextrouter; print(contextrouter.__version__)"
# Or use the CLI
contextrouter --version

Environment Configuration

ContextRouter reads configuration from multiple sources, listed here from highest to lowest priority:

  1. Runtime settings (passed directly to functions)
  2. Environment variables
  3. settings.toml file
  4. Default values
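
Because environment variables outrank settings.toml, you can override a single value without touching the file. A minimal sketch, assuming VERTEX_LOCATION maps onto the [vertex] location key as in the examples below:

import os

from contextrouter.core import get_core_config

# settings.toml may pin location = "us-central1", but the environment
# variable sits higher in the priority order, so it wins for this process.
os.environ["VERTEX_LOCATION"] = "europe-west4"
config = get_core_config()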

Option 1: Environment Variables

Create a .env file in your project root:

.env
# Google Vertex AI
VERTEX_PROJECT_ID=your-gcp-project
VERTEX_LOCATION=us-central1
# OpenAI (if using)
OPENAI_API_KEY=sk-...
# Anthropic (if using)
ANTHROPIC_API_KEY=sk-ant-...
# Local models (if using Ollama)
LOCAL_OLLAMA_BASE_URL=http://localhost:11434/v1
# PostgreSQL (if using)
POSTGRES_HOST=localhost
POSTGRES_DATABASE=contextrouter
POSTGRES_USER=postgres
POSTGRES_PASSWORD=your-password

Option 2: Configuration File

Create a settings.toml file:

settings.toml
[models]
default_llm = "vertex/gemini-2.0-flash"
default_embeddings = "vertex/text-embedding-004"

[vertex]
project_id = "your-gcp-project"
location = "us-central1"

[postgres]
host = "localhost"
port = 5432
database = "contextrouter"
user = "postgres"
password = "${POSTGRES_PASSWORD}" # Can reference env vars

[rag]
provider = "postgres"
reranking_enabled = true

Loading Configuration

ContextRouter automatically detects and loads your configuration:

from contextrouter.core import get_core_config

# Automatically finds .env and settings.toml in the current directory
config = get_core_config()

# Or specify paths explicitly
config = get_core_config(
    env_path="./custom.env",
    toml_path="./custom-settings.toml",
)
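
Once loaded, the config object exposes the merged settings. The accessors below are illustrative assumptions that mirror the settings.toml sections above; check the API reference for the actual schema:

# Hypothetical accessors: names assumed from the settings.toml layout.
config = get_core_config()
print(config.models.default_llm)   # e.g. "vertex/gemini-2.0-flash"
print(config.vertex.project_id)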

Provider-Specific Setup

Google Vertex AI

  1. Create a GCP project with Vertex AI enabled
  2. Set up authentication:
# Option 1: Application Default Credentials
gcloud auth application-default login
# Option 2: Service account
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account.json
  3. Configure ContextRouter:
[vertex]
project_id = "your-project-id"
location = "us-central1"
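
To confirm that credentials resolve before running ContextRouter, you can check Application Default Credentials directly. A minimal sketch; it assumes the google-auth package is available (the [vertex] extra is expected to pull it in):

import google.auth

# Resolves Application Default Credentials and reports the active project.
credentials, project = google.auth.default()
print(f"Authenticated against project: {project}")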

OpenAI

  1. Get an API key from platform.openai.com
  2. Set the environment variable:
export OPENAI_API_KEY=sk-...
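
As a quick sanity check, list a model through the official client. A sketch; it assumes the models-openai extra installs the openai package:

from openai import OpenAI

# OpenAI() reads OPENAI_API_KEY from the environment; listing models
# confirms the key is accepted.
client = OpenAI()
print(client.models.list().data[0].id)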

Local Models (Ollama)

  1. Install and start Ollama:
ollama serve
ollama pull llama3.2
  2. Configure ContextRouter:
export LOCAL_OLLAMA_BASE_URL=http://localhost:11434/v1
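
To verify the endpoint is reachable, query Ollama's OpenAI-compatible model listing; this sketch uses only the standard library:

import json
import urllib.request

# Ollama exposes an OpenAI-compatible API; /v1/models lists pulled models.
with urllib.request.urlopen("http://localhost:11434/v1/models") as resp:
    models = json.load(resp)
print([m["id"] for m in models["data"]])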

Next Steps

With ContextRouter installed, move on to: