This page documents all configuration options available in ContextRouter.
## Configuration Loading

Settings are loaded in this order (later sources override earlier ones):

- Built-in defaults
- Environment variables (`.env` file and system environment)
- TOML file (`settings.toml`)
- Runtime settings (passed to functions)
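Conceptually, this layering is a dictionary merge in which later sources win. A minimal sketch of the idea (the function and the layer contents are illustrative, not part of the ContextRouter API):

```python
def merge_settings(*layers: dict) -> dict:
    """Merge configuration layers; later layers override earlier ones."""
    merged: dict = {}
    for layer in layers:
        merged.update(layer)
    return merged

# Hypothetical layers, in loading order
defaults = {"temperature": 0.7, "max_retries": 3}
toml_file = {"temperature": 0.2}
runtime = {"max_retries": 5}

print(merge_settings(defaults, toml_file, runtime))
# {'temperature': 0.2, 'max_retries': 5}
```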
```python
from contextrouter.core import get_core_config

# Load settings from the default locations
config = get_core_config()

# Or point at a specific TOML file
config = get_core_config(toml_path="./custom-settings.toml")
```
## Model Settings

```toml
default_llm = "vertex/gemini-2.0-flash"
default_embeddings = "vertex/text-embedding-004"
```
| Setting | Type | Default | Description |
|---|---|---|---|
| `default_llm` | string | required | Default LLM in `provider/model` format |
| `default_embeddings` | string | required | Default embedding model |
| `temperature` | float | `0.7` | Generation randomness (0–2) |
| `max_output_tokens` | int | `4096` | Maximum response tokens |
| `timeout_sec` | int | `60` | Request timeout in seconds |
| `max_retries` | int | `3` | Retry attempts on failure |
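Model identifiers follow the `provider/model` convention shown above. A small helper like the following splits such a spec (the helper is illustrative, not a ContextRouter function):

```python
def parse_model_spec(spec: str) -> tuple[str, str]:
    """Split a 'provider/model' spec, e.g. 'vertex/gemini-2.0-flash'."""
    provider, sep, model = spec.partition("/")
    if not sep or not provider or not model:
        raise ValueError(f"expected 'provider/model', got {spec!r}")
    return provider, model

print(parse_model_spec("vertex/gemini-2.0-flash"))
# ('vertex', 'gemini-2.0-flash')
```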
## Provider Settings

### Google Vertex AI

```toml
project_id = "your-gcp-project"
datastore_id = "your-datastore-id"
```
| Setting | Env Var | Description |
|---|---|---|
| `project_id` | `VERTEX_PROJECT_ID` | GCP project ID |
| `location` | `VERTEX_LOCATION` | GCP region |
| `datastore_id` | `VERTEX_DATASTORE_ID` | Vertex AI Search datastore |
### PostgreSQL

```toml
database = "contextrouter"
password = "${POSTGRES_PASSWORD}"
```
| Setting | Default | Description |
|---|---|---|
| `host` | `localhost` | Database host |
| `port` | `5432` | Database port |
| `database` | - | Database name |
| `user` | - | Database user |
| `password` | - | Database password |
| `pool_size` | `10` | Connection pool size |
| `ssl_mode` | `prefer` | SSL mode |
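Taken together, these settings describe a standard PostgreSQL connection. One way they might be assembled into a libpq-style DSN (a sketch; the helper is not a ContextRouter function, and real code should URL-escape credentials):

```python
def build_dsn(*, database: str, user: str, password: str,
              host: str = "localhost", port: int = 5432,
              ssl_mode: str = "prefer") -> str:
    """Assemble a connection URL from the settings above."""
    return (f"postgresql://{user}:{password}@{host}:{port}"
            f"/{database}?sslmode={ssl_mode}")

print(build_dsn(database="contextrouter", user="app", password="secret"))
# postgresql://app:secret@localhost:5432/contextrouter?sslmode=prefer
```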
### OpenAI

```toml
api_key = "${OPENAI_API_KEY}"
base_url = "https://api.openai.com/v1"
```

### Anthropic

```toml
api_key = "${ANTHROPIC_API_KEY}"
```

### Local Models

```toml
ollama_base_url = "http://localhost:11434/v1"
vllm_base_url = "http://localhost:8000/v1"
```
| Env Var | Description |
|---|---|
| `LOCAL_OLLAMA_BASE_URL` | Ollama server URL |
| `LOCAL_VLLM_BASE_URL` | vLLM server URL |
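Both URLs fall back to the localhost defaults shown above when the environment variables are unset. A minimal resolution sketch (the helper name is illustrative, not part of ContextRouter):

```python
import os

# Defaults match the documented local-server ports.
_LOCAL_DEFAULTS = {
    "ollama": ("LOCAL_OLLAMA_BASE_URL", "http://localhost:11434/v1"),
    "vllm": ("LOCAL_VLLM_BASE_URL", "http://localhost:8000/v1"),
}

def local_base_url(backend: str) -> str:
    """Return the configured base URL for a local backend."""
    env_var, default = _LOCAL_DEFAULTS[backend]
    return os.environ.get(env_var, default)

print(local_base_url("ollama"))
```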
## RAG Settings

```toml
# Hybrid search (Postgres only)
hybrid_vector_weight = 0.7
general_retrieval_enabled = true
general_retrieval_final_count = 10
max_retrieval_queries = 3
graph_facts_enabled = true
```
| Setting | Type | Default | Description |
|---|---|---|---|
| `provider` | string | `postgres` | Retrieval backend |
| `reranking_enabled` | bool | `true` | Enable second-pass reranking |
| `reranker` | string | `vertex` | Reranker: `vertex`, `mmr`, `none` |
| `hybrid_fusion` | string | `rrf` | Fusion method: `rrf`, `weighted` |
| `enable_fts` | bool | `true` | Enable full-text search |
| `general_retrieval_final_count` | int | `10` | Max total documents |
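With `hybrid_fusion = "rrf"`, the vector and full-text result lists are combined by reciprocal rank fusion, which scores each document by the sum of `1/(k + rank)` over the rankings it appears in. A generic sketch of the technique (not ContextRouter's internal implementation):

```python
def rrf_fuse(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Reciprocal rank fusion: score each doc by sum of 1/(k + rank)."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

vector_hits = ["a", "b", "c"]
fts_hits = ["b", "c", "d"]
print(rrf_fuse([vector_hits, fts_hits]))
# ['b', 'c', 'a', 'd'] — 'b' wins because it ranks highly in both lists
```

Documents that appear in both rankings accumulate score from each, so agreement between the retrievers is rewarded without needing to normalize their raw scores.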
## Ingestion Settings

```toml
output_dir = "./ingestion_output"

[ingestion.rag.preprocess]
min_samples_per_category = 3
max_entities_per_chunk = 10
```
## Connector Settings

```toml
google_api_key = "${GOOGLE_API_KEY}"
google_cse_id = "${GOOGLE_CSE_ID}"

feeds = ["https://example.com/feed.xml"]
fetch_full_content = true
```
## Security Settings

```toml
allowed_origins = ["https://example.com"]
secret_key = "${TOKEN_SECRET_KEY}"
```
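Values written as `${VAR}` (here and in the provider sections above) are placeholders resolved from the environment at load time. A generic expansion sketch, assuming simple `${NAME}` syntax with no nesting or defaults:

```python
import os
import re

def expand_env(value: str) -> str:
    """Replace ${NAME} placeholders with environment values (empty if unset)."""
    return re.sub(r"\$\{(\w+)\}",
                  lambda m: os.environ.get(m.group(1), ""),
                  value)

os.environ["TOKEN_SECRET_KEY"] = "s3cr3t"
print(expand_env("${TOKEN_SECRET_KEY}"))
# s3cr3t
```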
## Observability Settings

```toml
langfuse_public_key = "${LANGFUSE_PUBLIC_KEY}"
langfuse_secret_key = "${LANGFUSE_SECRET_KEY}"
langfuse_host = "https://cloud.langfuse.com"
trace_all_requests = true
```
| Env Var | Description |
|---|---|
| `LANGFUSE_PUBLIC_KEY` | Langfuse public key |
| `LANGFUSE_SECRET_KEY` | Langfuse secret key |
| `LANGFUSE_HOST` | Langfuse server URL |
| `LOG_LEVEL` | Logging level (`DEBUG`, `INFO`, `WARNING`, `ERROR`) |
## Plugin Settings

## Router Settings

```toml
override_path = ""  # Optional: path to custom graph function
```
## Environment Variable Reference

| Variable | Description | Required |
|---|---|---|
| `VERTEX_PROJECT_ID` | GCP project ID | For Vertex AI |
| `VERTEX_LOCATION` | GCP region | For Vertex AI |
| `OPENAI_API_KEY` | OpenAI API key | For OpenAI |
| `ANTHROPIC_API_KEY` | Anthropic API key | For Anthropic |
| `GROQ_API_KEY` | Groq API key | For Groq |
| `OPENROUTER_API_KEY` | OpenRouter API key | For OpenRouter |
| `POSTGRES_PASSWORD` | Database password | For Postgres |
| `GOOGLE_API_KEY` | Google API key | For web search |
| `GOOGLE_CSE_ID` | Custom Search Engine ID | For web search |
| `LOCAL_OLLAMA_BASE_URL` | Ollama server URL | For local models |
| `LOCAL_VLLM_BASE_URL` | vLLM server URL | For local models |
| `RAG_PROVIDER` | Default RAG provider | Optional |
| `RAG_EMBEDDINGS_MODEL` | Override embedding model | Optional |
| `LOG_LEVEL` | Logging verbosity | Optional |
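A startup check against this table can catch missing credentials before the first request fails. A minimal sketch (the required set shown is illustrative and depends on which providers you enable):

```python
import os

def missing_vars(required: list[str], env=os.environ) -> list[str]:
    """Return the required variables that are unset or empty."""
    return [name for name in required if not env.get(name)]

# Hypothetical requirements for a Vertex AI + Postgres deployment
required = ["VERTEX_PROJECT_ID", "VERTEX_LOCATION", "POSTGRES_PASSWORD"]
missing = missing_vars(required)
if missing:
    print(f"Missing environment variables: {', '.join(missing)}")
```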