- Complete observability of agent steps and LLM interactions
- Built-in reliability with fallbacks, retries, and load balancing
- Access to 1600+ LLMs through a single integration
- Cost tracking and optimization
Quick Start
Setup
1. Install packages
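Assuming the Python SDKs (package names as published on PyPI), installation might look like:

```shell
pip install -U langchain-openai portkey-ai
```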
2. Add provider in Model Catalog
Go to Model Catalog → Add Provider. Select your provider, enter your API keys, and name the provider (e.g., openai-prod). Your provider slug is @openai-prod.
3. Get Portkey API Key
Create an API key at app.portkey.ai/api-keys. Pro tip: attach a default config for fallbacks and caching.
4. Configure Langchain LLM
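A minimal sketch of pointing Langchain's `ChatOpenAI` at the Portkey gateway, assuming the `portkey-ai` SDK's `createHeaders` helper and the `@openai-prod` slug created above (the model name is a placeholder):

```python
from langchain_openai import ChatOpenAI
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders

# The provider's real API key lives in Portkey's Model Catalog,
# so the client-side api_key can be a dummy value.
llm = ChatOpenAI(
    api_key="dummy",
    base_url=PORTKEY_GATEWAY_URL,
    default_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        provider="@openai-prod",
    ),
    model="gpt-4o",
)

# Requires a valid Portkey API key; the request is routed through the gateway.
response = llm.invoke("What is the capital of France?")
print(response.content)
```

The rest of your Langchain code (`invoke`, `stream`, chains) works unchanged; only the base URL and headers differ.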
Production Features
Observability
Add trace IDs and metadata to requests for filtering.
Tracing with Callback Handler
Use the Portkey callback handler for detailed traces:
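A sketch using the callback handler shipped with the `portkey-ai` SDK (import path and constructor arguments as I understand them; verify against the SDK version you have installed):

```python
from langchain_openai import ChatOpenAI
from portkey_ai.langchain import LangchainCallbackHandler

# Metadata keys are free-form placeholders; they appear as
# filterable fields in the Portkey dashboard.
portkey_handler = LangchainCallbackHandler(
    api_key="YOUR_PORTKEY_API_KEY",
    metadata={"session_id": "session-123", "environment": "production"},
)

# Attach the handler so every LLM call is traced end to end.
llm = ChatOpenAI(model="gpt-4o", callbacks=[portkey_handler])
```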
Reliability
Enable fallbacks via Configs.
Guardrails
Add input/output validation.
Guardrails Guide
PII detection, content filtering, and custom rules
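Returning to the Reliability section above: a Portkey Config is a JSON document you save in the dashboard and attach by ID. A sketch of a fallback Config (field names follow Portkey's Config schema as I understand it; provider slugs and model names are placeholders):

```json
{
  "strategy": { "mode": "fallback" },
  "targets": [
    { "provider": "@openai-prod", "override_params": { "model": "gpt-4o" } },
    { "provider": "@anthropic-prod", "override_params": { "model": "your-fallback-model" } }
  ]
}
```

Pass the saved Config's ID via `createHeaders(config="...")` so every request inherits the fallback behavior.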
Caching
Reduce costs with response caching.
Switching Providers
Change the provider to switch models.
Supported Providers
See all 1600+ supported models
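To illustrate the provider switch described above, only the provider slug and model name change; `@anthropic-prod` and the model name here are placeholders from your own Model Catalog:

```python
from langchain_openai import ChatOpenAI
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders

# Same Langchain code as before; only the slug and model differ.
llm = ChatOpenAI(
    api_key="dummy",
    base_url=PORTKEY_GATEWAY_URL,
    default_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        provider="@anthropic-prod",  # previously "@openai-prod"
    ),
    model="your-anthropic-model",
)
```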

