Portkey enhances Langchain agents with production features:
  • Complete observability of agent steps and LLM interactions
  • Built-in reliability with fallbacks, retries, and load balancing
  • Access to 1600+ LLMs through a single integration
  • Cost tracking and optimization

Quick Start

from langchain_openai import ChatOpenAI
from portkey_ai import createHeaders, PORTKEY_GATEWAY_URL

llm = ChatOpenAI(
    api_key="YOUR_PORTKEY_API_KEY",   # your Portkey API key, not an OpenAI key
    base_url=PORTKEY_GATEWAY_URL,     # route requests through the Portkey gateway
    default_headers=createHeaders(provider="@openai-prod")
)
All agent interactions are now logged in your Portkey dashboard.
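
To verify the setup, make a test call (the prompt here is just an example):

response = llm.invoke("What is the capital of France?")
print(response.content)  # the request was routed through Portkey and logged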

Setup

1. Install packages

pip install -qU langchain langchain-openai portkey-ai
2. Add provider in Model Catalog

Go to Model Catalog → Add Provider. Select your provider, enter your API keys, and name it (e.g., openai-prod). Your provider slug is @openai-prod.
3. Get Portkey API Key

Create an API key at app.portkey.ai/api-keys. Pro tip: Attach a default config for fallbacks and caching.
4. Configure Langchain LLM

from langchain_openai import ChatOpenAI
from portkey_ai import createHeaders, PORTKEY_GATEWAY_URL

llm = ChatOpenAI(
    api_key="YOUR_PORTKEY_API_KEY",
    base_url=PORTKEY_GATEWAY_URL,
    default_headers=createHeaders(provider="@openai-prod")
)

Production Features

Observability

Add trace IDs and metadata for filtering:
default_headers=createHeaders(
    provider="@openai-prod",
    trace_id="research-agent-1",
    metadata={"agent_type": "research", "_user": "user_123"}
)
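
For context, here is that fragment in place in the LLM constructor (the trace ID and metadata values are illustrative):

llm = ChatOpenAI(
    api_key="YOUR_PORTKEY_API_KEY",
    base_url=PORTKEY_GATEWAY_URL,
    default_headers=createHeaders(
        provider="@openai-prod",
        trace_id="research-agent-1",
        metadata={"agent_type": "research", "_user": "user_123"}
    )
)

Every request the agent makes now carries the trace ID and metadata, so you can filter a full agent run in the dashboard.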

Tracing with Callback Handler

Use the Portkey callback handler for detailed traces:
from portkey_ai.langchain import LangchainCallbackHandler

portkey_handler = LangchainCallbackHandler(
    api_key="YOUR_PORTKEY_API_KEY",
    metadata={"session_id": "session_1", "agent_id": "research_agent_1"}
)

llm = ChatOpenAI(
    api_key="YOUR_OPENAI_API_KEY",  # calls go directly to OpenAI; the handler sends traces to Portkey
    callbacks=[portkey_handler]
)
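
The handler can also be attached per call rather than at construction time, using LangChain's standard config mechanism (the prompt is illustrative):

llm.invoke(
    "Summarize today's research notes.",
    config={"callbacks": [portkey_handler]},
)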

Reliability

Enable fallbacks via Configs:
{
  "strategy": { "mode": "fallback" },
  "targets": [
    { "override_params": { "model": "@openai-prod/gpt-4o" } },
    { "override_params": { "model": "@anthropic-prod/claude-sonnet-4" } }
  ]
}
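
One way to attach this config to the Langchain LLM is to save it in the Portkey UI and pass its ID in createHeaders; a minimal sketch (the config ID is a placeholder):

llm = ChatOpenAI(
    api_key="YOUR_PORTKEY_API_KEY",
    base_url=PORTKEY_GATEWAY_URL,
    # "pc-fallback-xxx" is a placeholder for your saved config's ID
    default_headers=createHeaders(config="pc-fallback-xxx")
)

If the primary model errors, Portkey retries the request against the next target in the list.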

Guardrails

Add input/output validation:
{
  "input_guardrails": ["guardrail-id-xxx"],
  "output_guardrails": ["guardrail-id-yyy"]
}
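
A sketch of attaching the guardrail config inline as a dict (you can also save it in the UI and pass its ID, as above):

guardrail_config = {
    "input_guardrails": ["guardrail-id-xxx"],
    "output_guardrails": ["guardrail-id-yyy"],
}

llm = ChatOpenAI(
    api_key="YOUR_PORTKEY_API_KEY",
    base_url=PORTKEY_GATEWAY_URL,
    default_headers=createHeaders(provider="@openai-prod", config=guardrail_config)
)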

Guardrails Guide

PII detection, content filtering, and custom rules

Caching

Reduce costs with response caching:
{
  "cache": { "mode": "semantic" }
}
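
A sketch of enabling the cache for the agent's LLM, passing the config inline as above (Portkey also supports an exact-match "simple" mode):

llm = ChatOpenAI(
    api_key="YOUR_PORTKEY_API_KEY",
    base_url=PORTKEY_GATEWAY_URL,
    default_headers=createHeaders(
        provider="@openai-prod",
        config={"cache": {"mode": "semantic"}},
    )
)

# Semantically similar prompts can now be served from cache
llm.invoke("What is the capital of France?")
llm.invoke("Tell me the capital city of France.")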

Switching Providers

Switch models by changing the provider slug in createHeaders:
# OpenAI
createHeaders(provider="@openai-prod")

# Anthropic
createHeaders(provider="@anthropic-prod")

# Azure OpenAI
createHeaders(provider="@azure-prod")

# AWS Bedrock
createHeaders(provider="@bedrock-prod")
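
Because the gateway URL and Portkey API key stay the same, switching is a header change; a sketch with an Anthropic provider (the gateway translates the OpenAI-format request, and the model name should match one configured for your provider):

llm = ChatOpenAI(
    api_key="YOUR_PORTKEY_API_KEY",
    base_url=PORTKEY_GATEWAY_URL,
    default_headers=createHeaders(provider="@anthropic-prod"),
    model="claude-sonnet-4",
)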

Supported Providers

See all 1600+ supported models

Next Steps