Portkey enhances the OpenAI Agents SDK with production features—no changes to your agent logic:
  • Complete observability of agent steps, tool use, and handoffs
  • Built-in reliability with fallbacks, retries, and load balancing
  • Access to 1600+ LLMs through the same interface
  • Cost tracking and optimization
  • Guardrails for safe agent behavior

OpenAI Agents SDK Documentation

Learn more about OpenAI Agents SDK

Quick Start

The integration requires one change: set a Portkey-configured client as the default.
from agents import Agent, Runner, set_default_openai_client, set_default_openai_api
from openai import AsyncOpenAI
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders

# Configure Portkey as the default client
client = AsyncOpenAI(
    base_url=PORTKEY_GATEWAY_URL,
    api_key="YOUR_PORTKEY_API_KEY",
    default_headers=createHeaders(provider="@openai-prod")
)
set_default_openai_client(client, use_for_tracing=False)
set_default_openai_api("chat_completions")

# Your agent code stays exactly the same
agent = Agent(
    name="Math Tutor",
    instructions="You provide help with math problems. Explain your reasoning at each step.",
    model="gpt-4o"
)

result = Runner.run_sync(agent, "What is the derivative of x^2?")
print(result.final_output)
All agent interactions are now logged in your Portkey dashboard.

Setup

1. Install packages

pip install -U openai-agents portkey-ai
2. Add provider in Model Catalog

Go to Model Catalog → Add Provider. Select your provider (OpenAI, Anthropic, etc.), enter API keys, and name it (e.g., openai-prod). Your provider slug is @openai-prod.
3. Get Portkey API Key

Create an API key at app.portkey.ai/api-keys. Pro tip: attach a default config for fallbacks, caching, and guardrails. It applies automatically, without code changes.
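For example, a minimal default config you might attach to the key (the guardrail ID is a placeholder):
{
  "cache": { "mode": "simple" },
  "retry": { "attempts": 3 },
  "input_guardrails": ["guardrail-id-xxx"]
}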
4. Set the default client

from agents import set_default_openai_client, set_default_openai_api
from openai import AsyncOpenAI
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders

client = AsyncOpenAI(
    base_url=PORTKEY_GATEWAY_URL,
    api_key="YOUR_PORTKEY_API_KEY",
    default_headers=createHeaders(provider="@openai-prod")
)
set_default_openai_client(client, use_for_tracing=False)
set_default_openai_api("chat_completions")
All agents now route through Portkey.


Production Features

Observability

All agent interactions are automatically logged.
Add trace IDs to group related requests:
client = AsyncOpenAI(
    base_url=PORTKEY_GATEWAY_URL,
    api_key="YOUR_PORTKEY_API_KEY",
    default_headers=createHeaders(
        provider="@openai-prod",
        trace_id="agent-session-123"
    )
)
Add metadata for filtering and analytics:
default_headers=createHeaders(
    provider="@openai-prod",
    trace_id="homework-tutor",
    metadata={
        "agent_type": "tutor",
        "_user": "user_123",
        "environment": "production"
    }
)
(Screenshot: analytics filtered by metadata)

Reliability

Enable fallbacks, retries, and load balancing via Configs. Attach one to your API key or pass it inline:
client = AsyncOpenAI(
    base_url=PORTKEY_GATEWAY_URL,
    api_key="YOUR_PORTKEY_API_KEY",
    default_headers=createHeaders(
        config={
            "strategy": { "mode": "fallback" },
            "targets": [
                { "override_params": { "model": "@openai-prod/gpt-4o" } },
                { "override_params": { "model": "@anthropic-prod/claude-sonnet-4" } }
            ]
        }
    )
)
If GPT-4o fails, requests automatically fall back to Claude.
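Retries and load balancing use the same config schema. A sketch combining both, passed as the config argument to createHeaders (attempt counts, status codes, and weights are illustrative):
config={
    "retry": { "attempts": 3, "on_status_codes": [429, 500, 502] },
    "strategy": { "mode": "loadbalance" },
    "targets": [
        { "override_params": { "model": "@openai-prod/gpt-4o" }, "weight": 0.7 },
        { "override_params": { "model": "@anthropic-prod/claude-sonnet-4" }, "weight": 0.3 }
    ]
}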

Guardrails

Add input/output validation:
default_headers=createHeaders(
    provider="@openai-prod",
    config={
        "input_guardrails": ["guardrail-id-xxx"],
        "output_guardrails": ["guardrail-id-yyy"]
    }
)
Guardrails can:
  • Detect and redact PII
  • Filter harmful content
  • Validate response formats
  • Apply custom business rules

Guardrails Guide

PII detection, content filtering, and custom rules

Caching

Reduce costs with response caching:
default_headers=createHeaders(
    provider="@openai-prod",
    config={ "cache": { "mode": "simple" } }
)
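Portkey also supports semantic caching, which can serve cached responses for semantically similar prompts. A sketch with a one-hour TTL (max_age is in seconds):
default_headers=createHeaders(
    provider="@openai-prod",
    config={ "cache": { "mode": "semantic", "max_age": 3600 } }
)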

Switching Providers

Use any of 1600+ models by changing the provider:
# OpenAI
createHeaders(provider="@openai-prod")
# Model: gpt-4o, gpt-4o-mini, o1, etc.

# Anthropic
createHeaders(provider="@anthropic-prod")
# Model: claude-sonnet-4-20250514, claude-3-5-haiku-20241022, etc.

# Google
createHeaders(provider="@google-prod")
# Model: gemini-2.0-flash, gemini-1.5-pro, etc.
Agent code stays the same—just update the model name to match the provider.
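For example, switching the Math Tutor from the Quick Start to Claude is just a header and model-name change (a sketch; use any model your Anthropic provider exposes):
client = AsyncOpenAI(
    base_url=PORTKEY_GATEWAY_URL,
    api_key="YOUR_PORTKEY_API_KEY",
    default_headers=createHeaders(provider="@anthropic-prod")
)
set_default_openai_client(client, use_for_tracing=False)
set_default_openai_api("chat_completions")

agent = Agent(
    name="Math Tutor",
    instructions="You provide help with math problems. Explain your reasoning at each step.",
    model="claude-sonnet-4-20250514"  # must match the provider behind @anthropic-prod
)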

Supported Providers

See all 1600+ supported models

Handoffs and Multi-Agent

Portkey works seamlessly with OpenAI Agents’ handoff system:
from agents import Agent, Runner, set_default_openai_client, set_default_openai_api
from openai import AsyncOpenAI
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders

client = AsyncOpenAI(
    base_url=PORTKEY_GATEWAY_URL,
    api_key="YOUR_PORTKEY_API_KEY",
    default_headers=createHeaders(
        provider="@openai-prod",
        trace_id="homework-session"
    )
)
set_default_openai_client(client, use_for_tracing=False)
set_default_openai_api("chat_completions")

# Define specialist agents
math_tutor = Agent(
    name="Math Tutor",
    handoff_description="Specialist for math questions",
    instructions="Help with math problems. Show your work.",
    model="gpt-4o"
)

history_tutor = Agent(
    name="History Tutor",
    handoff_description="Specialist for history questions",
    instructions="Help with history questions. Provide context.",
    model="gpt-4o"
)

# Triage agent with handoffs
triage = Agent(
    name="Triage",
    instructions="Route to the appropriate tutor based on the question.",
    handoffs=[math_tutor, history_tutor],
    model="gpt-4o"
)

result = Runner.run_sync(triage, "What caused World War I?")
print(result.final_output)
All handoffs are tracked in the same trace on your Portkey dashboard.

Tools

Portkey provides full observability for tool usage:
from agents import Agent, Runner, function_tool

@function_tool
def get_weather(city: str) -> str:
    """Get weather for a city."""
    return f"72°F and sunny in {city}"

agent = Agent(
    name="Assistant",
    instructions="You can check the weather.",
    tools=[get_weather],
    model="gpt-4o"
)

result = Runner.run_sync(agent, "What's the weather in Tokyo?")
Tool calls, parameters, and responses are all logged.

Prompt Templates

Use Portkey’s prompt management for versioned prompts:
from portkey_ai import Portkey

portkey = Portkey(api_key="YOUR_PORTKEY_API_KEY")

prompt_data = portkey.prompts.render(
    prompt_id="YOUR_PROMPT_ID",
    variables={"subject": "calculus"}
)

agent = Agent(
    name="Tutor",
    instructions=prompt_data.data.messages[0]["content"],
    model="gpt-4o"
)

Prompt Engineering Studio

Prompt versioning and collaboration

Enterprise Governance

Set up centralized control for OpenAI Agents across your organization.
1. Add Provider with Budget

Go to Model Catalog → Add Provider. Set budget limits and rate limits per provider.
2. Create Config

Go to Configs:
{
  "override_params": { "model": "@openai-prod/gpt-4o" }
}
Add fallbacks, guardrails, or routing as needed.
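For example, a config that combines the fallback shown earlier with an input guardrail (the guardrail ID is a placeholder):
{
  "strategy": { "mode": "fallback" },
  "targets": [
    { "override_params": { "model": "@openai-prod/gpt-4o" } },
    { "override_params": { "model": "@anthropic-prod/claude-sonnet-4" } }
  ],
  "input_guardrails": ["guardrail-id-xxx"]
}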
3. Create Team API Keys

Go to API Keys. Create keys per team, attach configs, and set permissions.
4. Distribute to Teams

Teams use their Portkey API key—no raw provider keys needed:
from agents import set_default_openai_client, set_default_openai_api
from openai import AsyncOpenAI
from portkey_ai import PORTKEY_GATEWAY_URL

client = AsyncOpenAI(
    base_url=PORTKEY_GATEWAY_URL,
    api_key="TEAM_PORTKEY_API_KEY"  # Config attached to key
)
set_default_openai_client(client, use_for_tracing=False)
set_default_openai_api("chat_completions")
Benefits:
  • Rotate provider keys without code changes
  • Per-team budgets and rate limits
  • Centralized usage analytics
  • Instant access revocation

Enterprise Features

Governance, security, and compliance

FAQ

Can I use Portkey without changing my agent code?
Yes. Set the default client once—agent and tool code stays unchanged.

Are all OpenAI Agents SDK features supported?
Yes. Handoffs, tools, guardrails, memory, streaming—all work.

How do I trace a multi-agent workflow?
Use a consistent trace_id across the workflow to see all agent interactions in one trace.

Can I keep provider API keys out of my codebase?
Yes. Portkey stores your provider keys securely. Rotate keys without code changes.

Does streaming work?
Yes. Streaming responses work normally, and Portkey logs the complete interaction.
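For reference, a minimal streaming sketch, assuming the default client is set as in the Quick Start:
import asyncio

from agents import Agent, Runner
from openai.types.responses import ResponseTextDeltaEvent

agent = Agent(name="Assistant", instructions="Be concise.", model="gpt-4o")

async def main():
    result = Runner.run_streamed(agent, "Explain the chain rule in one sentence.")
    async for event in result.stream_events():
        # The SDK normalizes provider chunks into Responses-style stream events
        if event.type == "raw_response_event" and isinstance(event.data, ResponseTextDeltaEvent):
            print(event.data.delta, end="", flush=True)

asyncio.run(main())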

Resources