- Complete observability of agent steps, tool use, and handoffs
- Built-in reliability with fallbacks, retries, and load balancing
- Access to 1600+ LLMs through the same interface
- Cost tracking and optimization
- Guardrails for safe agent behavior
OpenAI Agents SDK Documentation
Learn more about OpenAI Agents SDK
Quick Start
The integration requires one change: set a Portkey-configured client as the default.
Setup
1. Install packages
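A minimal install sketch, assuming the current PyPI names for the OpenAI Agents SDK (`openai-agents`) and the Portkey SDK (`portkey-ai`):

```shell
pip install openai-agents portkey-ai
```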
2. Add provider in Model Catalog
Go to Model Catalog → Add Provider. Select your provider (OpenAI, Anthropic, etc.), enter API keys, and name it (e.g., openai-prod). Your provider slug is @openai-prod.
3. Get Portkey API Key
Create an API key at app.portkey.ai/api-keys. Pro tip: attach a default config for fallbacks, caching, and guardrails; it applies automatically without code changes.
4. Set the default client
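A minimal sketch of the swap, assuming a Portkey API key in `PORTKEY_API_KEY` and a Model Catalog provider slugged `@openai-prod` (both placeholders); the header name follows Portkey's gateway conventions:

```python
import os

from agents import set_default_openai_client
from openai import AsyncOpenAI

portkey_client = AsyncOpenAI(
    api_key=os.environ["PORTKEY_API_KEY"],   # Portkey key, not a raw provider key
    base_url="https://api.portkey.ai/v1",    # route through the Portkey gateway
    default_headers={"x-portkey-provider": "@openai-prod"},
)
set_default_openai_client(portkey_client)    # all agents now call through Portkey
```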
Integration Approaches
Three ways to integrate, depending on your needs:
- Global Client (Recommended)
- Per-Run Config
- Per-Agent Model
Set once, applies to all agents:
Best for: Full application migration with minimal code changes.
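With the default client swapped once at startup, agent code stays vanilla; a sketch (agent name and prompt are illustrative):

```python
from agents import Agent, Runner

assistant = Agent(
    name="Assistant",
    instructions="You are a concise, helpful assistant.",
)

# Runs through Portkey because the default client was set once at startup.
result = Runner.run_sync(assistant, "Summarize why gateways help, in one line.")
print(result.final_output)
```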
Production Features
Observability
All agent interactions are automatically logged:
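To group a multi-step agent run into a single trace, requests can be tagged with a trace ID and metadata. A sketch using `createHeaders` from the `portkey-ai` package; the trace ID and metadata values are placeholders:

```python
import os

from agents import set_default_openai_client
from openai import AsyncOpenAI
from portkey_ai import createHeaders

client = AsyncOpenAI(
    api_key=os.environ["PORTKEY_API_KEY"],
    base_url="https://api.portkey.ai/v1",
    default_headers=createHeaders(
        provider="@openai-prod",
        trace_id="support-run-42",        # groups all agent and tool calls
        metadata={"team": "support"},     # filterable in the dashboard
    ),
)
set_default_openai_client(client)
```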

Reliability
Enable fallbacks, retries, and load balancing via Configs. Attach to your API key or pass inline:
Automatic Retries
Handles temporary failures automatically
Request Timeouts
Prevent agents from hanging
Conditional Routing
Route based on request attributes
Load Balancing
Distribute across multiple keys
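The reliability features above can be expressed as a single config. A sketch with placeholder provider slugs, following Portkey's config schema; save it as a Config in the UI or pass it inline:

```python
# Try targets in order, retry transient failures, and fail fast on hangs.
reliability_config = {
    "strategy": {"mode": "fallback"},     # try targets in order
    "retry": {"attempts": 3},             # automatic retries
    "request_timeout": 30000,             # request timeout in milliseconds
    "targets": [
        {"provider": "@openai-prod"},
        {"provider": "@anthropic-prod"},  # used only if the first target fails
    ],
}
```

Swapping `"fallback"` for a load-balancing strategy distributes traffic across the same targets instead of using them as a backup chain.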
Guardrails
Add input/output validation:
- Detect and redact PII
- Filter harmful content
- Validate response formats
- Apply custom business rules
Guardrails Guide
PII detection, content filtering, and custom rules
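A hypothetical sketch of a config that attaches guardrail checks created in the Portkey UI; the IDs are placeholders, and the key names follow Portkey's config schema:

```python
# Guardrail IDs come from checks you create in the Portkey dashboard.
guardrail_config = {
    "input_guardrails": ["pii-redaction-check"],     # runs before the request
    "output_guardrails": ["toxicity-filter-check"],  # runs on the response
    "targets": [{"provider": "@openai-prod"}],
}
```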
Caching
Reduce costs with response caching:
Switching Providers
Use any of 1600+ models by changing the provider:
Supported Providers
See all 1600+ supported models
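Caching and provider switching from the sections above can both be sketched briefly; the slugs and `max_age` are placeholders:

```python
# Cache identical requests for an hour; Portkey also offers a "semantic" mode.
cache_config = {
    "cache": {"mode": "simple", "max_age": 3600},
    "targets": [{"provider": "@openai-prod"}],
}

# Switching every agent to another provider is a one-line header change:
routing_header = {"x-portkey-provider": "@anthropic-prod"}  # was "@openai-prod"
```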
Handoffs and Multi-Agent
Portkey works seamlessly with OpenAI Agents’ handoff system:
Tools
Portkey provides full observability for tool usage:
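A sketch of both patterns together, using the Agents SDK's `function_tool` decorator and `handoffs` parameter; the agent names and the stubbed tool are illustrative, and Portkey records the tool call and the handoff as part of one trace:

```python
from agents import Agent, Runner, function_tool

@function_tool
def get_order_status(order_id: str) -> str:
    """Look up an order (stubbed for illustration)."""
    return f"Order {order_id} is out for delivery."

billing_agent = Agent(
    name="Billing",
    instructions="Handle billing and refund questions.",
)

triage_agent = Agent(
    name="Triage",
    instructions="Answer order questions with the tool; hand billing off.",
    tools=[get_order_status],
    handoffs=[billing_agent],
)

result = Runner.run_sync(triage_agent, "Where is order 1234?")
print(result.final_output)
```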
Use Portkey’s prompt management for versioned prompts:Prompt Engineering Studio
Prompt versioning and collaboration
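A hypothetical sketch of rendering a versioned prompt via the Portkey SDK's `prompts.render`; the prompt ID and variables are placeholders:

```python
import os

from portkey_ai import Portkey

portkey = Portkey(api_key=os.environ["PORTKEY_API_KEY"])

rendered = portkey.prompts.render(
    prompt_id="pp-support-triage",          # placeholder prompt ID
    variables={"customer_name": "Ada"},
)
# rendered.data.messages holds the versioned messages to feed to your agent.
```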
Enterprise Governance
Set up centralized control for OpenAI Agents across your organization.
1. Add Provider with Budget
Go to Model Catalog → Add Provider. Set budget limits and rate limits per provider.
2. Create Config
3. Create Team API Keys
Go to API Keys. Create keys per team, attach configs, and set permissions.
4. Distribute to Teams
Teams use their Portkey API key—no raw provider keys needed:
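A sketch of what a team member's setup looks like; only their team-scoped Portkey key (a placeholder env var here) is needed, since routing and limits come from the attached config:

```python
import os

from agents import set_default_openai_client
from openai import AsyncOpenAI

set_default_openai_client(AsyncOpenAI(
    api_key=os.environ["PORTKEY_TEAM_API_KEY"],  # team key with attached config
    base_url="https://api.portkey.ai/v1",
))
# Provider keys stay in Portkey and can be rotated centrally.
```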
- Rotate provider keys without code changes
- Per-team budgets and rate limits
- Centralized usage analytics
- Instant access revocation
Enterprise Features
Governance, security, and compliance
FAQ
Can I use Portkey with existing OpenAI Agents apps?
Yes. Set the default client once—agent and tool code stays unchanged.
Does Portkey work with all OpenAI Agents features?
Yes. Handoffs, tools, guardrails, memory, streaming—all work.
How do I track multi-agent workflows?
Use a consistent trace_id across the workflow to see all agent interactions in one trace.
Can I use my own API keys?
Yes. Portkey stores your provider keys securely. Rotate keys without code changes.
Does Portkey support streaming?
Yes. Streaming responses work normally, and Portkey logs the complete interaction.

