Portkey enhances OpenAI Agents SDK with production features—no changes to your agent logic:
  • Complete observability of agent steps, tool use, and handoffs
  • Built-in reliability with fallbacks, retries, and load balancing
  • Access to 1600+ LLMs through the same interface
  • Cost tracking and optimization
  • Guardrails for safe agent behavior

OpenAI Agents SDK Documentation

Learn more about OpenAI Agents SDK

Quick Start

The integration requires one change: set a Portkey-configured client as the default.
import { Agent, run, setDefaultOpenAIClient, setOpenAIAPI } from '@openai/agents';
import { OpenAI } from 'openai';
import { PORTKEY_GATEWAY_URL, createHeaders } from 'portkey-ai';

// Configure Portkey as the default client
const client = new OpenAI({
    baseURL: PORTKEY_GATEWAY_URL,
    apiKey: 'YOUR_PORTKEY_API_KEY',
    defaultHeaders: createHeaders({ provider: '@openai-prod' })
});
setDefaultOpenAIClient(client);
setOpenAIAPI('chat_completions');

// Your agent code stays exactly the same
const agent = new Agent({
    name: 'Math Tutor',
    instructions: 'You provide help with math problems. Explain your reasoning at each step.',
    model: 'gpt-4o'
});

const result = await run(agent, 'What is the derivative of x^2?');
console.log(result.finalOutput);
All agent interactions are now logged in your Portkey dashboard.

Setup

1. Install packages

npm install @openai/agents portkey-ai openai
2. Add provider in Model Catalog

Go to Model Catalog → Add Provider. Select your provider (OpenAI, Anthropic, etc.), enter your API keys, and name it (e.g., openai-prod). Your provider slug is @openai-prod.
3. Get Portkey API Key

Create an API key at app.portkey.ai/api-keys. Pro tip: attach a default config for fallbacks, caching, and guardrails—it applies automatically without code changes.
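As a sketch of what such a default config might look like—the field names follow Portkey's config schema, and the retry count, cache mode, and model slug here are illustrative:

```json
{
  "retry": { "attempts": 3 },
  "cache": { "mode": "simple" },
  "override_params": { "model": "@openai-prod/gpt-4o" }
}
```

Because the config is attached to the API key, every agent using that key inherits these settings without any client-side changes.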
4. Set the default client

import { setDefaultOpenAIClient, setOpenAIAPI } from '@openai/agents';
import { OpenAI } from 'openai';
import { PORTKEY_GATEWAY_URL, createHeaders } from 'portkey-ai';

const client = new OpenAI({
    baseURL: PORTKEY_GATEWAY_URL,
    apiKey: 'YOUR_PORTKEY_API_KEY',
    defaultHeaders: createHeaders({ provider: '@openai-prod' })
});
setDefaultOpenAIClient(client);
setOpenAIAPI('chat_completions');
All agents now route through Portkey.

Production Features

Observability

All agent interactions are automatically logged.
Add trace IDs to group related requests:
const client = new OpenAI({
    baseURL: PORTKEY_GATEWAY_URL,
    apiKey: 'YOUR_PORTKEY_API_KEY',
    defaultHeaders: createHeaders({
        provider: '@openai-prod',
        traceId: 'agent-session-123'
    })
});
Add metadata for filtering and analytics:
defaultHeaders: createHeaders({
    provider: '@openai-prod',
    traceId: 'homework-tutor',
    metadata: {
        agent_type: 'tutor',
        _user: 'user_123',
        environment: 'production'
    }
})
Analytics with metadata filters

Reliability

Enable fallbacks, retries, and load balancing via Configs. Attach to your API key or pass inline:
const client = new OpenAI({
    baseURL: PORTKEY_GATEWAY_URL,
    apiKey: 'YOUR_PORTKEY_API_KEY',
    defaultHeaders: createHeaders({
        config: {
            strategy: { mode: 'fallback' },
            targets: [
                { override_params: { model: '@openai-prod/gpt-4o' } },
                { override_params: { model: '@anthropic-prod/claude-sonnet-4' } }
            ]
        }
    })
});
If the GPT-4o request fails, Portkey automatically falls back to Claude.
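Load balancing uses the same config shape with a different strategy mode. A sketch splitting traffic across two providers—the weights here are illustrative:

```json
{
  "strategy": { "mode": "loadbalance" },
  "targets": [
    { "override_params": { "model": "@openai-prod/gpt-4o" }, "weight": 0.7 },
    { "override_params": { "model": "@anthropic-prod/claude-sonnet-4" }, "weight": 0.3 }
  ]
}
```

Weights are relative, so roughly 70% of requests would route to OpenAI and 30% to Anthropic.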

Guardrails

Add input/output validation:
defaultHeaders: createHeaders({
    provider: '@openai-prod',
    config: {
        input_guardrails: ['guardrail-id-xxx'],
        output_guardrails: ['guardrail-id-yyy']
    }
})
Guardrails can:
  • Detect and redact PII
  • Filter harmful content
  • Validate response formats
  • Apply custom business rules

Guardrails Guide

PII detection, content filtering, and custom rules

Caching

Reduce costs with response caching:
defaultHeaders: createHeaders({
    provider: '@openai-prod',
    config: { cache: { mode: 'simple' } }
})

Switching Providers

Use any of 1600+ models by changing the provider:
// OpenAI
createHeaders({ provider: '@openai-prod' })
// Model: gpt-4o, gpt-4o-mini, o1, etc.

// Anthropic
createHeaders({ provider: '@anthropic-prod' })
// Model: claude-sonnet-4-20250514, claude-3-5-haiku-20241022, etc.

// Google
createHeaders({ provider: '@google-prod' })
// Model: gemini-2.0-flash, gemini-1.5-pro, etc.
Agent code stays the same—just update the model name to match the provider.

Supported Providers

See all 1600+ supported models

Handoffs and Multi-Agent

Portkey works seamlessly with OpenAI Agents’ handoff system:
import { Agent, run, setDefaultOpenAIClient, setOpenAIAPI } from '@openai/agents';
import { OpenAI } from 'openai';
import { PORTKEY_GATEWAY_URL, createHeaders } from 'portkey-ai';

const client = new OpenAI({
    baseURL: PORTKEY_GATEWAY_URL,
    apiKey: 'YOUR_PORTKEY_API_KEY',
    defaultHeaders: createHeaders({
        provider: '@openai-prod',
        traceId: 'homework-session'
    })
});
setDefaultOpenAIClient(client);
setOpenAIAPI('chat_completions');

// Define specialist agents
const mathTutor = new Agent({
    name: 'Math Tutor',
    handoffDescription: 'Specialist for math questions',
    instructions: 'Help with math problems. Show your work.',
    model: 'gpt-4o'
});

const historyTutor = new Agent({
    name: 'History Tutor',
    handoffDescription: 'Specialist for history questions',
    instructions: 'Help with history questions. Provide context.',
    model: 'gpt-4o'
});

// Triage agent with handoffs
const triage = new Agent({
    name: 'Triage',
    instructions: 'Route to the appropriate tutor based on the question.',
    handoffs: [mathTutor, historyTutor],
    model: 'gpt-4o'
});

const result = await run(triage, 'What caused World War I?');
console.log(result.finalOutput);
All handoffs are tracked in the same trace on your Portkey dashboard.

Tools

Portkey provides full observability for tool usage:
import { Agent, run, tool } from '@openai/agents';
import { z } from 'zod';

const getWeather = tool({
    name: 'get_weather',
    description: 'Get weather for a city',
    parameters: z.object({ city: z.string() }),
    async execute({ city }) {
        return `72°F and sunny in ${city}`;
    }
});

const agent = new Agent({
    name: 'Assistant',
    instructions: 'You can check the weather.',
    tools: [getWeather],
    model: 'gpt-4o'
});

const result = await run(agent, "What's the weather in Tokyo?");
Tool calls, parameters, and responses are all logged.

Prompt Templates

Use Portkey’s prompt management for versioned prompts:
import { Portkey } from 'portkey-ai';

const portkey = new Portkey({ apiKey: 'YOUR_PORTKEY_API_KEY' });

const promptData = await portkey.prompts.render({
    promptId: 'YOUR_PROMPT_ID',
    variables: { subject: 'calculus' }
});

const agent = new Agent({
    name: 'Tutor',
    instructions: promptData.data.messages[0].content,
    model: 'gpt-4o'
});

Prompt Engineering Studio

Prompt versioning and collaboration

Enterprise Governance

Set up centralized control for OpenAI Agents across your organization.
1. Add Provider with Budget

Go to Model Catalog → Add Provider. Set budget limits and rate limits per provider.
2. Create Config

Go to Configs:
{
  "override_params": { "model": "@openai-prod/gpt-4o" }
}
Add fallbacks, guardrails, or routing as needed.
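For instance, a team config might layer a fallback and guardrails on top of the pinned model. This is a sketch combining the config shapes shown earlier in this guide; the guardrail IDs are placeholders:

```json
{
  "strategy": { "mode": "fallback" },
  "targets": [
    { "override_params": { "model": "@openai-prod/gpt-4o" } },
    { "override_params": { "model": "@anthropic-prod/claude-sonnet-4" } }
  ],
  "input_guardrails": ["guardrail-id-xxx"],
  "output_guardrails": ["guardrail-id-yyy"]
}
```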
3. Create Team API Keys

Go to API Keys. Create keys per team, attach configs, and set permissions.
4. Distribute to Teams

Teams use their Portkey API key—no raw provider keys needed:
const client = new OpenAI({
    baseURL: PORTKEY_GATEWAY_URL,
    apiKey: 'TEAM_PORTKEY_API_KEY'  // Config attached to key
});
setDefaultOpenAIClient(client);
setOpenAIAPI('chat_completions');
Benefits:
  • Rotate provider keys without code changes
  • Per-team budgets and rate limits
  • Centralized usage analytics
  • Instant access revocation

Enterprise Features

Governance, security, and compliance

FAQ

Can I keep my existing agent code?
Yes. Set the default client once—agent and tool code stays unchanged.

Are all Agents SDK features supported?
Yes. Handoffs, tools, guardrails, memory, streaming—all work.

How do I trace a multi-agent workflow?
Use a consistent traceId across the workflow to see all agent interactions in one trace.

Does Portkey manage my provider API keys?
Yes. Portkey stores your provider keys securely. Rotate keys without code changes.

Does streaming work?
Yes. Streaming responses work normally, and Portkey logs the complete interaction.

Resources