Portkey enhances CrewAI with production features:
  • Complete observability of agent steps, tool use, and interactions
  • Built-in reliability with fallbacks, retries, and load balancing
  • Cost tracking and optimization
  • Access to 1600+ LLMs through a single integration
  • Guardrails for safe, compliant agent behavior

CrewAI Documentation

Learn more about CrewAI’s core concepts

Quick Start

1. Install packages

pip install -U crewai portkey-ai
2. Add provider in Model Catalog

Go to Model Catalog → Add Provider. Select your provider (OpenAI, Anthropic, etc.), enter your API keys, and name it (e.g., openai-prod). Your provider slug is then @openai-prod.
3. Get Portkey API Key

Create an API key at app.portkey.ai/api-keys. Pro tip: attach a default config to the key so fallbacks, caching, and guardrails apply automatically to every request.
4. Configure CrewAI

from crewai import Agent, LLM
from portkey_ai import createHeaders, PORTKEY_GATEWAY_URL

llm = LLM(
    model="gpt-4o",
    base_url=PORTKEY_GATEWAY_URL,
    api_key="dummy",  # Required by CrewAI, not used
    extra_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        provider="@openai-prod"
    )
)

# In a standalone script, define the agent inline; inside a @CrewBase
# project you would pass config=self.agents_config['lead_market_analyst'].
researcher = Agent(
    role="Lead Market Analyst",
    goal="Analyze market trends and summarize key findings",
    backstory="A seasoned analyst with deep market research experience.",
    llm=llm
)
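
To run the agent end to end, wrap it in a task and crew as usual. A minimal sketch (the task description and expected output are illustrative):

from crewai import Crew, Task

research_task = Task(
    description="Summarize current trends in the electric vehicle market.",
    expected_output="A short bullet-point summary of key trends.",
    agent=researcher
)

crew = Crew(agents=[researcher], tasks=[research_task])
result = crew.kickoff()  # Each LLM call is routed and logged through Portkey
print(result)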

Production Features

Observability

All agent interactions are automatically logged to your Portkey dashboard, including requests, responses, latency, and cost.
Add trace IDs to group related requests:
llm = LLM(
    model="gpt-4o",
    base_url=PORTKEY_GATEWAY_URL,
    api_key="dummy",
    extra_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        provider="@openai-prod",
        trace_id="crew-session-123"
    )
)
Add metadata for filtering and analytics:
extra_headers=createHeaders(
    api_key="YOUR_PORTKEY_API_KEY",
    provider="@openai-prod",
    metadata={
        "crew_type": "research",
        "_user": "user_123",
        "environment": "production"
    }
)
Analytics with metadata filters

Reliability

Enable fallbacks, retries, and load balancing via Configs. Attach to your API key or pass inline:
llm = LLM(
    model="gpt-4o",
    base_url=PORTKEY_GATEWAY_URL,
    api_key="dummy",
    extra_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        config={
            "strategy": { "mode": "fallback" },
            "targets": [
                { "override_params": { "model": "@openai-prod/gpt-4o" } },
                { "override_params": { "model": "@anthropic-prod/claude-sonnet-4" } }
            ]
        }
    )
)
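
Load balancing and automatic retries use the same config mechanism. A sketch of a weighted split with retries (the weights and attempt count are illustrative):

llm = LLM(
    model="gpt-4o",
    base_url=PORTKEY_GATEWAY_URL,
    api_key="dummy",
    extra_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        config={
            "strategy": { "mode": "loadbalance" },
            "retry": { "attempts": 3 },
            "targets": [
                { "override_params": { "model": "@openai-prod/gpt-4o" }, "weight": 0.7 },
                { "override_params": { "model": "@anthropic-prod/claude-sonnet-4" }, "weight": 0.3 }
            ]
        }
    )
)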

Guardrails

Add input/output validation:
extra_headers=createHeaders(
    api_key="YOUR_PORTKEY_API_KEY",
    provider="@openai-prod",
    config={
        "input_guardrails": ["guardrail-id-xxx"],
        "output_guardrails": ["guardrail-id-yyy"]
    }
)
Guardrails can:
  • Detect and redact PII
  • Filter harmful content
  • Validate response formats
  • Apply custom business rules

Guardrails Guide

PII detection, content filtering, and custom rules

Caching

Reduce costs with response caching:
extra_headers=createHeaders(
    api_key="YOUR_PORTKEY_API_KEY",
    provider="@openai-prod",
    config={ "cache": { "mode": "simple" } }
)
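
For prompts that repeat with small variations, Portkey also supports semantic caching, which serves cached responses for semantically similar requests. A sketch (the max_age is in seconds and illustrative):

extra_headers=createHeaders(
    api_key="YOUR_PORTKEY_API_KEY",
    provider="@openai-prod",
    config={ "cache": { "mode": "semantic", "max_age": 3600 } }
)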

Switching Providers

Change the provider to switch models:
# OpenAI
createHeaders(api_key="YOUR_PORTKEY_API_KEY", provider="@openai-prod")

# Anthropic
createHeaders(api_key="YOUR_PORTKEY_API_KEY", provider="@anthropic-prod")

# Google
createHeaders(api_key="YOUR_PORTKEY_API_KEY", provider="@google-prod")
Update the model in your LLM configuration to match the new provider's model format.
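
For example, switching the researcher to Anthropic (reusing the claude-sonnet-4 slug from the fallback config above):

llm = LLM(
    model="claude-sonnet-4",
    base_url=PORTKEY_GATEWAY_URL,
    api_key="dummy",
    extra_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        provider="@anthropic-prod"
    )
)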

Prompt Templates

Use Portkey’s prompt management for versioned, templated prompts:
from portkey_ai import Portkey

portkey = Portkey(api_key="YOUR_PORTKEY_API_KEY")

prompt_data = portkey.prompts.render(
    prompt_id="YOUR_PROMPT_ID",
    variables={"agent_role": "Senior Research Scientist"}
)

researcher = Agent(
    role="Senior Research Scientist",
    backstory=prompt_data.data.messages[0]["content"],
    llm=llm
)

Prompt Engineering Studio

Prompt versioning and collaboration

Enterprise Governance

Set up centralized control for CrewAI across your organization.
1. Add Provider with Budget

Go to Model Catalog → Add Provider. Set budget limits and rate limits per provider.
2. Create Config

Go to Configs:
{
  "override_params": { "model": "@openai-prod/gpt-4o" }
}
Add fallbacks, guardrails, or routing as needed.
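
For example, a team config might pin a model while adding retries and an input guardrail (the guardrail ID is a placeholder):

{
  "override_params": { "model": "@openai-prod/gpt-4o" },
  "retry": { "attempts": 3 },
  "input_guardrails": ["guardrail-id-xxx"]
}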
3. Create Team API Keys

Go to API Keys. Create keys per team, attach configs, and set permissions.
4. Distribute to Teams

Teams use their Portkey API key—no raw provider keys needed:
llm = LLM(
    model="gpt-4o",
    base_url=PORTKEY_GATEWAY_URL,
    api_key="TEAM_PORTKEY_API_KEY"  # Config attached to key
)
Benefits:
  • Rotate provider keys without code changes
  • Per-team budgets and rate limits
  • Centralized usage analytics
  • Instant access revocation

Enterprise Features

Governance, security, and compliance

FAQ

Can I add Portkey to an existing CrewAI app?
Yes. Update your LLM configuration; agent and crew code stays unchanged.

Does Portkey work with all CrewAI features?
Yes. Agents, tools, human-in-the-loop, and sequential/hierarchical processes all work.

How do I trace a multi-agent workflow?
Use a consistent trace_id across agents to see the full workflow in one trace.

Does Portkey store my provider API keys securely?
Yes. Portkey stores your provider keys securely, and you can rotate them without code changes.

Resources