Merged
2,482 changes: 1,705 additions & 777 deletions docs.json

Large diffs are not rendered by default.

1 change: 1 addition & 0 deletions product/ai-gateway/virtual-keys.mdx
@@ -1,6 +1,7 @@
---
title: "Virtual Keys"
description: "Portkey's virtual key system allows you to securely store your LLM API keys in our vault, utilizing a unique virtual identifier to streamline API key management."
tag: "Deprecated"
---

<Warning>
46 changes: 25 additions & 21 deletions product/integrations.mdx
@@ -1,52 +1,56 @@
---
title: LLM Integrations
description: A step-by-step guide for organization admins to set up their first integration.
---

The **Integrations** page is the central command center for Organization Admins. It's where you securely manage all third-party LLM provider credentials and govern their use across all workspaces from a single, unified dashboard.

This "create once, provision many" model saves significant time, reduces configuration errors, and gives you complete oversight of your AI stack.

### **Understanding the Integrations Dashboard**

The Integrations page is organized into three tabs, each serving a distinct purpose:

* **`All`**: This is a comprehensive list of all 50+ providers Portkey supports. This is your starting point for connecting a new provider to your organization.
* **`Connected`**: This tab lists all the integrations that you have personally connected at the organization level. It's your primary view for managing your centrally governed providers.
* **`Workspace-Created`**: This tab gives you complete visibility and governance over any integrations created *by Workspace Admins* for their specific workspaces. It ensures that even with delegated control, you maintain a full audit trail and can manage these instances if needed.


---

### **Creating and Provisioning a New Integration**

This guide walks you through connecting a new provider and making it available to your workspaces.

#### **Step 1: Connect the Provider**

<Info>
If you are an existing Portkey user, this step is similar to creating a Virtual Key, but it's happening at the organization level.
</Info>

1. From the **`All`** tab, find the provider you want to connect (e.g., OpenAI, Azure OpenAI, AWS Bedrock) and click **Connect**.
2. Fill in the details:
* **Integration Name:** A friendly name for you to identify this connection (e.g., "Azure Production - US East").
* **Slug:** A unique, URL-friendly identifier. This will be used by developers to call models (e.g., `azure-prod-useast`).
* **Credentials:** Securely enter your API keys or other authentication details. These are encrypted and will not be visible after saving.
3. Click **Next**.
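The slug you choose here is what developers will later combine with a model name in their API calls, using the `@{provider_slug}/{model_slug}` format described in the Model Catalog docs. A minimal sketch of how those pieces compose (`build_model_ref` is a hypothetical helper for illustration, not part of the Portkey SDK):

```python
def build_model_ref(provider_slug: str, model_slug: str) -> str:
    """Compose the model reference used in the `model` parameter:
    "@{provider_slug}/{model_slug}"."""
    return f"@{provider_slug}/{model_slug}"

# Using the example slug from the step above:
print(build_model_ref("azure-prod-useast", "gpt-4o"))  # @azure-prod-useast/gpt-4o
```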

#### **Step 2: Provision to Workspaces**

Here, you decide which teams get access to this provider and under what conditions.

1. You will see a list of all workspaces within your organization.
2. Use the toggle next to a workspace name to **enable or disable** access.
3. For each enabled workspace, you can optionally click **Edit Budget & Rate Limits** to set specific spending caps or request limits that apply *only to that workspace* for this integration.
4. **(Optional) For Provisioning to New Workspaces:** Toggle on **"Automatically provision this integration for new workspaces"** to ensure any future teams automatically get access with a default budget/rate limit you define.
5. Click **Next**.
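Conceptually, the provisioning decisions in the steps above attach a per-workspace record to the integration. A sketch under assumed, illustrative field names (this is not Portkey's actual schema):

```python
# Hypothetical shape of the provisioning decisions from the steps above.
integration = {
    "slug": "azure-prod-useast",
    "workspaces": {
        "team-search": {"enabled": True, "budget_usd": 1000, "rate_limit_rpm": 600},
        "team-ml": {"enabled": False},
    },
    # Applied to any future workspace when auto-provisioning is toggled on
    "auto_provision_new_workspaces": {"enabled": True, "budget_usd": 100},
}

def workspace_has_access(integration: dict, workspace: str) -> bool:
    """A workspace gets access if it was explicitly enabled, or if it is
    new and the integration auto-provisions new workspaces."""
    ws = integration["workspaces"].get(workspace)
    if ws is not None:
        return ws.get("enabled", False)
    return integration["auto_provision_new_workspaces"]["enabled"]
```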

#### **Step 3: Provision Specific Models**

This is where you enforce model governance and control costs.

1. You will see a list of all models available from the provider you're connecting.
2. You can **Clear all** and then select only the models you wish to approve for use.
3. **(Optional) For Dynamic Models:** If you're using a provider like Fireworks AI with many community models, you can toggle on **"Automatically enable new models"**. For providers like OpenAI or Azure, we recommend an explicit allow-list for better cost control.
4. Click **Create Integration**.
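The allow-list policy above can be pictured as a simple gate. This is an illustration of the policy, not Portkey's gateway implementation:

```python
# The allow-list you selected above, e.g. only two OpenAI models approved
APPROVED_MODELS = {"gpt-4o", "gpt-4o-mini"}

def is_allowed(model_slug: str, approved: set, auto_enable_new: bool = False) -> bool:
    """A request passes if the model is on the allow-list, or if
    'Automatically enable new models' was toggled on."""
    return auto_enable_new or model_slug in approved

assert is_allowed("gpt-4o", APPROVED_MODELS)
assert not is_allowed("o1-preview", APPROVED_MODELS)  # not provisioned
```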

**That's it!** You have successfully created and provisioned a centrally managed integration. It will now appear in your **`Connected`** tab. The workspaces you provisioned will see this as an available "AI Provider" in their Model Catalog, with access only to the models you specified and constrained by the budgets you set.
4 changes: 4 additions & 0 deletions product/integrations/agents.mdx
@@ -0,0 +1,4 @@
---
title: Agents
url: /integrations/agents
---
4 changes: 4 additions & 0 deletions product/integrations/ai-apps.mdx
@@ -0,0 +1,4 @@
---
title: AI Apps
url: /integrations/ai-apps
---
4 changes: 4 additions & 0 deletions product/integrations/cloud.mdx
@@ -0,0 +1,4 @@
---
title: Cloud Providers
url: /integrations/cloud
---
4 changes: 4 additions & 0 deletions product/integrations/guardrails.mdx
@@ -0,0 +1,4 @@
---
title: Guardrails
url: /product/guardrails/list-of-guardrail-checks
---
4 changes: 4 additions & 0 deletions product/integrations/libraries.mdx
@@ -0,0 +1,4 @@
---
title: Libraries
url: /integrations/libraries
---
4 changes: 4 additions & 0 deletions product/integrations/plugins.mdx
@@ -0,0 +1,4 @@
---
title: Gateway Plugins
url: /integrations/plugins
---
4 changes: 4 additions & 0 deletions product/integrations/tracing.mdx
@@ -0,0 +1,4 @@
---
title: Tracing
url: /integrations/tracing
---
71 changes: 35 additions & 36 deletions product/model-catalog.mdx
@@ -1,59 +1,58 @@
---
title: Model Catalog
description: Explore and query every AI model available to your workspace, with instant code snippets for all supported providers.
---

The **Model Catalog** is the evolution of Virtual Keys, providing a centralized and powerful way to manage, discover, and use AI models within your workspace. It consists of two main sections: **AI Providers**, where you manage your connections, and **Models**, where you explore what you can use.

### **How it Works: Inheritance from the Organization**

The most significant change with the Model Catalog is the concept of inheritance. Think of it this way:

1. Your **Organization Admin** creates a master **Integration** at the company level (e.g., for "Azure Production"). They add the credentials and can set default budgets, rate limits, and an allow-list of approved models for that integration.
2. When they provision this integration to your workspace, a corresponding **AI Provider** is automatically created in your Model Catalog.
3. This new AI Provider in your workspace *inherits* all the settings from the organization-level integration, including its credentials, model access, and spending limits.

This "create once, provision many" approach provides central governance while giving workspaces the flexibility they need.



---

### **The Model Catalog Experience by Role**

Your experience with the Model Catalog will differ based on your role within the Portkey organization.

#### **For Workspace Members (Developers): Discover and Build**

As a developer, your experience is simplified and streamlined. You primarily interact with the **Models** tab, which acts as your personal "Model Garden."

- **Discover Models:** The "Models" tab is a complete gallery of every single model you have been given access to by your admins.
- **Get Code Snippets:** Click on any model, and Portkey will generate the exact code snippet you need to start making calls, with the correct provider and model slugs already included.
- **Simplified API Calls:** You can now call any model directly using the `model` parameter, formatted as `@{provider_slug}/{model_slug}`. This lets you switch between providers and models on the fly with a single Portkey API key.

```python
from portkey import Portkey

# Your single Portkey API Key is all you need
client = Portkey()

# Switch between a model on OpenAI and one on Bedrock seamlessly
client.chat.completions.create(
    model="@openai-prod/gpt-4o",
    messages=[...]
)

client.chat.completions.create(
    model="@bedrock-us/claude-3-sonnet-v1",
    messages=[...]
)
```

#### **For Workspace Admins: Manage and Customize**

As a Workspace Admin, you have more control over the providers within your workspace via the **AI Providers** tab.

You will see a list of providers that have been inherited from the organization. From here, you have two primary options when you click **Create Provider**:

1. **Inherit from an Org Integration:** You can create *additional* providers that are based on an existing org-level integration. This is useful for subdividing access within your team. For example, if your workspace has a $1000 budget on the main "Azure Prod" integration, you could create a new provider from it named "azure-prod-experimental" and give it a stricter $100 budget for a specific project.
2. **Create a New Workspace-Exclusive Integration:** If your Org Admin has enabled the permission, you can create a brand new integration from scratch. This provider is exclusive to your workspace and functions just like the old Virtual Keys did.
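Option 1 above can be sketched as deriving a narrower provider from the inherited one. The budget guard is an assumption for illustration; this guide does not state how Portkey validates derived budgets:

```python
main_provider = {"slug": "azure-prod", "budget_usd": 1000}  # inherited from the org integration

def derive_provider(parent: dict, slug: str, budget_usd: float) -> dict:
    """Create an additional provider from the same org integration.
    The narrowing check below is an illustrative assumption, not
    documented Portkey behavior."""
    if budget_usd > parent["budget_usd"]:
        raise ValueError("derived budget cannot exceed the parent's budget")
    return {**parent, "slug": slug, "budget_usd": budget_usd}

# The example from option 1: a stricter $100 budget for an experimental project
experimental = derive_provider(main_provider, "azure-prod-experimental", 100)
```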

#### **For Organization Admins: A View into Workspaces**

While Org Admins primarily work in the main **[Integrations](/product/integrations)** dashboard, the Model Catalog provides a crucial feedback loop:

When a Workspace Admin creates a new, workspace-exclusive integration (option #2 above), you gain full visibility. This new integration will automatically appear on your main Integrations page under the **"Workspace-Created"** tab, ensuring you always have a complete audit trail of all provider credentials being used across the organization.