Welcome to Agent OS Docker: a robust, production-ready application for serving Agentic Applications as an API. It includes:
- An AgentOS instance: An API-based interface for production-ready Agentic Applications.
- A PostgreSQL database for storing Agent sessions, knowledge, and memories.
- A set of pre-built Agents to use as a starting point.
For more information, check out Agno and give it a ⭐️
Follow these steps to get your Agent OS up and running:
Make sure Docker Desktop is installed and running, and have your OpenAI API key ready.
git clone https://github.com/agno-agi/agent-infra-docker.git
cd agent-infra-docker
We use GPT-5 as the default model, so export the `OPENAI_API_KEY` environment variable to get started:
export OPENAI_API_KEY="YOUR_API_KEY_HERE"
Note: You can use any model provider; just update the agents in the `/agents` folder and add the required library to the `pyproject.toml` and `requirements.txt` files.
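For example, switching an agent to another provider usually only needs one extra dependency. This sketch assumes Anthropic; the package name shown is an assumption, so use whatever your provider documents:

```toml
# pyproject.toml — sketch of adding a provider library
[project]
dependencies = [
  # ...existing dependencies...
  "anthropic",  # assumed package name for the Anthropic provider
]
```

After editing `pyproject.toml`, regenerate `requirements.txt` (see the dependency-management section below) so the Docker image picks up the change.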
ag infra up
Or run the application using docker compose (remove the `--build` flag if you already have the image built):
docker compose up -d --build
This command starts:
- The AgentOS instance, which is a FastAPI server, running on http://localhost:8000.
- The PostgreSQL database, accessible on `localhost:5432`.
Once started, you can:
- Test the API at http://localhost:8000/docs.
- Open the Agno AgentOS UI.
- Connect your OS with `http://localhost:8000` as the endpoint. You can name it `AgentOS` (or any name you prefer).
- Explore all the features of AgentOS or go straight to the Chat page to interact with your Agents.
When you're done, stop the application using:
ag infra down
Or:
docker compose down
The `/agents` folder contains pre-built agents that you can use as a starting point.
- Web Search Agent: A simple agent that can search the web.
- Agno Assist: An Agent that can help answer questions about Agno.
- Finance Agent: An agent that uses the Financial Datasets API to get stock prices and financial data.
To set up your local virtual environment:
We use `uv` for Python environment and package management. Install it by following the uv documentation, or use the command below on Unix-like systems:
curl -LsSf https://astral.sh/uv/install.sh | sh
Run the `dev_setup.sh` script. This will create a virtual environment and install project dependencies:
./scripts/dev_setup.sh
Activate the created virtual environment:
source .venv/bin/activate
(On Windows, the command might differ, e.g., `.venv\Scripts\activate`)
If you need to add or update python dependencies:
Add or update your desired Python package dependencies in the `[dependencies]` section of the `pyproject.toml` file.
The `requirements.txt` file is used to build the application image. After modifying `pyproject.toml`, regenerate `requirements.txt` using:
./scripts/generate_requirements.sh
To upgrade all existing dependencies to their latest compatible versions, run:
./scripts/generate_requirements.sh upgrade
Rebuild your Docker images to include the updated dependencies:
docker compose up -d --build
This project comes with a set of integration tests that you can use to ensure the application is working as expected.
First, start the application:
docker compose up -d
Then, run the tests:
pytest tests/
When the tests finish, stop the application again:
docker compose down
Need help, have a question, or want to connect with the community?
- 📚 Read the Agno Docs for more in-depth information.
- 💬 Chat with us on Discord for live discussions.
- ❓ Ask a question on Discourse for community support.
- 🐛 Report an Issue on GitHub if you find a bug or have a feature request.
This repository includes a `Dockerfile` for building a production-ready container image of the application.
The general process to run in production is:
- Update the `scripts/build_image.sh` file and set your `IMAGE_NAME` and `IMAGE_TAG` variables.
- Build and push the image to your container registry:
./scripts/build_image.sh
- Run in your cloud provider of choice.
- Configure for Production
- Ensure your production environment variables (e.g., `OPENAI_API_KEY`, database connection strings) are securely managed. Most cloud providers offer a way to set these as environment variables for your deployed service.
- Review the agent configurations in the `/agents` directory and ensure they are set up for your production needs (e.g., correct model versions, any production-specific settings).
- Build Your Production Docker Image
- Update the `scripts/build_image.sh` script to set your desired `IMAGE_NAME` and `IMAGE_TAG` (e.g., `your-repo/agent-api:v1.0.0`).
- Run the script to build and push the image:
./scripts/build_image.sh
- Deploy to a Cloud Service: With your image in a registry, you can deploy it to various cloud services that support containerized applications. Some common options include:
- Serverless Container Platforms:
- Google Cloud Run: A fully managed platform that automatically scales your stateless containers. Ideal for HTTP-driven applications.
- AWS App Runner: Similar to Cloud Run, AWS App Runner makes it easy to deploy containerized web applications and APIs at scale.
- Azure Container Apps: Build and deploy modern apps and microservices using serverless containers.
- Container Orchestration Services:
- Amazon Elastic Container Service (ECS): A highly scalable, high-performance container orchestration service that supports Docker containers. Often used with AWS Fargate for serverless compute or EC2 instances for more control.
- Google Kubernetes Engine (GKE): A managed Kubernetes service for deploying, managing, and scaling containerized applications using Google infrastructure.
- Azure Kubernetes Service (AKS): A managed Kubernetes service for deploying and managing containerized applications in Azure.
- Platform as a Service (PaaS) with Docker Support:
- Railway.app: Offers a simple way to deploy applications from a Dockerfile. It handles infrastructure, scaling, and networking.
- Render: Another platform that simplifies deploying Docker containers, databases, and static sites.
- Heroku: While traditionally known for buildpacks, Heroku also supports deploying Docker containers.
- Specialized Platforms:
- Modal: A platform designed for running Python code (including web servers like FastAPI) in the cloud, often with a focus on batch jobs, scheduled functions, and model inference, but can also serve web endpoints.
The specific deployment steps will vary depending on the chosen provider. Generally, you'll point the service to your container image in the registry and configure aspects like port mapping (the application runs on port 8000 by default inside the container), environment variables, scaling parameters, and any necessary database connections.
- Database Configuration
- The default `docker-compose.yml` sets up a PostgreSQL database for local development. In production, you will typically use a managed database service provided by your cloud provider (e.g., AWS RDS, Google Cloud SQL, Azure Database for PostgreSQL) for better reliability, scalability, and manageability.
- Ensure your deployed application is configured with the correct database connection URL for your production database instance. This is usually set via an environment variable.
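As a sketch of that pattern, application code can assemble the connection URL from individual environment variables. Every variable name and default below is an assumption; align them with your `docker-compose.yml` and your database driver:

```python
import os

def database_url() -> str:
    """Build a PostgreSQL connection URL from the environment.

    Variable names, defaults, and the SQLAlchemy-style driver prefix
    are assumptions; match them to your compose file and secret store.
    """
    user = os.environ.get("DB_USER", "ai")
    password = os.environ.get("DB_PASS", "ai")
    host = os.environ.get("DB_HOST", "localhost")
    port = os.environ.get("DB_PORT", "5432")
    name = os.environ.get("DB_DATABASE", "ai")
    return f"postgresql+psycopg://{user}:{password}@{host}:{port}/{name}"
```

In production you would set `DB_HOST` (and friends) to point at your managed database instance instead of the local container.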