To run agents in production, we need to:

  1. Serve them using an application like FastAPI, Django or Streamlit.
  2. Manage their sessions, memory and knowledge in a database.
  3. Monitor, evaluate and improve their performance.

Phidata not only makes building Agents easy, it also provides templates that can be deployed to AWS with a single command. Here’s how they work:

  • Create your codebase using a template: phi ws create
  • Run your application locally: phi ws up
  • Run your application on AWS: phi ws up prd:aws

We strongly believe that data used by AI applications should be stored securely inside your VPC.

We fully support BYOC (Bring Your Own Cloud) and encourage you to use your own AWS account.

Agent App

Let’s build an agent-app that includes a Streamlit UI, a FastAPI server and a Postgres database for memory and knowledge. Run it locally using Docker or deploy it to production on AWS.

Setup

1. Create a virtual environment
2. Install phidata
3. Install Docker: install Docker Desktop to run your app locally
4. Export your OpenAI key: you can get an API key from the OpenAI platform
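The setup steps above can be sketched as shell commands (the environment name aienv and the placeholder API key are illustrative; Docker Desktop itself is installed separately from docker.com):

```shell
# 1. Create and activate a virtual environment (the name aienv is illustrative)
python3 -m venv aienv
source aienv/bin/activate

# 2. Install phidata
pip install -U phidata

# 3. Install Docker Desktop separately from docker.com to run the app locally

# 4. Export your OpenAI key (placeholder value shown)
export OPENAI_API_KEY=sk-your-key-here
```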

Create your codebase

Create your codebase using the agent-app template
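This uses the phi ws create command from the workflow above; the -t (template) and -n (name) flags follow phidata's CLI convention:

```shell
phi ws create -t agent-app -n agent-app
```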

This will create a folder agent-app with the following structure:

agent-app                     # root directory
├── agents                  # add your Agents here
├── app                     # add streamlit apps here
├── api                     # add FastAPI routes here
├── db                      # add database tables here
├── Dockerfile              # Dockerfile for the application
├── pyproject.toml          # python project definition
├── requirements.txt        # python dependencies generated using pyproject.toml
├── scripts                 # helper scripts
├── utils                   # shared utilities
└── workspace               # phidata workspace directory
    ├── dev_resources.py    # dev resources running locally
    ├── prd_resources.py    # production resources running on AWS
    ├── secrets             # secrets
    └── settings.py         # phidata workspace settings

Test your Agents using Streamlit

Streamlit allows us to build micro front-ends for testing our Agents. Start the app using:
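This is the phi ws up step from the workflow above; it builds and starts the workspace containers defined in workspace/dev_resources.py:

```shell
phi ws up
```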

Press Enter to confirm and allow a few minutes for the image to download (first run only). Verify container status and view logs in the Docker dashboard.

  • Open localhost:8501 to view your AI Agent.
  • The Streamlit apps are defined in the app folder.
  • The Agents are defined in the agents folder.

Serve your Agents using FastApi

Streamlit is great for building micro front-ends, but any production application will be built using a front-end framework like Next.js backed by a REST API built using FastAPI.

Your Agent App comes ready-to-use with FastAPI endpoints. Start the API using:
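phi ws up starts all dev resources, including the FastAPI container; restricting the command to a single group of resources (the --group flag here is an assumption about the CLI) would look like:

```shell
# starts only the API resources; the --group flag is an assumption
phi ws up --group api
```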

  • Open localhost:8000/docs to view the API endpoints.
  • Test the /v1/playground/agent/run endpoint with:
{
  "message": "howdy",
  "agent_id": "example-agent",
  "stream": true
}
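The same request can be sent from the command line (the curl invocation is illustrative; the endpoint and payload are taken from the docs above):

```shell
curl -X POST http://localhost:8000/v1/playground/agent/run \
  -H "Content-Type: application/json" \
  -d '{"message": "howdy", "agent_id": "example-agent", "stream": true}'
```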

Building your AI Product

The agent-app comes with common endpoints that you can use to build your AI product. These endpoints are developed in close collaboration with real AI applications and are a great starting point.

The general workflow is:

  • Your front-end/product calls the /v1/playground/agent/run endpoint to run Agents.
  • Using the session_id returned, your product continues the conversation and serves chats to its users.
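A follow-up call in the same session might look like this (passing session_id as a request field is an assumption based on the workflow above):

```shell
# session_id below is a placeholder for the value returned by the first call
curl -X POST http://localhost:8000/v1/playground/agent/run \
  -H "Content-Type: application/json" \
  -d '{"message": "tell me more", "agent_id": "example-agent", "session_id": "<session_id-from-first-response>", "stream": true}'
```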

Delete local resources

Play around and stop the workspace using:
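The teardown counterpart of phi ws up:

```shell
phi ws down
```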

or stop individual Apps using:
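Stopping a single App rather than the whole workspace (the --group flag is an assumption about the CLI):

```shell
# stops only the Streamlit app resources; the --group flag is an assumption
phi ws down --group app
```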

Next

Congratulations on running an Agent App locally. Next Steps: