The LLM OS proposes that LLMs are the CPU/kernel of an emerging operating system, solving problems by coordinating multiple resources. Andrej Karpathy discusses the idea in this tweet, this tweet and this video. Here’s a video of me building the LLM OS.

Using this template, we can run the LLM OS locally using Docker and in production on AWS.



Create a virtual environment

Open the Terminal and create a Python virtual environment.

python3 -m venv ~/.venvs/aienv
source ~/.venvs/aienv/bin/activate

Install phidata

Install phidata using pip

pip install -U "phidata[aws]"

Install docker

Install Docker Desktop to run your app locally.

Create your codebase

Create your codebase using the llm-os template

phi ws create -t llm-os -n llm-os

This will create a folder llm-os with the following structure:

llm-os                      # root directory of your llm-os
├── ai
│   ├──                 # AI Assistants
├── app                     # Streamlit app
├── api                     # FastApi routes
├── db                      # database settings
├── Dockerfile              # Dockerfile for the application
├── pyproject.toml          # python project definition
├── requirements.txt        # python dependencies generated using pyproject.toml
├── scripts                 # helper scripts
├── utils                   # shared utilities
└── workspace               # phidata workspace directory
    ├──    # dev resources running locally
    ├──    # production resources running on AWS
    ├── secrets             # storing secrets
    └──         # phidata workspace settings

Set Credentials

We use gpt-4o as the LLM, so export your OPENAI_API_KEY. You can get one from OpenAI if needed.

export OPENAI_API_KEY=sk-***

If you’d like to use the research assistant, export your EXA_API_KEY. You can get one from here if needed.

export EXA_API_KEY=***
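As a quick sanity check (a sketch, not part of the template), you can confirm the keys are visible to your shell before starting the workspace; EXA_API_KEY is only needed for the research assistant:

```python
import os

def missing_keys(required=("OPENAI_API_KEY",), optional=("EXA_API_KEY",)):
    """Return the names of required keys absent from the environment."""
    for key in optional:
        if not os.environ.get(key):
            print(f"note: optional {key} is not set")
    return [key for key in required if not os.environ.get(key)]

if __name__ == "__main__":
    absent = missing_keys()
    if absent:
        raise SystemExit(f"export these before running: {absent}")
```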


We’ll build a simple front-end for the LLM OS using Streamlit. Start the app group using:

phi ws up --group app

Press Enter to confirm and give it a few minutes for the image to download (only the first time). Verify container status and view logs on the Docker dashboard.


  • Open localhost:8501 to view your LLM OS.
  • Enter a username.
  • Add a blog post to knowledge and ask: what did Sam Altman wish he knew?
  • Test Web search: What’s happening in France?
  • Test Calculator: What is 10!
  • Test Finance: What is the price of AAPL?
  • Test Finance: Write a comparison between NVIDIA and AMD, use all finance tools available and summarize the key points.
  • Test Research: Write a report on the HashiCorp acquisition by IBM.
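For reference when checking the calculator’s answer, 10! can be computed directly in Python:

```python
import math

# 10! = 10 * 9 * ... * 1
print(math.factorial(10))  # → 3628800
```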


Optional: Serve your LLM OS as an API

Streamlit is great for building micro front-ends, but a production application will typically be built using a front-end framework like Next.js backed by a REST API built using FastApi.

Your LLM OS comes with ready-to-use FastApi endpoints.


Enable FastApi

Update the workspace/ file and set dev_api_enabled=True

ws_settings = WorkspaceSettings(
    ...
    # Uncomment the following line
    dev_api_enabled=True,
)

Start FastApi

phi ws up --group api

Press Enter to confirm


View API Endpoints

Open localhost:8000/docs to view the available endpoints (FastApi serves interactive docs at /docs). Test the chat endpoint with:

{
  "message": "Whats 10!",
  "assistant": "LLM_OS"
}

Build an AI Product using the LLM OS

Your llm-os comes with pre-configured API endpoints that can be used to build your AI product. The general workflow is:

  • Call the /assitants/create endpoint to create a new run for a user:

    {
      "user_id": "my-app-user-1",
      "assistant": "LLM_OS"
    }

  • The response contains a run_id that can be used to build a chat interface by calling the /assitants/chat endpoint:

    {
      "message": "whats 10!",
      "stream": true,
      "run_id": "372224c0-cd5e-4e87-a29d-65d33d9353a5",
      "user_id": "my-app-user-1",
      "assistant": "LLM_OS"
    }

These routes are defined in the api/routes folder and can be customized for your use case.
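As a sketch of that workflow (the base URL, port, and the stream flag are assumptions here; check api/routes for the exact paths and payloads), the two calls can be chained from Python using only the standard library:

```python
import json
from urllib import request

API_BASE = "http://localhost:8000/v1"  # assumed address of the local FastApi container

def post_json(path: str, payload: dict) -> dict:
    """POST a JSON payload to the LLM OS API and return the parsed response."""
    req = request.Request(
        API_BASE + path,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())

if __name__ == "__main__":
    # 1. Create a new run for the user
    run = post_json("/assitants/create", {"user_id": "my-app-user-1", "assistant": "LLM_OS"})
    # 2. Chat using the returned run_id
    reply = post_json("/assitants/chat", {
        "message": "whats 10!",
        "stream": False,  # assumed: set to true for a streaming response
        "run_id": run["run_id"],
        "user_id": "my-app-user-1",
        "assistant": "LLM_OS",
    })
    print(reply)
```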

Message us on Discord if you need help.

Delete local resources

Play around and stop the workspace using:

phi ws down

or stop individual Apps using:

phi ws down --group app


Congratulations on running your LLM OS locally. Next Steps: