LLM OS
The LLM OS proposes that LLMs are the CPU/Kernel of an emerging operating system and can solve problems by coordinating multiple resources. Andrej Karpathy has discussed the idea in his tweets and in a video. Here’s a video of me building the LLM OS.
Using this template, we can run the llm-os locally using Docker and in production on AWS.
Setup
Create a virtual environment
Open the Terminal and create a python virtual environment.
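A minimal sketch (the path ~/.venvs/aienv is just a conventional choice; any venv location works):

```shell
# Create a python virtual environment and activate it
python3 -m venv ~/.venvs/aienv
source ~/.venvs/aienv/bin/activate
```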
Install phidata
Install phidata using pip:
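```shell
pip install -U phidata
```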
Install docker
Install Docker Desktop to run your app locally.
Create your codebase
Create your codebase using the llm-os template:
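This uses the phi CLI installed with phidata; the -t (template) and -n (name) flags below follow the standard template workflow (run `phi ws create --help` to confirm):

```shell
phi ws create -t llm-os -n llm-os
```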
This will create a folder llm-os with the following structure:
llm-os                  # root directory of your llm-os
├── ai
│   └── assistants.py   # AI Assistants
├── app                 # Streamlit app
├── api                 # FastAPI routes
├── db                  # database settings
├── Dockerfile          # Dockerfile for the application
├── pyproject.toml      # python project definition
├── requirements.txt    # python dependencies generated using pyproject.toml
├── scripts             # helper scripts
├── utils               # shared utilities
└── workspace           # phidata workspace directory
    ├── dev_resources.py   # dev resources running locally
    ├── prd_resources.py   # production resources running on AWS
    ├── secrets            # storing secrets
    └── settings.py        # phidata workspace settings
Set Credentials
We use gpt-4o as the LLM, so export your OPENAI_API_KEY. You can get one from OpenAI if needed.
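For example (placeholder value):

```shell
export OPENAI_API_KEY=sk-***   # replace with your key
```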
If you’d like to use the research assistant, export your EXA_API_KEY. You can get one from Exa if needed.
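Likewise (placeholder value):

```shell
export EXA_API_KEY=***   # replace with your key
```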
Run LLM OS
We’ll build a simple front-end for the LLM OS using Streamlit. Start the app group using:
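Assuming the phi CLI’s group filter selects the Streamlit app (check `phi ws up --help` if the flag differs):

```shell
phi ws up --group app
```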
Press Enter to confirm and allow a few minutes for the image to download (only the first time). Verify container status and view logs on the Docker dashboard.
LLM OS
- Open localhost:8501 to view your LLM OS.
- Enter a username.
- Add a blog post to knowledge: https://blog.samaltman.com/what-i-wish-someone-had-told-me and ask: What did Sam Altman wish he knew?
- Test Web search: What’s happening in France?
- Test Calculator: What is 10!
- Test Finance: What is the price of AAPL?
- Test Finance: Write a comparison between NVIDIA and AMD, use all finance tools available and summarize the key points.
- Test Research: Write a report on the HashiCorp IBM acquisition.
Optional: Serve your LLM OS as an API
Streamlit is great for building micro front-ends, but a production application will typically be built with a front-end framework like Next.js backed by a REST API built with FastAPI.
Your LLM OS comes with ready-to-use FastAPI endpoints.
Enable FastAPI
Update the workspace/settings.py file and set dev_api_enabled=True:
...
ws_settings = WorkspaceSettings(
    ...
    # Uncomment the following line
    dev_api_enabled=True,
    ...
)
Start FastAPI
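Assuming the same group filter as above:

```shell
phi ws up --group api
```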
Press Enter to confirm
View API Endpoints
- Open localhost:8000/docs to view the API endpoints.
- Test the v1/assistants/chat endpoint with:

{
  "message": "What is 10!",
  "assistant": "LLM_OS"
}
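As a sketch, the same request with curl; the exact /v1 route prefix is an assumption, so confirm the path on localhost:8000/docs:

```shell
curl -X POST http://localhost:8000/v1/assistants/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "What is 10!", "assistant": "LLM_OS"}'
```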
Build an AI Product using the LLM OS
Your llm-os codebase comes with pre-configured API endpoints that can be used to build your AI product. The general workflow, sketched with curl after this list, is:
- Call the /assistants/create endpoint to create a new run for a user:

{
  "user_id": "my-app-user-1",
  "assistant": "LLM_OS"
}
- The response contains a run_id that can be used to build a chat interface by calling the /assistants/chat endpoint:

{
  "message": "what is 10!",
  "stream": true,
  "run_id": "372224c0-cd5e-4e87-a29d-65d33d9353a5",
  "user_id": "my-app-user-1",
  "assistant": "LLM_OS"
}
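A sketch of this workflow with curl (the /v1 route paths are assumed from the endpoints above; verify on localhost:8000/docs):

```shell
# 1. Create a new run for a user
curl -X POST http://localhost:8000/v1/assistants/create \
  -H "Content-Type: application/json" \
  -d '{"user_id": "my-app-user-1", "assistant": "LLM_OS"}'

# 2. Chat using the run_id returned by the create call
curl -X POST http://localhost:8000/v1/assistants/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "what is 10!", "stream": true, "run_id": "<run_id-from-step-1>", "user_id": "my-app-user-1", "assistant": "LLM_OS"}'
```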
These routes are defined in the api/routes folder and can be customized to your use case.
Message us on Discord if you need help.
Delete local resources
Play around and stop the workspace using:
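```shell
phi ws down
```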
or stop individual Apps using:
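The group filter here is an assumption mirroring the start command; check `phi ws down --help` for the exact flag:

```shell
phi ws down --group app
```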
Next
Congratulations on running your LLM OS locally. Next Steps:
- Run your LLM OS on AWS
- Read how to update workspace settings
- Read how to create a git repository for your workspace
- Read how to manage the development application
- Read how to format and validate your code
- Read how to add python libraries
- Chat with us on Discord