A knowledge base is a database of information that the Assistant can search to improve its responses. This information is stored in a vector database and provides the LLM with business context, allowing it to respond in a context-aware manner. The general syntax is:

from phi.assistant import Assistant, AssistantKnowledge

# Create knowledge base
knowledge_base = AssistantKnowledge(vector_db=...)

# Add information to the knowledge base
knowledge_base.load_text("The sky is blue")

# Add the knowledge base to the Assistant
assistant = Assistant(knowledge_base=knowledge_base)

Vector Databases

While any type of storage can act as a knowledge base, vector databases offer the best solution for retrieving relevant results from dense information quickly.
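At its core, this retrieval is a nearest-neighbor search over embedding vectors. A minimal sketch in plain Python, where the hard-coded vectors stand in for real embedding-model output:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "vector database": documents with pre-computed embeddings
documents = {
    "the sky is blue": [0.9, 0.1, 0.0],
    "grass is green": [0.1, 0.9, 0.1],
}

def nearest(query_embedding):
    """Return the stored document whose embedding is most similar to the query."""
    return max(documents, key=lambda d: cosine_similarity(documents[d], query_embedding))

print(nearest([0.8, 0.2, 0.1]))  # -> the sky is blue
```

A production vector database performs the same comparison, but with approximate-nearest-neighbor indexes so it stays fast over millions of vectors.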

Our goal is to quickly retrieve relevant information from the knowledge base. Here's how vector databases are used with LLMs:

1. Chunk the information: Break down the knowledge into smaller chunks so that the search query matches only relevant results.

2. Load the knowledge base: Convert the chunks into embedding vectors and store them in a vector database.

3. Search the knowledge base: When the user sends a message, convert the input message into an embedding and "search" for its nearest neighbors in the vector database.
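The three steps above can be sketched end to end in plain Python. A real setup would call an embedding model and a vector database; here a trivial bag-of-words embedding stands in for both, so the names `chunk`, `embed`, and `VectorStore` are illustrative only:

```python
import math
import re

def chunk(text, size=5):
    """Step 1: split the text into chunks of `size` words each."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

VOCAB = ["sky", "blue", "grass", "green", "sun", "yellow"]

def embed(text):
    """Toy embedding: word counts over a fixed vocabulary.
    A real system would call an embedding model here."""
    words = re.findall(r"[a-z]+", text.lower())
    return [words.count(w) for w in VOCAB]

class VectorStore:
    """Step 2: store (chunk, embedding) pairs. Step 3: nearest-neighbor search."""

    def __init__(self):
        self.rows = []

    def load(self, text):
        for c in chunk(text):
            self.rows.append((c, embed(c)))

    def search(self, query, top_k=1):
        q = embed(query)
        ranked = sorted(self.rows, key=lambda row: self._cos(row[1], q), reverse=True)
        return [c for c, _ in ranked[:top_k]]

    @staticmethod
    def _cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a)) or 1.0
        nb = math.sqrt(sum(x * x for x in b)) or 1.0
        return dot / (na * nb)

store = VectorStore()
store.load("the sky is blue and the grass is green")
print(store.search("what colour is the sky?"))  # -> ['the sky is blue and']
```

The retrieved chunks are what gets injected into the LLM's prompt, which is how the knowledge base makes responses context-aware.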

Loading the Knowledge Base

Before you can use a knowledge base, it needs to be loaded with the embeddings that will be used for retrieval. Use one of the following knowledge bases to simplify the chunking, loading, searching, and optimization process: