The WebsiteKnowledgeBase crawls websites, converts their content into vector embeddings, and loads them into a vector database.

Usage

We are using a local PgVector database for this example. Make sure it's running.
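If you don't already have a PgVector instance, one way to start one locally is with Docker. Treat the image name below as an assumption; the credentials, database name, and port are chosen to match the db_url used in knowledge_base.py.

docker run -d \
  -e POSTGRES_DB=ai \
  -e POSTGRES_USER=ai \
  -e POSTGRES_PASSWORD=ai \
  -e PGDATA=/var/lib/postgresql/data/pgdata \
  -v pgvolume:/var/lib/postgresql/data \
  -p 5532:5432 \
  --name pgvector \
  phidata/pgvector:16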

Install bs4, which the website reader uses to parse the crawled pages:

pip install bs4
knowledge_base.py
from phi.knowledge.website import WebsiteKnowledgeBase
from phi.vectordb.pgvector import PgVector

knowledge_base = WebsiteKnowledgeBase(
    urls=["https://docs.phidata.com/introduction"],
    # Number of links to follow from the seed URLs
    max_links=10,
    # Table name: ai.website_documents
    vector_db=PgVector(
        table_name="website_documents",
        db_url="postgresql+psycopg://ai:ai@localhost:5532/ai",
    ),
)

Then use the knowledge_base with an Agent:

agent.py
from phi.agent import Agent
from knowledge_base import knowledge_base

agent = Agent(
    knowledge=knowledge_base,
    search_knowledge=True,
)
# Load the knowledge base: crawl the URLs and insert the documents into the vector db
agent.knowledge.load(recreate=False)

agent.print_response("Ask me about something from the knowledge base")
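
Once the knowledge base has been loaded, the embeddings persist in PgVector, so subsequent runs can skip the load step and query the agent directly. A minimal sketch (the question is only illustrative):

from phi.agent import Agent
from knowledge_base import knowledge_base

agent = Agent(
    knowledge=knowledge_base,
    search_knowledge=True,
)

# The vector table was populated on a previous run, so no load() call is needed
agent.print_response("What is phidata and how do I get started?")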

Params

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| urls | List[str] | - | URLs to read. |
| reader | WebsiteReader | - | A WebsiteReader that reads the urls and converts them into Documents for the vector database. |
| max_depth | int | 3 | Maximum depth to crawl. |
| max_links | int | 10 | Number of links to crawl. |
| vector_db | VectorDb | - | Vector database for the knowledge base. |
| num_documents | int | 5 | Number of documents to return on search. |
| optimize_on | int | - | Number of documents to optimize the vector db on. |
| chunking_strategy | ChunkingStrategy | FixedSizeChunking | The chunking strategy to use. |
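
As a sketch of how these parameters fit together (the values are illustrative and the PgVector setup is the same as above), you can tune both the crawl and the search when creating the knowledge base:

from phi.knowledge.website import WebsiteKnowledgeBase
from phi.vectordb.pgvector import PgVector

knowledge_base = WebsiteKnowledgeBase(
    urls=["https://docs.phidata.com/introduction"],
    max_depth=2,      # follow links at most 2 levels deep from the seed URLs
    max_links=25,     # crawl at most 25 links in total
    num_documents=3,  # return the top 3 matching documents on each search
    vector_db=PgVector(
        table_name="website_documents",
        db_url="postgresql+psycopg://ai:ai@localhost:5532/ai",
    ),
)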