The WikipediaKnowledgeBase reads Wikipedia topics, converts them into vector embeddings and loads them into a vector database.
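Conceptually, the pipeline fetches each topic's article, splits it into chunks, embeds each chunk, and inserts the rows into the vector table. The toy sketch below illustrates that flow with stand-in fetch and embed functions; it is not the phi implementation, and fetch_topic and embed are placeholders, not real APIs.

```python
# Toy sketch of the read -> chunk -> embed -> load pipeline.
# fetch_topic and embed are stand-ins, not phi or wikipedia APIs.

def fetch_topic(topic: str) -> str:
    # Stand-in for the Wikipedia reader; would normally fetch the article text.
    return f"Article text for {topic}."

def chunk(text: str, size: int = 50) -> list[str]:
    # Split the article into fixed-size character chunks.
    return [text[i:i + size] for i in range(0, len(text), size)]

def embed(piece: str) -> list[float]:
    # Stand-in embedder; a real embedder returns a dense vector.
    return [float(len(piece)), float(sum(map(ord, piece)) % 97)]

def load(topics: list[str]) -> list[dict]:
    # Build the rows that would be inserted into the vector table.
    rows = []
    for topic in topics:
        for piece in chunk(fetch_topic(topic)):
            rows.append({"topic": topic, "content": piece, "embedding": embed(piece)})
    return rows

rows = load(["Manchester United", "Real Madrid"])
```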

Usage

We are using a local PgVector database for this example. Make sure it's running.
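One way to start a local instance (assuming Docker and the phidata/pgvector image; the credentials and the 5532 port mapping match the db_url used below):

```shell
docker run -d \
  -e POSTGRES_DB=ai \
  -e POSTGRES_USER=ai \
  -e POSTGRES_PASSWORD=ai \
  -e PGDATA=/var/lib/postgresql/data/pgdata \
  -v pgvolume:/var/lib/postgresql/data \
  -p 5532:5432 \
  --name pgvector \
  phidata/pgvector:16
```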

pip install wikipedia
knowledge_base.py
from phi.knowledge.wikipedia import WikipediaKnowledgeBase
from phi.vectordb.pgvector import PgVector

knowledge_base = WikipediaKnowledgeBase(
    topics=["Manchester United", "Real Madrid"],
    # Table name: ai.wikipedia_documents
    vector_db=PgVector(
        table_name="wikipedia_documents",
        db_url="postgresql+psycopg://ai:ai@localhost:5532/ai",
    ),
)

Then use the knowledge_base with an Agent:

agent.py
from phi.agent import Agent
from knowledge_base import knowledge_base

agent = Agent(
    knowledge=knowledge_base,
    search_knowledge=True,
)
# Load the knowledge base; recreate=False reuses existing embeddings
agent.knowledge.load(recreate=False)

agent.print_response("Ask me about something from the knowledge base")

Params

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| topics | List[str] | - | Topics to read. |
| vector_db | VectorDb | - | Vector Database for the Knowledge Base. |
| reader | Reader | - | A Reader that reads the topics and converts them into Documents for the vector database. |
| num_documents | int | 5 | Number of documents to return on search. |
| optimize_on | int | - | Number of documents to optimize the vector db on. |
| chunking_strategy | ChunkingStrategy | CharacterChunks | The chunking strategy to use. |
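num_documents controls the top-k cutoff at search time: the query is embedded, stored chunks are ranked by vector similarity, and the closest num_documents chunks are returned. The sketch below illustrates that idea with cosine similarity over toy vectors; it is not the phi or PgVector implementation.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    # Cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def search(query_vec: list[float], rows: list[dict], num_documents: int = 5) -> list[dict]:
    # Rank stored chunks by similarity to the query and keep the top num_documents.
    ranked = sorted(rows, key=lambda r: cosine(query_vec, r["embedding"]), reverse=True)
    return ranked[:num_documents]

rows = [
    {"content": "Old Trafford", "embedding": [1.0, 0.0]},
    {"content": "Bernabeu", "embedding": [0.0, 1.0]},
    {"content": "Camp Nou", "embedding": [0.7, 0.7]},
]
top = search([1.0, 0.1], rows, num_documents=2)
```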