Example

python_agent.py
from phi.agent.python import PythonAgent
from phi.file.local.csv import CsvFile

python_agent = PythonAgent(
    files=[
        CsvFile(
            path="https://phidata-public.s3.amazonaws.com/demo_data/IMDB-Movie-Data.csv",
            description="Contains information about movies from IMDB.",
        )
    ],
    pip_install=True,
    show_tool_calls=True,
)

python_agent.print_response("What is the average rating of movies?")

PythonAgent Params

name
str
default: "PythonAgent"

Name of the PythonAgent.

files
List[File]
default: "None"

List of Files available for the PythonAgent.

file_information
str
default: "None"

Provide information about Files as a string.

charting_libraries
List[str]
default: "['plotly', 'matplotlib', 'seaborn']"

List of charting libraries the PythonAgent can use.

followups
bool
default: "False"

If the PythonAgent is allowed to ask follow-up questions.

read_tool_call_history
bool
default: "True"

If the PythonAgent is allowed to read the tool call history.

base_dir
Path
default: "."

Where to save files if needed.

save_and_run
bool
default: "True"

If the PythonAgent is allowed to save and run python code.

pip_install
bool
default: "False"

If the PythonAgent is allowed to pip install libraries. Disabled by default for security reasons.

run_code
bool
default: "False"

If the PythonAgent is allowed to run python code directly. Disabled by default for security reasons.

list_files
bool
default: "False"

If the PythonAgent is allowed to list files.

run_files
bool
default: "True"

If the PythonAgent is allowed to run files.

read_files
bool
default: "False"

If the PythonAgent is allowed to read files.

safe_globals
dict
default: "None"

Provide a dictionary of global variables for the PythonAgent.

safe_locals
dict
default: "None"

Provide a dictionary of local variables for the PythonAgent.
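How PythonAgent wires these namespaces into code execution is internal to phidata; the general mechanism is Python's `exec()` with explicit global/local dictionaries, which this sketch illustrates. The variable names and data here are illustrative, not part of the phidata API.

```python
import statistics

# Restricted-execution sketch: exec() with explicit global/local
# namespaces is the general idea behind safe_globals and safe_locals.
safe_globals = {"statistics": statistics}    # what generated code may use
safe_locals = {"ratings": [8.1, 7.9, 6.5]}   # data exposed to the generated code

# Generated code sees only what was placed in the two namespaces.
exec("avg = statistics.mean(ratings)", safe_globals, safe_locals)
print(safe_locals["avg"])  # → 7.5
```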

Agent Reference

PythonAgent is a subclass of the Agent class and has access to the same params

llm
LLM

LLM to use for this Agent

introduction
str

Agent introduction. This is added to the chat history when a run is started.

name
str

Agent name

agent_data
Dict[str, Any]

Metadata associated with this agent

run_id
str

Run UUID (autogenerated if not set)

run_name
str

Run name

run_data
Dict[str, Any]

Metadata associated with this run

user_id
str

ID of the user participating in this run

user_data
Dict[str, Any]

Metadata associated with the user participating in this run

memory
AgentMemory
default: "AgentMemory()"

Agent Memory

add_chat_history_to_messages
bool
default: "False"

Add chat history to the messages sent to the LLM.

add_chat_history_to_prompt
bool
default: "False"

Add chat history to the prompt sent to the LLM.

num_history_messages
int
default: "6"

Number of previous messages to add to prompt or messages sent to the LLM.

knowledge_base
AgentKnowledge

Agent Knowledge Base

add_references_to_prompt
bool
default: "False"

Enable RAG by adding references from the knowledge base to the prompt.

storage
AgentStorage

Agent Storage

db_row
AgentRun

AgentRun from the database: DO NOT SET MANUALLY

tools
List[Union[Tool, ToolRegistry, Callable, Dict, Function]]

A list of tools provided to the LLM. Tools are functions the model may generate JSON inputs for. If you provide a dict, it is not called by the model.

show_tool_calls
bool
default: "False"

Show tool calls in LLM messages.

tool_call_limit
int

Maximum number of tool calls allowed.

tool_choice
Union[str, Dict[str, Any]]

Controls which (if any) tool is called by the model.

  • "none" means the model will not call a tool and instead generates a message.
  • "auto" means the model can pick between generating a message or calling a tool.
  • Specifying a particular function via
{
  "type": "function",
  "function": {"name": "my_function"}
}

forces the model to call that tool.

"none" is the default when no tools are present. "auto" is the default if tools are present.

read_chat_history
bool
default: "False"

If True, adds a tool that allows the LLM to get the chat history.

search_knowledge
bool
default: "False"

If True, adds a tool that allows the LLM to search the knowledge base.

update_knowledge
bool
default: "False"

If True, adds a tool that allows the LLM to update the knowledge base.

read_tool_call_history
bool
default: "False"

If True, adds a tool that allows the LLM to get the tool call history.

use_tools
bool
default: "False"

Allow the agent to use tools

additional_messages
List[Union[Dict, Message]]

List of additional messages added to the messages list after the system prompt. Use these for few-shot learning or to provide additional context to the LLM.
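A few-shot sketch, assuming the OpenAI-style role/content dict shape; the message contents are invented for illustration:

```python
# Hypothetical few-shot pair passed via additional_messages.
additional_messages = [
    {"role": "user", "content": "Which movie has the highest rating?"},
    {"role": "assistant", "content": "Let me check the Rating column of the CSV."},
]
```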

system_prompt
str

Provide the system prompt as a string

system_prompt_template
PromptTemplate

Provide the system prompt as a PromptTemplate

build_default_system_prompt
bool
default: "True"

If True, build a default system prompt using instructions and extra_instructions

description
str

Agent description for the default system prompt

task
str

Agent task

instructions
List[str]

List of instructions for the default system prompt

extra_instructions
List[str]

List of extra instructions for the default system prompt. Use these when you want to use the default prompt but also add some extra instructions.

expected_output
str

Expected output added to the system prompt

add_to_system_prompt
str

Add a string to the end of the default system prompt

add_knowledge_base_instructions
bool
default: "True"

If True, add instructions for using the knowledge base to the default system prompt if knowledge base is provided

prevent_hallucinations
bool
default: "False"

If True, add instructions for letting the user know that the agent does not know the answer

prevent_prompt_injection
bool
default: "False"

If True, add instructions to prevent prompt injection attacks

limit_tool_access
bool
default: "False"

If True, add instructions for limiting tool access to the default system prompt if tools are provided

add_datetime_to_instructions
bool
default: "False"

If True, add the current datetime to the prompt to give the agent a sense of time. This allows for relative times like "tomorrow" to be used in the prompt.

markdown
bool
default: "False"

If True, format the output using markdown.

user_prompt
Union[List[Dict], str]

Provide the user prompt as a string. Note: this will ignore the input message provided to the run function.

user_prompt_template
PromptTemplate

Provide the user prompt as a PromptTemplate

build_default_user_prompt
bool
default: "True"

If True, build a default user prompt using references and chat history

references_function
Callable[..., Optional[str]]

Function to build references for the default user_prompt. This function, if provided, is called when add_references_to_prompt is True

Signature:

def references(agent: Agent, query: str) -> Optional[str]:
    ...

references_format
Literal['json', 'yaml']
default: "json"

Format of the references
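A minimal sketch of a custom references function. The knowledge-base lookup is faked with a static dict, and the Agent type hint is loosened to Any, so nothing here reflects real phidata internals:

```python
import json
from typing import Any, Optional

# Toy in-memory "knowledge base"; a real references function would
# query the agent's knowledge base instead.
FAKE_KB = {"imdb": "The IMDB dataset covers 1000 movies."}

def references(agent: Any, query: str) -> Optional[str]:
    # Return matching references as a JSON string (references_format="json"),
    # or None when nothing matches.
    hits = [text for key, text in FAKE_KB.items() if key in query.lower()]
    return json.dumps(hits) if hits else None

print(references(None, "Tell me about the IMDB data"))
# → ["The IMDB dataset covers 1000 movies."]
```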

chat_history_function
Callable[..., Optional[str]]

Function to build the chat_history for the default user_prompt. This function, if provided, is called when add_chat_history_to_prompt is True

Signature:

def chat_history(agent: Agent) -> str:
    ...
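A matching sketch for the chat_history signature. The stub agent and its dict-shaped messages are assumptions for illustration; real phidata memory types may differ:

```python
from types import SimpleNamespace
from typing import Any

def chat_history(agent: Any) -> str:
    # Render recent messages as "role: content" lines. Assumes the agent's
    # memory exposes dict messages; real phidata types may differ.
    messages = agent.memory.chat_history[-6:]  # mirrors the num_history_messages default
    return "\n".join(f"{m['role']}: {m['content']}" for m in messages)

# Stub agent for illustration only.
stub = SimpleNamespace(memory=SimpleNamespace(chat_history=[
    {"role": "user", "content": "hi"},
    {"role": "assistant", "content": "hello"},
]))
print(chat_history(stub))  # → user: hi\nassistant: hello
```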

output_model
Union[str, List, Type[BaseModel]]

Provide an output model for the responses

parse_output
bool
default: "True"

If True, the output is converted into the output_model (pydantic model or json dict)

output
Any

Final LLM response i.e. the final output of this agent

save_output_to_file
str

Save the output to a file

task_data
Dict[str, Any]

Metadata associated with the agent tasks

team
List['Agent']

Agent team.

role
str

When the agent is part of a team, this is the role of the agent in the team

add_delegation_instructions
bool
default: "True"

Add instructions for delegating tasks to other agents

debug_mode
bool
default: "False"

If True, show debug logs

monitoring
bool
default: "False"

If True, logs Agent runs on phidata.com