Example

python_agent.py
```python
from phi.agent.python import PythonAgent
from phi.file.local.csv import CsvFile

python_agent = PythonAgent(
    files=[
        CsvFile(
            path="https://phidata-public.s3.amazonaws.com/demo_data/IMDB-Movie-Data.csv",
            description="Contains information about movies from IMDB.",
        )
    ],
    pip_install=True,
    show_function_calls=True,
)

python_agent.print_response("What is the average rating of movies?")
```
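
Additional queries can be sent to the same agent instance. Because the agent writes and runs Python, charting requests work as well, using one of its default charting libraries (plotly, matplotlib, seaborn). The query below is a hypothetical example:

```python
# Hypothetical follow-up query: the agent generates and runs charting code.
python_agent.print_response("Plot a histogram of movie ratings.")
```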

PythonAgent Params

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| name | str | "PythonAgent" | Name of the PythonAgent. |
| files | List[File] | None | List of Files available for the PythonAgent. |
| file_information | str | None | Provide information about Files as a string. |
| charting_libraries | List[str] | ['plotly', 'matplotlib', 'seaborn'] | List of charting libraries the PythonAgent can use. |
| followups | bool | False | If the PythonAgent is allowed to ask follow-up questions. |
| read_tool_call_history | bool | True | If the PythonAgent is allowed to read the tool call history. |
| base_dir | Path | . | Where to save files if needed. |
| save_and_run | bool | True | If the PythonAgent is allowed to save and run Python code. |
| pip_install | bool | False | If the PythonAgent is allowed to pip install libraries. Disabled by default for security reasons. |
| run_code | bool | False | If the PythonAgent is allowed to run Python code directly. Disabled by default for security reasons. |
| list_files | bool | False | If the PythonAgent is allowed to list files. |
| run_files | bool | True | If the PythonAgent is allowed to run files. |
| read_files | bool | False | If the PythonAgent is allowed to read files. |
| safe_globals | dict | None | Provide a dict of global variables for the PythonAgent. |
| safe_locals | dict | None | Provide a dict of local variables for the PythonAgent. |
| add_chat_history_to_messages | bool | True | If the chat history should be added to the messages. |
| num_history_messages | int | 6 | Number of history messages to add to the messages. |
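
These parameters are passed as keyword arguments when constructing the agent. The sketch below is illustrative only: it assumes each table entry maps directly to a constructor argument, and the local file path, base_dir, and file_information values are hypothetical.

```python
from pathlib import Path

from phi.agent.python import PythonAgent
from phi.file.local.csv import CsvFile

# Illustrative configuration; paths and descriptions below are hypothetical.
python_agent = PythonAgent(
    name="MoviesPythonAgent",
    files=[
        CsvFile(
            path="data/IMDB-Movie-Data.csv",  # hypothetical local copy of the dataset
            description="Contains information about movies from IMDB.",
        )
    ],
    file_information="Each file is a CSV with a header row.",
    charting_libraries=["matplotlib"],  # restrict charting to a single library
    base_dir=Path("tmp/python_agent"),  # where generated code and files are saved
    save_and_run=True,   # allow saving generated code to a file and running it (default)
    pip_install=False,   # keep pip installs disabled (default, safer)
    run_code=False,      # keep direct code execution disabled (default, safer)
    list_files=True,     # allow listing files in base_dir
    read_files=True,     # allow reading files in base_dir
)
```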

Agent Reference

PythonAgent is a subclass of the Agent class and has access to the same parameters.

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| model | Optional[Model] | None | Model to use for this Agent (alias: "provider") |
| name | Optional[str] | None | Agent name |
| agent_id | Optional[str] | None | Agent UUID (autogenerated if not set) |
| agent_data | Optional[Dict[str, Any]] | None | Metadata associated with this agent |
| introduction | Optional[str] | None | Agent introduction. This is added to the chat history when a run is started. |
| user_id | Optional[str] | None | ID of the user interacting with this agent |
| user_data | Optional[Dict[str, Any]] | None | Metadata associated with the user interacting with this agent |
| session_id | Optional[str] | None | Session UUID (autogenerated if not set) |
| session_name | Optional[str] | None | Session name |
| session_data | Optional[Dict[str, Any]] | None | Metadata associated with this session |
| memory | AgentMemory | AgentMemory() | Agent Memory |
| add_history_to_messages | bool | False | Add chat history to the messages sent to the Model (alias: "add_chat_history_to_messages") |
| num_history_responses | int | 3 | Number of historical responses to add to the messages. |
| knowledge | Optional[AgentKnowledge] | None | Agent Knowledge (alias: "knowledge_base") |
| add_context | bool | False | Enable RAG by adding context from AgentKnowledge to the user prompt. |
| retriever | Optional[Callable[..., Optional[list[dict]]]] | None | Function to get context to add to the user_message |
| context_format | Literal["json", "yaml"] | "json" | Format of the context |
| add_context_instructions | bool | False | If True, add instructions for using the context to the system prompt |
| storage | Optional[AgentStorage] | None | Agent Storage |
| tools | Optional[List[Union[Tool, Toolkit, Callable, Dict, Function]]] | None | A list of tools provided to the Model. |
| show_tool_calls | bool | False | Show tool calls in the Agent response. |
| tool_call_limit | Optional[int] | None | Maximum number of tool calls allowed. |
| tool_choice | Optional[Union[str, Dict[str, Any]]] | None | Controls which (if any) tool is called by the model. |
| reasoning | bool | False | Enable reasoning by working through the problem step by step. |
| reasoning_model | Optional[Model] | None | Model to use for reasoning |
| reasoning_agent | Optional[Agent] | None | Agent to use for reasoning |
| reasoning_min_steps | int | 1 | Minimum number of reasoning steps |
| reasoning_max_steps | int | 10 | Maximum number of reasoning steps |
| read_chat_history | bool | False | Add a tool that allows the Model to read the chat history. |
| search_knowledge | bool | True | Add a tool that allows the Model to search the knowledge base (aka Agentic RAG) |
| update_knowledge | bool | False | Add a tool that allows the Model to update the knowledge base. |
| read_tool_call_history | bool | False | Add a tool that allows the Model to get the tool call history. |
| add_messages | Optional[List[Union[Dict, Message]]] | None | A list of extra messages added after the system message and before the user message. |
| system_prompt | Optional[str] | None | System prompt: provide the system prompt as a string |
| system_prompt_template | Optional[PromptTemplate] | None | System prompt template: provide the system prompt as a PromptTemplate |
| use_default_system_message | bool | True | If True, build a default system message using agent settings and use that |
| system_message_role | str | "system" | Role for the system message |
| description | Optional[str] | None | A description of the Agent that is added to the start of the system message. |
| task | Optional[str] | None | The task the agent should achieve. |
| instructions | Optional[List[str]] | None | List of instructions for the agent. |
| guidelines | Optional[List[str]] | None | List of guidelines for the agent. |
| expected_output | Optional[str] | None | Provide the expected output from the Agent. |
| additional_context | Optional[str] | None | Additional context added to the end of the system message. |
| prevent_hallucinations | bool | False | If True, add instructions to return "I don't know" when the agent does not know the answer. |
| prevent_prompt_leakage | bool | False | If True, add instructions to prevent prompt leakage |
| limit_tool_access | bool | False | If True, add instructions for limiting tool access to the default system prompt if tools are provided |
| markdown | bool | False | If markdown=True, add instructions to format the output using markdown |
| add_name_to_instructions | bool | False | If True, add the agent name to the instructions |
| add_datetime_to_instructions | bool | False | If True, add the current datetime to the instructions to give the agent a sense of time |
| user_prompt | Optional[Union[List, Dict, str]] | None | User prompt: provide the user prompt as a string |
| user_prompt_template | Optional[PromptTemplate] | None | User prompt template: provide the user prompt as a PromptTemplate |
| use_default_user_message | bool | True | If True, build a default user prompt using references and chat history |
| user_message_role | str | "user" | Role for the user message |
| response_model | Optional[Type[BaseModel]] | None | Provide a response model to get the response as a Pydantic model (alias: "output_model") |
| parse_response | bool | True | If True, the response from the Model is converted into the response_model |
| structured_outputs | bool | False | Use the structured_outputs from the Model if available |
| save_response_to_file | Optional[str] | None | Save the response to a file |
| team | Optional[List["Agent"]] | None | An Agent can have a team of agents that it can transfer tasks to. |
| role | Optional[str] | None | When the agent is part of a team, this is the role of the agent in the team |
| add_transfer_instructions | bool | True | Add instructions for transferring tasks to team members |
| debug_mode | bool | False | debug_mode=True enables debug logs |
| monitoring | bool | False | monitoring=True logs Agent information to phidata.app for monitoring |
| telemetry | bool | True | telemetry=True logs minimal telemetry for analytics |
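
Because PythonAgent subclasses Agent, the inherited parameters above can be combined with the PythonAgent-specific ones. The following is a minimal sketch, assuming these parameters are accepted directly by the PythonAgent constructor; the model class import (phi.model.openai.OpenAIChat), the model id, and the final query are assumptions, so swap in whatever provider your installation uses.

```python
from phi.agent.python import PythonAgent
from phi.file.local.csv import CsvFile
from phi.model.openai import OpenAIChat  # assumed model class; use your own provider

python_agent = PythonAgent(
    model=OpenAIChat(id="gpt-4o"),  # inherited Agent parameter (alias: "provider")
    name="imdb-python-agent",
    description="Answers questions about the IMDB movies dataset.",
    instructions=["Prefer pandas for analysis.", "Explain your approach briefly."],
    markdown=True,         # format responses using markdown
    show_tool_calls=True,  # surface tool calls in the response
    debug_mode=True,       # verbose logs while developing
    files=[
        CsvFile(
            path="https://phidata-public.s3.amazonaws.com/demo_data/IMDB-Movie-Data.csv",
            description="Contains information about movies from IMDB.",
        )
    ],
)

python_agent.print_response("Which director appears most often in the dataset?")
```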