The Agent class provides an easy-to-use interface to language models.

Example

agent.py
from phi.agent import Agent

agent = Agent(description="You help people with their health and fitness goals.")

# -*- Print a response
agent.print_response('Share a quick healthy breakfast recipe.', markdown=True)

# -*- Get the response as a RunResponse object
response = agent.run('Share a quick healthy breakfast recipe.', stream=False)

# -*- Get the response as a stream
response = ""
for delta in agent.run('Share a quick healthy breakfast recipe.', stream=True):
    response += delta.content or ""
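When use_default_system_message is True, settings such as description, instructions, and markdown are folded into a default system message. A rough, hypothetical sketch of that assembly (illustrative only, not the phidata source):

```python
def build_system_message(description=None, instructions=None, markdown=False):
    """Hypothetical sketch: combine Agent settings into one system message.
    The real phidata logic is more involved; this only shows the idea."""
    parts = []
    if description:
        parts.append(description)
    lines = list(instructions or [])
    if markdown:
        # markdown=True adds a formatting instruction
        lines.append("Use markdown to format your answers.")
    if lines:
        parts.append("Instructions:\n" + "\n".join(f"- {line}" for line in lines))
    return "\n\n".join(parts)

msg = build_system_message(
    description="You help people with their health and fitness goals.",
    markdown=True,
)
```

The function name and exact wording of the injected instruction are assumptions for illustration; only the parameter names come from the table below.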

Agent Params

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `model` | `Optional[Model]` | `None` | Model to use for this Agent (alias: `provider`) |
| `name` | `Optional[str]` | `None` | Agent name |
| `agent_id` | `Optional[str]` | `None` | Agent UUID (autogenerated if not set) |
| `agent_data` | `Optional[Dict[str, Any]]` | `None` | Metadata associated with this agent |
| `introduction` | `Optional[str]` | `None` | Agent introduction; added to the chat history when a run is started |
| `user_id` | `Optional[str]` | `None` | ID of the user interacting with this agent |
| `user_data` | `Optional[Dict[str, Any]]` | `None` | Metadata associated with the user interacting with this agent |
| `session_id` | `Optional[str]` | `None` | Session UUID (autogenerated if not set) |
| `session_name` | `Optional[str]` | `None` | Session name |
| `session_data` | `Optional[Dict[str, Any]]` | `None` | Metadata associated with this session |
| `memory` | `AgentMemory` | `AgentMemory()` | Agent memory |
| `add_history_to_messages` | `bool` | `False` | Add chat history to the messages sent to the Model (alias: `add_chat_history_to_messages`) |
| `num_history_responses` | `int` | `3` | Number of historical responses to add to the messages |
| `knowledge` | `Optional[AgentKnowledge]` | `None` | Agent knowledge (alias: `knowledge_base`) |
| `add_context` | `bool` | `False` | Enable RAG by adding context from AgentKnowledge to the user prompt |
| `retriever` | `Optional[Callable[..., Optional[list[dict]]]]` | `None` | Function that returns context to add to the user message |
| `context_format` | `Literal["json", "yaml"]` | `"json"` | Format of the context |
| `add_context_instructions` | `bool` | `False` | If True, add instructions for using the context to the system prompt |
| `storage` | `Optional[AgentStorage]` | `None` | Agent storage |
| `tools` | `Optional[List[Union[Tool, Toolkit, Callable, Dict, Function]]]` | `None` | A list of tools provided to the Model |
| `show_tool_calls` | `bool` | `False` | Show tool calls in the Agent response |
| `tool_call_limit` | `Optional[int]` | `None` | Maximum number of tool calls allowed |
| `tool_choice` | `Optional[Union[str, Dict[str, Any]]]` | `None` | Controls which (if any) tool is called by the model |
| `reasoning` | `bool` | `False` | Enable reasoning by working through the problem step by step |
| `reasoning_model` | `Optional[Model]` | `None` | Model to use for reasoning |
| `reasoning_agent` | `Optional[Agent]` | `None` | Agent to use for reasoning |
| `reasoning_min_steps` | `int` | `1` | Minimum number of reasoning steps |
| `reasoning_max_steps` | `int` | `10` | Maximum number of reasoning steps |
| `read_chat_history` | `bool` | `False` | Add a tool that allows the Model to read the chat history |
| `search_knowledge` | `bool` | `True` | Add a tool that allows the Model to search the knowledge base (aka Agentic RAG) |
| `update_knowledge` | `bool` | `False` | Add a tool that allows the Model to update the knowledge base |
| `read_tool_call_history` | `bool` | `False` | Add a tool that allows the Model to get the tool call history |
| `add_messages` | `Optional[List[Union[Dict, Message]]]` | `None` | A list of extra messages added after the system message and before the user message |
| `system_prompt` | `Optional[str]` | `None` | System prompt provided as a string |
| `system_prompt_template` | `Optional[PromptTemplate]` | `None` | System prompt provided as a PromptTemplate |
| `use_default_system_message` | `bool` | `True` | If True, build a default system message from agent settings and use that |
| `system_message_role` | `str` | `"system"` | Role for the system message |
| `description` | `Optional[str]` | `None` | A description of the Agent that is added to the start of the system message |
| `task` | `Optional[str]` | `None` | The task the agent should achieve |
| `instructions` | `Optional[List[str]]` | `None` | List of instructions for the agent |
| `guidelines` | `Optional[List[str]]` | `None` | List of guidelines for the agent |
| `expected_output` | `Optional[str]` | `None` | The expected output from the Agent |
| `additional_context` | `Optional[str]` | `None` | Additional context added to the end of the system message |
| `prevent_hallucinations` | `bool` | `False` | If True, add instructions to return "I don't know" when the agent does not know the answer |
| `prevent_prompt_leakage` | `bool` | `False` | If True, add instructions to prevent prompt leakage |
| `limit_tool_access` | `bool` | `False` | If True, add instructions to the default system prompt for limiting tool access when tools are provided |
| `markdown` | `bool` | `False` | If True, add instructions to format the output using markdown |
| `add_name_to_instructions` | `bool` | `False` | If True, add the agent name to the instructions |
| `add_datetime_to_instructions` | `bool` | `False` | If True, add the current datetime to the instructions to give the agent a sense of time |
| `user_prompt` | `Optional[Union[List, Dict, str]]` | `None` | User prompt provided as a string, list, or dict |
| `user_prompt_template` | `Optional[PromptTemplate]` | `None` | User prompt provided as a PromptTemplate |
| `use_default_user_message` | `bool` | `True` | If True, build a default user message using references and chat history |
| `user_message_role` | `str` | `"user"` | Role for the user message |
| `response_model` | `Optional[Type[BaseModel]]` | `None` | Pydantic model to parse the response into (alias: `output_model`) |
| `parse_response` | `bool` | `True` | If True, the response from the Model is converted into the response_model |
| `structured_outputs` | `bool` | `False` | Use structured_outputs from the Model if available |
| `save_response_to_file` | `Optional[str]` | `None` | Save the response to a file at this path |
| `team` | `Optional[List["Agent"]]` | `None` | A team of agents that this Agent can transfer tasks to |
| `role` | `Optional[str]` | `None` | The role of this agent when it is part of a team |
| `add_transfer_instructions` | `bool` | `True` | Add instructions for transferring tasks to team members |
| `debug_mode` | `bool` | `False` | If True, enable debug logs |
| `monitoring` | `bool` | `False` | If True, log Agent information to phidata.app for monitoring |
| `telemetry` | `bool` | `True` | If True, log minimal telemetry for analytics |
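To make add_history_to_messages and num_history_responses concrete, here is a minimal, hypothetical sketch of how the last N exchanges might be spliced into the message list. It assumes history is a flat list of role/content dicts with alternating user and assistant turns; this is not the phidata implementation, only an illustration of the trimming behavior these parameters describe.

```python
def build_messages(system_message, history, user_message,
                   add_history_to_messages=False, num_history_responses=3):
    """Sketch: assemble the messages sent to the Model, optionally
    including the last `num_history_responses` user/assistant exchanges."""
    messages = [{"role": "system", "content": system_message}]
    if add_history_to_messages:
        # One exchange = one user message + one assistant response,
        # so keep the last 2 * num_history_responses entries.
        messages.extend(history[-num_history_responses * 2:])
    messages.append({"role": "user", "content": user_message})
    return messages
```

With the default add_history_to_messages=False, only the system and user messages are sent; flipping it on prepends the trimmed history between them.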