Agent
The `Agent` class provides an easy-to-use interface to language models.
Example

```python
from phi.agent import Agent

agent = Agent(description="You help people with their health and fitness goals.")

# -*- Print a response
agent.print_response('Share a quick healthy breakfast recipe.', markdown=True)

# -*- Get the response as a string
response = agent.run('Share a quick healthy breakfast recipe.', stream=False)

# -*- Get the response as a stream
response = ""
for delta in agent.run('Share a quick healthy breakfast recipe.'):
    response += delta
```
Agent Params
LLM to use for this Agent
Agent introduction. This is added to the chat history when a run is started.
Agent name
Metadata associated with this agent
Run UUID (autogenerated if not set)
Run name
Metadata associated with this run
ID of the user participating in this run
Metadata associated with the user participating in this run
Agent Memory
Add chat history to the messages sent to the LLM.
Add chat history to the prompt sent to the LLM.
Number of previous messages to add to prompt or messages sent to the LLM.
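As an illustrative sketch (not the library's internal code), a history window like the one this parameter describes could be applied by keeping only the most recent messages before they are sent to the LLM:

```python
# Illustrative sketch of a num_history_messages-style window: keep only
# the most recent N messages from the chat history.

def trim_history(messages, num_history_messages=6):
    """Return only the most recent num_history_messages messages."""
    return messages[-num_history_messages:]

history = [{"role": "user", "content": f"message {i}"} for i in range(10)]
recent = trim_history(history, num_history_messages=4)
# recent now holds messages 6 through 9
```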
Agent Knowledge Base
Enable RAG by adding references from the knowledge base to the prompt.
Agent Storage
AgentRun from the database: DO NOT SET MANUALLY
A list of tools provided to the LLM. Tools are functions the model may generate JSON inputs for. If you provide a dict, it is not called by the model.
Show tool calls in LLM messages.
Maximum number of tool calls allowed.
Controls which (if any) tool is called by the model.
- "none" means the model will not call a tool and instead generates a message.
- "auto" means the model can pick between generating a message or calling a tool.
- Specifying a particular function via
{
  "type": "function",
  "function": {"name": "my_function"}
}
forces the model to call that tool.
"none" is the default when no tools are present. "auto" is the default if tools are present.
If True, adds a tool that allows the LLM to get the chat history.
If True, adds a tool that allows the LLM to search the knowledge base.
If True, adds a tool that allows the LLM to update the knowledge base.
If True, adds a tool that allows the LLM to get the tool call history.
Allow the agent to use tools
List of additional messages added to the messages list after the system prompt. Use these for few-shot learning or to provide additional context to the LLM.
Provide the system prompt as a string
Provide the system prompt as a PromptTemplate
If True, build a default system prompt using instructions and extra_instructions
Agent description for the default system prompt
Agent task
List of instructions for the default system prompt
List of extra_instructions for the default system prompt. Use these when you want to use the default prompt but also add some extra instructions.
Expected output added to the system prompt
Add a string to the end of the default system prompt
If True, add instructions for using the knowledge base to the default system prompt if knowledge base is provided
If True, add instructions for letting the user know that the agent does not know the answer
If True, add instructions to prevent prompt injection attacks
If True, add instructions for limiting tool access to the default system prompt if tools are provided
If True, add the current datetime to the prompt to give the agent a sense of time. This allows relative times like "tomorrow" to be used in the prompt.
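As a minimal sketch of the idea (the function name is an illustration, not the library's API), prepending the current datetime lets the model resolve relative times:

```python
from datetime import datetime

# Illustrative: prepend a current-datetime line to a prompt so the model
# can resolve relative times like "tomorrow".
def add_datetime(prompt: str) -> str:
    now = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    return f"The current time is {now}.\n\n{prompt}"
```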
If markdown=True, formats the output using markdown
Provide the user prompt as a string. Note: this will ignore the input message provided to the run function.
Provide the user prompt as a PromptTemplate
If True, build a default user prompt using references and chat history
Function to build references for the default user_prompt. This function, if provided, is called when add_references_to_prompt is True.

Signature:
def references(agent: Agent, query: str) -> Optional[str]:
    ...
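A hypothetical references builder following this signature might look like the sketch below; a plain dict stands in for a real knowledge base, and the `agent` argument is accepted but unused:

```python
from typing import Optional

# Hypothetical references builder. FAKE_KB is a stand-in for a real
# knowledge base; `agent` would be a phi Agent but is unused here.
FAKE_KB = {
    "breakfast": "Oats with fruit make a quick, healthy breakfast.",
    "hydration": "Aim for roughly two litres of water a day.",
}

def references(agent, query: str) -> Optional[str]:
    hits = [text for topic, text in FAKE_KB.items() if topic in query.lower()]
    return "\n".join(hits) or None
```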
Format of the references
Function to build the chat_history for the default user_prompt. This function, if provided, is called when add_chat_history_to_prompt is True.

Signature:
def chat_history(agent: Agent) -> str:
    ...
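A hypothetical chat_history builder following this signature could be sketched as follows; a plain list of dicts stands in for the agent's memory:

```python
from types import SimpleNamespace

# Hypothetical chat_history builder; agent.memory is a stand-in attribute
# holding a plain list of message dicts.
def chat_history(agent) -> str:
    messages = getattr(agent, "memory", [])
    return "\n".join(f"{m['role']}: {m['content']}" for m in messages)

agent = SimpleNamespace(memory=[
    {"role": "user", "content": "Hi"},
    {"role": "assistant", "content": "Hello!"},
])
```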
Provide an output model for the responses
If True, the output is converted into the output_model (pydantic model or json dict)
Final LLM response, i.e. the final output of this agent
Save the output to a file
Metadata associated with the agent tasks
Agent team.
When the agent is part of a team, this is the role of the agent in the team
Add instructions for delegating tasks to other agents
If True, show debug logs
If True, logs Agent runs on phidata.com