The Assistant class provides an easy-to-use interface to language models.

Example

assistant.py
from phi.assistant import Assistant

assistant = Assistant(description="You help people with their health and fitness goals.")

# -*- Print a response
assistant.print_response('Share a quick healthy breakfast recipe.', markdown=True)

# -*- Get the response as a string
response = assistant.run('Share a quick healthy breakfast recipe.', stream=False)

# -*- Get the response as a stream
response = ""
for delta in assistant.run('Share a quick healthy breakfast recipe.'):
    response += delta

Assistant Params

llm
LLM

LLM to use for this Assistant

introduction
str

Assistant introduction. This is added to the chat history when a run is started.

name
str

Assistant name

assistant_data
Dict[str, Any]

Metadata associated with this assistant

run_id
str

Run UUID (autogenerated if not set)

run_name
str

Run name

run_data
Dict[str, Any]

Metadata associated with this run

user_id
str

ID of the user participating in this run

user_data
Dict[str, Any]

Metadata associated with the user participating in this run

memory
AssistantMemory
default: "AssistantMemory()"

Assistant Memory

add_chat_history_to_messages
bool
default: "False"

Add chat history to the messages sent to the LLM.

add_chat_history_to_prompt
bool
default: "False"

Add chat history to the prompt sent to the LLM.

num_history_messages
int
default: "6"

Number of previous messages to add to prompt or messages sent to the LLM.

knowledge_base
AssistantKnowledge

Assistant Knowledge Base

add_references_to_prompt
bool
default: "False"

Enable RAG by adding references from the knowledge base to the prompt.

storage
AssistantStorage

Assistant Storage

db_row
AssistantRun

AssistantRun from the database: DO NOT SET MANUALLY

tools
List[Union[Tool, ToolRegistry, Callable, Dict, Function]]

A list of tools provided to the LLM. Tools are functions the model may generate JSON inputs for. If you provide a dict, it is not called by the model.

use_tools
bool
default: "False"

Allow the assistant to use tools

show_tool_calls
bool
default: "False"

Show tool calls in LLM messages.

tool_call_limit
int

Maximum number of tool calls allowed.

tool_choice
Union[str, Dict[str, Any]]

Controls which (if any) tool is called by the model.

  • "none" means the model will not call a tool and instead generates a message.
  • "auto" means the model can pick between generating a message or calling a tool.
  • Specifying a particular function via
{
  "type": "function",
  "function": {"name": "my_function"}
}

forces the model to call that tool.

"none" is the default when no tools are present. "auto" is the default if tools are present.

update_knowledge_base
bool
default: "False"

If use_tools is True and update_knowledge_base is True, then a tool is added that allows the LLM to update the knowledge base.

read_tool_call_history
bool
default: "False"

If use_tools is True and read_tool_call_history is True, then a tool is added that allows the LLM to get the tool call history.

format_messages
bool
default: "True"

If True, phidata will add the system prompt, references, and chat history. If False, the input messages are sent to the LLM as is.

system_prompt
str

Provide the system prompt as a string

system_prompt_template
PromptTemplate

Provide the system prompt as a PromptTemplate

system_prompt_function
Callable[..., Optional[str]]

Provide the system prompt as a function. This function receives the Assistant object as an argument and should return the system prompt as a string.

Signature:

def system_prompt_function(assistant: Assistant) -> str:
    ...
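
For example, a custom system prompt builder might read fields off the Assistant it receives. This is a minimal sketch; the prompt wording is illustrative.

```python
# Sketch of a custom system prompt builder. It receives the Assistant
# instance and should return the full system prompt as a string.
def system_prompt_function(assistant) -> str:
    base = assistant.description or "You are a helpful assistant."
    return base + "\nKeep answers under 100 words."

# Passed at construction time (sketch):
#   assistant = Assistant(
#       description="You help people with their health and fitness goals.",
#       system_prompt_function=system_prompt_function,
#   )
```
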

build_default_system_prompt
bool
default: "True"

If True, build a default system prompt using instructions and extra_instructions

description
str

Assistant description for the default system prompt

instructions
List[str]

List of instructions for the default system prompt

extra_instructions
List[str]

List of extra_instructions for the default system prompt. Use these when you want to use the default prompt but also add some extra instructions.

add_to_system_prompt
str

Add a string to the end of the default system prompt

add_knowledge_base_instructions
bool
default: "True"

If True, add instructions for using the knowledge base to the default system prompt if knowledge base is provided

prevent_hallucinations
bool
default: "False"

If True, add instructions telling the assistant to let the user know when it does not know the answer

prevent_prompt_injection
bool
default: "False"

If True, add instructions to prevent prompt injection attacks

limit_tool_access
bool
default: "False"

If True, add instructions for limiting tool access to the default system prompt if tools are provided

add_datetime_to_instructions
bool
default: "False"

If True, add the current datetime to the prompt to give the assistant a sense of time. This allows for relative times like "tomorrow" to be used in the prompt.

markdown
bool
default: "False"

If True, format the output using markdown

user_prompt
Union[List[Dict], str]

Provide the user prompt as a string or a list of message dicts. Note: this will ignore the input message provided to the run function.

user_prompt_template
PromptTemplate

Provide the user prompt as a PromptTemplate

user_prompt_function
Callable[..., str]

Provide the user prompt as a function. This function receives the Assistant object and the input message as arguments and should return the user_prompt as a Union[List[Dict], str]. If add_references_to_prompt is True, references are also provided as an argument. If add_chat_history_to_prompt is True, chat_history is also provided as an argument.

Signature:

def custom_user_prompt_function(
    assistant: Assistant,
    message: Union[List[Dict], str],
    references: Optional[str] = None,
    chat_history: Optional[str] = None,
) -> Union[List[Dict], str]:
    ...
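
For example, a custom user prompt builder might prepend retrieved references and chat history to the user's message. This is a minimal sketch; the prompt wording is illustrative.

```python
# Sketch of a custom user prompt builder. references and chat_history are
# only populated when the corresponding add_*_to_prompt flags are True.
def custom_user_prompt_function(assistant, message, references=None, chat_history=None):
    if isinstance(message, list):
        # Pre-built message lists pass through unchanged
        return message
    parts = []
    if references:
        parts.append(f"Use the following information if it helps:\n{references}")
    if chat_history:
        parts.append(f"Chat history:\n{chat_history}")
    parts.append(message)
    return "\n\n".join(parts)
```
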

build_default_user_prompt
bool
default: "True"

If True, build a default user prompt using references and chat history

references_function
Callable[..., Optional[str]]

Function to build references for the default user_prompt. This function, if provided, is called when add_references_to_prompt is True

Signature:

def references(assistant: Assistant, query: str) -> Optional[str]:
    ...
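
For example, a custom reference builder might run a search and return the results in the configured references_format. The search_documents helper below is hypothetical, standing in for your own knowledge base or vector search.

```python
import json

# Hypothetical retrieval helper; replace with your knowledge base / vector store.
def search_documents(query: str):
    return [{"content": f"Notes about {query}"}]

# Sketch of a custom reference builder for the default user prompt.
def references(assistant, query: str):
    results = search_documents(query)
    if not results:
        return None
    # Matches references_format="json" (the default)
    return json.dumps(results, indent=2)
```
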

references_format
Literal['json', 'yaml']
default: "json"

Format of the references

chat_history_function
Callable[..., Optional[str]]

Function to build the chat_history for the default user_prompt. This function, if provided, is called when add_chat_history_to_prompt is True

Signature:

def chat_history(assistant: Assistant) -> str:
    ...
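
As a sketch, a custom chat history builder might format prior messages into a transcript string. The recent_messages attribute read here is an assumption for illustration; a real implementation would read from the Assistant's memory.

```python
# Format a list of message dicts into a plain transcript string.
def format_chat_history(messages) -> str:
    return "\n".join(f"{m['role']}: {m['content']}" for m in messages)

# Sketch of a custom chat history builder for the default user prompt.
def chat_history(assistant) -> str:
    # Assumption: a real implementation would read from assistant.memory
    return format_chat_history(getattr(assistant, "recent_messages", []))
```
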

output_model
Union[str, List, Type[BaseModel]]

Provide an output model for the responses

parse_output
bool
default: "True"

If True, the output is converted into the output_model (pydantic model or json dict)
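
As a sketch, an output model is typically a pydantic BaseModel (pydantic is a phidata dependency); the Recipe model and its fields below are illustrative, not part of the phidata API.

```python
from typing import List

from pydantic import BaseModel, Field

# Illustrative output model; the field names are our own choice.
class Recipe(BaseModel):
    name: str = Field(..., description="Name of the recipe")
    ingredients: List[str] = Field(..., description="Ingredients required")
    steps: List[str] = Field(..., description="Preparation steps")

# With parse_output=True (the default), run() returns a Recipe instance (sketch):
#   assistant = Assistant(output_model=Recipe)
#   recipe = assistant.run("Share a quick healthy breakfast recipe.")
```
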

output
Any

Final LLM response i.e. the final output of this assistant

tasks
List[Task]

Tasks allow the Assistant to generate a response using a list of tasks. If tasks is None or empty, a single default LLM task is created for this assistant.

task_data
Dict[str, Any]

Metadata associated with the assistant tasks

role
str

Role of the assistant

debug_mode
bool
default: "False"

If True, show debug logs

monitoring
bool
default: "False"

If True, logs Assistant runs on phidata.com