Assistant
The Assistant class provides an easy-to-use interface to language models.
Example
from phi.assistant import Assistant
assistant = Assistant(description="You help people with their health and fitness goals.")
# -*- Print a response
assistant.print_response('Share a quick healthy breakfast recipe.', markdown=True)
# -*- Get the response as a string
response = assistant.run('Share a quick healthy breakfast recipe.', stream=False)
# -*- Get the response as a stream
response = ""
for delta in assistant.run('Share a quick healthy breakfast recipe.'):
    response += delta
Assistant Params
LLM to use for this Assistant
Assistant introduction. This is added to the chat history when a run is started.
Assistant name
Metadata associated with this assistant
Run UUID (autogenerated if not set)
Run name
Metadata associated with this run
ID of the user participating in this run
Metadata associated with the user participating in this run
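For illustration, a minimal sketch of setting run and user details. The parameter names name, run_name, user_id, and user_data below are assumptions based on the descriptions above and should be checked against the phi.assistant source:
from phi.assistant import Assistant

assistant = Assistant(
    name="health_coach",
    run_name="morning_checkin",        # assumed name for the run label
    user_id="user_123",                # hypothetical user identifier
    user_data={"plan": "premium"},     # assumed metadata dict for the user
)
assistant.print_response('Share a quick healthy breakfast recipe.', markdown=True)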
Assistant Memory
Add chat history to the messages sent to the LLM.
Add chat history to the prompt sent to the LLM.
Number of previous messages to add to prompt or messages sent to the LLM.
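A minimal sketch of enabling chat history, assuming the parameters are named add_chat_history_to_messages and num_history_messages (verify against the Assistant source):
from phi.assistant import Assistant

assistant = Assistant(
    description="You help people with their health and fitness goals.",
    add_chat_history_to_messages=True,  # assumed name: include prior messages in the LLM request
    num_history_messages=6,             # assumed name: how many previous messages to include
)
assistant.print_response('Suggest a quick breakfast.', markdown=True)
assistant.print_response('Now make it vegetarian.', markdown=True)  # can refer to the earlier turn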
Assistant Knowledge Base
Enable RAG by adding references from the knowledge base to the prompt.
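As a sketch, RAG is enabled by attaching a knowledge base and setting add_references_to_prompt. The PDFUrlKnowledgeBase and PgVector classes, the document URL, and the database URL below are illustrative and assume a local pgvector instance:
from phi.assistant import Assistant
from phi.knowledge.pdf import PDFUrlKnowledgeBase   # illustrative knowledge base
from phi.vectordb.pgvector import PgVector          # illustrative vector store

knowledge_base = PDFUrlKnowledgeBase(
    urls=["https://example.com/recipes.pdf"],        # hypothetical document
    vector_db=PgVector(collection="recipes", db_url="postgresql+psycopg://ai:ai@localhost:5532/ai"),
)
knowledge_base.load(recreate=False)

assistant = Assistant(
    knowledge_base=knowledge_base,
    add_references_to_prompt=True,  # inject retrieved references into the prompt
)
assistant.print_response('How do I make a berry smoothie?', markdown=True)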
Assistant Storage
AssistantRun from the database: DO NOT SET MANUALLY
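A sketch of persisting runs with storage. PgAssistantStorage, the table name, and the connection string are illustrative and assume a Postgres database is available:
from phi.assistant import Assistant
from phi.storage.assistant.postgres import PgAssistantStorage  # illustrative storage backend

storage = PgAssistantStorage(
    table_name="assistant_runs",
    db_url="postgresql+psycopg://ai:ai@localhost:5532/ai",  # hypothetical connection string
)

assistant = Assistant(storage=storage)
assistant.print_response('Share a quick healthy breakfast recipe.', markdown=True)
# The run (messages and metadata) is saved to the assistant_runs table and can be resumed later.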
A list of tools provided to the LLM. Tools are functions the model may generate JSON inputs for. If you provide a dict, it is not called by the model.
Allow the assistant to use tools
Show tool calls in LLM messages.
Maximum number of tool calls allowed.
Controls which (if any) tool is called by the model.
- "none" means the model will not call a tool and instead generates a message.
- "auto" means the model can pick between generating a message or calling a tool.
- Specifying a particular function via
{
  "type": "function",
  "function": {"name": "my_function"}
}
forces the model to call that tool.
"none" is the default when no tools are present. "auto" is the default if tools are present.
If use_tools is True and update_knowledge_base is True, then a tool is added that allows the LLM to update the knowledge base.
If use_tools is True and read_tool_call_history is True, then a tool is added that allows the LLM to get the tool call history.
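For illustration, a plain Python function can be passed as a tool; the get_weather function below is a hypothetical stub, and show_tool_calls / tool_choice follow the descriptions above:
import json
from phi.assistant import Assistant

def get_weather(city: str) -> str:
    """Return the current weather for a city (hypothetical stub)."""
    return json.dumps({"city": city, "forecast": "sunny", "temp_c": 22})

assistant = Assistant(
    tools=[get_weather],   # the LLM may generate JSON arguments for this function
    show_tool_calls=True,  # surface tool calls in the LLM messages
    # tool_choice="auto",  # default when tools are present
)
assistant.print_response('What is the weather in Paris?', markdown=True)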
If True, phidata will add the system prompt, references, and chat history. If False, the input messages are sent to the LLM as is.
Provide the system prompt as a string
Provide the system prompt as a PromptTemplate
Provide the system prompt as a function. This function is passed the Assistant object as an argument and should return the system_prompt as a string.
Signature:
def system_prompt_function(assistant: Assistant) -> str:
...
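A sketch of a custom system prompt function, assuming the parameter is named system_prompt_function:
from phi.assistant import Assistant

def system_prompt_function(assistant: Assistant) -> str:
    # Build the system prompt dynamically, e.g. from the assistant's name
    return f"You are {assistant.name or 'an assistant'}. Answer concisely and cite sources."

assistant = Assistant(
    name="health_coach",
    system_prompt_function=system_prompt_function,  # assumed parameter name
)
assistant.print_response('Share a quick healthy breakfast recipe.', markdown=True)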
If True, build a default system prompt using instructions and extra_instructions
Assistant description for the default system prompt
List of instructions for the default system prompt
List of extra_instructions for the default system prompt. Use these when you want to use the default prompt but also add some extra instructions.
Add a string to the end of the default system prompt
If True, add instructions for using the knowledge base to the default system prompt if knowledge base is provided
If True, add instructions for letting the user know that the assistant does not know the answer
If True, add instructions to prevent prompt injection attacks
If True, add instructions for limiting tool access to the default system prompt if tools are provided
If True, add the current datetime to the prompt to give the assistant a sense of time. This allows for relative times like "tomorrow" to be used in the prompt.
If markdown=True, formats the output using markdown
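A sketch of customizing the default system prompt with description, instructions, and extra_instructions; the add_datetime_to_instructions name is an assumption for the datetime option described above:
from phi.assistant import Assistant

assistant = Assistant(
    description="You help people with their health and fitness goals.",
    instructions=[
        "Keep answers under 100 words.",
        "Prefer whole foods over supplements.",
    ],
    extra_instructions=["Always include an estimated prep time."],
    add_datetime_to_instructions=True,  # assumed parameter name
    markdown=True,
)
assistant.print_response('Share a quick healthy breakfast recipe.')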
Provide the user prompt as a string. Note: this will ignore the input message provided to the run function.
Provide the user prompt as a PromptTemplate.
Provide the user prompt as a function. This function is passed the Assistant object and the input message as arguments and should return the user_prompt as a Union[List[Dict], str].
If add_references_to_prompt is True, then references are also provided as an argument. If add_chat_history_to_prompt is True, then chat_history is also provided as an argument.
Signature:
def custom_user_prompt_function(
assistant: Assistant,
message: Union[List[Dict], str],
references: Optional[str] = None,
chat_history: Optional[str] = None,
) -> Union[List[Dict], str]:
...
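A sketch of a custom user prompt function wired into the Assistant, assuming the parameter is named user_prompt_function:
from typing import Dict, List, Optional, Union
from phi.assistant import Assistant

def custom_user_prompt_function(
    assistant: Assistant,
    message: Union[List[Dict], str],
    references: Optional[str] = None,
    chat_history: Optional[str] = None,
) -> Union[List[Dict], str]:
    # Wrap the user's message with any retrieved references and prior chat history
    prompt = f"Question: {message}"
    if references:
        prompt += f"\n\nUse this information:\n{references}"
    if chat_history:
        prompt += f"\n\nConversation so far:\n{chat_history}"
    return prompt

assistant = Assistant(user_prompt_function=custom_user_prompt_function)  # assumed parameter name
assistant.print_response('Share a quick healthy breakfast recipe.', markdown=True)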
If True, build a default user prompt using references and chat history
Function to build references for the default user_prompt. This function, if provided, is called when add_references_to_prompt is True.
Signature:
def references(assistant: Assistant, query: str) -> Optional[str]:
...
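For illustration, a custom references builder, assuming the parameter is named references_function and that the attached knowledge base exposes a search method returning documents with a content attribute:
from typing import Optional
from phi.assistant import Assistant

def references(assistant: Assistant, query: str) -> Optional[str]:
    # Look up relevant documents in the knowledge base (search API assumed) and
    # return them as a single string, or None to skip references entirely.
    if assistant.knowledge_base is None:
        return None
    docs = assistant.knowledge_base.search(query=query, num_documents=3)
    return "\n---\n".join(doc.content for doc in docs) if docs else None

assistant = Assistant(
    references_function=references,  # assumed parameter name
    add_references_to_prompt=True,
)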
Format of the references
Function to build the chat_history for the default user_prompt. This function, if provided, is called when add_chat_history_to_prompt is True.
Signature:
def chat_history(assistant: Assistant) -> str:
...
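Similarly, a sketch of a custom chat history builder, assuming the parameter is named chat_history_function and that assistant.memory exposes the stored messages:
from phi.assistant import Assistant

def chat_history(assistant: Assistant) -> str:
    # Render the last few stored messages as plain text (memory API assumed)
    messages = assistant.memory.chat_history[-6:] if assistant.memory else []
    return "\n".join(f"{m.role}: {m.content}" for m in messages)

assistant = Assistant(
    chat_history_function=chat_history,  # assumed parameter name
    add_chat_history_to_prompt=True,
)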
Provide an output model for the responses
If True, the output is converted into the output_model (pydantic model or json dict)
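A sketch of structured output with a Pydantic model passed as output_model, assuming run returns the parsed model when output_model is set:
from typing import List
from pydantic import BaseModel
from phi.assistant import Assistant

class Recipe(BaseModel):
    name: str
    ingredients: List[str]
    prep_time_minutes: int

assistant = Assistant(
    description="You help people with their health and fitness goals.",
    output_model=Recipe,  # response is converted into this model
)
recipe = assistant.run('Share a quick healthy breakfast recipe.', stream=False)
print(recipe.name, recipe.prep_time_minutes)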
Final LLM response, i.e. the final output of this assistant
Tasks allow the Assistant to generate a response using a list of tasks. If tasks is None or empty, a single default LLM task is created for this assistant.
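A heavily hedged sketch of a multi-task assistant, assuming an LLMTask class is available under phi.task.llm and accepts a description (check the installed version before relying on this):
from phi.assistant import Assistant
from phi.task.llm import LLMTask  # assumed import path for LLM tasks

recipe_task = LLMTask(description="Draft a quick healthy breakfast recipe.")
review_task = LLMTask(description="Review the recipe and add an estimated prep time.")

assistant = Assistant(tasks=[recipe_task, review_task])
assistant.print_response('Share a quick healthy breakfast recipe.', markdown=True)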
Metadata associated with the assistant tasks
Role of the assistant
If True, show debug logs
If True, logs Assistant runs on phidata.com