PythonAssistant
Example
from phi.assistant.python import PythonAssistant
from phi.file.local.csv import CsvFile

python_assistant = PythonAssistant(
    files=[
        CsvFile(
            path="https://phidata-public.s3.amazonaws.com/demo_data/IMDB-Movie-Data.csv",
            description="Contains information about movies from IMDB.",
        )
    ],
    pip_install=True,
    show_function_calls=True,
)

python_assistant.print_response("What is the average rating of movies?")
PythonAssistant Params
Name of the PythonAssistant.
List of Files available for the PythonAssistant.
Provide information about Files as a string.
List of charting libraries the PythonAssistant can use.
If the PythonAssistant is allowed to ask follow-up questions.
If the PythonAssistant is allowed to read the tool call history.
Where to save files if needed.
If the PythonAssistant is allowed to save and run python code.
If the PythonAssistant is allowed to pip install libraries. Disabled by default for security reasons.
If the PythonAssistant is allowed to run python code directly. Disabled by default for security reasons.
If the PythonAssistant is allowed to list files.
If the PythonAssistant is allowed to run files.
If the PythonAssistant is allowed to read files.
Provide a list of global variables for the PythonAssistant.
Provide a list of local variables for the PythonAssistant.
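
For example, the execution and security related params can be set explicitly when constructing the assistant. This is a minimal sketch; the keyword names used here (base_dir, save_and_run, run_code, pip_install, charting_libraries) are assumed from the class signature and should be checked against your installed version.

from pathlib import Path

from phi.assistant.python import PythonAssistant

python_assistant = PythonAssistant(
    name="movie_analyst",
    base_dir=Path("tmp/python"),  # directory where generated files are saved
    save_and_run=True,            # allow saving and running python code
    run_code=False,               # keep direct code execution disabled
    pip_install=False,            # keep pip installs disabled
    charting_libraries=["plotly", "matplotlib"],
    show_function_calls=True,
)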
Assistant Reference
PythonAssistant is a subclass of the Assistant class and has access to the same params.
LLM to use for this Assistant
Assistant introduction. This is added to the chat history when a run is started.
Assistant name
Metadata associated with this assistant
Run UUID (autogenerated if not set)
Run name
Metadata associated with this run
ID of the user participating in this run
Metadata associated with the user participating in this run
Assistant Memory
Add chat history to the messages sent to the LLM.
Add chat history to the prompt sent to the LLM.
Number of previous messages to add to prompt or messages sent to the LLM.
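
A minimal sketch of how the memory params fit together (the keyword names add_chat_history_to_messages and num_history_messages are assumed from the Assistant class; verify against your version):

from phi.assistant import Assistant

assistant = Assistant(
    add_chat_history_to_messages=True,  # send previous messages to the LLM on each run
    num_history_messages=6,             # cap how much history is included
)
assistant.print_response("What did I ask about earlier?")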
Assistant Knowledge Base
Enable RAG by adding references from the knowledge base to the prompt.
Assistant Storage
AssistantRun from the database: DO NOT SET MANUALLY
A list of tools provided to the LLM. Tools are functions the model may generate JSON inputs for. If you provide a dict, it is not called by the model.
Show tool calls in LLM messages.
Maximum number of tool calls allowed.
Controls which (if any) tool is called by the model.
- "none" means the model will not call a tool and instead generates a message.
- "auto" means the model can pick between generating a message or calling a tool.
- Specifying a particular function via
  {
    "type": "function",
    "function": {"name": "my_function"}
  }
forces the model to call that tool.
"none" is the default when no tools are present. "auto" is the default if tools are present.
If True, adds a tool that allows the LLM to get the chat history.
If True, adds a tool that allows the LLM to search the knowledge base.
If True, adds a tool that allows the LLM to update the knowledge base.
If True, adds a tool that allows the LLM to get the tool call history.
Allow the assistant to use tools
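
A hedged sketch of the tool params, passing a plain Python function as a tool (the tools, show_tool_calls and tool_choice keywords are assumed from the Assistant class):

import json

from phi.assistant import Assistant

def get_weather(city: str) -> str:
    """Return dummy weather data for a city (stand-in for a real API call)."""
    return json.dumps({"city": city, "temperature_c": 21})

assistant = Assistant(
    tools=[get_weather],   # plain functions can be passed as tools
    show_tool_calls=True,  # show tool calls in the LLM messages
    tool_choice="auto",    # let the model decide whether to call the tool
)
assistant.print_response("What is the weather in Paris?")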
List of additional messages added to the messages list after the system prompt. Use these for few-shot learning or to provide additional context to the LLM.
Provide the system prompt as a string
Provide the system prompt as a PromptTemplate
If True, build a default system prompt using instructions and extra_instructions
Assistant description for the default system prompt
Assistant task
List of instructions for the default system prompt
List of extra_instructions for the default system prompt. Use these when you want to use the default prompt but also add some extra instructions.
Expected output added to the system prompt
Add a string to the end of the default system prompt
If True, add instructions for using the knowledge base to the default system prompt if knowledge base is provided
If True, add instructions for the assistant to let the user know when it does not know the answer
If True, add instructions to prevent prompt injection attacks
If True, add instructions for limiting tool access to the default system prompt if tools are provided
If True, add the current datetime to the prompt to give the assistant a sense of time. This allows relative times like "tomorrow" to be used in the prompt.
If True, format the output using markdown
Provide the user prompt as a string. Note: this will ignore the input message provided to the run function
Provide the user prompt as a PromptTemplate
If True, build a default user prompt using references and chat history
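
A minimal sketch of how the prompt params combine (the description, instructions, extra_instructions, add_datetime_to_instructions and markdown keywords are assumed from the Assistant class):

from phi.assistant import Assistant

assistant = Assistant(
    description="You are a concise data analyst.",
    instructions=["Answer in at most three sentences."],
    extra_instructions=["Always mention the data source."],
    add_datetime_to_instructions=True,  # lets relative dates like "tomorrow" resolve
    markdown=True,
)
assistant.print_response("Summarize the latest sales figures.")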
Function to build references for the default user_prompt. This function, if provided, is called when add_references_to_prompt is True.
Signature:
def references(assistant: Assistant, query: str) -> Optional[str]:
...
Format of the references
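
For example, a custom references builder matching the signature above could look like the sketch below. The references_function keyword and the knowledge_base.search call are assumptions; check the Assistant class for the exact names.

from typing import Optional

from phi.assistant import Assistant

def references(assistant: Assistant, query: str) -> Optional[str]:
    # Search the attached knowledge base and join the matching documents
    if assistant.knowledge_base is None:
        return None
    docs = assistant.knowledge_base.search(query=query)
    return "\n---\n".join(doc.content for doc in docs) or None

assistant = Assistant(
    references_function=references,
    add_references_to_prompt=True,
)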
Function to build the chat_history for the default user_prompt. This function, if provided, is called when add_chat_history_to_prompt is True.
Signature:
def chat_history(assistant: Assistant) -> str:
...
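
A sketch of a custom chat_history builder matching the signature above. The chat_history_function keyword and the memory.chat_history attribute are assumptions; verify against the Assistant class.

from phi.assistant import Assistant

def chat_history(assistant: Assistant) -> str:
    # Format the most recent messages from memory as plain text
    messages = assistant.memory.chat_history[-6:]
    return "\n".join(f"{m.role}: {m.content}" for m in messages)

assistant = Assistant(
    chat_history_function=chat_history,
    add_chat_history_to_prompt=True,
)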
Provide an output model for the responses
If True, the output is converted into the output_model (pydantic model or json dict)
Final LLM response, i.e. the final output of this assistant
Save the output to a file
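
A hedged sketch of structured output using a pydantic model (the output_model keyword and the run() return behavior are assumptions; verify against your version):

from pydantic import BaseModel

from phi.assistant import Assistant

class MovieScript(BaseModel):
    title: str
    genre: str
    plot: str

assistant = Assistant(
    output_model=MovieScript,  # responses are parsed into this model
)
movie = assistant.run("Write a movie script about a robot chef.")
print(movie)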
Metadata associated with the assistant tasks
Assistant team.
When the assistant is part of a team, this is the role of the assistant in the team
Add instructions for delegating tasks to other assistants
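
A sketch of the team params (the name, role and team keywords are assumed from the Assistant class):

from phi.assistant import Assistant

researcher = Assistant(
    name="Researcher",
    role="Finds background information for the team",
)
writer = Assistant(
    name="Writer",
    role="Writes the final answer",
)
editor = Assistant(
    name="Editor",
    team=[researcher, writer],  # the lead assistant delegates tasks to its team
)
editor.print_response("Write a short report on electric vehicles.")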
If True, show debug logs
If True, logs Assistant runs on phidata.com