Example

python_assistant.py
from phi.assistant.python import PythonAssistant
from phi.file.local.csv import CsvFile

python_assistant = PythonAssistant(
    files=[
        CsvFile(
            path="https://phidata-public.s3.amazonaws.com/demo_data/IMDB-Movie-Data.csv",
            description="Contains information about movies from IMDB.",
        )
    ],
    pip_install=True,
    show_function_calls=True,
)

python_assistant.print_response("What is the average rating of movies?")

PythonAssistant Params

name
str
default: "PythonAssistant"

Name of the PythonAssistant.

files
List[File]
default: "None"

List of Files available for the PythonAssistant.

file_information
str
default: "None"

Provide information about Files as a string.

charting_libraries
List[str]
default: "['plotly', 'matplotlib', 'seaborn']"

List of charting libraries the PythonAssistant can use.
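
For instance, a minimal sketch that describes the available files as a string and restricts charting to matplotlib; it reuses the demo CSV from the example above and the documented files, file_information and charting_libraries params:

from phi.assistant.python import PythonAssistant
from phi.file.local.csv import CsvFile

python_assistant = PythonAssistant(
    files=[
        CsvFile(
            path="https://phidata-public.s3.amazonaws.com/demo_data/IMDB-Movie-Data.csv",
            description="Contains information about movies from IMDB.",
        )
    ],
    # Plain-text summary of the files, provided up front
    file_information="One CSV file with IMDB movie data.",
    # Only allow matplotlib for charts
    charting_libraries=["matplotlib"],
)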

followups
bool
default: "False"

If the PythonAssistant is allowed to ask follow-up questions.

read_tool_call_history
bool
default: "True"

If the PythonAssistant is allowed to read the tool call history.

base_dir
Path
default: "."

Where to save files if needed.

save_and_run
bool
default: "True"

If the PythonAssistant is allowed to save and run python code.

pip_install
bool
default: "False"

If the PythonAssistant is allowed to pip install libraries. Disabled by default for security reasons.

run_code
bool
default: "False"

If the PythonAssistant is allowed to run python code directly. Disabled by default for security reasons.
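
As a hedged sketch, the flags above can be combined to allow pip installs while keeping direct code execution disabled; the base_dir path is an arbitrary example:

from pathlib import Path

from phi.assistant.python import PythonAssistant

python_assistant = PythonAssistant(
    base_dir=Path("tmp/python_assistant"),  # where generated files are saved
    save_and_run=True,   # save generated code to a file and run it (default)
    pip_install=True,    # allow installing missing libraries
    run_code=False,      # keep direct code execution off (default)
)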

list_files
bool
default: "False"

If the PythonAssistant is allowed to list files.

run_files
bool
default: "True"

If the PythonAssistant is allowed to run files.

read_files
bool
default: "False"

If the PythonAssistant is allowed to read files.

safe_globals
dict
default: "None"

Provide a dictionary of global variables for the PythonAssistant.

safe_locals
dict
default: "None"

Provide a dictionary of local variables for the PythonAssistant.
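
A minimal sketch passing a pre-imported module and a constant into the execution namespace; the names pd and MAX_ROWS are illustrative assumptions:

import pandas as pd

from phi.assistant.python import PythonAssistant

python_assistant = PythonAssistant(
    safe_globals={"pd": pd},         # generated code can reference pd directly
    safe_locals={"MAX_ROWS": 1000},  # a local constant the code may use
)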

Assistant Reference

PythonAssistant is a subclass of the Assistant class and has access to the same params.

llm
LLM

LLM to use for this Assistant
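
For example, a sketch of passing an LLM explicitly; the phi.llm.openai.OpenAIChat import path and model id are assumptions based on other phidata examples:

from phi.assistant import Assistant
from phi.llm.openai import OpenAIChat

assistant = Assistant(
    llm=OpenAIChat(model="gpt-4"),
)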

introduction
str

Assistant introduction. This is added to the chat history when a run is started.

name
str

Assistant name

assistant_data
Dict[str, Any]

Metadata associated with this assistant

run_id
str

Run UUID (autogenerated if not set)

run_name
str

Run name

run_data
Dict[str, Any]

Metadata associated with this run

user_id
str

ID of the user participating in this run

user_data
Dict[str, Any]

Metadata associated with the user participating in this run

memory
AssistantMemory
default: "AssistantMemory()"

Assistant Memory

add_chat_history_to_messages
bool
default: "False"

Add chat history to the messages sent to the LLM.

add_chat_history_to_prompt
bool
default: "False"

Add chat history to the prompt sent to the LLM.

num_history_messages
int
default: "6"

Number of previous messages to add to prompt or messages sent to the LLM.
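
A short sketch that lets the LLM see the most recent exchanges; both params are documented above and the question is illustrative:

from phi.assistant import Assistant

assistant = Assistant(
    add_chat_history_to_messages=True,
    num_history_messages=4,  # include the 4 most recent messages
)

assistant.print_response("What did I just ask you?")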

knowledge_base
AssistantKnowledge

Assistant Knowledge Base

add_references_to_prompt
bool
default: "False"

Enable RAG by adding references from the knowledge base to the prompt.

storage
AssistantStorage

Assistant Storage

db_row
AssistantRun

AssistantRun from the database: DO NOT SET MANUALLY

tools
List[Union[Tool, ToolRegistry, Callable, Dict, Function]]

A list of tools provided to the LLM. Tools are functions the model may generate JSON inputs for. If you provide a dict, it is not called by the model.
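
For instance, a hedged sketch passing a plain Python function as a tool (the type above accepts a Callable); the get_current_time helper is written here purely for illustration:

from datetime import datetime

from phi.assistant import Assistant

def get_current_time() -> str:
    """Return the current time as an ISO-8601 string."""
    return datetime.now().isoformat()

assistant = Assistant(
    tools=[get_current_time],
    show_tool_calls=True,
)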

show_tool_calls
bool
default: "False"

Show tool calls in LLM messages.

tool_call_limit
int

Maximum number of tool calls allowed.

tool_choice
Union[str, Dict[str, Any]]

Controls which (if any) tool is called by the model.

  • "none" means the model will not call a tool and instead generates a message.
  • "auto" means the model can pick between generating a message or calling a tool.
  • Specifying a particular function via
{
  "type": "function",
  "function": {"name": "my_function"}
}

forces the model to call that tool.

"none" is the default when no tools are present. "auto" is the default if tools are present.

read_chat_history
bool
default: "False"

If True, adds a tool that allows the LLM to get the chat history.

search_knowledge
bool
default: "False"

If True, adds a tool that allows the LLM to search the knowledge base.

update_knowledge
bool
default: "False"

If True, adds a tool that allows the LLM to update the knowledge base.

read_tool_call_history
bool
default: "False"

If True, adds a tool that allows the LLM to get the tool call history.

use_tools
bool
default: "False"

Allow the assistant to use tools

additional_messages
List[Union[Dict, Message]]

List of additional messages added to the messages list after the system prompt. Use these for few-shot learning or to provide additional context to the LLM.
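
For example, a hedged few-shot sketch; the dict messages assume the usual role/content shape:

from phi.assistant import Assistant

assistant = Assistant(
    additional_messages=[
        # A worked example the LLM can imitate
        {"role": "user", "content": "Summarize: The movie was long but rewarding."},
        {"role": "assistant", "content": "Long yet rewarding."},
    ],
)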

system_prompt
str

Provide the system prompt as a string

system_prompt_template
PromptTemplate

Provide the system prompt as a PromptTemplate

build_default_system_prompt
bool
default: "True"

If True, build a default system prompt using instructions and extra_instructions

description
str

Assistant description for the default system prompt

task
str

Assistant task

instructions
List[str]

List of instructions for the default system prompt

extra_instructions
List[str]

List of extra_instructions for the default system prompt. Use these when you want to use the default prompt but also add some extra instructions.
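
A minimal sketch combining the default prompt with custom instructions; the wording of the instructions is illustrative:

from phi.assistant import Assistant

assistant = Assistant(
    description="You are a concise data analyst.",
    instructions=["Answer in at most three sentences."],
    extra_instructions=["Mention the data source you used."],
)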

expected_output
str

Expected output added to the system prompt

add_to_system_prompt
str

Add a string to the end of the default system prompt

add_knowledge_base_instructions
bool
default: "True"

If True, add instructions for using the knowledge base to the default system prompt if a knowledge base is provided

prevent_hallucinations
bool
default: "False"

If True, add instructions for letting the user know that the assistant does not know the answer

prevent_prompt_injection
bool
default: "False"

If True, add instructions to prevent prompt injection attacks

limit_tool_access
bool
default: "False"

If True, add instructions for limiting tool access to the default system prompt if tools are provided

add_datetime_to_instructions
bool
default: "False"

If True, add the current datetime to the prompt to give the assistant a sense of time. This allows relative times like "tomorrow" to be used in the prompt.

markdown
bool
default: "False"

If True, format the output using markdown

user_prompt
Union[List[Dict], str]

Provide the user prompt as a string. Note: this will ignore the input message provided to the run function

user_prompt_template
PromptTemplate

Provide the user prompt as a PromptTemplate

build_default_user_prompt
bool
default: "True"

If True, build a default user prompt using references and chat history

references_function
Callable[..., Optional[str]]

Function to build references for the default user_prompt. This function, if provided, is called when add_references_to_prompt is True

Signature:

def references(assistant: Assistant, query: str) -> Optional[str]:
    ...
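
As a hedged, self-contained sketch, a references function can return any string (here a static JSON blob purely for illustration) and is wired in via add_references_to_prompt:

import json
from typing import Optional

from phi.assistant import Assistant

def references(assistant: Assistant, query: str) -> Optional[str]:
    # Static content for illustration; replace with your own retrieval logic.
    docs = [{"source": "notes.md", "content": f"No stored notes match: {query}"}]
    return json.dumps(docs)

assistant = Assistant(
    add_references_to_prompt=True,
    references_function=references,
)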

references_format
Literal['json', 'yaml']
default: "json"

Format of the references

chat_history_function
Callable[..., Optional[str]]

Function to build the chat_history for the default user_prompt. This function, if provided, is called when add_chat_history_to_prompt is True

Signature:

def chat_history(assistant: Assistant) -> str:
    ...

output_model
Union[str, List, Type[BaseModel]]

Provide an output model for the responses
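
For example, a sketch returning a structured response as a pydantic model; the MovieSummary fields are assumptions, and run() is expected to return the parsed model while parse_output is True (the default):

from pydantic import BaseModel

from phi.assistant import Assistant

class MovieSummary(BaseModel):
    title: str
    genre: str
    one_line_pitch: str

assistant = Assistant(output_model=MovieSummary)

summary = assistant.run("Invent a short movie idea about a lighthouse keeper.")
print(summary)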

parse_output
bool
default: "True"

If True, the output is converted into the output_model (pydantic model or json dict)

output
Any

Final LLM response, i.e. the final output of this assistant

save_output_to_file
str

Save the output to a file

task_data
Dict[str, Any]

Metadata associated with the assistant tasks

team
List['Assistant']

Assistant team.

role
str

When the assistant is part of a team, this is the role of the assistant in the team
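
A brief sketch of a two-member team; the names and roles are illustrative:

from phi.assistant import Assistant

researcher = Assistant(name="Researcher", role="Collect the relevant facts")
writer = Assistant(name="Writer", role="Draft the final answer")

lead = Assistant(
    team=[researcher, writer],
)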

add_delegation_instructions
bool
default: "True"

Add instructions for delegating tasks to other assistants

debug_mode
bool
default: "False"

If True, show debug logs

monitoring
bool
default: "False"

If True, logs Assistant runs on phidata.com