Under the hood, an Assistant converts the description and instructions into a system prompt and sends the input message as the user prompt.

This is purely a formatting convenience; we DO NOT ALTER or abstract any information.

System Prompt

The description is added to the start of the system prompt and instructions are added as a numbered list inside <instructions></instructions> tags. For example:

instructions.py
from phi.assistant import Assistant

assistant = Assistant(
    description="You are a famous short story writer asked to write for a magazine",
    instructions=["You are a pilot on a plane flying from Hawaii to Japan."],
    markdown=True,
    debug_mode=True,
)
assistant.print_response("Tell me a 2 sentence horror story.")

This translates to:

DEBUG    ============== system ==============
DEBUG    You are a famous short story writer asked to write for a magazine
         YOU MUST FOLLOW THESE INSTRUCTIONS CAREFULLY.
         <instructions>
         1. You are a pilot on a plane flying from Hawaii to Japan.
         2. Use markdown to format your answers.
         </instructions>
DEBUG    ============== user ==============
DEBUG    Tell me a 2 sentence horror story.
DEBUG    Time to generate response: 1.5415s
DEBUG    Estimated completion tokens: 40
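
Conceptually, the system prompt above can be assembled with a few lines of string formatting. The following is a simplified sketch for illustration only, not phi's actual implementation:

```python
def build_system_prompt(description: str, instructions: list) -> str:
    # Start with the description, then add the numbered instructions
    # inside <instructions></instructions> tags.
    lines = [description, "YOU MUST FOLLOW THESE INSTRUCTIONS CAREFULLY."]
    lines.append("<instructions>")
    for i, instruction in enumerate(instructions, start=1):
        lines.append(f"{i}. {instruction}")
    lines.append("</instructions>")
    return "\n".join(lines)


print(build_system_prompt(
    "You are a famous short story writer asked to write for a magazine",
    [
        "You are a pilot on a plane flying from Hawaii to Japan.",
        "Use markdown to format your answers.",
    ],
))
```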

User Prompt

In most cases, the user prompt is simply the input message sent to the Assistant.

If a knowledge_base is provided and add_references_to_prompt=True, the user prompt is updated to include:

user_prompt += f"""Use this information from the knowledge base if it helps:
<knowledge_base>
{references}
</knowledge_base>
"""

If add_chat_history_to_prompt=True, the user prompt is updated to include messages from the chat history. The number of messages can be set using num_history_messages:

user_prompt += f"""Use the following chat history to reference past messages:
<chat_history>
{chat_history}
</chat_history>
"""

Overriding Default Prompts

You can completely override the default prompts using:

system_prompt
str
default: "None"

Provide the system prompt as a string.

system_prompt_template
PromptTemplate
default: "None"

Provide the system prompt as a PromptTemplate.

user_prompt
str
default: "None"

Provide the user prompt as a string. Note: this will ignore the message sent to the run function.

user_prompt_template
PromptTemplate
default: "None"

Provide the user prompt as a PromptTemplate.

Example:

from phi.assistant import Assistant

assistant = Assistant(
    system_prompt="Share a 2 sentence story about",
    user_prompt="Love in the year 12000.",
    debug_mode=True,
)
assistant.print_response()

Customize the System Prompt

The system prompt can be customized using:

description
str
default: "None"

A description of the Assistant that is added to the top of the system prompt.

instructions
List[str]
default: "None"

List of instructions added to the system prompt in <instructions> tags. Default instructions are also created depending on the values of markdown, output_model, etc.

extra_instructions
List[str]
default: "None"

List of extra_instructions added to the default system prompt. Use these when you want to add some extra instructions at the end of the default instructions.

add_to_system_prompt
str
default: "None"

Add a string to the end of the default system prompt.

add_knowledge_base_instructions
bool
default: "True"

If True, add instructions for using the knowledge base to the system prompt if a knowledge base is provided.

prevent_hallucinations
bool
default: "False"

If True, add instructions to return “I don't know” when the assistant does not know the answer.

prevent_prompt_injection
bool
default: "False"

If True, add instructions to prevent prompt injection attacks.

limit_tool_access
bool
default: "False"

If True, add instructions for limiting tool access to the default system prompt if tools are provided.

add_datetime_to_instructions
bool
default: "False"

If True, add the current datetime to the prompt to give the assistant a sense of time. This allows relative times like “tomorrow” to be used in the prompt.

markdown
bool
default: "False"

Add an instruction to format the output using markdown.

output_model
Optional[Union[str, List, Type[BaseModel]]]
default: "None"

Provide an output model for the responses. Accepts a pydantic model, a list of strings (where each string is a key the LLM should return a value for), or a plain string.

parse_output
bool
default: "True"

If True, the output is converted into the output_model (a pydantic model or JSON dict). Otherwise, it is returned as is.
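
The interaction between output_model and parse_output can be pictured with a small sketch. This is a hypothetical illustration, not phi's actual implementation: when parsing is enabled and the LLM returns JSON, the response is converted into the requested structure.

```python
import json


def parse_response(raw, output_model=None, parse_output=True):
    # Hypothetical sketch: turn a raw LLM string into the requested structure.
    if output_model is None or not parse_output:
        return raw  # no model requested, or parsing disabled: return as is
    data = json.loads(raw)
    if isinstance(output_model, list):
        # output_model as a list of keys: keep just those keys
        return {key: data.get(key) for key in output_model}
    if isinstance(output_model, type):
        # output_model as a pydantic-style class: construct the model
        return output_model(**data)
    return data
```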

Customize the User Prompt

The user prompt can be customized using:

add_references_to_prompt
bool
default: "False"

Enable RAG by adding references from the knowledge base to the prompt.

add_chat_history_to_prompt
bool
default: "False"

Adds the formatted chat history to the user prompt.

num_history_messages
int
default: "6"

Number of previous messages to add to the prompt or messages list.
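
The effect of num_history_messages can be sketched as a simple slice over the stored messages (an illustrative sketch, not phi's actual code):

```python
def history_for_prompt(messages, num_history_messages=6):
    # Keep only the most recent messages for the prompt.
    return messages[-num_history_messages:]


messages = [f"message {i}" for i in range(10)]
print(history_for_prompt(messages))  # the 6 most recent messages
```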