GPT models are best-in-class LLMs and serve as the default LLM for Assistants.

Authentication

Set the OPENAI_API_KEY environment variable. You can create an API key from the OpenAI platform.

export OPENAI_API_KEY=sk-***

Example

Use OpenAIChat with your Assistant:

from phi.assistant import Assistant
from phi.llm.openai import OpenAIChat

assistant = Assistant(
    llm=OpenAIChat(model="gpt-4-turbo", max_tokens=500, temperature=0.3),
    description="You provide 15 minute healthy recipes.",
)
assistant.print_response("Share a breakfast recipe.", markdown=True)

Params

For more information, please refer to the OpenAI docs as well.

model
str
default: "gpt-4-turbo"

OpenAI model ID.

seed
int

If specified, the OpenAI system will make a best effort to sample deterministically, so that repeated requests with the same seed and parameters return the same result.
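For reproducible runs, a seed can be pinned alongside a low temperature. This is a configuration sketch (it needs a valid OPENAI_API_KEY and network access to actually run); the seed value is illustrative:

```python
from phi.assistant import Assistant
from phi.llm.openai import OpenAIChat

# Pinning seed (with temperature=0) makes repeated requests as
# reproducible as the backend allows; determinism is best-effort only.
assistant = Assistant(
    llm=OpenAIChat(model="gpt-4-turbo", seed=42, temperature=0),
    description="You provide 15 minute healthy recipes.",
)
```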

max_tokens
int

The maximum number of tokens to generate in the chat completion.

temperature
float

What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
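The API applies temperature server-side, but its effect can be illustrated with a plain softmax sketch (the function name and logits below are illustrative, not part of the API):

```python
import math

def softmax_with_temperature(logits, temperature):
    """Scale logits by 1/temperature before softmax: low values
    sharpen the distribution, high values flatten it."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]
cold = softmax_with_temperature(logits, 0.2)  # near-deterministic
hot = softmax_with_temperature(logits, 2.0)   # closer to uniform
```

At temperature 0.2 nearly all probability mass lands on the top token, while at 2.0 the distribution spreads out, which is why low temperatures give focused, repeatable output.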

frequency_penalty
float

Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model’s likelihood to repeat the same line verbatim.

response_format
Dict[str, Any]

An object specifying the format that the model must output. Setting to { "type": "json_object" } enables JSON mode, which guarantees the message the model generates is valid JSON.
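A configuration sketch for JSON mode (needs an API key to actually run). Note that OpenAI requires the word "json" to appear somewhere in the messages when JSON mode is enabled, so the description below mentions it explicitly:

```python
from phi.assistant import Assistant
from phi.llm.openai import OpenAIChat

# JSON mode constrains the model to emit syntactically valid JSON.
assistant = Assistant(
    llm=OpenAIChat(
        model="gpt-4-turbo",
        response_format={"type": "json_object"},
    ),
    # The prompt must mention JSON, or the API rejects the request.
    description="You reply in JSON with keys 'name' and 'ingredients'.",
)
```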

presence_penalty
float

Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model’s likelihood to talk about new topics.

stop
List[str]

Up to 4 sequences where the API will stop generating further tokens.
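Stop sequences are applied server-side and the stop text itself is not returned. The behavior can be approximated with a small client-side sketch (the helper name is illustrative):

```python
def truncate_at_stop(text, stop):
    """Mimic server-side stop sequences: cut at the earliest
    occurrence of any stop string; the stop text is dropped."""
    cut = len(text)
    for s in stop:
        i = text.find(s)
        if i != -1 and i < cut:
            cut = i
    return text[:cut]

recipe = truncate_at_stop("Step 1: mix.\nStep 2: bake.\nEND", ["\nEND"])
# recipe == "Step 1: mix.\nStep 2: bake."
```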

user
str

A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse.

top_p
float

An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass.
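Nucleus sampling can likewise be sketched in plain Python: keep the smallest set of highest-probability tokens whose cumulative mass reaches top_p, then renormalize (this is an illustration of the idea, not the API's implementation):

```python
def nucleus(probs, top_p):
    """Keep the smallest set of tokens whose cumulative probability
    reaches top_p, then renormalize over the kept set."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, total = [], 0.0
    for i in order:
        kept.append(i)
        total += probs[i]
        if total >= top_p:
            break
    z = sum(probs[i] for i in kept)
    return {i: probs[i] / z for i in kept}

# With top_p=0.8, only the two most likely tokens survive.
kept_probs = nucleus([0.5, 0.3, 0.15, 0.05], 0.8)
```

OpenAI generally recommends adjusting temperature or top_p, not both.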

logit_bias
Any

Modify the likelihood of specified tokens appearing in the completion. Accepts a json object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100.
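In the request body the map keys are tokenizer token IDs (the IDs below are illustrative, not real tokens), and each bias must lie in [-100, 100]; a small sketch of the constraint the API enforces:

```python
# -100 effectively bans a token; +100 strongly favors it.
logit_bias = {"1234": -100, "5678": 25}  # token IDs are illustrative

def validate_logit_bias(bias):
    """Check that every bias value is within the allowed range."""
    return all(-100 <= v <= 100 for v in bias.values())
```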

headers
Dict[str, Any]

Headers added to the OpenAI request.

api_key
str

OpenAI API Key

organization
str

OpenAI organization

base_url
str

OpenAI Base URL

client_params
Dict[str, Any]

Additional keyword arguments used when creating the OpenAI() client.

openai_client
OpenAI

Provide your own OpenAI() client instance to use.
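Supplying your own client is useful when you need custom client settings, such as pointing at an OpenAI-compatible proxy. A configuration sketch (the key and base_url are placeholders, not real values):

```python
from openai import OpenAI
from phi.assistant import Assistant
from phi.llm.openai import OpenAIChat

# Reuse a pre-configured client; base_url here is a placeholder
# for an OpenAI-compatible endpoint.
client = OpenAI(api_key="sk-...", base_url="https://my-proxy.example.com/v1")
assistant = Assistant(
    llm=OpenAIChat(model="gpt-4-turbo", openai_client=client),
)
```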