

The GPT models are best-in-class LLMs and are used as the default model by Agents.

Authentication

Set your OPENAI_API_KEY environment variable. You can get an API key from OpenAI.
export OPENAI_API_KEY=sk-***
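If the key is not set, the request to OpenAI will fail with an authentication error at call time. To fail fast with a clearer message, a small helper like this can check the environment before the agent runs (a sketch; `require_api_key` is not part of phidata):

```python
import os


def require_api_key(var: str = "OPENAI_API_KEY") -> str:
    """Return the API key from the environment, raising early if it is unset."""
    key = os.getenv(var)
    if not key:
        raise RuntimeError(f"{var} is not set; export it before running the agent.")
    return key
```

Call `require_api_key()` once at startup so a missing key surfaces immediately rather than mid-run.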

Example

Use OpenAIChat with your Agent:

from phi.agent import Agent, RunResponse
from phi.model.openai import OpenAIChat

agent = Agent(
    model=OpenAIChat(id="gpt-4o"),
    markdown=True
)

# Get the response in a variable
# run: RunResponse = agent.run("Share a 2 sentence horror story.")
# print(run.content)

# Print the response in the terminal
agent.print_response("Share a 2 sentence horror story.")

Params

For more information, please refer to the OpenAI docs as well.
| Name | Type | Default | Description |
| --- | --- | --- | --- |
| id | str | "gpt-4o" | The id of the OpenAI model to use. |
| name | str | "OpenAIChat" | The name of this chat model instance. |
| provider | str | "OpenAI " + id | The provider of the model. |
| store | Optional[bool] | None | Whether to store the output of this chat completion request for use in model distillation or evals products. |
| frequency_penalty | Optional[float] | None | Penalizes new tokens based on their frequency in the text so far. |
| logit_bias | Optional[Any] | None | Modifies the likelihood of specified tokens appearing in the completion. |
| logprobs | Optional[bool] | None | Whether to return log probabilities of the output tokens. |
| max_tokens | Optional[int] | None | The maximum number of tokens to generate in the chat completion. |
| presence_penalty | Optional[float] | None | Penalizes new tokens based on whether they appear in the text so far. |
| response_format | Optional[Any] | None | An object specifying the format that the model must output. |
| seed | Optional[int] | None | A seed for deterministic sampling. |
| stop | Optional[Union[str, List[str]]] | None | Up to 4 sequences where the API will stop generating further tokens. |
| temperature | Optional[float] | None | Controls randomness in the model's output. |
| top_logprobs | Optional[int] | None | The number of most likely tokens to return log probabilities for at each position. |
| user | Optional[str] | None | A unique identifier representing your end-user. |
| top_p | Optional[float] | None | Controls diversity via nucleus sampling. |
| extra_headers | Optional[Any] | None | Additional headers to send with the request. |
| extra_query | Optional[Any] | None | Additional query parameters to send with the request. |
| request_params | Optional[Dict[str, Any]] | None | Additional parameters to include in the request. |
| api_key | Optional[str] | None | The API key for authenticating with OpenAI. |
| organization | Optional[str] | None | The organization to use for API requests. |
| base_url | Optional[Union[str, httpx.URL]] | None | The base URL for API requests. |
| timeout | Optional[float] | None | The timeout for API requests. |
| max_retries | Optional[int] | None | The maximum number of retries for failed requests. |
| default_headers | Optional[Any] | None | Default headers to include in all requests. |
| default_query | Optional[Any] | None | Default query parameters to include in all requests. |
| http_client | Optional[httpx.Client] | None | An optional pre-configured HTTP client. |
| client_params | Optional[Dict[str, Any]] | None | Additional parameters for client configuration. |
| client | Optional[OpenAIClient] | None | The OpenAI client instance. |
| async_client | Optional[AsyncOpenAIClient] | None | The asynchronous OpenAI client instance. |
| structured_outputs | bool | False | Whether to use structured outputs with this Model. |
| supports_structured_outputs | bool | True | Whether the Model supports structured outputs. |
| add_images_to_message_content | bool | True | Whether to add images to the message content. |
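The sampling-related parameters from the table can be grouped in a dict and unpacked into the model constructor. A sketch, with illustrative values (the keyword names come straight from the table above):

```python
# Commonly tuned sampling params from the table; the values are illustrative.
# Pass them to the model with: OpenAIChat(id="gpt-4o", **sampling)
sampling = {
    "temperature": 0.2,  # lower values make output more deterministic
    "top_p": 0.9,        # nucleus sampling cutoff
    "max_tokens": 256,   # cap on tokens generated per completion
    "seed": 42,          # best-effort deterministic sampling
}
```

Keeping these in one dict makes it easy to reuse the same sampling configuration across multiple agents.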