The GPT models are best-in-class LLMs and are used as the default model by Agents.

Authentication

Set your `OPENAI_API_KEY` environment variable. You can create an API key from the OpenAI platform.

Example

Use OpenAIChat with your Agent:
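A minimal sketch of this usage, assuming the `agno` package layout (`agno.agent.Agent`, `agno.models.openai.OpenAIChat`) and a valid `OPENAI_API_KEY` in the environment; adjust the imports to match your installed version:

```python
# Minimal sketch: run an Agent backed by OpenAIChat.
# Assumes the `agno` package layout and a valid OPENAI_API_KEY
# in the environment; adjust imports for your installed version.
from agno.agent import Agent
from agno.models.openai import OpenAIChat

agent = Agent(
    model=OpenAIChat(id="gpt-4o"),
    markdown=True,
)

# Sends the prompt to the model and prints the response.
agent.print_response("Share a two-sentence horror story.")
```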

Params

For more information, refer to the OpenAI API documentation.

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `id` | `str` | `"gpt-4o"` | OpenAI model ID. |
| `name` | `str` | `"OpenAIChat"` | Name identifier for the OpenAI chat model. |
| `provider` | `str` | - | Provider of the model, combining "OpenAI" with the model ID. |
| `store` | `Optional[bool]` | - | If set, determines whether to store the conversation. |
| `frequency_penalty` | `Optional[float]` | - | Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. |
| `logit_bias` | `Optional[Any]` | - | Modify the likelihood of specified tokens appearing in the completion. |
| `logprobs` | `Optional[bool]` | - | Whether to return log probabilities of the output tokens. |
| `max_tokens` | `Optional[int]` | - | The maximum number of tokens to generate in the chat completion. |
| `presence_penalty` | `Optional[float]` | - | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. |
| `response_format` | `Optional[Any]` | - | An object specifying the format that the model must output. Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the message the model generates is valid JSON. |
| `seed` | `Optional[int]` | - | If specified, the OpenAI system will make a best effort to sample deterministically, so that repeated requests with the same seed and parameters should return the same result. |
| `stop` | `Optional[Union[str, List[str]]]` | - | Up to 4 sequences where the API will stop generating further tokens. |
| `temperature` | `Optional[float]` | - | Sampling temperature to use, between 0 and 2. Higher values like 0.8 make the output more random, while lower values like 0.2 make it more focused and deterministic. |
| `top_logprobs` | `Optional[int]` | - | The number of most likely tokens to return at each token position, along with their log probabilities. |
| `user` | `Optional[str]` | - | A unique identifier representing your end user, which can help OpenAI monitor and detect abuse. |
| `top_p` | `Optional[float]` | - | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. |
| `extra_headers` | `Optional[Any]` | - | Additional headers to be included in the API request. |
| `extra_query` | `Optional[Any]` | - | Additional query parameters to be included in the API request. |
| `request_params` | `Optional[Dict[str, Any]]` | - | Additional parameters to be included in the API request. |
| `api_key` | `Optional[str]` | - | OpenAI API key for authentication. |
| `organization` | `Optional[str]` | - | OpenAI organization identifier. |
| `base_url` | `Optional[Union[str, httpx.URL]]` | - | Base URL for the OpenAI API. |
| `timeout` | `Optional[float]` | - | Timeout for API requests in seconds. |
| `max_retries` | `Optional[int]` | - | Maximum number of retries for failed API requests. |
| `default_headers` | `Optional[Any]` | - | Default headers to be included in all API requests. |
| `default_query` | `Optional[Any]` | - | Default query parameters to be included in all API requests. |
| `http_client` | `Optional[httpx.Client]` | - | Custom HTTP client for making API requests. |
| `client_params` | `Optional[Dict[str, Any]]` | - | Additional parameters for configuring the OpenAI client. |
| `client` | `Optional[OpenAIClient]` | - | Custom OpenAI client instance. |
| `async_client` | `Optional[AsyncOpenAIClient]` | - | Custom asynchronous OpenAI client instance. |
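As an illustrative sketch (not a definitive API reference), these request parameters are passed as keyword arguments to the `OpenAIChat` constructor. The parameter names below follow the table; verify them against your installed version:

```python
# Hypothetical configuration sketch: pass request parameters from the
# table above as keyword arguments to OpenAIChat. Verify parameter
# names against your installed version of the library.
from agno.models.openai import OpenAIChat

model = OpenAIChat(
    id="gpt-4o",
    temperature=0.2,  # more focused, deterministic output
    max_tokens=512,   # cap on generated tokens
    seed=42,          # best-effort deterministic sampling
    response_format={"type": "json_object"},  # enable JSON mode
)
```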