OpenAI
The GPT models are best-in-class LLMs and are used as the default LLM by Agents.
Authentication
Set your `OPENAI_API_KEY` environment variable. You can get an API key from OpenAI.
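A quick way to verify the key is available before running your agent (a minimal sketch; `OpenAIChat` is expected to read this variable by default, and you can also pass `api_key` explicitly, see Params below):

```python
import os

# Fails fast with a clear message if the key is not set in the environment.
assert os.getenv("OPENAI_API_KEY"), "Set OPENAI_API_KEY before running"
```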
Example
Use `OpenAIChat` with your Agent:
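A minimal sketch; the import paths and the `Agent` keyword arguments shown here are assumptions based on common usage of this library, so adjust them to your installed package:

```python
from agno.agent import Agent              # import paths are an assumption
from agno.models.openai import OpenAIChat

# Create an Agent backed by the default OpenAI model (see the Params table below).
agent = Agent(
    model=OpenAIChat(id="gpt-4o"),
    markdown=True,
)

agent.print_response("Share a two-sentence horror story.")
```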
Params
For more information on these parameters, refer to the OpenAI docs.
Parameter | Type | Default | Description |
---|---|---|---|
id | str | "gpt-4o" | OpenAI model ID. |
name | str | "OpenAIChat" | Name identifier for the OpenAI chat model. |
provider | str | - | Provider of the model, combining “OpenAI” with the model ID. |
store | Optional[bool] | - | If set, determines whether to store the conversation. |
frequency_penalty | Optional[float] | - | Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model’s likelihood to repeat the same line verbatim. |
logit_bias | Optional[Any] | - | Modify the likelihood of specified tokens appearing in the completion. |
logprobs | Optional[bool] | - | Whether to return log probabilities of the output tokens. |
max_tokens | Optional[int] | - | The maximum number of tokens to generate in the chat completion. |
presence_penalty | Optional[float] | - | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model’s likelihood to talk about new topics. |
response_format | Optional[Any] | - | An object specifying the format that the model must output. Setting to { "type": "json_object" } enables JSON mode, which guarantees the message the model generates is valid JSON. |
seed | Optional[int] | - | If specified, the OpenAI system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result. |
stop | Optional[Union[str, List[str]]] | - | Up to 4 sequences where the API will stop generating further tokens. |
temperature | Optional[float] | - | What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. |
top_logprobs | Optional[int] | - | The number of most likely tokens to return at each token position, along with their log probabilities. |
user | Optional[str] | - | A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. |
top_p | Optional[float] | - | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. |
extra_headers | Optional[Any] | - | Additional headers to be included in the API request. |
extra_query | Optional[Any] | - | Additional query parameters to be included in the API request. |
request_params | Optional[Dict[str, Any]] | - | Additional parameters to be included in the API request. |
api_key | Optional[str] | - | OpenAI API Key for authentication. |
organization | Optional[str] | - | OpenAI organization identifier. |
base_url | Optional[Union[str, httpx.URL]] | - | Base URL for the OpenAI API. |
timeout | Optional[float] | - | Timeout for API requests in seconds. |
max_retries | Optional[int] | - | Maximum number of retries for failed API requests. |
default_headers | Optional[Any] | - | Default headers to be included in all API requests. |
default_query | Optional[Any] | - | Default query parameters to be included in all API requests. |
http_client | Optional[httpx.Client] | - | Custom HTTP client for making API requests. |
client_params | Optional[Dict[str, Any]] | - | Additional parameters for configuring the OpenAI client. |
client | Optional[OpenAIClient] | - | Custom OpenAI client instance. |
async_client | Optional[AsyncOpenAIClient] | - | Custom asynchronous OpenAI client instance. |
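Putting several of these together, here is a sketch of a more fully configured model. The keyword names follow the table above, and the import path is the same assumption as in the example earlier:

```python
from agno.models.openai import OpenAIChat  # import path assumed, as above

model = OpenAIChat(
    id="gpt-4o",
    temperature=0.2,     # lower temperature -> more focused, deterministic output
    max_tokens=512,      # cap the length of the chat completion
    seed=42,             # best-effort deterministic sampling across repeated requests
    response_format={"type": "json_object"},  # JSON mode, per the table above
    timeout=30.0,        # request timeout in seconds
    max_retries=2,       # retry failed API requests up to twice
)
```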