# Mistral

## Example

## Mistral Params
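A minimal sketch of configuring the model, using a hypothetical plain dataclass rather than the real `Mistral` class, mirroring the defaults documented under Mistral Params:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical stand-in for the Mistral class documented below; a plain
# dataclass, not the real client, mirroring the documented defaults.
@dataclass
class MistralSketch:
    name: str = "Mistral"
    model: str = "mistral-large-latest"
    max_tokens: int = 1024
    temperature: Optional[float] = None   # "-" in the table: unset by default
    top_p: Optional[float] = None
    random_seed: Optional[int] = None
    api_key: Optional[str] = None

llm = MistralSketch(temperature=0.2, random_seed=42)
print(llm.model)       # mistral-large-latest
print(llm.max_tokens)  # 1024
```

In the real class, construction alone makes no API call; the configured params are only sent when a completion is requested.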
Parameter | Type | Default | Description |
---|---|---|---|
name | str | "Mistral" | The name of the LLM. |
model | str | "mistral-large-latest" | The Mistral model to use. |
temperature | float | - | Controls randomness in output generation. |
max_tokens | int | 1024 | Maximum number of tokens to generate. |
top_p | float | - | Controls diversity of output generation. |
random_seed | int | - | Seed for random number generation. |
safe_mode | bool | - | Enables content filtering. |
safe_prompt | bool | - | Applies content filtering to prompts. |
response_format | Union[Dict[str, Any], ChatCompletionResponseFormat] | - | Specifies the desired response format. |
api_key | str | - | Your Mistral API key. |
endpoint | str | - | Custom API endpoint URL. |
max_retries | int | - | Maximum number of API call retries. |
timeout | int | - | Timeout for API calls in seconds. |
mistral_client | MistralClient | - | Custom Mistral client instance. |
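Parameters whose default is shown as "-" are optional and are typically omitted from the API request unless explicitly set. A stdlib-only sketch of that mapping (the helper name and omission rule are assumptions for illustration, not the library's actual code):

```python
def build_request_payload(model="mistral-large-latest", max_tokens=1024,
                          temperature=None, top_p=None, random_seed=None,
                          safe_prompt=None, response_format=None):
    """Hypothetical helper: include only the explicitly set optional params."""
    payload = {"model": model, "max_tokens": max_tokens}
    optional = {
        "temperature": temperature,
        "top_p": top_p,
        "random_seed": random_seed,
        "safe_prompt": safe_prompt,
        "response_format": response_format,
    }
    # Unset ("-") params are left out of the request entirely.
    payload.update({k: v for k, v in optional.items() if v is not None})
    return payload

print(build_request_payload(temperature=0.7))
# {'model': 'mistral-large-latest', 'max_tokens': 1024, 'temperature': 0.7}
```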
## LLM Params

`Mistral` is a subclass of the `LLM` class and has access to the same params:
Parameter | Type | Default | Description |
---|---|---|---|
model | str | - | ID of the model to use. |
name | str | - | Name for this LLM. Note: This is not sent to the LLM API. |
metrics | Dict[str, Any] | - | Metrics collected for this LLM. Note: This is not sent to the LLM API. |
response_format | Any | - | Format of the response. |
tools | List[Union[Tool, Dict]] | - | A list of tools provided to the LLM. Tools are functions the model may generate JSON inputs for. If you provide a dict, it is not called by the model. Always add tools using the add_tool() method. |
tool_choice | Union[str, Dict[str, Any]] | - | Controls which (if any) function is called by the model. "none" means the model will not call a function and instead generates a message. "auto" means the model can pick between generating a message or calling a function. Specifying a particular function via {"type": "function", "function": {"name": "my_function"}} forces the model to call that function. "none" is the default when no functions are present. "auto" is the default if functions are present. |
run_tools | bool | True | If True, automatically executes the tool calls returned by the model. |
show_tool_calls | bool | - | If True, shows tool calls in the response. |
functions | Dict[str, Function] | - | Functions extracted from the tools. Note: These are not sent to the LLM API and are only used for execution. |
function_call_limit | int | 20 | Maximum number of function calls allowed. |
function_call_stack | List[FunctionCall] | - | Stack of function calls. |
system_prompt | str | - | System prompt provided to the LLM. |
instructions | str | - | Instructions provided to the LLM. |
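The `tool_choice` defaults described above ("none" when no functions are present, "auto" when they are) can be sketched as a small resolver. This illustrates the documented rule only; it is not the library's implementation:

```python
def resolve_tool_choice(tool_choice=None, tools=None):
    # An explicit setting ("none", "auto", or a forced-function dict) wins.
    if tool_choice is not None:
        return tool_choice
    # Documented defaults: "auto" when tools are present, otherwise "none".
    return "auto" if tools else "none"

forced = {"type": "function", "function": {"name": "my_function"}}
print(resolve_tool_choice())                            # none
print(resolve_tool_choice(tools=[{"name": "search"}]))  # auto
print(resolve_tool_choice(tool_choice=forced))
```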