Mistral is a platform that provides API endpoints for Large Language Models.

Authentication

Set your MISTRAL_API_KEY environment variable.

export MISTRAL_API_KEY=***
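
If you prefer to pass the key explicitly rather than rely on the environment variable, the api_key parameter (documented under Params below) can be set when constructing the model. A minimal sketch:

import os

from phi.llm.mistral import Mistral

# Pass the key explicitly; if omitted, the exported MISTRAL_API_KEY
# environment variable is used instead (as in the Usage example below).
llm = Mistral(api_key=os.getenv("MISTRAL_API_KEY"))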

Usage

Use Mistral with your Assistant:

from phi.assistant import Assistant
from phi.llm.mistral import Mistral

assistant = Assistant(
    llm=Mistral(model="mistral-large-latest"),
    description="You help people with their health and fitness goals.",
)
assistant.print_response("Share a quick healthy breakfast recipe.", markdown=True)

Params

name
str
default: "Mistral"

The name identifier for this Mistral LLM instance.

model
str
default: "mistral-large-latest"

The specific model ID used for generating responses.

temperature
float

The sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.

max_tokens
int
default: "1024"

The maximum number of tokens to generate in the response.

top_p
float

The nucleus sampling parameter. The model considers the results of the tokens with top_p probability mass.

random_seed
int

The seed for random number generation to ensure reproducibility of results.

safe_mode
bool

Enable safe mode to filter potentially harmful or inappropriate content.

safe_prompt
bool

Enable safe prompt mode, which injects a safety prompt before the conversation to guard against harmful or inappropriate content.

response_format
Union[Dict[str, Any], ChatCompletionResponseFormat]

The format of the response, either as a dictionary or as a ChatCompletionResponseFormat object.

api_key
str

The API key for authenticating requests to the service.

endpoint
str

The API endpoint URL for making requests to the service.

max_retries
int

The maximum number of retry attempts for failed requests.

timeout
int

The timeout duration for requests, specified in seconds.

mistral_client
MistralClient

A custom MistralClient instance to use for making API requests.
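
As an illustration, the generation parameters above can be combined when constructing the model. A minimal sketch (the parameter values are arbitrary examples, not recommendations):

from phi.assistant import Assistant
from phi.llm.mistral import Mistral

assistant = Assistant(
    llm=Mistral(
        model="mistral-large-latest",
        temperature=0.3,   # lower values give more focused, deterministic output
        max_tokens=512,    # cap the length of the generated response
        top_p=0.9,         # nucleus sampling threshold
        random_seed=42,    # make results reproducible
        safe_prompt=True,  # enable the safety prompt
    ),
    description="You help people with their health and fitness goals.",
)
assistant.print_response("Share a quick healthy breakfast recipe.", markdown=True)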