Ollama
Run Large Language Models locally with Ollama
Ollama is a fantastic tool for running models locally. Install Ollama and start a model, for example with `ollama run llama3.2`.
After you have the local model running, use the `Ollama` model class to access it.
Example
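A minimal sketch of driving a locally running Ollama model through an agent. The `Agent` wrapper and the `agno.agent` / `agno.models.ollama` import paths are assumptions based on the parameter table below; adjust them to match your installation.

```python
# Minimal sketch: use a local Ollama model through an Agent.
# Assumes `Agent` and `Ollama` are exposed at these import paths and that
# the model has already been pulled locally (e.g. `ollama run llama3.2`).
from agno.agent import Agent
from agno.models.ollama import Ollama

agent = Agent(model=Ollama(id="llama3.2"))

agent.print_response("Share a two-sentence horror story.")
```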
Params
Parameter | Type | Default | Description |
---|---|---|---|
id | str | "llama3.2" | The name of the model to be used. |
name | str | "Ollama" | The name identifier for the model. |
provider | str | "Ollama {id}" | The provider of the model, combining “Ollama” with the model ID. |
format | Optional[str] | - | The response format, either None for default or a specific format like “json”. |
options | Optional[Any] | - | Additional options to include with the request, e.g., temperature or stop sequences. |
keep_alive | Optional[Union[float, str]] | - | The keep-alive duration for maintaining persistent connections, specified in seconds or as a string. |
request_params | Optional[Dict[str, Any]] | - | Additional parameters to include in the request. |
host | Optional[str] | - | The host URL for making API requests to the Ollama service. |
timeout | Optional[Any] | - | The timeout duration for requests, which can be specified in seconds. |
client_params | Optional[Dict[str, Any]] | - | Additional parameters for client configuration. |
client | Optional[OllamaClient] | - | A pre-configured instance of OllamaClient to use for making API requests. |
async_client | Optional[AsyncOllamaClient] | - | An instance of AsyncOllamaClient for making asynchronous API requests. |
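As a hedged illustration of the connection-related parameters above, the sketch below points the model at a specific Ollama host and passes generation options. The concrete values (host URL, temperature, stop sequence, keep-alive duration, timeout) are placeholders, not recommended settings.

```python
# Sketch: configuring the Ollama model with the parameters documented above.
# Values are illustrative; http://localhost:11434 is Ollama's default host.
from agno.models.ollama import Ollama

model = Ollama(
    id="llama3.2",
    host="http://localhost:11434",                   # where the Ollama server is listening
    options={"temperature": 0.2, "stop": ["###"]},   # generation options passed to Ollama
    keep_alive="5m",                                 # keep the model loaded between requests
    timeout=60,                                      # request timeout in seconds
)
```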