# Run Large Language Models locally with Ollama

Ollama is a fantastic tool for running models locally. Install [Ollama](https://ollama.com), then pull and run a model using `ollama run llama3.2` (the default model ID used below).

After you have the local model running, use the `Ollama` model class to access it.

## Example
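
A minimal sketch of the usage this page describes, assuming the `Ollama` class belongs to an agent framework with `agno`-style import paths (`agno.agent` and `agno.models.ollama` are assumptions, not confirmed by this page):

```python
from agno.agent import Agent
from agno.models.ollama import Ollama

# Build an agent backed by the locally running llama3.2 model.
agent = Agent(
    model=Ollama(id="llama3.2"),
    markdown=True,
)

# Print the model's response to the terminal.
agent.print_response("Share a two-sentence horror story.")
```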

## Params

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `id` | `str` | `"llama3.2"` | The name of the model to be used. |
| `name` | `str` | `"Ollama"` | The name identifier for the agent. |
| `provider` | `str` | `"Ollama {id}"` | The provider of the model, combining "Ollama" with the model ID. |
| `format` | `Optional[str]` | - | The response format: `None` for the default, or a specific format such as `"json"`. |
| `options` | `Optional[Any]` | - | Additional options to include with the request, e.g. temperature or stop sequences. |
| `keep_alive` | `Optional[Union[float, str]]` | - | The keep-alive duration for maintaining persistent connections, specified in seconds or as a duration string. |
| `request_params` | `Optional[Dict[str, Any]]` | - | Additional parameters to include in the request. |
| `host` | `Optional[str]` | - | The host URL for making API requests to the Ollama service. |
| `timeout` | `Optional[Any]` | - | The timeout duration for requests, specified in seconds. |
| `client_params` | `Optional[Dict[str, Any]]` | - | Additional parameters for client configuration. |
| `client` | `Optional[OllamaClient]` | - | An instance of `OllamaClient` provided for making API requests. |
| `async_client` | `Optional[AsyncOllamaClient]` | - | An instance of `AsyncOllamaClient` for making asynchronous API requests. |
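
These parameters are passed to the `Ollama` constructor. A configuration sketch using only parameters from the table above (the values are illustrative, the import path is assumed as in the example, and `http://localhost:11434` is Ollama's standard default address):

```python
from agno.models.ollama import Ollama  # import path assumed, as above

model = Ollama(
    id="llama3.2",                  # which local model to use
    host="http://localhost:11434",  # URL of the local Ollama server
    options={"temperature": 0.7},   # extra request options, e.g. sampling settings
    keep_alive="5m",                # keep the model loaded between requests
    timeout=30,                     # request timeout in seconds
)
```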