Ollama
Ollama is a fantastic tool for running LLMs locally. Install Ollama and start a model with the `ollama run` command.

After you have the local model running, use the `Ollama` LLM class to access it.
Usage
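The example below is a minimal sketch. The import path is an assumption (adjust it to your installation); the constructor arguments correspond to the parameters documented in the Params table.

```python
# Minimal sketch, assuming the Ollama LLM class is importable like this.
from phi.llm.ollama import Ollama  # assumed import path; adjust for your setup

# Point the LLM at a model that is already running locally (started with `ollama run <model>`).
llm = Ollama(
    model="llama3",                  # name of the local model (example value)
    host="http://localhost:11434",   # default Ollama host URL
    options={"temperature": 0.1},    # extra request options, as documented below
)
```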
Params
| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `model` | `str` | - | Model name |
| `host` | `str` | - | Host URL |
| `format` | `str` | `""` | Response format: `""` or `"json"` |
| `timeout` | `Any` | `None` | Timeout for requests |
| `options` | `Dict[str, Any]` | `None` | Dictionary of options to send with the request, example: `{temperature: 0.1, stop: ['\n']}` |
| `keep_alive` | `Union[float, str]` | `None` | - |
| `client_kwargs` | `Dict[str, Any]` | `None` | Additional `{key: value}` dict sent when initializing the `Ollama()` client |
| `ollama_client` | `ollama.Client()` | `None` | Provide your own `ollama.Client()` |