Class: LLM::Ollama

Inherits:
Provider show all
Includes:
Format
Defined in:
lib/llm/providers/ollama.rb,
lib/llm/providers/ollama/format.rb,
lib/llm/providers/ollama/error_handler.rb,
lib/llm/providers/ollama/response_parser.rb

Overview

The Ollama class implements a provider for [Ollama](https://ollama.ai).

Defined Under Namespace

Modules: Format, ResponseParser
Classes: ErrorHandler

Constant Summary

HOST =
"localhost"

Instance Method Summary

Methods included from Format

#format

Methods inherited from Provider

#chat, #chat!, #inspect

Methods included from HTTPClient

#request

Constructor Details

#initialize(secret) ⇒ Ollama

Returns a new instance of Ollama.

Parameters:

  • secret (String)

    The secret key for authentication



# File 'lib/llm/providers/ollama.rb', line 17

def initialize(secret, **)
  super(secret, host: HOST, port: 11434, ssl: false, **)
end
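Ollama runs locally and by default needs no API key, which is why the constructor hard-wires `host: "localhost"`, `port: 11434`, and `ssl: false`. A minimal stdlib-only sketch of the connection those defaults describe (it does not use the gem itself, and no request is sent until the connection is actually used):

```ruby
require "net/http"

# A local Ollama install listens on localhost:11434 without TLS,
# so the secret passed to LLM::Ollama.new can be nil.
http = Net::HTTP.new("localhost", 11_434)
http.use_ssl = false

http.address # "localhost"
http.port    # 11434
```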

Instance Method Details

#assistant_role ⇒ String

Returns the role of the assistant in the conversation. Usually “assistant” or “model”

Returns:

  • (String)

    Returns the role of the assistant in the conversation. Usually “assistant” or “model”



# File 'lib/llm/providers/ollama.rb', line 48

def assistant_role
  "assistant"
end

#complete(prompt, role = :user, **params) ⇒ LLM::Response::Completion

Parameters:

  • prompt (String)

    The input prompt to be completed

  • role (Symbol) (defaults to: :user)

    The role of the prompt (e.g. :user, :system)

Returns:

  • (LLM::Response::Completion)



# File 'lib/llm/providers/ollama.rb', line 37

def complete(prompt, role = :user, **params)
  params   = {model: "llama3.2", stream: false}.merge!(params)
  req      = Net::HTTP::Post.new("/api/chat", headers)
  messages = [*(params.delete(:messages) || []), LLM::Message.new(role, prompt)]
  req.body = JSON.dump({messages: messages.map(&:to_h)}.merge!(params))
  res      = request(@http, req)
  Response::Completion.new(res).extend(response_parser)
end
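The method merges the defaults (`model: "llama3.2"`, `stream: false`) into the caller's params, appends the new message to any prior `:messages`, and serializes the result as the JSON body of a POST to `/api/chat`. A stdlib-only sketch of that payload construction, using a plain Struct as a stand-in for `LLM::Message` (the stand-in is an assumption, not the gem's class):

```ruby
require "json"

# Hypothetical stand-in for LLM::Message; only #to_h matters here.
Message = Struct.new(:role, :content) do
  def to_h
    {role: role, content: content}
  end
end

params   = {model: "llama3.2", stream: false}
history  = [Message.new(:system, "You are terse.")]
messages = [*history, Message.new(:user, "Hello!")]
body     = JSON.dump({messages: messages.map(&:to_h)}.merge(params))

payload = JSON.parse(body)
payload["model"]  # "llama3.2"
payload["stream"] # false
```

Because caller params are merged last, passing `model:` or `stream:` overrides the defaults.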

#embed(input, **params) ⇒ LLM::Response::Embedding

Parameters:

  • input (String, Array<String>)

    The input to embed

Returns:

  • (LLM::Response::Embedding)



# File 'lib/llm/providers/ollama.rb', line 24

def embed(input, **params)
  params   = {model: "llama3.2"}.merge!(params)
  req      = Net::HTTP::Post.new("/v1/embeddings", headers)
  req.body = JSON.dump({input:}.merge!(params))
  res      = request(@http, req)
  Response::Embedding.new(res).extend(response_parser)
end
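Note that `embed` goes through `/v1/embeddings`, Ollama's OpenAI-compatible route, rather than `/api/chat`. The body is just the input (a string or array of strings) merged with the model default. A small stdlib-only sketch of that payload:

```ruby
require "json"

input  = ["first sentence", "second sentence"]
params = {model: "llama3.2"}
# {input:} is Ruby 3.1+ hash shorthand for {input: input}
body   = JSON.dump({input: input}.merge(params))

payload = JSON.parse(body)
payload["model"] # "llama3.2"
payload["input"] # the strings to embed
```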

#models ⇒ Hash<String, LLM::Model>

Returns a hash of available models

Returns:

  • (Hash<String, LLM::Model>)

    Returns a hash of available models



# File 'lib/llm/providers/ollama.rb', line 54

def models
  @models ||= load_models!("ollama")
end
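The `@models ||=` memoization means the model list is loaded once per provider instance. As a sketch of the kind of name-keyed hash this returns, here is a stdlib-only example that indexes a sample response in the shape of Ollama's `/api/tags` endpoint (the model entries below are illustrative, not queried from a live server):

```ruby
require "json"

# Sample JSON in the shape of Ollama's /api/tags response (illustrative data).
raw = <<~JSON
  {"models": [
    {"name": "llama3.2:latest", "size": 2019393189},
    {"name": "mistral:latest",  "size": 4113301824}
  ]}
JSON

models = JSON.parse(raw)["models"].to_h { |m| [m["name"], m] }
models.keys # ["llama3.2:latest", "mistral:latest"]
```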