Class: LLM::Provider (Abstract)

Inherits: Object
Includes:
HTTPClient
Defined in:
lib/llm/provider.rb

Overview

This class is abstract.

Note: This class is not meant to be instantiated directly. Instead, use one of the subclasses that implement the methods defined here.

LLM::Provider is the abstract base class for LLM (Language Model) providers.

See Also:

  • OpenAI
  • Anthropic
  • Gemini
  • Ollama

Direct Known Subclasses

Anthropic, Gemini, Ollama, OpenAI, VoyageAI

Instance Method Summary

Methods included from HTTPClient

#request

Constructor Details

#initialize(secret, host:, port: 443, timeout: 60, ssl: true) ⇒ Provider

Returns a new instance of Provider.

Parameters:

  • secret (String)

    The secret key for authentication

  • host (String)

    The host address of the LLM provider

  • port (Integer) (defaults to: 443)

    The port number

  • timeout (Integer) (defaults to: 60)

    The number of seconds to wait for a response

  • ssl (Boolean) (defaults to: true)

    Whether to use SSL (HTTPS) for the connection



# File 'lib/llm/provider.rb', line 30

def initialize(secret, host:, port: 443, timeout: 60, ssl: true)
  @secret = secret
  @http = Net::HTTP.new(host, port).tap do |http|
    http.use_ssl = ssl
    http.read_timeout = timeout
  end
end
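
Since the class is abstract, instances are created through a subclass. A minimal sketch (LLM.openai appears in the #complete example below; forwarding of the keyword arguments is assumed, not documented here):

require "llm"

llm = LLM.openai(ENV["KEY"]) # returns a subclass of LLM::Provider
# The keyword arguments mirror #initialize, e.g. (illustrative):
# LLM.openai(ENV["KEY"], host: "api.openai.com", port: 443, ssl: true)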

Instance Method Details

#assistant_role ⇒ String

Returns the role of the assistant in the conversation. Usually “assistant” or “model”

Returns:

  • (String)

    Returns the role of the assistant in the conversation. Usually “assistant” or “model”

Raises:

  • (NotImplementedError)


# File 'lib/llm/provider.rb', line 186

def assistant_role
  raise NotImplementedError
end
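
An illustrative sketch (LLM.gemini is assumed by analogy with the LLM.openai constructor used elsewhere on this page; the return values follow the note above):

LLM.openai(ENV["KEY"]).assistant_role #=> "assistant"
LLM.gemini(ENV["KEY"]).assistant_role #=> "model"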

#audio ⇒ LLM::OpenAI::Audio

Returns an interface to the audio API

Returns:

  • (LLM::OpenAI::Audio)

Raises:

  • (NotImplementedError)


# File 'lib/llm/provider.rb', line 171

def audio
  raise NotImplementedError
end
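
On LLM::Provider itself this raises NotImplementedError; a subclass returns its provider-specific interface. An illustrative sketch:

llm = LLM.openai(ENV["KEY"])
llm.audio # => an LLM::OpenAI::Audio instance

The same pattern applies to #files, #images, and #responses below.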

#chat(prompt, role = :user, model: nil, **params) ⇒ LLM::Chat

Note:

This method creates a lazy version of a LLM::Chat object.

Starts a new lazy chat powered by the chat completions API

Parameters:

  • prompt (String)

    The input prompt to be completed

  • role (Symbol) (defaults to: :user)

    The role of the prompt (e.g. :user, :system)

  • model (String) (defaults to: nil)

    The model to use for the completion

  • params (Hash)

    Other completion parameters to maintain throughout a chat

Returns:

  • (LLM::Chat)

Raises:

  • (NotImplementedError)

    When the method is not implemented by a subclass



# File 'lib/llm/provider.rb', line 98

def chat(prompt, role = :user, model: nil, **params)
  LLM::Chat.new(self, params).lazy.chat(prompt, role)
end
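
A usage sketch (hedged: exactly when the lazy chat flushes its buffered messages is not documented here, so the closing comment is an assumption based on the note above):

llm = LLM.openai(ENV["KEY"])
bot = llm.chat("Answer concisely", :system)
bot.chat("What is 5 + 2?", :user)
# Being lazy, the request is presumably deferred until the
# conversation is read, rather than sent per #chat call.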

#chat!(prompt, role = :user, model: nil, **params) ⇒ LLM::Chat

Note:

This method creates a non-lazy version of a LLM::Chat object.

Starts a new chat powered by the chat completions API

Parameters:

  • prompt (String)

    The input prompt to be completed

  • role (Symbol) (defaults to: :user)

    The role of the prompt (e.g. :user, :system)

  • model (String) (defaults to: nil)

    The model to use for the completion

  • params (Hash)

    Other completion parameters to maintain throughout a chat

Returns:

  • (LLM::Chat)

Raises:

  • (NotImplementedError)

    When the method is not implemented by a subclass



# File 'lib/llm/provider.rb', line 114

def chat!(prompt, role = :user, model: nil, **params)
  LLM::Chat.new(self, params).chat(prompt, role)
end
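
By contrast, a sketch of the non-lazy variant, which is otherwise called like #chat:

llm = LLM.openai(ENV["KEY"])
bot = llm.chat!("What is 5 + 2?", :user)
# The non-lazy variant sends the request eagerly.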

#complete(prompt, role = :user, model:, **params) ⇒ LLM::Response::Completion

Provides an interface to the chat completions API

Examples:

llm = LLM.openai(ENV["KEY"])
messages = [
  {role: "system", content: "Your task is to answer all of my questions"},
  {role: "system", content: "Your answers should be short and concise"},
]
res = llm.complete("Hello. What is the answer to 5 + 2 ?", :user, messages:)
print "[#{res.choices[0].role}]", res.choices[0].content, "\n"

Parameters:

  • prompt (String)

    The input prompt to be completed

  • role (Symbol) (defaults to: :user)

    The role of the prompt (e.g. :user, :system)

  • model (String)

    The model to use for the completion

  • params (Hash)

    Other completion parameters

Returns:

  • (LLM::Response::Completion)

Raises:

  • (NotImplementedError)

    When the method is not implemented by a subclass



# File 'lib/llm/provider.rb', line 82

def complete(prompt, role = :user, model:, **params)
  raise NotImplementedError
end

#embed(input, model:, **params) ⇒ LLM::Response::Embedding

Provides an embedding

Parameters:

  • input (String, Array<String>)

    The input to embed

  • model (String)

    The embedding model to use

  • params (Hash)

    Other embedding parameters

Returns:

  • (LLM::Response::Embedding)

Raises:

  • (NotImplementedError)

    When the method is not implemented by a subclass



# File 'lib/llm/provider.rb', line 57

def embed(input, model:, **params)
  raise NotImplementedError
end
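
A sketch (the model name is illustrative, not a documented default; each provider accepts its own embedding models):

llm = LLM.openai(ENV["KEY"])
res = llm.embed("Hello, world", model: "text-embedding-3-small")
res # => an LLM::Response::Embedding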

#files ⇒ LLM::OpenAI::Files

Returns an interface to the files API

Returns:

  • (LLM::OpenAI::Files)

Raises:

  • (NotImplementedError)


# File 'lib/llm/provider.rb', line 178

def files
  raise NotImplementedError
end

#images ⇒ LLM::OpenAI::Images, LLM::Gemini::Images

Returns an interface to the images API

Returns:

  • (LLM::OpenAI::Images, LLM::Gemini::Images)

Raises:

  • (NotImplementedError)


# File 'lib/llm/provider.rb', line 164

def images
  raise NotImplementedError
end

#inspect ⇒ String

Note:

The secret key is redacted in inspect for security reasons

Returns an inspection of the provider object

Returns:

  • (String)


# File 'lib/llm/provider.rb', line 42

def inspect
  "#<#{self.class.name}:0x#{object_id.to_s(16)} @secret=[REDACTED] @http=#{@http.inspect}>"
end
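
For example (object ids and Net::HTTP details will vary):

llm = LLM.openai(ENV["KEY"])
llm.inspect
# => "#<LLM::OpenAI:0x00007f9c... @secret=[REDACTED] @http=#<Net::HTTP api.openai.com:443 open=false>>"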

#models ⇒ Hash<String, LLM::Model>

Returns a hash of available models

Returns:

  • (Hash<String, LLM::Model>)

    Returns a hash of available models

Raises:

  • (NotImplementedError)


# File 'lib/llm/provider.rb', line 193

def models
  raise NotImplementedError
end
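
A sketch (the model id shown is illustrative):

llm = LLM.openai(ENV["KEY"])
llm.models.each do |id, model|
  puts id # e.g. "gpt-4o-mini"
end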

#respond(prompt, role = :user, model: nil, **params) ⇒ LLM::Chat

Note:

This method creates a lazy variant of a LLM::Chat object.

Starts a new lazy chat powered by the responses API

Parameters:

  • prompt (String)

    The input prompt to be completed

  • role (Symbol) (defaults to: :user)

    The role of the prompt (e.g. :user, :system)

  • model (String) (defaults to: nil)

    The model to use for the completion

  • params (Hash)

    Other completion parameters to maintain throughout a chat

Returns:

  • (LLM::Chat)

Raises:

  • (NotImplementedError)

    When the method is not implemented by a subclass



# File 'lib/llm/provider.rb', line 130

def respond(prompt, role = :user, model: nil, **params)
  LLM::Chat.new(self, params).lazy.respond(prompt, role)
end
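
A usage sketch, analogous to the #chat example above but backed by the responses API (the same hedges apply):

llm = LLM.openai(ENV["KEY"])
bot = llm.respond("Answer concisely", :system)
bot.respond("What is 5 + 2?", :user)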

#respond!(prompt, role = :user, model: nil, **params) ⇒ LLM::Chat

Note:

This method creates a non-lazy variant of a LLM::Chat object.

Starts a new chat powered by the responses API

Parameters:

  • prompt (String)

    The input prompt to be completed

  • role (Symbol) (defaults to: :user)

    The role of the prompt (e.g. :user, :system)

  • model (String) (defaults to: nil)

    The model to use for the completion

  • params (Hash)

    Other completion parameters to maintain throughout a chat

Returns:

  • (LLM::Chat)

Raises:

  • (NotImplementedError)

    When the method is not implemented by a subclass



# File 'lib/llm/provider.rb', line 146

def respond!(prompt, role = :user, model: nil, **params)
  LLM::Chat.new(self, params).respond(prompt, role)
end

#responses ⇒ LLM::OpenAI::Responses

Note:

Compared to the chat completions API, the responses API can require less bandwidth on each turn, maintain state server-side, and produce faster responses.

Returns:

  • (LLM::OpenAI::Responses)

Raises:

  • (NotImplementedError)


# File 'lib/llm/provider.rb', line 157

def responses
  raise NotImplementedError
end
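
An illustrative sketch (that this is the low-level interface behind #respond is an assumption, not stated on this page):

llm = LLM.openai(ENV["KEY"])
llm.responses # => an LLM::OpenAI::Responses instance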