Class: LLM::Provider (Abstract)

Inherits: Object

Defined in: lib/llm/provider.rb

Overview

This class is abstract.

Note: This class is not meant to be instantiated directly. Instead, use one of the subclasses that implement the methods defined here.

The Provider class is the abstract base class for LLM (Large Language Model) providers.

See Also:

  • OpenAI
  • Anthropic
  • Gemini
  • Ollama

Direct Known Subclasses

Anthropic, Gemini, Ollama, OpenAI, VoyageAI

Instance Method Summary

Constructor Details

#initialize(secret, host:, port: 443, timeout: 60, ssl: true) ⇒ Provider

Returns a new instance of Provider.

Parameters:

  • secret (String)

    The secret key for authentication

  • host (String)

    The host address of the LLM provider

  • port (Integer) (defaults to: 443)

    The port number

  • timeout (Integer) (defaults to: 60)

    The number of seconds to wait for a response

  • ssl (Boolean) (defaults to: true)

    Whether to use SSL for the connection



# File 'lib/llm/provider.rb', line 29

def initialize(secret, host:, port: 443, timeout: 60, ssl: true)
  @secret = secret
  @http = Net::HTTP.new(host, port).tap do |http|
    http.use_ssl = ssl
    http.read_timeout = timeout
  end
end
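
Because the class is abstract, an instance is normally obtained through a concrete subclass. A minimal sketch, assuming the LLM.openai module-level constructor used in the #complete example below:

llm = LLM.openai(ENV["KEY"])
llm.class #=> LLM::OpenAI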

Instance Method Details

#assistant_role ⇒ String

Returns the role of the assistant in the conversation. Usually “assistant” or “model”

Returns:

  • (String)

    Returns the role of the assistant in the conversation. Usually “assistant” or “model”

Raises:

  • (NotImplementedError)


# File 'lib/llm/provider.rb', line 185

def assistant_role
  raise NotImplementedError
end
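
A sketch of how a hypothetical subclass might satisfy this contract; MyProvider is illustrative and not part of the library:

class MyProvider < LLM::Provider
  # OpenAI-compatible providers use "assistant"; Gemini uses "model"
  def assistant_role
    "assistant"
  end
end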

#audio ⇒ LLM::OpenAI::Audio

Returns an interface to the audio API

Returns:

  • (LLM::OpenAI::Audio)

Raises:

  • (NotImplementedError)


# File 'lib/llm/provider.rb', line 170

def audio
  raise NotImplementedError
end

#chat(prompt, role = :user, model: nil, **params) ⇒ LLM::Chat

Note:

This method creates a lazy version of an LLM::Chat object.

Starts a new lazy chat powered by the chat completions API

Parameters:

  • prompt (String)

    The input prompt to be completed

  • role (Symbol) (defaults to: :user)

    The role of the prompt (e.g. :user, :system)

  • model (String) (defaults to: nil)

    The model to use for the completion

  • params (Hash)

    Other completion parameters to maintain throughout a chat

Returns:

  • (LLM::Chat)

Raises:

  • (NotImplementedError)

    When the method is not implemented by a subclass



# File 'lib/llm/provider.rb', line 97

def chat(prompt, role = :user, model: nil, **params)
  LLM::Chat.new(self, params).lazy.chat(prompt, role)
end
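
A usage sketch. It assumes LLM::Chat#chat returns the chat object (so calls can be chained) and that #messages exposes the conversation; neither is documented on this page:

llm = LLM.openai(ENV["KEY"])
bot = llm.chat("Your answers should be short and concise", :system)
bot.chat("What is the answer to 5 + 2 ?")
bot.messages.each { |message| print "[#{message.role}] ", message.content, "\n" }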

#chat!(prompt, role = :user, model: nil, **params) ⇒ LLM::Chat

Note:

This method creates a non-lazy version of an LLM::Chat object.

Starts a new chat powered by the chat completions API

Parameters:

  • prompt (String)

    The input prompt to be completed

  • role (Symbol) (defaults to: :user)

    The role of the prompt (e.g. :user, :system)

  • model (String) (defaults to: nil)

    The model to use for the completion

  • params (Hash)

    Other completion parameters to maintain throughout a chat

Returns:

  • (LLM::Chat)

Raises:

  • (NotImplementedError)

    When the method is not implemented by a subclass



# File 'lib/llm/provider.rb', line 113

def chat!(prompt, role = :user, model: nil, **params)
  LLM::Chat.new(self, params).chat(prompt, role)
end
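
The non-lazy variant is called the same way; a sketch under the same assumptions as the #chat example above:

llm = LLM.openai(ENV["KEY"])
bot = llm.chat!("Your answers should be short and concise", :system)
bot.chat("What is the answer to 5 + 2 ?")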

#complete(prompt, role = :user, model:, **params) ⇒ LLM::Response::Completion

Provides an interface to the chat completions API

Examples:

llm = LLM.openai(ENV["KEY"])
messages = [
  {role: "system", content: "Your task is to answer all of my questions"},
  {role: "system", content: "Your answers should be short and concise"},
]
res = llm.complete("Hello. What is the answer to 5 + 2 ?", :user, messages:)
print "[#{res.choices[0].role}]", res.choices[0].content, "\n"

Parameters:

  • prompt (String)

    The input prompt to be completed

  • role (Symbol) (defaults to: :user)

    The role of the prompt (e.g. :user, :system)

  • model (String)

    The model to use for the completion

  • params (Hash)

    Other completion parameters

Returns:

  • (LLM::Response::Completion)

Raises:

  • (NotImplementedError)

    When the method is not implemented by a subclass



# File 'lib/llm/provider.rb', line 81

def complete(prompt, role = :user, model:, **params)
  raise NotImplementedError
end

#embed(input, model:, **params) ⇒ LLM::Response::Embedding

Provides an embedding

Parameters:

  • input (String, Array<String>)

    The input to embed

  • model (String)

    The embedding model to use

  • params (Hash)

    Other embedding parameters

Returns:

  • (LLM::Response::Embedding)

Raises:

  • (NotImplementedError)

    When the method is not implemented by a subclass



# File 'lib/llm/provider.rb', line 56

def embed(input, model:, **params)
  raise NotImplementedError
end
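
A usage sketch; the model identifier is illustrative, and valid values differ per provider:

llm = LLM.openai(ENV["KEY"])
res = llm.embed("The quick brown fox", model: "text-embedding-3-small")
res.class #=> LLM::Response::Embedding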

#files ⇒ LLM::OpenAI::Files

Returns an interface to the files API

Returns:

  • (LLM::OpenAI::Files)

Raises:

  • (NotImplementedError)


# File 'lib/llm/provider.rb', line 177

def files
  raise NotImplementedError
end

#images ⇒ LLM::OpenAI::Images, LLM::Gemini::Images

Returns an interface to the images API

Returns:

  • (LLM::OpenAI::Images, LLM::Gemini::Images)

Raises:

  • (NotImplementedError)


# File 'lib/llm/provider.rb', line 163

def images
  raise NotImplementedError
end

#inspect ⇒ String

Note:

The secret key is redacted in inspect for security reasons

Returns an inspection of the provider object

Returns:

  • (String)


# File 'lib/llm/provider.rb', line 41

def inspect
  "#<#{self.class.name}:0x#{object_id.to_s(16)} @secret=[REDACTED] @http=#{@http.inspect}>"
end
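
Given the implementation above, the output looks roughly like this (class name, object id, and host vary):

llm = LLM.openai(ENV["KEY"])
llm.inspect
#=> "#<LLM::OpenAI:0x... @secret=[REDACTED] @http=#<Net::HTTP ...>>"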

#models ⇒ Hash<String, LLM::Model>

Returns a hash of available models

Returns:

  • (Hash<String, LLM::Model>)

    Returns a hash of available models

Raises:

  • (NotImplementedError)


# File 'lib/llm/provider.rb', line 192

def models
  raise NotImplementedError
end
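
A usage sketch based on the declared return type, where the keys are model identifiers:

llm = LLM.openai(ENV["KEY"])
llm.models.each do |id, model|
  puts id # e.g. "gpt-4o-mini"; identifiers vary by provider
end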

#respond(prompt, role = :user, model: nil, **params) ⇒ LLM::Chat

Note:

This method creates a lazy variant of an LLM::Chat object.

Starts a new lazy chat powered by the responses API

Parameters:

  • prompt (String)

    The input prompt to be completed

  • role (Symbol) (defaults to: :user)

    The role of the prompt (e.g. :user, :system)

  • model (String) (defaults to: nil)

    The model to use for the completion

  • params (Hash)

    Other completion parameters to maintain throughout a chat

Returns:

  • (LLM::Chat)

Raises:

  • (NotImplementedError)

    When the method is not implemented by a subclass



# File 'lib/llm/provider.rb', line 129

def respond(prompt, role = :user, model: nil, **params)
  LLM::Chat.new(self, params).lazy.respond(prompt, role)
end
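
A usage sketch, analogous to the #chat example above but backed by the responses API (see #responses below); it assumes LLM::Chat#respond chains like #chat:

llm = LLM.openai(ENV["KEY"])
bot = llm.respond("Your answers should be short and concise", :system)
bot.respond("What is the answer to 5 + 2 ?")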

#respond!(prompt, role = :user, model: nil, **params) ⇒ LLM::Chat

Note:

This method creates a non-lazy variant of an LLM::Chat object.

Starts a new chat powered by the responses API

Parameters:

  • prompt (String)

    The input prompt to be completed

  • role (Symbol) (defaults to: :user)

    The role of the prompt (e.g. :user, :system)

  • model (String) (defaults to: nil)

    The model to use for the completion

  • params (Hash)

    Other completion parameters to maintain throughout a chat

Returns:

  • (LLM::Chat)

Raises:

  • (NotImplementedError)

    When the method is not implemented by a subclass



# File 'lib/llm/provider.rb', line 145

def respond!(prompt, role = :user, model: nil, **params)
  LLM::Chat.new(self, params).respond(prompt, role)
end

#responses ⇒ LLM::OpenAI::Responses

Note:

Compared to the chat completions API, the responses API can require less bandwidth on each turn, maintain state server-side, and produce faster responses.

Returns:

  • (LLM::OpenAI::Responses)

Raises:

  • (NotImplementedError)


# File 'lib/llm/provider.rb', line 156

def responses
  raise NotImplementedError
end