Class: LLM::Provider (abstract)

Inherits:
Object
Defined in:
lib/llm/provider.rb

Overview

This class is abstract.

The Provider class is an abstract base class for LLM (Large Language Model) providers.

Direct Known Subclasses

Anthropic, Gemini, Ollama, OpenAI, VoyageAI

Instance Method Summary

Constructor Details

#initialize(secret, host:, port: 443, timeout: 60, ssl: true) ⇒ Provider

Returns a new instance of Provider.

Parameters:

  • secret (String)

    The secret key for authentication

  • host (String)

    The host address of the LLM provider

  • port (Integer) (defaults to: 443)

    The port number

  • timeout (Integer) (defaults to: 60)

    The number of seconds to wait for a response

  • ssl (Boolean) (defaults to: true)

    Whether to use SSL for the connection



# File 'lib/llm/provider.rb', line 20

def initialize(secret, host:, port: 443, timeout: 60, ssl: true)
  @secret = secret
  @http = Net::HTTP.new(host, port).tap do |http|
    http.use_ssl = ssl
    http.read_timeout = timeout
  end
end
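
As an illustration, here is a minimal sketch of obtaining a provider instance. LLM::Provider is abstract, so construction goes through a concrete subclass; the sketch uses the LLM.openai constructor that appears in the #complete example below, and assumes the library is loaded via require "llm".

require "llm"

# The OpenAI subclass supplies the host; the remaining keyword
# arguments fall back to the defaults above (port: 443, ssl: true).
llm = LLM.openai(ENV["KEY"])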

Instance Method Details

#assistant_role ⇒ String

Returns the role of the assistant in the conversation. Usually “assistant” or “model”

Returns:

  • (String)

    Returns the role of the assistant in the conversation. Usually “assistant” or “model”

Raises:

  • (NotImplementedError)


# File 'lib/llm/provider.rb', line 189

def assistant_role
  raise NotImplementedError
end

#audio ⇒ LLM::OpenAI::Audio

Returns an interface to the audio API

Returns:

  • (LLM::OpenAI::Audio)

    Returns an interface to the audio API

Raises:

  • (NotImplementedError)


# File 'lib/llm/provider.rb', line 167

def audio
  raise NotImplementedError
end

#chat(prompt, role = :user, model: default_model, schema: nil, **params) ⇒ LLM::Chat

Note:

This method creates a lazy version of an LLM::Chat object.

Starts a new lazy chat powered by the chat completions API

Parameters:

  • prompt (String)

    The input prompt to be completed

  • role (Symbol) (defaults to: :user)

    The role of the prompt (e.g. :user, :system)

  • model (String) (defaults to: default_model)

    The model to use for the completion

  • schema (#to_json, nil) (defaults to: nil)

    The schema that describes the expected response format

  • params (Hash)

    Other completion parameters to maintain throughout a chat

Returns:

  • (LLM::Chat)

Raises:

  • (NotImplementedError)

    When the method is not implemented by a subclass



# File 'lib/llm/provider.rb', line 91

def chat(prompt, role = :user, model: default_model, schema: nil, **params)
  LLM::Chat.new(self, **params.merge(model:, schema:)).lazy.chat(prompt, role)
end
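
A hedged usage sketch: the return value is an LLM::Chat, and the example assumes that further turns go through LLM::Chat#chat and that the conversation is readable through an LLM::Chat#messages collection, neither of which is documented on this page.

llm = LLM.openai(ENV["KEY"])
bot = llm.chat("Your answers should be short and concise", :system)
bot.chat("What is 5 + 2?")
# Because the chat is lazy, the request is deferred until the
# messages are read (assumed behavior of the lazy LLM::Chat).
bot.messages.each { |message| print "[#{message.role}] ", message.content, "\n" }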

#chat!(prompt, role = :user, model: default_model, schema: nil, **params) ⇒ LLM::Chat

Note:

This method creates a non-lazy version of an LLM::Chat object.

Starts a new chat powered by the chat completions API

Parameters:

  • prompt (String)

    The input prompt to be completed

  • role (Symbol) (defaults to: :user)

    The role of the prompt (e.g. :user, :system)

  • model (String) (defaults to: default_model)

    The model to use for the completion

  • schema (#to_json, nil) (defaults to: nil)

    The schema that describes the expected response format

  • params (Hash)

    Other completion parameters to maintain throughout a chat

Returns:

  • (LLM::Chat)

Raises:

  • (NotImplementedError)

    When the method is not implemented by a subclass



# File 'lib/llm/provider.rb', line 108

def chat!(prompt, role = :user, model: default_model, schema: nil, **params)
  LLM::Chat.new(self, **params.merge(model:, schema:)).chat(prompt, role)
end
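
A short contrasting sketch: per the note above, this is the non-lazy variant, so a request is presumably issued as part of each turn rather than deferred. That is an inference from the note, not behavior documented here.

llm = LLM.openai(ENV["KEY"])
bot = llm.chat!("What is 5 + 2?")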

#complete(prompt, role = :user, model: default_model, schema: nil, **params) ⇒ LLM::Response::Completion

Provides an interface to the chat completions API

Examples:

llm = LLM.openai(ENV["KEY"])
messages = [
  {role: "system", content: "Your task is to answer all of my questions"},
  {role: "system", content: "Your answers should be short and concise"},
]
res = llm.complete("Hello. What is the answer to 5 + 2 ?", :user, messages:)
print "[#{res.choices[0].role}]", res.choices[0].content, "\n"

Parameters:

  • prompt (String)

    The input prompt to be completed

  • role (Symbol) (defaults to: :user)

    The role of the prompt (e.g. :user, :system)

  • model (String) (defaults to: default_model)

    The model to use for the completion

  • schema (#to_json, nil) (defaults to: nil)

    The schema that describes the expected response format

  • params (Hash)

    Other completion parameters

Returns:

  • (LLM::Response::Completion)

Raises:

  • (NotImplementedError)

    When the method is not implemented by a subclass



# File 'lib/llm/provider.rb', line 74

def complete(prompt, role = :user, model: default_model, schema: nil, **params)
  raise NotImplementedError
end

#default_model ⇒ String

Returns the default model for chat completions

Returns:

  • (String)

    Returns the default model for chat completions

Raises:

  • (NotImplementedError)


# File 'lib/llm/provider.rb', line 196

def default_model
  raise NotImplementedError
end

#embed(input, model: nil, **params) ⇒ LLM::Response::Embedding

Provides an embedding

Parameters:

  • input (String, Array<String>)

    The input to embed

  • model (String) (defaults to: nil)

    The embedding model to use

  • params (Hash)

    Other embedding parameters

Returns:

  • (LLM::Response::Embedding)

Raises:

  • (NotImplementedError)

    When the method is not implemented by a subclass



# File 'lib/llm/provider.rb', line 47

def embed(input, model: nil, **params)
  raise NotImplementedError
end
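
A sketch grounded in the signature above: the input can be a single string or an array of strings, and the concrete return value is an LLM::Response::Embedding whose accessors are provider-specific and not documented on this page.

llm = LLM.openai(ENV["KEY"])
res = llm.embed(["hello", "world"])
print res.class, "\n" # LLM::Response::Embedding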

#files ⇒ LLM::OpenAI::Files

Returns an interface to the files API

Returns:

  • (LLM::OpenAI::Files)

    Returns an interface to the files API

Raises:

  • (NotImplementedError)


# File 'lib/llm/provider.rb', line 174

def files
  raise NotImplementedError
end

#images ⇒ LLM::OpenAI::Images, LLM::Gemini::Images

Returns an interface to the images API

Returns:

  • (LLM::OpenAI::Images, LLM::Gemini::Images)

    Returns an interface to the images API

Raises:

  • (NotImplementedError)


# File 'lib/llm/provider.rb', line 160

def images
  raise NotImplementedError
end

#inspect ⇒ String

Note:

The secret key is redacted in inspect for security reasons

Returns an inspection of the provider object

Returns:

  • (String)


# File 'lib/llm/provider.rb', line 32

def inspect
  "#<#{self.class.name}:0x#{object_id.to_s(16)} @secret=[REDACTED] @http=#{@http.inspect}>"
end

#models ⇒ LLM::OpenAI::Models

Returns an interface to the models API

Returns:

  • (LLM::OpenAI::Models)

    Returns an interface to the models API

Raises:

  • (NotImplementedError)


# File 'lib/llm/provider.rb', line 181

def models
  raise NotImplementedError
end

#respond(prompt, role = :user, model: default_model, schema: nil, **params) ⇒ LLM::Chat

Note:

This method creates a lazy variant of an LLM::Chat object.

Starts a new lazy chat powered by the responses API

Parameters:

  • prompt (String)

    The input prompt to be completed

  • role (Symbol) (defaults to: :user)

    The role of the prompt (e.g. :user, :system)

  • model (String) (defaults to: default_model)

    The model to use for the completion

  • schema (#to_json, nil) (defaults to: nil)

    The schema that describes the expected response format

  • params (Hash)

    Other completion parameters to maintain throughout a chat

Returns:

  • (LLM::Chat)

Raises:

  • (NotImplementedError)

    When the method is not implemented by a subclass



# File 'lib/llm/provider.rb', line 125

def respond(prompt, role = :user, model: default_model, schema: nil, **params)
  LLM::Chat.new(self, **params.merge(model:, schema:)).lazy.respond(prompt, role)
end
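
A hedged sketch mirroring the #chat example. Chaining later turns through LLM::Chat#respond is an assumption based on the implementation above, and the responses API is only available from providers that implement it (see #responses).

llm = LLM.openai(ENV["KEY"])
bot = llm.respond("Your answers should be short and concise", :system)
bot.respond("What is 5 + 2?")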

#respond!(prompt, role = :user, model: default_model, schema: nil, **params) ⇒ LLM::Chat

Note:

This method creates a non-lazy variant of an LLM::Chat object.

Starts a new chat powered by the responses API

Parameters:

  • prompt (String)

    The input prompt to be completed

  • role (Symbol) (defaults to: :user)

    The role of the prompt (e.g. :user, :system)

  • model (String) (defaults to: default_model)

    The model to use for the completion

  • schema (#to_json, nil) (defaults to: nil)

    The schema that describes the expected response format

  • params (Hash)

    Other completion parameters to maintain throughout a chat

Returns:

  • (LLM::Chat)

Raises:

  • (NotImplementedError)

    When the method is not implemented by a subclass



# File 'lib/llm/provider.rb', line 142

def respond!(prompt, role = :user, model: default_model, schema: nil, **params)
  LLM::Chat.new(self, **params.merge(model:, schema:)).respond(prompt, role)
end

#responses ⇒ LLM::OpenAI::Responses

Note:

Compared to the chat completions API, the responses API can require less bandwidth on each turn, maintain state server-side, and produce faster responses.

Returns an interface to the responses API

Returns:

  • (LLM::OpenAI::Responses)

Raises:

  • (NotImplementedError)


# File 'lib/llm/provider.rb', line 153

def responses
  raise NotImplementedError
end

#schema ⇒ JSON::Schema

Returns an object that can generate a JSON schema

Returns:

  • (JSON::Schema)

    Returns an object that can generate a JSON schema



# File 'lib/llm/provider.rb', line 203

def schema
  @schema ||= begin
    require_relative "../json/schema"
    JSON::Schema.new
  end
end
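
The schema: parameter accepted by #chat, #chat!, #respond, #respond! and #complete only requires an object that responds to #to_json, so a plain Hash serves as a stand-in in this sketch; the builder interface of the JSON::Schema object returned by #schema is not documented on this page.

require "json"
require "llm"

llm = LLM.openai(ENV["KEY"])

# A hand-written JSON schema; Hash#to_json satisfies the
# #to_json duck type expected by the schema: parameter.
schema = {
  type: "object",
  properties: {answer: {type: "integer"}},
  required: ["answer"]
}
bot = llm.chat("What is 5 + 2?", schema:)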