Class: LLM::Provider Abstract
Overview
The Provider class is the abstract base class for LLM (Large Language Model) providers. It is not meant to be instantiated directly; instead, use one of the subclasses that implement the methods defined here.
Instance Method Summary
- #assistant_role ⇒ String
  Returns the role of the assistant in the conversation.
- #audio ⇒ LLM::OpenAI::Audio
  Returns an interface to the audio API.
- #chat(prompt, role = :user, model: nil, **params) ⇒ LLM::Chat
  Starts a new lazy chat powered by the chat completions API.
- #chat!(prompt, role = :user, model: nil, **params) ⇒ LLM::Chat
  Starts a new chat powered by the chat completions API.
- #complete(prompt, role = :user, model:, **params) ⇒ LLM::Response::Completion
  Provides an interface to the chat completions API.
- #embed(input, model:, **params) ⇒ LLM::Response::Embedding
  Provides an embedding.
- #files ⇒ LLM::OpenAI::Files
  Returns an interface to the files API.
- #images ⇒ LLM::OpenAI::Images, LLM::Gemini::Images
  Returns an interface to the images API.
- #initialize(secret, host:, port: 443, timeout: 60, ssl: true) ⇒ Provider (constructor)
  A new instance of Provider.
- #inspect ⇒ String
  Returns an inspection of the provider object.
- #models ⇒ Hash<String, LLM::Model>
  Returns a hash of available models.
- #respond(prompt, role = :user, model: nil, **params) ⇒ LLM::Chat
  Starts a new lazy chat powered by the responses API.
- #respond!(prompt, role = :user, model: nil, **params) ⇒ LLM::Chat
  Starts a new chat powered by the responses API.
- #responses ⇒ LLM::OpenAI::Responses
  Compared to the chat completions API, the responses API can require less bandwidth on each turn, maintain state server-side, and produce faster responses.
Methods included from HTTPClient
Constructor Details
#initialize(secret, host:, port: 443, timeout: 60, ssl: true) ⇒ Provider
Returns a new instance of Provider.
# File 'lib/llm/provider.rb', line 30

def initialize(secret, host:, port: 443, timeout: 60, ssl: true)
  @secret = secret
  @http = Net::HTTP.new(host, port).tap do |http|
    http.use_ssl = ssl
    http.read_timeout = timeout
  end
end
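For illustration, a hedged sketch of constructing a concrete subclass. The LLM::OpenAI entry point is one of the subclasses referenced above, but the require path, environment variable, and host shown here are assumptions, not part of this class's documented API:

# Illustrative only: a concrete subclass constructed with the same
# signature as the abstract constructor above. The env var name is
# hypothetical; "api.openai.com" is the OpenAI API host.
require "llm"

llm = LLM::OpenAI.new(ENV["OPENAI_SECRET"], host: "api.openai.com")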
Instance Method Details
#assistant_role ⇒ String
Returns the role of the assistant in the conversation. Usually "assistant" or "model".
# File 'lib/llm/provider.rb', line 186

def assistant_role
  raise NotImplementedError
end
#audio ⇒ LLM::OpenAI::Audio
Returns an interface to the audio API.
# File 'lib/llm/provider.rb', line 171

def audio
  raise NotImplementedError
end
#chat(prompt, role = :user, model: nil, **params) ⇒ LLM::Chat
This method creates a lazy version of an LLM::Chat object.
Starts a new lazy chat powered by the chat completions API.
# File 'lib/llm/provider.rb', line 98

def chat(prompt, role = :user, model: nil, **params)
  LLM::Chat.new(self, params).lazy.chat(prompt, role)
end
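A minimal usage sketch, assuming llm is a concrete provider instance as constructed in the example above. Because the chat is lazy, turns are buffered and no request is performed until the conversation is read; LLM::Chat#messages is assumed here as the reader:

bot = llm.chat("What is the capital of France?")
bot.chat("And of Germany?")                   # buffered; still no request
bot.messages.each { |message| puts message }  # reading performs the request (assumed lazy behavior)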
#chat!(prompt, role = :user, model: nil, **params) ⇒ LLM::Chat
This method creates a non-lazy version of an LLM::Chat object.
Starts a new chat powered by the chat completions API.
# File 'lib/llm/provider.rb', line 114

def chat!(prompt, role = :user, model: nil, **params)
  LLM::Chat.new(self, params).chat(prompt, role)
end
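By contrast with #chat, the non-lazy variant is assumed to perform a request on each turn; a brief sketch under the same assumptions as above:

bot = llm.chat!("Hello!")  # performs a request immediately (assumed eager behavior)
bot.chat("A follow-up")    # each subsequent turn performs another request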
#complete(prompt, role = :user, model:, **params) ⇒ LLM::Response::Completion
Provides an interface to the chat completions API.
# File 'lib/llm/provider.rb', line 82

def complete(prompt, role = :user, model:, **params)
  raise NotImplementedError
end
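A hedged sketch of a one-shot completion using the documented signature; the model name is an assumption, and the accessors on the returned LLM::Response::Completion are provider-specific:

# Assumes an OpenAI-style model name; substitute one returned by #models.
res = llm.complete("Hello, world", :user, model: "gpt-4o-mini")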
#embed(input, model:, **params) ⇒ LLM::Response::Embedding
Provides an embedding.
# File 'lib/llm/provider.rb', line 57

def embed(input, model:, **params)
  raise NotImplementedError
end
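A sketch of requesting an embedding with the documented signature; the model name is an assumption:

# The returned LLM::Response::Embedding wraps the vector(s) the provider
# produced for the given input; its exact accessors vary by provider.
res = llm.embed("The quick brown fox", model: "text-embedding-3-small")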
#files ⇒ LLM::OpenAI::Files
Returns an interface to the files API.
# File 'lib/llm/provider.rb', line 178

def files
  raise NotImplementedError
end
#images ⇒ LLM::OpenAI::Images, LLM::Gemini::Images
Returns an interface to the images API.
# File 'lib/llm/provider.rb', line 164

def images
  raise NotImplementedError
end
#inspect ⇒ String
The secret key is redacted in inspect for security reasons.
Returns an inspection of the provider object.
# File 'lib/llm/provider.rb', line 42

def inspect
  "#<#{self.class.name}:0x#{object_id.to_s(16)} @secret=[REDACTED] @http=#{@http.inspect}>"
end
#models ⇒ Hash<String, LLM::Model>
Returns a hash of available models.
# File 'lib/llm/provider.rb', line 193

def models
  raise NotImplementedError
end
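Since the documented return value is an ordinary Hash keyed by model name, it can be enumerated directly; a small sketch:

llm.models.each do |name, model|
  puts name  # each value is an LLM::Model describing the model
end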
#respond(prompt, role = :user, model: nil, **params) ⇒ LLM::Chat
This method creates a lazy variant of an LLM::Chat object.
Starts a new lazy chat powered by the responses API.
# File 'lib/llm/provider.rb', line 130

def respond(prompt, role = :user, model: nil, **params)
  LLM::Chat.new(self, params).lazy.respond(prompt, role)
end
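Usage is assumed to mirror #chat, with turns flowing through the responses API instead (see #responses for the trade-offs); a hedged sketch:

bot = llm.respond("Summarize the following passage.")
bot.respond("Now shorten the summary")        # buffered lazily, like #chat
bot.messages.each { |message| puts message }  # assumed reader, as with #chat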
#respond!(prompt, role = :user, model: nil, **params) ⇒ LLM::Chat
This method creates a non-lazy variant of an LLM::Chat object.
Starts a new chat powered by the responses API.
# File 'lib/llm/provider.rb', line 146

def respond!(prompt, role = :user, model: nil, **params)
  LLM::Chat.new(self, params).respond(prompt, role)
end
#responses ⇒ LLM::OpenAI::Responses
Compared to the chat completions API, the responses API can require less bandwidth on each turn, maintain state server-side, and produce faster responses.
# File 'lib/llm/provider.rb', line 157

def responses
  raise NotImplementedError
end