Class: LLM::Context
- Inherits: Object
- Includes: Deserializer
- Defined in: lib/llm/context.rb,
  lib/llm/context/deserializer.rb
Overview
LLM::Context represents a stateful interaction with an LLM, including conversation history, tools, execution state, and cost tracking. It evolves over time as the system runs.
Context is the stateful environment in which an LLM operates. This is not just prompt context; it is an active, evolving execution boundary for LLM workflows.
A context can use the chat completions API that all providers support or the responses API that currently only OpenAI supports.
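For orientation, here is a minimal sketch of a chat session. The LLM.openai constructor and the environment variable are assumptions about provider setup; substitute your own provider.

  require "llm"

  llm = LLM.openai(key: ENV["OPENAI_API_KEY"])  # provider setup is assumed
  ctx = LLM::Context.new(llm)
  ctx.talk "What is the capital of France?"
  puts ctx.messages.find(&:assistant?)&.content # latest assistant reply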
Defined Under Namespace
Modules: Deserializer
Instance Attribute Summary collapse
-
#llm ⇒ LLM::Provider
readonly
Returns a provider.
-
#messages ⇒ LLM::Buffer<LLM::Message>
readonly
Returns the accumulated message history for this context.
-
#mode ⇒ Symbol
readonly
Returns the context mode.
Instance Method Summary collapse
-
#call(target) ⇒ Array<LLM::Function::Return>
Calls a named collection of work through the context.
-
#context_window ⇒ Integer
Returns the model’s context window.
-
#cost ⇒ LLM::Cost
Returns an approximate cost for the context, based on the provider and model.
-
#deserialize(path: nil, string: nil) ⇒ LLM::Context
(also: #restore)
Restore a saved context state.
-
#functions ⇒ Array<LLM::Function>
Returns an array of functions that can be called.
-
#image_url(url) ⇒ LLM::Object
Recognize an object as a URL to an image.
-
#initialize(llm, params = {}) ⇒ Context
constructor
A new instance of Context.
- #inspect ⇒ String
-
#local_file(path) ⇒ LLM::Object
Recognize an object as a local file.
-
#model ⇒ String
Returns the model a Context is actively using.
-
#prompt(&b) ⇒ LLM::Prompt
(also: #build_prompt)
Build a role-aware prompt for a single request.
-
#remote_file(res) ⇒ LLM::Object
Recognize an object as a remote file.
-
#respond(prompt, params = {}) ⇒ LLM::Response
Interact with the context via the responses API.
-
#returns ⇒ Array<LLM::Function::Return>
Returns tool returns accumulated in this context.
-
#serialize(path:) ⇒ void
(also: #save)
Save the current context state.
-
#talk(prompt, params = {}) ⇒ LLM::Response
(also: #chat)
Interact with the context via the chat completions API.
- #to_h ⇒ Hash
- #to_json ⇒ String
-
#tracer ⇒ LLM::Tracer
Returns an LLM tracer.
-
#usage ⇒ LLM::Object?
Returns token usage accumulated in this context. The value comes from the latest assistant message; when the context contains no assistant message, nil is returned.
-
#wait(strategy) ⇒ Array<LLM::Function::Return>
Waits for queued tool work to finish.
Methods included from Deserializer
Constructor Details
#initialize(llm, params = {}) ⇒ Context
Returns a new instance of Context.
# File 'lib/llm/context.rb', line 60

def initialize(llm, params = {})
  @llm = llm
  @mode = params.delete(:mode) || :completions
  @params = {model: llm.default_model, schema: nil}.compact.merge!(params)
  @messages = LLM::Buffer.new(llm)
end
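As a sketch of the options: :mode is consumed by the constructor, and everything else (such as :model or :schema) becomes a per-request default. The model name below is illustrative, not a recommendation.

  ctx = LLM::Context.new(llm, mode: :responses, model: "gpt-4o-mini")
  ctx.mode  # => :responses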
Instance Attribute Details
#llm ⇒ LLM::Provider (readonly)
Returns a provider
# File 'lib/llm/context.rb', line 43

def llm
  @llm
end
#messages ⇒ LLM::Buffer<LLM::Message> (readonly)
Returns the accumulated message history for this context
# File 'lib/llm/context.rb', line 38

def messages
  @messages
end
#mode ⇒ Symbol (readonly)
Returns the context mode
# File 'lib/llm/context.rb', line 48

def mode
  @mode
end
Instance Method Details
#call(target) ⇒ Array<LLM::Function::Return>
Calls a named collection of work through the context.
This currently supports :functions, forwarding to functions.call.
# File 'lib/llm/context.rb', line 148

def call(target)
  case target
  when :functions then functions.call
  else raise ArgumentError, "Unknown target: #{target.inspect}. Expected :functions"
  end
end
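A hedged sketch of a full tool-call round trip: ask, execute whatever tool calls are pending, then hand the returns back as the next turn (#talk infers the tool role from the returns). The loop shape follows from #functions and #returns; the question itself is illustrative.

  res = ctx.talk "What is the weather in Lisbon?"
  until ctx.functions.empty?
    returns = ctx.call(:functions)  # Array<LLM::Function::Return>
    res = ctx.talk(returns)         # feeds tool output back to the model
  end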
#context_window ⇒ Integer
This method returns 0 when the provider or model can’t be found within Registry.
Returns the model’s context window. The context window is the maximum amount of input and output tokens a model can consider in a single request.
# File 'lib/llm/context.rb', line 333

def context_window
  LLM
    .registry_for(llm)
    .limit(model:)
    .context
rescue LLM::NoSuchModelError, LLM::NoSuchRegistryError
  0
end
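A sketch that checks remaining headroom before the next request. The token fields come from #usage (the same fields #cost reads); the 1,000-token threshold is arbitrary, and a window of 0 means the model was not found in the registry.

  window = ctx.context_window
  used = ctx.usage ? ctx.usage.input_tokens + ctx.usage.output_tokens : 0
  warn "near the context limit" if window.positive? && window - used < 1_000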
#cost ⇒ LLM::Cost
Returns an approximate cost for the context, based on the provider and model
# File 'lib/llm/context.rb', line 316

def cost
  return LLM::Cost.new(0, 0) unless usage
  cost = LLM.registry_for(llm).cost(model:)
  LLM::Cost.new(
    (cost.input.to_f / 1_000_000.0) * usage.input_tokens,
    (cost.output.to_f / 1_000_000.0) * usage.output_tokens
  )
end
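The arithmetic is plain per-million-token pricing applied to each side of the exchange. A worked example with illustrative rates (these numbers are not real prices):

  # $2.50 per 1M input tokens, $10.00 per 1M output tokens (illustrative)
  input_cost  = (2.50 / 1_000_000.0) * 1_200  # => 0.003
  output_cost = (10.00 / 1_000_000.0) * 350   # => 0.0035
  total = input_cost + output_cost            # => 0.0065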
#deserialize(path: nil, string: nil) ⇒ LLM::Context Also known as: restore
Restore a saved context state
# File 'lib/llm/context.rb', line 298

def deserialize(path: nil, string: nil)
  payload = if path.nil? and string.nil?
    raise ArgumentError, "a path or string is required"
  elsif path
    ::File.binread(path)
  else
    string
  end
  ctx = LLM.json.load(payload)
  @messages.concat [*ctx["messages"]].map { (_1) }
  self
end
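A sketch of restoring into a fresh context, from either a file path or an in-memory string:

  ctx = LLM::Context.new(llm)
  ctx.restore(path: "context.json")       # alias of #deserialize
  # or: ctx.deserialize(string: payload)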
#functions ⇒ Array<LLM::Function>
Returns an array of functions that can be called
# File 'lib/llm/context.rb', line 127

def functions
  return_ids = returns.map(&:id)
  @messages
    .select(&:assistant?)
    .flat_map do |msg|
      fns = msg.functions.select { _1.pending? && !return_ids.include?(_1.id) }
      fns.each do |fn|
        fn.tracer = tracer
        fn.model = msg.model
      end
    end.extend(LLM::Function::Array)
end
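A sketch that lists pending calls before executing them. The #name accessor on LLM::Function is an assumption; the execution line follows from #call's delegation to functions.call.

  ctx.functions.each { |fn| puts fn.name }  # #name is assumed
  ctx.functions.call                        # or: ctx.call(:functions)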
#image_url(url) ⇒ LLM::Object
Recognize an object as a URL to an image
# File 'lib/llm/context.rb', line 224

def image_url(url)
  LLM::Object.from(value: url, kind: :image_url)
end
#inspect ⇒ String
# File 'lib/llm/context.rb', line 118

def inspect
  "#<#{self.class.name}:0x#{object_id.to_s(16)} " \
  "@llm=#{@llm.class}, @mode=#{@mode.inspect}, @params=#{@params.inspect}, " \
  "@messages=#{@messages.inspect}>"
end
#local_file(path) ⇒ LLM::Object
Recognize an object as a local file
# File 'lib/llm/context.rb', line 234

def local_file(path)
  LLM::Object.from(value: LLM.File(path), kind: :local_file)
end
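A sketch of attaching a file to a turn. Passing a mixed array of text and recognized objects to #talk is inferred from its [*prompt] handling, so treat it as an assumption; #image_url works the same way for remote images.

  ctx.talk ["Describe this image", ctx.local_file("photo.png")]
  # remote variant: ctx.image_url("https://example.com/photo.png")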
#model ⇒ String
Returns the model a Context is actively using
# File 'lib/llm/context.rb', line 258

def model
  @messages.find(&:assistant?)&.model || @params[:model]
end
#prompt(&b) ⇒ LLM::Prompt Also known as: build_prompt
Build a role-aware prompt for a single request.
Prefer this method over #build_prompt. The older method name is kept for backward compatibility.
# File 'lib/llm/context.rb', line 213

def prompt(&b)
  LLM::Prompt.new(@llm, &b)
end
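A sketch of building a multi-role turn and sending it with #talk. The role methods (system, user) are assumptions about LLM::Prompt's block interface, not confirmed here.

  turn = ctx.prompt do |p|
    p.system "You are a terse assistant."  # role methods assumed
    p.user "Summarize today's changes."
  end
  ctx.talk turn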
#remote_file(res) ⇒ LLM::Object
Recognize an object as a remote file
# File 'lib/llm/context.rb', line 244

def remote_file(res)
  LLM::Object.from(value: res, kind: :remote_file)
end
#respond(prompt, params = {}) ⇒ LLM::Response
Not all LLM providers support this API
Interact with the context via the responses API. This method immediately sends a request to the LLM and returns the response.
# File 'lib/llm/context.rb', line 105

def respond(prompt, params = {})
  params = @params.merge(params)
  res_id = params[:store] == false ? nil : @messages.find(&:assistant?)&.response&.response_id
  params = params.merge(previous_response_id: res_id, input: @messages.to_a).compact
  res = @llm.responses.create(prompt, params)
  role = params[:role] || @llm.user_role
  @messages.concat LLM::Prompt === prompt ? prompt.to_a : [LLM::Message.new(role, prompt)]
  @messages.concat [res.choices[-1]]
  res
end
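A sketch of driving the responses API. Note that #talk already delegates here when the context was built with mode: :responses, so calling #respond directly is mainly useful for one-off requests.

  ctx = LLM::Context.new(llm, mode: :responses)
  res = ctx.respond "Outline a release checklist."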
#returns ⇒ Array<LLM::Function::Return>
Returns tool returns accumulated in this context
# File 'lib/llm/context.rb', line 158

def returns
  @messages
    .select(&:tool_return?)
    .flat_map do |msg|
      LLM::Function::Return === msg.content ? [msg.content] : [*msg.content].grep(LLM::Function::Return)
    end
end
#serialize(path:) ⇒ void Also known as: save
This method returns an undefined value.
Save the current context state
# File 'lib/llm/context.rb', line 284

def serialize(path:)
  ::File.binwrite path, LLM.json.dump(self)
end
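Saving is a one-liner; the JSON payload comes from #to_json and carries a schema_version field.

  ctx.save(path: "context.json")  # alias of #serialize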
#talk(prompt, params = {}) ⇒ LLM::Response Also known as: chat
Interact with the context via the chat completions API. This method immediately sends a request to the LLM and returns the response.
# File 'lib/llm/context.rb', line 79

def talk(prompt, params = {})
  return respond(prompt, params) if mode == :responses
  params = params.merge(messages: @messages.to_a)
  params = @params.merge(params)
  res = @llm.complete(prompt, params)
  role = params[:role] || @llm.user_role
  role = @llm.tool_role if params[:role].nil? && [*prompt].grep(LLM::Function::Return).any?
  @messages.concat LLM::Prompt === prompt ? prompt.to_a : [LLM::Message.new(role, prompt)]
  @messages.concat [res.choices[-1]]
  res
end
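Because every turn is appended to #messages and replayed through params[:messages], the context carries memory across calls:

  ctx.talk "My name is Ada."
  res = ctx.talk "What is my name?"  # the first turn rides along automatically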
#to_h ⇒ Hash
# File 'lib/llm/context.rb', line 264

def to_h
  {model:, messages:}
end
#to_json ⇒ String
# File 'lib/llm/context.rb', line 270

def to_json(...)
  {schema_version: 1}.merge!(to_h).to_json(...)
end
#tracer ⇒ LLM::Tracer
Returns an LLM tracer
# File 'lib/llm/context.rb', line 251

def tracer
  @llm.tracer
end
#usage ⇒ LLM::Object?
Returns token usage accumulated in this context. The value comes from the latest assistant message; when the context contains no assistant message, nil is returned.
# File 'lib/llm/context.rb', line 194

def usage
  @messages.find(&:assistant?)&.usage
end
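A small usage report; input_tokens and output_tokens are the same fields #cost reads, so they are safe to rely on.

  if (u = ctx.usage)
    puts "input=#{u.input_tokens} output=#{u.output_tokens}"
  end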
#wait(strategy) ⇒ Array<LLM::Function::Return>
Waits for queued tool work to finish.
This prefers queued streamed tool work when the configured stream exposes a non-empty queue. Otherwise it falls back to waiting on the context’s pending functions directly.
# File 'lib/llm/context.rb', line 178

def wait(strategy)
  stream = @params[:stream]
  if LLM::Stream === stream && !stream.queue.empty?
    stream.wait(strategy)
  else
    functions.wait(strategy)
  end
end
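A sketch of draining tool work before continuing. The strategy argument is forwarded unchanged to the stream or the pending functions; the :all symbol below is hypothetical, standing in for whatever strategies those implementations accept.

  returns = ctx.wait(:all)  # :all is a placeholder strategy
  ctx.talk returns unless returns.empty?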