Class: LLM::Context

Inherits:
Object

Includes:
Deserializer

Defined in:
lib/llm/context.rb,
lib/llm/context/deserializer.rb

Overview

LLM::Context represents a stateful interaction with an LLM, including conversation history, tools, execution state, and cost tracking. It evolves over time as the system runs.

Context is the stateful environment in which an LLM operates. This is not just prompt context; it is an active, evolving execution boundary for LLM workflows.

A context can use the chat completions API, which all providers support, or the responses API, which currently only OpenAI supports.

Examples:

#!/usr/bin/env ruby
require "llm"

llm = LLM.openai(key: ENV["KEY"])
ctx = LLM::Context.new(llm)

prompt = LLM::Prompt.new(llm) do
  system "Be concise and show your reasoning briefly."
  user "If a train goes 60 mph for 1.5 hours, how far does it travel?"
  user "Now double the speed for the same time."
end

ctx.talk(prompt)
ctx.messages.each { |m| puts "[#{m.role}] #{m.content}" }
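
A sketch of the same context switched to the responses API (currently OpenAI only):

ctx = LLM::Context.new(llm, mode: :responses)
res = ctx.talk("Summarize the train example in one sentence.")
puts res.output_text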

Defined Under Namespace

Modules: Deserializer

Instance Attribute Summary

Instance Method Summary

Methods included from Deserializer

#deserialize_message

Constructor Details

#initialize(llm, params = {}) ⇒ Context

Returns a new instance of Context.

Parameters:

  • llm (LLM::Provider)

    A provider

  • params (Hash) (defaults to: {})

    The parameters to maintain throughout the conversation. Any parameter the provider supports can be included, not only those listed here.

Options Hash (params):

  • :mode (Symbol)

    Defaults to :completions

  • :model (String)

    Defaults to the provider’s default model

  • :tools (Array<LLM::Function>, nil)

    Defaults to nil
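
For instance, a minimal sketch that pins the mode and model up front (the model name is illustrative):

llm = LLM.openai(key: ENV["KEY"])
ctx = LLM::Context.new(llm, mode: :responses, model: "gpt-4o-mini")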



# File 'lib/llm/context.rb', line 60

def initialize(llm, params = {})
  @llm = llm
  @mode = params.delete(:mode) || :completions
  @params = {model: llm.default_model, schema: nil}.compact.merge!(params)
  @messages = LLM::Buffer.new(llm)
end

Instance Attribute Details

#llm ⇒ LLM::Provider (readonly)

Returns a provider

Returns:

  • (LLM::Provider)


# File 'lib/llm/context.rb', line 43

def llm
  @llm
end

#messages ⇒ LLM::Buffer<LLM::Message> (readonly)

Returns the accumulated message history for this context

Returns:

  • (LLM::Buffer<LLM::Message>)


# File 'lib/llm/context.rb', line 38

def messages
  @messages
end

#mode ⇒ Symbol (readonly)

Returns the context mode

Returns:

  • (Symbol)


# File 'lib/llm/context.rb', line 48

def mode
  @mode
end

Instance Method Details

#call(target) ⇒ Array<LLM::Function::Return>

Calls a named collection of work through the context.

This currently supports `:functions`, forwarding to `functions.call`.

Parameters:

  • target (Symbol)

    The work collection to call

Returns:

  • (Array<LLM::Function::Return>)


# File 'lib/llm/context.rb', line 148

def call(target)
  case target
  when :functions then functions.call
  else raise ArgumentError, "Unknown target: #{target.inspect}. Expected :functions"
  end
end
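
For example, after a turn that produced tool calls (my_fn stands in for a previously defined LLM::Function):

res = ctx.talk("What's the weather in Paris?", tools: [my_fn])
returns = ctx.call(:functions) # execute the pending tool calls
ctx.talk(returns)              # send the results back to the model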

#context_window ⇒ Integer

Note:

This method returns 0 when the provider or model can’t be found within Registry.

Returns the model’s context window. The context window is the maximum amount of input and output tokens a model can consider in a single request.

Returns:

  • (Integer)


# File 'lib/llm/context.rb', line 333

def context_window
  LLM
    .registry_for(llm)
    .limit(model:)
    .context
rescue LLM::NoSuchModelError, LLM::NoSuchRegistryError
  0
end
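
For instance, a sketch that warns before a conversation approaches the window (the 90% threshold is arbitrary):

window = ctx.context_window
used = ctx.usage ? ctx.usage.input_tokens + ctx.usage.output_tokens : 0
warn "context nearly full" if window.positive? && used > window * 0.9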

#cost ⇒ LLM::Cost

Returns an approximate cost for this context based on both the provider and the model

Returns:

  • (LLM::Cost)

    Returns an approximate cost for this context based on both the provider and the model



# File 'lib/llm/context.rb', line 316

def cost
  return LLM::Cost.new(0, 0) unless usage
  cost = LLM.registry_for(llm).cost(model:)
  LLM::Cost.new(
    (cost.input.to_f / 1_000_000.0)  * usage.input_tokens,
    (cost.output.to_f / 1_000_000.0) * usage.output_tokens
  )
end
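
As a worked example with illustrative pricing of $3 per million input tokens and $15 per million output tokens: a context that has consumed 10,000 input tokens and 2,000 output tokens costs (3.0 / 1_000_000) * 10_000 = $0.03 for input plus (15.0 / 1_000_000) * 2_000 = $0.03 for output.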

#deserialize(path: nil, string: nil) ⇒ LLM::Context Also known as: restore

Restore a saved context state

Parameters:

  • path (String, nil) (defaults to: nil)

    The path to a JSON file

  • string (String, nil) (defaults to: nil)

    A raw JSON string

Returns:

  • (LLM::Context)

Raises:

  • (SystemCallError)

    Might raise a number of SystemCallError subclasses



# File 'lib/llm/context.rb', line 298

def deserialize(path: nil, string: nil)
  payload = if path.nil? and string.nil?
    raise ArgumentError, "a path or string is required"
  elsif path
    ::File.binread(path)
  else
    string
  end
  ctx = LLM.json.load(payload)
  @messages.concat [*ctx["messages"]].map { deserialize_message(_1) }
  self
end
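
For example, restoring a context saved earlier with #serialize (the file path is illustrative):

llm = LLM.openai(key: ENV["KEY"])
ctx = LLM::Context.new(llm)
ctx.restore(path: "context.json")
ctx.messages.each { |m| puts "[#{m.role}] #{m.content}" }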

#functions ⇒ Array<LLM::Function>

Returns an array of functions that can be called

Returns:

  • (Array<LLM::Function>)


# File 'lib/llm/context.rb', line 127

def functions
  return_ids = returns.map(&:id)
  @messages
    .select(&:assistant?)
    .flat_map do |msg|
      fns = msg.functions.select { _1.pending? && !return_ids.include?(_1.id) }
      fns.each do |fn|
        fn.tracer = tracer
        fn.model  = msg.model
      end
    end.extend(LLM::Function::Array)
end
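
For instance, a sketch that inspects and then executes the pending tool calls (the call on the returned array is LLM::Function::Array#call, as used by #call(:functions)):

pending = ctx.functions
puts "#{pending.size} pending tool call(s)"
ctx.talk(pending.call) unless pending.empty?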

#image_url(url) ⇒ LLM::Object

Recognize an object as a URL to an image

Parameters:

  • url (String)

    The URL

Returns:

  • (LLM::Object)


# File 'lib/llm/context.rb', line 224

def image_url(url)
  LLM::Object.from(value: url, kind: :image_url)
end
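
For example, attaching an image by URL to a turn (the URL is illustrative; passing an array of content parts to #talk is assumed here):

img = ctx.image_url("https://example.com/cat.png")
res = ctx.talk(["What's in this image?", img])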

#inspect ⇒ String

Returns:

  • (String)


# File 'lib/llm/context.rb', line 118

def inspect
  "#<#{self.class.name}:0x#{object_id.to_s(16)} " \
  "@llm=#{@llm.class}, @mode=#{@mode.inspect}, @params=#{@params.inspect}, " \
  "@messages=#{@messages.inspect}>"
end

#local_file(path) ⇒ LLM::Object

Recognize an object as a local file

Parameters:

  • path (String)

    The path

Returns:

  • (LLM::Object)


# File 'lib/llm/context.rb', line 234

def local_file(path)
  LLM::Object.from(value: LLM.File(path), kind: :local_file)
end

#model ⇒ String

Returns the model a Context is actively using

Returns:

  • (String)


# File 'lib/llm/context.rb', line 258

def model
  messages.find(&:assistant?)&.model || @params[:model]
end

#prompt(&b) ⇒ LLM::Prompt Also known as: build_prompt

Build a role-aware prompt for a single request.

Prefer this method over #build_prompt. The older method name is kept for backward compatibility.

Examples:

prompt = ctx.prompt do
  system "Your task is to assist the user"
  user "Hello, can you assist me?"
end
ctx.talk(prompt)

Parameters:

  • b (Proc)

    A block that composes messages. If it takes one argument, it receives the prompt object. Otherwise it runs in prompt context.

Returns:

  • (LLM::Prompt)


# File 'lib/llm/context.rb', line 213

def prompt(&b)
  LLM::Prompt.new(@llm, &b)
end

#remote_file(res) ⇒ LLM::Object

Recognize an object as a remote file

Parameters:

  • res (LLM::Response)

    The response that references a remote file

Returns:

  • (LLM::Object)



# File 'lib/llm/context.rb', line 244

def remote_file(res)
  LLM::Object.from(value: res, kind: :remote_file)
end

#respond(prompt, params = {}) ⇒ LLM::Response

Note:

Not all LLM providers support this API

Interact with the context via the responses API. This method immediately sends a request to the LLM and returns the response.

Examples:

llm = LLM.openai(key: ENV["KEY"])
ctx = LLM::Context.new(llm)
res = ctx.respond("What is the capital of France?")
puts res.output_text

Parameters:

  • prompt (String)

    The input prompt to be completed

  • params (defaults to: {})

    The params, including optional :role (defaults to :user), :stream, :tools, :schema, etc.

Returns:

  • (LLM::Response)

    Returns the LLM’s response for this turn.



# File 'lib/llm/context.rb', line 105

def respond(prompt, params = {})
  res_id = @messages.find(&:assistant?)&.response&.response_id
  params = params.merge(previous_response_id: res_id, input: @messages.to_a).compact
  params = @params.merge(params)
  res = @llm.responses.create(prompt, params)
  role = params[:role] || @llm.user_role
  @messages.concat LLM::Prompt === prompt ? prompt.to_a : [LLM::Message.new(role, prompt)]
  @messages.concat [res.choices[-1]]
  res
end

#returns ⇒ Array<LLM::Function::Return>

Returns tool returns accumulated in this context

Returns:

  • (Array<LLM::Function::Return>)


# File 'lib/llm/context.rb', line 158

def returns
  @messages
    .select(&:tool_return?)
    .flat_map do |msg|
      LLM::Function::Return === msg.content ?
        [msg.content] :
        [*msg.content].grep(LLM::Function::Return)
    end
end

#serialize(path:) ⇒ void Also known as: save

This method returns an undefined value.

Save the current context state

Examples:

llm = LLM.openai(key: ENV["KEY"])
ctx = LLM::Context.new(llm)
ctx.talk "Hello"
ctx.save(path: "context.json")

Raises:

  • (SystemCallError)

    Might raise a number of SystemCallError subclasses



# File 'lib/llm/context.rb', line 284

def serialize(path:)
  ::File.binwrite path, LLM.json.dump(self)
end

#talk(prompt, params = {}) ⇒ LLM::Response Also known as: chat

Interact with the context via the chat completions API. This method immediately sends a request to the LLM and returns the response.

Examples:

llm = LLM.openai(key: ENV["KEY"])
ctx = LLM::Context.new(llm)
res = ctx.talk("Hello, what is your name?")
puts res.messages[0].content

Parameters:

  • prompt (String)

    The input prompt to be completed

  • params (defaults to: {})

    The params, including optional :role (defaults to :user), :stream, :tools, :schema, etc.

Returns:

  • (LLM::Response)

    Returns the LLM’s response for this turn.



# File 'lib/llm/context.rb', line 79

def talk(prompt, params = {})
  return respond(prompt, params) if mode == :responses
  params = params.merge(messages: @messages.to_a)
  params = @params.merge(params)
  res = @llm.complete(prompt, params)
  role = params[:role] || @llm.user_role
  role = @llm.tool_role if params[:role].nil? && [*prompt].grep(LLM::Function::Return).any?
  @messages.concat LLM::Prompt === prompt ? prompt.to_a : [LLM::Message.new(role, prompt)]
  @messages.concat [res.choices[-1]]
  res
end
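
A common pattern (sketch) is to loop until no tool work remains; my_fn stands in for a previously defined LLM::Function:

ctx.talk("Look up the forecast", tools: [my_fn])
ctx.talk ctx.call(:functions) until ctx.functions.empty?
puts ctx.messages.find(&:assistant?)&.content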

#to_h ⇒ Hash

Returns:

  • (Hash)


# File 'lib/llm/context.rb', line 264

def to_h
  {model:, messages:}
end

#to_json ⇒ String

Returns:

  • (String)


# File 'lib/llm/context.rb', line 270

def to_json(...)
  {schema_version: 1}.merge!(to_h).to_json(...)
end

#tracer ⇒ LLM::Tracer

Returns an LLM tracer

Returns:

  • (LLM::Tracer)


# File 'lib/llm/context.rb', line 251

def tracer
  @llm.tracer
end

#usage ⇒ LLM::Object?

Note:

This method returns token usage for the latest assistant message, and returns nil when the context has no assistant messages.

Returns token usage accumulated in this context

Returns:

  • (LLM::Object, nil)


# File 'lib/llm/context.rb', line 194

def usage
  @messages.find(&:assistant?)&.usage
end
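
For example:

if (u = ctx.usage)
  puts "input tokens: #{u.input_tokens}, output tokens: #{u.output_tokens}"
end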

#wait(strategy) ⇒ Array<LLM::Function::Return>

Waits for queued tool work to finish.

This prefers queued streamed tool work when the configured stream exposes a non-empty queue. Otherwise it falls back to waiting on the context’s pending functions directly.

Parameters:

  • strategy (Symbol)

    The concurrency strategy to use

Returns:

  • (Array<LLM::Function::Return>)


# File 'lib/llm/context.rb', line 178

def wait(strategy)
  stream = @params[:stream]
  if LLM::Stream === stream && !stream.queue.empty?
    stream.wait(strategy)
  else
    functions.wait(strategy)
  end
end
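
A sketch of draining queued tool work after a turn (the :threads strategy symbol is an assumption; consult your version for the supported strategies):

returns = ctx.wait(:threads) # blocks until pending tool work completes
ctx.talk(returns)            # feed the results back into the conversation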