Class: LLM::Context

Inherits:
Object
Includes:
Deserializer
Defined in:
lib/llm/context.rb,
lib/llm/context/deserializer.rb

Overview

LLM::Context represents a stateful interaction with an LLM, including conversation history, tools, execution state, and cost tracking. It evolves over time as the system runs.

A context is the environment in which an LLM operates: not just prompt context, but an active, evolving execution boundary for LLM workflows.

A context can use the chat completions API, which all providers support, or the responses API, which currently only OpenAI supports.

Examples:

#!/usr/bin/env ruby
require "llm"

llm = LLM.openai(key: ENV["KEY"])
ctx = LLM::Context.new(llm)

prompt = LLM::Prompt.new(llm) do
  system "Be concise and show your reasoning briefly."
  user "If a train goes 60 mph for 1.5 hours, how far does it travel?"
  user "Now double the speed for the same time."
end

ctx.talk(prompt)
ctx.messages.each { |m| puts "[#{m.role}] #{m.content}" }
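
The same context can also drive the responses API. A minimal sketch, assuming the OpenAI provider from the example above:

llm = LLM.openai(key: ENV["KEY"])
ctx = LLM::Context.new(llm, mode: :responses)
res = ctx.talk("What is the capital of France?") # delegates to #respond
puts res.output_text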

Defined Under Namespace

Modules: Deserializer

Instance Attribute Summary

Instance Method Summary

Methods included from Deserializer

#deserialize_message

Constructor Details

#initialize(llm, params = {}) ⇒ Context

Returns a new instance of Context.

Parameters:

  • llm (LLM::Provider)

    A provider

  • params (Hash) (defaults to: {})

The parameters to maintain throughout the conversation. Any parameter the provider supports can be included, not only those listed here.

Options Hash (params):

  • :mode (Symbol)

    Defaults to :completions

  • :model (String)

    Defaults to the provider’s default model

  • :tools (Array<LLM::Function>, nil)

    Defaults to nil



# File 'lib/llm/context.rb', line 60

def initialize(llm, params = {})
  @llm = llm
  @mode = params.delete(:mode) || :completions
  @params = {model: llm.default_model, schema: nil}.compact.merge!(params)
  @messages = LLM::Buffer.new(llm)
  @owner = Fiber.current
end
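
For illustration, a hedged construction sketch; the :model and :temperature values are assumptions, passed along because any provider-supported parameter is kept for the whole conversation:

llm = LLM.openai(key: ENV["KEY"])
ctx = LLM::Context.new(llm, model: "gpt-4o-mini", temperature: 0)
ctx.talk("Hello") # every turn reuses these parameters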

Instance Attribute Details

#llm ⇒ LLM::Provider (readonly)

Returns a provider

Returns:

  • (LLM::Provider)

# File 'lib/llm/context.rb', line 43

def llm
  @llm
end

#messages ⇒ LLM::Buffer<LLM::Message> (readonly)

Returns the accumulated message history for this context



# File 'lib/llm/context.rb', line 38

def messages
  @messages
end

#mode ⇒ Symbol (readonly)

Returns the context mode

Returns:

  • (Symbol)


# File 'lib/llm/context.rb', line 48

def mode
  @mode
end

Instance Method Details

#call(target) ⇒ Array<LLM::Function::Return>

Calls a named collection of work through the context.

This currently supports `:functions`, forwarding to `functions.call`.

Parameters:

  • target (Symbol)

    The work collection to call

Returns:

  • (Array<LLM::Function::Return>)

# File 'lib/llm/context.rb', line 149

def call(target)
  case target
  when :functions then functions.call
  else raise ArgumentError, "Unknown target: #{target.inspect}. Expected :functions"
  end
end
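
A brief usage sketch: run the pending tool calls, then hand the returns back to the model (#talk switches to the tool role when given LLM::Function::Return values):

returns = ctx.call(:functions) # => Array<LLM::Function::Return>
ctx.talk(returns)              # continue the conversation with the results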

#context_window ⇒ Integer

Note:

This method returns 0 when the provider or model can’t be found within Registry.

Returns the model’s context window. The context window is the maximum amount of input and output tokens a model can consider in a single request.

Returns:

  • (Integer)


# File 'lib/llm/context.rb', line 343

def context_window
  LLM
    .registry_for(llm)
    .limit(model:)
    .context
rescue LLM::NoSuchModelError, LLM::NoSuchRegistryError
  0
end
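
A hedged sketch of a headroom check, assuming usage exposes input_tokens and output_tokens (the readers #cost relies on):

if (u = ctx.usage)
  used = u.input_tokens + u.output_tokens
  puts "#{ctx.context_window - used} tokens remaining"
end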

#cost ⇒ LLM::Cost

Returns an approximate cost for a given context based on both the provider and model

Returns:

  • (LLM::Cost)

    Returns an approximate cost for a given context based on both the provider and model



# File 'lib/llm/context.rb', line 326

def cost
  return LLM::Cost.new(0, 0) unless usage
  cost = LLM.registry_for(llm).cost(model:)
  LLM::Cost.new(
    (cost.input.to_f / 1_000_000.0)  * usage.input_tokens,
    (cost.output.to_f / 1_000_000.0) * usage.output_tokens
  )
end
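
A hedged reporting sketch, assuming LLM::Cost exposes input and output readers that mirror its two constructor arguments:

c = ctx.cost
puts format("spend: $%.6f in / $%.6f out", c.input, c.output)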

#deserialize(path: nil, string: nil) ⇒ LLM::Context Also known as: restore

Restore a saved context state

Parameters:

  • path (String, nil) (defaults to: nil)

    The path to a JSON file

  • string (String, nil) (defaults to: nil)

    A raw JSON string

Returns:

  • (LLM::Context)

Raises:

  • (SystemCallError)

    Might raise a number of SystemCallError subclasses



# File 'lib/llm/context.rb', line 308

def deserialize(path: nil, string: nil)
  payload = if path.nil? and string.nil?
    raise ArgumentError, "a path or string is required"
  elsif path
    ::File.binread(path)
  else
    string
  end
  ctx = LLM.json.load(payload)
  @messages.concat [*ctx["messages"]].map { deserialize_message(_1) }
  self
end
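
Restoring a context saved with #serialize (the file name matches that method's example):

llm = LLM.openai(key: ENV["KEY"])
ctx = LLM::Context.new(llm)
ctx.restore(path: "context.json")
ctx.messages.each { |m| puts "[#{m.role}] #{m.content}" }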

#functions ⇒ Array<LLM::Function>

Returns an array of functions that can be called

Returns:

  • (Array<LLM::Function>)

# File 'lib/llm/context.rb', line 128

def functions
  return_ids = returns.map(&:id)
  @messages
    .select(&:assistant?)
    .flat_map do |msg|
      fns = msg.functions.select { _1.pending? && !return_ids.include?(_1.id) }
      fns.each do |fn|
        fn.tracer = tracer
        fn.model  = msg.model
      end
    end.extend(LLM::Function::Array)
end

#image_url(url) ⇒ LLM::Object

Recognize an object as a URL to an image

Parameters:

  • url (String)

    The URL

Returns:

  • (LLM::Object)

# File 'lib/llm/context.rb', line 234

def image_url(url)
  LLM::Object.from(value: url, kind: :image_url)
end

#inspect ⇒ String

Returns:

  • (String)


# File 'lib/llm/context.rb', line 119

def inspect
  "#<#{self.class.name}:0x#{object_id.to_s(16)} " \
  "@llm=#{@llm.class}, @mode=#{@mode.inspect}, @params=#{@params.inspect}, " \
  "@messages=#{@messages.inspect}>"
end

#interrupt! ⇒ nil Also known as: cancel!

Interrupt the active request, if any. This is inspired by Go’s context cancellation model.

Returns:

  • (nil)


# File 'lib/llm/context.rb', line 192

def interrupt!
  llm.interrupt!(@owner)
end
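
A minimal cancellation sketch using the cancel! alias; here a Ctrl-C handler aborts the in-flight request:

trap("INT") { ctx.cancel! }
ctx.talk("Write a very long story")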

#local_file(path) ⇒ LLM::Object

Recognize an object as a local file

Parameters:

  • path (String)

    The path

Returns:

  • (LLM::Object)

# File 'lib/llm/context.rb', line 244

def local_file(path)
  LLM::Object.from(value: LLM.File(path), kind: :local_file)
end
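
A hedged multimodal sketch, assuming #talk accepts an array of content parts (its array handling for LLM::Function::Return suggests it does); the file and URL are placeholders:

ctx.talk(["Describe this image", ctx.local_file("photo.png")])
ctx.talk(["Compare it with this one", ctx.image_url("https://example.com/cat.png")])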

#model ⇒ String

Returns the model a Context is actively using

Returns:

  • (String)


# File 'lib/llm/context.rb', line 268

def model
  messages.find(&:assistant?)&.model || @params[:model]
end

#prompt(&b) ⇒ LLM::Prompt Also known as: build_prompt

Build a role-aware prompt for a single request.

Prefer this method over #build_prompt. The older method name is kept for backward compatibility.

Examples:

prompt = ctx.prompt do
  system "Your task is to assist the user"
  user "Hello, can you assist me?"
end
ctx.talk(prompt)

Parameters:

  • b (Proc)

    A block that composes messages. If it takes one argument, it receives the prompt object. Otherwise it runs in prompt context.

Returns:

  • (LLM::Prompt)

# File 'lib/llm/context.rb', line 223

def prompt(&b)
  LLM::Prompt.new(@llm, &b)
end
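
The one-argument block form described above; the block receives the prompt object, and the system/user writers are assumed to mirror the block-less DSL:

prompt = ctx.prompt do |p|
  p.system "You are terse."
  p.user "Summarize Ruby in one sentence."
end
ctx.talk(prompt)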

#remote_file(res) ⇒ LLM::Object

Recognize an object as a remote file

Parameters:

  • res

    The remote file reference

Returns:

  • (LLM::Object)

# File 'lib/llm/context.rb', line 254

def remote_file(res)
  LLM::Object.from(value: res, kind: :remote_file)
end

#respond(prompt, params = {}) ⇒ LLM::Response

Note:

Not all LLM providers support this API

Interact with the context via the responses API. This method immediately sends a request to the LLM and returns the response.

Examples:

llm = LLM.openai(key: ENV["KEY"])
ctx = LLM::Context.new(llm)
res = ctx.respond("What is the capital of France?")
puts res.output_text

Parameters:

  • prompt (String)

    The input prompt to be completed

  • params (Hash) (defaults to: {})

    The params, including optional :role (defaults to :user), :stream, :tools, :schema, etc.

Returns:

  • (LLM::Response)

    Returns the LLM’s response for this turn.



# File 'lib/llm/context.rb', line 106

def respond(prompt, params = {})
  params = @params.merge(params)
  res_id = params[:store] == false ? nil : @messages.find(&:assistant?)&.response&.response_id
  params = params.merge(previous_response_id: res_id, input: @messages.to_a).compact
  res = @llm.responses.create(prompt, params)
  role = params[:role] || @llm.user_role
  @messages.concat LLM::Prompt === prompt ? prompt.to_a : [LLM::Message.new(role, prompt)]
  @messages.concat [res.choices[-1]]
  res
end

#returns ⇒ Array<LLM::Function::Return>

Returns tool returns accumulated in this context

Returns:

  • (Array<LLM::Function::Return>)

# File 'lib/llm/context.rb', line 159

def returns
  @messages
    .select(&:tool_return?)
    .flat_map do |msg|
      LLM::Function::Return === msg.content ?
        [msg.content] :
        [*msg.content].grep(LLM::Function::Return)
    end
end

#serialize(path:) ⇒ void Also known as: save

This method returns an undefined value.

Save the current context state

Examples:

llm = LLM.openai(key: ENV["KEY"])
ctx = LLM::Context.new(llm)
ctx.talk "Hello"
ctx.save(path: "context.json")

Raises:

  • (SystemCallError)

    Might raise a number of SystemCallError subclasses



# File 'lib/llm/context.rb', line 294

def serialize(path:)
  ::File.binwrite path, LLM.json.dump(self)
end

#talk(prompt, params = {}) ⇒ LLM::Response Also known as: chat

Interact with the context via the chat completions API. This method immediately sends a request to the LLM and returns the response.

Examples:

llm = LLM.openai(key: ENV["KEY"])
ctx = LLM::Context.new(llm)
res = ctx.talk("Hello, what is your name?")
puts res.messages[0].content

Parameters:

  • prompt (String)

    The input prompt to be completed

  • params (Hash) (defaults to: {})

    The params, including optional :role (defaults to :user), :stream, :tools, :schema, etc.

Returns:

  • (LLM::Response)

    Returns the LLM’s response for this turn.



# File 'lib/llm/context.rb', line 80

def talk(prompt, params = {})
  return respond(prompt, params) if mode == :responses
  params = params.merge(messages: @messages.to_a)
  params = @params.merge(params)
  res = @llm.complete(prompt, params)
  role = params[:role] || @llm.user_role
  role = @llm.tool_role if params[:role].nil? && [*prompt].grep(LLM::Function::Return).any?
  @messages.concat LLM::Prompt === prompt ? prompt.to_a : [LLM::Message.new(role, prompt)]
  @messages.concat [res.choices[-1]]
  res
end
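
Multi-turn usage: each turn is appended to #messages, so later turns can reference earlier ones:

ctx.talk "My name is Ada."
res = ctx.talk "What is my name?" # the model sees the earlier turn
puts res.messages[0].content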

#to_h ⇒ Hash

Returns:

  • (Hash)


# File 'lib/llm/context.rb', line 274

def to_h
  {model:, messages:}
end

#to_json ⇒ String

Returns:

  • (String)


# File 'lib/llm/context.rb', line 280

def to_json(...)
  {schema_version: 1}.merge!(to_h).to_json(...)
end

#tracer ⇒ LLM::Tracer

Returns an LLM tracer

Returns:

  • (LLM::Tracer)

# File 'lib/llm/context.rb', line 261

def tracer
  @llm.tracer
end

#usage ⇒ LLM::Object?

Note:

This method returns token usage for the latest assistant message, and it returns nil when there is no assistant message.

Returns token usage accumulated in this context.

Returns:

  • (LLM::Object, nil)

# File 'lib/llm/context.rb', line 204

def usage
  @messages.find(&:assistant?)&.usage
end

#wait(strategy) ⇒ Array<LLM::Function::Return>

Waits for queued tool work to finish.

This prefers queued streamed tool work when the configured stream exposes a non-empty queue. Otherwise it falls back to waiting on the context’s pending functions directly.

Parameters:

  • strategy (Symbol)

    The concurrency strategy to use

Returns:

  • (Array<LLM::Function::Return>)

# File 'lib/llm/context.rb', line 179

def wait(strategy)
  stream = @params[:stream]
  if LLM::Stream === stream && !stream.queue.empty?
    stream.wait(strategy)
  else
    functions.wait(strategy)
  end
end
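
A hedged sketch; the :threads strategy symbol is an assumption, not a documented value:

returns = ctx.wait(:threads) # block until queued tool work finishes
ctx.talk(returns) if returns.any?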