Class: AgentLoop

Inherits: Object

Defined in: lib/agent_loop.rb

Overview

Note:

Not thread-safe. Callers must serialize concurrent calls to #process (e.g. the TUI uses a loading flag; future callers should use session-level locks).

Orchestrates the LLM agent loop: accepts user input, runs the tool-use cycle via LLM::Client, and emits events through Events::Bus.

Extracted from TUI::Screens::Chat so the same agent logic can run from the TUI, a background job, or an Action Cable channel.

Examples:

Basic usage

loop = AgentLoop.new(session: session)
loop.process("What files are in the current directory?")
loop.finalize

With dependency injection (testing)

loop = AgentLoop.new(session: session, client: mock_client, registry: mock_registry)
loop.process("hello")

Background job usage (retry-safe)

loop = AgentLoop.new(session: session)
loop.run  # processes persisted session messages without emitting UserMessage
loop.finalize
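
Serializing concurrent callers (hypothetical)

A minimal sketch of the session-level locking the thread-safety note recommends. The lock-registry helpers here (session_lock, with_session_lock) are illustrative assumptions, not part of AgentLoop.

```ruby
# AgentLoop is not thread-safe, so concurrent callers must serialize
# calls to #process. One option is a per-session lock registry.
LOCK_REGISTRY_MUTEX = Mutex.new
SESSION_LOCKS = {}

# Lazily create one Mutex per session id; the registry itself is
# guarded so two threads cannot race to create the same lock.
def session_lock(session_id)
  LOCK_REGISTRY_MUTEX.synchronize { SESSION_LOCKS[session_id] ||= Mutex.new }
end

def with_session_lock(session_id)
  session_lock(session_id).synchronize { yield }
end

# Callers then wrap each call:
#   with_session_lock(session.id) { loop.process(input) }
```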

Constant Summary

STANDARD_TOOLS =

Tool classes available to all sessions by default.

Returns:

[Tools::Bash, Tools::Read, Tools::Write, Tools::Edit, Tools::WebGet, Tools::Think, Tools::Remember].freeze
ALWAYS_GRANTED_TOOLS =

Tools that bypass Session#granted_tools filtering. The agent’s reasoning depends on these regardless of task scope.

Returns:

[Tools::Think].freeze
STANDARD_TOOLS_BY_NAME =

Name-to-class mapping for tool restriction validation and registry building.

Returns:

STANDARD_TOOLS.index_by(&:tool_name).freeze

Instance Attribute Summary

Instance Method Summary

Constructor Details

#initialize(session:, shell_session: nil, client: nil, registry: nil) ⇒ AgentLoop

Returns a new instance of AgentLoop.

Parameters:

  • session (Session)

    the conversation session

  • shell_session (ShellSession, nil) (defaults to: nil)

    injectable persistent shell; created automatically if not provided

  • client (LLM::Client, nil) (defaults to: nil)

    injectable LLM client; created lazily on first #process call if not provided

  • registry (Tools::Registry, nil) (defaults to: nil)

    injectable tool registry; built lazily on first #process call if not provided



# File 'lib/agent_loop.rb', line 36

def initialize(session:, shell_session: nil, client: nil, registry: nil)
  @session = session
  @shell_session = shell_session || ShellSession.new(session_id: session.id)
  @client = client
  @registry = registry
end

Instance Attribute Details

#session ⇒ Session (readonly)

Returns the conversation session this loop operates on.

Returns:

  • (Session)

    the conversation session this loop operates on



# File 'lib/agent_loop.rb', line 27

def session
  @session
end

Instance Method Details

#deliver! ⇒ void

This method returns an undefined value.

Makes the first LLM API call to verify delivery. Called inside the Bounce Back transaction — if this raises, the user event rolls back.

Caches the first response so the subsequent #run call can continue from it without duplicating the API call.

Raises:



# File 'lib/agent_loop.rb', line 73

def deliver!
  @client ||= LLM::Client.new
  @registry ||= build_tool_registry

  messages = @session.messages_for_llm
  options = build_llm_options

  @first_response = @client.provider.create_message(
    model: @client.model,
    messages: messages,
    max_tokens: @client.max_tokens,
    tools: @registry.schemas,
    **options
  )
end
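
The Bounce Back transaction described above can be sketched in plain Ruby. This is an illustration under stated assumptions, not the real implementation: bounce_back and the in-memory events array stand in for the actual transaction and persistence layer.

```ruby
# Sketch of the Bounce Back pattern: persist the user event and make
# the delivery-check API call as one atomic step. If deliver! raises,
# the user event is rolled back and the error propagates to the caller.
def bounce_back(events, text, agent_loop)
  snapshot = events.dup
  events << { role: :user, content: text }   # "persist" the user event
  agent_loop.deliver!                        # first LLM call; may raise
rescue StandardError
  events.replace(snapshot)                   # roll the user event back
  raise
end
```

A successful deliver! leaves the user event in place for the subsequent #run call; a failed one leaves no orphaned event behind for a retry to duplicate.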

#finalize ⇒ Object

Cleans up the underlying ShellSession PTY and resources. Safe to call multiple times; subsequent calls are no-ops.



# File 'lib/agent_loop.rb', line 124

def finalize
  @shell_session&.finalize
end

#process(input) ⇒ String?

Runs the agent loop for a single user input.

Persists the user event directly (the global Persister skips non-pending user messages because AgentRequestJob owns their lifecycle). Then emits a bus notification and delegates to #run. On error emits Events::AgentMessage with the error text.

Parameters:

  • input (String)

    raw user input

Returns:

  • (String, nil)

    the agent’s response text, or nil for blank input



# File 'lib/agent_loop.rb', line 52

def process(input)
  text = input.to_s.strip
  return if text.empty?

  persist_user_event(text)
  Events::Bus.emit(Events::UserMessage.new(content: text, session_id: @session.id))
  run
rescue => error
  error_message = "#{error.class}: #{error.message}"
  Events::Bus.emit(Events::AgentMessage.new(content: error_message, session_id: @session.id))
  error_message
end

#run ⇒ String?

Runs the LLM tool-use loop on persisted session messages.

When a cached first response exists (from #deliver!), continues from that response without a redundant API call. Otherwise makes a fresh call — used for pending message processing and the standard path.

Lets errors propagate — designed for callers like AgentRequestJob that handle retries and need errors to bubble up.

Returns:

  • (String, nil)

    the agent’s response text, or nil when interrupted

Raises:



# File 'lib/agent_loop.rb', line 102

def run
  @client ||= LLM::Client.new
  @registry ||= build_tool_registry

  messages = @session.messages_for_llm
  options = build_llm_options

  first_resp = @first_response
  @first_response = nil

  response = @client.chat_with_tools(
    messages, registry: @registry, session_id: @session.id,
    first_response: first_resp, **options
  )
  return unless response

  Events::Bus.emit(Events::AgentMessage.new(content: response, session_id: @session.id))
  response
end
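
The first-response handoff between #deliver! and #run can be illustrated with a stripped-down stand-in. CachingLoop and its client interface are hypothetical, reduced to the caching behavior only; they are not the real AgentLoop API.

```ruby
# Minimal model of the @first_response cache: #deliver! performs the
# expensive call and stores the result; the next #run consumes the
# cached response instead of repeating the call, then clears the cache.
class CachingLoop
  def initialize(client)
    @client = client
    @first_response = nil
  end

  def deliver!
    @first_response = @client.create_message
  end

  def run
    first = @first_response
    @first_response = nil            # consume the cache exactly once
    first || @client.create_message  # plain path: fresh API call
  end
end
```

Pairing deliver! with run costs a single API call; a later run with an empty cache makes a fresh call, matching both the retry-safe background-job path and the standard path.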