Class: LLM::Client

Inherits:
Object
Defined in:
lib/llm/client.rb

Overview

Convenience layer over Providers::Anthropic for sending messages and handling tool execution loops. Supports both simple text chat and multi-turn tool calling via the Anthropic tool use protocol.

Examples:

Simple chat (no tools)

client = LLM::Client.new
client.chat([{role: "user", content: "Say hello"}])
# => "Hello! How can I help you today?"

Chat with tools

registry = Tools::Registry.new
registry.register(Tools::WebGet)
client.chat_with_tools(messages, registry: registry, session_id: session.id)

Constant Summary

INTERRUPT_MESSAGE = "Stopped by user"

Synthetic tool_result message content used when a tool is skipped due to a user interrupt.

Instance Attribute Summary

Instance Method Summary

Constructor Details

#initialize(model: Anima::Settings.model, max_tokens: Anima::Settings.max_tokens, provider: nil, logger: nil) ⇒ Client

Returns a new instance of Client.

Parameters:

  • model (String) (defaults to: Anima::Settings.model)

    Anthropic model identifier (default from Settings)

  • max_tokens (Integer) (defaults to: Anima::Settings.max_tokens)

    maximum tokens in the response (default from Settings)

  • provider (Providers::Anthropic, nil) (defaults to: nil)

    injectable provider instance; defaults to a new Providers::Anthropic using credentials

  • logger (Logger, nil) (defaults to: nil)

    optional logger for tool call tracing



# File 'lib/llm/client.rb', line 35

def initialize(model: Anima::Settings.model, max_tokens: Anima::Settings.max_tokens, provider: nil, logger: nil)
  @provider = build_provider(provider)
  @model = model
  @max_tokens = max_tokens
  @logger = logger
end
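Because the provider is injectable, tests can swap in a stub that mimics the Anthropic response shape. A minimal sketch, assuming the duck type implied by the `create_message` call in #chat (`StubProvider` is hypothetical, not part of the library):

```ruby
# Hypothetical stub: any object responding to create_message with these
# keywords can serve as the provider. It returns a canned response in the
# assumed Anthropic shape (stop_reason plus an array of content blocks).
class StubProvider
  def create_message(model:, messages:, max_tokens:, **options)
    {"stop_reason" => "end_turn",
     "content" => [{"type" => "text", "text" => "stubbed reply"}]}
  end
end

# client = LLM::Client.new(provider: StubProvider.new)
# client.chat([{role: "user", content: "hi"}])
```

This keeps unit tests offline: no credentials are read because `build_provider` receives a non-nil instance.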

Instance Attribute Details

#max_tokens ⇒ Integer (readonly)

Returns maximum tokens in the response.

Returns:

  • (Integer)

    maximum tokens in the response



# File 'lib/llm/client.rb', line 28

def max_tokens
  @max_tokens
end

#model ⇒ String (readonly)

Returns the model identifier used for API calls.

Returns:

  • (String)

    the model identifier used for API calls



# File 'lib/llm/client.rb', line 25

def model
  @model
end

#provider ⇒ Providers::Anthropic (readonly)

Returns the underlying API provider.

Returns:

  • (Providers::Anthropic)

    the underlying API provider

# File 'lib/llm/client.rb', line 22

def provider
  @provider
end

Instance Method Details

#chat(messages, **options) ⇒ String

Send messages to the LLM and return the assistant’s text response.

Parameters:

  • messages (Array<Hash>)

    conversation messages, each with :role and :content

  • options (Hash)

    additional API parameters (e.g. system:, temperature:)

Returns:

  • (String)

    the assistant’s response text

Raises:



# File 'lib/llm/client.rb', line 49

def chat(messages, **options)
  response = provider.create_message(
    model: model,
    messages: messages,
    max_tokens: max_tokens,
    **options
  )

  extract_text(response)
end
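extract_text is private and not shown on this page; a plausible sketch, assuming the Anthropic response shape used elsewhere in this class (an array of content blocks with "type" and "text" keys — the library's actual implementation may differ):

```ruby
# Sketch: concatenate the text of all "text" content blocks, ignoring
# other block types such as "tool_use" (assumed response shape).
def extract_text(response)
  (response["content"] || [])
    .select { |block| block["type"] == "text" }
    .map { |block| block["text"] }
    .join
end

response = {
  "stop_reason" => "end_turn",
  "content" => [{"type" => "text", "text" => "Hello! How can I help you today?"}]
}
extract_text(response)
# => "Hello! How can I help you today?"
```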

#chat_with_tools(messages, registry:, session_id:, **options) ⇒ String?

Send messages with tool support. Runs the full tool execution loop: call LLM, execute any requested tools, feed results back, repeat until the LLM produces a final text response.

Emits Events::ToolCall and Events::ToolResponse events for each tool interaction so they’re persisted and visible in the event stream.

When the user interrupts via Escape, remaining tools receive synthetic “Stopped by user” results and the loop exits without another LLM call.
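Illustratively, the synthetic results for skipped tools would take the standard tool_result shape, with INTERRUPT_MESSAGE as the content (the exact keys emitted by execute_tools are assumed from the Anthropic tool use protocol):

```ruby
INTERRUPT_MESSAGE = "Stopped by user"

# One synthetic tool_result per skipped tool_use block, so the conversation
# history stays well-formed: every tool_use id gets a matching result.
skipped_ids = ["toolu_02", "toolu_03"]  # hypothetical ids
synthetic_results = skipped_ids.map do |id|
  {"type" => "tool_result", "tool_use_id" => id, "content" => INTERRUPT_MESSAGE}
end
```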

Parameters:

  • messages (Array<Hash>)

    conversation messages in Anthropic format

  • registry (Tools::Registry)

    registered tools to make available

  • session_id (Integer, String)

    session ID for emitted events

  • options (Hash)

    additional API parameters (e.g. system:)

Returns:

  • (String, nil)

    the assistant’s final text response, or nil when interrupted

Raises:



# File 'lib/llm/client.rb', line 76

def chat_with_tools(messages, registry:, session_id:, **options)
  messages = messages.dup
  rounds = 0

  loop do
    rounds += 1
    max_rounds = Anima::Settings.max_tool_rounds
    if rounds > max_rounds
      return "[Tool loop exceeded #{max_rounds} rounds — halting]"
    end

    response = provider.create_message(
      model: model,
      messages: messages,
      max_tokens: max_tokens,
      tools: registry.schemas,
      **options
    )

    log(:debug, "stop_reason=#{response["stop_reason"]} content_types=#{(response["content"] || []).map { |b| b["type"] }.join(",")}")

    if response["stop_reason"] == "tool_use"
      tool_results = execute_tools(response, registry, session_id)

      messages += [
        {role: "assistant", content: response["content"]},
        {role: "user", content: tool_results}
      ]

      if interrupted?(session_id)
        clear_interrupt!(session_id)
        return nil
      end
    else
      return extract_text(response)
    end
  end
end
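The two messages appended each round follow the Anthropic tool use protocol: the assistant turn carries the response's content (including its tool_use blocks) verbatim, and the tool results come back as a user turn keyed by tool_use_id. A sketch of one round's shapes (ids, tool name, and result content are made up):

```ruby
messages = [{role: "user", content: "Fetch https://example.com"}]

# Assistant turn as returned by the API; content is passed through verbatim.
assistant_content = [
  {"type" => "tool_use", "id" => "toolu_01", "name" => "web_get",
   "input" => {"url" => "https://example.com"}}
]

# Tool results are sent back as a user turn, matched by tool_use_id.
tool_results = [
  {"type" => "tool_result", "tool_use_id" => "toolu_01",
   "content" => "<html>example</html>"}
]

messages += [
  {role: "assistant", content: assistant_content},
  {role: "user", content: tool_results}
]
```

After this round the loop calls the API again with the grown `messages`; the model either requests more tools (stop_reason "tool_use") or produces the final text.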