Class: LLM::Client

Inherits: Object
Defined in:
lib/llm/client.rb

Overview

Convenience layer over Providers::Anthropic for sending messages and handling tool execution loops. Supports both simple text chat and multi-turn tool calling via the Anthropic tool use protocol.

Examples:

Simple chat (no tools)

client = LLM::Client.new
client.chat([{role: "user", content: "Say hello"}])
# => "Hello! How can I help you today?"

Chat with tools

registry = Tools::Registry.new
registry.register(Tools::WebGet)
client.chat_with_tools(messages, registry: registry, session_id: session.id)

Instance Attribute Summary

Instance Method Summary

Constructor Details

#initialize(model: Anima::Settings.model, max_tokens: Anima::Settings.max_tokens, provider: nil, logger: nil) ⇒ Client

Returns a new instance of Client.

Parameters:

  • model (String) (defaults to: Anima::Settings.model)

    Anthropic model identifier (default from Settings)

  • max_tokens (Integer) (defaults to: Anima::Settings.max_tokens)

    maximum tokens in the response (default from Settings)

  • provider (Providers::Anthropic, nil) (defaults to: nil)

    injectable provider instance; defaults to a new Providers::Anthropic using credentials

  • logger (Logger, nil) (defaults to: nil)

    optional logger for tool call tracing



# File 'lib/llm/client.rb', line 32

def initialize(model: Anima::Settings.model, max_tokens: Anima::Settings.max_tokens, provider: nil, logger: nil)
  @provider = build_provider(provider)
  @model = model
  @max_tokens = max_tokens
  @logger = logger
end
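
Because the provider is injectable, a test double can stand in for Providers::Anthropic. A minimal sketch — StubProvider is a hypothetical name, and the only assumption is that Client calls #create_message with the keywords shown in the constructor above:

```ruby
# Hypothetical stub provider for tests (not part of the library):
# Client only needs an object that responds to #create_message and
# returns an Anthropic-style response hash.
class StubProvider
  def create_message(model:, messages:, max_tokens:, **options)
    {
      "stop_reason" => "end_turn",
      "content" => [{"type" => "text", "text" => "stubbed reply"}]
    }
  end
end

# In a test it would be injected as: LLM::Client.new(provider: StubProvider.new)
response = StubProvider.new.create_message(model: "test", messages: [], max_tokens: 8)
puts response["content"].first["text"]  # => stubbed reply
```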

Instance Attribute Details

#max_tokens ⇒ Integer (readonly)

Returns maximum tokens in the response.

Returns:

  • (Integer)

    maximum tokens in the response



# File 'lib/llm/client.rb', line 25

def max_tokens
  @max_tokens
end

#model ⇒ String (readonly)

Returns the model identifier used for API calls.

Returns:

  • (String)

    the model identifier used for API calls



# File 'lib/llm/client.rb', line 22

def model
  @model
end

#provider ⇒ Providers::Anthropic (readonly)

Returns the underlying API provider.

Returns:

  • (Providers::Anthropic)

    the underlying API provider

# File 'lib/llm/client.rb', line 19

def provider
  @provider
end

Instance Method Details

#chat(messages, **options) ⇒ String

Send messages to the LLM and return the assistant’s text response.

Parameters:

  • messages (Array<Hash>)

    conversation messages, each with :role and :content

  • options (Hash)

    additional API parameters (e.g. system:, temperature:)

Returns:

  • (String)

    the assistant’s response text

Raises:



# File 'lib/llm/client.rb', line 46

def chat(messages, **options)
  response = provider.create_message(
    model: model,
    messages: messages,
    max_tokens: max_tokens,
    **options
  )

  extract_text(response)
end
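
The response returned by create_message follows the Anthropic Messages API shape, with content as an array of typed blocks. extract_text is private, but a sketch of the idea — the helper name and the exact joining behavior are assumptions, not the library's implementation:

```ruby
# Hypothetical helper mirroring what a private extract_text likely does:
# collect the "text" blocks from an Anthropic-style response and join them.
def extract_text_sketch(response)
  (response["content"] || [])
    .select { |block| block["type"] == "text" }
    .map { |block| block["text"] }
    .join
end

response = {"content" => [{"type" => "text", "text" => "Hello!"}]}
puts extract_text_sketch(response)  # => Hello!
```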

#chat_with_tools(messages, registry:, session_id:, **options) ⇒ String

Send messages with tool support. Runs the full tool execution loop: call LLM, execute any requested tools, feed results back, repeat until the LLM produces a final text response.

Emits Events::ToolCall and Events::ToolResponse events for each tool interaction so they’re persisted and visible in the event stream.

Parameters:

  • messages (Array<Hash>)

    conversation messages in Anthropic format

  • registry (Tools::Registry)

    registered tools to make available

  • session_id (Integer, String)

    session ID for emitted events

  • options (Hash)

    additional API parameters (e.g. system:)

Returns:

  • (String)

    the assistant’s final text response

Raises:



# File 'lib/llm/client.rb', line 70

def chat_with_tools(messages, registry:, session_id:, **options)
  messages = messages.dup
  rounds = 0

  loop do
    rounds += 1
    max_rounds = Anima::Settings.max_tool_rounds
    if rounds > max_rounds
      return "[Tool loop exceeded #{max_rounds} rounds — halting]"
    end

    response = provider.create_message(
      model: model,
      messages: messages,
      max_tokens: max_tokens,
      tools: registry.schemas,
      **options
    )

    log(:debug, "stop_reason=#{response["stop_reason"]} content_types=#{(response["content"] || []).map { |b| b["type"] }.join(",")}")

    if response["stop_reason"] == "tool_use"
      tool_results = execute_tools(response, registry, session_id)

      messages += [
        {role: "assistant", content: response["content"]},
        {role: "user", content: tool_results}
      ]
    else
      return extract_text(response)
    end
  end
end
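
Per the Anthropic tool use protocol, each executed tool's output is fed back as a tool_result content block inside a user message, keyed to the id of the assistant's tool_use block. A sketch of the shape execute_tools presumably returns — the tool_use_id value and the result payload here are illustrative:

```ruby
# Anthropic-style tool_result blocks: one per tool_use block in the
# assistant turn, matched by tool_use_id, with the tool's output as a string.
tool_results = [
  {
    type: "tool_result",
    tool_use_id: "toolu_01ABC",               # illustrative id copied from the tool_use block
    content: '{"status":200,"body":"..."}'    # illustrative tool output
  }
]

# This is the follow-up message the loop appends before calling the API again.
follow_up = {role: "user", content: tool_results}
puts follow_up[:content].first[:type]  # => tool_result
```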