Class: LLM::Client
- Inherits: Object
- Defined in: lib/llm/client.rb
Overview
Convenience layer over Providers::Anthropic for sending messages and handling tool execution loops. Supports both simple text chat and multi-turn tool calling via the Anthropic tool use protocol.
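The messages arrays accepted throughout this API follow the Anthropic messages format: an ordered list of role/content hashes. A minimal sketch (the content values here are illustrative):

```ruby
# Minimal shape of a conversation passed to #chat or #chat_with_tools:
# alternating role/content hashes in the Anthropic messages format.
messages = [
  {role: "user", content: "What is 2 + 2?"},
  {role: "assistant", content: "4"},
  {role: "user", content: "And doubled?"}
]
puts messages.length  # prints 3
```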
Constant Summary

- INTERRUPT_MESSAGE = "Stopped by user"
  Synthetic tool_result message sent when a tool is skipped due to a user interrupt.
Instance Attribute Summary

- #max_tokens ⇒ Integer (readonly)
  Maximum number of tokens in the response.
- #model ⇒ String (readonly)
  The model identifier used for API calls.
- #provider ⇒ Providers::Anthropic (readonly)
  The underlying API provider.
Instance Method Summary

- #chat(messages, **options) ⇒ String
  Send messages to the LLM and return the assistant's text response.
- #chat_with_tools(messages, registry:, session_id:, first_response: nil, **options) ⇒ String?
  Send messages with tool support.
- #initialize(model: Anima::Settings.model, max_tokens: Anima::Settings.max_tokens, provider: nil, logger: nil) ⇒ Client (constructor)
  A new instance of Client.
Constructor Details
#initialize(model: Anima::Settings.model, max_tokens: Anima::Settings.max_tokens, provider: nil, logger: nil) ⇒ Client
Returns a new instance of Client.
# File 'lib/llm/client.rb', line 35

def initialize(model: Anima::Settings.model, max_tokens: Anima::Settings.max_tokens, provider: nil, logger: nil)
  @provider = build_provider(provider)
  @model = model
  @max_tokens = max_tokens
  @logger = logger
end
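build_provider is a private helper referenced by the constructor but not shown on this page. A hedged sketch of the injection pattern the signature suggests, using a stand-in class (DemoClient and the default value are illustrative, not the real implementation):

```ruby
# Sketch of the dependency-injection pattern the constructor suggests:
# use an explicitly injected provider when given, else build a default.
class DemoClient
  attr_reader :provider

  def initialize(provider: nil)
    @provider = build_provider(provider)
  end

  private

  # Stand-in for the real helper, which presumably returns
  # a Providers::Anthropic instance when no provider is injected.
  def build_provider(provider)
    provider || :default_anthropic_provider
  end
end

puts DemoClient.new(provider: :custom).provider  # prints custom
puts DemoClient.new.provider                     # prints default_anthropic_provider
```

Injecting the provider this way keeps the client testable: specs can pass a stub provider instead of hitting the real API.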
Instance Attribute Details
#max_tokens ⇒ Integer (readonly)
Returns maximum tokens in the response.
# File 'lib/llm/client.rb', line 28

def max_tokens
  @max_tokens
end
#model ⇒ String (readonly)
Returns the model identifier used for API calls.
# File 'lib/llm/client.rb', line 25

def model
  @model
end
#provider ⇒ Providers::Anthropic (readonly)
Returns the underlying API provider.
# File 'lib/llm/client.rb', line 22

def provider
  @provider
end
Instance Method Details
#chat(messages, **options) ⇒ String
Send messages to the LLM and return the assistant’s text response.
# File 'lib/llm/client.rb', line 49

def chat(messages, **options)
  # "messages" is an assumed name for the Anthropic provider's send method
  response = provider.messages(
    model: model,
    messages: messages,
    max_tokens: max_tokens,
    **options
  )
  extract_text(response)
end
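extract_text is a private helper used by #chat (and by #chat_with_tools below) but not documented on this page. A plausible sketch, assuming an Anthropic-style response hash containing a content array of typed blocks:

```ruby
# Hypothetical sketch of extract_text: concatenate the text blocks
# from an Anthropic-style response hash. Not the actual implementation.
def extract_text(response)
  (response["content"] || [])
    .select { |block| block["type"] == "text" }
    .map { |block| block["text"] }
    .join
end

resp = {"content" => [{"type" => "text", "text" => "Hello!"}]}
puts extract_text(resp)  # prints Hello!
```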
#chat_with_tools(messages, registry:, session_id:, first_response: nil, **options) ⇒ String?
Send messages with tool support. Runs the full tool execution loop: call the LLM, execute any requested tools, feed the results back, and repeat until the LLM produces a final text response.
Emits Events::ToolCall and Events::ToolResponse events for each tool interaction so they’re persisted and visible in the event stream.
When the user interrupts via Escape, remaining tools receive synthetic “Stopped by user” results and the loop exits without another LLM call.
# File 'lib/llm/client.rb', line 79

def chat_with_tools(messages, registry:, session_id:, first_response: nil, **options)
  messages = messages.dup
  rounds = 0

  loop do
    rounds += 1
    max_rounds = Anima::Settings.max_tool_rounds
    if rounds > max_rounds
      return "[Tool loop exceeded #{max_rounds} rounds — halting]"
    end

    response = if first_response && rounds == 1
      first_response
    else
      # "messages" is an assumed name for the Anthropic provider's send method
      provider.messages(
        model: model,
        messages: messages,
        max_tokens: max_tokens,
        tools: registry.schemas,
        **options
      )
    end

    log(:debug, "stop_reason=#{response["stop_reason"]} content_types=#{(response["content"] || []).map { |b| b["type"] }.join(",")}")

    if response["stop_reason"] == "tool_use"
      tool_results = execute_tools(response, registry, session_id)
      messages += [
        {role: "assistant", content: response["content"]},
        {role: "user", content: tool_results}
      ]

      if interrupted?(session_id)
        clear_interrupt!(session_id)
        return nil
      end
    else
      return extract_text(response)
    end
  end
end
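The loop described above can be sketched end to end with a stubbed provider. Everything here (fake_provider_call, run_tool_loop, the echo tool) is illustrative; only the message shapes and the stop_reason / tool_use / tool_result protocol come from the documented behavior:

```ruby
# Stubbed provider: first call requests a tool, second call returns text.
def fake_provider_call(messages)
  already_ran_tools = messages.any? { |m| m[:role] == "user" && m[:content].is_a?(Array) }
  if already_ran_tools
    {"stop_reason" => "end_turn",
     "content" => [{"type" => "text", "text" => "done"}]}
  else
    {"stop_reason" => "tool_use",
     "content" => [{"type" => "tool_use", "id" => "t1",
                    "name" => "echo", "input" => {"text" => "hi"}}]}
  end
end

# Minimal version of the tool loop: execute requested tools, append the
# assistant turn and a user turn of tool_result blocks, repeat.
def run_tool_loop(messages, max_rounds: 5)
  rounds = 0
  loop do
    rounds += 1
    return "[halted]" if rounds > max_rounds

    response = fake_provider_call(messages)
    if response["stop_reason"] == "tool_use"
      results = response["content"]
        .select { |b| b["type"] == "tool_use" }
        .map { |b| {"type" => "tool_result", "tool_use_id" => b["id"],
                    "content" => b["input"]["text"]} }  # the "echo" tool just returns its input
      messages += [
        {role: "assistant", content: response["content"]},
        {role: "user", content: results}
      ]
    else
      return response["content"].find { |b| b["type"] == "text" }["text"]
    end
  end
end

puts run_tool_loop([{role: "user", content: "say hi"}])  # prints done
```

Note how each tool_result block carries the tool_use_id of the request it answers; this pairing is what lets the model match results to its tool calls, and it is also where the client substitutes INTERRUPT_MESSAGE when a tool is skipped.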