Class: LLM::Client
- Inherits: Object
- Defined in: lib/llm/client.rb
Overview
Convenience layer over Providers::Anthropic for sending messages and handling tool execution loops. Supports both simple text chat and multi-turn tool calling via the Anthropic tool use protocol.
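For orientation, the `messages` argument accepted by #chat and #chat_with_tools follows the Anthropic messages format: an ordered array of role/content hashes. A standalone sketch with illustrative values (plain Ruby, no API call):

```ruby
# A conversation in the role/content shape that #chat and #chat_with_tools
# accept. Roles alternate between "user" and "assistant"; the final entry
# is the new user turn the model should respond to.
messages = [
  {role: "user", content: "What is the capital of France?"},
  {role: "assistant", content: "Paris."},
  {role: "user", content: "And roughly how large is it?"}
]

puts messages.last[:role] # => user
```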
Instance Attribute Summary
- #max_tokens ⇒ Integer (readonly): Maximum tokens in the response.
- #model ⇒ String (readonly): The model identifier used for API calls.
- #provider ⇒ Providers::Anthropic (readonly): The underlying API provider.
Instance Method Summary
- #chat(messages, **options) ⇒ String: Send messages to the LLM and return the assistant's text response.
- #chat_with_tools(messages, registry:, session_id:, **options) ⇒ String: Send messages with tool support.
- #initialize(model: Anima::Settings.model, max_tokens: Anima::Settings.max_tokens, provider: nil, logger: nil) ⇒ Client (constructor): A new instance of Client.
Constructor Details
#initialize(model: Anima::Settings.model, max_tokens: Anima::Settings.max_tokens, provider: nil, logger: nil) ⇒ Client
Returns a new instance of Client.
# File 'lib/llm/client.rb', line 32

def initialize(model: Anima::Settings.model, max_tokens: Anima::Settings.max_tokens, provider: nil, logger: nil)
  @provider = build_provider(provider)
  @model = model
  @max_tokens = max_tokens
  @logger = logger
end
Instance Attribute Details
#max_tokens ⇒ Integer (readonly)
Returns maximum tokens in the response.
# File 'lib/llm/client.rb', line 25

def max_tokens
  @max_tokens
end
#model ⇒ String (readonly)
Returns the model identifier used for API calls.
# File 'lib/llm/client.rb', line 22

def model
  @model
end
#provider ⇒ Providers::Anthropic (readonly)
Returns the underlying API provider.
# File 'lib/llm/client.rb', line 19

def provider
  @provider
end
Instance Method Details
#chat(messages, **options) ⇒ String
Send messages to the LLM and return the assistant’s text response.
# File 'lib/llm/client.rb', line 46

def chat(messages, **options)
  response = provider.messages(
    model: model,
    messages: messages,
    max_tokens: max_tokens,
    **options
  )
  extract_text(response)
end
#chat_with_tools(messages, registry:, session_id:, **options) ⇒ String
Send messages with tool support. Runs the full tool execution loop: call LLM, execute any requested tools, feed results back, repeat until the LLM produces a final text response.
Emits Events::ToolCall and Events::ToolResponse events for each tool interaction so they’re persisted and visible in the event stream.
# File 'lib/llm/client.rb', line 70

def chat_with_tools(messages, registry:, session_id:, **options)
  messages = messages.dup
  rounds = 0

  loop do
    rounds += 1
    max_rounds = Anima::Settings.max_tool_rounds
    if rounds > max_rounds
      return "[Tool loop exceeded #{max_rounds} rounds — halting]"
    end

    response = provider.messages(
      model: model,
      messages: messages,
      max_tokens: max_tokens,
      tools: registry.schemas,
      **options
    )

    log(:debug, "stop_reason=#{response["stop_reason"]} content_types=#{(response["content"] || []).map { |b| b["type"] }.join(",")}")

    if response["stop_reason"] == "tool_use"
      tool_results = execute_tools(response, registry, session_id)
      messages += [
        {role: "assistant", content: response["content"]},
        {role: "user", content: tool_results}
      ]
    else
      return extract_text(response)
    end
  end
end
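To make the loop's bookkeeping concrete, here is a standalone sketch of how one `tool_use` round grows the conversation. The block shapes follow the Anthropic tool use protocol; the tool name, id, and result values are illustrative, not from this codebase:

```ruby
# Starting conversation.
messages = [{role: "user", content: "What's the weather in Paris?"}]

# Shape of response["content"] when stop_reason == "tool_use": the
# assistant requests a tool by name, with structured input.
assistant_blocks = [
  {"type" => "tool_use", "id" => "toolu_01", "name" => "get_weather",
   "input" => {"city" => "Paris"}}
]

# Tool results are keyed by tool_use_id and sent back as a user turn.
tool_results = [
  {"type" => "tool_result", "tool_use_id" => "toolu_01",
   "content" => "18 degrees C, clear"}
]

# The loop appends both turns, then calls the API again.
messages += [
  {role: "assistant", content: assistant_blocks},
  {role: "user", content: tool_results}
]

puts messages.length # => 3
```

Each round therefore adds exactly two entries until the model stops requesting tools, at which point the final text response is extracted and returned.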