Class: LlmConductor::Clients::BaseClient
- Inherits: Object
- Includes: Prompts
- Defined in: lib/llm_conductor/clients/base_client.rb
Overview
Base client class providing common functionality for all LLM providers including prompt building, token counting, and response formatting.
Direct Known Subclasses
AnthropicClient, GeminiClient, GptClient, GroqClient, OllamaClient, OpenrouterClient, ZaiClient
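Judging from the `#generate` and `#generate_simple` sources below, each subclass supplies the provider-specific `generate_content(prompt)` call while BaseClient handles the shared flow. The sketch below is illustrative only: the `EchoClient` name is hypothetical, and the inline BaseClient stand-in (which skips token counting and Response building) exists just so the example runs without the gem.

```ruby
# Illustrative stand-in for LlmConductor::Clients::BaseClient; the real
# class also counts tokens, logs, and wraps output in a Response object.
module LlmConductor
  module Clients
    class BaseClient
      attr_reader :model, :type

      def initialize(model:, type:)
        @model = model
        @type = type
      end

      # Simplified: the real method also computes token counts and metadata.
      def generate_simple(prompt:)
        generate_content(prompt)
      end
    end

    # Hypothetical provider client; real subclasses (GptClient, etc.)
    # would call their vendor's API inside generate_content.
    class EchoClient < BaseClient
      def generate_content(prompt)
        "echo(#{model}): #{prompt}"
      end
    end
  end
end

client = LlmConductor::Clients::EchoClient.new(model: 'demo-1', type: :summarize_text)
puts client.generate_simple(prompt: 'Hello')
```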
Instance Attribute Summary
- #model ⇒ Object (readonly): Returns the value of attribute model.
- #type ⇒ Object (readonly): Returns the value of attribute type.
Instance Method Summary
- #generate(data:) ⇒ Object
- #generate_simple(prompt:) ⇒ Object: Simple generation method that accepts a direct prompt and returns a Response object.
- #initialize(model:, type:) ⇒ BaseClient (constructor): A new instance of BaseClient.
Methods included from Prompts
#prompt_analyze_content, #prompt_classify_content, #prompt_custom, #prompt_extract_links, #prompt_summarize_text
Constructor Details
#initialize(model:, type:) ⇒ BaseClient
Returns a new instance of BaseClient.
# File 'lib/llm_conductor/clients/base_client.rb', line 16

def initialize(model:, type:)
  @model = model
  @type = type
end
Instance Attribute Details
#model ⇒ Object (readonly)
Returns the value of attribute model.
# File 'lib/llm_conductor/clients/base_client.rb', line 14

def model
  @model
end
#type ⇒ Object (readonly)
Returns the value of attribute type.
# File 'lib/llm_conductor/clients/base_client.rb', line 14

def type
  @type
end
Instance Method Details
#generate(data:) ⇒ Object
# File 'lib/llm_conductor/clients/base_client.rb', line 21

def generate(data:)
  prompt = build_prompt(data)
  input_tokens = calculate_tokens(prompt)
  output_text = generate_content(prompt)
  output_tokens = calculate_tokens(output_text || '')

  # Logging AI request metadata if logger is set
  configuration.logger&.debug(
    "Vendor: #{vendor_name}, Model: #{@model} " \
    "Output_tokens: #{output_tokens} Input_tokens: #{input_tokens}"
  )

  build_response(output_text, input_tokens, output_tokens, { prompt: })
rescue StandardError => e
  build_error_response(e)
end
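The flow above can be sketched in isolation: build a prompt, count input tokens, call the provider, count output tokens, then assemble a response. Everything here is a stand-in: `calculate_tokens` is a naive whitespace count (the gem's actual tokenizer is not shown in this excerpt), `generate_content` is a dummy provider call, and the response is a plain Hash rather than the gem's Response object.

```ruby
# Naive token counter, for illustration only; the real calculate_tokens
# presumably uses a model-aware tokenizer.
def calculate_tokens(text)
  text.split.size
end

# Dummy provider call standing in for a subclass's generate_content.
def generate_content(prompt)
  "summary of: #{prompt}"
end

# Plain-Hash stand-in for the gem's build_response / Response object.
def build_response(output_text, input_tokens, output_tokens, metadata = {})
  {
    output: output_text,
    input_tokens: input_tokens,
    output_tokens: output_tokens,
    metadata: metadata
  }
end

prompt        = 'Summarize this text'
input_tokens  = calculate_tokens(prompt)
output_text   = generate_content(prompt)
output_tokens = calculate_tokens(output_text || '')

response = build_response(output_text, input_tokens, output_tokens, { prompt: prompt })
p response
```

Note that `#generate` also stores the built prompt in the response metadata, which `#generate_simple` below does not.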
#generate_simple(prompt:) ⇒ Object
Simple generation method that accepts a direct prompt and returns a Response object.
# File 'lib/llm_conductor/clients/base_client.rb', line 39

def generate_simple(prompt:)
  input_tokens = calculate_tokens(prompt)
  output_text = generate_content(prompt)
  output_tokens = calculate_tokens(output_text || '')

  # Logging AI request metadata if logger is set
  configuration.logger&.debug(
    "Vendor: #{vendor_name}, Model: #{@model} " \
    "Output_tokens: #{output_tokens} Input_tokens: #{input_tokens}"
  )

  build_response(output_text, input_tokens, output_tokens)
rescue StandardError => e
  build_error_response(e)
end
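Both `#generate` and `#generate_simple` rescue `StandardError` and return `build_error_response(e)`, so provider failures surface in the returned response rather than as raised exceptions. A minimal sketch of that pattern, with a plain Hash standing in for the gem's error Response and a deliberately failing provider call:

```ruby
# Stand-in provider call that always fails, to exercise the rescue path.
def generate_content(_prompt)
  raise StandardError, 'provider unavailable'
end

# Hash stand-in for the gem's build_error_response / error Response.
def build_error_response(error)
  { output: nil, error: error.message }
end

# Same rescue structure as the method above: the error is converted
# into an error response instead of propagating to the caller.
def generate_simple(prompt:)
  output_text = generate_content(prompt)
  { output: output_text, error: nil }
rescue StandardError => e
  build_error_response(e)
end

response = generate_simple(prompt: 'Hello')
p response
```

Because of this, callers typically inspect the returned response for an error instead of wrapping calls in their own begin/rescue.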