Class: LLM::Gemini

Inherits:
Provider
Includes:
Format
Defined in:
lib/llm/providers/gemini.rb,
lib/llm/providers/gemini/format.rb,
lib/llm/providers/gemini/error_handler.rb,
lib/llm/providers/gemini/response_parser.rb

Overview

The Gemini class implements a provider for [Gemini](https://ai.google.dev/)

Defined Under Namespace

Modules: Format, ResponseParser Classes: ErrorHandler

Constant Summary

HOST =
"generativelanguage.googleapis.com"

Instance Method Summary

Methods included from Format

#format

Methods inherited from Provider

#chat, #chat!, #inspect

Methods included from HTTPClient

#request

Constructor Details

#initialize(secret) ⇒ Gemini

Returns a new instance of Gemini.

Parameters:

  • secret (String)

    The secret key for authentication



# File 'lib/llm/providers/gemini.rb', line 17

def initialize(secret, **)
  super(secret, host: HOST, **)
end
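The constructor pins the Gemini host and forwards any remaining keyword arguments to `Provider#initialize`. A minimal self-contained sketch of this delegation pattern, with hypothetical class names standing in for `Provider` and `Gemini` (the gem's code uses Ruby 3.2's anonymous `**` splat, spelled `**rest` here for portability):

```ruby
class Base
  attr_reader :secret, :host

  # Accepts the secret, a required host, and absorbs any extra keywords.
  def initialize(secret, host:, **rest)
    @secret = secret
    @host   = host
  end
end

class Child < Base
  HOST = "generativelanguage.googleapis.com"

  # Pin the host and forward all remaining keywords to the parent.
  def initialize(secret, **rest)
    super(secret, host: HOST, **rest)
  end
end

child = Child.new("s3cret", timeout: 30) # :timeout is forwarded and absorbed by Base
child.host # => "generativelanguage.googleapis.com"
```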

Instance Method Details

#assistant_role ⇒ String

Returns the role of the assistant in the conversation. Usually “assistant” or “model”

Returns:

  • (String)

    Returns the role of the assistant in the conversation. Usually “assistant” or “model”



# File 'lib/llm/providers/gemini.rb', line 49

def assistant_role
  "model"
end

#complete(prompt, role = :user, **params) ⇒ LLM::Response::Completion

Parameters:

  • prompt (String)

    The input prompt to be completed

  • role (Symbol) (defaults to: :user)

    The role of the prompt (e.g. :user, :system)

Returns:

  • (LLM::Response::Completion)

    The completion response



# File 'lib/llm/providers/gemini.rb', line 37

def complete(prompt, role = :user, **params)
  params   = {model: "gemini-1.5-flash"}.merge!(params)
  path     = ["/v1beta/models/#{params.delete(:model)}", "generateContent?key=#{@secret}"].join(":")
  req      = Net::HTTP::Post.new(path, headers)
  messages = [*(params.delete(:messages) || []), LLM::Message.new(role, prompt)]
  req.body = JSON.dump({contents: format(messages)})
  res      = request(@http, req)
  Response::Completion.new(res).extend(response_parser)
end
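The request path above follows the Gemini REST convention of joining the model resource and the action with `":"`, passing the API key as a query parameter. A minimal sketch of that assembly in isolation, with `"demo-key"` as a placeholder secret:

```ruby
# Mirror how #complete builds its endpoint path.
params = {model: "gemini-1.5-flash"}
secret = "demo-key" # placeholder, not a real key
path = ["/v1beta/models/#{params.delete(:model)}", "generateContent?key=#{secret}"].join(":")
path # => "/v1beta/models/gemini-1.5-flash:generateContent?key=demo-key"
```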

#embed(input, **params) ⇒ LLM::Response::Embedding

Parameters:

  • input (String, Array<String>)

    The input to embed

Returns:

  • (LLM::Response::Embedding)

    The embedding response

# File 'lib/llm/providers/gemini.rb', line 24

def embed(input, **params)
  path = ["/v1beta/models/text-embedding-004", "embedContent?key=#{@secret}"].join(":")
  req = Net::HTTP::Post.new(path, headers)
  req.body = JSON.dump({content: {parts: [{text: input}]}})
  res = request(@http, req)
  Response::Embedding.new(res).extend(response_parser)
end
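The request body above wraps the input in Gemini's `content`/`parts` envelope before serializing it to JSON. A self-contained sketch of that serialization step:

```ruby
require "json"

# Mirror how #embed serializes its input into the embedContent payload.
input = "The quick brown fox"
body  = JSON.dump({content: {parts: [{text: input}]}})
body # => '{"content":{"parts":[{"text":"The quick brown fox"}]}}'
```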

#models ⇒ Hash<String, LLM::Model>

Returns a hash of available models

Returns:

  • (Hash<String, LLM::Model>)

    Returns a hash of available models



# File 'lib/llm/providers/gemini.rb', line 55

def models
  @models ||= load_models!("gemini")
end