Class: LLM::Gemini
- Includes:
- Format
- Defined in:
- lib/llm/providers/gemini.rb,
lib/llm/providers/gemini/audio.rb,
lib/llm/providers/gemini/files.rb,
lib/llm/providers/gemini/format.rb,
lib/llm/providers/gemini/images.rb,
lib/llm/providers/gemini/models.rb,
lib/llm/providers/gemini/error_handler.rb,
lib/llm/providers/gemini/response_parser.rb
Overview
The Gemini class implements a provider for [Gemini](https://ai.google.dev/).
The Gemini provider can accept multiple kinds of input (text, images, audio, and video). Inputs can be provided inline via the prompt for files under 20MB, or via the Gemini Files API for files over 20MB.
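The 20MB threshold described above can be sketched as a simple routing rule. This helper is illustrative only (it is not part of the library's API): it shows which delivery path a file of a given size should take, per the overview.

```ruby
# Illustrative sketch, not part of llm.rb: route an input by size,
# following the documented rule -- inputs under 20MB may be sent
# inline with the prompt, larger files go through the Files API.
INLINE_LIMIT = 20 * 1024 * 1024 # 20MB

def delivery_method(file_size)
  file_size < INLINE_LIMIT ? :inline : :files_api
end

delivery_method(5 * 1024 * 1024)  # small file  => :inline
delivery_method(50 * 1024 * 1024) # large file  => :files_api
```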
Defined Under Namespace
Modules: Format, ResponseParser
Classes: Audio, ErrorHandler, Files, Images, Models
Constant Summary collapse
- HOST =
"generativelanguage.googleapis.com"
Instance Method Summary collapse
-
#assistant_role ⇒ String
Returns the role of the assistant in the conversation.
-
#audio ⇒ Object
Provides an interface to Gemini’s audio API.
-
#complete(prompt, role = :user, model: default_model, schema: nil, **params) ⇒ LLM::Response::Completion
Provides an interface to the chat completions API.
-
#default_model ⇒ String
Returns the default model for chat completions.
-
#embed(input, model: "text-embedding-004", **params) ⇒ LLM::Response::Embedding
Provides an embedding.
-
#files ⇒ Object
Provides an interface to Gemini’s file management API.
-
#images ⇒ see LLM::Gemini::Images
Provides an interface to Gemini’s image generation API.
-
#initialize(secret) ⇒ Gemini
constructor
A new instance of Gemini.
-
#models ⇒ Object
Provides an interface to Gemini’s models API.
Methods included from Format
Methods inherited from Provider
#chat, #chat!, #inspect, #respond, #respond!, #responses, #schema
Constructor Details
Instance Method Details
#assistant_role ⇒ String
Returns the role of the assistant in the conversation, usually “assistant” or “model”.
# File 'lib/llm/providers/gemini.rb', line 119

def assistant_role
  "model"
end
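Gemini names the assistant role "model" rather than "assistant", which is why #assistant_role returns "model". A hypothetical normalization helper (not part of the library) makes the mapping explicit:

```ruby
# Hypothetical helper, not from llm.rb: normalize a generic chat role
# to the name Gemini expects. The assistant role is called "model";
# other roles pass through unchanged.
def gemini_role(role)
  role.to_s == "assistant" ? "model" : role.to_s
end

gemini_role(:assistant) # => "model"
gemini_role(:user)      # => "user"
```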
#audio ⇒ Object
Provides an interface to Gemini’s audio API
# File 'lib/llm/providers/gemini.rb', line 91

def audio
  LLM::Gemini::Audio.new(self)
end
#complete(prompt, role = :user, model: default_model, schema: nil, **params) ⇒ LLM::Response::Completion
Provides an interface to the chat completions API
# File 'lib/llm/providers/gemini.rb', line 77

def complete(prompt, role = :user, model: default_model, schema: nil, **params)
  model = model.respond_to?(:id) ? model.id : model
  path = ["/v1beta/models/#{model}", "generateContent?key=#{@secret}"].join(":")
  req = Net::HTTP::Post.new(path, headers)
  messages = [*(params.delete(:messages) || []), LLM::Message.new(role, prompt)]
  body = JSON.dump({contents: format(messages)}.merge!(expand_schema(schema)))
  set_body_stream(req, StringIO.new(body))
  res = request(@http, req)
  Response::Completion.new(res).extend(response_parser)
end
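The request path built by #complete follows Gemini's endpoint scheme, where the model name and the API method are joined by a colon and the API key is passed as a query parameter. The snippet below rebuilds that path with placeholder values (the model and key here are examples, not defaults you should rely on):

```ruby
# Rebuilding the request path the same way #complete does, with
# placeholder values. Gemini endpoints take the form
# /v1beta/models/<model>:<method>?key=<secret>.
model  = "gemini-1.5-flash"
secret = "example-key" # placeholder, not a real credential
path = ["/v1beta/models/#{model}", "generateContent?key=#{secret}"].join(":")
path # => "/v1beta/models/gemini-1.5-flash:generateContent?key=example-key"
```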
#default_model ⇒ String
Returns the default model for chat completions
# File 'lib/llm/providers/gemini.rb', line 127

def default_model
  "gemini-1.5-flash"
end
#embed(input, model: "text-embedding-004", **params) ⇒ LLM::Response::Embedding
Provides an embedding
# File 'lib/llm/providers/gemini.rb', line 55

def embed(input, model: "text-embedding-004", **params)
  model = model.respond_to?(:id) ? model.id : model
  path = ["/v1beta/models/#{model}", "embedContent?key=#{@secret}"].join(":")
  req = Net::HTTP::Post.new(path, headers)
  req.body = JSON.dump({content: {parts: [{text: input}]}})
  res = request(@http, req)
  Response::Embedding.new(res).extend(response_parser)
end
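The JSON body that #embed sends wraps the input string in Gemini's content/parts structure. This standalone snippet reproduces that body for a sample input and shows how it round-trips:

```ruby
require "json"

# The body #embed constructs for a given input string: Gemini's
# embedContent endpoint expects {content: {parts: [{text: ...}]}}.
input = "hello world"
body  = JSON.dump({content: {parts: [{text: input}]}})
parsed = JSON.parse(body)
parsed["content"]["parts"].first["text"] # => "hello world"
```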
#files ⇒ Object
Provides an interface to Gemini’s file management API
# File 'lib/llm/providers/gemini.rb', line 106

def files
  LLM::Gemini::Files.new(self)
end
#images ⇒ see LLM::Gemini::Images
Provides an interface to Gemini’s image generation API
# File 'lib/llm/providers/gemini.rb', line 99

def images
  LLM::Gemini::Images.new(self)
end