Class: LLM::Gemini
- Includes:
- Format
- Defined in:
- lib/llm/providers/gemini.rb,
lib/llm/providers/gemini/audio.rb,
lib/llm/providers/gemini/files.rb,
lib/llm/providers/gemini/format.rb,
lib/llm/providers/gemini/images.rb,
lib/llm/providers/gemini/error_handler.rb,
lib/llm/providers/gemini/response_parser.rb
Overview
The Gemini class implements a provider for [Gemini](https://ai.google.dev/).
The Gemini provider accepts multiple kinds of input (text, images, audio, and video). Inputs can be provided inline via the prompt for files under 20MB, or via the Gemini Files API for files over 20MB.
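The 20MB threshold above can be sketched as a simple size check. This is an illustrative helper, not part of the llm.rb API; the constant and method names are hypothetical.

```ruby
# Hypothetical sketch of the inline-vs-Files-API decision described
# above; llm.rb does not expose these names.
INLINE_LIMIT = 20 * 1024 * 1024 # 20MB inline-data limit

def delivery_method(size_in_bytes)
  # Files at or under the limit can go inline with the prompt;
  # larger files go through the Gemini Files API.
  size_in_bytes <= INLINE_LIMIT ? :inline : :files_api
end
```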
Defined Under Namespace
Modules: Format, ResponseParser Classes: Audio, ErrorHandler, Files, Images
Constant Summary
- HOST = "generativelanguage.googleapis.com"
Instance Method Summary
- #assistant_role ⇒ String
Returns the role of the assistant in the conversation.
- #audio ⇒ Object
Provides an interface to Gemini’s audio API.
- #complete(prompt, role = :user, model: "gemini-1.5-flash", **params) ⇒ LLM::Response::Completion
Provides an interface to the chat completions API.
- #embed(input, model: "text-embedding-004", **params) ⇒ LLM::Response::Embedding
Provides an embedding.
- #files ⇒ Object
Provides an interface to Gemini’s file management API.
- #images ⇒ see LLM::Gemini::Images
Provides an interface to Gemini’s image generation API.
- #initialize(secret) ⇒ Gemini
constructor
Returns a new instance of Gemini.
- #models ⇒ Hash<String, LLM::Model>
Returns a hash of available models.
Methods included from Format
Methods inherited from Provider
#chat, #chat!, #inspect, #respond, #respond!, #responses
Methods included from HTTPClient
Constructor Details
#initialize(secret) ⇒ Gemini
Returns a new instance of Gemini.
Instance Method Details
#assistant_role ⇒ String
Returns the role of the assistant in the conversation. Usually “assistant” or “model”.
```ruby
# File 'lib/llm/providers/gemini.rb', line 105
def assistant_role
  "model"
end
```
#audio ⇒ Object
Provides an interface to Gemini’s audio API
```ruby
# File 'lib/llm/providers/gemini.rb', line 84
def audio
  LLM::Gemini::Audio.new(self)
end
```
#complete(prompt, role = :user, model: "gemini-1.5-flash", **params) ⇒ LLM::Response::Completion
Provides an interface to the chat completions API
```ruby
# File 'lib/llm/providers/gemini.rb', line 72
def complete(prompt, role = :user, model: "gemini-1.5-flash", **params)
  path = ["/v1beta/models/#{model}", "generateContent?key=#{@secret}"].join(":")
  req = Net::HTTP::Post.new(path, headers)
  messages = [*(params.delete(:messages) || []), LLM::Message.new(role, prompt)]
  req.body = JSON.dump({contents: format(messages)})
  res = request(@http, req)
  Response::Completion.new(res).extend(response_parser)
end
```
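Based on the source above, the request path and a single-message JSON body can be sketched in plain Ruby. The secret is a placeholder, and the `contents` shape shown is Gemini's generateContent message format, which the Format module presumably produces from the message list.

```ruby
require "json"

# Hypothetical values; in the real class, model comes from the method
# argument and the secret from the constructor.
model  = "gemini-1.5-flash"
secret = "PLACEHOLDER_KEY"

# The path joins the model resource and the RPC name with a colon.
path = ["/v1beta/models/#{model}", "generateContent?key=#{secret}"].join(":")

# A minimal single-message body in Gemini's contents/parts shape.
body = JSON.dump({contents: [{role: "user", parts: [{text: "Hello"}]}]})
```

The colon-joined path mirrors Gemini's REST convention of `models/<model>:<method>`.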
#embed(input, model: "text-embedding-004", **params) ⇒ LLM::Response::Embedding
Provides an embedding for the given input.
```ruby
# File 'lib/llm/providers/gemini.rb', line 54
def embed(input, model: "text-embedding-004", **params)
  path = ["/v1beta/models/#{model}", "embedContent?key=#{@secret}"].join(":")
  req = Net::HTTP::Post.new(path, headers)
  req.body = JSON.dump({content: {parts: [{text: input}]}})
  res = request(@http, req)
  Response::Embedding.new(res).extend(response_parser)
end
```
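The embedContent request body built above wraps the raw input text in a content/parts structure; a standalone sketch of that body:

```ruby
require "json"

# Same body shape as #embed builds: a single content object whose
# parts array holds the input text.
body = JSON.dump({content: {parts: [{text: "hello world"}]}})
```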
#files ⇒ Object
Provides an interface to Gemini’s file management API
```ruby
# File 'lib/llm/providers/gemini.rb', line 99
def files
  LLM::Gemini::Files.new(self)
end
```
#images ⇒ see LLM::Gemini::Images
Provides an interface to Gemini’s image generation API
```ruby
# File 'lib/llm/providers/gemini.rb', line 92
def images
  LLM::Gemini::Images.new(self)
end
```
#models ⇒ Hash<String, LLM::Model>
Returns a hash of available models
```ruby
# File 'lib/llm/providers/gemini.rb', line 111
def models
  @models ||= load_models!("gemini")
end
```
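The `@models ||=` above memoizes the model list so it is loaded at most once per provider instance. A minimal sketch of that pattern, with a hypothetical FakeProvider standing in for the real class:

```ruby
# Illustrative only: FakeProvider is not part of llm.rb. It counts
# how many times the expensive load runs to show the ||= caching.
class FakeProvider
  def initialize
    @load_count = 0
  end

  # Mirrors #models: the load runs once, later calls hit the cache.
  def models
    @models ||= load_models!
  end

  attr_reader :load_count

  private

  def load_models!
    @load_count += 1
    {"gemini-1.5-flash" => :model_stub}
  end
end
```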