Class: LLM::Gemini
- Includes:
- Format
- Defined in:
- lib/llm/providers/gemini.rb,
lib/llm/providers/gemini/audio.rb,
lib/llm/providers/gemini/files.rb,
lib/llm/providers/gemini/format.rb,
lib/llm/providers/gemini/images.rb,
lib/llm/providers/gemini/error_handler.rb,
lib/llm/providers/gemini/response_parser.rb
Overview
The Gemini class implements a provider for [Gemini](https://ai.google.dev/).
The Gemini provider accepts multiple input types (text, images, audio, and video). Inputs can be provided inline via the prompt for files under 20MB, or via the Gemini Files API for files over 20MB.
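As an illustration of that size rule, a hypothetical helper (the constant and method names below are our own, not part of the gem) could choose the delivery mechanism like so:

```ruby
# Hypothetical sketch of the 20MB rule described above;
# the gem itself does not expose these names.
INLINE_LIMIT = 20 * 1024 * 1024 # inline uploads are capped at 20MB

def delivery_for(size_bytes)
  size_bytes <= INLINE_LIMIT ? :inline : :files_api
end

delivery_for(5 * 1024 * 1024)  # small file: send inline with the prompt
delivery_for(50 * 1024 * 1024) # large file: upload via the Files API first
```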
Defined Under Namespace
Modules: Format, ResponseParser Classes: Audio, ErrorHandler, Files, Images
Constant Summary
- HOST =
"generativelanguage.googleapis.com"
Instance Method Summary
-
#assistant_role ⇒ String
Returns the role of the assistant in the conversation.
-
#audio ⇒ Object
Provides an interface to Gemini’s audio API.
-
#complete(prompt, role = :user, model: "gemini-1.5-flash", **params) ⇒ LLM::Response::Completion
Provides an interface to the chat completions API.
-
#embed(input, model: "text-embedding-004", **params) ⇒ LLM::Response::Embedding
Provides an embedding.
-
#files ⇒ Object
Provides an interface to Gemini’s file management API.
-
#images ⇒ see LLM::Gemini::Images
Provides an interface to Gemini’s image generation API.
-
#initialize(secret) ⇒ Gemini
constructor
A new instance of Gemini.
-
#models ⇒ Hash<String, LLM::Model>
Returns a hash of available models.
Methods included from Format
Methods inherited from Provider
#chat, #chat!, #inspect, #respond, #respond!, #responses
Constructor Details
Instance Method Details
#assistant_role ⇒ String
Returns the role of the assistant in the conversation. Usually “assistant” or “model”.

# File 'lib/llm/providers/gemini.rb', line 106

def assistant_role
  "model"
end
#audio ⇒ Object
Provides an interface to Gemini’s audio API.

# File 'lib/llm/providers/gemini.rb', line 85

def audio
  LLM::Gemini::Audio.new(self)
end
#complete(prompt, role = :user, model: "gemini-1.5-flash", **params) ⇒ LLM::Response::Completion
Provides an interface to the chat completions API.

# File 'lib/llm/providers/gemini.rb', line 72

def complete(prompt, role = :user, model: "gemini-1.5-flash", **params)
  path = ["/v1beta/models/#{model}", "generateContent?key=#{@secret}"].join(":")
  req = Net::HTTP::Post.new(path, headers)
  messages = [*(params.delete(:messages) || []), LLM::Message.new(role, prompt)]
  body = JSON.dump({contents: format(messages)}).b
  set_body_stream(req, StringIO.new(body))
  res = request(@http, req)
  Response::Completion.new(res).extend(response_parser)
end
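Note how the endpoint path is assembled: the model path and the action are joined with a colon, and the API key rides along as a query parameter. A standalone sketch of that construction (with a placeholder key, not a real credential):

```ruby
model  = "gemini-1.5-flash"
secret = "YOUR_API_KEY" # placeholder, not a real key
path = ["/v1beta/models/#{model}", "generateContent?key=#{secret}"].join(":")
path # => "/v1beta/models/gemini-1.5-flash:generateContent?key=YOUR_API_KEY"
```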
#embed(input, model: "text-embedding-004", **params) ⇒ LLM::Response::Embedding
Provides an embedding.

# File 'lib/llm/providers/gemini.rb', line 54

def embed(input, model: "text-embedding-004", **params)
  path = ["/v1beta/models/#{model}", "embedContent?key=#{@secret}"].join(":")
  req = Net::HTTP::Post.new(path, headers)
  req.body = JSON.dump({content: {parts: [{text: input}]}})
  res = request(@http, req)
  Response::Embedding.new(res).extend(response_parser)
end
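The request body wraps the input text in Gemini's `content`/`parts` structure before serialization. A self-contained sketch of just that serialization step:

```ruby
require "json"

input = "Hello world"
# Same shape as the body built in #embed above
body = JSON.dump({content: {parts: [{text: input}]}})
body # => '{"content":{"parts":[{"text":"Hello world"}]}}'
```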
#files ⇒ Object
Provides an interface to Gemini’s file management API.

# File 'lib/llm/providers/gemini.rb', line 100

def files
  LLM::Gemini::Files.new(self)
end
#images ⇒ see LLM::Gemini::Images
Provides an interface to Gemini’s image generation API.

# File 'lib/llm/providers/gemini.rb', line 93

def images
  LLM::Gemini::Images.new(self)
end
#models ⇒ Hash<String, LLM::Model>
Returns a hash of available models.

# File 'lib/llm/providers/gemini.rb', line 112

def models
  @models ||= load_models!("gemini")
end
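The `@models ||=` idiom memoizes the model list, so the (potentially expensive) load runs once per provider instance. A self-contained sketch of that caching behavior, using a stand-in class (`FakeProvider` and its counter are illustrative, not part of the gem):

```ruby
# Illustrates the ||= memoization used by #models above.
class FakeProvider
  attr_reader :load_count

  def initialize
    @load_count = 0
  end

  def models
    @models ||= begin
      @load_count += 1 # stand-in for the real load_models! call
      {"gemini-1.5-flash" => :model_stub}
    end
  end
end

provider = FakeProvider.new
provider.models
provider.models
provider.load_count # the loader ran only once
```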