Class: LLM::Gemini
- Includes:
- RequestAdapter
- Defined in:
- lib/llm/providers/gemini.rb,
lib/llm/providers/gemini/audio.rb,
lib/llm/providers/gemini/files.rb,
lib/llm/providers/gemini/images.rb,
lib/llm/providers/gemini/models.rb,
lib/llm/providers/gemini/error_handler.rb,
lib/llm/providers/gemini/stream_parser.rb,
lib/llm/providers/gemini/request_adapter.rb,
lib/llm/providers/gemini/response_adapter.rb
Overview
The Gemini class implements a provider for [Gemini](https://ai.google.dev/). The Gemini provider can accept multiple kinds of input (text, images, audio, and video). Inputs can be provided inline via the prompt for files under 20MB, or through the Gemini Files API for larger files.
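As a sketch of the size rule above: inline prompts carry files under 20MB, anything larger goes through the Files API. The constant and helper names below are illustrative, not part of LLM::Gemini's API.

```ruby
# Illustrative only: picks an upload strategy based on the 20MB
# inline limit described in the overview. INLINE_LIMIT and
# upload_strategy are hypothetical names, not library API.
INLINE_LIMIT = 20 * 1024 * 1024 # 20MB, in bytes

def upload_strategy(size_bytes)
  size_bytes < INLINE_LIMIT ? :inline : :files_api
end

upload_strategy(5 * 1024 * 1024)  # => :inline (small file: inline via the prompt)
upload_strategy(50 * 1024 * 1024) # => :files_api (large file: Gemini Files API)
```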
Defined Under Namespace
Modules: RequestAdapter, ResponseAdapter Classes: Audio, ErrorHandler, Files, Images, Models, StreamParser
Constant Summary collapse
- HOST =
"generativelanguage.googleapis.com"
Instance Method Summary collapse
-
#assistant_role ⇒ String
Returns the role of the assistant in the conversation.
-
#audio ⇒ LLM::Gemini::Audio
Provides an interface to Gemini’s audio API.
-
#complete(prompt, params = {}) ⇒ LLM::Response
Provides an interface to the chat completions API.
-
#default_model ⇒ String
Returns the default model for chat completions.
-
#embed(input, model: "text-embedding-004", **params) ⇒ LLM::Response
Provides an embedding.
-
#files ⇒ LLM::Gemini::Files
Provides an interface to Gemini’s file management API.
-
#images ⇒ LLM::Gemini::Images
Provides an interface to Gemini’s image generation API.
-
#initialize ⇒ Gemini
constructor
A new instance of Gemini.
-
#models ⇒ LLM::Gemini::Models
Provides an interface to Gemini’s models API.
- #server_tools ⇒ Hash{Symbol => LLM::ServerTool}
-
#web_search(query:) ⇒ LLM::Response
A convenience method for performing a web search using the Google Search tool.
Methods included from RequestAdapter
Methods inherited from Provider
#chat, clients, #inspect, #moderations, #respond, #responses, #schema, #server_tool, #vector_stores, #with
Constructor Details
Instance Method Details
#assistant_role ⇒ String
Returns the role of the assistant in the conversation. Usually “assistant” or “model”.

# File 'lib/llm/providers/gemini.rb', line 108

def assistant_role
  "model"
end
#audio ⇒ LLM::Gemini::Audio
Provides an interface to Gemini’s audio API.

# File 'lib/llm/providers/gemini.rb', line 78

def audio
  LLM::Gemini::Audio.new(self)
end
#complete(prompt, params = {}) ⇒ LLM::Response
Provides an interface to the chat completions API.

# File 'lib/llm/providers/gemini.rb', line 66

def complete(prompt, params = {})
  params, stream, tools, role, model = normalize_complete_params(params)
  req = build_complete_request(prompt, params, role, model, stream)
  res = execute(request: req, stream: stream)
  ResponseAdapter.adapt(res, type: :completion)
    .extend(Module.new { define_method(:__tools__) { tools } })
end
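The final line of #complete attaches the resolved tool list to the response object through an anonymous singleton module. The pattern can be reproduced with plain Ruby objects; the response here is a stand-in, not the library's response class.

```ruby
# Stand-in demonstration of the extend(Module.new { ... }) pattern
# used by #complete above. `response` is a plain Object placeholder.
response = Object.new
tools = [:google_search]

# define_method's block is a closure, so it captures `tools` from the
# surrounding scope; extending attaches the method to this one object.
response.extend(Module.new { define_method(:__tools__) { tools } })

response.__tools__ # => [:google_search]
```

This attaches state to a single response without defining a shared class attribute, which is why each completion can carry its own tool list.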
#default_model ⇒ String
Returns the default model for chat completions.

# File 'lib/llm/providers/gemini.rb', line 116

def default_model
  "gemini-2.5-flash"
end
#embed(input, model: "text-embedding-004", **params) ⇒ LLM::Response
Provides an embedding.

# File 'lib/llm/providers/gemini.rb', line 47

def embed(input, model: "text-embedding-004", **params)
  model = model.respond_to?(:id) ? model.id : model
  path = ["/v1beta/models/#{model}", "embedContent?key=#{@key}"].join(":")
  req = Net::HTTP::Post.new(path, headers)
  req.body = LLM.json.dump({content: {parts: [{text: input}]}})
  res = execute(request: req)
  ResponseAdapter.adapt(res, type: :embedding)
end
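The request assembly in #embed can be traced with the standard library alone. This sketch reproduces the path and body construction from the listing above (the key value is a placeholder, and JSON stands in for whatever serializer LLM.json resolves to):

```ruby
require "json"

# Reproduces how #embed assembles its request path and JSON body.
# "KEY" is a placeholder for the API key held by the provider.
model = "text-embedding-004"
key = "KEY"

# Gemini's REST route joins the model path and action with a colon:
path = ["/v1beta/models/#{model}", "embedContent?key=#{key}"].join(":")
body = JSON.dump({content: {parts: [{text: "hello world"}]}})

path # => "/v1beta/models/text-embedding-004:embedContent?key=KEY"
body # => {"content":{"parts":[{"text":"hello world"}]}}
```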
#files ⇒ LLM::Gemini::Files
Provides an interface to Gemini’s file management API.

# File 'lib/llm/providers/gemini.rb', line 94

def files
  LLM::Gemini::Files.new(self)
end
#images ⇒ LLM::Gemini::Images
Provides an interface to Gemini’s image generation API.

# File 'lib/llm/providers/gemini.rb', line 86

def images
  LLM::Gemini::Images.new(self)
end
#models ⇒ LLM::Gemini::Models
Provides an interface to Gemini’s models API.

# File 'lib/llm/providers/gemini.rb', line 102

def models
  LLM::Gemini::Models.new(self)
end
#server_tools ⇒ Hash{Symbol => LLM::ServerTool}
Returns a hash of server tools. Some of these tools require configuration through options that are easier to set via the LLM::Provider#server_tool method.

# File 'lib/llm/providers/gemini.rb', line 127

def server_tools
  {
    google_search: server_tool(:google_search),
    code_execution: server_tool(:code_execution),
    url_context: server_tool(:url_context)
  }
end
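The returned hash is keyed by symbol, one entry per built-in Gemini tool. This stand-in mirrors that shape with a local Struct; ServerTool here is not the library's LLM::ServerTool, and the `name` attribute is an assumption made for the sketch.

```ruby
# Stand-in mirroring the shape #server_tools returns: Symbol keys
# mapped to tool objects. ServerTool is a local Struct, not LLM::ServerTool.
ServerTool = Struct.new(:name)

def server_tool(name)
  ServerTool.new(name)
end

server_tools = {
  google_search: server_tool(:google_search),
  code_execution: server_tool(:code_execution),
  url_context: server_tool(:url_context)
}

server_tools[:google_search].name # => :google_search
```

In the library itself, an entry from this hash is what gets passed to #complete via the tools: parameter, as #web_search does below.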
#web_search(query:) ⇒ LLM::Response
A convenience method for performing a web search using the Google Search tool.
# File 'lib/llm/providers/gemini.rb', line 140

def web_search(query:)
  ResponseAdapter.adapt(
    complete(query, tools: [server_tools[:google_search]]),
    type: :web_search
  )
end