Class: LLM::OpenAI::Responses

Inherits:
Object
Includes:
Format
Defined in:
lib/llm/providers/openai/responses.rb

Overview

The LLM::OpenAI::Responses class provides a responses object for interacting with [OpenAI’s Responses API](https://platform.openai.com/docs/guides/conversation-state?api-mode=responses). The Responses API is similar to the Chat Completions API, but it can maintain conversation state across multiple requests. This is useful when you want to save bandwidth and/or avoid managing the message thread yourself.

Examples:

example #1

#!/usr/bin/env ruby
require "llm"

llm = LLM.openai(ENV["KEY"])
res1 = llm.responses.create "Your task is to help me with math", role: :developer
res2 = llm.responses.create "5 + 5 = ?", role: :user, previous_response_id: res1.id
[res1,res2].each { llm.responses.delete(_1) }

example #2

#!/usr/bin/env ruby
require "llm"

llm  = LLM.openai(ENV["KEY"])
file = llm.files.create file: "/images/hat.png"
res  = llm.responses.create ["Describe the image", file]

example #3

#!/usr/bin/env ruby
require "llm"

llm  = LLM.openai(ENV["KEY"])
file = llm.files.create file: "/documents/freebsd.pdf"
res  = llm.responses.create ["Describe the document", file]

Instance Method Summary

Methods included from Format

#format

Constructor Details

#initialize(provider) ⇒ LLM::OpenAI::Responses

Returns a new Responses object

Parameters:

  • provider (LLM::Provider)

    The provider instance
# File 'lib/llm/providers/openai/responses.rb', line 42

def initialize(provider)
  @provider = provider
end

Instance Method Details

#create(prompt, params = {}) ⇒ LLM::Response::Output

Create a response

Parameters:

  • prompt (String)

    The input prompt to be completed

  • params (Hash) (defaults to: {})

The parameters to maintain throughout the conversation. Any parameter the provider supports can be included, not only those listed here.

Returns:

  • (LLM::Response::Output)

Raises:

  • (LLM::Error::PromptError)

    When given an object a provider does not understand

# File 'lib/llm/providers/openai/responses.rb', line 55

def create(prompt, params = {})
  params = {role: :user, model: @provider.default_model}.merge!(params)
  params = [params, format_schema(params), format_tools(params)].inject({}, &:merge!).compact
  role = params.delete(:role)
  req = Net::HTTP::Post.new("/v1/responses", headers)
  messages = [*(params.delete(:input) || []), LLM::Message.new(role, prompt)]
  body = JSON.dump({input: [format(messages, :response)].flatten}.merge!(params))
  set_body_stream(req, StringIO.new(body))
  res = execute(request: req)
  LLM::Response::Respond.new(res).extend(response_parser)
end
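Note how the method merges defaults with caller-supplied parameters: the caller's values win, and :role is extracted before the request body is built. A minimal sketch of that merge, using a placeholder model name in place of @provider.default_model:

```ruby
# Defaults are built first; Hash#merge! lets caller-supplied params override them.
defaults = {role: :user, model: "gpt-4o-mini"}
params   = defaults.merge!(role: :developer, previous_response_id: "resp_123")

# :role is pulled out of params so it is not sent as a top-level request field.
role = params.delete(:role)
role                          # => :developer
params.key?(:role)            # => false
params[:previous_response_id] # => "resp_123"
```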

#delete(response) ⇒ LLM::Object

Deletes a response

Parameters:

  • response (#id, #to_s)

    Response ID

Returns:

  • (LLM::Object)
# File 'lib/llm/providers/openai/responses.rb', line 87

def delete(response)
  response_id = response.respond_to?(:id) ? response.id : response
  req = Net::HTTP::Delete.new("/v1/responses/#{response_id}", headers)
  res = execute(request: req)
  LLM::Object.from_hash JSON.parse(res.body)
end
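The method accepts either a response object or a bare ID string. A small sketch of the duck typing involved, with a Struct standing in for a response object:

```ruby
# Anything that responds to #id is asked for its id; otherwise the
# argument itself is treated as the response ID.
FakeResponse = Struct.new(:id)
resolve = ->(response) { response.respond_to?(:id) ? response.id : response }

resolve.call(FakeResponse.new("resp_abc")) # => "resp_abc"
resolve.call("resp_abc")                   # => "resp_abc"
```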

#get(response, **params) ⇒ LLM::Response::Output

Get a response

Parameters:

  • response (#id, #to_s)

    Response ID

Returns:

  • (LLM::Response::Output)

# File 'lib/llm/providers/openai/responses.rb', line 73

def get(response, **params)
  response_id = response.respond_to?(:id) ? response.id : response
  query = URI.encode_www_form(params)
  req = Net::HTTP::Get.new("/v1/responses/#{response_id}?#{query}", headers)
  res = execute(request: req)
  LLM::Response::Respond.new(res).extend(response_parser)
end
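Keyword params are serialized into the query string with URI.encode_www_form. A sketch of the path this builds, where the :include value is an assumption for illustration:

```ruby
require "uri"

# Array values are expanded into repeated key=value pairs by encode_www_form.
query = URI.encode_www_form(include: ["message.input_image.image_url"])
path  = "/v1/responses/resp_abc?#{query}"
# query == "include=message.input_image.image_url"
```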