Class: LLM::OpenAI::Responses

Inherits:
Object
Includes:
Format
Defined in:
lib/llm/providers/openai/responses.rb

Overview

The LLM::OpenAI::Responses class provides a responses object for interacting with [OpenAI's Responses API](https://platform.openai.com/docs/guides/conversation-state?api-mode=responses). The Responses API is similar to the Chat Completions API, but it can maintain conversation state across multiple requests. This is useful when you want to save bandwidth and/or avoid maintaining the message thread yourself.

Examples:

Maintain conversation state across requests:

#!/usr/bin/env ruby
require "llm"

llm = LLM.openai(ENV["KEY"])
res1 = llm.responses.create "Your task is to help me with math", :developer
res2 = llm.responses.create "5 + 5 = ?", :user, previous_response_id: res1.id
[res1, res2].each { llm.responses.delete(_1) }

Send an image as part of a prompt:

#!/usr/bin/env ruby
require "llm"

llm  = LLM.openai(ENV["KEY"])
file = llm.files.create file: LLM::File("/images/hat.png")
res  = llm.responses.create ["Describe the image", file]

Send a PDF document as part of a prompt:

#!/usr/bin/env ruby
require "llm"

llm  = LLM.openai(ENV["KEY"])
file = llm.files.create file: LLM::File("/documents/freebsd.pdf")
res  = llm.responses.create ["Describe the document", file]

Instance Method Summary collapse

Methods included from Format

#format

Constructor Details

#initialize(provider) ⇒ LLM::OpenAI::Responses

Returns a new Responses object

Parameters:

  • provider (LLM::Provider)

    The provider instance

# File 'lib/llm/providers/openai/responses.rb', line 40

def initialize(provider)
  @provider = provider
end

Instance Method Details

#create(prompt, role = :user, model: @provider.default_model, schema: nil, **params) ⇒ LLM::Response::Output

Create a response

Parameters:

  • prompt (String)

    The input prompt to be completed

  • role (Symbol) (defaults to: :user)

    The role of the prompt (e.g. :user, :system)

  • model (String) (defaults to: @provider.default_model)

    The model to use for the completion

  • schema (Object, nil) (defaults to: nil)

    An optional JSON schema for structured output

  • params (Hash)

    Response params

Returns:

  • (LLM::Response::Output)

Raises:

See Also:



# File 'lib/llm/providers/openai/responses.rb', line 55

def create(prompt, role = :user, model: @provider.default_model, schema: nil, **params)
  params = {model:}
             .merge!(expand_schema(schema))
             .merge!(params)
             .compact
  req = Net::HTTP::Post.new("/v1/responses", headers)
  messages = [*(params.delete(:input) || []), LLM::Message.new(role, prompt)]
  body = JSON.dump({input: format(messages, :response)}.merge!(params))
  set_body_stream(req, StringIO.new(body))
  res = request(http, req)
  LLM::Response::Output.new(res).extend(response_parser)
end
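A minimal sketch (plain hashes, no network) of the parameter merge order inside #create: the default model comes first, then any schema-derived keys, then caller-supplied params, so explicit params win, and #compact drops keys whose value is nil. The model names here are illustrative, not defaults of the library.

```ruby
defaults = {model: "gpt-4.1"} # hypothetical default model
params   = defaults.merge!({model: "o3-mini", temperature: nil}).compact
params # => {model: "o3-mini"}
```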

#delete(response) ⇒ OpenStruct

Deletes a response

Parameters:

  • response (#id, #to_s)

    Response ID

Returns:

  • (OpenStruct)

Raises:

See Also:



# File 'lib/llm/providers/openai/responses.rb', line 88

def delete(response)
  response_id = response.respond_to?(:id) ? response.id : response
  req = Net::HTTP::Delete.new("/v1/responses/#{response_id}", headers)
  res = request(http, req)
  OpenStruct.from_hash JSON.parse(res.body)
end
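A small sketch of the duck typing that #delete (and #get) rely on: the argument may be anything that responds to #id, such as a response object, or a plain string ID. The struct below is a stand-in, not a class from the library.

```ruby
FakeResponse = Struct.new(:id) # hypothetical stand-in for a response object
extract_id = ->(r) { r.respond_to?(:id) ? r.id : r }
extract_id.call(FakeResponse.new("resp_abc123")) # => "resp_abc123"
extract_id.call("resp_abc123")                   # => "resp_abc123"
```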

#get(response, **params) ⇒ LLM::Response::Output

Get a response

Parameters:

  • response (#id, #to_s)

    Response ID

Returns:

  • (LLM::Response::Output)

Raises:

See Also:



# File 'lib/llm/providers/openai/responses.rb', line 74

def get(response, **params)
  response_id = response.respond_to?(:id) ? response.id : response
  query = URI.encode_www_form(params)
  req = Net::HTTP::Get.new("/v1/responses/#{response_id}?#{query}", headers)
  res = request(http, req)
  LLM::Response::Output.new(res).extend(response_parser)
end
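A sketch of how #get turns extra keyword params into a query string via the stdlib's URI.encode_www_form, as the listing above shows. The `include` key and response ID are illustrative values, not defaults.

```ruby
require "uri"

query = URI.encode_www_form(include: "file_search_call.results")
path  = "/v1/responses/resp_abc123?#{query}"
path # => "/v1/responses/resp_abc123?include=file_search_call.results"
```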