Class: Message

Inherits:
ApplicationRecord
Includes:
Broadcasting
Defined in:
app/models/message.rb

Overview

A persisted record of what was said during a session — by whom and when. Messages are the single source of truth for conversation history: there is no separate chat log, only messages attached to a session.

Not to be confused with Events::Base (transient bus signals). Messages persist to SQLite; events flow through the bus and are gone.

Defined Under Namespace

Modules: Broadcasting

Constant Summary

TYPES =
%w[system_message user_message agent_message tool_call tool_response].freeze
LLM_TYPES =
%w[user_message agent_message].freeze
CONTEXT_TYPES =
%w[system_message user_message agent_message tool_call tool_response].freeze
CONVERSATION_TYPES =
%w[user_message agent_message system_message].freeze
THINK_TOOL =
"think"
SPAWN_TOOLS =
%w[spawn_subagent spawn_specialist].freeze
PENDING_STATUS =
"pending"
TOOL_TYPES =

Message types that require a tool_use_id to pair call with response.

%w[tool_call tool_response].freeze
ROLE_MAP =
{"user_message" => "user", "agent_message" => "assistant"}.freeze
BYTES_PER_TOKEN =

Heuristic: average bytes per token for English prose.

4

Constants included from Broadcasting

Broadcasting::ACTION_CREATE, Broadcasting::ACTION_UPDATE

Instance Attribute Summary

Class Method Summary

Instance Method Summary

Instance Attribute Details

#message_type ⇒ String

Returns one of TYPES: system_message, user_message, agent_message, tool_call, tool_response.

Returns:

  • (String)

    one of TYPES: system_message, user_message, agent_message, tool_call, tool_response



# File 'app/models/message.rb', line 23

class Message < ApplicationRecord
  include Message::Broadcasting

  TYPES = %w[system_message user_message agent_message tool_call tool_response].freeze
  LLM_TYPES = %w[user_message agent_message].freeze
  CONTEXT_TYPES = %w[system_message user_message agent_message tool_call tool_response].freeze
  CONVERSATION_TYPES = %w[user_message agent_message system_message].freeze
  THINK_TOOL = "think"
  SPAWN_TOOLS = %w[spawn_subagent spawn_specialist].freeze
  PENDING_STATUS = "pending"

  # Message types that require a tool_use_id to pair call with response.
  TOOL_TYPES = %w[tool_call tool_response].freeze

  ROLE_MAP = {"user_message" => "user", "agent_message" => "assistant"}.freeze

  # Heuristic: average bytes per token for English prose.
  BYTES_PER_TOKEN = 4

  belongs_to :session
  has_many :pinned_messages, dependent: :destroy

  validates :message_type, presence: true, inclusion: {in: TYPES}
  validates :payload, presence: true
  validates :timestamp, presence: true
  # Anthropic requires every tool_use to have a matching tool_result with the same ID
  validates :tool_use_id, presence: true, if: -> { message_type.in?(TOOL_TYPES) }

  after_create :schedule_token_count, if: :llm_message?

  # @!method self.llm_messages
  #   Messages that represent conversation turns sent to the LLM API.
  #   @return [ActiveRecord::Relation]
  scope :llm_messages, -> { where(message_type: LLM_TYPES) }

  # @!method self.context_messages
  #   Messages included in the LLM context window (conversation + tool interactions).
  #   @return [ActiveRecord::Relation]
  scope :context_messages, -> { where(message_type: CONTEXT_TYPES) }

  # @!method self.pending
  #   User messages queued during active agent processing, not yet sent to LLM.
  #   @return [ActiveRecord::Relation]
  scope :pending, -> { where(status: PENDING_STATUS) }

  # @!method self.deliverable
  #   Messages eligible for LLM context (excludes pending messages).
  #   NULL status means delivered/processed — the only excluded value is "pending".
  #   @return [ActiveRecord::Relation]
  scope :deliverable, -> { where(status: nil) }

  # @!method self.excluding_spawn_messages
  #   Excludes spawn_subagent/spawn_specialist tool_call and tool_response messages.
  #   Used when building parent context for sub-agents — spawn messages cause role
  #   confusion because the sub-agent sees sibling spawn results and mistakes
  #   itself for the parent.
  #   @return [ActiveRecord::Relation]
  scope :excluding_spawn_messages, -> {
    where.not("message_type IN (?) AND json_extract(payload, '$.tool_name') IN (?)",
      TOOL_TYPES, SPAWN_TOOLS)
  }

  # Maps message_type to the Anthropic Messages API role.
  # @return [String] "user" or "assistant"
  def api_role
    ROLE_MAP.fetch(message_type)
  end

  # @return [Boolean] true if this message represents an LLM conversation turn
  def llm_message?
    message_type.in?(LLM_TYPES)
  end

  # @return [Boolean] true if this message is part of the LLM context window
  def context_message?
    message_type.in?(CONTEXT_TYPES)
  end

  # @return [Boolean] true if this is a pending message not yet sent to the LLM
  def pending?
    status == PENDING_STATUS
  end

  # @return [Boolean] true if this is a conversation message (user/agent/system)
  #   or a think tool_call — the messages Mneme treats as "conversation" for boundary tracking
  def conversation_or_think?
    message_type.in?(CONVERSATION_TYPES) ||
      (message_type == "tool_call" && payload["tool_name"] == THINK_TOOL)
  end

  # Heuristic token estimate: ~4 bytes per token for English prose.
  # Tool messages are estimated from the full payload JSON since tool_input
  # and tool metadata contribute to token count. Messages use content only.
  #
  # @return [Integer] estimated token count (at least 1)
  def estimate_tokens
    text = if message_type.in?(TOOL_TYPES)
      payload.to_json
    else
      payload["content"].to_s
    end
    [(text.bytesize / BYTES_PER_TOKEN.to_f).ceil, 1].max
  end

  private

  def schedule_token_count
    CountMessageTokensJob.perform_later(id)
  end
end

#payload ⇒ Hash

Returns message-specific data (content, tool_name, tool_input, etc.).

Returns:

  • (Hash)

    message-specific data (content, tool_name, tool_input, etc.)




#timestamp ⇒ Integer

Returns nanoseconds since epoch (Process::CLOCK_REALTIME).

Returns:

  • (Integer)

    nanoseconds since epoch (Process::CLOCK_REALTIME)




#token_count ⇒ Integer

Returns cached token count for this message’s payload (0 until counted).

Returns:

  • (Integer)

    cached token count for this message’s payload (0 until counted)




#tool_use_id ⇒ String

Returns ID correlating tool_call and tool_response messages (Anthropic-assigned, or a SecureRandom.uuid fallback when the API returns nil; required for tool_call and tool_response messages).

Returns:

  • (String)

    ID correlating tool_call and tool_response messages (Anthropic-assigned, or a SecureRandom.uuid fallback when the API returns nil; required for tool_call and tool_response messages)




Class Method Details

.context_messages ⇒ ActiveRecord::Relation

Messages included in the LLM context window (conversation + tool interactions).

Returns:

  • (ActiveRecord::Relation)


# File 'app/models/message.rb', line 61

scope :context_messages, -> { where(message_type: CONTEXT_TYPES) }

.deliverable ⇒ ActiveRecord::Relation

Messages eligible for LLM context (excludes pending messages). NULL status means delivered/processed — the only excluded value is “pending”.

Returns:

  • (ActiveRecord::Relation)


# File 'app/models/message.rb', line 72

scope :deliverable, -> { where(status: nil) }
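The deliverable/pending split hinges on one convention: a NULL status means the message has been delivered or processed, while the literal string "pending" holds it back from the LLM. The following plain-Ruby sketch mirrors what the two SQL scopes select; the sample messages are illustrative, not app data.

```ruby
# Sketch of the deliverable/pending split. The real scopes run as SQL
# WHERE clauses against the messages table; here the same predicates
# are applied to in-memory hashes.
PENDING_STATUS = "pending"

messages = [
  {content: "hi", status: nil},               # delivered/processed (NULL status)
  {content: "queued", status: PENDING_STATUS} # held back from the LLM
]

deliverable = messages.select { |m| m[:status].nil? }
pending     = messages.select { |m| m[:status] == PENDING_STATUS }

puts deliverable.map { |m| m[:content] }.inspect # ["hi"]
puts pending.map { |m| m[:content] }.inspect     # ["queued"]
```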

.excluding_spawn_messages ⇒ ActiveRecord::Relation

Excludes spawn_subagent/spawn_specialist tool_call and tool_response messages. Used when building parent context for sub-agents — spawn messages cause role confusion because the sub-agent sees sibling spawn results and mistakes itself for the parent.

Returns:

  • (ActiveRecord::Relation)


# File 'app/models/message.rb', line 80

scope :excluding_spawn_messages, -> {
  where.not("message_type IN (?) AND json_extract(payload, '$.tool_name') IN (?)",
    TOOL_TYPES, SPAWN_TOOLS)
}
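The scope's SQL predicate (a `message_type` in TOOL_TYPES whose payload's `tool_name` is a spawn tool) can be sketched in plain Ruby. In the real scope SQLite evaluates `json_extract` on the stored JSON payload; here the payload is already a Hash, and the inputs are illustrative.

```ruby
# Illustrative Ruby equivalent of the predicate that
# excluding_spawn_messages negates with where.not.
TOOL_TYPES  = %w[tool_call tool_response]
SPAWN_TOOLS = %w[spawn_subagent spawn_specialist]

def spawn_message?(message_type:, payload:)
  TOOL_TYPES.include?(message_type) && SPAWN_TOOLS.include?(payload["tool_name"])
end

spawn_message?(message_type: "tool_call", payload: {"tool_name" => "spawn_subagent"}) # => true
spawn_message?(message_type: "tool_call", payload: {"tool_name" => "think"})          # => false
# A non-tool message never matches, regardless of payload contents:
spawn_message?(message_type: "user_message", payload: {"tool_name" => "spawn_subagent"}) # => false
```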

.llm_messages ⇒ ActiveRecord::Relation

Messages that represent conversation turns sent to the LLM API.

Returns:

  • (ActiveRecord::Relation)


# File 'app/models/message.rb', line 56

scope :llm_messages, -> { where(message_type: LLM_TYPES) }

.pending ⇒ ActiveRecord::Relation

User messages queued during active agent processing, not yet sent to LLM.

Returns:

  • (ActiveRecord::Relation)


# File 'app/models/message.rb', line 66

scope :pending, -> { where(status: PENDING_STATUS) }

Instance Method Details

#api_role ⇒ String

Maps message_type to the Anthropic Messages API role.

Returns:

  • (String)

    “user” or “assistant”



# File 'app/models/message.rb', line 87

def api_role
  ROLE_MAP.fetch(message_type)
end
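Because `api_role` uses `Hash#fetch` rather than `Hash#[]`, calling it on a message type outside ROLE_MAP raises `KeyError` instead of silently returning nil. A minimal sketch of that behavior, using only the ROLE_MAP constant from the class:

```ruby
ROLE_MAP = {"user_message" => "user", "agent_message" => "assistant"}.freeze

ROLE_MAP.fetch("user_message")  # => "user"
ROLE_MAP.fetch("agent_message") # => "assistant"

# fetch raises for types with no API role (e.g. system_message),
# surfacing the bug instead of sending a nil role to the API:
begin
  ROLE_MAP.fetch("system_message")
rescue KeyError => e
  puts e.class # KeyError
end
```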

#context_message? ⇒ Boolean

Returns true if this message is part of the LLM context window.

Returns:

  • (Boolean)

    true if this message is part of the LLM context window



# File 'app/models/message.rb', line 97

def context_message?
  message_type.in?(CONTEXT_TYPES)
end

#conversation_or_think? ⇒ Boolean

Returns true if this is a conversation message (user/agent/system) or a think tool_call — the messages Mneme treats as “conversation” for boundary tracking.

Returns:

  • (Boolean)

    true if this is a conversation message (user/agent/system) or a think tool_call — the messages Mneme treats as “conversation” for boundary tracking



# File 'app/models/message.rb', line 108

def conversation_or_think?
  message_type.in?(CONVERSATION_TYPES) ||
    (message_type == "tool_call" && payload["tool_name"] == THINK_TOOL)
end
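The predicate admits two shapes of message: any conversation type, or a tool_call whose payload names the think tool. A standalone sketch (the "web_search" tool name below is hypothetical, used only as a non-think counterexample):

```ruby
# Standalone version of the conversation_or_think? predicate.
CONVERSATION_TYPES = %w[user_message agent_message system_message]
THINK_TOOL = "think"

def conversation_or_think?(message_type, payload)
  CONVERSATION_TYPES.include?(message_type) ||
    (message_type == "tool_call" && payload["tool_name"] == THINK_TOOL)
end

conversation_or_think?("user_message", {})                         # => true
conversation_or_think?("tool_call", {"tool_name" => "think"})      # => true
conversation_or_think?("tool_call", {"tool_name" => "web_search"}) # => false
```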

#estimate_tokens ⇒ Integer

Heuristic token estimate: ~4 bytes per token for English prose. Tool messages are estimated from the full payload JSON since tool_input and tool metadata contribute to token count. Messages use content only.

Returns:

  • (Integer)

    estimated token count (at least 1)



# File 'app/models/message.rb', line 118

def estimate_tokens
  text = if message_type.in?(TOOL_TYPES)
    payload.to_json
  else
    payload["content"].to_s
  end
  [(text.bytesize / BYTES_PER_TOKEN.to_f).ceil, 1].max
end
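The heuristic itself has no ActiveRecord dependency, so it can be exercised as a plain function. With BYTES_PER_TOKEN = 4, an 11-byte string like "hello world" estimates to ceil(11 / 4.0) = 3 tokens, and the `.max` floor guarantees at least 1 even for empty content:

```ruby
require "json"

BYTES_PER_TOKEN = 4
TOOL_TYPES = %w[tool_call tool_response]

# Standalone version of the estimate: tool messages measure the full
# payload JSON (tool_input and metadata count), others measure content only.
def estimate_tokens(message_type, payload)
  text = if TOOL_TYPES.include?(message_type)
    payload.to_json
  else
    payload["content"].to_s
  end
  [(text.bytesize / BYTES_PER_TOKEN.to_f).ceil, 1].max
end

estimate_tokens("user_message", {"content" => "hello world"}) # 11 bytes => 3
estimate_tokens("user_message", {"content" => ""})            # floor of 1
```

Note `bytesize` rather than `length`: multi-byte UTF-8 content is deliberately counted by bytes, which is what the 4-bytes-per-token heuristic calibrates against.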

#llm_message? ⇒ Boolean

Returns true if this message represents an LLM conversation turn.

Returns:

  • (Boolean)

    true if this message represents an LLM conversation turn



# File 'app/models/message.rb', line 92

def llm_message?
  message_type.in?(LLM_TYPES)
end

#pending? ⇒ Boolean

Returns true if this is a pending message not yet sent to the LLM.

Returns:

  • (Boolean)

    true if this is a pending message not yet sent to the LLM



# File 'app/models/message.rb', line 102

def pending?
  status == PENDING_STATUS
end