Class: Message
- Inherits: ApplicationRecord
  - Object
  - ActiveRecord::Base
  - ApplicationRecord
  - Message
- Includes:
- Broadcasting
- Defined in:
- app/models/message.rb
Overview
A persisted record of what was said during a session, by whom, and when. Messages are the single source of truth for conversation history: there is no separate chat log, only messages attached to a session.
Not to be confused with Events::Base (transient bus signals). Messages persist to SQLite; events flow through the bus and are gone.
Defined Under Namespace
Modules: Broadcasting
Constant Summary
- TYPES =
%w[system_message user_message agent_message tool_call tool_response].freeze
- LLM_TYPES =
%w[user_message agent_message].freeze
- CONTEXT_TYPES =
%w[system_message user_message agent_message tool_call tool_response].freeze
- CONVERSATION_TYPES =
%w[user_message agent_message system_message].freeze
- THINK_TOOL =
"think"
- TOOL_TYPES =
Message types that require a tool_use_id to pair call with response.
%w[tool_call tool_response].freeze
- ROLE_MAP =
{"user_message" => "user", "agent_message" => "assistant"}.freeze
- BYTES_PER_TOKEN =
Heuristic: average bytes per token for English prose.
4
- SYSTEM_PROMPT_ID =
Synthetic ID for system prompt entries in the TUI message store. Real message IDs are positive integers from the database, so 0 is safe for deduplication without collision risk.
0
Constants included from Broadcasting
Broadcasting::ACTION_CREATE, Broadcasting::ACTION_UPDATE
Instance Attribute Summary
-
#message_type ⇒ String
One of TYPES: system_message, user_message, agent_message, tool_call, tool_response.
-
#payload ⇒ Hash
Message-specific data (content, tool_name, tool_input, etc.).
-
#timestamp ⇒ Integer
Nanoseconds since epoch (Process::CLOCK_REALTIME).
-
#token_count ⇒ Integer
Cached token count for this message’s payload (0 until counted).
-
#tool_use_id ⇒ String
ID correlating tool_call and tool_response messages (Anthropic-assigned, or a SecureRandom.uuid fallback when the API returns nil; required for tool_call and tool_response messages).
Class Method Summary
-
.context_messages ⇒ ActiveRecord::Relation
Messages included in the LLM context window (conversation + tool interactions).
-
.estimate_token_count(bytesize) ⇒ Integer
Estimates token count from a byte size using the BYTES_PER_TOKEN heuristic.
-
.llm_messages ⇒ ActiveRecord::Relation
Messages that represent conversation turns sent to the LLM API.
Instance Method Summary
-
#api_role ⇒ String
Maps message_type to the Anthropic Messages API role.
-
#context_message? ⇒ Boolean
True if this message is part of the LLM context window.
-
#conversation_or_think? ⇒ Boolean
True if this is a conversation message (user/agent/system) or a think tool_call — the messages Mneme treats as “conversation” for boundary tracking.
-
#estimate_tokens ⇒ Integer
Heuristic token estimate: ~4 bytes per token for English prose.
-
#llm_message? ⇒ Boolean
True if this message represents an LLM conversation turn.
Instance Attribute Details
#message_type ⇒ String
Returns one of TYPES: system_message, user_message, agent_message, tool_call, tool_response.
# File 'app/models/message.rb', line 23

class Message < ApplicationRecord
  include Message::Broadcasting

  TYPES = %w[system_message user_message agent_message tool_call tool_response].freeze
  LLM_TYPES = %w[user_message agent_message].freeze
  CONTEXT_TYPES = %w[system_message user_message agent_message tool_call tool_response].freeze
  CONVERSATION_TYPES = %w[user_message agent_message system_message].freeze
  THINK_TOOL = "think"

  # Message types that require a tool_use_id to pair call with response.
  TOOL_TYPES = %w[tool_call tool_response].freeze

  ROLE_MAP = {"user_message" => "user", "agent_message" => "assistant"}.freeze

  # Heuristic: average bytes per token for English prose.
  BYTES_PER_TOKEN = 4

  # Synthetic ID for system prompt entries in the TUI message store.
  # Real message IDs are positive integers from the database, so 0
  # is safe for deduplication without collision risk.
  SYSTEM_PROMPT_ID = 0

  # Estimates token count from a byte size using the {BYTES_PER_TOKEN} heuristic.
  # @param bytesize [Integer] number of bytes
  # @return [Integer] estimated token count (at least 1)
  def self.estimate_token_count(bytesize)
    [(bytesize / BYTES_PER_TOKEN.to_f).ceil, 1].max
  end

  belongs_to :session
  has_many :pinned_messages, dependent: :destroy

  validates :message_type, presence: true, inclusion: {in: TYPES}
  validates :payload, presence: true
  validates :timestamp, presence: true
  # Anthropic requires every tool_use to have a matching tool_result with the same ID
  validates :tool_use_id, presence: true, if: -> { message_type.in?(TOOL_TYPES) }

  after_create :schedule_token_count, if: :llm_message?

  # @!method self.llm_messages
  #   Messages that represent conversation turns sent to the LLM API.
  #   @return [ActiveRecord::Relation]
  scope :llm_messages, -> { where(message_type: LLM_TYPES) }

  # @!method self.context_messages
  #   Messages included in the LLM context window (conversation + tool interactions).
  #   @return [ActiveRecord::Relation]
  scope :context_messages, -> { where(message_type: CONTEXT_TYPES) }

  # Maps message_type to the Anthropic Messages API role.
  # @return [String] "user" or "assistant"
  def api_role
    ROLE_MAP.fetch(message_type)
  end

  # @return [Boolean] true if this message represents an LLM conversation turn
  def llm_message?
    message_type.in?(LLM_TYPES)
  end

  # @return [Boolean] true if this message is part of the LLM context window
  def context_message?
    message_type.in?(CONTEXT_TYPES)
  end

  # @return [Boolean] true if this is a conversation message (user/agent/system)
  #   or a think tool_call, the messages Mneme treats as "conversation" for boundary tracking
  def conversation_or_think?
    message_type.in?(CONVERSATION_TYPES) ||
      (message_type == "tool_call" && payload["tool_name"] == THINK_TOOL)
  end

  # Heuristic token estimate: ~4 bytes per token for English prose.
  # Tool messages are estimated from the full payload JSON since tool_input
  # and tool metadata contribute to token count. Messages use content only.
  #
  # @return [Integer] estimated token count (at least 1)
  def estimate_tokens
    text = if message_type.in?(TOOL_TYPES)
      payload.to_json
    else
      payload["content"].to_s
    end
    self.class.estimate_token_count(text.bytesize)
  end

  private

  def schedule_token_count
    CountMessageTokensJob.perform_later(id)
  end
end
#payload ⇒ Hash
Returns message-specific data (content, tool_name, tool_input, etc.).
#timestamp ⇒ Integer
Returns nanoseconds since epoch (Process::CLOCK_REALTIME).
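A standalone sketch of producing a value in this stored format (plain Ruby, no Rails assumed): integer nanoseconds since the Unix epoch, read from the realtime clock.

```ruby
# Produce a timestamp in the stored format: integer nanoseconds since
# the Unix epoch, read from CLOCK_REALTIME.
ns = Process.clock_gettime(Process::CLOCK_REALTIME, :nanosecond)
puts ns                              # e.g. 1718000000123456789
puts Time.at(ns / 1_000_000_000.0)   # convert back to a Time for display
```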
#token_count ⇒ Integer
Returns cached token count for this message’s payload (0 until counted).
#tool_use_id ⇒ String
Returns ID correlating tool_call and tool_response messages (Anthropic-assigned, or a SecureRandom.uuid fallback when the API returns nil; required for tool_call and tool_response messages).
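The fallback behavior described above can be sketched in plain Ruby (`resolve_tool_use_id` is a hypothetical helper name, not part of the model):

```ruby
require "securerandom"

# Prefer the API-assigned tool_use ID; mint a random UUID only when the
# API returned nil, so call/response pairing still has a shared key.
def resolve_tool_use_id(api_id)
  api_id || SecureRandom.uuid
end

puts resolve_tool_use_id("toolu_01abc")  # keeps the API-assigned ID
puts resolve_tool_use_id(nil)            # falls back to a random UUID
```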
Class Method Details
.context_messages ⇒ ActiveRecord::Relation
Messages included in the LLM context window (conversation + tool interactions).
# File 'app/models/message.rb', line 70

scope :context_messages, -> { where(message_type: CONTEXT_TYPES) }
.estimate_token_count(bytesize) ⇒ Integer
Estimates token count from a byte size using the BYTES_PER_TOKEN heuristic.
# File 'app/models/message.rb', line 47

def self.estimate_token_count(bytesize)
  [(bytesize / BYTES_PER_TOKEN.to_f).ceil, 1].max
end
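A standalone usage sketch of the heuristic, with the constant inlined so the snippet runs without Rails:

```ruby
BYTES_PER_TOKEN = 4

def estimate_token_count(bytesize)
  # Round up, but never report fewer than 1 token.
  [(bytesize / BYTES_PER_TOKEN.to_f).ceil, 1].max
end

puts estimate_token_count("Hello, world!".bytesize)  # 13 bytes => 4 tokens
puts estimate_token_count(0)                         # empty payload still counts as 1
```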
.llm_messages ⇒ ActiveRecord::Relation
Messages that represent conversation turns sent to the LLM API.
# File 'app/models/message.rb', line 65

scope :llm_messages, -> { where(message_type: LLM_TYPES) }
Instance Method Details
#api_role ⇒ String
Maps message_type to the Anthropic Messages API role.
# File 'app/models/message.rb', line 74

def api_role
  ROLE_MAP.fetch(message_type)
end
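Because ROLE_MAP covers only the two LLM turn types, `fetch` raises KeyError for any other message_type, a deliberate fail-fast for messages that should never be sent as API turns. A standalone sketch with the constant inlined:

```ruby
ROLE_MAP = {"user_message" => "user", "agent_message" => "assistant"}.freeze

puts ROLE_MAP.fetch("agent_message")   # "assistant"
begin
  ROLE_MAP.fetch("tool_call")          # not an LLM turn type
rescue KeyError => e
  puts "no API role: #{e.message}"
end
```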
#context_message? ⇒ Boolean
Returns true if this message is part of the LLM context window.
# File 'app/models/message.rb', line 84

def context_message?
  message_type.in?(CONTEXT_TYPES)
end
#conversation_or_think? ⇒ Boolean
Returns true if this is a conversation message (user/agent/system) or a think tool_call — the messages Mneme treats as “conversation” for boundary tracking.
# File 'app/models/message.rb', line 90

def conversation_or_think?
  message_type.in?(CONVERSATION_TYPES) ||
    (message_type == "tool_call" && payload["tool_name"] == THINK_TOOL)
end
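The predicate reduces to plain Ruby as follows (constants copied from the class; a sketch, not the model itself):

```ruby
CONVERSATION_TYPES = %w[user_message agent_message system_message].freeze
THINK_TOOL = "think"

# Conversation types always qualify; a tool_call qualifies only when it
# invokes the think tool.
def conversation_or_think?(message_type, payload)
  CONVERSATION_TYPES.include?(message_type) ||
    (message_type == "tool_call" && payload["tool_name"] == THINK_TOOL)
end

puts conversation_or_think?("user_message", {})                      # true
puts conversation_or_think?("tool_call", {"tool_name" => "think"})   # true
puts conversation_or_think?("tool_call", {"tool_name" => "grep"})    # false
```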
#estimate_tokens ⇒ Integer
Heuristic token estimate: ~4 bytes per token for English prose. Tool messages are estimated from the full payload JSON since tool_input and tool metadata contribute to token count. Messages use content only.
# File 'app/models/message.rb', line 100

def estimate_tokens
  text = if message_type.in?(TOOL_TYPES)
    payload.to_json
  else
    payload["content"].to_s
  end
  self.class.estimate_token_count(text.bytesize)
end
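The branch can be exercised standalone (again a plain-Ruby sketch with the constants inlined):

```ruby
require "json"

BYTES_PER_TOKEN = 4
TOOL_TYPES = %w[tool_call tool_response].freeze

# Tool messages serialize the whole payload (tool_input and metadata cost
# tokens too); conversation messages count only their content string.
def estimate_tokens(message_type, payload)
  text = TOOL_TYPES.include?(message_type) ? payload.to_json : payload["content"].to_s
  [(text.bytesize / BYTES_PER_TOKEN.to_f).ceil, 1].max
end

puts estimate_tokens("user_message", {"content" => "hi there"})  # 8 bytes => 2 tokens
puts estimate_tokens("tool_call", {"tool_name" => "think"})      # counts the full JSON
```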
#llm_message? ⇒ Boolean
Returns true if this message represents an LLM conversation turn.
# File 'app/models/message.rb', line 79

def llm_message?
  message_type.in?(LLM_TYPES)
end