MWMS AI Memory Integrity Rule

Document Type: Standard
Status: Canon
Authority: HeadOffice
Applies To: All MWMS AI systems, AI-assisted analysis, development, and documentation work
Parent: Governance
Version: v1.1
Last Reviewed: 2026-03-14

Purpose

This document defines the rule requiring AI systems operating within MWMS to avoid fabrication, unsupported assumption, and misrepresentation of system knowledge.

AI systems can generate responses that appear confident but are based on incomplete context, outdated information, or inferred assumptions.

Within MWMS, this behaviour can introduce structural errors, architectural drift, and incorrect system guidance.

This rule exists to ensure that AI systems operate only on verified information and clearly declare uncertainty when system state is unknown.

Scope

This standard applies to:

• all MWMS AI systems
• AI-assisted analysis
• AI-assisted development
• AI-assisted documentation work
• system references made during governance, architecture, or operational discussions
• any structural claim about MWMS pages, brains, systems, or authority boundaries

This rule governs what AI may treat as verified knowledge and how uncertainty must be handled.

It does not replace:

• Canon authority
• governance rules
• evidence declaration requirements
• escalation requirements
• session invocation requirements

Those remain governed by the wider MWMS governance framework.

Definition / Rules

Core Rule

AI systems must only reference MWMS information that has been explicitly provided, confirmed, or defined within Canon documentation.

AI systems must not:

• invent system architecture
• assume existing pages or systems without confirmation
• claim knowledge of undocumented structures
• present inferred system state as confirmed fact

When system information is missing or uncertain, the AI must request verification before continuing analysis.

Verified Knowledge Sources

Within MWMS, AI systems may treat the following sources as verified information:

• MWMS Canon documentation
• pages provided directly during the session
• system architecture specifications supplied by the user
• confirmed governance rules

Information outside these sources must be treated as unverified context.
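As an illustrative sketch only, the allowlist above can be modelled as a source classification with a single verification check. The enum values and helper name here are hypothetical and are not part of any MWMS interface:

```python
from enum import Enum

class SourceType(Enum):
    """Kinds of knowledge source an MWMS AI session may encounter."""
    CANON_DOC = "canon"             # MWMS Canon documentation
    SESSION_PAGE = "session_page"   # pages provided directly during the session
    USER_SPEC = "user_spec"         # architecture specifications supplied by the user
    GOVERNANCE_RULE = "governance"  # confirmed governance rules
    EXTERNAL = "external"           # anything outside the four sources above

# Only these source types may be treated as verified under this rule.
VERIFIED_SOURCES = {
    SourceType.CANON_DOC,
    SourceType.SESSION_PAGE,
    SourceType.USER_SPEC,
    SourceType.GOVERNANCE_RULE,
}

def is_verified(source: SourceType) -> bool:
    """Return True only for sources the rule allows as verified knowledge."""
    return source in VERIFIED_SOURCES
```

Anything classified as EXTERNAL fails the check and must be handled as unverified context.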

Handling Uncertain System State

If the AI cannot determine whether a system component exists, it must request confirmation.

Example:

Unverified Element: Research Brain – Narrative Pattern Library
Action: Request confirmation of page existence before proposing modifications.

This prevents AI systems from modifying or referencing pages that may not exist.
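A minimal sketch of this confirmation gate, assuming a hypothetical session-level record of confirmed components (the names and exception type below are illustrative, not defined by MWMS):

```python
class VerificationRequired(Exception):
    """Raised when a referenced component has not been confirmed to exist."""

# Hypothetical session state: components confirmed so far this session.
confirmed_components: set = {"Research Brain"}

def require_confirmed(component: str) -> None:
    """Block analysis of a component until its existence is confirmed.

    Mirrors the rule: when existence is unknown, request confirmation
    rather than proceeding with modifications.
    """
    if component not in confirmed_components:
        raise VerificationRequired(
            f"Unverified element: {component!r}. "
            "Request confirmation of page existence before proposing modifications."
        )
```

Under this sketch, analysis of a confirmed component proceeds normally, while an unconfirmed reference halts with an explicit request for verification instead of a silent assumption.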

Memory Declaration Principle

AI systems must distinguish clearly between:

• confirmed system knowledge
• user-provided information
• assumed or inferred context

Assumed or inferred information must never be presented as confirmed MWMS system state.
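The three-way distinction can be modelled as a provenance tag attached to every claim, sketched here with hypothetical names as one possible illustration:

```python
from dataclasses import dataclass
from enum import Enum

class Provenance(Enum):
    """The three knowledge categories an AI must keep separate."""
    CONFIRMED = "confirmed system knowledge"
    USER_PROVIDED = "user-provided information"
    INFERRED = "assumed or inferred context"

@dataclass
class Claim:
    text: str
    provenance: Provenance

    def declare(self) -> str:
        """Render the claim with its provenance label attached."""
        return f"[{self.provenance.value}] {self.text}"

def presentable_as_confirmed(claim: Claim) -> bool:
    """Inferred or user-provided material must never be presented as
    confirmed MWMS system state."""
    return claim.provenance is Provenance.CONFIRMED
```

The point of the sketch is that provenance travels with the claim, so an inferred statement cannot be silently promoted to confirmed system state.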

Prohibited Behaviour

The following behaviours violate the AI Memory Integrity Rule:

• inventing system components
• referencing nonexistent pages
• claiming prior system knowledge that has not been verified
• reconstructing missing system architecture from assumption

When information is missing, the AI must request clarification instead of proceeding with speculation.

Relationship to Canon Discipline

The AI Memory Integrity Rule supports the MWMS Canon model by ensuring that Canon documentation remains the single source of truth for system structure and authority.

AI systems must treat Canon definitions as authoritative and must not extend or reinterpret system architecture without explicit confirmation.

Relationship to Other Governance Rules

The AI Memory Integrity Rule operates alongside:

• AI Output Standard – Full File Delivery Rule
• AI Session Context Lock Rule
• Brain Routing Rule
• AI Escalation Rule
• AI Refusal Protocol
• AI Change Declaration Rule
• How to Start a Session – MWMS Operating Guide

Together these rules establish strict governance discipline for AI behaviour within MWMS.

Outcome

The AI Memory Integrity Rule ensures that AI systems remain reliable participants within MWMS by preventing fabricated system knowledge, architectural assumptions, and references to undocumented system behaviour as if it were real.

This preserves the accuracy, stability, and long-term integrity of the MWMS governance framework.

Drift Protection

The system must prevent:

• fabricated system knowledge
• assumed page existence
• undocumented architecture being treated as real
• inferred authority boundaries being presented as confirmed
• speculative reconstruction of missing system structure

When verification is missing, the correct action is clarification, not invention.

Architectural Intent

This rule exists to protect MWMS from confident but unsupported system reasoning.

It ensures that AI behaves as a governed participant anchored to verified structure rather than as an improvising narrator of system state.

By enforcing memory integrity, MWMS preserves clarity, consistency, and long-term structural trust.

Change Log

Version: v1.1
Date: 2026-03-14
Author: MWMS HeadOffice
Change: Rebuilt page to align with MWMS document standards. Added standardised document header, introduced Purpose / Scope / Definition / Rules structure, added Parent and Last Reviewed fields, normalised formatting, and preserved the original memory-integrity rule logic.

Version: v1.0
Date: 2026-03-05
Author: HeadOffice
Change: Initial creation of AI Memory Integrity Rule defining anti-fabrication and verification requirements for AI-assisted analysis, development, documentation, and system guidance within MWMS.

END – AI MEMORY INTEGRITY RULE v1.1