MWMS AI Evidence Requirement Rule

Document Type: Standard
Status: Canon
Authority: HeadOffice
Applies To: All MWMS AI-assisted analysis, development guidance, system design, and architectural recommendations
Parent: Governance
Version: v1.1
Last Reviewed: 2026-03-14

Purpose

This document defines the rule requiring AI systems operating within MWMS to identify the evidence supporting important structural statements, architectural recommendations, and governance interpretations.

AI-generated responses may appear authoritative even when based on incomplete context or inferred reasoning.

This rule exists to ensure that important system statements are supported by identifiable sources, reducing speculation, guesswork, and unsupported claims.

Scope

This standard applies to:

• MWMS AI-assisted analysis
• development guidance
• system design recommendations
• architectural recommendations
• governance interpretations
• structural conclusions about MWMS pages, brains, systems, and authority boundaries

This rule governs when evidence must be declared and how evidence should be prioritised.

It does not replace:

• Canon authority
• governance rules
• escalation requirements
• change logs
• session invocation rules

Those remain governed by the wider MWMS governance framework.

Definition / Rules

Core Rule

When an AI system provides analysis, recommendations, or interpretations related to MWMS structure, governance, or system architecture, it must identify the evidence supporting each such statement.

Evidence may include:

• MWMS Canon definitions
• governance rules
• user-provided system documentation
• confirmed architectural descriptions
• verified research frameworks

AI systems must not present structural conclusions without identifying the supporting evidence.

Evidence Declaration

When evidence is required, the AI must explicitly identify the source used to support the conclusion.

Examples:

Evidence Source: Affiliate Brain Canon v2.8 – Section 5 (Scope Boundaries)
Evidence Source: Research Brain – Customer Interview Intelligence page
Evidence Source: User-provided system architecture description

The purpose of evidence declaration is to ensure that reasoning remains transparent and verifiable.
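The declaration format shown in the examples above can be made machine-checkable with a small structured record. The following is an illustrative sketch only; the `EvidenceDeclaration` helper and its field names are hypothetical and not part of this standard:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EvidenceDeclaration:
    """A single evidence citation attached to a structural statement."""
    source: str       # e.g. a Canon document name and version
    detail: str = ""  # optional section or page reference

    def render(self) -> str:
        # Produces the "Evidence Source: ..." line format used above.
        if self.detail:
            return f"Evidence Source: {self.source} – {self.detail}"
        return f"Evidence Source: {self.source}"

decl = EvidenceDeclaration(
    "Affiliate Brain Canon v2.8", "Section 5 (Scope Boundaries)"
)
print(decl.render())
```

Storing declarations as records rather than free text makes it straightforward to audit which statements carry evidence and which do not.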

When Evidence Is Required

Evidence must be declared when the AI:

• interprets MWMS Canon rules
• recommends structural changes
• proposes system architecture modifications
• explains governance boundaries
• evaluates system authority constraints

For routine discussion or exploratory brainstorming, evidence declaration may not be required unless structural claims are made.

Evidence Hierarchy

When multiple sources are available, AI systems should prioritise evidence according to the following hierarchy:

  1. MWMS Canon documentation
  2. MWMS Governance rules
  3. User-provided pages or system descriptions
  4. Verified research frameworks
  5. General analytical reasoning

Canon definitions always override other evidence sources.
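The five-level hierarchy can be expressed as an ordered ranking: when several candidate sources support the same claim, the highest-priority tier wins, and Canon always outranks the rest. A minimal sketch, assuming hypothetical tier names and a `strongest` helper (neither is defined by this standard):

```python
from enum import IntEnum

class EvidenceTier(IntEnum):
    # Lower value = higher priority, matching the hierarchy above.
    CANON = 1
    GOVERNANCE = 2
    USER_PROVIDED = 3
    RESEARCH = 4
    ANALYTICAL = 5

def strongest(
    sources: list[tuple[str, EvidenceTier]]
) -> tuple[str, EvidenceTier]:
    """Return the highest-priority source among the candidates."""
    return min(sources, key=lambda s: s[1])

candidates = [
    ("User-provided system architecture description",
     EvidenceTier.USER_PROVIDED),
    ("Affiliate Brain Canon v2.8", EvidenceTier.CANON),
]
print(strongest(candidates)[0])  # the Canon source wins
```

Encoding the hierarchy as an ordinal scale makes the override rule deterministic rather than a matter of per-response judgement.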

Handling Missing Evidence

If the AI cannot identify a reliable evidence source for a structural statement, it must declare that the statement is an analytical interpretation rather than a confirmed system fact.

Example:

Evidence Status: Analytical interpretation – Canon source not identified.

This prevents speculative conclusions from being treated as confirmed MWMS architecture.
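The fallback can be expressed as a simple guard: if no qualifying source exists, the statement is labelled as analytical interpretation instead of being presented as fact. A hypothetical sketch (the `evidence_status` function is illustrative, not mandated):

```python
from typing import Optional

def evidence_status(source: Optional[str]) -> str:
    """Declare the evidence status for a structural statement."""
    if source is None:
        # No reliable source: flag as interpretation, never as system fact.
        return ("Evidence Status: Analytical interpretation – "
                "Canon source not identified.")
    return f"Evidence Source: {source}"

print(evidence_status(None))
print(evidence_status("Affiliate Brain Canon v2.8"))
```

The key property is that the absence of evidence is itself declared explicitly, so a reviewer can distinguish confirmed architecture from reasoning.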

Prohibited Behaviour

The following behaviours violate the AI Evidence Requirement Rule:

• presenting assumptions as system facts
• making structural claims without identifying evidence
• interpreting governance rules without referencing Canon definitions
• proposing architectural changes without explaining their basis

These behaviours increase the risk of structural drift and governance misinterpretation.

Relationship to Other Governance Rules

The AI Evidence Requirement Rule operates alongside:

• AI Output Standard – Full File Delivery Rule
• AI Session Context Lock Rule
• Brain Routing Rule
• AI Escalation Rule
• AI Refusal Protocol
• AI Change Declaration Rule
• AI Memory Integrity Rule
• How to Start a Session – MWMS Operating Guide

Together these rules form the MWMS AI Governance Control Layer.

Outcome

The AI Evidence Requirement Rule ensures that AI systems within MWMS provide transparent, verifiable reasoning when making structural statements or recommendations.

By requiring evidence declarations, MWMS reduces speculative reasoning and strengthens governance discipline across all AI-assisted development and system analysis.

Drift Protection

The system must prevent:

• unsupported structural claims
• speculative governance interpretation presented as fact
• architectural recommendations without identifiable basis
• assumed authority boundaries
• undocumented reasoning behind system changes

Important structural conclusions must remain evidence-linked and reviewable.

Architectural Intent

This rule exists to strengthen traceability and interpretive discipline inside MWMS.

It ensures that AI reasoning about structure, governance, and architecture remains anchored to visible sources rather than hidden assumptions.

By enforcing evidence declarations, MWMS improves trust, auditability, and system integrity.

Change Log

Version: v1.1
Date: 2026-03-14
Author: MWMS HeadOffice
Change: Rebuilt page to align with MWMS document standards. Added standardised document header, introduced Purpose / Scope / Definition / Rules structure, added Parent and Last Reviewed fields, normalised formatting, and preserved the original evidence-requirement rule logic.

Version: v1.0
Date: 2026-03-05
Author: HeadOffice
Change: Initial creation of AI Evidence Requirement Rule defining mandatory evidence declarations for AI-assisted structural analysis, governance interpretation, development guidance, and architectural recommendations.

END – AI EVIDENCE REQUIREMENT RULE v1.1