Document Type: Canon
Status: Canon
Version: v1.6
Authority: MWMS HeadOffice
Applies To: Experimentation Brain statistical governance, methodological integrity, and cross-MWMS experiment standards
Parent: Brains
Last Reviewed: 2026-03-15
Purpose
The Experimentation Brain governs statistical and methodological integrity across MWMS.
Its purpose is to ensure that:
• all experiments follow statistical discipline
• all tests meet minimum methodological standards
• false positives are controlled
• measurement integrity is protected
• business cases are calculated before scaling
• insights are aggregated and institutionalized
It does not execute experiments.
It does not originate hypotheses.
It defines how experiments must be executed once intent is declared.
Scope
This canon applies to:
• A/B testing methodology across MWMS
• statistical thresholds and power requirements
• duration and sequencing rules
• peeking policies and SRM standards
• KPI selection discipline
• business-case validation before scale
• incrementality testing where required
• meta-study aggregation and institutional learning
• cross-brain experiment governance
This document governs the constitutional role and mandatory standards of Experimentation Brain.
It does not govern:
• offer approval
• capital allocation
• runtime system monitoring
• ad execution
• capital risk classification
• marketing strategy creation
Those remain governed by Affiliate Brain, Finance Brain, SIT Brain, HeadOffice, and related system canons.
Definition / Rules
Core Purpose
Experimentation Brain exists to protect the validity of learning inside MWMS.
It is the methodological governor of experiments, not their business originator.
It ensures that experimentation produces trustworthy institutional knowledge rather than noisy, ego-driven, or statistically weak conclusions.
Scope of Authority
Experimentation Brain governs:
• A/B testing methodology
• statistical thresholds
• power requirements
• test duration rules
• sequential testing discipline
• peeking policies
• Sample Ratio Mismatch (SRM) detection standards
• KPI selection rules
• business case calculations
• insight database structure
• meta-study aggregation standards
It does not:
• approve offers (Affiliate Brain)
• allocate capital (Finance Brain)
• monitor runtime systems (SIT Brain)
• execute ads
• define capital risk level
• create marketing strategy
Position in MWMS Architecture
Experimentation Brain is a governance peer to:
• SIT Brain
• Finance Brain
It supports:
• Affiliate Brain
• Ads Brain
• Product Brain
• AI Business Systems Brain
• any future testing environment
Hierarchy:
HeadOffice
↳ Experimentation Brain
↳ SIT Brain
↳ Finance Brain
Intent Alignment Requirement
Experimentation Brain may not initiate Phase 1 unless the MWMS opportunity lifecycle has progressed through:
• Research Signal
• Opportunity Queue
• Offer Intelligence Evaluation
• Affiliate Structural Evaluation
And the following conditions are satisfied:
• structured hypothesis declared
• capital risk classification defined
• lifecycle stage declared
• Velocity = YES
• research integrity status = OK
Experimentation validates statistical form.
It does not create business intent.
If Intent Gate is incomplete:
Output =
BLOCKED — Intent Misalignment
Experiment Lifecycle Standard (Mandatory)
Every experiment must follow this structure.
Phase 1 — Hypothesis Validation
• hypothesis supplied by Affiliate Brain
• primary KPI confirmed
• secondary KPIs defined
• expected uplift validated
• risk threshold confirmed
• minimum detectable effect (MDE) defined
Experimentation validates structure, not business narrative.
Phase 2 — Power & Duration Calculation
• minimum sample size calculated
• power ≥ 80% (default rule)
• duration estimate documented
• traffic validation completed
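The Phase 2 calculation can be sketched as follows, assuming a two-sided two-proportion z-test. The function names, the 5% significance level, and the traffic inputs are illustrative assumptions; only the 80% power default comes from this canon.

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_variant(baseline_rate, mde_relative, alpha=0.05, power=0.80):
    """Minimum sample size per variant for a two-sided two-proportion z-test.

    baseline_rate: control conversion rate (e.g. 0.04)
    mde_relative:  minimum detectable effect as a relative uplift (0.10 = +10%)
    power:         canon default is 80% (Phase 2 rule)
    """
    p1 = baseline_rate
    p2 = baseline_rate * (1 + mde_relative)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # critical value for two-sided alpha
    z_beta = NormalDist().inv_cdf(power)            # critical value for target power
    pooled = (p1 + p2) / 2
    n = ((z_alpha * (2 * pooled * (1 - pooled)) ** 0.5
          + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
         / (p2 - p1) ** 2)
    return ceil(n)

def duration_days(n_per_variant, variants, daily_traffic):
    """Duration estimate: total required sample divided by eligible daily traffic."""
    return ceil(n_per_variant * variants / daily_traffic)
```

For example, detecting a +10% relative uplift on a 4% baseline requires roughly 39,000 visitors per variant, which at 5,000 eligible visitors per day implies a run of about 16 days for a two-variant test.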
Phase 3 — Pre-Test Integrity Checks
• funnel measurement verified
• tool vs analytics traffic reconciled
• tracking code verified
• variation allocation confirmed
Phase 4 — Execution Discipline
• no peeking before minimum sample threshold
• no mid-test KPI switching
• no audience reshaping mid-test
• no creative modification during run
Campaign structure must remain fixed during experiment execution.
Phase 5 — Outcome Classification
• Win
• Loss
• Inconclusive
• False positive risk flagged
Phase 6 — Business Case Validation
• expected value calculated
• implementation cost included
• risk-adjusted ROI calculated
Finance Brain gate approval is required before scaling.
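The Phase 6 arithmetic can be sketched as below. Modeling the risk adjustment as a success-probability-weighted expected value is an assumption; the canon does not prescribe a formula, and the inputs here are illustrative.

```python
def business_case(expected_uplift_revenue, implementation_cost, probability_of_success):
    """Risk-adjusted ROI sketch for Phase 6 (assumed model, not canon-specified).

    expected_uplift_revenue: projected revenue gain if the winning variant holds
    implementation_cost:     cost to build, ship, and maintain the change
    probability_of_success:  estimated chance the uplift replicates at scale
    """
    expected_value = expected_uplift_revenue * probability_of_success
    risk_adjusted_roi = (expected_value - implementation_cost) / implementation_cost
    return expected_value, risk_adjusted_roi
```

A projected 100,000 uplift with a 60% replication probability and a 20,000 implementation cost yields an expected value of 60,000 and a risk-adjusted ROI of 2.0; Finance Brain then decides whether that clears its gate.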
Incrementality Testing Protocol
Certain marketing activities require validation of true incremental impact, not just relative performance between variants.
Standard A/B testing determines which variant performs better.
Incrementality testing determines whether an activity generates additional outcomes that would not have occurred without intervention.
Incrementality tests may be required when evaluating:
• brand advertising
• upper-funnel acquisition campaigns
• retargeting systems
• affiliate traffic sources
• channel expansion initiatives
• algorithmic bidding systems
• CRM and lifecycle marketing programs
Incrementality testing methods may include:
• holdout groups
• ghost ads
• geo-based experimentation
• audience exclusion experiments
• time-based counterfactual testing
When incrementality testing is required, the experiment must include:
• control population receiving no exposure
• treatment population receiving exposure
Observed difference in outcome rates between treatment and control represents incremental lift.
Incrementality results must be evaluated alongside:
• cost of exposure
• estimated incremental conversions
• incremental revenue impact
Scaling decisions must consider incremental value, not only conversion attribution.
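The holdout arithmetic above can be sketched as follows. The function name, the report fields, and the ROI formulation are illustrative assumptions; the canon only requires that lift, cost of exposure, incremental conversions, and incremental revenue be evaluated together.

```python
def incrementality_report(treated, treated_conv, control, control_conv,
                          revenue_per_conv, exposure_cost):
    """Incremental impact from a holdout design (hypothetical helper).

    treated / control:           population sizes of exposed and holdout groups
    treated_conv / control_conv: conversions observed in each group
    """
    rate_t = treated_conv / treated
    rate_c = control_conv / control
    lift = rate_t - rate_c                       # absolute incremental lift
    incr_conversions = lift * treated            # conversions that would not have occurred
    incr_revenue = incr_conversions * revenue_per_conv
    return {
        "lift": lift,
        "incremental_conversions": incr_conversions,
        "incremental_revenue": incr_revenue,
        "incremental_roi": (incr_revenue - exposure_cost) / exposure_cost,
    }
```

With 10,000 users per arm, 500 treated conversions against 400 in the holdout, the activity generated roughly 100 incremental conversions, even though attribution models might credit it with all 500.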
Incrementality testing is particularly important in environments where:
• platform attribution models may inflate performance
• conversion tracking cannot observe full customer journeys
• multiple channels interact within the same funnel
Failure to evaluate incrementality in these conditions may lead to false scaling decisions.
Measurement Integrity Protocol
Experimentation Brain enforces:
• funnel completeness audits
• analytics vs test-tool traffic comparison
• code execution rate monitoring
• client-side vs server-side test evaluation
• first-party data preference
• ITP / cookie limitation mitigation review
If measurement gap > 10% → test flagged
If measurement gap > 20% → scaling blocked
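The gap thresholds can be expressed directly. The choice of denominator (the larger of the two session counts) is an assumption, since the canon does not define how the gap percentage is computed.

```python
def measurement_gap_status(tool_sessions, analytics_sessions):
    """Apply the canon's measurement-gap thresholds.

    gap > 10% -> test flagged; gap > 20% -> scaling blocked.
    Denominator choice (larger of the two counts) is an assumption.
    """
    gap = abs(tool_sessions - analytics_sessions) / max(tool_sessions, analytics_sessions)
    if gap > 0.20:
        return gap, "SCALING_BLOCKED"
    if gap > 0.10:
        return gap, "FLAGGED"
    return gap, "OK"
```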
False Positive Protection Rules
Mandatory rules:
• minimum 80% power
• no stopping early without predefined sequential rule
• SRM monitoring required
• false positive estimation included in reporting
A high probability of being best does not by itself justify scaling.
Scaling requires expected value approval.
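SRM monitoring can be sketched as a chi-square goodness-of-fit check on the observed traffic split. The 0.001 alert threshold is a common industry convention, not a canon-specified value, and the function name is illustrative.

```python
from math import erfc, sqrt

def srm_check(observed_a, observed_b, expected_ratio=0.5, alpha=0.001):
    """Sample Ratio Mismatch check for a two-variant split.

    Chi-square goodness-of-fit with 1 degree of freedom; for df = 1 the
    upper-tail p-value is erfc(sqrt(chi2 / 2)). Returns (p_value, detected),
    where detected=True means the allocation deviates from the design ratio.
    """
    total = observed_a + observed_b
    exp_a = total * expected_ratio
    exp_b = total * (1 - expected_ratio)
    chi2 = (observed_a - exp_a) ** 2 / exp_a + (observed_b - exp_b) ** 2 / exp_b
    p_value = erfc(sqrt(chi2 / 2))
    return p_value, p_value < alpha
```

A 5,300 / 4,700 split on a designed 50/50 allocation trips the check; such a mismatch usually indicates a tracking or allocation defect rather than a real treatment effect.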
Meta-Study & Behavioral Intelligence Layer
All experiments must be logged with:
• funnel stage
• traffic source
• offer type
• persuasion technique used
• psychological mechanism
• uplift %
• statistical confidence
• sample size
• business outcome
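One way to structure such a log entry is a typed record mirroring the required attributes; the field names and types here are an illustrative choice, not a canon-defined schema.

```python
from dataclasses import dataclass

@dataclass
class ExperimentRecord:
    """One insight-database entry; fields mirror the mandatory logging list."""
    funnel_stage: str              # e.g. "checkout"
    traffic_source: str            # e.g. "paid_search"
    offer_type: str
    persuasion_technique: str      # e.g. "scarcity"
    psychological_mechanism: str   # e.g. "loss_aversion"
    uplift_pct: float
    statistical_confidence: float  # e.g. 0.95
    sample_size: int
    business_outcome: str          # e.g. "win"
```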
Experimentation Brain maintains:
• insight database
• cross-test aggregation
• persuasion pattern detection
• funnel-stage effectiveness mapping
• long-term behavioral model development
This becomes institutional memory.
Prediction Validation Rule
All experiments must produce a forward prediction that can be validated against future performance.
Before scaling approval, the experiment report must include:
• expected uplift projection
• confidence range of outcome
• forecast impact on business metrics
After implementation, the system must evaluate:
• predicted uplift vs actual performance
• variance between projected and realized impact
Validation structure:
Prediction → Implementation → Observed Outcome → Variance Analysis → Insight Update
Persistent prediction error may trigger:
• model recalibration
• experiment design review
• insight confidence downgrade
This rule ensures experimentation produces validated learning rather than isolated statistical outcomes.
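The variance-analysis step in the chain above can be sketched as follows. The relative-error tolerance that counts as "persistent prediction error" is a hypothetical threshold the canon leaves to implementation.

```python
def variance_analysis(predicted_uplift, observed_uplift, tolerance=0.25):
    """Compare predicted vs realized uplift after implementation.

    tolerance: hypothetical relative-error threshold above which model
    recalibration or an insight confidence downgrade should be considered.
    """
    variance = observed_uplift - predicted_uplift
    relative_error = (abs(variance) / abs(predicted_uplift)
                      if predicted_uplift else float("inf"))
    return {
        "variance": variance,
        "relative_error": relative_error,
        "recalibration_flag": relative_error > tolerance,
    }
```

A predicted +10% uplift that realizes at +8% sits within a 25% tolerance; one that realizes at +4% would flag the insight for review.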
Scaling Governance Model
When experimentation volume increases:
Execution may be distributed across operational teams.
However:
• standards remain centralized
• statistical governance remains centralized
No department may run experiments outside this framework.
Interaction With SIT Brain
SIT monitors:
• runtime compliance
• logging
• data integrity
• rule violation detection
Experimentation Brain defines:
• statistical rules
• integrity thresholds
SIT enforces.
Experimentation governs methodology.
Interaction With Finance Brain
Experimentation produces:
• statistical outcome
• expected uplift
• risk-adjusted projection
Finance decides:
• capital allocation
• scaling approval
• budget expansion
No scaling may occur without Finance approval.
Interaction With Affiliate Brain
Affiliate Brain defines:
• hypothesis
• opportunity context
• testing intent
These must originate from opportunities that have passed:
• Opportunity Queue
• Offer Intelligence evaluation
Experimentation Brain validates:
• statistical design
• sample requirements
• execution discipline
Affiliate Brain does not override statistical methodology.
Escalation Rule
If any of the following occur:
• statistical integrity violated
• measurement compromised
• peeking detected
• KPI switching detected
• intent misalignment detected
SIT escalates to HeadOffice.
HeadOffice may:
• invalidate test
• freeze scaling
• trigger audit
Canon Lock
Experimentation standards may only be modified by:
• HeadOffice approval
• documentation
• version update
No operational brain may override Experimentation rules.
Final Rule
No experiment inside MWMS is valid unless it satisfies methodological discipline, measurement integrity, and governance alignment.
Experimentation Brain protects the credibility of learning across the ecosystem.
Drift Protection
The system must prevent:
• experiments being treated as valid without statistical discipline
• business enthusiasm overriding experiment form
• scaling decisions being made from weak or contaminated data
• departments creating local testing standards outside central governance
• incrementality-sensitive channels being evaluated with incomplete logic
• institutional learning degrading into disconnected test anecdotes
Experimentation governance must remain centralized, consistent, and enforceable.
Architectural Intent
Experimentation Brain exists to make statistical and methodological integrity a permanent governing layer inside MWMS.
Its role is to ensure that all Brains can test, learn, and scale within a common discipline so institutional knowledge compounds on valid evidence rather than noise, bias, or convenience.
Change Log
Version: v1.6
Date: 2026-03-15
Author: MWMS HeadOffice
Change: Rebuilt page to align with the locked MWMS document standard for this cleanup pass. Preserved the original statistical-governance role, authority boundaries, lifecycle requirements, experiment lifecycle standard, incrementality protocol, measurement integrity protocol, false-positive protection rules, meta-study layer, prediction validation rule, scaling governance model, cross-brain interactions, escalation logic, and canon lock. Standardised metadata structure, removed non-standard header fields by integrating their meaning into the body, and added Scope, Final Rule, Drift Protection, and Architectural Intent sections.
Version: v1.5
Date: 2026-03-09
Author: MWMS HeadOffice
Change: Introduced Incrementality Testing Protocol to distinguish true incremental impact from relative variant performance in marketing experiments.
Version: v1.4
Date: 2026-03-09
Author: MWMS HeadOffice
Change: Lifecycle alignment update. Added recognition of Opportunity Queue and Offer Intelligence stages prior to Affiliate Brain hypothesis declaration to maintain consistency with MWMS discovery and evaluation architecture.
Version: v1.3
Date: Earlier version
Author: MWMS HeadOffice
Change: Prior canonical version before lifecycle alignment update.
END – EXPERIMENTATION BRAIN v1.6