Experimentation Brain Diagnostic Trigger Framework


Document Type: Framework
Status: Active
Version: v1.0
Authority: Experimentation Brain
Applies To: Experimentation Brain, Data Brain, Affiliate Brain, Ads Brain, Research Brain, HeadOffice
Parent: Experimentation Brain Canon
Last Reviewed: 2026-04-25


Purpose

The Experimentation Brain Diagnostic Trigger Framework defines when MWMS should initiate testing based on detected changes in system behaviour.

Its purpose is to prevent random or unnecessary testing and ensure that experimentation is driven by identified problems, opportunities, or signal shifts.

Testing must be triggered by evidence.

Testing without a diagnostic trigger increases noise, wastes capital, and reduces learning quality.


Core Principle

Testing should not begin with ideas.

Testing should begin with signals.

A test must be triggered by:

• a detected problem
• a detected opportunity
• a detected behavioural shift
• a detected inconsistency

If no trigger exists:

→ no test should be created


Definition

A diagnostic trigger is a condition detected within MWMS that justifies the creation of a test.

Triggers originate from:

• Data Brain signals
• Affiliate Brain evaluations
• Experimentation Brain observations
• Research Brain insights
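
For illustration only, a trigger can be thought of as a small structured record. The names below (DiagnosticTrigger, TriggerSource) are hypothetical and do not refer to existing MWMS components; they sketch the minimum information a trigger record would carry.

    from dataclasses import dataclass
    from datetime import datetime
    from enum import Enum


    class TriggerSource(Enum):
        # The four origins listed above.
        DATA_BRAIN_SIGNAL = "data_brain_signal"
        AFFILIATE_BRAIN_EVALUATION = "affiliate_brain_evaluation"
        EXPERIMENTATION_BRAIN_OBSERVATION = "experimentation_brain_observation"
        RESEARCH_BRAIN_INSIGHT = "research_brain_insight"


    @dataclass
    class DiagnosticTrigger:
        # A condition detected within MWMS that justifies the creation of a test.
        source: TriggerSource    # which Brain detected the condition
        metric: str              # the metric or behaviour affected
        description: str         # what was observed
        detected_at: datetime    # when the condition was detected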


Core Question

This framework answers:

👉 Why is this test being created?


Trigger Categories


1. Performance Degradation Trigger

Activated when performance declines beyond expected variation.

Examples:

• conversion rate drops
• revenue declines
• funnel stage drop-off increases
• campaign performance weakens
• engagement declines


Purpose

• identify problems early
• prevent continued loss
• isolate root cause through testing


2. Customer Quality Drift Trigger

Activated when customer quality weakens over time.

Examples:

• lower repeat behaviour
• declining spend per customer
• increased discount dependency
• weaker downstream engagement


Purpose

• detect hidden system instability
• prevent scaling weak customer bases
• test recovery mechanisms


3. Offer Health Trigger

Activated when an offer shows signs of decay or instability.

Examples:

• declining conversion stability
• reduced engagement
• inconsistent performance
• increased acquisition cost


Purpose

• identify offer fatigue
• test new angles or structures
• prevent over-scaling degraded offers


4. Segment Imbalance Trigger

Activated when performance varies significantly across segments.

Examples:

• one traffic source outperforms others
• certain creatives outperform consistently
• geographic or device differences
• audience segment divergence


Purpose

• identify high-performing segments
• isolate weak segments
• test segmentation strategies


5. Funnel Friction Trigger

Activated when behavioural progression weakens.

Examples:

• increased drop-off at key steps
• reduced engagement between stages
• incomplete journeys
• abnormal user behaviour patterns


Purpose

• identify friction points
• test improvements
• improve conversion flow


6. Promotion Distortion Trigger

Activated when apparent performance is distorted by pricing or incentive activity.

Examples:

• spike in conversions during discounts
• reduced full-price behaviour
• increased dependency on promotions


Purpose

• detect fake performance
• test real demand vs incentive-driven behaviour
• protect long-term profitability


7. Opportunity Signal Trigger

Activated when positive signals indicate potential upside.

Examples:

• consistent positive performance trend
• strong engagement signals
• emerging winning angle
• new traffic opportunity


Purpose

• expand winners
• validate opportunity
• test scalability


8. Measurement Inconsistency Trigger

Activated when data signals are unreliable.

Examples:

• sudden metric anomalies
• conflicting platform data
• unexpected drops or spikes
• signal inconsistencies across systems


Purpose

• validate measurement integrity
• prevent decisions based on faulty data
• trigger diagnostic tests or audits
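
As a sketch only, the eight categories above could be captured as a single enumeration so that every trigger carries an explicit classification. The enum name and values are illustrative, not part of MWMS.

    from enum import Enum


    class TriggerCategory(Enum):
        # One entry per trigger category defined above.
        PERFORMANCE_DEGRADATION = 1
        CUSTOMER_QUALITY_DRIFT = 2
        OFFER_HEALTH = 3
        SEGMENT_IMBALANCE = 4
        FUNNEL_FRICTION = 5
        PROMOTION_DISTORTION = 6
        OPPORTUNITY_SIGNAL = 7
        MEASUREMENT_INCONSISTENCY = 8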


Trigger Detection Sources

Triggers may originate from:

• Data Brain Customer Quality Tracking
• Data Brain Performance Decomposition
• Experimentation Brain Lifecycle outputs
• Affiliate Brain Offer Intelligence
• Ads Brain campaign data
• Research Brain signal analysis
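
For illustration, each detection source can be mapped to the Brain responsible for it. The keys below are descriptive labels taken from the list above, not real registry identifiers.

    # Hypothetical mapping of detection source to the owning Brain.
    TRIGGER_DETECTION_SOURCES = {
        "Customer Quality Tracking": "Data Brain",
        "Performance Decomposition": "Data Brain",
        "Test Lifecycle outputs": "Experimentation Brain",
        "Offer Intelligence": "Affiliate Brain",
        "Campaign data": "Ads Brain",
        "Signal analysis": "Research Brain",
    }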


Trigger Thresholds

Triggers must meet defined thresholds before action is taken.

Thresholds may include:

• magnitude of change
• consistency of change
• duration of change
• impact on revenue or behaviour


Rule

Minor variation does not trigger testing.

Only meaningful signals trigger tests.
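
A minimal sketch of a threshold check is shown below. The threshold values, and the assumption that every dimension must clear its bar, are placeholders for illustration; actual thresholds are defined per metric by the detecting Brain.

    from dataclasses import dataclass


    @dataclass
    class TriggerThresholds:
        # Placeholder values; real thresholds are set per metric and per Brain.
        min_relative_change: float = 0.10    # magnitude: at least a 10% shift
        min_consistent_periods: int = 3      # consistency: seen in 3 consecutive periods
        min_duration_days: int = 7           # duration: sustained for at least a week
        min_revenue_impact: float = 500.0    # impact: estimated revenue or behaviour at stake


    def meets_thresholds(relative_change: float,
                         consistent_periods: int,
                         duration_days: int,
                         revenue_impact: float,
                         t: TriggerThresholds = TriggerThresholds()) -> bool:
        # Minor variation does not trigger testing: every dimension must clear its bar.
        return (abs(relative_change) >= t.min_relative_change
                and consistent_periods >= t.min_consistent_periods
                and duration_days >= t.min_duration_days
                and revenue_impact >= t.min_revenue_impact)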


Trigger Validation

Before creating a test, MWMS must confirm that:

• signal is real (Data Brain validation)
• measurement is reliable
• signal is not noise
• signal is not explained by an external anomaly alone


Validation Gate

If signal fails validation:

→ no test is created
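
As a sketch, the validation gate reduces to a single boolean check. Each input would come from the confirmation steps above (for example, Data Brain validation of the signal); the function name and parameters are illustrative.

    def passes_validation_gate(signal_is_real: bool,
                               measurement_is_reliable: bool,
                               signal_exceeds_noise: bool,
                               caused_by_external_anomaly_alone: bool) -> bool:
        # If any confirmation fails, the gate fails and no test is created.
        return (signal_is_real
                and measurement_is_reliable
                and signal_exceeds_noise
                and not caused_by_external_anomaly_alone)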


Test Creation Rule

A test may only be created when:

• a valid trigger exists
• the trigger is documented
• the trigger is classified
• the expected outcome is defined


Trigger to Test Mapping

Each trigger must map to:

• a hypothesis
• a test variable
• a measurement plan
• a decision threshold
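
Illustratively, the creation rule and the mapping can be enforced together: a test plan is only constructed when the trigger is documented and classified and all four mapping elements are present. All names below are hypothetical.

    from dataclasses import dataclass
    from typing import Optional


    @dataclass
    class TestPlan:
        # The four elements every trigger must map to.
        hypothesis: str
        test_variable: str
        measurement_plan: str
        decision_threshold: str


    def create_test_plan(valid_trigger_exists: bool,
                         trigger_documented: bool,
                         trigger_classified: bool,
                         hypothesis: Optional[str],
                         test_variable: Optional[str],
                         measurement_plan: Optional[str],
                         decision_threshold: Optional[str]) -> Optional[TestPlan]:
        # Test Creation Rule: all preconditions must hold, otherwise no plan is built.
        # In this sketch the hypothesis doubles as the defined expected outcome.
        if not (valid_trigger_exists and trigger_documented and trigger_classified):
            return None
        if None in (hypothesis, test_variable, measurement_plan, decision_threshold):
            return None
        return TestPlan(hypothesis, test_variable, measurement_plan, decision_threshold)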


Decision Impact

Triggers influence:

• what gets tested
• how urgent the test is
• how much risk is acceptable
• how much capital may be allocated


Priority Levels


High Priority

• revenue decline
• major funnel failure
• strong customer quality drop


Medium Priority

• segment imbalance
• offer instability
• moderate performance shifts


Low Priority

• small optimisation opportunities
• minor creative improvements
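
A sketch of how the buckets above could be applied when scheduling tests; the signal flags and the default to low priority are assumptions for illustration.

    from enum import Enum


    class Priority(Enum):
        HIGH = "high"
        MEDIUM = "medium"
        LOW = "low"


    def classify_priority(revenue_decline: bool,
                          major_funnel_failure: bool,
                          strong_customer_quality_drop: bool,
                          segment_imbalance: bool,
                          offer_instability: bool,
                          moderate_performance_shift: bool) -> Priority:
        # Mirrors the High / Medium / Low buckets defined above.
        if revenue_decline or major_funnel_failure or strong_customer_quality_drop:
            return Priority.HIGH
        if segment_imbalance or offer_instability or moderate_performance_shift:
            return Priority.MEDIUM
        # Small optimisations and minor creative improvements remain low priority.
        return Priority.LOW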


Cross Brain Use


Data Brain

Detects signals and validates trigger conditions.


Experimentation Brain

Owns trigger-to-test conversion.


Affiliate Brain

Uses triggers to evaluate offer health and testing need.


Ads Brain

Uses triggers to adjust campaign-level testing.


Research Brain

Logs recurring trigger patterns.


HeadOffice

Uses triggers to prioritise system focus.


Relationship To Other Frameworks

This framework connects to:

• Data Brain Customer Quality Tracking Framework
• Data Brain Performance Decomposition Framework
• Data Brain Data Trust Framework
• Experimentation Brain Test Lifecycle Model
• Experimentation Brain Test Interpretation Discipline
• Experimentation Brain Test Result And Decision Workflow
• Affiliate Brain Offer Health Monitoring Framework


Failure Modes Prevented

This framework prevents:

• random testing
• testing without purpose
• wasting budget on noise
• missing early warning signals
• reacting too late to problems
• over-testing low-impact areas


Drift Protection

The system must prevent:

• tests being created without triggers
• triggers being ignored
• weak signals triggering unnecessary tests
• strong signals being dismissed
• inconsistent trigger standards


Architectural Intent

The Diagnostic Trigger Framework ensures MWMS testing is:

👉 reactive to reality
not
👉 driven by ideas

This transforms experimentation into a signal-driven system.


Final Rule

If a test cannot clearly answer:

👉 what triggered it

→ the test must not be created


Change Log

Version: v1.0
Date: 2026-04-25
Author: Experimentation Brain / HeadOffice


Change

Initial creation of the Diagnostic Trigger Framework, based on CXL transactional analysis principles adapted for MWMS diagnostic intelligence.


Change Impact Declaration

Pages Created:
Experimentation Brain Diagnostic Trigger Framework

Pages Updated:
None

Pages Deprecated:
None

Registries Requiring Update:
Experimentation Brain Architecture
MWMS Architecture Registry

Canon Version Update Required:
No

Change Log Entry Required:
Yes


End of Framework