Document Type: Framework
Status: Active
Authority: Data Brain
Parent: Data Brain Architecture
Applies To: All MWMS environments where behavioural, performance, or measurement signals are monitored
Version: v1.0
Last Reviewed: 2026-04-23
Purpose
The Data Brain Signal Anomaly Response Framework defines how MWMS detects, classifies, investigates, and responds to anomalies in measurement and performance signals.
The framework ensures:
• anomalies are identified early
• root causes are diagnosed correctly
• invalid data does not influence decisions
• system stability is maintained
• decision-making is paused when required
Without a structured response, anomalies lead to incorrect decisions.
Core Principle
An anomaly must never be ignored.
An anomaly must be:
• investigated
• validated
• classified
• resolved
If the cause is unknown:
→ decisions must be paused
Position in MWMS System
This framework operates alongside:
• Data Brain Monitoring and Audit Calendar Framework
• Data Brain Measurement Validation Protocol
• Data Brain Data Trust Framework
• HeadOffice Data Decision Gate Framework
This framework determines:
👉 how MWMS reacts when data behaves unexpectedly
What Is an Anomaly
An anomaly is any signal behaviour that deviates from expected patterns.
Examples of Anomalies
• sudden traffic spike or drop
• conversion rate spike or collapse
• missing events
• duplicated events
• unexpected channel shifts
• unusual attribution patterns
• broken funnel progression
• abnormal metric ratios
Anomaly Detection Sources
Anomalies may be detected through:
• automated alerts (GA4 insights, monitoring systems)
• manual review
• audit findings
• cross-platform comparison
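For illustration, an automated detection check might look like the sketch below. The 14-day window, the z-score threshold, and the sessions metric are assumptions for the example, not MWMS-mandated values.

```python
# Minimal sketch: flag a daily metric value that deviates sharply from its
# recent baseline. Thresholds and window size are illustrative assumptions.
from statistics import mean, stdev

def detect_anomaly(history: list[float], current: float,
                   z_threshold: float = 3.0) -> bool:
    """Return True if `current` deviates more than `z_threshold`
    standard deviations from the mean of `history`."""
    if len(history) < 7:          # not enough baseline to judge
        return False
    baseline_mean = mean(history)
    baseline_sd = stdev(history)
    if baseline_sd == 0:          # flat baseline: any change is notable
        return current != baseline_mean
    z_score = abs(current - baseline_mean) / baseline_sd
    return z_score > z_threshold

# Example: sudden traffic drop against a stable 14-day baseline
sessions_last_14_days = [1180, 1210, 1195, 1230, 1205, 1190, 1225,
                         1200, 1215, 1185, 1240, 1210, 1195, 1220]
print(detect_anomaly(sessions_last_14_days, 430))   # True -> investigate
```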
Anomaly Classification
All anomalies must be classified by severity:
Critical Anomaly
Characteristics
• data is invalid
• tracking is broken
• decisions are unsafe
Examples
• conversions stop tracking
• duplicate conversion spike
• major data loss
Action
→ immediate investigation
→ decision freeze
High Severity Anomaly
Characteristics
• major distortion risk
• strong impact on interpretation
Examples
• large unexplained traffic spike
• sudden attribution shift
Action
→ urgent investigation
→ restrict decision-making
Medium Severity Anomaly
Characteristics
• partial impact
• limited distortion
Examples
• small metric inconsistency
• minor funnel drop-off
Action
→ investigate and monitor
Low Severity Anomaly
Characteristics
• minimal impact
• cosmetic or isolated issue
Action
→ monitor only
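Because each severity level maps to a fixed action, the classification can be encoded as data so tooling routes anomalies consistently. A minimal sketch; the Severity enum and REQUIRED_ACTION mapping are illustrative names, not a defined MWMS API.

```python
# Minimal sketch: the four severity levels and their required actions,
# as defined by this framework, expressed as data for downstream tooling.
from enum import Enum

class Severity(Enum):
    CRITICAL = "critical"
    HIGH = "high"
    MEDIUM = "medium"
    LOW = "low"

REQUIRED_ACTION = {
    Severity.CRITICAL: "immediate investigation + decision freeze",
    Severity.HIGH:     "urgent investigation + restrict decision-making",
    Severity.MEDIUM:   "investigate and monitor",
    Severity.LOW:      "monitor only",
}

def action_for(severity: Severity) -> str:
    return REQUIRED_ACTION[severity]

print(action_for(Severity.HIGH))  # urgent investigation + restrict decision-making
```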
Anomaly Response Process
All anomalies must follow this process:
Step 1 — Detect
Identify anomaly via:
• alert
• report
• manual observation
Step 2 — Validate
Confirm anomaly is real:
• rule out reporting delay
• confirm across multiple views
• verify using raw/debug data
Step 3 — Classify Severity
Assign:
• Critical
• High
• Medium
• Low
Step 4 — Investigate Root Cause
Check:
• tracking changes
• GTM updates
• campaign changes
• site updates
• attribution changes
• platform behaviour
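A minimal sketch of how this checklist could be recorded as structured findings, so the investigation is repeatable. The checklist strings mirror the bullets above; the yes/no answers are illustrative investigator input.

```python
# Minimal sketch: run the Step 4 checklist and keep only the items flagged
# as a likely root cause. Names mirror the framework's checklist bullets.
ROOT_CAUSE_CHECKLIST = [
    "tracking changes",
    "GTM updates",
    "campaign changes",
    "site updates",
    "attribution changes",
    "platform behaviour",
]

def investigate(answers: dict[str, bool]) -> list[str]:
    """Return the checklist items flagged as a likely root cause."""
    return [item for item in ROOT_CAUSE_CHECKLIST if answers.get(item, False)]

print(investigate({"GTM updates": True}))  # ['GTM updates']
```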
Step 5 — Assess Data Trust Impact
Determine:
• is data still usable?
• is data partially usable?
• is data invalid?
Step 6 — Trigger Decision Control
Apply HeadOffice Data Decision Gate:
• allow decisions
• restrict decisions
• block decisions
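A minimal sketch of how the Step 5 trust assessment might map onto these gate outcomes. The function and value names are illustrative assumptions; the actual gate criteria live in the HeadOffice Data Decision Gate Framework.

```python
# Minimal sketch: map the Step 5 data-trust assessment to a gate outcome.
def decision_gate(data_trust: str) -> str:
    """data_trust is one of: 'usable', 'partially_usable', 'invalid'."""
    gate = {
        "usable": "allow decisions",
        "partially_usable": "restrict decisions",
        "invalid": "block decisions",
    }
    return gate[data_trust]

print(decision_gate("partially_usable"))  # restrict decisions
```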
Step 7 — Resolve Issue
• fix tracking
• correct configuration
• adjust campaigns
• update attribution inputs
Step 8 — Revalidate
Run:
• Measurement Validation Protocol
• Attribution Validation Protocol
Step 9 — Resume Normal Operation
Only when:
• issue resolved
• data validated
• trust restored
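Taken together, the nine steps can be sketched as a single orchestration flow. Every helper passed into respond() below is a hypothetical placeholder for the team's own tooling, not an existing MWMS module.

```python
# Minimal sketch of the nine-step response flow as one orchestration function.
from dataclasses import dataclass
from typing import Optional

@dataclass
class AnomalyRecord:
    signal: str
    detected_via: str                  # alert / report / manual observation
    severity: Optional[str] = None     # Critical / High / Medium / Low
    root_cause: Optional[str] = None
    data_trust: Optional[str] = None   # usable / partially_usable / invalid
    gate_outcome: Optional[str] = None
    resolved: bool = False
    revalidated: bool = False

def respond(anomaly: AnomalyRecord,
            validate_anomaly, classify, find_root_cause,
            assess_data_trust, apply_decision_gate,
            resolve, revalidate) -> AnomalyRecord:
    # Step 2: Validate, and stop early if the anomaly is a false positive
    if not validate_anomaly(anomaly):
        return anomaly
    # Step 3: Classify severity
    anomaly.severity = classify(anomaly)
    # Step 4: Investigate root cause (None means the cause is still unknown)
    anomaly.root_cause = find_root_cause(anomaly)
    # Step 5: Assess data trust impact
    anomaly.data_trust = assess_data_trust(anomaly)
    # Step 6: Trigger decision control; freeze when the cause is unknown,
    # otherwise apply the Decision Gate to the trust assessment
    anomaly.gate_outcome = ("block decisions" if anomaly.root_cause is None
                            else apply_decision_gate(anomaly.data_trust))
    # Steps 7 and 8: resolve, then revalidate before trust is restored
    anomaly.resolved = resolve(anomaly)
    anomaly.revalidated = anomaly.resolved and revalidate(anomaly)
    # Step 9: normal operation resumes only once resolved and revalidated
    return anomaly
```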
🔴 Decision Freeze Rule
Decisions must be paused when:
• anomaly cause is unknown
• data integrity is compromised
• attribution is unstable
• tracking is broken
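A minimal sketch of the freeze check; the parameter names mirror the bullets above and are illustrative, not a defined interface.

```python
# Minimal sketch: decisions pause if any single freeze condition is met.
def must_freeze_decisions(cause_known: bool,
                          data_integrity_ok: bool,
                          attribution_stable: bool,
                          tracking_ok: bool) -> bool:
    return (not cause_known
            or not data_integrity_ok
            or not attribution_stable
            or not tracking_ok)

print(must_freeze_decisions(cause_known=False, data_integrity_ok=True,
                            attribution_stable=True, tracking_ok=True))  # True
```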
🔴 False Positive Protection Rule
Not all anomalies are real.
Before reacting, confirm:
• anomaly is not caused by reporting delay
• anomaly is not caused by sampling
• anomaly is not an expected behaviour change
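A minimal sketch of this pre-check; the inputs mirror the bullets above and are illustrative assumptions supplied by whoever validates the signal.

```python
# Minimal sketch: confirm the anomaly is not a false positive before
# classifying it and reacting.
def is_false_positive(reporting_delay: bool,
                      sampling_artifact: bool,
                      expected_behaviour_change: bool) -> bool:
    """True means: do not treat this as a real anomaly yet."""
    return reporting_delay or sampling_artifact or expected_behaviour_change

# Example: a 'traffic drop' observed while GA4 is still processing the day
print(is_false_positive(reporting_delay=True,
                        sampling_artifact=False,
                        expected_behaviour_change=False))  # True -> re-check later
```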
🔴 Revalidation Rule
After any anomaly:
→ full or partial validation must occur
No data returns to trusted state without validation.
🔴 Monitoring Feedback Loop
All anomalies must feed back into:
• Monitoring systems
• Audit processes
• Validation protocols
Purpose:
→ prevent repeat issues
Common Anomaly Causes
This framework helps identify:
• tracking breaks after deployment
• duplicate tag firing
• missing events
• attribution misconfiguration
• campaign mis-tagging
• platform algorithm shifts
• consent-mode changes
Relationship to Other Frameworks
This framework integrates with:
• Data Brain Monitoring and Audit Calendar Framework
• Data Brain Measurement Validation Protocol
• Data Brain Data Trust Framework
• Data Brain Attribution Validation Protocol
• HeadOffice Data Decision Gate Framework
Key Outcomes
When applied correctly:
• anomalies are handled safely
• bad data is contained
• decisions are protected
• system stability improves
• confidence in data remains high
Failure Modes Prevented
• reacting to false signals
• scaling on broken data
• ignoring tracking failures
• misinterpreting anomalies
• delayed issue detection
• loss of trust in data systems
Architectural Intent
This framework ensures MWMS operates as a controlled system under uncertainty.
Instead of reacting blindly to data:
→ MWMS validates, classifies, and controls response
Final Rule
If an anomaly is not understood:
→ decisions must not proceed
Change Log
Version: v1.0
Date: 2026-04-23
Author: Data Brain
Change:
Initial creation of Signal Anomaly Response Framework defining structured response to unexpected signal behaviour.
Change Impact Declaration
Pages Created:
Data Brain Signal Anomaly Response Framework
Pages Updated:
None
Pages Deprecated:
None
Registries Requiring Update:
MWMS Architecture Registry
Canon Version Update Required:
No
Change Log Entry Required:
Yes