HeadOffice Test Monitoring Dashboard Framework


Document Type: Framework
Status: Active
Authority: HeadOffice
Parent: Governance
Applies To: All MWMS environments where experiments, campaigns, or performance systems require ongoing monitoring and decision oversight
Version: v1.0
Last Reviewed: 2026-04-23


Purpose

The HeadOffice Test Monitoring Dashboard Framework defines how MWMS monitors experiment performance, validates outcomes, and supports decision-making through structured dashboard systems.

Experiments and campaigns generate data continuously.

Without structured monitoring:

• issues go unnoticed
• anomalies are missed
• decisions are delayed
• incorrect conclusions are made

This framework ensures MWMS maintains:

• real-time visibility
• structured performance monitoring
• consistent decision inputs
• centralised oversight


Core Principle

Data must be observable before it can be acted on.

Monitoring systems must:

• reflect reality
• highlight risk
• support validation
• enable fast response

A dashboard is not a reporting tool.

A dashboard is a decision surface.


Position in MWMS System

This framework operates within:

• HeadOffice → system oversight and decision control
• Experimentation Brain → test performance
• Data Brain → data validation and trust

It integrates:

• Raw Data Access Framework
• Warehouse Based Test Analysis Framework
• Data Decision Gate Framework


Dashboard Purpose Types

Monitoring dashboards serve three primary purposes:


1. Performance Monitoring

Track:

• users
• conversions
• conversion rates
• revenue or leads
• variant performance

Purpose:

→ understand current performance
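
As a minimal sketch of how these figures might be computed, assuming an illustrative per-variant tally (the `VariantStats` structure and its field names are not MWMS-specific):

```python
from dataclasses import dataclass

@dataclass
class VariantStats:
    """Illustrative per-variant summary; field names are assumptions."""
    variant: str
    users: int
    conversions: int

    @property
    def conversion_rate(self) -> float:
        # Guard against division by zero before traffic arrives.
        return self.conversions / self.users if self.users else 0.0

variants = [
    VariantStats("control", users=10_000, conversions=420),
    VariantStats("treatment", users=10_050, conversions=465),
]

baseline = variants[0].conversion_rate
for v in variants:
    lift = v.conversion_rate / baseline - 1 if baseline else 0.0
    print(f"{v.variant}: {v.conversion_rate:.2%} (relative lift {lift:+.1%})")
```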


2. Integrity Monitoring

Track:

• data anomalies
• missing events
• duplicate events
• unexpected spikes or drops

Purpose:

→ detect measurement issues
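
A hedged sketch of two such checks, duplicate detection and a volume heuristic, assuming an illustrative event shape and thresholds:

```python
from collections import Counter

# Illustrative events as (event_id, user_id, event_name) tuples; real MWMS
# event schemas will differ. The checks, not the schema, are the point.
events = [
    ("e1", "u1", "conversion"),
    ("e2", "u2", "conversion"),
    ("e2", "u2", "conversion"),  # duplicate delivery of the same event
]

id_counts = Counter(event_id for event_id, _, _ in events)
duplicates = {eid: n for eid, n in id_counts.items() if n > 1}
if duplicates:
    print(f"duplicate event ids: {duplicates}")

# Missing-event heuristic: compare observed volume to an assumed baseline.
expected_volume = 1_000  # assumed from historical daily averages
observed = len(id_counts)  # unique events after de-duplication
if observed < 0.5 * expected_volume:
    print(f"volume anomaly: {observed} unique events vs ~{expected_volume} expected")
```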


3. Decision Monitoring

Track:

• statistical confidence
• sample size progress
• readiness for decision
• decision gate status

Purpose:

→ determine when action is allowed
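
A minimal readiness sketch using a standard two-proportion z-test; the 95% threshold and the `REQUIRED_N` target are assumptions, not MWMS policy:

```python
from math import erf, sqrt

def two_proportion_confidence(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Confidence that two conversion rates differ (two-sided pooled z-test)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 0.0
    z = abs(p_a - p_b) / se
    return erf(z / sqrt(2))  # equals 1 minus the two-sided p-value

REQUIRED_N = 15_000  # assumed per-variant target from a prior power analysis
n_a, n_b = 10_000, 10_050
confidence = two_proportion_confidence(420, n_a, 465, n_b)
progress = min(n_a, n_b) / REQUIRED_N

print(f"confidence {confidence:.1%}, sample progress {progress:.0%}")
print("decision-ready" if confidence >= 0.95 and progress >= 1.0 else "keep collecting")
```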


Dashboard Data Sources

Dashboards must use:

• validated data
• preferably raw or warehouse data
• consistent data sources

Where multiple sources exist:

→ discrepancies must be acknowledged
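
One way to surface, rather than hide, a cross-source gap; the 2% tolerance is an assumption:

```python
def reconcile(metric: str, warehouse: float, interface: float,
              tolerance: float = 0.02) -> str:
    """Flag metrics whose sources disagree beyond a relative tolerance."""
    if warehouse == 0:
        return f"{metric}: warehouse reports zero; cannot compare"
    gap = abs(warehouse - interface) / warehouse
    status = "ok" if gap <= tolerance else "DISCREPANCY"
    return f"{metric}: warehouse={warehouse} interface={interface} gap={gap:.1%} [{status}]"

print(reconcile("conversions", warehouse=465, interface=448))
```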


🔴 Raw Data Preference Rule

Dashboards used for decision-making must prioritise:

• warehouse data
• validated datasets

Interface data may be used for:

• quick checks
• early signals

Final decisions must not rely on unvalidated interface data.


Dashboard Structure


1. Experiment Overview Section

Displays:

• test name
• variants
• start date
• status

Purpose:

→ identify test context


2. Variant Performance Section

Displays:

• users per variant
• conversions per variant
• conversion rates
• relative performance

Purpose:

→ compare outcomes


3. Statistical Section

Displays:

• confidence level
• significance indicators
• sample size progress

Purpose:

→ validate reliability


4. Integrity Section

Displays:

• anomaly indicators
• data validation status
• duplication checks
• missing data alerts

Purpose:

→ ensure data quality


5. Decision Status Section

Displays:

• decision gate status
• recommended action
• readiness level

Purpose:

→ guide decision-making
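
One way to make the five-section structure concrete is a typed payload; every field name below is illustrative, not a canonical MWMS schema:

```python
from dataclasses import dataclass, field

@dataclass
class DashboardPayload:
    """Illustrative container mirroring the five sections above."""
    # 1. Experiment overview
    test_name: str
    variants: list[str]
    start_date: str
    status: str
    # 2. Variant performance: variant -> {"users": ..., "conversions": ...}
    performance: dict[str, dict[str, int]] = field(default_factory=dict)
    # 3. Statistics
    confidence: float = 0.0
    sample_progress: float = 0.0
    # 4. Integrity
    integrity_alerts: list[str] = field(default_factory=list)
    # 5. Decision status
    gate_status: str = "HOLD"
    recommended_action: str = "continue collecting data"
```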


🔴 Anomaly Visibility Rule

Dashboards must highlight:

• sudden spikes
• sudden drops
• unexpected behaviour

Anomalies must not be hidden.

They must be visible immediately.
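
A minimal sketch of one common approach, a trailing-window z-score; the window size and threshold are assumptions:

```python
from statistics import mean, stdev

def flag_anomalies(daily: list[float], window: int = 7, z_threshold: float = 3.0):
    """Flag days that deviate sharply from the trailing window."""
    alerts = []
    for i in range(window, len(daily)):
        history = daily[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma == 0:
            continue  # flat history; no deviation scale to compare against
        z = (daily[i] - mu) / sigma
        if abs(z) >= z_threshold:
            alerts.append((i, "spike" if z > 0 else "drop", round(z, 1)))
    return alerts

# The final day collapses against the trailing week and is flagged as a drop.
print(flag_anomalies([100, 98, 103, 101, 99, 102, 100, 97, 40]))
```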


🔴 Decision Gate Integration Rule

Dashboards must reflect:

• measurement integrity
• data trust level
• attribution reliability
• statistical confidence

If any component fails:

→ dashboard must indicate decision risk
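
A minimal sketch of that aggregation logic; the component names are illustrative, not the canonical Data Decision Gate Framework fields:

```python
# Illustrative gate components; any failing component flags decision risk.
gate_components = {
    "measurement_integrity": True,
    "data_trust_level": True,
    "attribution_reliability": False,
    "statistical_confidence": True,
}

failed = [name for name, ok in gate_components.items() if not ok]
if failed:
    print(f"DECISION RISK: failing components -> {', '.join(failed)}")
else:
    print("all gate components pass: decision permitted")
```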


🔴 Real-Time vs Validated Data Rule

Dashboards may show:

• real-time data

But must clearly distinguish:

• validated data
• unvalidated data

Unvalidated data must not be treated as decision-safe.
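
A minimal sketch of carrying that distinction through to the display layer, assuming an upstream pipeline sets the `validated` flag:

```python
from dataclasses import dataclass

@dataclass
class MetricPoint:
    name: str
    value: float
    validated: bool  # assumed to be set by an upstream validation pipeline

points = [
    MetricPoint("conversions_today", 118, validated=False),     # real-time feed
    MetricPoint("conversions_yesterday", 431, validated=True),  # warehouse-validated
]

for p in points:
    label = "VALIDATED" if p.validated else "REAL-TIME / not decision-safe"
    print(f"{p.name}: {p.value} [{label}]")
```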


🔴 Segmentation Rule

Dashboards may include segmentation:

• device
• channel
• user type

Segmentation must:

• preserve statistical validity
• not mislead interpretation
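
One simple guard, sketched below, is a minimum-sample floor before a segment's rate is displayed; the floor is an assumption, and segment cuts still warrant caution about multiple comparisons:

```python
MIN_SEGMENT_USERS = 1_000  # assumed floor before a segment rate is displayed

segments = {
    "mobile": {"users": 8_200, "conversions": 310},
    "desktop": {"users": 420, "conversions": 25},  # too thin to read safely
}

for name, s in segments.items():
    if s["users"] < MIN_SEGMENT_USERS:
        print(f"{name}: sample too small ({s['users']} users); rate suppressed")
    else:
        print(f"{name}: {s['conversions'] / s['users']:.2%} conversion rate")
```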


🔴 Automation Rule

Dashboards should be:

• automatically updated
• connected to data pipelines
• consistent across environments

Automation ensures:

• reduced manual work
• consistent monitoring
• faster response


🔴 Data Latency Awareness Rule

Dashboards must account for:

• delayed data
• incomplete recent data

Recent data must be interpreted cautiously.
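
A minimal sketch of a latency-aware cutoff; the two-day window is an assumption:

```python
from datetime import date, timedelta

LATENCY_DAYS = 2  # assumed: the trailing two days may still be loading

def decision_safe(daily_rows: dict[date, int]) -> dict[date, int]:
    """Keep only days old enough for pipelines to have fully landed."""
    cutoff = date.today() - timedelta(days=LATENCY_DAYS)
    return {d: n for d, n in daily_rows.items() if d <= cutoff}

today = date.today()
rows = {today - timedelta(days=k): 100 - k for k in range(5)}
print(decision_safe(rows))  # the two most recent days are excluded
```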


Monitoring Frequency

Dashboards should support:

• continuous monitoring
• daily review
• weekly analysis
• milestone-based decision review


Relationship to Other Frameworks

Supports:

• Data Brain Raw Data Access Framework
• Experimentation Brain Warehouse Based Test Analysis Framework
• Data Brain Data Trust Framework
• Data Brain Signal Anomaly Response Framework
• HeadOffice Data Decision Gate Framework


Failure Modes Prevented

• missed anomalies
• delayed decisions
• incorrect conclusions
• over-reliance on incomplete data
• lack of visibility into test performance


Drift Protection

The system must prevent:

• dashboards showing inconsistent data
• reliance on outdated metrics
• loss of anomaly visibility
• degradation of data pipelines


Architectural Intent

The HeadOffice Test Monitoring Dashboard Framework ensures MWMS operates with:

continuous, structured visibility into performance and risk

It transforms dashboards from:

reporting tools → decision control systems


Final Rule

If a metric cannot be observed reliably:

→ it must not drive decisions


Change Log

Version: v1.0
Date: 2026-04-23
Author: HeadOffice

Change:
Initial creation of Test Monitoring Dashboard Framework defining how MWMS structures dashboards for experiment monitoring and decision oversight.


Change Impact Declaration

Pages Created:
HeadOffice Test Monitoring Dashboard Framework

Pages Updated:
None

Pages Deprecated:
None

Registries Requiring Update:
MWMS Architecture Registry

Canon Version Update Required:
No

Change Log Entry Required:
Yes