Document Type: Protocol
Status: Active
Authority: Experimentation Brain
Applies To: Affiliate Brain, Content Brain, Conversion Brain, Ads Brain, Research Brain
Parent: Experimentation Brain Canon
Version: v1.1
Last Reviewed: 2026-04-25
Purpose
The Experimentation Brain Time-Based Testing Protocol defines how MWMS evaluates performance changes when split testing is not possible.
Many environments do not allow true A/B testing.
Examples include:
• SEO changes
• pricing changes
• UX layout changes
• navigation changes
• content structure changes
• technical improvements
In these situations, performance must be evaluated sequentially over time.
Time-based testing allows MWMS to extract directional learning while maintaining experimentation discipline.
🔴 Extension (NEW)
Time-based testing must still follow:
👉 structured hypothesis
👉 measurement planning
👉 interpretation discipline
👉 decision control
Without this:
→ time-based testing becomes guess-based
Scope
This protocol applies to:
• SEO experiments
• content structure tests
• landing page structural changes
• offer structure changes
• pricing tests
• navigation changes
• UX layout modifications
• technical performance changes
Governs
• sequential comparison testing
• test timing structure
• signal interpretation logic
• decision discipline when statistical significance is limited
Does Not Govern
• true A/B split testing
• multivariate testing design
• traffic allocation mechanisms
These are governed by Experimentation Brain A/B testing frameworks.
Core Principle
Time-based testing produces:
👉 directional insight, not absolute certainty
Therefore:
👉 decision discipline must be stricter
🔴 Data Validation Requirement (NEW)
Before time-based data is used, it must pass:
• Signal Integrity
• Measurement Integrity
• Data Trust
If not:
→ results must be treated as low confidence
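As a rough illustration, this gate reduces to an all-or-nothing check. The sketch below assumes boolean pass/fail results for the three named checks; the function name and signature are illustrative, not an MWMS API.

```python
# Minimal sketch of the data-validation gate, assuming boolean pass/fail
# results from the three named checks. Names here are assumptions.
def data_usable(signal_integrity: bool,
                measurement_integrity: bool,
                data_trust: bool) -> str:
    """Time-based data is either fully validated or treated as low confidence."""
    if signal_integrity and measurement_integrity and data_trust:
        return "validated"
    return "low confidence"
```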
Definition
Time-based testing compares performance metrics across sequential time periods before and after a change is implemented.
Structure:
baseline period
→ change implementation
→ evaluation period
Performance difference between periods provides directional insight.
🔴 Forecast Requirement (NEW)
Before running a time-based test, the following must be defined:
• expected outcome
• expected direction
• acceptable variance
Without this:
→ comparison has no meaning
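A minimal sketch of such a forecast, assuming a simple record shape; the dataclass, its field names, and the example values are illustrative only.

```python
# Illustrative forecast record defined before a test runs. The dataclass,
# its field names, and the example values are assumptions for this sketch.
from dataclasses import dataclass

@dataclass
class TimeBasedForecast:
    metric: str                     # e.g. "conversion_rate"
    expected_direction: str         # "up" or "down"
    expected_change_pct: float      # expected relative change, e.g. 0.05 = +5%
    acceptable_variance_pct: float  # noise band inside which results are inconclusive

forecast = TimeBasedForecast(
    metric="conversion_rate",
    expected_direction="up",
    expected_change_pct=0.05,
    acceptable_variance_pct=0.02,
)
```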
When Time-Based Testing Should Be Used
Time-based testing should be used when:
• traffic volume is insufficient for split testing
• platform limitations prevent traffic splitting
• SEO indexing prevents controlled variants
• technical constraints prevent parallel versions
• risk prevents deploying variants simultaneously
• the change affects the entire system at once
Time-Based Testing Structure
Step 1 — Define Baseline Period
Collect performance data before implementing change.
The baseline period should be long enough to capture:
• normal variation
• seasonal effects
• traffic fluctuations
🔴 Baseline Integrity Rule (NEW)
Baseline must:
• use consistent measurement conditions
• use validated data
• reflect stable behaviour
If baseline is unstable:
→ test is invalid
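One possible stability check, assuming daily metric values and a coefficient-of-variation test; both the approach and the 0.25 threshold are assumptions, not protocol constants.

```python
# Hedged sketch of a baseline stability check: the baseline is flagged
# unstable when day-to-day variation exceeds a chosen threshold. The
# coefficient-of-variation approach and the 0.25 default are assumptions.
from statistics import mean, stdev

def baseline_is_stable(daily_values: list[float], max_cv: float = 0.25) -> bool:
    """True when stdev/mean stays under max_cv across the baseline period."""
    if len(daily_values) < 2:
        return False  # not enough data to assess variation
    avg = mean(daily_values)
    if avg == 0:
        return False  # degenerate baseline; treat as unstable
    return stdev(daily_values) / abs(avg) <= max_cv
```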
Step 2 — Implement Controlled Change
Introduce one meaningful change.
Avoid simultaneous unrelated changes.
Document:
• what changed
• why the change was introduced
• expected impact
• date of implementation
🔴 Isolation Enforcement (NEW)
Only one primary variable may be changed.
If multiple variables change:
→ result becomes non-interpretable
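An illustrative shape for the Step 2 documentation record; every field name and value below is an assumption, chosen to mirror the documentation list above.

```python
# Illustrative Step 2 change record; all field names and values here are
# assumptions that mirror the documentation requirements above.
from dataclasses import dataclass
from datetime import date

@dataclass
class ChangeRecord:
    what_changed: str
    rationale: str
    expected_impact: str
    implemented_on: date

record = ChangeRecord(
    what_changed="Reordered pricing page tiers",
    rationale="Hypothesis: leading with the mid tier lifts conversion",
    expected_impact="+3-5% conversion rate",
    implemented_on=date(2026, 4, 25),
)
```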
Step 3 — Define Evaluation Period
Allow sufficient time for system stabilisation.
Evaluation period should consider:
• traffic volume
• learning phase duration
• algorithm response time
• user adaptation effects
🔴 Time Discipline Rule (NEW)
No interpretation or decision may occur:
• before sufficient time has passed
• before signal stabilisation
Premature evaluation = invalid conclusion
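A minimal sketch of this gate, assuming the evaluation window length is fixed in advance; the 28-day example is illustrative, not a protocol constant.

```python
# Sketch of the time-discipline gate: interpretation stays blocked until
# the planned evaluation window has fully elapsed. The 28-day window in
# the example is an assumption.
from datetime import date, timedelta

def evaluation_open(started: date, planned_days: int, today: date) -> bool:
    """No interpretation before the planned evaluation window has passed."""
    return today >= started + timedelta(days=planned_days)

assert not evaluation_open(date(2026, 4, 1), 28, date(2026, 4, 20))  # too early
assert evaluation_open(date(2026, 4, 1), 28, date(2026, 4, 29))      # window complete
```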
Step 4 — Compare Performance Periods
Compare performance between:
• baseline period
• evaluation period
Assess directional performance change.
🔴 Comparison Discipline (NEW)
Comparison must include:
👉 Expected vs Actual vs Variance
Interpretation must answer:
• did performance improve as expected?
• did it decline?
• is variance stable or noisy?
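One way the Expected vs Actual vs Variance comparison could be expressed, assuming relative (percentage) changes; the labels and classification rules below are a sketch, with real thresholds taken from the forecast defined before the test.

```python
# Hedged sketch of the Expected vs Actual vs Variance comparison. The
# labels and classification rules are illustrative.
def classify_result(expected_pct: float, actual_pct: float,
                    acceptable_variance_pct: float) -> str:
    """Return a directional label, not a statistical verdict."""
    if abs(actual_pct) <= acceptable_variance_pct:
        return "within noise band: no directional signal"
    gap = actual_pct - expected_pct
    if actual_pct > 0 and gap >= -acceptable_variance_pct:
        return "improved roughly as expected"
    if actual_pct > 0:
        return "improved, but well below forecast"
    return "declined: investigate before any decision"

# Expected +5%, observed +4%, noise band ±2% -> improvement as expected.
print(classify_result(expected_pct=0.05, actual_pct=0.04,
                      acceptable_variance_pct=0.02))
```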
Metrics Suitable for Time-Based Testing
• conversion rate
• revenue per visitor
• average order value
• bounce rate
• scroll depth
• session duration
• organic click-through rate
• search ranking movement
• lead conversion rate
Metrics should align with Growth Lever focus when applicable.
Directional Signal Interpretation
Time-based testing rarely produces statistical certainty.
Interpretation must rely on:
• trend consistency
• magnitude of change
• supporting behavioural signals
• repeat test confirmation
🔴 Interpretation Control (NEW)
Interpretation must follow:
• Interpretation Discipline Protocol
• Data Brain validation
• lifecycle stage alignment
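A hedged sketch of a trend-consistency check, assuming the evaluation window is split into weekly deltas; the 0.75 agreement threshold is an assumption, not an MWMS constant.

```python
# Illustrative trend-consistency check: split the evaluation window into
# weekly deltas and ask whether most of them point the same way.
def trend_is_consistent(weekly_deltas: list[float],
                        min_agreement: float = 0.75) -> bool:
    """True if at least min_agreement of weekly deltas share one sign."""
    if not weekly_deltas:
        return False
    ups = sum(1 for d in weekly_deltas if d > 0)
    downs = sum(1 for d in weekly_deltas if d < 0)
    return max(ups, downs) / len(weekly_deltas) >= min_agreement

print(trend_is_consistent([0.02, 0.03, -0.01, 0.04]))  # True: 3 of 4 weeks up
```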
Confounding Variable Awareness
External factors may influence results.
Examples:
• seasonality
• campaign overlap
• traffic mix changes
• algorithm updates
• market changes
• competitor actions
🔴 Risk Adjustment Rule (NEW)
If confounding variables exist:
→ confidence must be reduced
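One illustrative encoding of this rule, assuming confidence is tracked as a small ordered scale; the level labels and one-level-per-confounder step are assumptions.

```python
# Sketch of the risk-adjustment rule: each observed confounder knocks
# confidence down one level. The level labels are illustrative.
LEVELS = ["high", "medium", "low", "untrusted"]

def adjust_confidence(base: str, confounders: list[str]) -> str:
    """Reduce confidence one level per confounding variable, bottoming out."""
    idx = min(LEVELS.index(base) + len(confounders), len(LEVELS) - 1)
    return LEVELS[idx]

print(adjust_confidence("high", ["seasonality", "algorithm update"]))  # -> "low"
```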
Test Isolation Rule
Where possible:
test one major variable per time period.
Iterative Testing Principle
Time-based tests may be repeated iteratively.
Sequence:
test change A
→ observe impact
→ refine change
→ retest
→ compare cumulative improvements
🔴 Iteration Control (NEW)
Each iteration must:
• define new hypothesis
• define expected outcome
• compare to previous result
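A sketch of an iteration log under these rules; all hypotheses and numbers below are invented purely to show the comparison shape, not real results.

```python
# Illustrative iteration log: each retest carries its own hypothesis and
# is compared against the previous iteration. All values are made up.
iterations = [
    {"hypothesis": "Shorter headline lifts CTR", "expected": 0.03, "observed": 0.010},
    {"hypothesis": "Shorter headline + new CTA", "expected": 0.04, "observed": 0.035},
]

for prev, curr in zip(iterations, iterations[1:]):
    delta = curr["observed"] - prev["observed"]
    print(f"{curr['hypothesis']}: {delta:+.3f} vs previous iteration")
```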
Decision Discipline (NEW)
Time-based testing must not drive immediate scaling.
Allowed Decisions
• refine
• retest
• continue observation
• cautious progression
Blocked Decisions
• aggressive scaling
• capital escalation
• strong conclusions without repeatability
Decision Gate
Decisions may only occur if:
• signal is consistent
• variance is understood
• confounding variables accounted for
• data integrity confirmed
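The gate can be read as a single boolean condition combined with the allowed-decision list; the sketch below assumes boolean inputs and is illustrative only.

```python
# Minimal decision-gate sketch: all four conditions must hold, and even
# then only the protocol's allowed decisions pass. Names are illustrative.
ALLOWED = {"refine", "retest", "continue observation", "cautious progression"}

def decision_permitted(decision: str, signal_consistent: bool,
                       variance_understood: bool,
                       confounders_accounted: bool,
                       data_integrity_ok: bool) -> bool:
    gate_open = all([signal_consistent, variance_understood,
                     confounders_accounted, data_integrity_ok])
    return gate_open and decision in ALLOWED

# Aggressive scaling stays blocked even when every gate condition is met.
assert not decision_permitted("aggressive scaling", True, True, True, True)
assert decision_permitted("retest", True, True, True, True)
```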
Relationship to SEO Testing Framework
(UNCHANGED)
Relationship to Growth Lever Framework
(UNCHANGED)
Governance Rule
When A/B testing is not feasible, time-based testing must follow structured evaluation discipline.
Unstructured sequential changes should be avoided.
Changes should be documented and evaluated systematically.
Controlled Loss Alignment (NEW)
This protocol supports Controlled Loss by:
• preventing premature decisions
• reducing false positives
• reducing false negatives
• enforcing disciplined evaluation
• limiting capital exposure
Architectural Intent
Time-based testing provides:
👉 directional learning under constraints
It must not:
👉 simulate precision where none exists
🔴 Architectural Extension (NEW)
This protocol ensures:
• decisions reflect uncertainty
• interpretation reflects context
• system remains disciplined under imperfect conditions
Final Rule
If time-based results are unstable or unclear:
→ no strong decision should be made
Change Log
Version: v1.1
Date: 2026-04-25
Author: Experimentation Brain / HeadOffice
Change
Upgraded protocol to include:
• forecast requirement
• data validation dependency
• decision gating logic
• time discipline enforcement
• alignment with interpretation and lifecycle systems