Experimentation Brain Test Result And Decision Workflow

Document Type: Protocol
Status: Active
Version: v1.1
Authority: Experimentation Brain
Applies To: Experimentation Brain test result handling, decision-making, and post-test workflow
Parent: Experimentation Brain Canon
Last Reviewed: 2026-04-25


Purpose

This protocol defines how MWMS must handle test results after a test has been executed.

Its purpose is to ensure that every test:

• produces a structured result
• leads to a clear next action
• generates reusable learning
• prevents uncontrolled scaling
• improves future decision quality
• is evaluated against expected performance (NEW)

This protocol transforms testing from an isolated activity into a continuous learning and improvement system.


Scope

This protocol applies to:

• all tests executed within Experimentation Brain
• post-test evaluation
• result classification
• decision-making after test completion
• feedback into Affiliate Brain and Research Brain

It governs what must happen after a test produces results.

It does not govern:

• test setup
• test execution
• statistical calculation methods
• budget allocation


Core Principle

A test is not complete when it stops running.

A test is only complete when:

• its result is classified
• a decision is made
• learning is captured
• a next action is defined
• actual performance is compared to expected performance (NEW)


🔴 Decision Discipline Extension (NEW)

All test results must be evaluated using:

👉 Forecast → Actual → Variance → Decision

Without this:

→ optimisation becomes guess-based
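The Forecast → Actual → Variance → Decision loop can be sketched as a small helper. This is an illustrative assumption only: the function names and the 20% tolerance threshold are placeholders, not values defined by this protocol.

```python
# Illustrative sketch of Forecast -> Actual -> Variance -> Decision.
# The 0.20 tolerance threshold is an assumed placeholder, not a protocol value.

def variance_ratio(forecast: float, actual: float) -> float:
    """Relative variance of actual performance against forecast."""
    if forecast == 0:
        raise ValueError("forecast must be non-zero to compute variance")
    return (actual - forecast) / forecast

def variance_decision(forecast: float, actual: float, threshold: float = 0.20) -> str:
    """Map variance onto a coarse decision signal (assumed thresholds)."""
    v = variance_ratio(forecast, actual)
    if v >= 0:
        return "meets-or-exceeds-forecast"
    if abs(v) <= threshold:
        return "within-tolerance"
    return "underperforms-forecast"
```

A test whose actual result lands within the tolerance band would still require a judgement call; the sketch only makes the comparison explicit rather than guess-based.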


Workflow Overview

Test Execution → Test Completion → Result Classification → Decision → Action → Learning Capture → Feedback Loop


Step 1 — Test Completion

A test enters completion when:

• planned duration is reached
• success criteria are met
• failure criteria are met
• test is manually stopped

At this point, the test must move into structured evaluation.


Step 2 — Result Classification

Every test must be classified into one of the following categories:


Clear Winner

Definition:

• meets or exceeds success criteria
• strong signal quality
• consistent performance
• aligns with or exceeds forecast expectations (NEW)


Weak Positive

Definition:

• shows some positive signal
• inconsistent or marginal performance
• requires refinement or retesting
• variance from forecast is unstable (NEW)


Inconclusive

Definition:

• insufficient data
• unclear signal
• inadequate test conditions
• unable to validate against forecast (NEW)


Structured Failure

Definition:

• fails to meet success criteria
• provides clear negative signal
• result is interpretable
• underperforms forecast expectations (NEW)


Invalid Test

Definition:

• test setup was flawed
• execution issues occurred
• data is unreliable
• data fails Data Brain validation (NEW)


🔴 Data Validation Gate (NEW)

Before classification:

Data must pass:

• Signal Integrity
• Measurement Integrity
• Data Trust

If not:

→ test must be classified as Invalid Test
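The gate above can be expressed as a simple all-or-nothing check. A minimal sketch, assuming the three checks reduce to booleans; the function and parameter names are hypothetical, not defined by Data Brain.

```python
# Assumed sketch: the three Data Brain checks as booleans.
def passes_data_gate(signal_integrity: bool,
                     measurement_integrity: bool,
                     data_trust: bool) -> bool:
    """All three checks must pass before classification is allowed."""
    return signal_integrity and measurement_integrity and data_trust

def classify_or_invalidate(candidate_classification: str,
                           signal_integrity: bool,
                           measurement_integrity: bool,
                           data_trust: bool) -> str:
    """Any failed check forces the Invalid Test classification."""
    if not passes_data_gate(signal_integrity, measurement_integrity, data_trust):
        return "Invalid Test"
    return candidate_classification
```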


Step 3 — Decision Mapping

Each classification must map to a defined action.


Clear Winner → Scale Or Expand

Actions:

• escalate to Finance Brain for capital readiness
• compare performance against forecast expectations (NEW)
• consider scaling strategy
• expand testing dimensions


Weak Positive → Refine And Retest

Actions:

• adjust angle
• refine creative
• modify targeting
• run follow-up test
• adjust hypothesis based on variance (NEW)


Inconclusive → Extend Or Redesign

Actions:

• extend duration
• increase sample size
• redesign test
• review measurement conditions (NEW)


Structured Failure → Kill And Record

Actions:

• stop further testing
• record failure
• capture learning
• validate that failure is signal-valid (NEW)


Invalid Test → Reset And Rebuild

Actions:

• fix test design
• fix measurement issues (NEW)
• relaunch with corrected structure
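The five classification-to-action mappings above can be captured in a single lookup, which also enforces the rule that no result may end without a defined action. A sketch only; the dictionary structure is an assumption about implementation, not mandated by this protocol.

```python
# Classification -> mandated next action, per Step 3.
DECISION_MAP = {
    "Clear Winner": "Scale Or Expand",
    "Weak Positive": "Refine And Retest",
    "Inconclusive": "Extend Or Redesign",
    "Structured Failure": "Kill And Record",
    "Invalid Test": "Reset And Rebuild",
}

def next_action(classification: str) -> str:
    """Every classification must resolve to a defined action.

    An unknown classification is an error, never a silent no-op.
    """
    try:
        return DECISION_MAP[classification]
    except KeyError:
        raise ValueError(f"unclassified result: {classification!r}") from None
```

Raising on unknown input mirrors the drift-protection rule that results must never exist without decisions.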


Step 4 — Learning Capture

Every test must produce structured learning.


Required Learning Fields

• What was tested
• What worked
• What did not work
• Why the result occurred (best interpretation)
• Market insight
• Angle insight
• Platform insight
• Funnel insight
• Recommendation for future
• Variance vs expected outcome (NEW)
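The required learning fields can be modelled as a record with a completeness check, so a test cannot close with gaps. The field names below are assumed translations of the bullets above, not a mandated schema.

```python
from dataclasses import dataclass

@dataclass
class LearningRecord:
    """One structured learning record per completed test (field names assumed)."""
    what_was_tested: str
    what_worked: str
    what_did_not_work: str
    why_result_occurred: str   # best interpretation
    market_insight: str
    angle_insight: str
    platform_insight: str
    funnel_insight: str
    future_recommendation: str
    variance_vs_expected: float

    def is_complete(self) -> bool:
        """Every text field must be non-empty before the test can close."""
        text_fields = [v for v in vars(self).values() if isinstance(v, str)]
        return all(f.strip() for f in text_fields)
```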


Step 5 — Feedback Loop

Learning must be routed back to:


Affiliate Brain

Improves:

• opportunity selection
• evaluation quality
• decision accuracy


Research Brain

Improves:

• signal classification
• pattern detection
• market understanding


Step 6 — Scaling Control

Scaling must not occur automatically.


Scaling Conditions

Before scaling:

• result must be classified as Clear Winner
• confidence must be sufficient
• risk must be understood
• Finance Brain must approve capital
• performance must align with forecast expectations (NEW)


Scaling Block Conditions

Scaling must be blocked if:

• result is weak or inconsistent
• signal is unclear
• risk is high
• data is insufficient
• data integrity is compromised (NEW)
• variance from forecast is unstable (NEW)
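The scaling conditions and block conditions above amount to a unanimous veto: scaling proceeds only when every condition holds. A minimal sketch under that assumption; the parameter names are illustrative, and real confidence and risk checks would be richer than booleans.

```python
# Assumed sketch: every condition must hold; any block condition vetoes scaling.
def scaling_allowed(classification: str,
                    confidence_sufficient: bool,
                    risk_understood: bool,
                    finance_brain_approved: bool,
                    data_integrity_ok: bool,
                    variance_stable: bool) -> bool:
    """Scaling is never automatic: all gates must pass together."""
    return (classification == "Clear Winner"
            and confidence_sufficient
            and risk_understood
            and finance_brain_approved
            and data_integrity_ok
            and variance_stable)
```

Note the conjunction: a Clear Winner with compromised data integrity or unstable forecast variance is still blocked.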


Step 7 — Record Final Outcome

Each test must end with:

• final classification
• final decision
• final action
• learning captured
• comparison to forecast (NEW)


Controlled Loss Alignment

This workflow enforces the MWMS Controlled Loss Principle by:

• ensuring failures are interpretable
• preventing uncontrolled scaling
• converting losses into learning
• reducing repeated mistakes
• maintaining capital discipline
• preventing decisions based on weak or invalid data (NEW)


Governance Role

This protocol ensures:

• testing produces structured outcomes
• decisions are consistent
• learning is captured
• system improves over time
• decisions are based on validated data and expected performance (NEW)


Relationship To Other MWMS Pages

This protocol operates alongside:

• Experimentation Brain Test Candidate Screen Specification
• Affiliate Brain To Experimentation Brain Handoff Specification
• Experimentation Brain Structured Testing Protocol
• Finance Brain Capital Allocation Ladder
• MWMS Controlled Loss Principle
• Data Brain Measurement Planning Framework (NEW)
• Data Brain Data Trust Framework (NEW)


Drift Protection

The system must prevent:

• tests ending without classification
• results without decisions
• scaling without validation
• ignoring failed tests
• repeating mistakes without learning
• treating all results as equal
• decisions without forecast comparison (NEW)
• decisions based on unvalidated data (NEW)


Architectural Intent

This protocol completes the MWMS testing loop.

It ensures that:

• every test produces value
• every result leads to action
• every action improves the system
• decisions are grounded in both data and expectation (NEW)

It transforms MWMS into a learning system that compounds over time.


Change Log

Version: v1.1
Date: 2026-04-25
Author: Experimentation Brain / HeadOffice


Change

Upgraded workflow to include:

• forecast vs actual comparison layer
• Data Brain validation gate
• variance-aware decision logic
• stronger scaling discipline
• alignment with Measurement Planning system


Change Impact Declaration

Pages Created:
None

Pages Updated:
Experimentation Brain Test Result And Decision Workflow

Pages Deprecated:
None

Registries Requiring Update:
None

Canon Version Update Required:
No

Change Log Entry Required:
Yes


End of Protocol