Experimentation Brain Growth Process Framework

Document Type: Framework
Status: Active
Authority: Experimentation Brain
Applies To: Affiliate Brain, Ads Brain, Conversion Brain, Research Brain, Content Brain, Product Brain
Parent: Experimentation Brain Canon
Version: v1.1
Last Reviewed: 2026-04-19


Purpose

The Experimentation Brain Growth Process Framework defines the structured process MWMS uses to convert ideas into disciplined experiments and useful learning.

A growth process is required because experimentation without structure leads to:

random testing
poor prioritisation
weak documentation
fragmented learning
unclear next steps
slow organisational learning

This framework ensures MWMS experimentation follows a repeatable cycle that improves signal quality and learning efficiency over time.

The growth process does not exist to increase activity volume.

It exists to increase learning quality and decision usefulness.

Statistical confidence discipline strengthens interpretation reliability within the growth loop.


Scope

This framework applies to:

Experimentation Brain test governance
Affiliate Brain experiment workflows
Ads Brain structured testing
Conversion Brain optimisation process
Research Brain idea generation support
Content Brain structured test iteration
Product Brain behavioural experiment support

This framework governs:

growth experimentation process structure
movement from idea to prioritisation to test to learning
experiment tracking discipline
learning loop continuity
confidence interpretation discipline

This framework does not govern:

specific experiment scoring formulas
statistical threshold methodology
Brain-specific execution tactics
budget approval rules

These remain governed by related frameworks.


Definition

A Growth Process is the structured operating cycle used to turn opportunities into experiments, experiments into results, and results into usable learning.

The process creates disciplined continuity between:

idea generation
prioritisation
test execution
analysis
next-step decision-making

The process ensures tests contribute to system intelligence rather than isolated activity.

Statistical interpretation strengthens confidence in learning outcomes.


Core Growth Process Loop

MWMS uses the following core loop:

Idea

Prioritisation

Test

Analysis

Learning

Iteration

Each stage must be completed explicitly.

Skipping stages reduces learning quality.

Confidence discipline applies particularly during Analysis and Learning stages.


Stage 1 — Idea Generation

Idea generation identifies potential opportunities for testing.

Ideas may originate from:

customer research
market signals
behaviour analysis
performance diagnostics
competitive intelligence
team ideation
product friction observations

Ideas should relate to active Growth Levers where possible.

Idea generation should remain broad.

Judgement and prioritisation occur later.

Ideas should remain hypothesis-oriented rather than assumption-oriented.


Stage 2 — Prioritisation

Not all ideas should be tested.

Prioritisation determines:

which ideas deserve testing now.

Prioritisation may consider:

expected impact
confidence
ease
resource requirements
strategic fit
Growth Lever relevance
signal reliability potential

Ideas should be chosen intentionally.

Prioritisation should consider likelihood of obtaining interpretable signals.

Tests producing ambiguous signals reduce learning efficiency.
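The criteria above can be sketched as a simple scoring pass. Note that this framework deliberately leaves the actual scoring formula to related frameworks, so the ICE-style average below (impact, confidence, ease, each rated 1-10) and the example idea names are illustrative assumptions only, not the MWMS formula.

```python
# Illustrative ICE-style prioritisation sketch.
# The rating scale, the averaging formula, and the idea names are
# hypothetical; the governed scoring formula lives in a related framework.

def ice_score(impact: int, confidence: int, ease: int) -> float:
    """Average of three 1-10 ratings; higher means test sooner."""
    for value in (impact, confidence, ease):
        if not 1 <= value <= 10:
            raise ValueError("ratings must be in 1..10")
    return (impact + confidence + ease) / 3

ideas = {
    "new landing headline": ice_score(impact=7, confidence=6, ease=9),
    "pricing page redesign": ice_score(impact=9, confidence=4, ease=3),
}
# Rank intentionally rather than testing everything.
ranked = sorted(ideas, key=ideas.get, reverse=True)
```

The point of the sketch is the discipline, not the arithmetic: every idea gets the same explicit criteria before any test starts.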


Stage 3 — Test Execution

Selected ideas move into structured testing.

Test execution requires:

clear hypothesis
clear success condition
defined owner
defined timeline
defined scope

Tests should be meaningful enough to generate measurable learning.

Where possible, test structure should minimise noise and ambiguity.

Sampling awareness should be considered when defining test scope.

Small sample sizes increase signal uncertainty.

Tests should aim to produce interpretable signal clarity.
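The sampling-awareness point can be made concrete with the standard error of a conversion rate, which shrinks with the square root of sample size. This is a minimal stdlib sketch; the 5% rate and the sample sizes are assumed for illustration.

```python
import math

def standard_error(p: float, n: int) -> float:
    """Standard error of an observed conversion rate p over n trials."""
    return math.sqrt(p * (1 - p) / n)

# Quadrupling the sample halves the uncertainty around the estimate,
# which is why small tests produce wide, hard-to-interpret signals.
small = standard_error(0.05, 400)
large = standard_error(0.05, 1600)
```

In scope terms: if a test cannot reach a sample size that brings the standard error well below the effect it hopes to detect, its signal will stay ambiguous.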


Stage 4 — Analysis

Analysis determines what the test result means.

Analysis should assess:

result quality
signal clarity
confidence level
potential confounding variables
behavioural relevance
decision usefulness
sampling reliability
variance influence

Analysis must go beyond:

did it win?

Analysis should answer:

what did this teach the system?

Observed relationships must be interpreted carefully.

Correlation alone does not prove causation.

Statistical confidence should be interpreted as a spectrum rather than a binary outcome.
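One way to read confidence as a spectrum rather than a binary is an interval around the observed difference. The sketch below uses a normal-approximation 95% interval for the difference between two conversion rates; the rates and sample sizes are assumed for illustration.

```python
import math

def diff_ci(p_a: float, n_a: int, p_b: float, n_b: int, z: float = 1.96):
    """Approximate 95% CI for the difference between two conversion rates."""
    diff = p_b - p_a
    se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    return diff - z * se, diff + z * se

low, high = diff_ci(0.040, 2000, 0.052, 2000)
# An interval spanning zero means "inconclusive", not "a loss";
# a narrow interval above zero is stronger evidence than a wide one.
inconclusive = low < 0 < high
```

Reporting the interval, not just "win/lose", is what lets the Learning stage record confidence honestly.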


Stage 5 — Learning

Every meaningful test should produce learning.

Learning may include:

confirmed hypothesis
rejected hypothesis
new behavioural insight
new directional clue
new theme worth exploring
area requiring repetition or narrowing
confidence adjustment

Learning should be documented clearly.

Undocumented learning is effectively lost.

Learning confidence should reflect signal reliability.

Sampling limitations should be considered when interpreting results.

Learning must distinguish between:

observed effect
statistical noise

Confidence discipline improves learning quality.


Stage 6 — Iteration

Iteration uses previous learning to guide next action.

Possible next actions include:

repeat test
narrow test
expand test
retire idea
move to new theme
promote stronger hypothesis

Iteration closes the growth process loop.

Confidence-adjusted learning improves iteration quality.

Repeated signal confirmation strengthens confidence stability.


Statistical Reliability Layer

Statistical reasoning strengthens interpretation reliability.

Confidence in experiment results depends on:

sample size
variance stability
effect size magnitude
signal consistency
measurement reliability
data distribution behaviour

Small samples produce higher uncertainty.

Large samples reduce random variation influence.

Probability reasoning helps interpret uncertainty realistically.

False positives may occur when noise appears meaningful.

False negatives may occur when real effects are missed due to insufficient evidence.

Confidence must consider both risks.

Hypothesis testing supports structured evaluation of observed differences.

Forecast-based assumptions must account for uncertainty and residual error.

Statistical confidence improves decision stability.
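The hypothesis-testing and false-positive/false-negative points above can be sketched with a pooled two-proportion z statistic: the same observed lift clears a significance threshold at a large sample size and fails it at a small one. The counts below are assumed for illustration.

```python
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Pooled two-proportion z statistic for an observed difference."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Same observed lift (4.0% -> 5.0%) at two sample sizes:
z_small = two_proportion_z(40, 1000, 50, 1000)      # below |z| = 1.96
z_large = two_proportion_z(400, 10000, 500, 10000)  # above |z| = 1.96
```

Declaring the small-sample result a win would risk a false positive; abandoning the idea on that evidence alone would risk a false negative. Both risks are sample-size questions, not intuition questions.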


Growth Process Objective

The primary objective of the growth process is not experimentation for its own sake.

The objective is:

to improve decision quality through structured learning.

This means:

fewer random actions
faster pattern recognition
better prioritisation
higher confidence progression
cleaner signal accumulation

Confidence-adjusted learning improves scaling reliability.


Growth Process Benefits

A disciplined growth process improves:

focus
team alignment
learning continuity
decision confidence
experiment quality
cross-functional collaboration
signal reliability interpretation

It reduces:

firefighting behaviour
random experiment selection
loss of learning
over-reliance on opinion
fragmented optimisation activity

Statistical discipline strengthens confidence clarity.


Relationship to Growth Levers

The growth process should normally operate inside active Growth Levers.

Growth Levers define:

what area should be improved.

The growth process defines:

how learning inside that area progresses.

Without active leverage focus, the growth process may become too broad.

Statistical confidence improves prioritisation reliability inside Growth Levers.


Relationship to Themes

Themes organise experiments within each Growth Lever.

The growth process should move through:

Growth Lever

Theme

Experiment

Learning

Themes improve pattern detection and learning continuity.

Confidence-adjusted learning improves theme evolution quality.


Experiment Tracking Principle

Experiments should be tracked systematically.

Tracking should distinguish between:

ideas not yet tested
active tests
completed tests

Tracking should capture:

hypothesis
owner
theme
lever
results
learning
next step
confidence strength

Tracking should remain useful, not bloated.

Confidence strength improves signal comparability.
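The tracking fields listed above can be captured in a minimal record structure. This is a sketch only: the field names, status values, and example entry are illustrative assumptions, not a mandated MWMS schema.

```python
from dataclasses import dataclass

@dataclass
class ExperimentRecord:
    """Minimal tracking record mirroring the fields listed above.
    Field names and status values are illustrative, not a mandated schema."""
    hypothesis: str
    owner: str
    theme: str
    lever: str
    status: str = "idea"         # idea -> active -> completed
    results: str = ""
    learning: str = ""
    next_step: str = ""
    confidence: str = "unknown"  # e.g. low / medium / high

backlog = [
    ExperimentRecord(
        hypothesis="Shorter signup form lifts completion rate",
        owner="conversion",
        theme="signup friction",
        lever="activation",
    )
]
# The status field keeps untested ideas, active tests, and
# completed tests distinguishable at a glance.
active = [r for r in backlog if r.status == "active"]
```

A flat structure like this stays useful without becoming bloated; anything heavier should earn its place by learning value.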


Documentation Discipline Rule

Documentation should be sufficient to preserve learning.

However, over-documentation should be avoided.

Not every small optimisation requires full experiment treatment.

Documentation depth should match learning value and test significance.

Confidence interpretation should remain proportional to signal importance.

This protects the process from administrative overload.


Experiment vs Optimisation Distinction

The growth process must distinguish between:

growth experiments
optimisations
A/B tests

Growth Experiment

A meaningful structured test designed to produce decision-relevant learning.

Optimisation

A smaller incremental adjustment that improves an existing asset or process.

A/B Test

A specific test method comparing two variants simultaneously.

Not every optimisation requires full experiment treatment.

Not every experiment is an A/B test.

Clear distinction protects the growth process from noise.


Process Maturity Principle

The growth process should mature over time.

Typical maturity progression may include:

tracking current tests
focusing on one leverage area
building clearer hypotheses
creating structured backlog
improving prioritisation logic
improving documentation quality
improving analysis depth
improving statistical confidence interpretation

MWMS should not attempt maximum process complexity immediately.

Process maturity should evolve with team capability.


Governance Rule

All meaningful experiments should follow the defined growth process loop.

Experimentation activity that bypasses prioritisation, analysis, or learning documentation reduces system intelligence.

Statistical confidence discipline improves interpretation reliability.

The process should remain structured but lightweight enough to sustain ongoing use.


Relationship to Experimentation Brain Canon

Experimentation Brain Canon defines:

why testing discipline matters.

This Growth Process Framework defines:

how disciplined experimentation flows operationally.

Experimentation Brain Statistical Confidence Framework defines:

how confidence in results is interpreted.

All three operate together.


Version Control

v1.0
Initial definition of structured MWMS growth experimentation process.

v1.1
Added Statistical Reliability Layer incorporating probability interpretation, sampling awareness, confidence spectrum discipline, and forecast uncertainty awareness to improve experiment interpretation reliability.


END Experimentation Brain Growth Process Framework v1.1