Document Type: Framework
Status: Active
Version: v1.0
Authority: MWMS HeadOffice
Parent: Experimentation Brain Canon
Slug: experimentation-brain-program-maturity-assessment-framework
Purpose
Defines how MWMS evaluates the maturity of an experimentation capability over time.
Maturity assessment is not vanity scoring.
Its purpose is to identify whether the experimentation function is becoming more:
- strategically aligned
- methodologically reliable
- operationally efficient
- cross-functionally trusted
- systemically useful
This framework helps MWMS distinguish between:
- activity without learning
- learning without implementation
- implementation without discipline
- discipline that is actually becoming a scalable capability
Scope
Applies to assessment of experimentation capability across:
- strategy alignment
- process design
- planning quality
- execution standards
- communication quality
- learning capture
- adoption and trust
- tool and system support
- organisational integration
Applies at the level of:
- one experimentation program
- one Brain-level experimentation function
- cross-Brain experimentation capability
- future enterprise-wide maturity reviews
Core Principle
A mature experimentation program is not defined by running more tests.
It is defined by how reliably testing produces:
- trustworthy signals
- useful decisions
- repeatable learning
- scalable organisational advantage
Maturity is therefore a capability assessment, not an activity count.
Assessment Dimensions
1. Strategic Alignment
Assesses whether experimentation is tied to:
- business objectives
- growth constraints
- system priorities
- decision-making needs
Questions include:
- Are the right problems being tested?
- Are tests linked to real strategic goals?
- Is prioritisation aligned to material opportunity?
2. Process Structure
Assesses whether experimentation follows a clear operating flow.
Includes:
- problem identification
- hypothesis formation
- prioritisation
- test execution
- decision routing
- archive discipline
Questions include:
- Does the program have a repeatable process?
- Are decision pathways clear?
- Are responsibilities understood?
3. Signal Quality Discipline
Assesses whether the program generates reliable signals.
Includes:
- test integrity
- confidence progression
- result interpretation quality
- noise reduction discipline
Questions include:
- Are tests producing interpretable signals?
- Is confidence earned rather than assumed?
- Are false winners reduced over time?
4. Learning Capture
Assesses whether knowledge created by tests becomes reusable.
Includes:
- archive consistency
- summarisation quality
- repeated pattern detection
- learning continuity
Questions include:
- Are results recorded in useful form?
- Can prior learning be found and reused?
- Does the program accumulate intelligence?
5. Communication and Adoption
Assesses whether results influence behaviour.
Includes:
- implementation uptake
- stakeholder trust
- clarity of decision summaries
- learning distribution across Brains
Questions include:
- Are results being understood?
- Are teams changing behaviour because of findings?
- Is communication simple enough to spread?
6. Tool and System Enablement
Assesses whether the infrastructure supports scale.
Includes:
- experiment tracking systems
- workflow systems
- measurement reliability
- communication tooling
- dashboarding and scorecards
Questions include:
- Is the system easy to use?
- Does tooling reduce friction?
- Are critical metrics accessible?
7. Organisational Integration
Assesses whether experimentation is embedded in decision flow.
Includes:
- HeadOffice usage
- cross-Brain interfaces
- process ownership
- implementation linkage
- strategic recognition
Questions include:
- Is experimentation advisory only, or truly integrated?
- Is it isolated or connected to broader system behaviour?
- Does it affect how MWMS operates?
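The seven dimensions above can be recorded as a simple scoring structure. This is a minimal sketch, not canon: the 1–5 rubric, the `MaturityProfile` name, and the gap-surfacing helper are all illustrative assumptions layered on top of the dimension names defined in this framework.

```python
# Minimal sketch of a dimension-level maturity record.
# Assumption: each dimension is scored on a 1-5 rubric (the scale is
# illustrative, not defined by this framework).
from dataclasses import dataclass

# Dimension names taken directly from the Assessment Dimensions section.
DIMENSIONS = [
    "strategic_alignment",
    "process_structure",
    "signal_quality_discipline",
    "learning_capture",
    "communication_and_adoption",
    "tool_and_system_enablement",
    "organisational_integration",
]

@dataclass
class MaturityProfile:
    scores: dict  # dimension name -> score in 1..5

    def weakest_dimensions(self, n: int = 2) -> list:
        """Return the n lowest-scoring dimensions: the likely capability gaps."""
        return sorted(self.scores, key=self.scores.get)[:n]

    def average(self) -> float:
        """Mean score across all assessed dimensions."""
        return sum(self.scores.values()) / len(self.scores)
```

For example, a program strong on strategy but weak on archives and integration would surface those dimensions as its capability gaps:

```python
profile = MaturityProfile(scores=dict(zip(DIMENSIONS, [4, 3, 3, 2, 3, 2, 2])))
# profile.weakest_dimensions() -> ["learning_capture", "tool_and_system_enablement"]
```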
Maturity Stages
Stage 1 — Ad Hoc
Characteristics:
- experiments happen inconsistently
- no strong prioritisation
- learning is fragmented
- decisions are often intuitive rather than structured
Stage 2 — Structured
Characteristics:
- repeatable workflow exists
- basic standards are present
- results are captured more consistently
- prioritisation begins improving
Stage 3 — Reliable
Characteristics:
- signals are generally trustworthy
- teams understand the process
- decisions use experimentation outputs more consistently
- learning archives become useful assets
Stage 4 — Integrated
Characteristics:
- experimentation affects multiple Brains
- strategy and testing are meaningfully linked
- communication improves adoption
- implementation pathways are smoother
Stage 5 — Intelligence-Driven
Characteristics:
- experimentation functions as a core operating layer
- learning compounds over time
- cross-Brain capability strengthens
- the organisation becomes more adaptive through disciplined testing
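The five stages above can be tied back to dimension scores with a simple mapping. This is a sketch under stated assumptions: the framework does not prescribe thresholds, so the rounding rule here is illustrative, and an honest assessment should also check the weakest dimension, since a single average can mask a gap that caps real maturity.

```python
# Illustrative mapping from an average dimension score (1-5 scale assumed)
# to the five stages named in this framework. The thresholds are an
# assumption, not canon.
STAGES = ["Ad Hoc", "Structured", "Reliable", "Integrated", "Intelligence-Driven"]

def stage_for(average_score: float) -> str:
    """Round the average dimension score into one of the five named stages."""
    index = min(4, max(0, round(average_score) - 1))
    return STAGES[index]
```

So a program averaging around 2.7 across the seven dimensions would sit at Stage 3 (Reliable), even though individual dimensions may still be at Stage 2.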
Inputs
- process documents
- guardrail metrics
- archive records
- stakeholder interviews
- tooling review
- adoption patterns
- decision records
Outputs
- maturity profile
- capability gaps
- improvement priorities
- target state definition
- cross-Brain integration needs
Assessment Use Cases
This framework may be used to:
- assess the current Experimentation Brain capability
- compare maturity across sub-functions
- identify bottlenecks to scaling
- justify new systems or governance improvements
- establish future build priorities for MCR and HeadOffice
Relationship to Program Guardrail Metrics Framework
Guardrail metrics provide one measurement layer.
Maturity assessment provides broader interpretation of capability.
Metrics alone do not explain maturity.
This framework interprets the operating condition behind those metrics.
Relationship to Results Communication Framework
Communication quality is one of the strongest maturity indicators.
A program that learns well but communicates poorly remains immature at the organisational level.
Failure Modes
This framework protects MWMS from:
- confusing activity with capability
- scaling weak systems too early
- treating test count as maturity
- ignoring poor adoption and low trust
- underestimating archive and learning discipline
- assuming tool count equals sophistication
Governance Notes
Experimentation Brain may run the maturity assessment.
HeadOffice should review material capability implications where maturity gaps affect broader MWMS performance.
Maturity scoring must remain diagnostic, not performative.
Canon Relationships
- Experimentation Brain Canon
- Experimentation Brain Program Guardrail Metrics Framework
- Experimentation Brain Results Communication Framework
- Experimentation Brain Experimentation Operating System Framework
Change Log
- v1.0: initial canonical structure defined