Experimentation Brain Paid Media Experiment Framework

Document Type: Framework
Status: Active
Version: v1.0
Authority: Experimentation Brain
Parent: Experimentation Brain Architecture
Applies To: Ads Brain, Affiliate Brain, Conversion Brain, Data Brain, Research Brain
Last Reviewed: 2026-04-19


Purpose

Defines the standard structure for designing, executing, evaluating, and scaling paid media experiments inside MWMS.

The framework ensures paid media testing is:

structured
comparable
repeatable
statistically interpretable
scalable
auditable

The framework prevents:

random testing
premature scaling
inconclusive experiment interpretation
platform-driven bias
metric misinterpretation

This framework standardises how MWMS evaluates:

audiences
creatives
offers
landing pages
bidding approaches
funnel transitions
targeting logic
messaging alignment

across all paid traffic platforms.


Core Principle

Paid media optimisation is not a creative activity.

It is an experimentation discipline.

Every paid media test must produce learning that:

improves future decisions
reduces uncertainty
increases signal clarity
improves capital allocation discipline

Experiments must produce interpretable outcomes.

If a result cannot be interpreted, the test is incomplete.


Experiment Structure

Each paid media experiment must define the following components before activation.


1. Experiment Goal

Defines the business outcome the experiment is intended to influence.

Examples:

increase qualified traffic
improve conversion rate
reduce cost per acquisition
improve signal quality
validate audience fit
validate message resonance
improve funnel continuity

Goals must connect to a measurable system outcome.

Goals must not be vague.


2. Hypothesis Definition

Each experiment must state a behavioural hypothesis.

Structure:

If we change [variable]

for [defined audience]

then [defined outcome metric] will improve

because [behavioural reasoning]

Example:

If we increase message specificity
for high-intent search audiences
then conversion rate will improve
because perceived relevance increases decision confidence.

Hypotheses must be falsifiable.

Non-falsifiable hypotheses are not valid tests.
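The four-part structure can be captured as a lightweight record so hypotheses are stored consistently and checked for testability before activation. A minimal sketch in Python; the class and field names are illustrative, not part of the framework:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Hypothesis:
    """Four-part behavioural hypothesis for a paid media experiment."""
    variable: str        # what we change
    audience: str        # for whom
    outcome_metric: str  # the metric expected to improve
    reasoning: str       # the behavioural mechanism

    def is_falsifiable(self) -> bool:
        # Testable only if it names a concrete variable and a
        # measurable outcome metric.
        return bool(self.variable.strip()) and bool(self.outcome_metric.strip())

    def statement(self) -> str:
        return (f"If we change {self.variable} for {self.audience}, "
                f"then {self.outcome_metric} will improve, "
                f"because {self.reasoning}.")

h = Hypothesis(
    variable="message specificity",
    audience="high-intent search traffic",
    outcome_metric="conversion rate",
    reasoning="perceived relevance improves decision confidence",
)
```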


3. Variable Isolation

Each experiment must isolate a primary variable.

Examples of valid variables:

audience segment
message angle
creative format
offer framing
landing page structure
CTA framing
targeting constraint
bid strategy
funnel step transition

Multiple simultaneous variable changes reduce interpretability.

Where multiple variables are tested:

test design must clearly separate influence layers.
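One-variable-at-a-time cell generation makes the isolation rule mechanical: every cell differs from control in exactly one variable. A hypothetical sketch; the variable names and values are placeholders:

```python
def one_variable_cells(control: dict, variations: dict) -> list:
    """Build test cells that each differ from control in exactly one variable."""
    cells = [("control", dict(control))]
    for variable, values in variations.items():
        for value in values:
            cell = dict(control)
            cell[variable] = value  # isolate the primary variable
            cells.append((f"{variable}={value}", cell))
    return cells

control = {
    "audience": "lookalike_1pct",
    "message_angle": "time_saving",
    "creative_format": "static_image",
}
cells = one_variable_cells(control, {
    "message_angle": ["cost_saving", "social_proof"],
    "creative_format": ["short_video"],
})
```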


4. KPI Definition

Each experiment must define primary and secondary KPIs.

Primary KPI:

the main signal used to evaluate success.

Secondary KPIs:

diagnostic signals used to interpret behavioural effects.

Example KPI hierarchy:

Primary KPI:

conversion rate
cost per acquisition
qualified lead rate

Secondary KPIs:

CTR
landing page engagement
scroll depth
video completion
form completion rate

KPIs must map to the system signal hierarchy.
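The primary/secondary split can be encoded so evaluation code cannot confuse the decision signal with diagnostics. A sketch with illustrative metric names:

```python
KPI_HIERARCHY = {
    "primary": "conversion_rate",
    "secondary": ["ctr", "landing_page_engagement",
                  "scroll_depth", "form_completion_rate"],
}

def split_signals(results: dict, hierarchy: dict = KPI_HIERARCHY) -> dict:
    """Separate the decision signal (primary KPI) from diagnostic signals."""
    return {
        "decision_signal": results.get(hierarchy["primary"]),
        "diagnostics": {k: results[k]
                        for k in hierarchy["secondary"] if k in results},
    }

out = split_signals({"conversion_rate": 0.024, "ctr": 0.031, "scroll_depth": 0.62})
```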


5. Measurement Window

Each experiment must define:

minimum runtime
minimum impression volume
minimum signal threshold

Premature evaluation is prohibited.

Experiment duration must allow sufficient signal accumulation.

Minimum signal thresholds must reflect:

traffic volume
funnel complexity
conversion lag

Tests must not be concluded early due to impatience.
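Minimum signal thresholds can be grounded in a standard sample size calculation rather than intuition. A sketch using the two-proportion z-test approximation; the baseline rate and minimum detectable lift are assumptions that must be set per experiment:

```python
from math import ceil
from statistics import NormalDist

def min_sample_per_arm(baseline_cr: float, min_detectable_lift: float,
                       alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate visitors needed per arm to detect a relative lift in
    conversion rate with a two-sided two-proportion z-test."""
    p1 = baseline_cr
    p2 = baseline_cr * (1 + min_detectable_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # significance quantile
    z_beta = NormalDist().inv_cdf(power)            # power quantile
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

# e.g. 2% baseline conversion rate, detect a 20% relative lift
n = min_sample_per_arm(0.02, 0.20)
```

A 2% baseline with a 20% detectable lift requires on the order of twenty thousand visitors per arm, which is why premature evaluation is prohibited.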


6. Experiment Constraints

Each test must define constraint boundaries.

Examples:

budget constraint
traffic constraint
platform constraint
targeting constraint
time constraint

Constraints ensure tests remain comparable.

Changing constraints mid-test invalidates interpretation.


7. Control Structure

Where possible, experiments must include:

control condition
variation condition

Control enables:

relative signal comparison
behavioural attribution
causal interpretation

Without control, learning reliability decreases.
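With a control/variation split, relative comparison reduces to a standard two-proportion test. A minimal sketch using a pooled z-test; the conversion counts are illustrative:

```python
from math import sqrt
from statistics import NormalDist

def compare_to_control(conv_c: int, n_c: int, conv_v: int, n_v: int):
    """Return (relative lift, two-sided p-value) of variation vs control."""
    p_c, p_v = conv_c / n_c, conv_v / n_v
    pooled = (conv_c + conv_v) / (n_c + n_v)
    se = sqrt(pooled * (1 - pooled) * (1 / n_c + 1 / n_v))
    z = (p_v - p_c) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return (p_v - p_c) / p_c, p_value

lift, p = compare_to_control(200, 10_000, 250, 10_000)
```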


8. Signal Interpretation Logic

Interpretation must consider:

signal strength
consistency across metrics
behavioural coherence
statistical plausibility

Example interpretation patterns:

high CTR + low conversion rate indicates a mismatch between ad message and post-click experience
low CTR + high conversion rate indicates high relevance but limited reach
high engagement + low conversion indicates friction later in the funnel

Signal interpretation must consider full funnel behaviour.
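The CTR/conversion patterns above can be expressed as a simple diagnostic rule. A toy sketch; the benchmarks are account- or segment-level baselines and the example values are placeholders:

```python
def diagnose(ctr: float, conv_rate: float,
             ctr_benchmark: float, cr_benchmark: float) -> str:
    """Joint CTR / conversion-rate reading, mirroring the patterns above."""
    high_ctr = ctr >= ctr_benchmark
    high_cr = conv_rate >= cr_benchmark
    if high_ctr and not high_cr:
        return "message mismatch"            # clicks without conversion
    if not high_ctr and high_cr:
        return "high relevance, limited reach"
    if high_ctr and high_cr:
        return "coherent positive signal"
    return "weak signal"                     # neither metric clears baseline

reading = diagnose(ctr=0.045, conv_rate=0.008,
                   ctr_benchmark=0.030, cr_benchmark=0.020)
```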


9. Decision Rule

Each experiment must define decision logic before activation.

Possible outcomes:

scale
iterate
pause
reject
retest with modified structure

Scaling decisions must not be emotional.

Scaling decisions must reflect signal strength and structural coherence.
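A decision rule fixed before activation can be as small as a threshold table. A sketch; the lift and significance thresholds are illustrative and must be pre-registered per experiment:

```python
def decide(lift: float, p_value: float,
           min_scale_lift: float = 0.10, alpha: float = 0.05) -> str:
    """Map measured lift and p-value to a pre-registered outcome."""
    if p_value >= alpha:
        return "retest"          # inconclusive: extend runtime or restructure
    if lift >= min_scale_lift:
        return "scale"
    if lift > 0:
        return "iterate"         # real but small effect: refine the variable
    return "reject"
```

Because the thresholds are fixed before launch, the outcome is mechanical rather than emotional.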


10. Documentation Requirement

Each experiment must produce structured record outputs.

Record fields:

experiment ID
hypothesis
variables tested
audience tested
KPI results
interpretation summary
decision outcome

Documentation ensures signal reuse.

Undocumented experiments do not contribute to MWMS intelligence.
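The record fields above map directly onto a serialisable structure so results can be stored and reused. A sketch; the field values are illustrative placeholders:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class ExperimentRecord:
    experiment_id: str
    hypothesis: str
    variables_tested: list
    audience_tested: str
    kpi_results: dict
    interpretation_summary: str
    decision_outcome: str

record = ExperimentRecord(
    experiment_id="PM-0001",
    hypothesis="Higher message specificity lifts conversion rate",
    variables_tested=["message_angle"],
    audience_tested="lookalike_1pct",
    kpi_results={"conversion_rate": 0.025, "ctr": 0.031},
    interpretation_summary="Variation outperformed control on primary KPI",
    decision_outcome="scale",
)
serialised = json.dumps(asdict(record))  # ready for a shared experiment log
```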


Experiment Types

MWMS paid media experiments commonly fall into the following categories:

Audience Experiments

test audience fit
test targeting precision
test segment responsiveness

Creative Experiments

test message angle
test belief triggers
test visual structure
test attention patterns

Offer Experiments

test value framing
test incentive structure
test perceived risk reduction

Funnel Experiments

test landing page continuity
test CTA clarity
test information hierarchy

Bid Strategy Experiments

test cost control approaches
test volume acquisition stability

Platform Mechanics Experiments

test optimisation behaviour
test delivery mechanics


Scaling Discipline

Scaling must occur incrementally.

Scaling must follow demonstrated signal strength.

Scaling progression example:

test segment
adjacent segment
broader segment
new segment cluster

Scaling must evaluate:

signal decay
cost stability
behavioural consistency

Scaling must not occur globally after one successful test.

Scaling must protect baseline performance stability.
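Incremental scaling with a guardrail can be sketched as a per-cycle budget step that holds whenever cost stability breaks. The step size and CPA target are assumptions to be set per account:

```python
def next_budget(current: float, observed_cpa: float, target_cpa: float,
                step: float = 0.30, max_budget: float = None) -> float:
    """One scaling cycle: raise budget by at most `step` (30%), and only
    while CPA stays within target; otherwise hold the current budget."""
    if observed_cpa > target_cpa:
        return current                      # signal decay: do not scale
    raised = current * (1 + step)
    return min(raised, max_budget) if max_budget is not None else raised
```

Holding rather than cutting on a single bad cycle protects baseline stability while the signal is re-evaluated.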


Signal Integrity Protection

Experiment conclusions must consider:

sample bias
platform optimisation distortion
seasonal effects
budget volatility
external traffic shifts

Signals must be interpreted within system context.

Single-metric interpretation is prohibited.


Relationship to Other MWMS Frameworks

Supports:

Experimentation Brain Statistical Confidence Framework
Ads Brain Creative Testing Structure Framework
Conversion Brain Funnel Structure Framework
Research Brain Behaviour Signal Framework
Data Brain Measurement Integrity Framework

Provides the experiment structure layer for paid traffic decision-making.


Architectural Intent

Paid media platforms introduce complexity due to:

algorithmic optimisation
variable interaction effects
delayed signal feedback
attribution uncertainty

This framework ensures MWMS maintains:

decision discipline
signal clarity
capital protection
learning continuity

Experiments must improve system intelligence, not just campaign performance.


Change Log

Version: v1.0
Date: 2026-04-19
Author: Experimentation Brain

Change:

Initial creation of Paid Media Experiment Framework defining structured methodology for designing, executing, evaluating, and scaling paid media experiments across MWMS traffic systems.

Establishes hypothesis discipline, KPI hierarchy structure, scaling constraints, signal interpretation logic, and documentation requirements.