Experimentation Brain Decision Structure Tests

Document Type: Framework
Status: Draft Canon Candidate
Authority: HeadOffice
Applies To: Experimentation Brain, Affiliate Brain, Ads Brain
Parent: Experimentation Brain Testing Intelligence Framework
Last Reviewed: 2026-03-29


Purpose

Defines the experiment structures used to test how decision environments influence behavioural continuation.

Ensures Experimentation Brain can evaluate whether changes to option presentation, comparison structure, and decision flow increase or reduce user progression.

Supports structured testing of decision support variables in landing pages, bridge pages, offer pages, and pricing environments.


Core Principle

Users often fail to act because they cannot confidently decide.

Decision environments can be tested structurally.

Choice architecture, option clarity, and trade-off visibility all influence behavioural completion probability.


Test Domains

Option Clarity Tests

Test whether users can clearly distinguish between available choices.

Variables may include:

• feature comparison visibility
• benefit contrast clarity
• package naming clarity
• offer differentiation clarity

Primary question:

Can the user easily understand the difference between available options?


Choice Simplicity Tests

Test whether reducing complexity improves progression.

Variables may include:

• number of visible options
• number of comparison columns
• amount of supporting detail
• simplified vs dense comparison layouts

Primary question:

Does reducing visible complexity improve choice confidence?


Dominant Option Tests

Test whether highlighting a primary path improves decision speed and continuation.

Variables may include:

• recommended option labels
• most popular option labels
• highlighted package styling
• central option emphasis

Primary question:

Does visible prioritisation improve progression without harming trust?


Trade-Off Transparency Tests

Test whether users make decisions more easily when gains and sacrifices are clearer.

Variables may include:

• feature inclusion display
• savings explanation
• cost-benefit clarity
• side-by-side comparison detail level

Primary question:

Does clearer trade-off visibility improve decision confidence?


Post-Click Decision Continuity Tests

Test whether decision support remains strong after an ad click.

Variables may include:

• ad-to-page message continuity
• repeated value framing
• headline alignment
• landing page option framing

Primary question:

Does post-click alignment improve decision completion rates?
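Each test domain above follows the same shape: one domain, one structural variable, one primary question, and a documented expected mechanism. A minimal sketch of that record as a data structure (field names and example values are illustrative assumptions, not canonical definitions):

```python
from dataclasses import dataclass, field

# Illustrative sketch only: field names and example values are
# assumptions, not canonical definitions from this framework.
@dataclass
class DecisionStructureTest:
    domain: str                 # e.g. "option_clarity", "dominant_option"
    variable: str               # the single structural variable under test
    primary_question: str       # the behavioural question the test answers
    expected_mechanism: str     # documented before launch (see Testing Rules)
    variants: list = field(default_factory=list)

test = DecisionStructureTest(
    domain="choice_simplicity",
    variable="number_of_visible_options",
    primary_question="Does reducing visible complexity improve choice confidence?",
    expected_mechanism="Fewer visible options lower comparison effort.",
    variants=["3_options", "5_options"],
)
```

Keeping every test in this uniform shape makes the later drift-trigger checks (one variable per test, mechanism documented before launch) straightforward to apply.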


Recommended Metrics

Primary:

• progression to next step
• selection completion rate
• click-through to commitment step
• completed decision event rate

Secondary:

• time to selection
• abandonment rate
• comparison interaction depth
• return-to-comparison behaviour

Diagnostic:

• hesitation patterns
• option switching frequency
• repeated re-evaluation behaviour
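The primary and secondary metrics above can be derived from per-session event records. A minimal sketch, assuming a simple list-of-dicts session format (the event schema and field names are assumptions for illustration):

```python
# Illustrative metric computation; the session/event schema is an assumption.
def progression_rate(sessions):
    """Share of sessions that reach the next step (a primary metric)."""
    if not sessions:
        return 0.0
    progressed = sum(1 for s in sessions if s.get("reached_next_step"))
    return progressed / len(sessions)

def mean_time_to_selection(sessions):
    """Average seconds from page view to selection (a secondary metric)."""
    times = [s["selection_seconds"] for s in sessions
             if s.get("selection_seconds") is not None]
    return sum(times) / len(times) if times else None

sessions = [
    {"reached_next_step": True, "selection_seconds": 12.0},
    {"reached_next_step": False, "selection_seconds": None},
    {"reached_next_step": True, "selection_seconds": 20.0},
]
# progression_rate(sessions) -> 2/3; mean_time_to_selection(sessions) -> 16.0
```

The same per-session records can feed the diagnostic signals (hesitation, option switching) if variant-level event tracking is kept consistent, as the testing rules require.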


Testing Rules

• test one major decision variable at a time where possible
• isolate structural choice changes from unrelated creative changes
• preserve tracking consistency across variants
• document expected behavioural mechanism before launch
• do not treat uplift as valid without behavioural interpretation
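The isolation rule above ("test one major decision variable at a time") can be checked mechanically by diffing variant configurations before launch. A minimal sketch, assuming each variant is a flat dict of structural settings (the keys are hypothetical):

```python
def changed_variables(control: dict, variant: dict) -> set:
    """Return the keys whose values differ between two variant configs."""
    keys = set(control) | set(variant)
    return {k for k in keys if control.get(k) != variant.get(k)}

def check_isolation(control: dict, variant: dict) -> None:
    """Raise if more than one decision variable changed between variants."""
    changed = changed_variables(control, variant)
    if len(changed) > 1:
        raise ValueError(f"Multiple decision variables changed: {sorted(changed)}")

control = {"visible_options": 5, "comparison_columns": 4}
variant = {"visible_options": 3, "comparison_columns": 4}
check_isolation(control, variant)  # passes: only one variable changed
```

Running a check like this as a pre-launch gate turns the isolation rule from a convention into an enforced constraint.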


Structural Insight

Decision environments are not neutral.

They are designed systems that can:

• reduce choice friction
• increase cognitive effort
• clarify trade-offs
• obscure differences
• support commitment
• delay commitment

The purpose of these tests is to identify which decision structures help users move forward with clarity.


Interfaces

Inputs:

• Affiliate Brain Behavioural Signal Framework
• Affiliate Brain Decision Support Model
• Ads Brain Decision Support Signals
• Offer Comparison Grid
• Landing Page Review

Outputs:

• Decision Structure Test Result
• Choice Complexity Finding
• Dominant Option Impact Finding
• Decision Continuity Recommendation


Drift Triggers

Reject or flag if:

• multiple decision variables are changed without isolation
• the test cannot explain the behavioural mechanism being tested
• option differences are too unclear to interpret results
• uplift is claimed without decision-path evidence
• structural confusion is introduced in the name of testing