Experimentation Dashboard
Available in: Agency Plan
Run A/B tests to optimize your content performance. Test different variations and use statistical analysis to determine winners.
Overview
The Experimentation Dashboard lets you:
- Create A/B tests for content, campaigns, or modules
- Define multiple variants with different content
- Track performance metrics
- View statistical significance
- Get data-driven recommendations
Understanding Experiments
An experiment compares two or more variants to determine which performs better:
- Control - Your baseline version
- Variants - Alternative versions to test
- Traffic Split - How traffic is distributed (e.g., 50/50, 33/33/34)
- Success Metric - What you're measuring (clicks, conversions, engagement)
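For orientation, the pieces above can be modeled roughly as the Python sketch below. The field names are illustrative assumptions, not the dashboard's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class Variant:
    name: str                  # e.g. "Control" or "Headline Option 2"
    traffic_percentage: float  # share of traffic this variant receives (0-100)
    is_control: bool = False   # exactly one variant should be the baseline
    content_data: dict = field(default_factory=dict)  # the content to test

@dataclass
class Experiment:
    name: str
    success_metric: str        # e.g. "clicks", "conversions", "engagement"
    variants: list[Variant] = field(default_factory=list)
```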
Creating an Experiment
- Go to Experiments in the main navigation
- Click Create Experiment
- Enter experiment details:
  - Name (required)
  - Description (optional)
  - Experiment type (content, campaign, or module)
  - Brand (required)
  - Success metric (clicks, conversions, engagement, etc.)
- Click Create
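If it helps to see the fields together, the details above amount to a small configuration like this hypothetical example (the keys are illustrative only, not the dashboard's API):

```python
experiment_details = {
    "name": "Homepage headline test",               # required
    "description": "Compare two headline styles",   # optional
    "experiment_type": "content",                    # content, campaign, or module
    "brand": "Acme",                                 # required
    "success_metric": "clicks",                      # what you're measuring
}
```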
Adding Variants
- Go to your experiment
- Click Add Variant
- Configure variant:
  - Name (e.g., "Variant A", "Headline Option 2")
  - Variant type (control, variant_a, variant_b, etc.)
  - Content data (the actual content to test)
  - Traffic percentage (0-100%)
  - Mark as control (if this is your baseline)
Note: Traffic percentages should total 100% across all variants.
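If you plan splits outside the dashboard, a quick sanity check like this Python sketch can confirm the percentages add up (illustrative only, not part of the product):

```python
def validate_traffic_split(percentages: list[float]) -> None:
    """Raise if the planned traffic percentages do not total 100%."""
    total = sum(percentages)
    if abs(total - 100.0) > 1e-6:
        raise ValueError(f"Traffic split totals {total}%, expected 100%")

validate_traffic_split([50.0, 50.0])        # OK
validate_traffic_split([33.0, 33.0, 34.0])  # OK
```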
Running an Experiment
- Ensure all variants are added
- Set experiment status to Running
- Set start and end dates (optional)
- The system will automatically assign variants to new content based on the traffic split
Assigning Variants
When creating content (module runs, campaigns, etc.), the system will:
- Check if an active experiment exists
- Assign a variant based on traffic split (weighted random)
- Track which variant was shown to which entity
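The weighted random step works conceptually like this Python sketch (the dashboard handles this internally; the code is illustrative only):

```python
import random

def assign_variant(variants: list[dict]) -> dict:
    """Pick a variant with probability proportional to its traffic percentage."""
    weights = [v["traffic_percentage"] for v in variants]
    return random.choices(variants, weights=weights, k=1)[0]

variants = [
    {"name": "Control",   "traffic_percentage": 50},
    {"name": "Variant A", "traffic_percentage": 50},
]
chosen = assign_variant(variants)  # each new entity lands in one variant, roughly 50/50
```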
Recording Results
Results are recorded automatically when:
- Content is published
- Clicks are tracked
- Conversions occur
- Engagement metrics are captured
You can also manually record metrics:
- Go to your experiment
- Click Record Metric
- Enter:
  - Variant ID
  - Metric name
  - Metric value
  - Sample size
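Each manually recorded metric boils down to a record like the hypothetical one below; the field names mirror the form above but are illustrative:

```python
metric_record = {
    "variant_id": "variant_a",    # which variant the result belongs to
    "metric_name": "conversions",
    "metric_value": 42,           # e.g. number of conversions observed
    "sample_size": 1000,          # e.g. number of entities exposed to the variant
}
conversion_rate = metric_record["metric_value"] / metric_record["sample_size"]  # 0.042
```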
Viewing Experiment Statistics
- Go to your experiment
- Click View Statistics
- See:
  - Performance by variant
  - Statistical significance (p-value)
  - Confidence intervals
  - Winner determination
  - Recommendations
Understanding Results
Statistical Significance
- P-value < 0.05: Statistically significant difference
- P-value ≥ 0.05: No significant difference detected
- Sample Size: Larger samples provide more reliable results
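For a conversion-style metric, a difference between two variants is commonly assessed with a two-proportion z-test. The sketch below shows the idea with made-up numbers; it is not necessarily the exact method the dashboard uses:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

p = two_proportion_p_value(conv_a=120, n_a=1000, conv_b=160, n_b=1000)
significant = p < 0.05  # p is about 0.01 here, so the difference is significant
```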
Recommendations
The system provides one of three recommendations:
- Winner Found: One variant significantly outperforms others
- Inconclusive: No significant difference, continue testing
- Not Enough Data: Need more samples for reliable results
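The three outcomes map onto a simple decision, sketched here with an illustrative minimum-sample threshold:

```python
def recommend(p_value: float, sample_size: int, min_samples: int = 1000) -> str:
    """Translate test results into one of the three recommendations."""
    if sample_size < min_samples:
        return "Not Enough Data"   # need more samples for reliable results
    if p_value < 0.05:
        return "Winner Found"      # one variant significantly outperforms
    return "Inconclusive"          # no significant difference, keep testing

recommend(p_value=0.01, sample_size=2000)  # "Winner Found"
recommend(p_value=0.30, sample_size=2000)  # "Inconclusive"
recommend(p_value=0.01, sample_size=200)   # "Not Enough Data"
```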
Experiment Status
- Draft - Being configured
- Running - Actively testing
- Paused - Temporarily stopped
- Completed - Finished and analyzed
- Cancelled - Stopped without completion
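If you track experiments in your own scripts, the lifecycle can be mirrored with a small enumeration (the values are assumptions, shown only for orientation):

```python
from enum import Enum

class ExperimentStatus(Enum):
    DRAFT = "draft"          # being configured
    RUNNING = "running"      # actively testing
    PAUSED = "paused"        # temporarily stopped
    COMPLETED = "completed"  # finished and analyzed
    CANCELLED = "cancelled"  # stopped without completion
```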
Best Practices
- Test One Thing: Change one element per experiment for clear results
- Equal Traffic Split: Start with 50/50 for two variants
- Adequate Sample Size: Wait for sufficient data before drawing conclusions (a rough estimate is sketched after this list)
- Set Clear Metrics: Define success metric before starting
- Run Long Enough: Allow enough time for statistical significance
- Document Learnings: Note what worked and why
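As a rough guide to "long enough", you can estimate the sample size needed per variant for a conversion metric with the standard two-proportion formula. The baseline rate and expected lift below are example assumptions:

```python
from math import ceil
from statistics import NormalDist

def samples_per_variant(p_baseline, p_expected, alpha=0.05, power=0.8):
    """Approximate sample size per variant to detect p_baseline -> p_expected."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance level
    z_beta = NormalDist().inv_cdf(power)           # desired statistical power
    variance = p_baseline * (1 - p_baseline) + p_expected * (1 - p_expected)
    n = (z_alpha + z_beta) ** 2 * variance / (p_baseline - p_expected) ** 2
    return ceil(n)

samples_per_variant(0.10, 0.12)  # roughly 3,800 samples per variant for a 10% -> 12% lift
```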
Back to: Help Center