
Experimentation Dashboard

Available in: Agency Plan

Run A/B tests to optimize your content performance. Test different variants and use statistical analysis to determine a winner.

Overview

The Experimentation Dashboard lets you:

  • Create A/B tests for content, campaigns, or modules
  • Define multiple variants with different content
  • Track performance metrics
  • View statistical significance
  • Get data-driven recommendations

Understanding Experiments

An experiment compares two or more variants to determine which performs better:

  • Control - Your baseline version
  • Variants - Alternative versions to test
  • Traffic Split - How traffic is distributed (e.g., 50/50, 33/33/34)
  • Success Metric - What you're measuring (clicks, conversions, engagement)
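
For intuition, a two-variant experiment with a 50/50 split can be pictured roughly like this (an illustrative sketch only, not the product's internal schema):

    experiment = {
        "success_metric": "clicks",  # what you're measuring
        "variants": [
            # the control is your baseline version
            {"name": "Control",   "is_control": True,  "traffic_percentage": 50},
            # each alternative version gets its own share of traffic
            {"name": "Variant A", "is_control": False, "traffic_percentage": 50},
        ],
    }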

Creating an Experiment

  1. Go to Experiments in the main navigation
  2. Click Create Experiment
  3. Enter experiment details:
    • Name (required)
    • Description (optional)
    • Experiment type (content, campaign, or module)
    • Brand (required)
    • Success metric (clicks, conversions, engagement, etc.)
  4. Click Create
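
If it helps to think of the form as data, the details above map to something like the following (the field names here are purely illustrative, not a documented API):

    new_experiment = {
        "name": "Homepage headline test",         # required
        "description": "Compare two headlines",   # optional
        "experiment_type": "content",             # content, campaign, or module
        "brand": "Acme",                          # required
        "success_metric": "clicks",               # clicks, conversions, engagement, etc.
    }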

Adding Variants

  1. Go to your experiment
  2. Click Add Variant
  3. Configure variant:
    • Name (e.g., "Variant A", "Headline Option 2")
    • Variant type (control, variant_a, variant_b, etc.)
    • Content data (the actual content to test)
    • Traffic percentage (0-100%)
    • Mark as control (if this is your baseline)

Note: Traffic percentages should total 100% across all variants.
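
As a quick sanity check before starting, you can confirm the split adds up. The sketch below assumes a single control variant and uses illustrative field names:

    def validate_traffic_split(variants):
        """True if percentages total 100 and exactly one variant is the control."""
        total = sum(v["traffic_percentage"] for v in variants)
        controls = sum(1 for v in variants if v["is_control"])
        return total == 100 and controls == 1

    variants = [
        {"name": "Control",   "is_control": True,  "traffic_percentage": 50},
        {"name": "Variant A", "is_control": False, "traffic_percentage": 50},
    ]
    assert validate_traffic_split(variants)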

Running an Experiment

  1. Ensure all variants are added
  2. Set experiment status to Running
  3. Set start and end dates (optional)
  4. The system will automatically assign variants to new content based on the traffic split

Assigning Variants

When creating content (module runs, campaigns, etc.), the system will:

  1. Check if an active experiment exists
  2. Assign a variant based on the traffic split (weighted random selection)
  3. Track which variant was shown to which entity
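
The exact assignment logic isn't exposed in the dashboard, but a weighted random pick works roughly like this sketch (illustrative only):

    import random

    def assign_variant(variants):
        """Pick one variant at random, weighted by its traffic percentage."""
        weights = [v["traffic_percentage"] for v in variants]
        return random.choices(variants, weights=weights, k=1)[0]

    variants = [
        {"name": "Control",   "traffic_percentage": 50},
        {"name": "Variant A", "traffic_percentage": 50},
    ]
    chosen = assign_variant(variants)
    print(chosen["name"])  # roughly a 50/50 split over many assignments

Over many assignments, each variant receives approximately its configured share of traffic.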

Recording Results

Results are recorded automatically when:

  • Content is published
  • Clicks are tracked
  • Conversions occur
  • Engagement metrics are captured

You can also manually record metrics:

  1. Go to your experiment
  2. Click Record Metric
  3. Enter:
    • Variant ID
    • Metric name
    • Metric value
    • Sample size
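
A manual entry carries the same four fields listed above. Conceptually, each entry just adds to a variant's running totals, as in this hypothetical sketch (not the product's storage format):

    from collections import defaultdict

    # running totals per (variant, metric) pair
    totals = defaultdict(lambda: {"value": 0.0, "sample_size": 0})

    def record_metric(variant_id, metric_name, metric_value, sample_size):
        """Accumulate a manually recorded metric against a variant."""
        entry = totals[(variant_id, metric_name)]
        entry["value"] += metric_value
        entry["sample_size"] += sample_size

    record_metric("control",   "clicks", 35, 1000)
    record_metric("variant_a", "clicks", 42, 1000)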

Viewing Experiment Statistics

  1. Go to your experiment
  2. Click View Statistics
  3. See:
    • Performance by variant
    • Statistical significance (p-value)
    • Confidence intervals
    • Winner determination
    • Recommendations

Understanding Results

Statistical Significance

  • P-value < 0.05: Statistically significant difference
  • P-value ≥ 0.05: No significant difference detected
  • Sample Size: Larger samples provide more reliable results
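
The dashboard computes these figures for you. For intuition only, here is one common way a p-value for a conversion-style metric can be calculated, using a two-proportion z-test (the dashboard's actual test may differ):

    from math import sqrt, erf

    def two_proportion_p_value(successes_a, n_a, successes_b, n_b):
        """Two-sided p-value for the difference between two conversion rates."""
        p_a, p_b = successes_a / n_a, successes_b / n_b
        pooled = (successes_a + successes_b) / (n_a + n_b)
        se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        z = (p_a - p_b) / se
        # two-sided p-value from the standard normal CDF
        return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

    p = two_proportion_p_value(120, 1000, 90, 1000)
    print(f"p-value: {p:.4f}")  # below 0.05 suggests a significant difference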

Recommendations

Based on the statistics, the system provides one of the following recommendations:

  • Winner Found: One variant significantly outperforms others
  • Inconclusive: No significant difference, continue testing
  • Not Enough Data: Need more samples for reliable results
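
The decision rule behind these labels is roughly the following (the thresholds, including the minimum sample size, are illustrative assumptions, not documented values):

    def recommend(p_value, sample_size, min_samples=100):
        """Map experiment statistics to one of the three recommendation labels."""
        if sample_size < min_samples:  # hypothetical minimum-sample threshold
            return "Not Enough Data"
        if p_value < 0.05:
            return "Winner Found"
        return "Inconclusive"

    print(recommend(p_value=0.03, sample_size=2000))  # Winner Found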

Experiment Status

  • Draft - Being configured
  • Running - Actively testing
  • Paused - Temporarily stopped
  • Completed - Finished and analyzed
  • Cancelled - Stopped without completion

Best Practices

  1. Test One Thing: Change one element per experiment for clear results
  2. Equal Traffic Split: Start with 50/50 for two variants
  3. Adequate Sample Size: Wait for sufficient data before drawing conclusions
  4. Set Clear Metrics: Define success metric before starting
  5. Run Long Enough: Allow enough time for statistical significance
  6. Document Learnings: Note what worked and why
