
Advanced Model Calibration Guide - Cassandra

Calibrating your model ensures that your marketing mix modeling (MMM) results are as accurate and actionable as possible. This guide covers key steps to refine your models through validation experiments and incremental improvements.

Step 1: Understanding Model Uncertainty

  1. Confidence Intervals
    • Cassandra provides confidence intervals around each marketing channel’s ROI.
    • Wider confidence intervals indicate higher uncertainty in attribution.
    • Channels with low spend or inconsistent data often have greater variance in results.
  2. Multicollinearity & Its Impact
    • If two or more marketing channels have highly correlated spending patterns, the model may struggle to differentiate their contributions.
    • To address this, aggregate highly correlated campaign types (e.g., combine Meta Prospecting + Meta Retargeting).
  3. Baseline & Seasonality Effects
    • Cassandra estimates a baseline contribution to revenue that occurs without marketing spend.
    • If your baseline is too high, your paid media impact might be underestimated.
    • Seasonality variables help explain natural fluctuations that are not caused by marketing spend.
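The multicollinearity point above can be checked directly before modeling: compute the pairwise correlation of channel spend series and aggregate channels that move together. A minimal sketch, using invented weekly spend figures (the channel names and the 0.8 threshold are illustrative assumptions, not Cassandra defaults):

```python
# Detecting multicollinearity between channel spend series.
# All spend figures below are hypothetical, for illustration only.
import statistics

def pearson_r(x, y):
    """Pearson correlation between two equal-length spend series."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

spend = {
    "meta_prospecting": [100, 120, 140, 160, 180, 200],
    "meta_retargeting": [50, 61, 69, 82, 89, 101],
    "google_search":    [200, 180, 210, 150, 220, 160],
}

r = pearson_r(spend["meta_prospecting"], spend["meta_retargeting"])
if abs(r) > 0.8:  # common rule-of-thumb threshold for "highly correlated"
    print(f"High correlation ({r:.2f}): consider aggregating these channels")
```

Channels flagged this way are candidates for the kind of aggregation described above (e.g., one combined Meta line instead of Prospecting + Retargeting).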

Step 2: Running Incrementality Experiments

What is an Incrementality Test?

Incrementality tests measure the true impact of marketing channels by isolating them from other influencing factors. Cassandra supports two main experiment types:

1. GeoLift Experiments (Geographical Holdouts)

  • Goal: Measure the lift from a marketing channel by turning off spending in selected regions.
  • How to set up:
    • Select a test region (where the campaign is turned off, typically for 2-4 weeks).
    • Select a control region (similar audience behavior but still receiving the campaign).
    • Compare revenue performance between regions.
  • When to use: When testing paid search, social ads, or TV campaigns.
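The comparison in the setup steps above can be sketched in a few lines. This assumes the two regions are of comparable size; all revenue figures are invented for illustration, and a real GeoLift analysis would also account for pre-test differences between regions:

```python
# Sketch of a GeoLift-style comparison: weekly revenue in a test region
# (campaign paused) vs. a control region (campaign still running).
# Numbers are hypothetical and assume regions of comparable size.
test_revenue    = [9200, 9400, 9100, 9300]      # campaign off
control_revenue = [10000, 10200, 10100, 10300]  # campaign running

test_total, control_total = sum(test_revenue), sum(control_revenue)

# Revenue missing in the test region, relative to the control baseline,
# is the lift attributed to the paused channel.
lift_pct = (control_total - test_total) / control_total * 100
print(f"Estimated lift from the paused channel: {lift_pct:.1f}%")
```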

2. Conversion Lift Studies (Platform-Based Tests)

  • Goal: Validate the effectiveness of a campaign using internal platform measurement.
  • Available in: Meta, Google Ads, TikTok, and other ad platforms.
  • How to set up:
    • Work with your ad platform’s marketing science team to run a controlled test.
    • A portion of your audience will not be shown ads, acting as a control group.
    • Measure the difference in conversions between the test and control groups.
  • When to use: When testing brand awareness and upper-funnel campaigns.
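The arithmetic behind a conversion lift study is simple: compare conversion rates between the exposed and held-out groups. A minimal sketch with invented numbers (real studies also report statistical significance, which is omitted here):

```python
# Sketch of conversion-lift math from a platform holdout test.
# Group sizes and conversion counts are illustrative, not real results.
test_group    = {"users": 100_000, "conversions": 1_300}  # saw ads
control_group = {"users": 100_000, "conversions": 1_000}  # held out

test_rate    = test_group["conversions"] / test_group["users"]
control_rate = control_group["conversions"] / control_group["users"]

# Incremental rate = conversions that would not have happened without ads.
incremental = test_rate - control_rate
lift_pct = incremental / control_rate * 100
print(f"Incremental conversion rate: {incremental:.2%} (lift: {lift_pct:.0f}%)")
```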

Step 3: Applying Calibration Results to Cassandra

Once your experiment is complete, use its results to update your Cassandra model.

  1. Access the Calibration Panel
    • Navigate to Models > Refresh Model and select Calibration Settings.
  2. Enter Experiment Results
    • Select the channel tested (e.g., Meta, Google Ads, TV).
    • Input the measured ROI from the experiment (if different from Cassandra’s estimate).
    • Cassandra will adjust its weightings to align with experiment findings.
  3. Refresh the Model
    • Apply the calibration and rerun the model to update forecasts.
    • Compare pre- and post-calibration results to assess the impact.
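One way to picture what calibration does is as a blend of the model's estimate and the experiment's measurement. The weighting below is purely an assumption for illustration, not Cassandra's actual adjustment method:

```python
# Hypothetical sketch of blending a modeled ROI with an experiment ROI.
# The 0.7 weight is an assumption, not how Cassandra actually reweights.
model_roi      = 2.4   # pre-calibration model estimate
experiment_roi = 1.8   # ROI measured in the incrementality test
weight = 0.7           # trust placed in the experiment result

calibrated_roi = weight * experiment_roi + (1 - weight) * model_roi
print(f"Calibrated ROI: {calibrated_roi:.2f}")
```

Comparing `model_roi` against `calibrated_roi` is the pre- vs. post-calibration check described in step 3 above.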

Step 4: Best Practices for Continuous Improvement

  1. Refresh Models Monthly
    • Ensure your model accounts for new data, seasonal trends, and experiment learnings.
  2. Prioritize High-Uncertainty Channels
    • Focus on experiments for channels with high variance in ROI estimates.
    • Adjust multi-touch attribution models if necessary.
  3. Use Budget Allocator for Ongoing Optimization
    • Cassandra’s Budget Allocator suggests optimal spend levels based on calibrated results.
    • Simulate multiple budget allocation scenarios before finalizing spend.
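Simulating budget scenarios, as recommended above, amounts to projecting revenue for each allocation under each channel's response curve and comparing totals. A toy sketch with an invented saturating response curve (the curves, ROIs, and saturation points are assumptions, not Cassandra's internals):

```python
# Toy budget-scenario comparison with diminishing returns, loosely
# mirroring what a budget allocator does. All parameters are invented.
import math

def response(spend, roi, saturation):
    """Projected revenue from a channel with a saturating response curve."""
    return roi * saturation * (1 - math.exp(-spend / saturation))

# channel -> (ROI at low spend, saturation point)
channels = {"meta": (2.0, 50_000), "google": (1.5, 80_000)}

def scenario_revenue(allocation):
    return sum(response(allocation[c], roi, sat)
               for c, (roi, sat) in channels.items())

even   = {"meta": 50_000, "google": 50_000}
skewed = {"meta": 70_000, "google": 30_000}
for name, alloc in [("even", even), ("skewed", skewed)]:
    print(f"{name}: projected revenue {scenario_revenue(alloc):,.0f}")
```

With diminishing returns, piling spend into one channel past its saturation point loses revenue, which is why comparing several scenarios before committing spend is worthwhile.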

Summary & Next Steps

  • Monitor confidence intervals to identify uncertainty.
  • Run incrementality experiments for high-impact channels.
  • Apply calibration results to refine model accuracy.
  • Regularly refresh the model to keep predictions accurate.
  • Use the Budget Allocator to ensure optimized spending.