
Open Source MMM Testing: a New Era of Trust

According to recent research, 64% of B2B marketers say they do not trust their organisation’s marketing measurement when it comes to making decisions… In case you missed the latest episode of ‘MMMorning – your Marketing Science wake-up call’, this article outlines where we stand on the industry’s measurement challenges.

by Will Marks Aug 15, 2025

That’s not just a startling statistic: it’s a reflection of the reality I see every day in conversations with marketers still relying on either last-click attribution or opaque “black box” models. As I said on the show, “Every organisation I talk to claims to be, or want to be, data-driven. But how can that be achieved if they’re not actually trusting their data and their insights?”

Last week, I sat down with Joseph Kang, Model Scale Lead at Mutinex, to dissect the launch of our new Open MMM Validation Framework, the industry’s first open-source validation framework for Marketing Mix Models (MMM), built out in the open on GitHub for anyone to inspect, challenge, and improve.

Why Trust Needs a Test

I opened the conversation by asking Joseph why we need a testing framework for MMM at all. Surely, I said, a model is just either right or wrong? Joseph pushed back on the binary, referencing George Box’s famous dictum that all models are wrong, but some are useful: “It’s not about whether it’s right or wrong in that binary sense,” Joseph said. “It’s more about its ability to explain the world around us and how useful that can be to make decisions.” Even the best theoretical models have their limits, so it makes no sense to treat marketing models as gospel.

With MMM in particular, Joseph pointed out, “There are multiple reasons or multiple explanations of how each channel contributed to that sale, which are equally plausible.” We can never know whether a model is absolutely correct, but we can verify which models are useful, stable, and robust enough to guide real investment.

What Makes a ‘Trustworthy’ Model?

So if absolute correctness is a myth, what distinguishes a useful model from an expendable one? As Joseph explained, “You should be testing for things like accuracy, stability, and robustness of the model.” Accuracy covers the basics: “Can the model predict sales? If it can’t even do this well, then how can we even trust its estimates around channel effectiveness?” Holdout validation (asking whether a model can predict periods it hasn’t seen) is crucial here.
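To make the idea concrete, here is a minimal sketch of a holdout check in Python. The function name, the toy data, and the forecast values are all hypothetical, purely for illustration; this is not the framework’s actual API.

```python
# Illustrative holdout check: hold out the final k periods, fit on the rest,
# and score the model's out-of-sample predictions.

def holdout_mape(actual, predicted):
    """Mean absolute percentage error over the holdout window."""
    assert len(actual) == len(predicted) and len(actual) > 0
    return sum(abs(a - p) / abs(a) for a, p in zip(actual, predicted)) / len(actual)

# Toy example: weekly sales, with the last 4 weeks held out.
sales = [100, 110, 105, 120, 130, 125, 135, 140]
holdout = sales[-4:]

# Stand-in for a fitted model's forecast of the held-out weeks.
forecast = [128, 127, 133, 142]

mape = holdout_mape(holdout, forecast)
print(f"Holdout MAPE: {mape:.1%}")  # low MAPE -> the model can predict unseen periods
```

A model that forecasts well on weeks it never saw has earned at least some of the trust we place in its channel-level estimates; one that can’t has earned none.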

Stability means that new data shouldn’t send your ROI estimates zigzagging: “If a channel ROI jumps from, let’s say, 2x to 5x just because you’ve added that extra month of data, that would be a pretty big red flag,” Joseph said. “You’d want estimates that evolve gradually as new information comes in and not wild swings that make planning impossible.”
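A stability check of this kind can be sketched in a few lines: compare channel ROIs estimated on the original window against those estimated with one extra month, and flag any that moved too far. The threshold, channel names, and ROI figures below are hypothetical, not the framework’s actual values.

```python
# Illustrative stability check: flag channels whose ROI estimate moved by more
# than a chosen relative threshold after a data refresh.

def flag_unstable(roi_before, roi_after, max_rel_change=0.5):
    """Return channels whose ROI changed by more than max_rel_change (here 50%)."""
    flags = []
    for channel in roi_before:
        before, after = roi_before[channel], roi_after[channel]
        if abs(after - before) / abs(before) > max_rel_change:
            flags.append(channel)
    return flags

roi_before = {"search": 2.1, "social": 1.4, "tv": 3.0}
roi_after  = {"search": 2.2, "social": 1.5, "tv": 5.0}  # tv jumped from 3x to 5x

print(flag_unstable(roi_before, roi_after))  # -> ['tv']
```

In this toy refresh, search and social drift gently while TV leaps from 3x to 5x, exactly the kind of swing Joseph calls a red flag.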

Robustness, meanwhile, means the model should hold up when inputs change slightly. Joseph’s litmus test: “If you tweak any of your input variables by even one percent, and suddenly your entire channel mix reshuffles or shifts dramatically, then you could probably say that your model is overfitting to noise rather than finding real relationships.” In short: real models should be resistant to minor perturbations.
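Joseph’s one-percent litmus test can also be sketched as a perturbation probe. The “re-estimated” contributions below are a stand-in stub; a real test would actually refit the model after nudging an input series. All names and numbers are hypothetical.

```python
# Illustrative robustness probe: after perturbing an input by ~1%, does the
# channel ranking (by estimated contribution) survive, or does it reshuffle?

def rank_channels(contributions):
    """Channels ordered by estimated contribution, largest first."""
    return sorted(contributions, key=contributions.get, reverse=True)

base = {"search": 0.40, "social": 0.35, "tv": 0.25}

# Stand-in for contributions re-estimated after a 1% perturbation of spend.
perturbed = {"search": 0.41, "social": 0.34, "tv": 0.25}

stable_ranking = rank_channels(base) == rank_channels(perturbed)
print("ranking preserved:", stable_ranking)  # a reshuffle here would signal overfitting
```

If a one-percent nudge flips the ranking, the model is likely reading signal into noise rather than finding real relationships.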

Why Make This Open Source?

The killer question: why do this in public, rather than keeping testing protocols as protected IP? Joseph says it best: “Our philosophy behind this initiative is to uplift the entire industry standards on model governance and testing. By opening up the framework, we hope to prevent millions of marketing dollars from going to waste.”

“Without objective standards enforced freely and openly, we risk losing trust in MMM technology altogether,” Joseph warned. The hope is a rising-tide-lifts-all-boats situation: “Everyone can come together and properly evaluate and improve their models, so the overall standard of MMM goes up in the industry.”

How You Can Get Involved

We want as many practitioners, vendors, and end-users using this framework as possible. “It’s available on GitHub, so completely open source,” Joseph confirmed. “You can go into GitHub, look at the README; it’ll have details on what it is, what sort of tests it currently covers, how to install it, how to run basic commands, and so on.” There’s full documentation for developers and example notebooks showing how to use the two currently supported frameworks, Meridian and PMC.

Anyone can clone the repo, load their data and model, and run the suite of validation tests. “It’s fairly simple to use,” Joseph added. “And, like I said, it’s not comprehensive. So we hope the industry gets together and we have some good contributions coming through as well.”

Ultimately, openness means the framework can evolve as new ideas and methods come to light. The more rigorous, transparent, and foolproof the testing process, the less likely any of us are to end up marketing on faith alone.

I’ll say it again: every organisation wants to be data-driven. Now, with open-source model validation, they can finally start to trust the results.

Watch the MMMorning! episode here.


[Will Marks is Head of Marketing Science at Mutinex]