
Published June 22nd, 2025 by Assaf Trafikant
Case Study: Leading Marketing Mix Modeling (MMM) Execution and Adoption
I didn’t set out to solve world peace, just to help a fast-growing skincare brand figure out where their marketing money was actually going. NaturaxGlow (a pseudonym for this case study) was doing well on paper: strong sales, decent growth, a loyal customer base. But under the surface, things weren’t so clear. Google said one thing, Meta said another, and finance wasn’t buying either. The attribution tools weren’t broken, just overwhelmed. So we decided to take a step back and build a proper Marketing Mix Model.
Now, this is not one of those case studies where I show a polished graph, throw out a few ROIs, and say “look what we (I) did.” This write-up is not about the final results. It’s about the process: the messy, manual journey. Because getting MMM to actually work inside a company is less about the math and more about the people, the tradeoffs, and knowing where to put your energy.
I wasn’t working alone, of course. We had a solid team: a data scientist who could translate messy reality into clean regression lines, a marketing lead who knew the channel landscape inside out, and a finance partner who asked the right (and often uncomfortable) questions. Our goal wasn’t just to build a model, but to build trust around it, and I was just the conductor.
Project Milestones
We followed a structured, step-by-step approach:
- Align marketing, finance, and leadership around one question
- Get everyone on board early, especially those who’d later challenge the results
- Build the core team
- Gather and prepare data
- Choose the right modeling approach
- Build and validate the model
- Translate results into insights (where I’m at now!)
- Integrate with other measurement tools
- Set up a cadence to refresh the model
And it all started with the most basic (and hardest) question: what exactly are we trying to answer? Let’s talk about that.
Step 1: Define Clear Business Goals
Company Background: NaturaxGlow is a fast-growing DTC skincare brand with annual revenue of $$M, primarily driven by paid digital channels (Meta, TT, Google, YouTube), influencer collaborations, and seasonal TV buys. Over the last 2 years, the brand has expanded internationally, added new SKUs, and faced rising CACs.
What was the trigger for MMM adoption, you’re asking? Well, the CMO, under increasing pressure to justify marketing spend amid a softening ROAS on paid media, sought a more robust, objective method to measure marketing effectiveness. The performance team raised concerns about attribution inconsistencies (GA4 vs. platform data), while the finance team wanted a clearer tie between spend and revenue. Nothing new.
In a cross-functional alignment session, key stakeholders from marketing, finance, BI, and growth defined the following goals for the MMM project:
- Understand the real impact of each channel (Meta, YouTube, Google, TV, influencers, email) on sales – controlling for seasonality, promotions, and macro trends.
- Use MMM insights to inform 2025 budget planning across markets and channels, including decisions on scaling down TV or increasing YouTube investment.
- Create a common language and modeling framework accepted by both teams to reduce conflicts over ROI calculations and budget requests.
- Enable scenario planning (“What if we cut Meta by 20%?”).
- Quantify the lift from product launches, PR, and pricing changes.
We also agreed on a biannual MMM refresh with quarterly checkpoints, focusing on the US market only and based on Q1 2022 – Q4 2024 data.
Step 2: Stakeholder Buy-in
It doesn’t matter how accurate or clever the model is if no one trusts it. You can spend weeks cleaning data, tweaking variables, testing for every little effect, and then watch it all go nowhere because someone in marketing says, “This doesn’t feel right.” That’s why getting buy-in early is not a formality. It is critical.
At NaturaxGlow, I made sure to take this part seriously. I did not want to build something in isolation and then try to convince people to care about it later. It’s too much work, and I had my reputation to keep. So I sat down with everyone who had a stake in the results: marketing leads, the performance team, and finance.
I explained the basics. I am building a model to show what actually drives sales across all channels. It will not solve everything, but it will be transparent, consistent, and useful. That already made it better than most of the tools they were used to.
What helped:
- I asked each team what they needed from the model so they felt heard from the start.
- I shared early drafts based on fake data. People care more when they see work in progress.
- I kept expectations realistic.
By the end of the third week (yes, it took some time), most teams were either on board or at least open-minded.
Step 3: Build the Core Team
I know what you’re thinking. Why do I need a team if I’m the one doing most of the work? Fair question. The truth is, building a proper marketing mix model is part technical project, part politics, and part group therapy. You can’t do it alone, not if you want it to last. So I put together a small, focused group. I had a data scientist who could spot weird patterns and push back when things looked too good to be true. I had someone from performance marketing who understood how budgets really get spent, not just how they’re logged in spreadsheets. And I had a finance partner who asked tough questions and made sure the results would stand up in a quarterly review. That was enough.
We met regularly, but it wasn’t always smooth. People had opinions. But once this core team was in place, everything else moved faster. The decisions were clearer. The data requests made more sense. And when the results came in, no one was surprised. Building the team wasn’t just a project step. It was the structure that made the whole thing work, and the reason I was eager to write this piece.
Step 4: Gather and Prepare the Data
This is the part where reality hits. Everyone says they have the data. And technically, they do. It’s just scattered across five systems, a few forgotten folders, three different calendar types, and at least one spreadsheet that’s been manually updated every Monday since 2021.
At NaturaxGlow, this step took the most time. Not just in hours, but in conversations, version control battles, and slowly realizing that what one team calls “YouTube spend” another calls “upper funnel awareness.” There’s no easy way around it. This is the foundation of the whole model, and if the numbers are off, nothing else matters.
I started by listing everything we needed (a sketch of the target table follows the list):
- Weekly media spend by channel (Meta, Google, TT, YouTube, TV, influencers, email)
- Weekly revenue or conversions
- Promotions, campaign calendars, launches
- Pricing changes and stock issues
- External factors like holidays and seasonality
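To make that target concrete, here is a minimal pandas sketch of the weekly modeling table we were working toward. The column names and types are illustrative placeholders, not NaturaxGlow’s actual schema.

```python
# Sketch of the weekly modeling table we were aiming for.
# Column names and types are illustrative, not the brand's actual schema.
import pandas as pd

columns = {
    "week_start": "datetime64[ns]",   # Monday of each week, one row per week
    "revenue": "float",               # weekly revenue (or conversions)
    "spend_meta": "float",
    "spend_google": "float",
    "spend_tiktok": "float",
    "spend_youtube": "float",
    "spend_tv": "float",
    "spend_influencers": "float",
    "spend_email": "float",
    "promo_flag": "int",              # 1 when a promotion or big campaign ran that week
    "launch_flag": "int",             # product launches
    "price_index": "float",           # pricing changes
    "stock_issue_flag": "int",        # out-of-stock weeks
    "holiday_flag": "int",            # holidays / seasonality controls
}
modeling_table = pd.DataFrame({name: pd.Series(dtype=t) for name, t in columns.items()})
print(modeling_table.dtypes)
```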
To save time (and avoid chasing screenshots), I used ETL tools to pull structured data directly from the ad platforms into our data warehouse. This covered most of the paid channels and even one influencer campaign tracker that had an API. It wasn’t magic, but it meant I was not copying numbers from dashboards or asking the media team to export files every week.
Then came the fun part: pulling everything else together.
Some of the data was a mess. Mixed date formats, duplicate week entries, sheets with two-letter country codes, others with three. Spend numbers that didn’t match finance, typos, and so on. Normally, this is where I’d spend days fixing things.
But this time, I used GPT to speed things up. I fed it raw CSV exports (in parts) and asked it to do the following (a sketch of the prompt appears after the list):
- Spot missing weeks, gaps, or overlaps
- Flag duplicate campaigns or inconsistent naming
- Suggest unified formatting across files
- Fill in missing fields when it could guess based on patterns
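For what it’s worth, those asks translated into a fairly plain prompt. Here is a minimal sketch of that kind of call, assuming the OpenAI Python client; the file name, chunk size, and model name are placeholders, not a recommendation.

```python
# A minimal sketch of the prep-work prompt. The file, chunking, and model name
# are placeholders; treat this as an illustration, not a prescription.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

with open("meta_spend_export.csv") as f:
    chunk = f.read()[:15000]  # send the export in parts to stay within context limits

prompt = (
    "You are checking a weekly marketing spend export.\n"
    "1) List missing weeks, gaps, or overlapping date ranges.\n"
    "2) Flag duplicate campaigns or inconsistent campaign naming.\n"
    "3) Suggest a unified date and country-code format across files.\n"
    "4) Where a missing field is obvious from the pattern, propose a value and mark it as a guess.\n\n"
    f"Data:\n{chunk}"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)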
Side note: AI like GPT can’t replace a proper analyst, but it’s incredibly useful for this kind of prep work. It can:
- Scan large datasets for weird outliers or broken time series
- Help normalize campaign names and match them to known patterns
- Compare multiple sheets and show where data doesn’t align
- Build simple summaries of what’s in each file so you don’t have to dig through manually
Once things were cleaned up, I pulled it all into one master sheet with consistent columns: date, channel, spend, campaign type, and key notes. Then I ran final checks. Do the totals match finance? Are any weeks missing? Are the dates aligned?
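Here is a rough sketch of what those final checks looked like in pandas. The file and column names are illustrative; the point is that each question becomes a short check you can rerun every time the sheet changes.

```python
# Sanity checks on the master sheet. File and column names are illustrative;
# weeks are assumed to start on Monday.
import pandas as pd

master = pd.read_csv("master_sheet.csv", parse_dates=["date"])

# Are any weeks missing?
expected_weeks = pd.date_range(master["date"].min(), master["date"].max(), freq="W-MON")
missing_weeks = expected_weeks.difference(master["date"].unique())
print("missing weeks:", list(missing_weeks))

# Any duplicate channel/week rows?
dupes = master[master.duplicated(subset=["date", "channel"], keep=False)]
print("duplicate rows:", len(dupes))

# Do channel totals roughly match finance's numbers?
finance = pd.read_csv("finance_spend_totals.csv")  # hypothetical export: channel, finance_spend
totals = master.groupby("channel", as_index=False)["spend"].sum().merge(finance, on="channel")
totals["gap_pct"] = (totals["spend"] - totals["finance_spend"]) / totals["finance_spend"] * 100
print(totals[["channel", "gap_pct"]].round(1))
```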
This step is where most projects slow down or quietly fall apart. But with automated data pipelines, AI for cleanup, and just enough manual checking to stay sane, I got the dataset to a place I could actually build on.
Step 5: Choose the Right Model
Like most of the industry, we used an open-source MMM framework that gave us flexibility but still came with guardrails. So, let’s cut to the chase.
Meta’s Robyn or Google’s Lightweight MMM?
Both Meta’s Robyn and Google’s Lightweight MMM are strong open-source tools, but they’re built with different assumptions, languages, and use cases.
Why We Chose Meta Robyn
Here’s why.
First – our data scientist worked in both Python and R, but wanted to get deeper into R. He saw this project as a chance to become fluent enough in R to call it a real skill. So choosing Robyn was practical and also a small win for team growth. One tool, two goals. No complaints from me.
Second – the marketing mix at NaturaxGlow was diverse. We were dealing with Meta, Google, TT, YouTube, email, TV, influencers, and sometimes even OOH. Robyn is built for that kind of complexity. It handles nonlinear effects, saturation, carryover, and interactions across many variables. It doesn’t blink when you throw ten channels at it with different behaviors.
Third – we had enough time and budget to do things properly. Not infinite, but enough to support a few iterations and some modeling flexibility. Robyn isn’t just a plug-and-play tool. You still need to think. But with a flexible timeline, we could afford to go deeper and get better answers.
Fourth – NaturaxGlow had significant spend on traditional media. TV had a real presence in the marketing mix, and Robyn is well-equipped to model that kind of channel. It accounts for lag effects and long-tail impact in a way that is baked into its logic, which saved us a lot of manual tuning.
Fifth – Robyn has a strong community and is being actively developed. Updates roll out often, bugs are fixed quickly, and there’s a healthy backlog of issues and discussions from real users. If something broke or didn’t make sense, there were resources to turn to.
And finally – the hyperparameter grid search. This might sound technical, but it is what makes Robyn really stand out. You give it a range of assumptions to test, and it builds and compares hundreds of model variations automatically. It checks what works best, what’s stable, and what’s consistent. This saves time and gives you more confidence in the results.
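To make “a range of assumptions” concrete, here is a generic Python illustration of the kind of per-channel ranges such a search explores, with one random candidate drawn from them. The names are placeholders, not Robyn’s actual hyperparameter keys; the tool samples and scores hundreds of these combinations for you.

```python
# Illustrative only: per-channel assumption ranges a hyperparameter search explores.
# Names are generic placeholders, not Robyn's actual hyperparameter keys.
import random

hyper_ranges = {
    "meta":    {"carryover_decay": (0.0, 0.3), "saturation_shape": (0.5, 3.0)},
    "youtube": {"carryover_decay": (0.1, 0.5), "saturation_shape": (0.5, 3.0)},
    "tv":      {"carryover_decay": (0.3, 0.8), "saturation_shape": (0.5, 3.0)},
}

def sample_candidate(ranges):
    """Draw one random set of assumptions; the tool builds and scores hundreds of these."""
    return {
        channel: {name: round(random.uniform(lo, hi), 2) for name, (lo, hi) in bounds.items()}
        for channel, bounds in ranges.items()
    }

print(sample_candidate(hyper_ranges))
```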
In the end, I didn’t need perfection. I needed something practical, flexible, transparent, and solid enough to use in front of a room full of stakeholders. Robyn delivered that. It wasn’t push-button easy, but it gave me control and credibility, and that’s exactly what I needed.
Step 6: Build and Validate the Model
Once the data was in place and I picked the tool, I started testing. The first version of the model was just to see if anything made sense. It mostly did not.
Some channels showed extremely high ROI. Others had no effect at all, even though we were sure they had some influence. That is common. Models do not usually work well the first time. You try, adjust, and keep going.
I made a few quick changes:
- Weekly data gave better results than daily, especially for channels like TV or influencers
- I added lag assumptions to reflect that some channels take time to show results (recommended reading by Rhydham Gupta)
- I used saturation settings to avoid giving too much credit to spend that was clearly beyond the point of effectiveness (another great piece by Rajiv Gopinath); see the sketch after this list
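For readers newer to this, here is a minimal Python sketch of the two ideas in the last two bullets: a geometric carryover (adstock) transform for lagged effects and a Hill-style saturation curve for diminishing returns. The numbers are made up; Robyn applies its own versions of these transforms internally.

```python
# Minimal illustration of lag (adstock) and saturation. Numbers are invented;
# this shows the concept, not Robyn's internal implementation.
import numpy as np

def geometric_adstock(spend, decay):
    """Carry a share of each week's effect into the following weeks."""
    out = np.zeros(len(spend))
    carry = 0.0
    for t, x in enumerate(spend):
        carry = x + decay * carry
        out[t] = carry
    return out

def hill_saturation(x, half_saturation, shape):
    """Diminishing returns: response flattens as (adstocked) spend grows."""
    x = np.asarray(x, dtype=float)
    return x**shape / (x**shape + half_saturation**shape)

weekly_tv_spend = np.array([0, 50, 50, 0, 0, 120, 0], dtype=float)  # $k, invented
adstocked = geometric_adstock(weekly_tv_spend, decay=0.5)
response = hill_saturation(adstocked, half_saturation=60, shape=2.0)
print(adstocked.round(1))  # spend keeps "echoing" after the flight ends
print(response.round(2))   # extra spend buys less and less response
```

The decay and shape values here are exactly the kind of assumptions the hyperparameter search from the previous step picks per channel.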
Robyn helped with this. I entered some reasonable ranges for these settings, and it ran a bunch of model variations automatically. That saved a lot of time and reduced guesswork.
Once I had a few versions that looked stable, I started checking if they held up.
I used a few simple tests (a minimal sketch follows the list):
- I left out a few weeks of data to see if the model could predict them accurately
- I compared the results to actual campaign performance to see if the direction matched reality
- I looked for anything odd, like a channel showing huge impact when it barely ran
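As a rough illustration of the first test, here is a simple holdout check in Python on synthetic data: fit on most of the weeks, predict the last few, and look at the error. Robyn reports its own fit and decomposition diagnostics; this is only the generic idea, and the data below is made up.

```python
# A simple holdout check on synthetic weekly data (the real table looks different).
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(7)
weeks = 104
df = pd.DataFrame({
    "meta": rng.uniform(20, 120, weeks),
    "tv": rng.uniform(0, 200, weeks),
    "email": rng.uniform(5, 30, weeks),
})
df["revenue"] = 300 + 2.5 * df["meta"] + 0.8 * df["tv"] + 4.0 * df["email"] + rng.normal(0, 40, weeks)

features, holdout = ["meta", "tv", "email"], 8
train, test = df.iloc[:-holdout], df.iloc[-holdout:]
model = LinearRegression().fit(train[features], train["revenue"])
pred = model.predict(test[features])
mape = np.mean(np.abs((test["revenue"] - pred) / test["revenue"])) * 100
print(f"holdout MAPE: {mape:.1f}%")  # large errors mean the model can't predict weeks it hasn't seen
```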
There were a few surprises. A large influencer campaign that everyone internally talked about had no measurable effect. On the other hand, as expected, email performed well, especially during promo weeks.
Robyn also includes built-in visuals that helped a lot. I could see when each channel peaked, how long its effect lasted, and how it responded to extra spend. These charts made it easier to spot mistakes or patterns I would not have noticed in a spreadsheet.
If you are learning this process, I recommend starting with Robyn’s documentation. It explains things clearly, even if you are not a modeling expert. I also reviewed Google’s Lightweight MMM project. It works differently but helped me understand what decisions matter most and where there is room for flexibility.
To be honest, I did not get it right on the first try. I made assumptions that did not hold up. I missed a few data issues that showed up later. I also overcomplicated one version of the model just to try something out. That is all part of the process. You learn by doing and fixing.
In the end, I had a model that worked. Not flawless, but good enough to support decisions and explain where marketing impact was coming from. And that was the goal.
Step 7: Translate Results into Insights
Once the model was working and the results looked stable, the next step was to turn it into something people could actually use.
The model gives you numbers, but numbers alone do not drive decisions. People do. I grouped channels by performance level. High impact, low impact, and unclear. That gave people a simple way to think about priority.
I showed ROI not just as a number, but in the context of budget. A small channel might have a high ROI, but it cannot scale. A big channel might look less efficient but still drive most of the revenue.
I explained saturation. If we are already spending close to the maximum on a platform, increasing the budget will not help much. That helped kill some of the usual “let’s just spend more” suggestions.
I shared channel curves. Robyn makes this easy. I could show, for each channel, where we were on the curve and what would happen if we spent more or less.
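To show what “where we are on the curve” means in practice, here is a small Python sketch with an invented saturating response curve: plug in the current weekly spend, then 20% less and 20% more, and compare the modeled response. The curve parameters and spend level are placeholders, not NaturaxGlow’s actual numbers.

```python
# Invented response curve for one channel; all parameters are placeholders.
def hill_response(spend, top, half_saturation, shape):
    """Saturating response: modeled weekly response at a given spend level."""
    return top * spend**shape / (spend**shape + half_saturation**shape)

current_spend = 80.0  # weekly spend in $k for a hypothetical channel
for change in (-0.2, 0.0, 0.2):  # the "what if we cut or add 20%?" question
    spend = current_spend * (1 + change)
    resp = hill_response(spend, top=500, half_saturation=60, shape=1.8)
    print(f"{change:+.0%} spend -> {spend:.0f}k spend, ~{resp:.0f}k modeled response")
```

Past the curve’s bend, the extra 20% buys noticeably less than the first 20% cut gives up, which is the whole point of showing the curves.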
Then I started meeting with teams. One by one. Some were open to the insights. Others were polite but clearly not planning to change anything. A few pushed back, which was fine. The model is not the final word, it is just one part of the conversation.
In a company of this size, no one expects instant transformation. Budgets are tied to roadmaps. Teams have internal targets. Some decisions are political. Even if the model shows a weak-performing channel, that does not mean anyone is ready to cut it tomorrow.
Still, the insights gave people something to think about. Even the teams that did not act right away started to ask different questions. Some teams adjusted their plans slightly. Others used the results as backup in meetings. Some ignored them. That is also part of the process. The goal was never to force changes across the board. The goal was to provide a clear picture of what the data shows, so teams could use it in their own way.
I am also preparing to rerun the model with new data. Some things have changed since the first version. New campaigns launched, budgets shifted, and external factors moved. Rerunning the model will help keep the results relevant and give teams a reason to come back to it.
Along the way, I am building lightweight documentation. Not slides or technical manuals. Just short summaries of what we learned, what changed, and what to expect. The goal is not to turn everyone into analysts. It is to give them enough context to make smarter decisions.
I am also adding model insights into regular planning tools. For example, the budget planning template now includes space to enter expected ROI and reference past results. That way, the model becomes part of the process, not an extra step on the side.
This stage is slower. There are no big wins to show off. But this is where the long-term value comes from. If people understand how the model works and how it fits into their daily work, it will last. If not, it will just be another project that fades out once the slides are closed.
And yes, this kind of work is not shiny. It is slow, practical, and often invisible. Most of it happens in shared folders, quiet meetings, and unanswered follow-up emails. But when it is done right, it builds something useful. I wanted to share what the process actually looks like. Not just the technical steps, but the real parts too. The hesitation, the pushback, the internal debates, and the long road from analysis to actual decisions. If you are leading a similar project or thinking about starting one, I hope this gives you a clearer picture of what it takes. The model is just the starting point.