
Issue #159 | You’re Not Making Bad Decisions. You’re Making Emotional Ones.

by Sam Tomlinson
March 15, 2026

Every marketer I know has blown up a strategy that was working. Not because the data justified it, but because something felt wrong. A down week. A “suggestion” from an exec. A pointed question from a client. A competitor launch that triggered a reflexive need to move. In each case, the math didn’t change; it stopped mattering.

This is the most persistent failure mode in the marketing/advertising industry. It isn’t incompetence. It isn’t a lack of data. It’s emotional override — the moment where a feeling becomes a decision and the data gets recruited after the fact to justify it.

If you’ve spent any time around poker, you know this as tilt.

It’s what happens after a bad beat. The probabilities didn’t change. The cards didn’t change. But something clicks in the player, and suddenly they shift from playing a tight, logical, mathematical game to playing with their feelings. They chase losses. Size bets to make everything back in a single hand. Refuse to fold even when the odds are decidedly not in their favor. Before they know it, one bad beat has cascaded into a horrific night (or week…).

I’ve written about tilt before in the context of campaign management, and I keep coming back to it because it remains the single most accurate analogy for what goes awry inside ad accounts.

A campaign underperforms for a week and the entire strategy is scrapped – new campaigns, new keywords, new audiences, new ads, new messaging, new everything – all before there’s enough data to know whether there was actually a problem. We’ve all seen this movie before: the lead quantity/quality from a previously-well-performing campaign drops for a few days and an exec wants to pivot. An account that printed millions is suddenly called into question after a “down” month. A prospect makes a comment to a sales/biz dev person that they didn’t like an ad and suddenly the VP of Sales is demanding data and justifications for every message in-market.

I’ve had each of those 3 things happen multiple times in the last 3 months. Spoiler alert: in every case where the “unhappy” person had their way, the outcome was measurably worse.

The consistent thread is that when emotions enter the room, information stops mattering. The decision is made first. Then the data/spreadsheets/research/decks are backfilled to justify it.

It’s rationalizing, not rational decision-making. It’s the catalyst that turns one bad break – an unfortunate proposal loss, a few leads that simply didn’t go your way, a couple of choppy days on Meta – into a blood-red month.

Thomas Bayes was an 18th-century statistician who developed a deceptively simple framework for decision-making under uncertainty:

Start with what you believe before you see the data. That’s your prior – the product of your experience, your pattern recognition, your working model of how things behave. Then new evidence arrives. You update your belief proportionally to what that evidence tells you. Then, repeat this process every time new information arrives.

That’s it. It sounds obvious. It is almost never what happens.

What Bayesian thinking is not is blowing up your prior belief the moment something goes wrong. One bad week is not proof your strategy is broken. It might be noise, seasonality, a platform hiccup, bad luck (yes, luck does happen in marketing, finance, sales, operations – everywhere) or a macro event that has nothing to do with your business. The Bayesian response to one bad week isn’t panic. It’s a modest, proportional update: this is slightly more evidence that something might be off. Watch it.
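If you want to see what a “modest, proportional update” looks like in numbers, here’s a minimal sketch in Python. The specific figures – a 5% prior that the strategy is actually broken, and rough guesses at how often broken vs. healthy campaigns produce a bad week – are illustrative assumptions, not benchmarks from any real account:

```python
# Minimal Bayes-rule update: how much should one bad week move your belief?
# All numbers below are illustrative assumptions, not real benchmarks.

prior_broken = 0.05         # prior belief the strategy is structurally broken
p_badweek_if_broken = 0.70  # chance of a bad week if it really is broken
p_badweek_if_fine = 0.20    # chance of a bad week from noise/seasonality alone

# Bayes' rule: P(broken | bad week)
evidence = (p_badweek_if_broken * prior_broken
            + p_badweek_if_fine * (1 - prior_broken))
posterior_broken = p_badweek_if_broken * prior_broken / evidence

print(f"Belief the strategy is broken: {prior_broken:.1%} -> {posterior_broken:.1%}")
# ~5% -> ~15.6%: more concerned than before, nowhere near "blow it all up".
```

One bad week roughly triples the concern in this toy setup, yet it still leaves you far more confident the campaign is fine than broken. That’s what “watch it” means.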

Conversely, if you have a client that has historically converted SQLs to signed clients at an 18.5% rate for 5+ years, then suddenly you have a month where SQLs sign at a 44.25% rate, that’s not your sign to break out the champagne and caviar. The more likely conclusion is that (i) your sales team benefited from some lucky breaks and (ii) some reversion to the mean is coming.

And it’s equally not the opposite failure: clinging to a belief in the face of overwhelming evidence because you built the strategy and don’t want to be wrong. 3 bad months across multiple accounts, during a period with no external disruptions, with creative that tested well but isn’t converting? That’s a lot of evidence. Update accordingly.

The discipline is in the middle. Let evidence move your beliefs. Don’t let a single data point detonate them or allow your ego to preserve them past their expiration date.

What makes this hard is that Bayesian thinking is fundamentally probabilistic, and most people are not wired to think in probabilities. Humans crave certainty. Nate Silver (of 538 and election-prediction fame) recently cited a fascinating study of how people interpret odds, which concluded that people “bucket” them into 3 groups: anything below ~30% is “not going to happen,” anything above ~75% is a mortal lock, and everything in the middle is a coin flip (50/50). That mental model is simply wrong – things that have a 30% chance of happening happen all the time, and things that have a >90% chance of happening fail to happen all the time (just ask the Atlanta Falcons after Super Bowl LI).

Put that in the marketing context: people just want to know if something is working or not working. The ambiguity of “probably working, with some noise” is deeply unsatisfying, especially when someone is asking you to justify a budget in a meeting. So we collapse the ambiguity into false certainty, pick a narrative and defend it.

It’s no different than the poker player who chases the bad hand with a worse hand.

Attribution will remain structurally unresolved. The marketing industry’s quiet shift from attribution toward incrementality reframes the question without eliminating the uncertainty. In both cases, you’re making real-time decisions with imperfect data. Every platform still runs its own model using a different definition/window, for its own ends (claiming credit + justifying more budget). None of that has changed or will change.

But this is exactly where Bayesian thinking earns its keep.

It’s also why our forecasting models are built on Bayesian methodology. Rather than producing a categorical forecast and calling it a prediction, they generate probability-weighted ranges that update as new data comes in – exactly the way beliefs should update. The forecast doesn’t tell you what will happen. It tells you what’s most likely given what we know right now, then revises that estimate continuously. That’s the entire point of the forecast: to help us make more rational decisions under uncertainty.
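I won’t reproduce our actual models here, but the underlying mechanic is simple enough to sketch. Assume – purely for illustration – that weekly lead volume is roughly normal with a known amount of week-to-week noise; a conjugate normal-normal update then turns a prior range into a posterior range that shifts and tightens as each new week of data arrives:

```python
import math

def update_normal(prior_mean, prior_sd, obs, obs_sd):
    """Conjugate normal-normal update for a 'true weekly leads' estimate.
    prior_mean/prior_sd: current belief; obs: this week's leads; obs_sd: assumed noise."""
    prior_prec = 1.0 / prior_sd ** 2
    obs_prec = 1.0 / obs_sd ** 2
    post_prec = prior_prec + obs_prec
    post_mean = (prior_mean * prior_prec + obs * obs_prec) / post_prec
    return post_mean, math.sqrt(1.0 / post_prec)

# Illustrative numbers only: a prior of ~50 leads/week (+/- 10) and noisy weekly data.
mean, sd = 50.0, 10.0
for week, leads in enumerate([46, 52, 38, 41], start=1):
    mean, sd = update_normal(mean, sd, leads, obs_sd=12.0)
    lo, hi = mean - 1.28 * sd, mean + 1.28 * sd  # ~80% credible range
    print(f"Week {week}: expect ~{mean:.0f} leads/week (80% range {lo:.0f}-{hi:.0f})")
```

The output is a range, not a number, and each new week moves it proportionally; no single week overwrites everything that came before.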

The mistake most marketers make is treating ambiguity as a reason to either chase a perfect measurement solution that doesn’t exist or abandon rigor entirely and go with gut feel. Bayesian thinking offers a third path: you don’t need perfect data to make good decisions. You need calibrated beliefs – estimates that honestly account for what you know, what you don’t know and the probability that your current thinking is wrong.
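Calibration isn’t just a mindset – it’s checkable. One lightweight way to do it (a sketch, not a formal methodology) is to log your meaningful calls with a stated probability, then periodically compare what you said to what actually happened. The log below is hypothetical, purely for illustration:

```python
from collections import defaultdict

# Each entry: (stated probability the call would pan out, did it actually pan out?)
calls = [(0.9, True), (0.9, True), (0.9, False), (0.7, True), (0.7, False),
         (0.7, True), (0.3, False), (0.3, True), (0.3, False), (0.3, False)]

buckets = defaultdict(list)
for stated_p, happened in calls:
    buckets[stated_p].append(happened)

for stated_p in sorted(buckets):
    outcomes = buckets[stated_p]
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"Said {stated_p:.0%}: happened {hit_rate:.0%} of the time (n={len(outcomes)})")
# Calibrated beliefs mean the "said" and "happened" numbers roughly agree.
```

If your 90% calls only land 60% of the time, that’s tilt showing up in the data.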

The question isn’t “which channel gets credit?” The question is: given everything I know – the trend, the business context, the lead quality, the macro environment, the historical patterns in this account – what’s the most likely explanation for what I’m seeing right now, and what’s the proportional response?

That’s a fundamentally different relationship with ad account data. It doesn’t demand certainty before acting. It demands honesty about uncertainty, along with a disciplined process for updating as new evidence arrives.

It’s also significantly more difficult to execute when you’re on tilt.

A few weeks back I was in a conversation with a colleague about research timelines. We were talking about what it used to take to put together a client-ready brief: the right sources, the right framing, the synthesis across a dozen tabs. All-in, you’re looking at a minimum of 7-10 hours. Maybe more if you were being rigorous, and even more if you didn’t know where to look or how to begin.

That same brief takes minutes now. Not a summary of what the internet thinks — real competitive intelligence, trend analysis, well-constructed recommendations. In the time it used to take to get your bearings, you can now have a working answer.

The compression of research time doesn’t just mean you can move faster. It means the “I didn’t have time to look into it” defense is evaporating. The information asymmetry that used to justify gut-feel decisions – where experience and instinct were the only tools fast enough to keep up – is narrowing to the point of irrelevance.

This creates a genuinely uncomfortable implication for practitioner value. If the information advantage collapses (and make no mistake: it is collapsing) then what separates a senior strategist from a junior one with the right tech setup?

The answer isn’t access to better data. It’s not proprietary research. It’s not even pattern recognition, which can increasingly be replicated or augmented. What separates them is judgment. Specifically, the Bayesian discipline of knowing how much to update, when to hold and when to move. It’s the ability to parse signal from noise under intense pressure or executive-level scrutiny. It’s the willingness to sit in ambiguity long enough to let the evidence accumulate rather than grasping at the first narrative that resolves the discomfort.

That’s not a personality trait. It’s a practiced skill. And it is now the primary differentiator.

The same logic applies to the client relationship. If a CMO can generate the same competitive brief in the same timeframe as the agency, the defensible value of that agency shifts entirely. It moves from “we know things you don’t” to “we process what we know with less emotional distortion than you do.” That’s a harder thing to sell in a capabilities deck. It’s also a far more durable competitive position.

The bottom line is this: if you can get a client-ready brief in minutes and you’re still making reactive, emotionally-driven decisions, that’s no longer a resource constraint. It’s a choice.


Here’s a scenario, adapted from my life. Imagine you’re 6 weeks into running a lead generation campaign for a senior living community. Over the first month, cost per lead sits at $238 and the lead-to-tour rate is 36%. Both metrics are consistent with years of historical benchmarks for this operator in this region. Then the first 2 weeks of March 2026 happen. CPL more than doubles to $520. Lead-to-tour drops to 19%. The client is freaked out. They want to know what you’re DOING to fix it. The instinct – yours, theirs, everyone’s – is to overhaul the strategy, change the creative, reallocate the budget. In short: blow it all up.

That’s the tilt response. Now, run it through a Bayesian lens.

What’s your prior? You have 4 weeks of strong performance in this account plus years of historical data from this operator in this region. Senior living has natural volatility tied to major events, seasonality, family decision-making cycles and local competitive dynamics. Your prior belief, supported by a substantial evidence base, is that this campaign is structurally sound.

How much should two weeks of poor performance move that belief? Consider the sample. In this particular account, 2 weeks represents 80-120 leads. With that sample size, random variation alone can produce swings of this magnitude. Now layer in context: The US stock market is down ~4% over the last 2 weeks. There has been a series of major global events, including a war in the Middle East. Gas prices in this region are up nearly 16% in 2 weeks.

Seniors and older adult children are naturally risk-averse. Any one of those 3 contextual updates could explain the short-term performance dip. All 3 happening simultaneously? Most rational people would conclude the recent performance is most likely an aberration, not a signal.
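If you want to put a number on “how much should this move my belief,” a quick conjugate update does it. The prior weight below (treating the historical 36% lead-to-tour rate as roughly 1,000 leads’ worth of evidence) and the 100-lead fortnight at 19% are assumptions chosen to match the shape of the scenario, not the client’s actual figures:

```python
# Beta-binomial update of the lead-to-tour rate under a strong historical prior.
# Prior: years of data supporting ~36%, weighted here as ~1,000 leads (assumption).
prior_tours, prior_no_tours = 360, 640

# The bad fortnight: ~100 leads at a 19% lead-to-tour rate (assumption per scenario).
new_leads, new_tours = 100, 19

post_tours = prior_tours + new_tours
post_no_tours = prior_no_tours + (new_leads - new_tours)
posterior_rate = post_tours / (post_tours + post_no_tours)

print(f"Prior lead-to-tour belief:     {prior_tours / (prior_tours + prior_no_tours):.1%}")
print(f"Posterior after the bad weeks: {posterior_rate:.1%}")
# ~36.0% -> ~34.5%: a real but modest update, not grounds to blow up the account.
```

The belief moves from 36% to roughly 34.5% – a real update, but nowhere near grounds for a rebuild. And that’s before discounting the new data for the external shocks above.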

The proportional update is something like: “My confidence that this campaign is working as designed remains exceptionally high. I want to see data from the next 2 weeks before making structural changes. The most likely explanation for the performance decline is external. We have years of data that supports a negative correlation between market performance and lead conversion rates. We have dozens of instances showing that significant macro shocks (including conflicts abroad) have a short-term chilling effect on people’s willingness to make a change. However, this is a sufficiently large impact to warrant a deeper investigation into the campaign as well, so we can make an informed decision in 2 weeks’ time.”

Compare that to the tilt response: blow up creative, reallocate the budget and restart the learning phase (incurring real cost and real data loss) based on a minuscule sample size.

The difference between these two responses isn’t intelligence or experience. It’s discipline and process. One is a measured, clear-eyed response; the other is a visceral, emotional reaction.

Bayesian thinking has a cost, too: It could be that a competitor launched a new offer that’s clearly superior to yours. It could be that there’s a scathing review of your community circulating around a forum or community. It could be that your campaigns actually are broken, and there is a real issue that needs to be addressed. In any of those cases, waiting 2 weeks has a real cost. This is why the research point above matters: if you can compress the time it takes to rule out competitive shifts, review crises, or genuine campaign failures, you reduce the cost of holding your position.

The faster you can investigate, the less expensive patience becomes. You can still be wrong. But you’ll be wrong less often, and you’ll catch it faster.

The solution is to ask the following 3 questions before making any significant decision:

  1. What did I believe before this data point arrived?
  2. How much should this evidence actually move that belief, given its source, its sample size, and the broader context?
  3. Am I responding appropriately to the evidence I have, or am I on tilt?

The third one is the most difficult. It requires the kind of honesty that’s genuinely uncomfortable in a professional setting: admitting that your read on a situation might be more emotional than analytical. But that’s why it’s the question that separates the marketers who consistently produce positive returns across accounts over time from the ones who are forever starting over.

You won’t get it right every time. I certainly don’t.

The habit of separating your emotional reaction from your evidential analysis will produce better decisions over a long enough time horizon, simply because you’ve built a system that keeps your feelings from running the table.

The math doesn’t change. Don’t let the table change you.

Cheers,

Sam
