Issue #160 | Automation Needs Guardrails, Not Micromanagement

Before we talk about how to work with automation, we need to talk about something most marketers refuse to acknowledge: ad platforms are not on your side.
That’s not a conspiracy theory. It’s a business model observation. Meta, Google, Amazon, AppLovin, Snap, LinkedIn – they’re all in the advertising business. Their revenue goes up when your spend goes up. Their incentive is to create systems that maximize the budget captured in their ecosystem and minimize the surplus value generated by that investment for you.
The latter objective is the more controversial claim, but it’s no less true: every ad platform views surplus value – the difference between your maximum tolerable acquisition cost and your actual acquisition cost – as margin left on the table. If you would pay $100 for a new customer, but the platform delivered those customers for $80, the ad platform views that as a $20 loss.
The platform’s solution? Build systems that maximize their ability to extract value from advertisers while making the advertiser feel good about it.
Conference season is in full swing, and this tension sat at the center of nearly every conversation I had – the striking dichotomy between those who believe AI-driven automation is so good that human involvement is mostly unnecessary friction, and those who treat it the way they treated NFTs. Both camps are wrong, and they’re wrong for the same reason: they’re reacting to the technology instead of examining the incentive structure underneath it.
I’ve had a saying for years: defaults are the devil.
Every default setting in every ad platform was chosen by someone for a specific reason. That reason is rarely “because this is the optimal setting for your specific business objectives.” More often, it’s “because this is the setting that generates the most activity, the most spend and/or the most reportable platform-level metrics.”
In Google Ads, broad match is on by default. On Meta, audience expansion is on by default. Automatic URL expansion in Performance Max is on by default. Conversion actions come grouped in ways that benefit the platform’s optimization model, not your business. “Guided” campaign setup appears simple and straightforward, but produces campaigns optimizing toward goals (traffic, GMB interactions, etc.) that are unlikely to be relevant to your business. Official-looking “Optimization Score” or “Opportunity Score” metrics display prominently in-platform AND are cited by Google and Meta reps in sales emails – but if you actually read the documentation, the scores aren’t a measure of “optimization” or “opportunity” at all, but rather of “…how closely campaigns align with Meta’s best practices…” In other words: they have precisely ZERO to do with account performance.
Every one of those defaults is a decision someone at the platform made.
This is the context in which Andromeda arrived. When Meta quietly rolled out GEM, Lattice and Andromeda over the past year, it fundamentally changed how the platform thinks about creative distinctness – and it did so in the dark of night. There was no grand roll-out. It just happened. Suddenly, two ads with the same visual aesthetic but different copy were treated as the same ad. 30 variants of a winning creative – a strategy that worked reliably for years – collapsed. The rules changed. The defaults changed. The advertisers who trusted the system blindly got burned. The ones who were paying attention adapted and likely came out better (I covered this in depth in Issue #137 if you want the full breakdown).
And why did Meta do that? They wanted more creative diversity to feed their AI models. The easiest way to get it? Force advertisers to fork over billions of net-new assets by re-engineering the system to solve for their objective: build an ad model that maximizes throughput while minimizing surplus value generation.
That’s the platform incentive problem in practice: the system will always evolve in the direction of the platform’s interests. Your job is to understand those interests, work with them where they align with yours, and put guardrails in place where they don’t.
The Two Failure Modes
So we have two ways to get this wrong, and most marketers cycle between them.
The first is over-trust. This is the “set it and forget it” camp – the belief that automation is so good now that human involvement is mostly unnecessary friction. Performance Max campaigns running with URL expansion fully open, serving ads containing who-knows-what that direct people to pages most of the team forgot existed. Advantage+ Shopping with no audience signals and no creative diversity, optimizing toward whatever Meta decides is a conversion. Smart bidding with no targets, no caps, no exclusions – just a machine with a blank check and instructions to “maximize conversions.”
I’ve seen Performance Max campaigns massively outspend dedicated brand campaigns on branded queries, cannibalizing traffic that would have converted anyway at a fraction of the cost. I’ve seen Advantage+ campaigns serve ads to audiences wildly outside the intended target – campaigns for men’s health products served to 18-24 year old women – despite crystal-clear creative (it featured guys in their 40s and 50s) and ample descriptive copy in the primary text. And if all those contextual signals weren’t a strong enough hint to Meta that the early-20s female audience wasn’t right, there was the performance signal: not a single one of them made a purchase. But Meta kept spending like a drunken sailor on that cohort. Why? Because the performance of the same creative with the intended audience was so strong that the campaign-level averages were exceptional. No one bothered to dig into the details on a campaign that was – by all accounts – performing well.
This is what I mean by designing systems to minimize surplus value while making the advertiser feel good about it. This brand was happy with their Meta performance. Of course, they likely would have been happier if they hadn’t incinerated nearly $10,000 in a month on an audience segment with zero reported conversions.
None of this required anyone to do anything wrong. It just required someone to stop paying attention.
The second failure mode is over-control. One account I reviewed nearly 18 months ago was the poster child for it: in a single month, there were over 5,100 manual bid changes in the account. That doesn’t count the thousands of keyword changes (activate, pause, remove, negative). That’s hundreds of changes every single day – on an account spending $200,000 a month. Put simply: this is the marketing equivalent of letting a toddler flip every switch in the cockpit of a 747 mid-flight, then trying to deduce which switch does what. This failure mode is born of the belief that the machine can’t be trusted, that constant intervention is a form of diligence, that the more levers you pull, the more in control you are. It’s the same instinct that drove media buying before machine learning, when humans actually did need to manage placements manually, adjust bids daily and stay hands-on to keep things moving.
Those days are gone. The honest reality is that no human – no matter how experienced, how data-savvy, how mathematically gifted, how plugged-in to the account – is going to out-trade a machine that has access to trillions of signals, processes bids in milliseconds and sees patterns across billions of interactions. Less data plus less visibility plus less compute adds up to a staggeringly poor ability to predict the expected value of a given impression. Constant intervention doesn’t improve on the machine. It just gets in the way.
Both failure modes come from the same root error: treating automation as either an employee you can fully delegate to, or an adversary you need to keep in check. It’s neither. It’s a very powerful system with its own objectives, its own incentives and its own limitations. Your job is to direct it, not defer to it or fight it.
Cyborg Media Buying
The framework that actually works is what I’ve termed Cyborg Media Buying: use AI for scale, human-defined logic for guardrails. We aren’t fighting the machines. We aren’t surrendering to them. We’re building better systems to direct them. You’re building guardrails for a system that is simultaneously your best execution tool and a counterparty with misaligned incentives.
Automation is extraordinarily good at one thing: optimizing toward a signal at scale. It can process more data, evaluate more interactions and make better real-time bidding decisions than any human ever could. That’s genuinely remarkable. Fighting it is remarkably foolish.
But automation is also indifferent to things that matter to your business. It doesn’t know that your brand shouldn’t appear next to certain content. It doesn’t know that a particular audience segment has terrible LTV even if they convert cheaply. It doesn’t know that your highest-margin product is being cannibalized by a lower-priority campaign. It doesn’t know that your feed has a sync delay on Sundays when your 3PL manually updates inventory. It doesn’t know (unless you tell it) that Service #1 and Service #2 — while the same price — have wildly divergent costs to your business. It doesn’t care about any of that. You do.
Guardrails are how you communicate what you care about to a system that can’t infer it on its own. The specifics aren’t complicated, but they need to be deliberate — and they fall into three categories: what you tell the machine before it runs, the data you feed it while it’s running, and the creative you give it to work with.
And then there’s data infrastructure – which is where the real advantage lives now. As platforms strip away traditional targeting levers (and make no mistake: they will keep stripping them), the optimization layer shifts from what you tell the platform to target to what data you give it to work with. For Google + Meta, this looks like 0P/1P data: customer lists/segments, restated conversion values that account for returns and repeat purchases, offline conversion imports, and value-based bidding rules that reflect actual margin (vs. revenue). When you give automation data it can’t get anywhere else, it finds value it wouldn’t find otherwise. That’s the edge that’s available to you. It has nothing to do with how many bid changes you make.
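To make the margin-vs-revenue point concrete, here’s a minimal sketch of what restating conversion values looks like before passing them back to a platform. The SKU names, COGS figures and return rates are all hypothetical, and the actual upload mechanism (offline conversion import, Conversions API, etc.) is out of scope – this only shows the value math:

```python
# Hypothetical sketch: restating a revenue-based conversion value as expected
# margin before feeding it back to the ad platform. SKUs, COGS and return
# rates below are illustrative assumptions, not real data.

COST_OF_GOODS = {"sku_a": 22.00, "sku_b": 4.50}   # hypothetical per-SKU COGS
RETURN_RATE = {"sku_a": 0.02, "sku_b": 0.30}      # hypothetical return rates

def margin_conversion_value(sku: str, revenue: float) -> float:
    """Convert a revenue-based conversion value into expected margin."""
    cogs = COST_OF_GOODS[sku]
    keep_rate = 1.0 - RETURN_RATE[sku]
    # Expected margin = (revenue - COGS), discounted by the odds the sale sticks
    return round((revenue - cogs) * keep_rate, 2)

# Two $50 sales look identical to revenue-based bidding; margin-based values
# tell the machine which one is actually worth chasing.
value_a = margin_conversion_value("sku_a", 50.00)
value_b = margin_conversion_value("sku_b", 50.00)
```

The point of the sketch: the platform can’t see COGS or return rates on its own, so a revenue-optimizing model treats both sales as equal. Restated values are the differentiated data that changes what the machine chases.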
On the creative side, Andromeda/GEM/Lattice changed the game in a way that most advertisers still haven’t fully absorbed. Visual similarity is now the primary axis on which Meta evaluates creative distinctness. That means your 30 “variants” — same layout, same color palette, same talent, different headline — aren’t 30 ads. They’re 1 ad with 30 copies of the same entry ticket to the auction.
We saw this firsthand with a client running 17 ad variants that shared the same visual structure. Andromeda compressed them into what was effectively a single creative entity. Reach and CPMr (cost per 1,000 accounts reached) plateaued while frequency climbed. CPMs crept up — not because the audience was exhausted, but because the system thought there was only one ad to show them.
Our solution was to reduce the number of variants from 17 to 5 while rebuilding each around a genuinely distinct visual concept: different talent, different settings, different colors, different emotional registers. The result? CPMr fell by 59% and net-new reach 3x’d — all with the same budget + audience. The only thing that changed was giving the system distinct creative concepts that it could deploy. The guardrail response to Andromeda isn’t to fight the system; it’s to stop feeding it 30 versions of the same input and expecting 30 different outputs. Persona-based ad sets paired with visually distinct creative built around different emotional angles and audience contexts. Not more of the same. More that’s actually different. (I covered the full Andromeda breakdown in Issue #137 if you want the mechanical detail.)
Finally, there are the out-of-platform guardrails, typically configured through scripts and/or API tools. We use a ton of them in our accounts. The most common are stop-loss rules (turn off campaigns via the API if spend skyrockets and/or performance tanks), anomaly detection (rules that trigger when campaign or ad set performance meaningfully diverges from historical trends), and impression/fraud triggers (rules that exclude zips/metros when impression storms – massive spikes in impressions for high-cost queries – hit Google Search). Those baseline rules pair extraordinarily well with growth-oriented rules, such as ones that update targets/budgets based on inventory, schedules, weather, etc., and/or rules that automatically add or remove keywords/services/audiences based on the business’s capacity to serve them. These are – candidly – one of the reasons our clients have avoided substantial losses during Meta or Google bugs.
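The decision logic behind a stop-loss or anomaly rule can be sketched in a few lines. This is not Google’s or Meta’s API – the thresholds (2x budget, 3x target CPA, a 3-sigma divergence) are illustrative assumptions, and a real version would pull metrics from and issue pauses through the platform’s API:

```python
# Hypothetical guardrail logic. Thresholds are placeholders to be tuned per
# account; the real version would wrap platform API calls around these checks.

from statistics import mean, stdev

def should_stop_loss(spend_today: float, cpa_today: float,
                     daily_budget: float, target_cpa: float) -> bool:
    """Trip if spend blows far past budget or CPA blows far past target."""
    return spend_today > 2.0 * daily_budget or cpa_today > 3.0 * target_cpa

def is_anomalous(today_value: float, history: list, z_threshold: float = 3.0) -> bool:
    """Flag a metric that diverges sharply from its recent history."""
    if len(history) < 7:
        return False  # not enough history to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today_value != mu  # any movement off a flat baseline is notable
    return abs(today_value - mu) / sigma > z_threshold
```

The value isn’t in the arithmetic – it’s that the check runs every hour without anyone needing to remember to look.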
What Guardrails Actually Look Like
Guardrails aren’t a philosophy. They’re a set of decisions you make before you allow the machine to take the proverbial wheel, so you don’t have to make them reactively while it’s running.
The pre-launch decisions are structural: efficiency targets that tell the machine what success looks like in terms it can optimize toward. Explicit exclusions – placements, audiences, queries, brands – that eliminate the garbage the machine won’t recognize as garbage. Conversion actions that reflect the business outcome you actually want, not whatever intermediate event is easiest to track. Data passed back to the platform that’s accurate, timely and differentiated enough to give the machine incremental insight it can’t get anywhere else.
The in-flight decisions are diagnostic: anomaly detection that identifies a CPA spike when it starts, not when campaign-level metrics finally deteriorate enough to notice. Budget utilization monitoring that identifies when a campaign is hitting its cap every afternoon and leaving profitable spend on the table every evening. Pixel, feed and site health checks running daily, not because something is wrong, but because these aren’t failure states you can predict. They’re the inevitable result of running ads at scale. The only question is whether you catch them in hours or weeks.
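The budget-utilization check above is simple enough to sketch. Everything here is a hypothetical illustration – the 90% threshold and hourly buckets are assumptions, and a real monitor would read spend from the platform’s reporting API:

```python
# Hypothetical sketch of budget-utilization monitoring: detect a campaign
# that exhausts its daily cap early in the day. The 90% threshold and hourly
# spend buckets are illustrative assumptions.

def budget_exhausted_hour(hourly_spend: list, daily_budget: float,
                          threshold: float = 0.9):
    """Return the hour (0-23) at which cumulative spend crossed the cap,
    or None if the campaign paced through the full day."""
    total = 0.0
    for hour, spend in enumerate(hourly_spend):
        total += spend
        if total >= threshold * daily_budget:
            return hour
    return None

# A campaign capped at $240/day that spends ~$20/hour goes dark mid-morning;
# every profitable impression after that hour is spend left on the table.
```

A rule like this doesn’t pull any levers by itself – it just surfaces the pattern (cap hit every afternoon, dark every evening) so a human can decide whether the cap is the constraint worth raising.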
None of this is lever-pulling. None of it interrupts the machine. All of it directs the machine toward outcomes it can’t infer on its own, because it doesn’t know your margins, your inventory status, your CEO’s tolerance for “off-brand” placements, or which of two equally priced services costs you 3x more to deliver.
The job today looks less like traditional media buying and more like systems design. Developing compelling offers. Creating alignment between message, ad, audience and landing page. Setting the right targets. Building the right creative. Passing the right data. Designing the right account structure. Catching the subtle failures before they become expensive ones.
The marketers who figure that out first are going to have a structural advantage over everyone who’s still fighting the machine — or worse, still trusting it blindly.
Cheers,
Sam

