This post warns against optimizing online advertising campaigns when the data underpinning the optimization is not solid enough – a situation we encounter regularly, both at Seperia and elsewhere.
As performance-based digital marketers, we live and die by the numbers. Specifically, when running conversion-focused campaigns on channels such as Google AdWords or Facebook, we optimize our campaigns according to hard data: our CPA targets and results. This means that for keywords, ads, placements, audiences and combinations thereof that yield better-than-average CPA results – we try to expand and push. And for those that yield poor results – we modify, lower, and pause.
For example, if our CPA for the overall account is $50, and for a particular keyword it is $30 – we’ll try to expand the reach of this keyword as much as possible, to generate as many conversions as we can from it. This will lower our overall CPA. Conversely, if a particular keyword or placement is generating conversions at a CPA of $80, we’ll try to modify and optimize the various attributes that affect its performance (for example the ad creative, landing page, bid, hours of day, match type, and various other parameters), or we will pause it.
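The decision rule above can be sketched in a few lines of Python. This is a simplified illustration, not a real bidding tool – the function names, numbers, and thresholds are all hypothetical:

```python
# Hypothetical keyword-level CPA check. Names and thresholds are
# illustrative only, not taken from any real account.

ACCOUNT_CPA_TARGET = 50.0  # overall account CPA target, in dollars

def cpa(cost, conversions):
    """Cost per acquisition; None when there are no conversions yet."""
    return cost / conversions if conversions else None

def recommend(cost, conversions, target=ACCOUNT_CPA_TARGET):
    """Return a coarse action for a keyword based on its CPA vs. the target."""
    keyword_cpa = cpa(cost, conversions)
    if keyword_cpa is None:
        return "wait"          # no conversions yet -- not enough data
    if keyword_cpa < target:
        return "expand"        # e.g. the $30 keyword: raise bids and budget
    return "modify or pause"   # e.g. the $80 keyword: tweak creative, bids, etc.

print(recommend(cost=300.0, conversions=10))  # CPA $30 -> "expand"
print(recommend(cost=400.0, conversions=5))   # CPA $80 -> "modify or pause"
```

Note the "wait" branch: as the rest of this post argues, the hardest part is deciding when you have enough data to act at all.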
This basic rationale generally holds true, but… people all too often get it wrong and hurt their campaigns by making too many changes too hastily, just at the very moment when they ought to be acting moderately and carefully nurturing their campaigns. At this crucial juncture in the development of an account, it’s essential to allow campaigns to gain momentum and grow naturally, while monitoring, assisting and tweaking them delicately. Optimizing at a given moment might seem like a valid choice, yet in certain scenarios it can actually produce a poorer overall outcome. Why?
Trying to optimize based on non-statistically-representative data is both a temptation and a grave mistake. Humans have a tendency to rush to conclusions, especially in the fast-paced environment of digital marketing. But the fact remains that the size and quality of the data sample are critical to the quality of your decision making. Sometimes the data you’re looking at is actually the result of a market fluctuation, a random cause, market seasonality, etc. Consequently, early indications are not always a good prediction of the performance you can expect going forward. For example, say you have two ads in your adgroup (or Facebook adset) and one of them seems to be outperforming the other. But how do you know if the result you’re seeing is statistically significant? Without diving into deep mathematics here, you can use this simple A/B split significance calculator tool. The lower the p-value, the less likely it is that the difference you’re seeing is due to chance, giving you cause to indeed pause the losing ad and move 100% of your impressions to the ad with the better results. (For the next step you might want to test another ad against this one to improve results even further.) For more in-depth information about the math behind A/B tests you can visit the relevant article on visualwebsiteoptimizer.com.
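Under the hood, calculators like the one linked above typically run something close to a two-proportion z-test. Here is a minimal standard-library sketch of that test; the click and conversion numbers are made up for illustration:

```python
# Two-proportion z-test for comparing two ads' conversion rates,
# using only the Python standard library. Sample numbers are invented.
from math import sqrt, erf

def ab_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)           # rate under "no difference"
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # two-sided p-value from the standard normal CDF
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Ad A: 120 clicks, 12 conversions; Ad B: 118 clicks, 4 conversions
p = ab_p_value(12, 120, 4, 118)
print(f"p-value: {p:.3f}")  # below 0.05 -> the gap is unlikely to be pure chance
```

Had Ad A shown the same lead after only a dozen clicks per ad, the p-value would have been far higher – which is exactly why pausing the "loser" on day one is premature.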
With automated bidding systems such as AdWords/FB, momentum is of the essence; unleashing your campaigns at the right moment is a critical driving force for growth. These bidding systems determine each advertiser’s impression share and ad position based on various factors such as ad- and account-level CTR, quality scoring, bids, and the account’s overall historical record. When the system starts giving you more impressions and higher positions, you should seize the moment and ramp up your budgets, bids and coverage – not restrict them and hold back. Clearly, this too should be done with consideration and on a case-by-case basis.
Depending on the industry and the product, cross-ad and cross-channel effects can be considerable. Sometimes an ad does not directly bring conversions, or even clicks, yet it still helps by raising awareness and assisting the conversion funnel. This can be measured, to a certain extent, by using conversion attribution analysis. By applying naive optimization we might kill certain campaigns and ad units that seem to be underperforming, only to realize later on that they played a major part in supporting profitable campaigns. To avoid this pitfall, it’s important to apply not only attribution analysis, but also a little bit of common sense and controlled trial and error (as attribution alone is still limited by cross-device factors and other cookie limitations). Remember that oftentimes display campaigns are not CPA-positive in their own right, yet help other marketing activities to an extent that should be measured and credited.
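To make the attribution point concrete, here is a toy comparison of last-click versus linear attribution over a handful of invented conversion paths. The channels and paths are purely hypothetical, but they show how a display channel that "never converts" under last-click can still carry real assist value:

```python
# Illustrative comparison of two attribution models over made-up paths.
from collections import defaultdict

# Each path is the ordered list of channels a converting user touched.
paths = [
    ["display", "search"],
    ["display", "social", "search"],
    ["search"],
    ["display", "search"],
]

def last_click(paths):
    credit = defaultdict(float)
    for path in paths:
        credit[path[-1]] += 1.0               # all credit to the final touch
    return dict(credit)

def linear(paths):
    credit = defaultdict(float)
    for path in paths:
        for channel in path:
            credit[channel] += 1.0 / len(path)  # split credit evenly
    return dict(credit)

print(last_click(paths))  # display gets zero conversions
print(linear(paths))      # display gets meaningful assist credit
```

Under last-click, display earns nothing and a naive optimizer would pause it; under the linear model it is credited with over a full conversion's worth of assists. Neither model is "the truth" – which is why the post recommends pairing attribution with common sense and controlled trial and error.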
Need more information? Contact us