A Game of Double Standards: CROs vs. the World

Conversion rate optimization as a field occupies a rather unique position among digital marketing professions such as advertising (PPC, SEO, SEM), UX design (designers, front-end coders), PR, and marketing. One significant difference, which I recently discussed with Tim Stewart, a well-known Digital Elite Day speaker, lies in the incredibly high standards to which the work and advice provided by CROs are held. In this article I will try to trace the roots of this double standard and examine whether it makes business sense to keep applying it.

History of A/B Testing in Different Fields

Historically, the discipline of conversion rate optimization was the first online business practice to gain the ability to perform quantitative experiments through which to establish a causal link between an action and its business effect, and to obtain a decent estimate of the size of that effect. Google Website Optimizer became available for free in April 2007, and it was not even the first such software: certain companies had made A/B testing fairly accessible as early as 2004.

For comparison, experiments in Google AdWords (now Google Ads) only became widely available to advertisers more than a decade after those early A/B testing tools, in February 2016, and Facebook Ad Experiments only launched in late 2018. True experiments in SEO were never possible, and even pseudo-experiments became really hard after the very early days of Google, probably since 2006-2007 (author's recollection), when you could no longer push more than two results from the same domain into the results for most queries. True controlled experiments in marketing or PR will never be a thing.

Since most of us want to know whether what we do really works and how well it works, both for our own personal gratification and to be able to prove the results (or lack thereof) of the interventions we propose, conversion rate optimization professionals mostly jumped on the A/B testing train, eager to apply scientific rigor to their investigations and to back up their claims in the best possible way. Other online business professionals simply did not have that opportunity: they lacked the tools, and some lacked even the theoretical possibility of performing experiments in their line of work.

The Double Standards

The above inevitably led to the double standard I want to discuss in this article. Even though a CRO professional and an advertising professional may be equally experienced and skilled in their work, when the former makes a claim or suggests a change to a website's design, content, or user flow, they are met with skepticism and with demands not only to prove that what they propose will work, but even to estimate how well it is expected to perform. None of that is usually required of the latter professional when they propose, or even outright implement, a change in ad copy, ad targeting, choice of landing page, and so on. PR and marketing professionals are even less concerned with such demands since, as I have already mentioned, they do not even have the ability to run experiments.

Such a double standard rightly irritates many CROs, as it puts them on uneven footing in meetings and when it is time to distribute yearly bonuses.

For example, an advertiser can suggest a change in ad management and, if it passes some basic soundness checks, it will get approved, while a CRO will also need to devise and run an experiment taking anywhere from a couple of weeks to a couple of months before their suggestion gets implemented, even if it was similarly informed by data and experience. An advertiser can get a fat check even if they cannot really prove they deserve credit for the improved performance of the ad campaigns (it could have been due to landing page improvements, better business practices, better stock, competitor mishaps, etc.), while a CRO will only get one if they can actually establish a causal link between their efforts and business results.

Does this make business sense?

From a business standpoint, the goal of performing an A/B test is two-fold: to manage the risk of making a bad decision (implementing something which hurts sales, for example) and to estimate the effect of a given potential change. Estimation is useful not only for calculating the added value of a CRO company or professional, but also for informing the direction of future work.

With this in mind, let us examine the business risks and estimation needs related to a change in ad strategy and a change to the design of a signup page.

First, are there business risks involved in both decisions? The answer is certainly yes. Is it worth A/B testing the proposed changes? That will depend on the type and magnitude of the changes: if the expected risk is very small, it may not justify the cost of testing.

Second, do we want to estimate the effect of the proposed change? The answer would be yes in both cases for reasons outlined above.

Why, then, do we usually demand a rigorous A/B test from the CRO specialist, but not from the ad manager?

One might argue that many changes in ad management are small and unlikely to have a big impact; however, the same goes for many CRO recommendations: implement a smarter select drop-down, change the wording of a button, add a CTA, and so on. If the perceived risk is estimated to be negligible for one and the other, then both should be spared the effort of A/B testing their work. Similarly, for most e-commerce businesses the risk of a 5% drop in purchase rate is likely about as bad as the risk of a 5% drop in advertising performance, and so the A/B testing standard of proof should be applied to both (a back-of-the-envelope sketch of that risk arithmetic follows below).
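To illustrate the kind of risk-reward arithmetic this implies, here is a hypothetical back-of-the-envelope sketch. All figures are invented for illustration (they are not from any real client), and the point is that the same rule applies whether the change comes from the CRO or from the ad team:

```python
# A hypothetical check for whether a change is worth A/B testing, applied
# identically to a CRO tweak and to an ad-management tweak. All numbers are
# made up for illustration purposes.

monthly_revenue = 200_000        # revenue flowing through the affected funnel
prob_change_hurts = 0.30         # subjective chance the change is negative
likely_drop_if_bad = 0.05        # e.g. a 5% drop in purchase rate or ad performance
months_until_noticed = 2         # how long a bad change might go uncorrected

expected_loss_without_test = (
    monthly_revenue * prob_change_hurts * likely_drop_if_bad * months_until_noticed
)
cost_of_testing = 4_000          # tool fees, setup time, delayed rollout, etc.

print(f"Expected loss if shipped untested: ~${expected_loss_without_test:,.0f}")
print(f"Cost of running a test:            ~${cost_of_testing:,.0f}")
print("Test it" if expected_loss_without_test > cost_of_testing else "Ship it")
```

The exact numbers matter far less than the fact that the calculation does not care which department proposed the change.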

Standards based on the need for risk management and estimation

If the above practice of applying different standards to different specialists is discontinued, it will foster a better business environment and is likely to promote better results. If an ad manager claims the changes they are making are minor and do not require testing, then they will not be able to take credit for any effects beyond those of changes which were properly tested and estimated.

At the same time, the privilege of not testing certain interventions should be extended to CRO specialists: not every single change to a website should be tested, and even if everything is tested, the required level of certainty and accuracy should be adjusted for each test depending on the severity of the potential consequences. In other words, A/B test parameters should be informed by a risk-reward analysis instead of default values such as a 95% significance (confidence) threshold used simply because "everyone else does that". The decision whether to A/B test a change should be guided by the need for a proper estimation of risk and/or quantification of the effect, not by which department (or external agency/contractor) is proposing the change.
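To make the effect of that choice concrete, here is a minimal sketch of risk-informed test sizing. The baseline rate, expected lift, and power are hypothetical, and the formula is the standard two-proportion z-test approximation rather than any particular vendor's calculator:

```python
# A minimal sketch of risk-informed test sizing: the same expected lift,
# tested at different significance thresholds. Hypothetical numbers only.
from scipy.stats import norm

def sample_size_per_variant(p_control, p_variant, alpha, power=0.80):
    """Approximate visitors needed per arm for a two-sided test on proportions."""
    z_alpha = norm.ppf(1 - alpha / 2)   # critical value for the chosen threshold
    z_beta = norm.ppf(power)            # critical value for the desired power
    variance = p_control * (1 - p_control) + p_variant * (1 - p_variant)
    return (z_alpha + z_beta) ** 2 * variance / (p_control - p_variant) ** 2

# Hypothetical change: 3.0% baseline conversion rate, ~10% relative lift expected.
for alpha in (0.05, 0.10, 0.20):
    n = sample_size_per_variant(0.030, 0.033, alpha)
    print(f"significance threshold {1 - alpha:.0%}: ~{n:,.0f} visitors per variant")
```

For a genuinely low-risk change, accepting a 90% or 80% threshold cuts the required traffic (and thus the test duration) substantially, while a high-stakes change may warrant an even stricter threshold than the usual default.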

If you are a conversion rate optimization specialist, I believe it is in your best interest, and in the best interest of the industry, to start this discussion with your clients or superiors. Point out that claiming credit for something is not easy at all without a proper test (it is tricky even with one!) and that this holds regardless of what kind of work one is doing. Simply presenting before-versus-after numbers and claiming the difference is entirely due to your work does not constitute valid proof that whatever was done was the cause; and if that is to be the standard of proof, then it should be applied to your field as well (I bet there will not be many who would be happy to give up testing altogether).

True, you are likely to ruffle some feathers, as PR and marketing professionals will have a harder time proving the value of their work. That is, after all, a good thing, since believing you know when you do not really know is a bad business strategy. It is better to acknowledge that you cannot really know and can only guesstimate, at best, than to delude yourself into oblivion. I think it will also foster a better understanding of the value of A/B testing and thus increase your utility as a consultant in fields like ad management, since you already know your way around a controlled experiment, while an ad agency may not have been pushed to learn one up to this point.

———————————————————————————————————————–

Georgi Georgiev is a specialist in web analytics and statistics with special interests in online experiments and data-driven approaches to online marketing, and has been running the Web Focus digital agency since 2008. He has authored dozens of articles as well as several technical white papers on statistical methods in A/B testing. Georgi is also known as the founder of Analytics-Toolkit.com, a software suite for data analysts and specialists in conversion rate optimization.
