SEO Jo Blogs Interviews Web Optimisation Consultant, Tim Stewart

Tim Stewart is speaking at Conversion Elite on July 6th. I wanted to find out more about his background in CRO and get a sneak preview of his talk “How To Give Your Optimisation Projects an Unfair Advantage Using Data”. I asked Tim a few questions about his presentation and how he started his career in Conversion Rate Optimisation (CRO).

“How Long Have You Been Working in Conversion and How Did You Get into Conversion?”

I started out AB testing banner creatives and placements, email subject lines and designs in around 2005, when I was running the commercial and sponsorship for a major publisher website.

I’d been successfully working in internet marketing for about 6 years but had become frustrated when things that should work, didn’t, and things that shouldn’t have worked, did. I wanted to know why, and this offered a way to experiment to find out. I started reading a bit about the idea but didn’t have a chance to do much more beyond basic AB comparisons. It was all manual; there were no tools around at the time that I could easily use. It was just part of getting the most response possible for advertisers.

After a redundancy, I consciously decided to get into the sector, as an addition to general marketing and commercial background experience I had by that point. I could see the potential and the market was coming this way so I thought it would be a useful specialist area and skillset to develop.

So, I looked around for UK vendors or agencies who were building a reputation and ultimately joined Maxymiser in early 2009 as their second Account Manager. I was there for about 3 years, a period of rapid growth for the company and the market. I learned a lot, tested a LOT, got a wide range of experience across multiple verticals and business sizes.

I left to start my own consultancy and have worked with a mix of clients, agencies and vendors, particularly Sitespect, where I work with the UK Client Services team.

I guess that starting early in the lifecycle of the CRO market helped in some ways: the barrier to entry was lower and everyone was learning and making those early mis-steps out of the limelight.

But it also presented challenges: the tech was less mature, and clients’ experience of a data-led optimisation approach was limited. We often weren’t selling our product and services specifically, but the whole concept of what MVT and AB testing are. It was a tough balance of delivering results and managing realistic expectations. I had to learn to evangelise, but also how to deliver the consistent results that gave that evangelism credence.

A large chunk of that was not the testing itself but supporting clients through onboarding, developing working processes and a testing culture, so they were able to get the best results.

“And How Would You Suggest a Newbie Gets into Conversion?”

For a newbie nowadays I’d recommend they read the wealth of material that is now out there. A lot of these challenges have been handled; there are guides and frameworks to help you learn and succeed.

It is an established, credible and still fast-developing sector with a friendly community – go to conferences, speak to practitioners, get involved. Don’t copy slavishly; the test ideas and results you see will be context- and site-specific. But learn from the examples how to structure a test for the data and audience you have.

And don’t just read CRO-focused material; read up on psychology, copywriting, UX and design principles. More than any other marketing discipline it rewards a broad knowledge and a balance between data and creativity.

Invest time in understanding analytics and particularly understand how the statistics of testing work. It is a common weak point when I am training people or coaching client optimisation teams.

If you are relying on the tool alone to define a “winner” without an understanding of how that is judged, you will struggle to plan tests and metrics that suit the way that tool reports, and struggle to interpret its reports correctly.
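To make that concrete: a hedged sketch of the kind of calculation that often sits behind a tool’s “winner” call, a two-sided two-proportion z-test. The function and the traffic numbers are illustrative, not from any specific tool:

```python
from math import erf, sqrt

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test on conversion counts:
    conversions and visitors for control (a) and variant (b)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)               # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))   # normal CDF tails
    return z, p_value

# Illustrative numbers: 3.0% vs 3.6% conversion over 10,000 visitors each
z, p = two_proportion_z_test(300, 10_000, 360, 10_000)
```

Understanding why that p-value depends on sample size and baseline rate, not just the observed lift, is exactly the statistical literacy Tim describes as a common weak point.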

Get an idea for how to work through problems, how to solve for better user experience. Build some experience in project management, scheduling, documenting your work and process. But basically I think it’s more about discovering whether you have that sort of inquisitive, problem-solving mindset. If you do, then adopting the process, skills and constant discipline needed to challenge and learn will come naturally in pursuit of that curiosity.

If you don’t – if the appeal is that it is a growth area with high demand for practitioners, but you don’t care about investigating why and how results are achieved – you will struggle.

But more than anything I’d say – just start. There are competent free and low-priced tools that will allow you to test with a little time investment. The ROI is your improved understanding of how to plan, build, QA and report on a test, even if you are just practising on your own blog.

You can AB test emails and AdWords creatives – testing emails and adverts was where I started.

The best way to understand how those theories apply in the real world is to try them yourself, learn your weaknesses, address them.

2) How Can You Convince Clients to Implement Your Conversion Recommendations? (e.g. this takes time and manpower to make changes to the site)

It really depends on the client, and on where your internal department sits in terms of credibility when pitching for space in the development queue or budget from the marketing director.

In nearly all cases I will start with the data, the roadmap of what has been tested should be aligned with what the business needs. So if you have a test (or learnings from a set of tests) you should have an understanding of how that fits the business and the business model.

  • Presenting that you are “up” on an arbitrary metric picked for a test won’t get traction.
  • Presenting that you’ve reduced the drop out between Product Add and Checkout, which increases users into Payment as per the business strategy, carries wider acceptance.
  • You also need to know what good enough and big enough of a result makes a difference to the business.
  • If you have a change that will yield ~£120k benefit annualised but will cost £200k to implement (or in larger companies may “cost” less but will delay roll-out of a more valuable piece), then it won’t get priority.
  • You need to be clear on the size of the risk, the size of the benefit, the opportunity cost of not implementing, the opportunity cost to other priorities if your work is going to disrupt the existing schedule.
  • There are also soft skills: negotiating an appropriate place in the queue, finding the right sponsors among the decision makers. These are harder to teach except through hard-earned experience.
  • But if you have the numbers, if you planned the test to answer a business challenge, and if the result confirms it moves towards the business objective (or you can quantify a risk that is avoided), that is key.
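The £120k-benefit-versus-£200k-cost bullet above reduces to very simple arithmetic. A minimal sketch, with all figures illustrative rather than from any real client:

```python
def business_case(annual_benefit, implementation_cost):
    """First-year net value and payback period for a proposed change.
    All figures illustrative."""
    net_year_one = annual_benefit - implementation_cost
    payback_years = implementation_cost / annual_benefit
    return net_year_one, payback_years

# ~£120k annualised benefit vs £200k to implement, as in the example above
net, payback = business_case(annual_benefit=120_000, implementation_cost=200_000)
# net is negative in year one and payback takes well over a year,
# so this change is unlikely to jump the development queue
```

The point is not the arithmetic itself but that presenting it this way lets the people with authority weigh your change against competing priorities objectively.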

Cost and Time/Resource

This is the opportunity presented from what we learned on the test; this is the cost and time/resource needed. This is how that sum stacks up.

  • Because even if that doesn’t stack up, even if you “lose” and can’t get it implemented as quickly as you want, you still win.
  • Because this is how you get A place in the queue, even if it is not the one you wanted.
  • And sometimes you have a number (or a risk) that is so high it can stop the planned development roadmap.
  • But the business and the development team need that clear business case so they know the risk from not making that change.
  • You provide the people who have the authority to make the call with the information they need to make the call.
  • If you can back your assertions with numbers, it removes the emotion and politics to a large degree.

Measuring True Worth

  • On both sides, because if you know the true worth of the win and the business has a bigger win (or a larger risk) that takes priority, you can objectively measure your proposed change against that.
  • And if you are honest with yourself and the change you want doesn’t stack, doesn’t justify jumping the queue; don’t get emotional, you are right to back down.
  • You are helping the business to do so.
  • You are helping your own case for next time; you are learning what would be needed to get that priority, which helps shape your roadmap.

Pick your fights, but if you approach it with the facts and data, ideally it shouldn’t ever be a fight, just a negotiation around when and which approach is used.

In between major releases sites often have smaller fast-track changes; it may well be possible to miss inclusion now but be first in line when one of those opportunities arrives.

Business Case

  • And if you have presented it as a business case and it is clearly documented when that chance arrives, then your change is likely to be considered ahead of other updates that lack one.
  • That is one of the advantages of CRO – the process demands documentation and the results are quantified in a way a business case can be made.

3) You Said It Is Important to Get Data. What Is the First Step You Would Recommend a Conversion Manager Takes to Get This Data?

Analytics

  • Check the analytics first and foremost before pulling reports or doing too much analysis.
  • Make sure it accurately measures what you think it measures.
  • Draw up a list of corrections or additions – these are the benchmarks and metrics against which you’ll plan and measure.
  • If they are flawed or lacking detail, any tests you run or conclusions you draw will be similarly flawed.
  • It is common for there to be a big conversion “hole” in a part of the funnel, but when you investigate you find it’s a tracking error: there is an issue, but it’s upstream or downstream of where you thought it was.
  • And the reverse is also true – thinking there is no obvious gap but finding out a metric is double counting or incorrectly reporting a value.
  • So it is an important and early investment to spend time checking and correcting the key metrics you will use.

Behaviour

  • Check Devices, check Channels, look at the mix between New and Returning users.
  • Get a feel for the shape of the traffic and variance.
  • Look at the hourly, daily, weekly, monthly, seasonal buying cycles.
  • Look for user behaviour patterns.
  • Get a feel for the purchase intent of that audience.
  • Get a feel for how users behave, get an understanding of what that looks like in analytics, what detail is missing that you’d like to know.
  • Some detail you might be able to add into the analytics to help plan a later test, some detail you might be able to discover from a test to justify the time to be spent on adding it to analytics later.

Attention to Detail

  • Drill in for detail when it comes to planning the roadmap and the tests, but initially look at the overall shape and see if any areas are clearly under or over performing, any areas that are more or less volatile than others.
  • What is a big enough size to matter to the overall objective?
  • What is under performing by enough that it needs priority focus?
  • What is so volatile in the daily, weekly, monthly average that it will need a bigger shift to be clearly measurable?

These are all key factors to establish as they will shape what you can do now, later and in future. They help shape the roadmap and ground it in reality.
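One way to ground “what is a big enough size to matter” and “what will need a bigger shift to be clearly measurable” is a standard sample-size estimate. A rough sketch using the usual normal-approximation formula at 95% confidence and 80% power; the baseline rate and uplifts are illustrative:

```python
def visitors_per_variant(base_rate, relative_uplift, z_alpha=1.96, z_beta=0.84):
    """Approximate visitors needed per variant to detect a relative
    uplift in a conversion rate (normal approximation; illustrative)."""
    p1 = base_rate
    p2 = base_rate * (1 + relative_uplift)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return int((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

big_shift = visitors_per_variant(0.03, 0.20)    # detecting a 20% uplift on a 3% base
small_shift = visitors_per_variant(0.03, 0.05)  # a 5% uplift needs far more traffic
```

This is why a volatile or low-traffic area needs a much bigger shift to be clearly measurable, and why that constraint should shape the roadmap.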

Bad Optimisation Objectives

  • Too often I see optimisation objectives that state “we will increase revenue per user by 30%” when never in the history of the site has anything moved the needle by more than 5%.
  • So the first step is establishing those benchmarks, understanding how the site performance is achieved, which factors affect it and which pages are most likely to change that user behaviour.

More Data the Better

  • If there are other data sources, get that data too.
  • CRM volumes and user demographics, AdWords and Facebook data, voice-of-customer data, heatmaps and session recordings, usability studies, Live Chat transcripts and user survey results.
  • You don’t need all data all at once, up front before you start.
  • But if there is information other than analytics, look and see whether any of it helps you understand the business or the user base more clearly.
  • And I’d also say even with session recording, heatmaps, good analytics; still do some basics.

TEST

  • If it is a shop, go through the process yourself, try buying something as a completely New user signing up, a Return user signing in, a Return user already signed in etc.
  • Test the main user journeys for purchase or lead generation. Then test secondary journeys like wishlist, order tracking, newsletter sign up, subscription management.
  • Check the data, how many people use each of these? Are the micro conversions even tracked in Analytics?
  • Repeat this on several devices, mobile, tablet, desktop. But also older slower PC, different browsers.
  • It’s not a full device QA – part of the goal of doing this yourself is to understand the flow, the distractions, the frustrations.
  • Can you see those same patterns in the analytics and the session recordings?
  • Are you looking at the site like someone who knows the site or as a customer?

Fresh Eyes

  • So many people don’t use their own site, or use it in a way that is totally unlike a typical user.
  • That’s one data point I use on every project. n=1 so it is not statistically sound but when it comes to the process it is invaluable.
  • It allows you to look at the site with fresh eyes.
  • And if your experience feels seamless but the data shows users struggle, it reminds you that you are not the typical user and that there is a bias in your perception.
  • It provides context for that data, an understanding of why those holes might be there and why users might not be as convinced to buy as your business model and targets require them to be.

4) What are the three things you should avoid when talking about Conversion Optimisation?

I don’t know if there is anything to completely avoid.

But there are some core areas where you might want to be cautious about how you present your responses, because they are part of the discipline but easy to misinterpret. Unless you are clear and explain the limitations and caveats to your statements, the following will cause issues:

  • “Best Practice!” and “Quick Guaranteed Wins!” Except to explain that there are common patterns that can be investigated: heuristics, ideas to explore based on experience.
  • But there is no “quick fix that always works”. The only quick fix that works is – if something is broken, if a form doesn’t submit, a button doesn’t click, an image or CV doesn’t upload on a specific browser.
  • If something that is supposed to work, does not function; FIX IT.
  • That’s not something to test – just fix it, especially on mobile, as that’s usually where corners have been cut.

Revenue

Revenue – especially annualised projections based off one test result which changed something further up the funnel. Unless you can quantify and explain how error margins multiply at each step, how seasonal variation will exaggerate the effect up or down, and how any revenue projection is a wide range subject to a number of variables you can’t control for in one sample. Except… fundamentally, this is all about revenue, opportunity and business objectives. So, you should and do talk about Revenue.

But it needs to be handled with care: you need a clear idea of what accuracy you have on any projected values, and what range of outcomes your data can (and can’t) define. And I would look to talk more about profit and risk value than a headline revenue figure – where this piece of the puzzle fits into the whole.

Which is why I said newbies should try to understand more than just the headline figures in analytics or the testing tool; they need to understand and then report back how the optimisation effort relates to the business.

On a similar note to the above:

Conversion Rate

Site conversion rate as the only metric and “typical” conversion rates you should aim for. Conversion Rate is a ratio – you can make it move by adjusting several factors. Higher site conversion rate is not always what is needed.

There is no single average conversion rate for ecommerce or a particular vertical. There is a distribution, a range in which it might typically fall.

So I am always wary when a stated target is to “increase conversion rate by 20%” or “convert at 7% as that is the average for our vertical”. Unless it is then explained what a conversion rate at that level will deliver to the business, what other factors in the market affect the achievable range.

Quantify

Quantify the target, and also report against which parts of the equation you will change to impact this KPI.

Because dropping price by 50% and offering free delivery for everyone will increase Conversion Rate to well above the “average for our vertical”. But unless it has been costed and the business is still viable, you are sacrificing business health and growth for a vanity KPI. In the same way, the “site conversion rate” is made up of New and Return users, desktop and mobile, high-frequency low-value buyers/leads and low-frequency high-value buyers/leads.

The total is made up of its parts, each of which will have its own challenges and be affected by macro factors. Whilst you can report a bump to the overall site conversion for an outcome, you should always be able to show which part of the audience changed to contribute to that.

So if New users convert 10% better on a test and Return users are neutral, the site conversion rate won’t reflect the change that was made, just that change diluted by the proportion New users represent of the whole.

Be clear on this – yes, the cumulative effect will impact the overall conversion rates. But the reported site conversion rate is only the averaged effect of part of the equation.
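The dilution effect can be made concrete with a small sketch; the segment sizes and rates here are hypothetical:

```python
def site_rate(segments):
    """Blended site conversion rate from (visits, rate) per segment."""
    visits = sum(v for v, _ in segments.values())
    conversions = sum(v * r for v, r in segments.values())
    return conversions / visits

# New users get a 10% relative lift (0.020 -> 0.022); Return users are unchanged
before = site_rate({"new": (6_000, 0.020), "return": (4_000, 0.050)})
after = site_rate({"new": (6_000, 0.022), "return": (4_000, 0.050)})
relative_change = after / before - 1
# the blended site rate moves by far less than the 10% segment lift
```

Reporting the segment-level change alongside the blended figure avoids the impression that the test “only” moved the site rate a few percent.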

5) What is the quickest way to get non-Conversion Believers on board?

Do the job properly. Results replace belief (or disbelief) with fact. Show a process, explain how that will be achieved, deliver on it and then point back to the plan and show people it was not accident or luck. That is the most effective way, but it is not quick. It takes time to build trust and buy-in to that level. Even with a framework and a structured approach it takes time to understand what you can test to influence user behaviour and get results.

From the earliest encounters, initial sales meetings or discovery sessions, I use empathy and personal examples to build recognition of the problems we solve, and therefore the need for (and benefit of) doing so. Pretty much everyone you meet will have had a frustrating web experience. So, explain the concept: the aim is to reduce friction, help users to research and buy, and help provide the solution the business is set up to deliver to them. To do so by looking for areas which reduce the likelihood of the user achieving their objective, and trialling alternative hypotheses to establish the optimal way to resolve them.

Trial and Error

Iterative trial and error, but each step informs the next and removes another layer of friction. Ask them if they have ever filled out a form only to have it wipe everything they entered when they Submit and only then find they failed validation. Ask them if they have ever Added to Cart but not been sure that it added, because it wasn’t clear, then added again only to find multiples in the Cart they then had to Remove and then hit Update Cart.

Or if they had to Add to Cart and create an account just to find out what the delivery options and costs would be. Or if they got as far as Checkout only to find the site doesn’t accept the payment method they wanted to use.

The nods and grimaces of recognition will be clear. Everyone has had this experience on a website. Then ask them if they were on a lunch break or short on time and just needed to get this done quickly and they failed, would they just quit out and try later, maybe forgetting? Or if it was painful to use on mobile and they quit, only to have to come back and repeat it on desktop later? Again, most people are nodding with recognition.

Then ask them if they had intended to buy. More nods. Then ask how often they actually ended up buying (from that same site). Fewer nods. Then ask the ones that did go back how they felt about the prospect of returning to the site, the brand, with the prior knowledge this was not going to be fun. Are they more or less likely to quit if they hit another problem this time? Do they put off that return visit? Need more persuasion to try again? How many chances will they give the site? How much more will it cost to persuade them each time? How much more likely are they to defect to another site with a similar proposition that is easier to use? Then remind them that users on their site, right now, are having these problems and are frustrated by them.

That for every 100 people visiting maybe 3 will buy, but another 7 might have intended to. Maybe we can only convince 1 or 2 more to do so with the current proposition and market competition. But that all adds up.

So this is what we do.

We look for data that shows where users have problems. We eliminate these little friction points, we signpost where next, we deliver on those little promises to the user. Which means fewer people drop out at each stage, fewer people come back frustrated, and the result is more happy users who successfully find your business as the solution they needed.

More happy users, buying or applying or enquiring or signing up more easily, more chances for positive word of mouth. More users in the database, lower acquisition costs, more control over how users experience the brand.

What if it still Doesn’t Work?

If by that point they are not a “Conversion Believer”, if they cannot see the value in learning how to better match their users to their solution, then I am not a believer in their business or their desire to improve the site for the end user. Which means they will probably be in a self-fulfilling cycle: Conversion Optimisation will not work for them, as they don’t believe in the core values and approach that make it work.

And in that scenario, I will not work for them either. And sometimes that is the best way to handle confirmed non-Believers; I turn down the revenue opportunity and wish them luck with their approach.

Thank you Tim for taking the time to be interviewed on Conversion Elite. We look forward to seeing you on July 6th.
