Top Conversion Rate Optimization (CRO) Mistakes and How to Fix Them


Common CRO mistakes can make it tough to generate meaningful results through Conversion Rate Optimization (CRO) and experimentation. While almost anyone can achieve the odd one-off conversion rate uplift, producing wins consistently is another challenge.

Moreover, many CRO practitioners find that their winning variations fail to perform once they serve 100% of traffic, even though those variations delivered the desired results during testing.

Thankfully, in our experience, almost all CRO-related difficulties can be traced back to several core mistakes. In this post, we’ll review a list of the most common (and harmful) ones, explaining what they are and how you can avoid them.

If we’ve done our job well, by the end of this blog, you should have everything you need to sidestep all of these pitfalls and start driving accurate, replicable results through CRO and experimentation.

The 10 Most Common CRO Mistakes And How To Avoid Them

Here are the 10 mistakes we see most often, along with practical ways to avoid each one so you can achieve consistent, meaningful results through Conversion Rate Optimization.

1️⃣ Starting Too Big

As a general rule, it’s never good to start an experimentation program with an ambitious, resource-intensive experiment. Here’s why:

We began working with a new client – a property listing website – a few years ago. Some early research indicated that adding a map feature might benefit their property pages. This feature would allow users to see where each property was located on a map. We thought this would be a great addition.

Building this functionality was complicated and required a lot of development and testing time to ensure everything worked well and was user-friendly. Unfortunately, when we finally launched the experiment, we found that the new feature hurt our primary conversion metric: many users were actively leaving the maps view we had created.

This is just how it goes sometimes. You can only be sure of a change’s impact once you test it. Our mistake was spending too much time and energy on a hypothesis that could have been tested with a much simpler, less resource-intensive experiment. For instance, we could have used a “painted door” test to gauge interest in the maps feature before building it.

This would have been a much quicker and easier experiment, and it would have given us all the information we needed to validate or invalidate our hypothesis.

If we had found that many users were trying to use this feature, we could have built out the full functionality with confidence that it would improve site engagement.

2️⃣ Average Build Size is Too Large

This sounds similar to the last point, but it’s different. Previously, we discussed not starting your program with a large, resource-intensive experiment when you lack data. Now, we’re talking about large builds in general.

Many people in the CRO industry believe that ‘the bigger the build, the bigger the uplift.’

The idea is simple: if I make significant changes to a web page, the conversion uplift will likely be larger than if I only make small changes.

However, our experience showed that our smaller experiments often gave more robust results than our larger ones. So we dug into our database, which holds thousands of experiment results, to check whether our methods were efficient.

The chart below shows what we discovered.

As you can see, minor tweaks have just as high a win rate as experiments with large builds – and they even have a slightly higher average uplift (6.6% vs. 6.5%). This data suggests no link between the size of the build and either the win rate or the uplift. If you spend all your time on massive experiments hoping for a significant uplift, you’re probably wasting a lot of time.

Focusing on smaller, more manageable changes can yield just as impressive results.

3️⃣ Tests are Too Small

That said, while you shouldn’t pour all your time into massive experiments, testing nothing but tiny tweaks is also a mistake. Focusing only on small changes means missing out on more significant opportunities.

As we’ll discuss in the next section about chasing winners, experimentation allows you to try out your boldest and brightest ideas – ideas that could completely change how your business operates.

If all your experiments focus on minor tweaks, you miss out on one of the most significant benefits of CRO: taking risks with a safety net.

Ideally, your program should include a combination of small, low-risk tests with a high chance of success and higher-risk tests that could either fail badly or succeed spectacularly. This balanced approach helps you make the most of your CRO efforts.

4️⃣ Chasing Winners

Following our last point, conversion uplifts are essential, but CRO should also be about gaining deep insights into your customers and trying bold, innovative ideas with less risk.

As an agency, if we win too many experiments, we might question whether we’re being bold enough. A high win rate may look good on paper, but it might mean we’re playing it too safe, only testing ideas we already think will work. Proper CRO involves taking some risks.

The most value from CRO comes when you learn things you didn’t know. This allows you to achieve bigger, more surprising wins, which can inform your experiment roadmap as well as your product, pricing, and business strategies. Aim for a mix of safe bets and bold experiments.


5️⃣ No Hypothesis

Many people doing CRO today run their tests, analyze the results, see if the new version won or lost, and then move on to the next test.

On one hand, these people should be commended for running experiments and making evidence-based decisions. On the other hand, their process is missing a crucial element of sound scientific methodology: a hypothesis.

Every test should be designed to test a hypothesis. That way, even if your test fails, you learn something valuable – for example, that your hypothesis was wrong. You can then use this insight to design future experiments with a higher chance of success.

Advanced CRO is just as much about learning as it is about boosting your conversion rate. Creating data-backed hypotheses and then testing them is the key to long-term success.

6️⃣ Statistical Misunderstandings

Confusion around A/B testing statistics causes many difficulties for CRO practitioners.

For example, many people stop their tests as soon as they reach 90 or 95% significance. Mats Einarsen explained why this is a bad idea: he simulated 1,000 A/A tests (where the control and variation are identical) and found that 531 of them reached 95% statistical significance at least once!

This shows that if you stop your experiment as soon as it reaches a certain significance level – even 95 or 99% – there’s a good chance your result is due to luck.
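To make the peeking problem concrete, here is a minimal Python sketch of the same kind of simulation (the 5% conversion rate, peek interval, and sample counts are illustrative assumptions, not Einarsen’s exact setup): run identical A/A tests, check significance repeatedly, and count how many “win” at least once.

```python
import math
import random

def p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for a pooled two-proportion z-test."""
    pooled = (conv_a + conv_b) / (n_a + n_b)
    if pooled in (0, 1):
        return 1.0
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = abs(conv_a / n_a - conv_b / n_b) / se
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

def aa_test_ever_significant(rate=0.05, peek_every=200, max_n=5000, alpha=0.05):
    """Run one A/A test (both arms identical) and peek at the p-value
    every `peek_every` visitors; return True if it ever looks significant."""
    conv_a = conv_b = 0
    for n in range(1, max_n + 1):
        conv_a += random.random() < rate
        conv_b += random.random() < rate
        if n % peek_every == 0 and p_value(conv_a, n, conv_b, n) < alpha:
            return True
    return False

random.seed(42)
sims = 300
false_wins = sum(aa_test_ever_significant() for _ in range(sims))
print(f"{false_wins}/{sims} identical A/A tests hit p < 0.05 at least once")
```

Even though both arms are identical, far more than 5% of these simulated tests cross the 95% significance line at some point – which is exactly why stopping at the first significant peek is unsafe.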

To avoid this, you must determine your required sample size before launching your test – and stick to it.

Here’s an excellent calculator to help you determine the sample size you need for your experiment. Here’s a good starting point for learning more about A/B testing statistics in general.
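For reference, the standard formula behind those calculators can be sketched in a few lines of Python (fixed here at 95% confidence and 80% power; the 3% baseline rate and 10% relative lift are just example inputs):

```python
import math

def sample_size_per_variant(baseline_rate, relative_lift):
    """Visitors needed per variant to detect the given relative lift with a
    two-sided test at 95% confidence and 80% power
    (normal approximation for two proportions)."""
    z_alpha = 1.96  # two-sided 95% confidence
    z_beta = 0.84   # 80% power
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)
    p_bar = (p1 + p2) / 2
    delta = abs(p2 - p1)
    n = ((z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
          + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2) / delta ** 2
    return math.ceil(n)

# Example: 3% baseline conversion rate, aiming to detect a 10% relative lift
print(sample_size_per_variant(0.03, 0.10))
```

Note how quickly the required sample grows as the baseline rate or the detectable lift shrinks – this is why low-traffic sites struggle to test small changes.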

7️⃣ No Research

Once you see the value of CRO and experimentation, the next challenge is deciding what to test. Should you change your hero image? Should you make your headline more emotional? How do you decide which idea to test first? And how do you know if it’s worth testing at all?

To answer these questions, you need to do your research.

Research can take many forms – analytics audits, scrollmaps, heatmaps, surveys, user testing, biometric testing, and more – and it indicates where and why your web visitors aren’t converting. With this information, you can better decide which hypotheses are worth testing and which should sit lower on your priority list.

Ultimately, there are many potential hypotheses you could test on your website. By prioritizing those backed by multiple data points and using a mix of qualitative and quantitative research, you can focus on the areas likely to yield the most significant return.


8️⃣ The Flicker Effect

Sometimes, when you run an A/B test, the old version of your webpage appears in the browser before the new version shows up. This is called the flicker effect, or flash of original content (FOOC), and it can mess up your experiments.

Not only does it hurt your website’s user experience, but showing users both versions of your webpage – the original and the new – affects how they respond to your test, making your results unreliable.

Luckily, there are ways to minimize or altogether remove the flicker effect.

Our developers write the code for our clients’ experiments with this in mind: the code runs as quickly as possible so that users only ever see the version of the webpage they’re supposed to. Tackling issues like this is essential for accurate results.

9️⃣ Wrong Primary Metric

Your primary metric is the metric you use to decide if your experiment is a winner or a loser.

For example, if you have a website that sells shoes, you might set the number of orders as your primary metric. So, if an experiment results in a 10% increase in orders, you’d call it a win.

But what if you’re optimizing a page that’s a few steps away from your final conversion? For instance, you may have a four-step funnel and want to optimize the first page. What should be your primary metric?

In this case, you might be tempted to choose the action you want the user to take on that page as your primary metric rather than the final conversion. So, in this example, every time a user moves from the landing page to the basket page, you’d count it as a conversion.

However, this choice can backfire.

Sometimes, for various reasons, you might find that your ‘next action’ conversion rate goes up, but your ‘final action’ conversion rate goes down.

Here’s a real-world example:

In one experiment, we hypothesized that making the mini-basket easier to use would increase the progression rate to checkout, which would in turn lift our final conversion rate. The experiment did increase the progression-to-checkout rate by 28% – but it also increased the drop-off rate on the checkout page by 43%. The net result was a 7.7% decrease in final conversions.
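To see how that arithmetic works, here is a small Python check using hypothetical baseline figures (the 20% progression rate and 39.34% checkout drop-off rate below are assumptions chosen so the percentages line up with the example):

```python
# Hypothetical baseline figures (assumed for illustration):
baseline_progression = 0.20    # landing page -> checkout
baseline_dropoff = 0.3934      # drop-off rate on the checkout page

# The variation: progression up 28%, checkout drop-off up 43%
variant_progression = baseline_progression * 1.28
variant_dropoff = baseline_dropoff * 1.43

baseline_final = baseline_progression * (1 - baseline_dropoff)
variant_final = variant_progression * (1 - variant_dropoff)
change = variant_final / baseline_final - 1
print(f"Final conversion change: {change:+.1%}")
# prints: Final conversion change: -7.7%
```

The point is that a healthy-looking lift in the "next action" metric can still multiply out to fewer final conversions once the downstream drop-off worsens.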

Because of this result and many others like it, we recommend using your final conversion as your primary metric.

Note: There are a few exceptions where using something other than your final conversion as your primary metric might make sense. If you’d like to learn about these exceptions, check out our blog post about primary metrics.

🔟 Not Tracking Guardrail and Secondary Metrics

Choosing the right primary metric is a great start, but to benefit from your A/B tests, you should also track some guardrail and secondary metrics.

Many people doing CRO miss this step, which means they lose out on valuable insights that could guide their future tests.

Guardrail metrics are secondary measures connected to key business goals. They help you ensure your test isn’t accidentally hurting other essential KPIs.

For example, we worked with a camera seller and added an ‘add to basket’ button on the product listing page, letting users buy without visiting the product page.

This test increased our primary metric – the number of orders – but it hurt two guardrail metrics: average order value (AOV) and revenue. If we hadn’t tracked these guardrail metrics, we would have wrongly called this test a success, costing our client a lot of money.

Luckily, we were also tracking secondary metrics. These don’t decide whether your test wins or loses, but they help you see things like engagement and scroll depth to understand the results better.

When we checked our secondary metrics, we saw that fewer users in the variation were buying accessories and add-ons than in the control. This happened because users skipped the product page, where they would usually see these items.

The insights from these guardrail and secondary metrics helped us avoid making a change that would harm business goals. They also helped shape our future testing strategy.
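The decision rule we’re describing can be sketched in a few lines of Python (the metric names, lift figures, and the 2% tolerance are all hypothetical, not our actual thresholds):

```python
def evaluate_test(primary_lift, guardrails, max_guardrail_drop=-0.02):
    """Call a test a winner only if the primary metric improved AND no
    guardrail metric regressed beyond the allowed threshold (here 2%)."""
    breached = {name: lift for name, lift in guardrails.items()
                if lift < max_guardrail_drop}
    if breached:
        return f"blocked by guardrails: {sorted(breached)}"
    return "winner" if primary_lift > 0 else "no win"

# The camera-shop scenario: orders up 8%, but AOV and revenue down
print(evaluate_test(0.08, {"aov": -0.06, "revenue": -0.03}))
# prints: blocked by guardrails: ['aov', 'revenue']
```

Encoding the rule this way forces you to name your guardrails before the test runs, so a tempting primary-metric win can’t quietly override a revenue loss.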

Final Thoughts:

Most people can get a one-time boost in conversion rates on their website, but making those gains last is hard. We hope this post helps you spot common CRO mistakes and shows you how to avoid them.

Check out our biweekly newsletter to learn more about using CRO to achieve your business goals. We share tips, strategies, and real-world examples that have helped us generate over £1 billion in extra revenue for our clients.