What Ecommerce Brands Get Wrong About Conversion Rate Optimisation
Your conversion rate is 1.8%. The benchmark you've seen cited is 2–3%. You hire a CRO agency, run a discovery sprint, and start testing. Twelve months later, you've run 40 experiments, and you're at 2.1%.
The problem isn't usually that CRO doesn't work. It's that most brands reach for it before they've diagnosed what's actually suppressing conversion. A testing programme running on top of a structural problem will always underdeliver because the tests ask the wrong question.
What CRO is actually good at
CRO works when the underlying commercial model is sound and the friction is genuinely in the journey. Real demand for the product, competitive pricing, qualified traffic, and a checkout that functions — when those conditions hold, iterative testing on page layout, copy, imagery, and flow produces measurable returns.
That's a specific set of conditions. When they don't hold, you end up testing around a problem you haven't named.
The structural issues a test won't surface
Traffic quality
A 1.8% conversion rate looks like a conversion problem. It might be a traffic problem.
If a significant share of your sessions comes from broad paid social targeting, poorly matched search terms, or content that attracts browsers instead of buyers, your rate will be suppressed regardless of what the product page looks like. Segment by channel in GA4 — it takes an afternoon. What you typically find is that one channel converts at 3.5% and another at 0.4%, and the blended figure has been hiding the actual situation for months.
Testing a landing page that receives low-intent traffic will not fix a low conversion rate.
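If your analytics exports to CSV, the segmentation really is an afternoon of work. The sketch below is a minimal illustration, not a prescription: the file name and the column names ("channel", "sessions", "purchases") are assumptions, so adjust them to whatever your GA4 export actually produces.

```python
# Minimal sketch: per-channel conversion rates from a GA4 CSV export.
# File name and column names are assumptions - rename to match yours.
import csv
from collections import defaultdict

sessions = defaultdict(int)
purchases = defaultdict(int)

with open("ga4_channel_export.csv", newline="") as f:
    for row in csv.DictReader(f):
        sessions[row["channel"]] += int(row["sessions"])
        purchases[row["channel"]] += int(row["purchases"])

for channel in sorted(sessions, key=lambda c: purchases[c] / sessions[c]):
    rate = purchases[channel] / sessions[channel]
    print(f"{channel:<20} {rate:.2%}  ({sessions[channel]:,} sessions)")

blended = sum(purchases.values()) / sum(sessions.values())
print(f"{'Blended':<20} {blended:.2%}")
```

If the per-channel figures span 0.4% to 3.5%, the blended number is an artefact of the traffic mix, and the first intervention belongs in acquisition, not on the page.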
Product-market misalignment
Sometimes the conversion rate is low because the product isn't compelling enough at that price point, in that market, against those alternatives. No button colour test addresses that.
The signal is usually visible if you look: high add-to-cart rates with low checkout completion point to friction in the journey; low add-to-cart rates on visited product pages point to the product or its presentation not landing. Neither is primarily a CRO problem.
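To make that read concrete, reduce the funnel to stage-to-stage rates. Every count below is invented for illustration; the point is the shape, not the numbers.

```python
# Stage-to-stage funnel rates. All counts here are invented for
# illustration - substitute your own GA4 or platform figures.
funnel = [
    ("Sessions",           250_000),
    ("Product page views", 100_000),
    ("Add to cart",          4_000),  # 4% of product views: early-funnel signal
    ("Checkout started",     2_800),
    ("Order completed",      1_200),  # ~43% completion: late-funnel signal
]

for (stage, count), (_, prev) in zip(funnel[1:], funnel):
    print(f"{stage:<20} {count / prev:6.1%} of previous stage")
```

In this invented example the steep drop is at add-to-cart, which points at product, pricing, or presentation rather than checkout mechanics.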
Checkout and technical debt
Checkout is where CRO and technical debt genuinely overlap. Unnecessary steps, broken address validation, limited payment options, inconsistent behaviour on mobile — these suppress conversions in ways that testing cannot fix. Fix them before you run experiments.
If your checkout completion rate sits materially below the benchmarks Shopify or BigCommerce report for comparable merchants, you have a technical problem. The solution is engineering work, not a test.
Pricing and delivery cost
Price is a conversion lever most CRO programmes treat as fixed. It isn't. If your pricing is out of step with the market, or your delivery cost structure makes the basket feel expensive at the cost-reveal stage, you will see abandonment there that no headline test will resolve.
A competitive pricing audit is a half-day of work. It often reveals more about suppressed conversion than six months of A/B testing.
Why testing feels like the right answer
Testing feels productive. You run an experiment, you get data, you make a decision. There's a cadence. Agencies can report on it. You can show velocity in a board update.
Part of why this pattern persists is agency incentives. A testing calendar is legible, reportable, and renewable. "Your traffic quality is the problem" is none of those things — it points back to the acquisition strategy and hands the budget to a different team. Part of it is internal pressure to show momentum. A roadmap of experiments is easier to defend than a diagnosis.
That combination — an agency incentivised to test, a client incentivised to show progress — produces a lot of activity and rarely enough movement.
What to look at before you start testing
A pre-CRO audit doesn't need to be complicated. It needs to be honest.
Segment your traffic by channel and calculate the conversion rate per channel. High variance means the problem is in acquisition targeting, not on-site.
Map your funnel in full. Where specifically does volume drop? Low add-to-cart means the issue is early — product, pricing, presentation. High checkout abandonment means it's late — friction, cost reveal, payment options. These are different interventions.
Check your checkout technically. Payment methods, address lookup, mobile flow, and load time at each step. Fix anything obviously broken before you test anything.
Run a competitive pricing check. Not to undercut — to understand whether price is a structural barrier.
Review your product pages for the basics. Is the photography accurate? Does the copy answer the purchase questions your customers actually have? Are reviews present and credible?
That audit is the work that should happen before a testing programme is commissioned. It's often skipped because it's less commercially legible than an A/B testing roadmap.
What a CRO programme looks like when the foundations are right
When structural issues are resolved, a well-run programme has a clear remit: test specific hypotheses about specific friction points, with enough traffic to reach statistical significance, in an order that reflects commercial priority.
The tests that produce conclusions you can act on are grounded in something specific — a user research observation, a pattern in session recordings, a checkout drop-off that technical review didn't explain. The question isn't what could be better — it's what the data suggests is going wrong.
Statistical significance matters. Tests need enough traffic and enough time to produce conclusions you can act on. Calling results early because you want to move on produces noise. A smaller number of well-designed tests will outperform a high-velocity programme of inconclusive ones.
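For a sense of what "enough traffic" means, the standard normal-approximation sample-size formula for a two-variant conversion test is quick to compute. This is a minimal sketch; the baseline rate and the uplift are assumptions chosen to echo the figures earlier in this piece.

```python
# Sample size per variant for a two-proportion test, using the
# normal approximation (two-sided alpha = 0.05, power = 0.80).
import math
from statistics import NormalDist

def sample_size_per_variant(p1: float, p2: float,
                            alpha: float = 0.05, power: float = 0.80) -> int:
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # 1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # 0.84 for power = 0.80
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Assumed scenario: detecting a lift from 1.8% to 2.25% (25% relative).
n = sample_size_per_variant(0.018, 0.0225)
print(f"{n:,} sessions per variant, {2 * n:,} in total")
```

That scenario needs roughly 15,000 sessions per variant. Halve the expected uplift and the requirement roughly quadruples, which is why underpowered programmes drift towards calling noise a win.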
The benchmark problem
The 2–3% conversion rate benchmark is cited constantly and used badly. It's an average across a vast range of categories, price points, traffic sources, and customer types. A brand selling considered-purchase furniture at £1,500 average order value should not expect to convert at the same rate as a brand selling £30 skincare. The industry average tells you almost nothing actionable.
What matters is your conversion rate over time, segmented by the variables that actually affect it, measured against your own historical performance. Are returning customers converting at a healthy rate? Is new visitor conversion improving as you refine acquisition targeting? Is mobile within reasonable range of desktop? Those comparisons produce decisions. The benchmark does not.
FAQs
What is a good conversion rate for ecommerce?
There is no single figure that applies across categories. Brands selling low-consideration products at £20–£40 may convert at 4–6%. Brands selling considered purchases at £800–£1,500 may convert at 0.8–1.5% and be performing well. The more useful question is whether your rate is improving over time and whether it differs significantly across channels, devices, or customer types.
How long does a CRO programme take to show results?
It depends on traffic volume. For brands doing under 50,000 sessions per month, individual tests may take four to six weeks to reach statistical significance. A programme of ten well-designed tests may take the better part of a year to run properly. Programmes that report weekly wins on low-traffic sites are reporting noise.
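To make that timescale concrete under assumed numbers: around 30,000 total sessions needed (consistent with the sample-size sketch above), 50,000 monthly sessions, and an assumed 40% of them reaching the tested page.

```python
# Back-of-envelope test duration. Every figure here is an assumption.
required_sessions = 30_000     # both variants combined
monthly_sessions = 50_000
share_on_tested_page = 0.40    # fraction of sessions that see the test

weeks = required_sessions / (monthly_sessions * share_on_tested_page) * (52 / 12)
print(f"~{weeks:.1f} weeks to reach the required sample")  # ~6.5 weeks
```

That lands at the top of the four-to-six-week range; a lower-traffic site or a smaller expected uplift pushes it out further.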
Should we run CRO in-house or use an agency?
It depends on whether you have the internal resources to run tests properly and interpret the data honestly. The risk with an external agency is that their incentive is test velocity and marginal wins, not telling you that your traffic quality is the real problem. Whoever runs it, the brief needs to start with diagnosis, not a testing calendar.
Is CRO worth it for smaller ecommerce brands?
For brands with revenue under £2m, a structured testing programme is rarely the right investment. At that stage, improving traffic quality, fixing the basics of checkout, and tightening product presentation will typically outperform anything a testing programme produces. The traffic volume required to run statistically valid tests is also difficult to reach. The same analytical thinking applied as a quarterly review is more valuable than a formal programme.
What tools do you need?
At minimum: a properly configured analytics platform (GA4, or Shopify Analytics for simpler operations), a session recording tool (Hotjar and Microsoft Clarity are both viable at low cost), and a testing platform if you're running A/B tests (VWO and AB Tasty are options). Most programmes aren't constrained by tooling. They're constrained by the quality of the hypotheses and the willingness to interpret results honestly.
What's the difference between CRO and UX work?
CRO means quantitative testing of specific variants; UX work means qualitative research into user behaviour and journey design. In practice, the distinction is often artificial: a well-run programme uses both, because testing without qualitative grounding means optimising things that aren't the actual source of friction.