Beginner's Guide to Shopify A/B Testing: What You Need to Know and How to Get Started
You may think that your Shopify store is perfect as it is. But is it?
Your brand, your products, and your target audience are unique. You can’t plug in a basic ecommerce theme and call it a day.
To find what works best for your Shopify store, you need to run A/B tests.
You will be surprised by the insights you will get.
But what is A/B testing? And how does it work for a Shopify store like yours? 🤔
In this article, we’ll take a look at why A/B testing matters and how you can set up your A/B tests for your Shopify store.
Why A/B testing?
A/B testing is a methodology that lets you present two different versions of a page to your visitors: 50% of visitors see version A of your page (the “control”), and the other 50% see version B (the “variant”).
Based on the different pages you present, your audience will take different actions that will affect your ecommerce metrics. In the simplest terms, the version that results in the highest conversion rate while keeping your profits intact wins the test. If the variant wins, it will become the new control version in future tests.
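Under the hood, most testing tools assign visitors deterministically so the same person always sees the same version on every visit. Here's a minimal Python sketch of that idea; the function name and the hash-based 50/50 split are illustrative assumptions, not any specific tool's implementation:

```python
import hashlib

def assign_variant(visitor_id: str, experiment: str = "cta-copy") -> str:
    """Deterministically bucket a visitor into a 50/50 A/B split.

    Hashing the experiment name together with the visitor ID keeps
    assignments stable across visits and independent across experiments.
    """
    digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"
```

Because the split is a pure function of the visitor ID, a returning visitor never flips between versions mid-test, which would contaminate your results.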
Based on the results of your A/B tests, you can make data-driven changes to your store that lift your metrics’ performance 🔥
For example, you could run an A/B test on your popups to modify your CTA’s copy. Let’s say your email newsletter popup currently uses the copy “Subscribe,” which is quite bland. Your variant version could test a more actionable copy like “Join the club”.
After you run your tests long enough, you may see that the variant version leads to 20% higher signup rates, which will help you grow your email list faster than before.
A/B testing vs. multivariate testing
One crucial point to note is that A/B tests change only one element at a time. When you want to test changes to multiple elements — for example, changing your CTA's size and color — you are running “multivariate tests”.
Multivariate tests affect multiple variables of a page, therefore increasing the complexity of the test. They also require much larger sample sizes to reach statistical significance, a topic which we’ll cover in the next section.
For most Shopify stores, A/B tests are enough to generate significant lifts in the store’s performance.
Should you run A/B tests on Shopify?
The fact that you can run A/B tests doesn’t mean you should. You need to consider some issues before you start running A/B tests on your Shopify store.
To start, you need a basic grasp of statistics to run A/B tests. A/B testing is a statistical hypothesis testing methodology that requires understanding concepts such as sample size, confidence interval, and statistical significance. While these concepts aren’t complex per se, you can’t run A/B tests without knowing how they affect your experiments.
Of all these concepts, the most commonly misunderstood is statistical significance, which is a fancy way of saying your test results are mathematically reliable within a range of confidence.
Marketers often use a 95% confidence level, which means there is only a 5% chance that your experiment’s results happened by mere luck. If a variant wins a test and you make the changes accordingly, you should expect to see results similar to the ones found in your test.
Before you start designing and running an A/B test, you also need to calculate your sample size — that is, the minimum number of visitors you need in each version to run your test confidently.
Your sample size will depend on your baseline conversion rate — that is, the current conversion rate for the ecommerce metric you are testing — and your minimum detectable effect (MDE) — that is, the minimum lift you expect to see in your test to call it successful. (To learn more about choosing your minimum detectable effect, read this guide)
To do the math, you can use an online sample size calculator such as the one by Evan Miller.
For example, say the metric’s current performance is 3% (the checkout conversion rate) and the minimum detectable effect is 30%. In that case, your variant version will only be deemed successful if it increases the checkout conversion rate by at least 30%.
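If you'd rather script the calculation than use an online calculator, the standard two-proportion sample-size formula fits in a few lines of Python. This sketch assumes the common defaults of 5% significance and 80% power; it approximates, rather than reproduces, any particular calculator:

```python
import math
from statistics import NormalDist

def sample_size_per_variant(baseline: float, relative_mde: float,
                            alpha: float = 0.05, power: float = 0.80) -> int:
    """Minimum visitors per variant for a two-sided two-proportion test."""
    p1 = baseline
    p2 = baseline * (1 + relative_mde)       # e.g. 3% lifted by 30% -> 3.9%
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_power = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_power * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p2 - p1) ** 2)

# 3% baseline checkout conversion rate, 30% minimum detectable effect
print(sample_size_per_variant(0.03, 0.30))  # roughly 6,500 visitors per variant
```

Note how quickly the requirement shrinks as the baseline or the MDE grows — which is exactly why low-traffic stores should test bolder changes on higher-converting metrics.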
If your Shopify store can’t meet the sample size for both versions within a month, you shouldn’t run the test. Instead, pick a metric with a higher baseline conversion rate or choose a higher MDE — both reduce the required sample size.
If you decide to run an A/B test with a small sample size, you will face two problems:
- You will need to run the test for a long time, and changes in traffic or seasonality during that window can distort your results.
- You may end up with a “false positive” — that is, you detect a winner where there’s none.
As a rule of thumb, avoid running an A/B test if your Shopify store can’t meet the minimum sample size based on your baseline conversion rate and minimum detectable effect.
How to run A/B tests in your Shopify store
Step 1. Define your hypothesis
The whole idea behind an A/B testing process is to test the validity of a hypothesis — that is, an assumption you make about your store. Every hypothesis must come from your existing data, including:
- Web analytics
- User research
Let’s imagine you have the following assumption about your Shopify store: “showing the shipping and return policies near the “Add to Cart” button will increase the checkout conversion rate”.
We call this assumption the “alternative hypothesis”. The “null hypothesis” is its opposite: that the change will make no difference.
👆 Note that we are talking about a “hypothesis” and not an “idea” because the latter is the result of creative activity — a “brainstorming session” — whereas the former comes out of your data.
In our example, your hypothesis could have come from your customer support tickets, which show that a significant volume of visitors ask for your shipping and return policies before making (or finishing) a purchase.
Your hypothesis must have an ecommerce metric — a KPI, to be more precise — attached to it that defines whether the A/B test is successful or not. In our example, we could use the checkout conversion rate, but also the add-to-cart conversion rate (i.e., the number of people who add a product to their cart divided by the total visitors).
To develop a hypothesis for your A/B tests, we'll use a hypothesis kit:
- Because we saw [add your data or user feedback]
- We expect that [change in variant version] will cause [result]
- We’ll measure this using [data metric]
In our example, the hypothesis kit would look like this:
- Because we saw high-intent visitors contact our customer support about our shipping and return policies
- We expect that adding our shipping and return policies below our “Add to Cart” button will cause a conversion rate increase
- We’ll measure this using the checkout conversion rate
The data that comes from your tests will tell you whether you can reject the null hypothesis. If you can — that is, the variant produced a statistically significant lift — the variant version (the “B”) won the test. If you can’t, the control version (the “A”) won, and everything will remain the same.
Step 2. Choose and install your A/B testing software
Running A/B tests in your Shopify store is a technical process that Shopify users can't execute on their own (Shopify Plus users can run A/B tests by default, but I assume you aren’t a Plus user).
For that reason, you need to use a third-party A/B testing tool. The most popular tools on the market don’t differ much from one another except in price and complexity. Compare a few and pick the one that best fits your needs and budget.
Several Shopify apps also specialize in A/B testing, like Neat and Shogun. These are simpler and more focused on Shopify ecommerce stores, making them a budget-friendly alternative to the tools mentioned above.
Step 3. Create the tests
Once you have your hypothesis, you need to create your tests. Every A/B testing tool works differently, but in every case, you will be designing the changes in the variant version.
What you need to do should be clear by now; how you are going to design your variant page is what matters.
In our previous example, it’s clear we need to add the shipping and return policies below the “Add to Cart” button, but where exactly will that be? What will the margin and padding between the CTA and the policies’ link be? What about the color? Should we highlight the links?
You need to answer these questions before you finish designing your tests. If possible, discuss your design options with your design and marketing teams.
Step 4. Start the test
After you have designed your variant page, you can start the test. The question you need to ask yourself is: how long will I be running this test?
The simplest answer is “until your test meets the predefined sample size.” Simple, but not complete. Why? Because you are missing one key factor: timing.
Customers' shopping habits aren’t linear: some people like shopping on the weekends, others at the end of the month, and so on. The weather, holidays, and location also play a part in your business cycle.
The clearest answer will come from your customers: how long does it take for them to shop? Do they research for weeks before purchasing, or do they often buy your products out of impulse?
Only after you have taken all of these factors into account can you finish a test. Even if a test exceeds its minimum sample size, wait at least a full week before ending it so you capture every day of the shopping cycle.
Step 5. Analyze the data
Your A/B tool will have gathered a lot of data, and you need to decide: which version won the test?
The A/B testing tool you chose will show you the results and suggest the winner.
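If you want to sanity-check your tool's verdict, the classic approach is a two-proportion z-test: it tells you how likely the observed difference would be if the two versions actually performed the same. A minimal sketch, assuming you can export visitor and conversion counts per version:

```python
import math
from statistics import NormalDist

def ab_test_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)          # combined conversion rate
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Control: 300/10,000 converted (3%); variant: 390/10,000 (3.9%)
p = ab_test_p_value(300, 10_000, 390, 10_000)
print(p < 0.05)  # a p-value below 0.05 matches the 95% confidence level
```

A p-value below 0.05 is the code-level equivalent of the 95% confidence level discussed earlier: the lift is unlikely to be mere luck.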
At first, you may be tempted to pick the version that generated the highest lift for your chosen metric. However, remember to consider other metrics, including:
- Add-to-cart conversion rate
- Abandoned-cart rate
- Net profit
Also, you need to segment your data. The aggregated data will show you a result that may cause you to pick a winner quickly, but after you segment the results, you may find your alternative hypothesis was proven right only among certain segments. Some segments that you want to use in your analysis include:
- User type: New or returning visitors
- Desktop OS: Mac, Windows, Linux
- Mobile OS: iOS, Android
- Device: Desktop, Mobile, Tablet
- Browser: Chrome, Firefox, Safari
- Medium: Organic search, paid, social media
- Account: Logged-in or guest visitors
Let’s imagine that in our example, you see the variant page didn't generate a significant lift on the checkout conversion rate.
However, after you segment the data, you see that the variant did generate a significant lift for your logged-in visitors. What’s more, that lift reduced your customer support tickets for that segment, which increases your net profit indirectly.
In this case, you need to look at your sample size and conversion rates. Are the conversions generated by the logged-in visitors statistically significant? If so, you can call the variant version a winner. Otherwise, you will have to keep running your test until the data for this specific segment is statistically significant.
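To make the segment check concrete, here's a sketch that runs a standard two-proportion z-test per segment. The segment names and all the counts below are made-up illustrations, not real data:

```python
import math
from statistics import NormalDist

def p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided two-proportion z-test p-value."""
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (conv_b / n_b - conv_a / n_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical export: per-segment (conversions, visitors) for each version
segments = {
    "logged-in": {"control": (120, 2_000), "variant": (170, 2_000)},
    "guest":     {"control": (180, 8_000), "variant": (190, 8_000)},
}

for name, data in segments.items():
    p = p_value(*data["control"], *data["variant"])
    verdict = "significant" if p < 0.05 else "keep testing"
    print(f"{name}: p = {p:.3f} -> {verdict}")
```

In this made-up export, the logged-in segment clears the significance bar while the guest segment doesn't — exactly the situation described above, where a winner hides inside the aggregate.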
From this example, you can see that A/B testing is about increasing not just your conversion rate but also your net profit.
Final considerations for your Shopify A/B testing strategy
By now, you should know everything there is to know about running successful A/B tests in your Shopify store. Before we finish, I want to leave you with some tips.
What matters in your A/B tests isn’t delivering winners consistently, but learning from every test you run. The data you gather will tell you more about your brand and your audience than almost any other quantitative method.
Focus on the big wins. Running A/B tests is expensive and time-consuming. Unless your store has seven or eight figures worth of monthly traffic, you will need to run your A/B tests over several weeks until the results reach statistical significance. That’s why you want to focus on the highest-impact tests—those that generate the highest lift over the most profitable metrics. Instead of testing the color of your CTA, test shortening your checkout process, your product pages, and popups. Such bold tests will likely bring the highest ROI for your business.
A/B testing is a never-ending process. Your audience and your brand will evolve, so you may have to run the same tests multiple times over the years to factor in these changes (at least for the highest-impact tests).
Ivan Kreimer is a freelance content writer for hire who creates educational content for SaaS businesses like Leadfeeder and Campaign Monitor. In his spare time, he likes to help people become freelance writers. Besides writing for smart people who read sites like Getsitecontrol, Ivan has also written for sites like Entrepreneur, MarketingProfs, TheNextWeb, and many other influential websites.
You're reading the Getsitecontrol blog, where marketing experts share proven tactics to grow your online business. This article is part of the Ecommerce marketing section. Main illustration by Icons8.