Marketers traditionally use A/B testing to compare two versions of a website and identify which version performs better. But don’t be mistaken – that’s not all that A/B testing is good for. Amazon sellers can also use Amazon A/B testing to evaluate different versions of their product listings, and figure out which listing they should use to run their Sponsored Product Ads.
A/B testing a website is pretty simple. All you have to do is use a tool such as Optimizely or Instapage to create two (or more!) variants of your site, and split your traffic between those variants. Unfortunately, Amazon A/B testing product listings isn’t as straightforward.
Why is this so? Amazon only allows sellers to direct their traffic to one product page, and there’s no option that lets you push views to separate product listings simultaneously. To get around this, you’ll have to manually log into Amazon to change your product listing after a predetermined period of time.
Say you run your A/B test for a total of 60 days. For the first 30 days, you’ll be using your standard product page as a control. Once those 30 days are up, you’ll log into Amazon and change your product listing, and let this run for another 30 days. At the end of the test, the next step is to collate all the data, and find out which is the better-performing variant.
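Once both 30-day windows are done, you need to decide whether the difference between the two variants is real or just noise. A minimal sketch of that comparison is a two-proportion z-test; the sales and impression figures below are hypothetical placeholders, not real benchmarks:

```python
import math

def conversion_rate_z_test(conv_a, views_a, conv_b, views_b):
    """Two-proportion z-test: is variant B's conversion rate
    significantly different from variant A's?"""
    p_a = conv_a / views_a
    p_b = conv_b / views_b
    # Pooled conversion rate under the null hypothesis (no real difference)
    p_pool = (conv_a + conv_b) / (views_a + views_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / views_a + 1 / views_b))
    return p_a, p_b, (p_b - p_a) / se

# Hypothetical 30-day totals: orders and page views for each variant
p_a, p_b, z = conversion_rate_z_test(conv_a=150, views_a=5000,
                                     conv_b=195, views_b=5000)
print(f"A: {p_a:.1%}, B: {p_b:.1%}, z = {z:.2f}")
# A rough rule of thumb: |z| > 1.96 means the difference is
# significant at roughly the 95% confidence level.
```

In this made-up example (3.0% vs. 3.9% conversion), z comes out around 2.5, so the lift would be unlikely to be pure chance. If z lands between -1.96 and 1.96, treat the test as inconclusive rather than declaring a winner.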
Now that you know how to A/B test your Amazon product listings, check out these tips which will help you increase the accuracy of your tests.
Most people go into A/B testing with plenty of assumptions. Say seller A spent weeks researching and reading up on how to title Amazon products, and he crafted his product title based on all the best practices he found online. Does this mean he can skip testing his product title, and focus on the other elements of his listing? Nope.
With A/B testing, it’s all about letting the numbers speak for themselves. So cast aside your assumptions and test everything, including your product title, main image, additional product images, image order, bullet points, product description, and especially your price.
We find that many Amazon sellers assume they’ll lose all their customers the second they increase their prices. While this may be true for those in highly competitive industries (or those selling homogeneous and non-differentiated products, for that matter), it certainly isn’t the case for all sellers.
Some rookie marketers will A/B test their website (or ads) to the point where they get 100 eyeballs on each variant, and then call it a day. Obviously, this isn’t a large enough sample size, and such a small sample is likely to compromise the validity of your results.
This raises the question… how large should your sample size be? While this isn’t a hard and fast rule, we generally recommend that you stick to a minimum sample size of 5,000 impressions per variant. While it can be tempting to stop an A/B test early when it looks like one variant is clearly winning, you should always wait until you’ve reached your predetermined sample size before concluding the test. It’s always possible that the variant which performed poorly earlier on will ultimately emerge as the winner.
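The 5,000-impression figure is a floor, not a guarantee: the smaller the lift you want to detect, the more impressions you need. A minimal sketch of the standard sample-size formula for comparing two conversion rates (the 3% baseline and 20% relative lift below are hypothetical inputs, not recommendations):

```python
import math

def required_sample_size(baseline_rate, min_detectable_lift,
                         z_alpha=1.96, z_beta=0.84):
    """Rough per-variant sample size for a two-proportion test,
    using the standard z-values for ~95% confidence and ~80% power."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + min_detectable_lift)
    # Combined variance of the two conversion rates
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = ((z_alpha + z_beta) ** 2) * variance / (p2 - p1) ** 2
    return math.ceil(n)

# Hypothetical: 3% baseline conversion, detecting a 20% relative lift
print(required_sample_size(0.03, 0.20))
```

For these inputs the formula asks for roughly 14,000 impressions per variant – well above the 5,000 rule of thumb – which is exactly why subtle changes take longer to test than dramatic ones.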
To maximize your upside, test your best-selling listings. Don’t waste your time on newly launched products that aren’t getting any traffic yet.
What’s the rationale behind this? Well, your bestsellers get more views and sales, which translates into more data for your tests. You might be able to conclude an A/B test for a best-selling item in just a week, but if you’re testing a new listing that doesn’t get many views, you might be stuck testing for a few months.
A/B testing involves changing only one element at a time – no ifs, ands, or buts.
You’d think this is obvious, but some marketers who are a little too impatient end up changing multiple elements at one time. Why is this problematic? If Variant B of your product listing has a different price, title, and feature image from Variant A, and it’s experiencing a higher conversion rate, you wouldn’t be able to pinpoint what’s causing that increase.
Maybe the change in title increased the conversion rate by 5%, and the change in price increased it by 10%, leading to a total increase of 15%. Maybe the change in title reduced the conversion rate by 2%, and the change in feature image increased it by 17%, leading to the same net increase of 15%. You wouldn’t have a clue.
This one’s pretty straightforward – make sure there aren’t any external factors (such as promotions, sales, and newly launched coupon codes) influencing your A/B tests.
Here’s an example: say you’re A/B testing your product listings in the month of November, leading up to Black Friday. Obviously, on the week of Black Friday, your conversion rates and revenue will be artificially inflated. This skews your numbers and makes your results less accurate.
Want to nail your Amazon PPC ads, and have your revenue shoot through the roof? You’ve got to start with a strong foundation – and that’s a high-performing product listing that will compel your viewers to convert as paying customers. What are you waiting for? Start A/B testing your listings today! You can use PPC Entourage’s A/B Split Test tracking to record results and compare data with ease.