
Fail Your Way to Success: 7 Tips for Running A/B Tests



A/B tests are a big reason for Wix’s success. Every week, we open about a dozen new tests. We usually show half our audience the current version (“A”) so they can act as the control group. We show the other group our new version (“B”).
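

For the curious, here's a minimal sketch of how a 50/50 split like that can be assigned in code. It's an illustration only, not Wix's actual system; the function and test names are hypothetical.

import hashlib

def assign_variant(user_id: str, test_name: str) -> str:
    """Deterministically bucket a user into 'A' (control) or 'B' (new version).

    Hashing the user id together with the test name keeps each user's
    assignment stable across visits while keeping different tests independent.
    """
    digest = hashlib.sha256(f"{test_name}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# Example: roughly half of all users land in each group.
print(assign_variant("user-12345", "new-checkout-button"))  # 'A' or 'B'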


I was a product manager when I started running A/B tests about a decade ago. I often resented the time and effort involved in testing my proposed changes. I figured it was a waste of time to test when my change so obviously made the product better. But I learned, repeatedly, that what I expected to help the product often hurt it. A/B tests showed me which changes to keep and which to kill.


Working here at Wix, I’ve gained some valuable insight into how to run effective A/B tests. These are our top tips for using A/B tests to make your site more awesome.


01. Don’t trust your common sense 


I love being around people who frequently test their assumptions; they’re the most humble people I know because they know that they’re often wrong. But they’re also often the most successful, because they know how to repeatedly try and fail – and only keep their successes. By repeatedly testing, I keep learning new ways that I can be wrong.


If you created a website, it was probably built based on your best guesses of what would work. Think back on those guesses and assumptions. Each one represents a new opportunity to make the site even better by testing other options.


02. Don’t lie with numbers


Numbers don’t lie, but they don’t always tell the whole truth. Torture them enough and they’ll say what you want. We don’t mean to distort the numbers, but if we’re not very careful and scrupulously honest, there are ways we analysts can put our fingers on the scale and nudge the numbers until they match our expectations.


We can stop tests as soon as we’re winning. If we’re losing, we can make slight (and probably irrelevant) changes and try again. We can throw out data that doesn’t look right, unless it’s in our favor. We find and correct for anomalies that hurt our case, but not the ones that help it. But this isn’t true to our profession. As analysts, it’s our job to tell the truth – the whole truth.


In fact, my favorite part about being an in-house analyst is that I’m being paid to tell my bosses the truth. Admittedly, it’s not always as easy or fun as using numbers to make a good story sound like Proven Scientific Facts, but at the end of the day it brings with it an incredible power – and responsibility.


It’s easy to convince yourself and even easier to convince others that the side you wanted to win did. So be careful, and be honest. Listen to what the numbers are saying, not just what you want them to say.
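

To see why stopping a test as soon as you're winning is a trap, here's a rough simulation, offered as an illustrative sketch rather than production tooling: A and B are identical, yet peeking at a significance test every day and stopping at the first "win" declares false winners far more often than the nominal 5% error rate would suggest.

import random
import math

def peeking_false_positive_rate(n_tests=1000, days=14, visitors_per_day=200,
                                true_rate=0.05, z_crit=1.96):
    """Simulate tests where A and B are identical, checking a two-sided
    z-test after every day of data and stopping at the first 'significant'
    result. Returns the fraction of tests that falsely declared a winner."""
    false_wins = 0
    for _ in range(n_tests):
        conv = {"A": 0, "B": 0}
        n = {"A": 0, "B": 0}
        for _ in range(days):
            for arm in ("A", "B"):
                n[arm] += visitors_per_day
                conv[arm] += sum(random.random() < true_rate
                                 for _ in range(visitors_per_day))
            p_a, p_b = conv["A"] / n["A"], conv["B"] / n["B"]
            p_pool = (conv["A"] + conv["B"]) / (n["A"] + n["B"])
            se = math.sqrt(p_pool * (1 - p_pool) * (1 / n["A"] + 1 / n["B"]))
            if se > 0 and abs(p_b - p_a) / se > z_crit:
                false_wins += 1
                break  # stop as soon as we're "winning"
    return false_wins / n_tests

# Even though A and B are identical, daily peeking "finds" a winner
# well above the nominal 5% of the time.
print(peeking_false_positive_rate())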


03. Think in probabilities, not binaries

 

“Statistical significance means you’re sure the results of your test were reliable.” I read that on the Internet, so it must be true. But it’s not.


A/B tests exist in the world of probability, not in the binary world of yes and no. Attempts to translate probability into binary often provide false clarity. “Is it statistically significant?” and “Did ‘B’ win?” These are questions you’ll get asked by managers who want yes or no answers to probabilistic questions.


Ideally, answer with statements like, “There’s a 90% chance that B was between 1% worse and 5% better.” Or (describing the same results) “B was 2% better, with a margin of error of 3% at the 90% confidence level.” Neither “B won” nor “B didn’t win” quite captures what happened.


If you have to give a yes or no answer, apply context and common sense to the numbers. In the above case, if B was a minor change, declare it the winner and move on. But if B being worse could have disastrous consequences, get more data before declaring a winner.
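

If it helps, here's a rough sketch of how that kind of statement can be computed, assuming simple conversion counts and a normal approximation for the absolute difference in conversion rates. The numbers below are made up.

import math

def diff_confidence_interval(conv_a, n_a, conv_b, n_b, z=1.645):
    """90% confidence interval (z = 1.645) for the difference in
    conversion rate between B and A, using a normal approximation."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    diff = p_b - p_a
    return diff - z * se, diff + z * se

# Hypothetical counts: report the range, not just "B won".
low, high = diff_confidence_interval(conv_a=500, n_a=10_000,
                                     conv_b=530, n_b=10_000)
print(f"90% CI for B minus A: {low:+.2%} to {high:+.2%}")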


04. Test where the potential impact is biggest


Find the places where changes could make the biggest difference for your business.  The purchase page is often a good place to start because improvements there often go straight to the bottom line. Look for pages with high traffic, and where changes will most directly increase revenue or decrease costs.
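

As a back-of-the-envelope illustration (the pages and numbers below are hypothetical), you can rank candidate pages by traffic, value per visit, and a plausible lift:

# Hypothetical page stats: rank pages by how much a modest lift could be worth.
pages = [
    {"page": "/purchase", "monthly_visits": 40_000, "revenue_per_visit": 2.50},
    {"page": "/pricing",  "monthly_visits": 120_000, "revenue_per_visit": 0.60},
    {"page": "/blog",     "monthly_visits": 300_000, "revenue_per_visit": 0.02},
]

assumed_lift = 0.03  # assume a change might plausibly move the metric by 3%

for p in sorted(pages, key=lambda page: page["monthly_visits"] * page["revenue_per_visit"],
                reverse=True):
    potential = p["monthly_visits"] * p["revenue_per_visit"] * assumed_lift
    print(f'{p["page"]:<12} potential monthly impact: ${potential:,.0f}')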


05. Consider alternatives to A/B tests 


A/B testing is great for things that are easily measured and in areas where you have enough traffic, but for some tasks there are better tools. Want insights into how users experience different versions? Sit with actual users as they use your product. Want to make sure a low-risk change didn’t break anything? Make the change and monitor the numbers; you can roll back and re-release with a test later if you need to. Some changes don’t justify the effort spent on a test. Your colleagues have plenty of other important tasks they could be doing instead.


Did you decide you don’t need to do this test? Great. Just bookmark this page so you can come back here when you do need it.


06. Define the test clearly 


Write down what you’re testing, what you expect to gain, and how you’ll know if it worked. Imagine you’re at the end of the test, reporting on the result. Are you going to be able to get the numbers you need? More importantly, make sure they’re likely to justify having run the test.
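

One lightweight way to do this (just an illustration, not a required format) is to write the definition down as a structured record before the test starts; the fields and values here are hypothetical.

# A minimal, hypothetical test-definition record, written before the test starts.
test_definition = {
    "name": "checkout-button-color",
    "hypothesis": "A higher-contrast button will increase completed purchases.",
    "primary_metric": "purchase_conversion_rate",
    "expected_lift": "at least +1% relative",
    "decision_rule": "Ship B if the 90% CI for the lift excludes zero; "
                     "otherwise keep A.",
    "start_date": "2024-03-01",
    "planned_duration_days": 14,
}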


07. Store, study and share the results 


You already wrote down what you were testing and why, right? Keep that in a spreadsheet or database. Write the test results there too.  Periodically review the differences between your expectations and your results.


Learn from your wins. Learn from your losses.


Do meta-analyses on your tests. Look for patterns where results might differ from expectations. We once found that the first day of our tests was noisier than other days. I like seeing how often the winner of a test’s first week predicts the winner of its second week. It helps me understand how much of what I’m looking at is signal and how much is noise.
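

A sketch of that first-week-versus-second-week check might look like this; the test names and lifts below are made up for illustration.

# Hypothetical past test results: did the week-1 winner also win week 2?
past_tests = [
    {"test": "homepage-hero",  "week1_lift": +0.8, "week2_lift": +0.5},
    {"test": "pricing-layout", "week1_lift": -0.4, "week2_lift": +0.3},
    {"test": "signup-copy",    "week1_lift": +1.2, "week2_lift": +0.9},
]

agreements = sum(
    1 for t in past_tests
    if (t["week1_lift"] > 0) == (t["week2_lift"] > 0)
)
print(f"Week-1 winner matched week-2 winner in "
      f"{agreements}/{len(past_tests)} tests")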


Share your results with other analysts and with the product managers who request these tests.


Bonus tip: Don’t oversell the results


Don’t suddenly announce that you proved that B is better (or worse) than A by 10% on some metric. Passion is good, but you’re an analyst. You’re being trusted to provide reliable, accurate information. The people you report to may spread that information to others and then need to issue embarrassing corrections. There are margins of error, regression toward the mean, statistical anomalies and human error. Even when you’re excited — especially when you’re excited — speak carefully when sharing your results.


Posted by Gil Reich, Analyst


