10 Mistakes to avoid while A/B testing

This is one of those lessons where you get bombarded with bite-size tips. This time it's not about tips or best practices: since there are several very common mistakes made in testing, I decided to focus on what NOT to do.

1) Don’t test insignificant things

Testing insignificant things is a huge waste of time.

If you look online, you'll probably find people testing button colors on websites and in email campaigns, declaring a win for green over red. While you can optimize based on those kinds of tests, the results are simply obvious.

Instead of wasting time running these tests, just make an assumption, make the change, and save the testing for more significant stuff.

2) Never test multiple changes at once

When you run A/B tests, you need to make sure you are changing only one thing between variants.

Let's say you're testing copy on a promotion: your control is "20% Discount" and you write "SAVE 20%" as your treatment.

Now you don't know what caused the difference: the change in wording, or the fact that your treatment is in all caps.

This has to be done either in separate A/B tests or in a multivariate test that includes all the variants.

3) Premature results

Calling a winner before the test is over can lead to horrible results. I’ll explain why:

Let's say you send the campaign on a Tuesday at 10am and your test ends on Thursday at 5pm. You check the progress and see that, as of Thursday at 11am, one of the variants is winning by 79%. But wait a second! You don't know what's going to happen on Thursday afternoon. Days of the week are different, times vary, and human behavior changes from one day to the next.

You can't call a winner before at least 95% of the test has completed, and I'd encourage you to let it run all the way to 100%.
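Even once the test has run its full course, it's worth checking that the gap between variants is bigger than random noise before declaring a winner. A minimal sketch, assuming a standard two-sided two-proportion z-test (the lesson itself doesn't prescribe a method) and made-up numbers:

```python
from statistics import NormalDist

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """p-value of a two-sided z-test on the difference between
    two conversion rates (e.g. opens or clicks per recipient)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_a - p_b) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical final numbers: 210/1000 opens vs. 250/1000 opens.
p = two_proportion_p_value(210, 1000, 250, 1000)
print(f"p-value: {p:.3f}")  # below 0.05 here, so the gap looks real
```

If the p-value came back above 0.05, the honest call would be "no winner yet," not "the variant that happened to be ahead."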

4) Not enough data

One of the worst things you can do in email testing, or any testing for that matter, is trusting results from a sample of data that is too small.

Testing on a sample of 100 people will result in unreliable data, and there is nothing worse than trusting wrong data.

Once you send the campaign to the entire list, you run the risk of getting a completely different result.
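How small is too small depends on the size of the lift you're hoping to detect. A back-of-the-envelope sketch, assuming the standard sample-size formula for a two-proportion z-test (the open rates below are hypothetical):

```python
import math
from statistics import NormalDist

def sample_size_per_variant(p_baseline, p_expected, alpha=0.05, power=0.80):
    """Minimum recipients per variant to detect a lift from p_baseline
    to p_expected at significance level alpha with the given power."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    p_bar = (p_baseline + p_expected) / 2
    num = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
           + z_beta * math.sqrt(p_baseline * (1 - p_baseline)
                                + p_expected * (1 - p_expected))) ** 2
    return math.ceil(num / (p_expected - p_baseline) ** 2)

# Hypothetical: detecting a lift from a 20% to a 24% open rate
# takes roughly 1,700 recipients per variant -- far more than 100.
print(sample_size_per_variant(0.20, 0.24))
```

The takeaway matches the lesson: a 100-person sample is nowhere near enough to detect realistic differences; only dramatic effects show up reliably at that size.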

5) Before and after

The point of A/B testing is to test two different variants (control and treatment) under the same conditions. In other words, nothing other than the one item being tested varies.

When you run a before-and-after test, meaning that you run variant A, then make a change and run variant B, you are ignoring a huge factor: timing. It's not uncommon in marketing to see open or click rates differ from one day to another.

Always run tests at the same time.

6) Underestimating results

Not all tests end in a dramatic difference like 70% – 30%. The only cases where you'll see this are when things are being done horribly and any change you make is an improvement.

But when you’re doing things right and you increase your performance by 2.1%, that small win can make a huge difference.

I've seen people ignore results because the difference was not as big as expected. Never underestimate results.

7) No hypothesis

We talked about the hypothesis in the previous lesson. Testing just to see what happens is a terrible mistake that will result in not learning anything.

You have assumptions about why things are happening the way they are, and you have to prove or eliminate them. The hypothesis is the beginning of your test, but it's also what determines what you learn at the end.

8) Not learning anything

Like I said, most marketers will tell you that the goal of testing is to improve performance.

The goal of testing is learning. Knowing that the treatment beat the control by 12% is nice, but do you know why?

Data is great, but at some point you need to make sense of it and translate it into human language that you and your team can understand and act on.

What did you learn from the test? Was your hypothesis correct?

9) Not taking past learnings into consideration

The more you learn from your testing, the better you can predict future results.

Every single test you run teaches you how your list reacts to different things. If you are not learning from each test and using those past learnings to design better campaigns, you are pretty much not making any improvements to the future of your marketing.

Take notes, interpret the data, and organize it.

10) Don’t assume results are permanent

Nothing is permanent. What worked for you last Tuesday might not work the same on Wednesday in two weeks.

I met a person who once ran a test to see whether a negative tone got more opens than a positive one, and it did. The problem is that from that moment on, he wrote every single subject line that way, and after a while it became exhausting for his subscribers.

Likewise, if you use first-name personalization in every single subject line you send, it will lose its effect pretty soon.

Everything changes. Never assume things are permanent and never stop testing.