How to plan and execute a split test

We said earlier that this process is an experiment, and for an experiment to succeed, it needs to be carefully planned and executed.

Go get your mad scientist hat (do scientists wear hats?) and get ready to turn your desk into a mini lab.

In this lesson, we’ll work through an example to make it easier to digest. Let’s say you are going to test subject lines.

2 things before we get started:

  • The lesson is the actual homework: implement these steps on your next campaign (and don’t look for homework at the end of the lesson)
  • This is how we at SM plan and run tests, so I decided to share this with you rather than giving you the “best practices”

Alright? Let’s do this!

1) What’s the goal?

You’ve probably noticed that I start everything by establishing a clear goal. It helps you and your team stay focused. It’s very easy to get sidetracked with this kind of work and end up achieving something completely different.

The goal is simple: We need to get more opens.

2) Current situation

Start by gathering some quick data to get a snapshot of the current situation.

In this example, we’ll collect the open rate of our last 10 campaigns to get an average.

Current situation: 7% open rate in the last 10 campaigns.
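If you like to see the arithmetic spelled out, here’s a tiny sketch of how that baseline average is computed. The per-campaign numbers are made up for the example; in practice you’d pull them from your ESP’s reports.

```python
# Hypothetical opens and delivered counts for the last 10 campaigns
opens = [70, 65, 80, 72, 68, 61, 75, 66, 73, 70]
delivered = [1000] * 10

# Baseline open rate = total opens / total emails delivered
baseline = sum(opens) / sum(delivered)
print(f"Baseline open rate: {baseline:.1%}")  # → Baseline open rate: 7.0%
```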

3) Hypothesis

Easy tiger… Let’s keep things simple. A hypothesis is nothing more than an explanation of why you believe this is happening.

We all have assumptions about things. Why do you think people are not opening your emails?

Hypothesis: I think our subject lines are too impersonal.

4) Pick your test

Now that you know exactly what you’re aiming for, determine what kind of test you need to run.

To continue with our example…

Test: We’ll run an A/B Test to increase opens.

5) Set your metrics

How are you going to measure success? Determine which metrics to track and make sure all is set.

In this case it’s easy…

Metric: Open rate.

6) Set your variants

With a hypothesis at hand and knowing what kind of test you’re planning to run, you can start working on your variants.

Get the original version in place; that will be the “control,” or “Variant A.”

Control: “The word that boosts opens… and kills clicks”

We want the subject line to feel more personal, like we are talking to that specific subscriber in front of the inbox. We’ll test using a “You” approach in the copy:

Treatment: “This word will boost your opens… but kill your clicks”

7) Run your test

It’s time to set up your test. We’ll cover this in more detail in a separate lesson; for now, let’s just go with the basics.

Head over to your ESP and set up all the logistics: the type of test, the segments of the list, the metric, how long it runs, etc.
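Under the hood, the “segments of the list” part is just a random split. Your ESP does this for you, but here’s a rough sketch of the idea, with hypothetical subscriber addresses and segment sizes:

```python
import random

# Hypothetical list of 1,000 subscribers
subscribers = [f"subscriber{i}@example.com" for i in range(1000)]

random.seed(42)            # fixed seed so the example is reproducible
random.shuffle(subscribers)

# e.g. 10% of the list gets each variant; the rest waits for the winner
test_size = 100
control_group = subscribers[:test_size]
treatment_group = subscribers[test_size:2 * test_size]
remainder = subscribers[2 * test_size:]

print(len(control_group), len(treatment_group), len(remainder))  # 100 100 800
```

The shuffle is what matters: if the groups aren’t random (say, control gets your oldest subscribers), the test measures the groups, not the subject lines.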

8) Determine a winner

The winning campaign is determined by the metric you set.

  • Control: 27% open rate
  • Treatment: 43% open rate

The treatment outperformed the control. Yay!

The campaign is then sent to the rest of the list using the treatment version of your test.
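A 16-point gap like this one is hard to argue with, but with smaller gaps it’s worth checking that the difference isn’t just noise. This isn’t part of the lesson’s process; it’s a common extra sanity check (a two-proportion z-test), sketched here with hypothetical counts that match the example’s 27% vs. 43% open rates:

```python
from math import sqrt, erf

# Hypothetical raw counts behind the example's open rates
control_opens, control_sent = 27, 100
treatment_opens, treatment_sent = 43, 100

p_control = control_opens / control_sent
p_treatment = treatment_opens / treatment_sent

# Pooled open rate and z statistic for the difference in proportions
p_pool = (control_opens + treatment_opens) / (control_sent + treatment_sent)
se = sqrt(p_pool * (1 - p_pool) * (1 / control_sent + 1 / treatment_sent))
z = (p_treatment - p_control) / se

# Two-sided p-value from the normal CDF; below ~0.05 means the gap
# is unlikely to be random noise
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

winner = "treatment" if p_treatment > p_control else "control"
print(winner, round(z, 2), round(p_value, 4))
```

Most ESPs run a check like this for you behind the scenes; the point is just to know that “winner” means more than “slightly bigger number.”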

9) Final results

The results once you send to the entire list might still vary. Just because your test gave you a 43% open rate doesn’t mean you will hit that exact number.

Watch the final results and keep track of them as you continue to run tests.

10) What did you learn?

Nobody does this, and it’s a huge mistake, because the data doesn’t really tell you why variant B won the test; it just tells you that it did.

Do you think an NBA coach just looks at the final score and goes home happy? Nope, they analyze the game: the strategy they used in the 3rd quarter when they completely stopped the opponent’s offense, the substitutions, e-v-e-r-y-t-h-i-n-g.

They need to know WHY they won.

Based on your goal, your hypothesis, your test, and its results, try to understand what happened. Why did B win? Was your assumption correct?

Acknowledge this, make notes, share it with your team. Don’t just high five.