
For better newsletter testing insights, start with a strong hypothesis

You know a newsletter isn’t a “set it and forget it” product. To succeed, and to have more impact, you need to test to see what’s working.

What is your newsletter testing plan? How often? What’s the last thing you tested? What did you learn?

If you aren’t testing, I’m not here to scold you. In fact, I’ll tell you that you are not alone. There are plenty of publications claiming to be “data-driven” that aren’t testing, either.

Maybe whatever they are doing works. Maybe it could be working better. They won’t know. And even if it is working today, here’s a rule I believe strongly in: Things work until they don’t.

If you aren’t testing and measuring, you won’t know if something stopped working until it crashes. Newsletter testing and experimenting give you data to make more informed decisions. They also help you adapt to changing conditions.

You deserve to know when your newsletter strategy is working and when it’s not, based on the subscribers’ real behavior. Testing beats “gut instinct” over the long term every single time.

Here are a few straightforward tips to help you get off the ground with newsletter testing.

Don’t copy someone else’s test (especially from a conference)

Conferences are full of presentations on ideas that “did well”. So are industry blogs and case studies and white papers.

So what does it mean when something “did well”? I’ve seen ideas that “did well” because they were the boss’s idea, and the boss decided they “did well” while disregarding a mountain of evidence to the contrary.

Unless the writer shares more details, numbers, and context — we don’t know what “did well” means. That sucks. It’s certainly not actionable.

Rather than copying someone else’s successful test, use it as a starting point for your own experiments. Even if you have a lot in common, your specific goals are different. Your audience views your product differently than their audience views theirs.

This is also true for Best Practices. Be careful. Don’t get too caught up in what industry influencers say. You rarely get to see all the things that didn’t work before the one successful idea that gets a write-up.

Just like unrealistic lifestyles on Instagram, consistently seeing that 99% of ideas “did well” can screw up a person’s expectations and perception of reality.

I get that no one wants to brag about what didn’t work. But when you design an effective newsletter testing plan and get better at experiments, a lot of ideas aren’t going to work. That’s not failure, it’s learning. If you’re right all the time, you’re either a genius (perhaps!) or you aren’t being rigorous enough in your process (which happens all the time).

The key to newsletter testing: a useful, specific hypothesis

The problem with a lot of testing is it’s used as an either/or option. We test Idea A versus Idea B. Which one “did well”? But that’s far too limiting. It’s entirely possible that neither idea moves the needle in a significant way.

Or maybe the “winning” idea isn’t cost-effective or scalable in any way.

[Image: Beaker from the Muppets putting a banana into some sort of scientific funnel. Caption: “We’re gonna need more bananas.”]

It’s human nature to believe we are right. All the time. In reality, that’s not true. We are wrong more often than we are right.

Writing a strong hypothesis helps us avoid the embarrassment of being wrong. So a key part of your newsletter testing process is writing a useful, specific hypothesis.

What makes a solid hypothesis?

  1. A hypothesis is falsifiable. Can it be disproven?
  2. A measurable hypothesis is specific. What will change, by how much, and over what time period?
  3. A deliberate hypothesis is written down. If you don’t write it down, someone will try to hijack the results to fit another idea.

“Making our newsletter more voice-y will drive more engagement”
vs.
“Adding an intro paragraph to the body of the email, written by a staffer, will increase our clicks (measured by click-to-open rate, or CTOR) to 3% from 1%”

The first one isn’t defined enough to measure. It’s way too easy to pick and choose whatever measurement looks best, which means outside of total catastrophe, the result will always be “it worked”.

See the contrast with the second hypothesis? There’s a clear measurement. If your click-to-open rate stays at 1% at the end of the experiment, it doesn’t support the hypothesis.

That’s okay. It doesn’t mean you have to ditch the intro paragraph. What it means is that using the intro paragraph is unlikely to drive CTOR improvements.
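To make the measurement part concrete, here’s a minimal sketch of checking a CTOR result against that 3% target. The function name and the numbers are mine, purely for illustration, not from any particular email platform.

```python
def click_to_open_rate(unique_clicks: int, unique_opens: int) -> float:
    """Click-to-open rate (CTOR): unique clicks divided by unique opens."""
    if unique_opens == 0:
        return 0.0
    return unique_clicks / unique_opens

# Hypothetical numbers from a two-week test of the staff-written intro
ctor = click_to_open_rate(unique_clicks=240, unique_opens=12_000)

BASELINE = 0.01  # where we started: 1%
TARGET = 0.03    # what the hypothesis predicts: 3%

print(f"CTOR: {ctor:.1%}")
print("Result supports the hypothesis" if ctor >= TARGET else "Result does not support the hypothesis")
```

With these made-up numbers the CTOR lands at 2%, which is better than the baseline but still doesn’t support the hypothesis as written.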

Here’s another example:

“Redesigning our newsletter will get more readership on our site”
vs.
“This specific newsletter layout will lead to a 15% increase in clicks to our original stories on the website”

I’ve seen the first version more times than I can count. As in our first example, leaving out definitions means that pretty much anything can be held up as “did well”. Maybe page views were already going up because of a controversial story, bringing more “readership” independent of the layout change.

Without focus, your hypothesis isn’t going to be actionable.

The second example measures the performance of one metric and ties it to one specific layout. If the results don’t support the hypothesis, the layout still needs work. If they far exceed the hypothesis, ask for a raise. 🙂

A straightforward template I like for writing hypotheses

  1. Note what’s happening currently.
  2. Discuss and record possible reasons why this is happening.
  3. Suggest how you’re going to improve or fix the situation.
  4. Choose measurements that will show you whether it has worked.

Write down each step, but the format doesn’t have to be formal or fancy.
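For example, a hypothesis written down with this template could be as simple as the sketch below. Every detail in it is made up to show the shape, not a prescribed format; a few sentences in a doc work just as well.

```python
# One hypothesis, recorded using the four-step template (all values are hypothetical)
hypothesis = {
    "whats_happening": "Click-to-open rate has been stuck around 1% for the last quarter.",
    "possible_reasons": [
        "The top of the email is a wall of links with no context or voice",
        "Readers skim the first headline and bail",
    ],
    "proposed_fix": "Add a short intro paragraph to the body of the email, written by a staffer.",
    "how_we_will_know": "CTOR rises from 1% to 3% over a two-week test window.",
}
```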

Only test one thing at a time

A little discipline pays off. Let’s say you redesigned your whole newsletter. Now you are ready to test the new layout, but you send one version first thing in the morning and the second version at 6 pm.

Now you have two variables: the layout and the time of day. Introducing extra variables is going to make it significantly harder to gather insights you can use.

The worst-case scenario is when managers don’t have the patience for gathering data. They end up going with the “throw spaghetti at the wall and see what sticks” method. That never works, and someone has to clean all that shit up. Resist the urge to fall into that trap.
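If it helps, here’s a minimal sketch of keeping it to one variable: split subscribers randomly into two groups and send both versions at the same time, so the layout is the only difference. The send calls at the bottom are hypothetical placeholders, not a real email platform’s API.

```python
import random

def split_audience(subscribers: list[str], seed: int = 42) -> tuple[list[str], list[str]]:
    """Randomly assign subscribers to two equal-sized groups for an A/B test."""
    rng = random.Random(seed)          # fixed seed so the split is reproducible
    shuffled = subscribers[:]          # copy so the original list isn't mutated
    rng.shuffle(shuffled)
    midpoint = len(shuffled) // 2
    return shuffled[:midpoint], shuffled[midpoint:]

group_a, group_b = split_audience(
    ["a@example.com", "b@example.com", "c@example.com", "d@example.com"]
)

# Both groups get their version at the SAME time, so layout is the only variable
SEND_TIME = "07:00"
# send(old_layout, to=group_a, at=SEND_TIME)   # hypothetical send calls, not a real API
# send(new_layout, to=group_b, at=SEND_TIME)
```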

Once the test is running, step away and leave it alone

Another common misstep is stopping the test early. The test is all set up, and it’s going to run for two weeks. Everything is tight. Strong hypothesis. One variable.

Three days in, there’s a clear winner. The new version is wiping the floor with the old version. It’s 60% better!

“Wow, that was awesome. Let’s just stop it now and change it to the new — “

Hold up — Don’t stop the test! Let it finish. Please!

That goes for making changes, too. Just let it run for the whole time, untouched.

It’s common for one version to jump out ahead early, and by the end of the test, that version isn’t the one that supports the hypothesis. There are an infinite number of external factors you have zero control over that will influence the results.

In most cases, I think it’s best to not even look at the results until the test is finished. Let the whole thing run its course, then compare the results. Looking early may introduce unintended bias into your analysis.
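When the window closes, compare the two versions in one pass. I’m not prescribing a specific statistical test, but one reasonable option is a simple two-proportion z-test on the click counts. Here’s a minimal sketch using only Python’s standard library, with made-up end-of-test totals:

```python
from math import erf, sqrt

def two_proportion_z_test(clicks_a: int, sends_a: int, clicks_b: int, sends_b: int) -> tuple[float, float]:
    """Return (z statistic, two-sided p-value) for the difference in click rates."""
    p_a, p_b = clicks_a / sends_a, clicks_b / sends_b
    pooled = (clicks_a + clicks_b) / (sends_a + sends_b)
    se = sqrt(pooled * (1 - pooled) * (1 / sends_a + 1 / sends_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided normal tail probability
    return z, p_value

# Hypothetical end-of-test totals: old layout (A) vs. new layout (B)
z, p = two_proportion_z_test(clicks_a=180, sends_a=9_000, clicks_b=240, sends_b=9_000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

A small p-value at the end of the full test window gives you far more confidence than a big swing three days in.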

Test early. Test often. Write them all down.

It’s a damn good feeling when your test results support your hypothesis. As I mentioned earlier, you’re going to be disproven a lot. Celebrate your wins!

However, be careful not to take one test in isolation as gospel, or indicative of more than it is. Keep testing. Who’s to say that your 10% improvement couldn’t be 30%?

That leads me to one bonus tip: Keep a log of all your tests, including dates. Spreadsheets work great for this. Make the log visible to the entire team. The more ideas you test, the more chances you have of finding game-changing innovations.
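The log can be as simple as a shared spreadsheet, or even a CSV file a script appends to. Here’s a minimal sketch of the latter; the file name, columns, and example row are just my suggestion, not a required format.

```python
import csv
from pathlib import Path

LOG_PATH = Path("newsletter_test_log.csv")  # hypothetical location; a shared sheet works just as well
FIELDS = ["start_date", "end_date", "hypothesis", "variable_tested", "result", "notes"]

def log_test(row: dict) -> None:
    """Append one test to the log, writing the header if the file is new."""
    is_new = not LOG_PATH.exists()
    with LOG_PATH.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if is_new:
            writer.writeheader()
        writer.writerow(row)

log_test({
    "start_date": "2024-03-01",
    "end_date": "2024-03-15",
    "hypothesis": "Staff-written intro raises CTOR from 1% to 3%",
    "variable_tested": "intro paragraph",
    "result": "CTOR 2.0%; did not meet target",
    "notes": "Readers clicked the intro link most; retest with a stronger call to action",
})
```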

