We constantly seek to optimize the effectiveness of our email, push, and in-app campaigns. For me, as for many others, A/B testing is the go-to optimization method, but it has one problem.

Consider instances like a product launch, a show premiere, or a flash sale. All of these are events that happen at a specific point in time. Let’s say a new show premieres on Netflix this Friday at 9 p.m. Logically, I’d want to send the campaigns around that time to capture maximum attention and give the show traction from the very start. Since I have one shot at this, I’d apply all the learnings and best practices from previous A/B tests to give the campaign the best chance of success. The campaign is ready; let’s click send.

*Figure: Basic engagement strategy for the show premiere*

*The campaign does poorly*

The feeling when you open the report and see bad results just sucks. It has happened to me many times. Every time it would frustrate me, and on top of that I’d get questioned by clients, which is always stressful.

So why did the campaign do poorly? Because previous A/B test learnings don’t always carry over: maybe a huge show premiered on Disney+ at the same time, it was a holiday, or the weather was simply nice and everyone was outside. External factors like these introduce inconsistency into campaign results, even when you repeat a strategy that was highly successful before.

The solution

The solution I came up with has nothing to do with aviation, but I called it pilot testing. The name comes from the term “pilot program”: a small-scale, short-term experiment designed to understand how a large-scale project might function in practice.

In essence, pilot A/B testing involves targeting a small portion of the audience with a preliminary A/B test campaign a few hours before the main campaign launch. This allows us to test crucial elements such as subject lines, images, and call-to-action (CTA) copy in an environment that is almost identical to the main send. 

Operationally, I’d execute this as two separate campaigns: the first is a pilot A/B test sent to 5-10% of the total audience, and the second sends only the winning variant to the rest of the users.
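To make the mechanics concrete, here’s a minimal Python sketch of that split, assuming a deterministic hash of the user ID decides who falls into the pilot. The helper names (in_pilot, split_audience) and the 10% share are illustrative assumptions, not features of any particular platform:

```python
import hashlib

PILOT_SHARE = 0.10  # 5-10% of the total audience goes to the pilot

def in_pilot(user_id: str, share: float = PILOT_SHARE) -> bool:
    """Hash the user ID into [0, 1) and compare it against the pilot share."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return int(digest[:8], 16) / 0x100000000 < share

def split_audience(user_ids: list[str]) -> tuple[list[str], list[str]]:
    """Return (pilot, main): pilot gets the early A/B test, main gets the winner."""
    pilot = [u for u in user_ids if in_pilot(u)]
    main = [u for u in user_ids if not in_pilot(u)]
    return pilot, main

pilot_users, main_users = split_audience([f"user-{i}" for i in range(10_000)])
print(len(pilot_users), len(main_users))  # roughly 1,000 vs 9,000
```

The hash-based split matters more than it looks: assignment stays stable across reruns, so nobody who received the pilot gets accidentally re-targeted by the main send.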

*Figure: Engagement strategy for the show premiere with a pilot A/B test*

Benefits and drawbacks

Incorporating pilot A/B testing into my strategy improved two things:

  1. Improved campaign consistency: I no longer experience instances of unexpectedly poor campaign performance. This not only benefits the business but also provides peace of mind to me and the team.

  2. Improved campaign results: Campaigns that would have performed well anyway started to perform even better, because the main send incorporated a recently tested winning variant, often under nearly identical conditions.

It’s also important to acknowledge some drawbacks of this approach:

  1. Increased time investment: Conducting a pilot A/B test requires creating and analyzing an additional campaign. Furthermore, the main campaign needs to be updated on short notice, as it typically follows shortly after the pilot.

  2. A potential lack of statistical significance: Since the pilot is usually sent to only 5-10% of the audience, the sample size may be too small to detect meaningful differences between variants. Therefore, I recommend employing this tactic only with a sizable audience, and running a quick significance check on the pilot results before committing to a winner (see the sketch after this list).
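For that significance check, a simple two-proportion z-test on the pilot’s click (or conversion) counts is usually enough. Here’s a minimal Python sketch; the function name and all the numbers are made-up examples, not output from any real campaign:

```python
from math import sqrt
from statistics import NormalDist

def z_test(clicks_a: int, sends_a: int, clicks_b: int, sends_b: int) -> float:
    """Two-sided p-value for the difference in click rates between two variants."""
    p_a, p_b = clicks_a / sends_a, clicks_b / sends_b
    pooled = (clicks_a + clicks_b) / (sends_a + sends_b)            # pooled click rate
    se = sqrt(pooled * (1 - pooled) * (1 / sends_a + 1 / sends_b))  # standard error
    z = (p_a - p_b) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical pilot: two subject lines, 2,500 sends each
p_value = z_test(clicks_a=210, sends_a=2500, clicks_b=168, sends_b=2500)
print(f"p-value: {p_value:.3f}")  # about 0.025 here; below 0.05, so trust the winner
```

If the p-value comes out high, the honest move is to treat the pilot as inconclusive and fall back on your existing best practices for the main send.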

It’s also worth noting that some customer engagement platforms offer similar built-in functionalities. For example, the one in Braze is called “Intelligent Selection”.

*Figure: Intelligent Selection, an A/B testing feature in the customer engagement platform Braze*

My problem with these features is that they come with platform-specific limitations. For example, the one in Braze can’t be used on scheduled campaigns, and performance is evaluated only against the primary conversion event.

I have nothing against using these features, but inform yourself about their limitations first.

The conclusion

In conclusion, pilot A/B testing is a tactic I use regularly, and I recommend you give it a try too. It complements regular A/B testing particularly well: for non-time-sensitive campaigns, stick to the standard approach and build on your learnings over time; for critical, time-sensitive campaigns, consider a pilot test. It’s straightforward to execute, and the impact is visible immediately.
