
The Science of Testing: 4 Basic Steps to Review Before Your Campaign Launch

By C&EN Media Group

Reading Time: 5 minutes

In the lab, the importance of the scientific method of experimentation and observation goes without question. It has been the methodology used for research for centuries and is widely seen as the primary (if not the only) way of obtaining reliable knowledge.

Testing in the marketing world should be no different, though marketers often forgo this important step in favor of quickly rolling out a great idea. Like scientists, marketers can test tactics before launching campaigns to see if they have potential to produce the desired results.

Here are four ways to foolproof your testing:

  1. Define the overarching goal of your test.
  2. Drilling down: Be sure the metrics themselves are clearly defined.
  3. Emulate a scientific experiment: verify you have a true control ‘group’.
  4. Test your test: is the right data being captured and reporting in place?

1. Define the overarching goal of your test accurately.

It’s surprising how many times a marketing campaign is rolled out that doesn’t align well with the goals of the marketing plan, and this rings true with A/B testing as well. With every new initiative, start by asking two key questions to make sure you’re on the right path.

a. What am I trying to achieve with this test?
  b. Does it align with the marketing plan, or even with organizational goals? Are you focusing on higher open rates when you need to be cultivating higher-quality leads?

Often marketers want to redesign a lead generation page or try to incorporate new widgets or technology in their funnel, but you simply can’t assume a redesign will work better than the current page. It may look prettier or be more tech-savvy – and even get more page views – but if a refresh inadvertently presents barriers to the consumer submitting their information, then you’ve impeded your own goal.

And the use of new technology or a fancy new technique is usually not the goal – gaining leads is, or gathering additional critical information about your potential customers. So before rolling out something new, be sure to test it against your existing tool and clearly define what you are hoping to achieve with the test. This clarification should then wholly inform the design of the thing being tested (email, landing page, ad, etc.).

2. Drilling down: Be sure the metrics are clearly defined.

It’s great to conduct an A/B test, but if you’re not sure what metrics you’re comparing, it will be difficult to know whether A or B performed better. Which metrics matter, of course, depends on the kind of campaign you are running.

For instance, if your marketing goal is sales and you’re testing two different emails, the open rate is not going to be as important as the click-through and conversion rates. While the open rate is a good metric to look at, as are the unsubscribe and deliverability rates, those don’t tell you which email was most effective at driving conversions.

This doesn’t mean that the open rate is irrelevant. It means that when deciding whether a test was a “winner”, the click-through and conversion rates are the primary metrics. The open rate is still important and should also be noted – emails that are not opened cannot be clicked through.
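
To make this concrete, here is a minimal Python sketch of how those rates are computed from raw campaign counts, and why they can point in different directions. The counts are hypothetical placeholders, not real campaign data:

```python
# A minimal sketch: computing the metrics that decide an email A/B test.
# All counts below are hypothetical, for illustration only.

def email_metrics(sent, opened, clicked, converted):
    """Return the rates that matter, from raw campaign counts."""
    return {
        "open_rate": opened / sent,
        "click_through_rate": clicked / opened,    # clicks among opens
        "conversion_rate": converted / clicked,    # conversions among clicks
        "conversions_per_send": converted / sent,  # the bottom-line metric
    }

variant_a = email_metrics(sent=5000, opened=1100, clicked=220, converted=31)
variant_b = email_metrics(sent=5000, opened=1250, clicked=190, converted=24)

for name, metrics in (("A", variant_a), ("B", variant_b)):
    print(name, {k: round(v, 4) for k, v in metrics.items()})
```

In this made-up example, variant B wins on open rate while variant A delivers more conversions per send – exactly the trap described above.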

And this brings up the third absolutely critical point:

3. Emulate a scientific experiment: do you have a control ‘group’?

This is so crucial and yet many marketers get it wrong. Any scientist will tell you that the control and the test environments must be identical in order for test results to be read with any degree of confidence. Any alteration to either the control or test environments can completely invalidate the test.
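
Here is a minimal sketch of the randomization step that makes the two groups comparable, assuming a simple subscriber list. The subscriber names are hypothetical, and a 50/50 random split is just one common approach:

```python
# A minimal sketch: randomly assigning an audience to control and test groups
# so the only systematic difference between them is the variable under test.
import random

random.seed(42)  # fixed seed so the split is reproducible

audience = [f"subscriber_{i}" for i in range(10_000)]  # hypothetical list
random.shuffle(audience)

midpoint = len(audience) // 2
control_group = audience[:midpoint]  # receives the current CTA text
test_group = audience[midpoint:]     # receives the new CTA text; all else identical

print(len(control_group), len(test_group))
```

Randomizing (rather than, say, splitting by sign-up date or region) is what keeps hidden differences between the groups from masquerading as a test result.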

For example, let’s say you want to test the text in your call-to-action button. If you vary the subject lines of the two different emails you are sending, you won’t know if an increase in clicks is due to your new and improved CTA, or that one of your subject lines generated more opens and therefore, more clicks. Which brings up another critical element of testing:

Test One Thing or Everything

There are two main ways of conducting A/B tests: single variant or multivariate.

As the name implies, single variant testing keeps all factors of both A and B identical except the one thing you are trying to test. This one thing could be anything from the subject line of an email, to the image in an online ad, to the segmentation of a piece of direct mail.

Another way to run an A/B test is to test everything at once without focusing on a specific component. This is often done with landing pages: one page is completely redesigned, and the impact of the redesign as a whole (treated as the single variant) is measured rather than the individual elements.

Multivariate testing is when combinations of multiple elements are tested and measured simultaneously to determine which combination works best. While more complex to execute and measure, it can cut down on testing time and allows multiple elements to be tested together rather than separately. One of the major downsides to this method is the sample size required to draw statistical conclusions about each combination: the more variants you introduce, the more ways the sample pool must be divided, leaving fewer subjects per combination – the sketch below shows how quickly the required totals grow.
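
Here is a minimal sketch using the standard two-proportion sample-size formula, with z-values hardcoded for 95% confidence and 80% power. The baseline and lifted conversion rates are hypothetical assumptions:

```python
# A minimal sketch: estimating the sample size a two-proportion test needs,
# then showing how a multivariate test multiplies that requirement across
# combinations. Baseline and lift figures are hypothetical.

Z_ALPHA = 1.96  # two-sided, alpha = 0.05
Z_BETA = 0.84   # power = 0.80

def n_per_group(p1, p2):
    """Standard two-proportion sample-size formula, per group."""
    pooled_var = p1 * (1 - p1) + p2 * (1 - p2)
    return ((Z_ALPHA + Z_BETA) ** 2 * pooled_var) / (p1 - p2) ** 2

baseline, lifted = 0.04, 0.05  # detecting a 4% -> 5% conversion lift
per_cell = n_per_group(baseline, lifted)

for combos in (2, 4, 8):  # A/B, 2x2 multivariate, 2x2x2 multivariate
    print(f"{combos} combinations: ~{combos * per_cell:,.0f} total subjects needed")
```

With these assumptions, a simple A/B test needs roughly 13,000 subjects in total, while a three-element multivariate test needs over 50,000 – which is why multivariate testing only pays off with high-traffic campaigns.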

4. Be sure that the right data is being captured, reporting is in place, and results are read.

No test is worth doing if it cannot be measured properly, or if results are not analyzed accurately.

If at all possible, test your test before launch. Are all of your personalization fields being filled correctly? Is data being captured accurately on the thing(s) you most want to measure? What is the user experience? If possible, use subjects outside of your department, even outside of your organization, to review the test and provide feedback.

Send test emails and have recipients click on links, fill out forms, and respond to calls to action before launching the campaign. How does the email or page or ad look in various browsers? On mobile? Look at the data collection. Is the correct data showing up in the right fields? Examine any auto-reply emails. Is the copy up to date? Do all of the links work correctly? Is the personalization accurate?
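
Some of this QA pass can be automated. Here is a minimal sketch of one piece of it – confirming that every link in a test email or landing page actually responds – using Python’s standard urllib; the URLs are hypothetical placeholders:

```python
# A minimal sketch of automating one part of the pre-launch QA pass:
# confirming that every link in a test email or landing page responds.
import urllib.error
import urllib.request

links = [  # hypothetical placeholder URLs
    "https://example.com/landing-page",
    "https://example.com/whitepaper-download",
]

for url in links:
    try:
        with urllib.request.urlopen(url, timeout=10) as response:
            print(f"{url} -> {response.status}")
    except urllib.error.URLError as err:
        print(f"{url} -> FAILED ({err.reason})")
```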

Too often we can get caught up in one aspect of our testing and overlook other critical components. It’s important to take a step back and take a complete and thorough look at all parts of the test before launch. You’ve worked hard to develop the test, and nothing is more frustrating than unreadable results because a small detail was overlooked.

The Results: Read and Analyze

Once everything is in order and the campaign is launched, be sure to actually look at the results in a timely manner. For an email, results can be read in the first 24-72 hours. For a lead generation landing page, statistically significant data collection could take a week to several months, depending on the amount of traffic being driven to the links or landing pages.
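
One common way to judge whether a difference is real or just noise is a two-proportion z-test. Here is a minimal sketch with hypothetical counts; in practice the numbers come from your campaign report:

```python
# A minimal sketch: a two-proportion z-test to check whether variant B's
# conversion rate beats variant A's with statistical confidence.
# Counts are hypothetical, for illustration only.
from math import erf, sqrt

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided
    return z, p_value

z, p = two_proportion_z_test(conv_a=120, n_a=2400, conv_b=156, n_b=2400)
print(f"z = {z:.2f}, p = {p:.4f}")
print("significant at 95%" if p < 0.05 else "not significant yet - keep collecting")
```

If the p-value has not dropped below your threshold, the honest conclusion is “keep collecting data”, not “B is winning”.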

Statistical results are a powerful tool to have when advocating for a specific marketing tactic. If the test was properly executed, the results of the full rollout should be relatively easy to predict, and this will make all stakeholders feel confident about moving forward with the winning tactic.

We want to hear from you:

What have you learned through your own A/B testing?
