Marketing campaigns are not static entities, or rather, they shouldn't be. To be effective, they need near-continual refinement and adjustment in response to new information and changing conditions. Most marketers go live with a campaign with that understanding, then gather information and review the strategy's effectiveness after a few months of running. They learn from the data and make any necessary fixes to the campaign for better future results.
Unfortunately, this approach is somewhat flawed. Applying fixes to a campaign after it goes live is better than not making any fixes at all, but theoretically it would be much better if the campaign were fixed before it went live. Fortunately for marketers, there’s an easy way to do this.
AB testing is a form of campaign analysis where a central campaign is split into two analogous yet distinct segments: an "A" campaign and a "B" campaign. Both campaigns run under identical scenarios for a given period of time, and results are collected for each. Those results should indicate a clear winner between the two, and the differentiating factors between them should point to the root cause of that winner's superiority.
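To keep the two segments truly "identical scenarios," traffic is usually split deterministically so that a returning visitor always lands in the same variant. One common approach, sketched here in Python (the function name and ID format are illustrative, not from any particular platform), is to hash a visitor identifier:

```python
import hashlib

def assign_variant(visitor_id: str) -> str:
    """Deterministically assign a visitor to the A or B campaign.

    Hashing the visitor ID (rather than assigning randomly on every
    visit) keeps each person in the same variant across repeat visits,
    so the two groups stay clean for the duration of the test.
    """
    digest = hashlib.md5(visitor_id.encode("utf-8")).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"
```

Any stable identifier (a cookie value, an account ID) works; the key design choice is that assignment depends only on the ID, never on timing or chance.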
Essentially, AB testing is a scientific experiment to apply to your marketing campaign to determine what, if anything, needs to be corrected in your strategy. If you apply AB testing over the course of several rounds, producing new variants with each round, you’ll create a survival-of-the-fittest type of scenario that will ultimately produce a campaign better than any of the variants you have created along the way. For example, you’ll start with an AB test, then introduce variant C if variant A proves to be better. Then you’ll hold an AC test. If A is still the winner, you’ll introduce variant D in an AD test, and so on.
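The survival-of-the-fittest rounds described above amount to a simple champion/challenger loop. A minimal sketch, assuming you already have a way to measure each variant's conversion rate (the rates below are made up for illustration):

```python
def run_ab_tournament(variants, conversion_rate):
    """Champion/challenger loop: the current winner faces each new
    challenger in turn; whichever converts better advances."""
    champion = variants[0]
    for challenger in variants[1:]:
        # In practice this comparison happens only after both variants
        # have run live long enough to collect real conversion data.
        if conversion_rate(challenger) > conversion_rate(champion):
            champion = challenger
    return champion

# Illustrative (made-up) conversion rates for variants A through D.
rates = {"A": 0.031, "B": 0.027, "C": 0.035, "D": 0.033}
winner = run_ab_tournament(["A", "B", "C", "D"], rates.get)  # "C"
```

With these sample numbers, A beats B, then C beats A, and C holds off D, so C emerges as the final campaign, exactly the AB, then AC, then CD progression described above.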
While most marketers only use AB testing in a live environment, it’s more advantageous to apply it to a testing environment first, on a much smaller scale. That way, you’ll save the time and money of going to a live market—and you’ll only put your best foot forward for your potential new customers.
Finding the perfect scenario for your preliminary AB test can be difficult, since you'll have to invest some time and money no matter what. But if you perform the test properly and apply the necessary changes before going live, you'll wind up with a much more cost-efficient, better-performing campaign.
How to Differentiate Your A and B Tests
The first step is to create a “B” variation of your already existing “A” campaign. For example, let’s say you’re working within the confines of a simple PPC Google AdWords campaign. You’ve set your target keywords, you’ve outlined your copy and headlines, and you’ve set up a landing page where your visitors will (hopefully) be persuaded to convert. You have two main options for differentiation here: ad copy and the design of your landing page.
For most AB tests, you’ll want to start differentiating on the largest scale possible. Landing page design changes tend to have more impact than copy changes, and will influence your conversion rate more than your click-through rate, so in this scenario, a landing page differentiator is preferable—at least to start with.
How you choose to differentiate within that segment of your campaign is up to you—there are plenty of options. You could change the background image, the placement of your form, the number and type of fields within your form, the colors of your page, or the copy in your headlines, or you could add or remove extra features like testimonials and external links. You can make one or more of these changes right away, but remember—it's better to make bigger changes first.
Finding the Right Environment
Since running your campaign in a live or pseudo-live environment will cost you money, it’s important to find the right platform for your AB test.
Let’s take the PPC example above. One of the best options is to run the test like you would a normal campaign, just with a much smaller budget. If you’re planning to run the campaign with a $1,500 monthly budget, try running it for one month with a $500 budget to determine where you stand.
Alternatively, you can test your landing pages on a different platform, such as on your social media channels. Facebook has a paid advertising feature that can be run for as little as $5 a day, and can give you great insights for a simple AB test without costing a lot of money.
You’ll also want to make sure to capture as much information as possible—use objective data like click-throughs and conversions, but also consider qualitative metrics like heat map results or user surveys to round out your analysis.
Once you’ve run the AB test for a week or longer, you’ll have enough information to make a judgment about your campaign. Obviously, in most cases, numbers will be your bottom line—the landing page that produces the greatest number of conversions is your best candidate—but you also need to pay attention to qualitative metrics to understand why the better page is better. This type of data can tell you whether something is critically wrong with your alternative landing page (so you never make the mistake again) or if there is a key feature of your main landing page that is so important it’s worth enhancing (such as a testimonials section that deserves a greater callout).
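Before declaring a winner on raw conversion counts, it's worth checking that the gap is bigger than random noise. A standard two-proportion z-test, sketched here with hypothetical numbers (the function and figures are illustrative, not tied to any platform), gives a quick sanity check:

```python
import math

def conversion_z_test(conv_a, visits_a, conv_b, visits_b):
    """Two-proportion z-test comparing A and B conversion rates.

    Returns the z statistic; as a rule of thumb, |z| > 1.96 means the
    difference is significant at roughly the 95% confidence level.
    """
    p_a = conv_a / visits_a
    p_b = conv_b / visits_b
    p_pool = (conv_a + conv_b) / (visits_a + visits_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / visits_a + 1 / visits_b))
    return (p_a - p_b) / se

# Hypothetical week of results: A converts 50 of 1,000 visitors,
# B converts 80 of 1,000.
z = conversion_z_test(50, 1000, 80, 1000)
significant = abs(z) > 1.96  # True: B's lead is unlikely to be noise
```

If the test comes back insignificant, the right move is usually to keep the test running longer rather than to crown a winner on thin data.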
Chances are, you’ll be enlightened (or at least surprised) by some of the metrics your initial test produces. Any positive changes you make as a result of the test will help your campaign significantly—and applying those changes before you start a full-fledged rollout can save you both time and money.
Just because you’ve ironed out a handful of kinks before taking the campaign live doesn’t mean you’re completely out of the AB testing woods. You should treat your campaign as an ongoing experiment, ever evolving, because conditions always change and there is always room for improvement. After your preliminary round of AB testing, you’ll have a much better initial product, but it’s going to take continued AB testing in order to perfect your approach.
Roll out your winning landing page with a secondary differentiator, one that varies an element your initial test didn't touch. For example, if your first AB test focused on aesthetic design changes, try making some major copy changes. Gather information regularly, and continue to make adjustments in order to ratchet up your conversion rates.
AB tests are an inbound marketer’s best friend. There’s simply no better way to objectively measure the effectiveness of your campaign, whether you do it before or during the official launch of your program. Starting off with an AB test before rolling the campaign out in a live environment will help you identify any pain points preventing you from better conversion rates, and save you money in the long run. Just remember that AB tests are something to play with throughout the entire duration of your campaign—the more you learn, the better your campaign will become.