
How To Be Successful Using A/B and Multivariate Testing


Everyone knows that website traffic is a valuable commodity. You work hard just to get visitors into the game, but the job isn't over once they arrive. To make the most of those early efforts, you'll need another big push to get them across the goal line: conversion.

In recent years, marketers have focused on improving website performance (and revenues) by deploying a variety of A/B and multivariate testing programs. Not long ago we were forced to rely on intuition and speculation to understand what converts on a website, but advancing online testing techniques now allow us to rely on data generated by live site visitors to improve ROI.

For the record, online testing is a statistical methodology that allows website owners to measure the impact of website components on target metrics. It usually takes one of two forms: an A/B test, which pits one variation of a site element against another, or a multivariate test, which tests multiple variations of multiple site elements against one another simultaneously.

Here are a few steps (and thoughts) to consider when performing tests to optimize your website's performance.

Why Test?


There is a huge amount of traffic swishing across the Internet, and this means there is also a wide range of intent among these travelers. While designers focus on creating websites that engage, it isn't always possible to draw a clear view of what users are doing or what they want.

In recent years this has led to increased fact-based, quantitative efforts as marketers move away from simple intuition toward tests that measure the impact of specific website components through targeted metrics.

You can do this the easy way (A/B testing) or the harder way (multivariate testing). Each method has benefits, but deciding which to deploy should depend on your goals.

A/B Testing (Steps)

By definition, A/B testing, or split testing, measures the effectiveness of one landing page against another. The easiest way to start is with your current landing page (the control) and a second page with a single alteration (the experiment).

A number of different elements can be tested, including colors, fonts, layouts, graphics, buttons, headlines, and offers. Just about anything.

1. Create an objective/goal

Define the overall goal of the test to help drive test measurement. For example, what will increase, and by how much, if the color of the text changes from one page to another? Then decide how you will measure the desired action (metric, conversion page, etc.).

"Define the desired conversion action that denotes the test success and how it will be measured such as goal page, action or funnel."

Once you've determined this, define a hypothesis for the experiment and identify any known factors that may affect traffic or test performance.
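To make this concrete, here's a minimal sketch of a pre-test definition written down as a structured record. Every name, URL and number in it is illustrative, invented for this example rather than drawn from any particular tool:

    # Hypothetical pre-test checklist, captured in code form (Python).
    experiment = {
        "name": "cta-button-color",
        "hypothesis": "Changing the call-to-action button from grey to orange "
                      "will increase quote requests by at least 10%.",
        "control_url": "/landing/quote",          # existing page (A)
        "variant_url": "/landing/quote-orange",   # altered page (B)
        "conversion_goal": "/quote/thank-you",    # reaching this page = conversion
        "primary_metric": "conversions / unique visitors",
        "known_factors": ["seasonal traffic spike", "paid campaign ends mid-test"],
    }

Writing the test down this way forces the team to agree on the hypothesis and the measurement before any pages are built.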

2. Pick the page and conversion

Decide on the goal you want to track. That could be a contact form submission, a download, a purchase, a sign-up, a time-on-site threshold, etc.

If you are testing a request-a-quote (RAQ) page (below), make your desired conversion goal obvious while keeping it consistent with the rest of the page design. For example, the RAQ form below is a test page used to determine how effective the VeriSign seal is in converting visitors (answer: very) toward the page's goal. A second page (the B version) is designed without the seal to measure the seal's overall effectiveness.

[Figure: the RAQ test page, shown with and without the VeriSign seal]

3. Know your history.

In a recent webinar titled "Online Testing: Everything You Need to Know to Be Successful", Joe Stanhope, senior analyst at Forrester Research, said historical data is the best place to start, giving marketers the opportunity to grab some low-hanging fruit. "Locate known problems and start with those," he said.

And here are a few good places to start:

  • Web analytics
  • New vs returning visitors
  • Depth of interaction
  • Traffic sources
  • Content
  • Onsite search
  • Pathing and exits
  • Segment performance
  • Feedback from usability studies, customer service and focus groups

4. Run and analyze.


The concept is simple: when a visitor arrives on the page, they are served a randomly chosen version, so traffic is distributed equally across the different versions. The performance of each version is tracked against the conversion goals you previously defined for the test. If your goal is an RAQ submission, a tool such as Google Website Optimizer (a solid free option) records which page version was shown each time a visitor submits the form.
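Testing tools handle this assignment for you, but a minimal sketch of the underlying idea helps demystify it. Deterministic hashing is one common approach (so returning visitors keep seeing the same version); the function below illustrates the general technique, not how Google Website Optimizer is actually implemented:

    import hashlib

    VERSIONS = ["A", "B"]  # the control page and the experiment page

    def assign_version(visitor_id: str) -> str:
        """Bucket a visitor so repeat visits always see the same page."""
        digest = hashlib.md5(visitor_id.encode()).hexdigest()
        return VERSIONS[int(digest, 16) % len(VERSIONS)]  # roughly 50/50 split

    print(assign_version("visitor-12345"))  # same ID -> same version, every time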

Setting up experiments in the back-end interface is a pretty straightforward process. Of course, you'll need to build the alternate pages and drop a unique snippet of JavaScript into each one, as well as into the conversion goal page. Google Website Optimizer integrates with AdWords and, like most of Google's online marketing products, has a wealth of documentation and tutorials available.

Analyze the results. Compare each version's conversion rate and confirm the difference is statistically significant before declaring a winner.
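A standard way to do that is a two-proportion z-test, which checks whether version B's lift over version A is likely to be real rather than noise. Here's a sketch with illustrative numbers:

    from math import sqrt
    from statistics import NormalDist

    # Illustrative results: (conversions, visitors) for each version.
    conv_a, n_a = 120, 4000   # control:    3.0% conversion rate
    conv_b, n_b = 156, 4000   # experiment: 3.9% conversion rate

    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)                # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))            # two-sided test

    print(f"lift: {p_b - p_a:.2%}, z = {z:.2f}, p = {p_value:.4f}")

With these numbers the p-value comes out around 0.03, below the usual 0.05 threshold, so the lift would be considered statistically significant.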

Multivariate Testing

This more complex test allows marketers to test multiple page variables at once. The primary difference is scale: multivariate testing can measure a myriad of variable combinations, while A/B testing is more limited. However, this test is more time-consuming and the data collection process lengthier: the more elements you work with, the longer the test takes and the more data you'll need to sort through.

The process of designing a multivariate experiment is very similar to setting up an A/B testing experiment; however, deciding what to test is a bit more involved.

1. Identify the Challenge

The design of your page is set: product name (check), product description (check), testimonials (check), awards (check), ratings (check) and a prominent call to action, the RAQ form (check). However, for some reason unbeknownst to your team, only 30 percent of visitors are taking the bait. That leaves a whopping 70 percent who bounce without converting.

Obviously, the challenge is to determine what factors can be adjusted to flip that ratio. Note: We're assuming the traffic is targeted, arriving either through search or a relevant referring site.

2. Develop a Hypothesis

There is a reason for your low conversion rate, but if you knew the reason you wouldn't need any of these tests. Here are three ways to help determine what the problem could be.

  • Look at your page with a fresh perspective. Try viewing and using the page in the same manner a new visitor would navigate the site. If this is too difficult because of your extensive experience in creating it, find someone from another team who may not have even seen it yet.
  • And the data shall lead you. Web analytics data can show you where and when visitors are making a wrong turn. Use your analytics tool to provide key insights into what is happening below the surface.
  • Usability testing will garner independent feedback, providing serious clues as to why your conversions are so low and ultimately helping determine what influences conversion rates.

3. Testing 1, 2, 3

In a multivariate test, different variations are combined to produce multiple versions of the Web page, and each version's performance is measured against a defined goal. The most common goals are sign-ups, purchases, clicks, leads, page views and bounce rates. Regardless, it is important to define the goal closest to your objectives. For example, if you're a marketing company trying to optimize for visitors requesting a quote, the goal should be defined as a visit to the Thank You page after the request is submitted.
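Combinations multiply quickly, which is why multivariate tests need more traffic and more time. The sketch below enumerates every page version in a hypothetical full-factorial test; the element names and variations are made up for illustration:

    from itertools import product

    # Hypothetical page elements under test and their variations.
    elements = {
        "headline": ["Get a Free Quote", "Request Your Quote Today"],
        "button_color": ["orange", "green", "blue"],
        "seal": ["VeriSign seal", "no seal"],
    }

    versions = list(product(*elements.values()))
    print(f"{len(versions)} page versions to test")  # 2 * 3 * 2 = 12
    for combo in versions:
        print(dict(zip(elements, combo)))

Three small elements already produce twelve versions, each of which needs enough traffic to measure on its own.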

Whenever a visitor arrives on your Web page, they see a randomly chosen version of it. In other words, your traffic gets distributed equally amongst the different versions, and the performance of each version is tracked against the conversion goal(s) defined for the test.

To ensure these tests are reliable, consider these three factors (a rough sample-size sketch follows the list):

  • Number of visitors: the higher the number of visitors, the more reliable the results.
  • Conversion rate: in general, pages with a low conversion rate (say 1-2%) take much longer to produce statistically significant results than pages with a higher conversion rate (say 40-50%).
  • Difference in performance: testing with a large difference in the performance of variations (say >10%) is always more reliable than one where the difference is extremely small (0.5% or so).
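These three factors interact: the lower the baseline conversion rate and the smaller the lift you need to detect, the more visitors the test requires. A rough per-version estimate can be sketched with the standard formula for comparing two proportions; the rates below are illustrative:

    from statistics import NormalDist

    def visitors_needed(base_rate: float, lift: float,
                        alpha: float = 0.05, power: float = 0.8) -> int:
        """Approximate visitors per version to detect `lift` over `base_rate`."""
        z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96
        z_power = NormalDist().inv_cdf(power)          # ~0.84
        p_bar = base_rate + lift / 2                   # average of the two rates
        return round((z_alpha + z_power) ** 2 * 2 * p_bar * (1 - p_bar) / lift ** 2)

    # A 2% page chasing a half-point lift needs far more traffic
    # than a 40% page chasing a five-point lift:
    print(visitors_needed(0.02, 0.005))  # roughly 14,000 visitors per version
    print(visitors_needed(0.40, 0.05))   # roughly 1,500 visitors per version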

4. Crunching the numbers

While one experience may not be right for all visitors, multivariate testing provides the ability to identify visitor segments and measure how each performs with different experiences. For example, you may discover that new visitors prefer a different experience than repeat visitors, and serving each segment accordingly will yield better overall results. More sophisticated systems will also suggest visitor segmentation automatically, reducing the time needed to analyze test results against hundreds of visitor attributes.
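As a sketch of that kind of segment-level readout (segment names and numbers invented for illustration), the same test data can be broken out by visitor attribute to see whether segments disagree about which version wins:

    # Illustrative results per segment: {version: (conversions, visitors)}
    results = {
        "new visitors":    {"A": (40, 2000), "B": (70, 2000)},
        "repeat visitors": {"A": (85, 2000), "B": (60, 2000)},
    }

    for segment, versions in results.items():
        rates = {v: conv / n for v, (conv, n) in versions.items()}
        winner = max(rates, key=rates.get)
        readout = ", ".join(f"{v} = {r:.1%}" for v, r in rates.items())
        print(f"{segment}: {readout} -> version {winner} wins")

Here new visitors convert better on B while repeat visitors convert better on A, exactly the kind of split a single overall winner would hide.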

Targeting different experiences to different visitor segments can substantially increase conversion rates. Segments can be built from visitor attributes ranging from environmental ones, such as browser type, to behavioral factors like search keywords or recent navigation clicks, and can even include customer attributes from other systems.

Now get testing.
