
Why Not Running a Split Test Is the Smart Play

Split tests, or A/B tests, are an excellent way to improve your website’s search engine ranking. They’re an integral part of the SEO toolkit for any business or website owner. 

Good split tests help you determine the best version of an email newsletter subject line or find the right personalized call-to-action (CTA) that’ll convert 42% more of your visitors. They help website and business owners get ahead of the competition and increase conversions, traffic, and sales.

But sometimes the best play can be not to run a split test and leave your website alone. 

Wait, what? 

That’s right. Sometimes the best move is to do nothing at all when it comes to split tests. When you run an unnecessary test, you put your site and business at risk from mistakes, inaccurate results, and useless changes that do nothing for your bottom line. Worse, you risk mistakes that make your website perform worse and cost you sales and subscribers. 

You can avoid all those headaches by skipping the split test in the first place. You’ll save the time and money it takes to design and run these tests while keeping your site performing at a high level. 

Let’s take a closer look at why not running a split test is the smart play for your website. 

The Risks of Running Unnecessary Split Tests

Digital marketers use split tests to improve their marketing effectiveness, but if they’re not run correctly or for the right reasons, you’re just wasting time, money, and effort. A well-run split test should help you learn more about your marketing or sales funnels and help you craft better experiences for your site visitors. Here are a few risks you’ll want to avoid with your split tests.

It Wastes Resources

With so many different things you can test on a website, you may be tempted to test them all at once. For instance, testing out different colors and layouts, copy on call-to-action buttons, and lead magnet offers. 

These are all valid things to test, but they’re not worth split testing against each other. The gains you’d get from each one are too small to justify the time and effort. You’ll invest a ton of time and effort into making the changes (and money, if you have to hire a web developer or graphic designer) and get little in return. 

Compare the cost of creating and running the test to what you’d realistically get out of it, and you’ll often find that the investment outweighs the return of even a successful test. 
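As a rough sanity check before committing, you can put numbers on that comparison. The sketch below is a back-of-the-envelope estimate, and every figure in it (visitor counts, baseline rate, lift, conversion value, test cost) is a hypothetical assumption you’d replace with your own:

```python
# Rough cost-vs-gain check for a proposed split test.
# Every input below is a hypothetical assumption; substitute your own numbers.

def expected_test_value(monthly_visitors, baseline_rate, expected_lift,
                        value_per_conversion, months_of_benefit):
    """Estimate the extra revenue a winning variant would generate."""
    extra_conversions = monthly_visitors * baseline_rate * expected_lift
    return extra_conversions * value_per_conversion * months_of_benefit

# Example: 10,000 visitors/month, 3% baseline conversion rate, a hoped-for
# 5% relative lift, $40 per conversion, and 12 months of benefit.
gain = expected_test_value(10_000, 0.03, 0.05, 40, 12)
test_cost = 2_500  # hypothetical designer/developer time and tooling

print(f"Expected gain: ${gain:,.0f}")  # roughly $7,200
print("Worth running" if gain > test_cost else "Skip this test")
```

If the expected gain barely clears the cost of building the test, that’s your cue to skip it.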

It Gives You Inaccurate Results

Timing is critical with split testing. When you run tests at different times or for different lengths of time, you’re not comparing similar data sets. You lose the like-for-like comparison, so you can’t draw correct insights from the data. 

Similarly, if you run your tests for less than full weeks at a time, you won’t get accurate conversion results because the day of the week and month of the year impact those rates significantly. 

That means you should run a seasonal A/B test during the relevant weeks of the year, such as the Christmas or Black Friday sales periods. Always run your tests in full-week increments, like Monday to Monday, so every test covers at least seven days. You’ll get a complete picture of your traffic and conversion patterns this way. 

It Provides Conflicting Results

Running too many tests simultaneously can give you conflicting results that don’t tell you anything. Or rather, they tell you too much, but you can’t do anything with the information. Remember, the point of split testing is to significantly improve your SEO and conversions, not just lift things by 5-10%.

Instead of testing ALL THE THINGS, focus on the elements that can significantly impact your results, such as:

  • The copy on a landing page.
  • The type of copy you use on your web pages (text- or video-based, more or fewer images).
  • The length of copy on a landing page (short vs. long form).
  • The CTAs on a page you want visitors to take action on, such as a landing or contact page.
  • The position of a form or CTA button on a page.
  • The type of product giveaway you use (freemium, free trial, or paid with a money-back guarantee).
  • Your free trial length.
  • The number of steps in a checkout or signup process.

The factors you test will depend on the type of website or business you have, your analytics, and the goals you’re looking to test. 

Decide on these before starting your test, and you’ll be sure to get a clear picture of what wins and what doesn’t. 

It Reinforces Vanity Metrics

We’ve all gotten caught up in vanity metrics at some point. It feels good to test something that increases views, time on site, or social media followers, or that lowers your bounce rate. Those metrics look good on paper but say little about the return on investment (ROI) you’ll get by changing them. They’re hollow numbers that don’t tell you much of value about your sales or marketing funnels. 

For example, testing whether a long- or short-form landing page gets you more shares on social media doesn’t tell you much about your landing page. Your test is only half right: you should be testing whether a long- or short-form landing page gets you more sales or signups.

That’s not to say vanity metrics are entirely useless since they can help measure non-transactional marketing goals like brand awareness. Just don’t use them when split testing your website for SEO or conversion rates. 

When You Should Not Run Split Tests

Now that you’ve seen why running unnecessary split tests is risky, let’s apply those lessons to real-world scenarios and show when you don’t need to run them.

Scenario 1: Your Page Already Has a Good Conversion Rate

There’s a lot of debate about what makes a “good” conversion rate, but it always depends on many factors, the main one being your industry. The average conversion rate across all industries is anywhere from 2.35% to 6.1% but can be as high as 11% for certain industries. If your page already converts at or above the benchmark for your industry, a split test has little room to deliver meaningful gains, and your time is better spent on pages that underperform. 

Scenario 2: You’re Testing Someone Else’s Idea on Your Site

Reading A/B testing case studies to learn more about testing is good. Copying those same tests on your site is terrible. What worked for someone else will usually not work for you. Your website and business are unique, and you have different goals than them. Why are you using their tests on your website? 

Instead of copying their tests exactly, use them as a foundation for your own A/B testing. That’ll give you a head start on creating your split tests, but keep them unique and applicable to your website. For instance, base your ad placement A/B test on a published case study, but use your own site goals and visitor behavior. Or use American Bird Conservancy’s opt-in test to see whether different copy on or placement of your opt-in forms increases sign-up rates. 

Scenario 3: Your Site Has Low Sales or Traffic Numbers

In testing, you always want to reach a statistically significant result. Otherwise, the results aren’t meaningful. The more variations or tests you run, the bigger the A/B testing sample size you’ll need. You’ll need to send more traffic to each version to get reliable results. 
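To get a feel for how much traffic “statistically significant” actually demands, here’s a textbook approximation of the per-variant sample size for comparing two conversion rates. The 3% baseline and 20% lift are hypothetical inputs, the z-values are hard-coded for 95% confidence and 80% power, and this is a planning sketch, not a replacement for a proper power calculator:

```python
# Textbook approximation of per-variant sample size for an A/B test
# comparing two conversion rates.

def sample_size_per_variant(baseline_rate, relative_lift):
    """Visitors needed per variant at 95% confidence and 80% power."""
    z_alpha, z_beta = 1.96, 0.84  # z-values for 95% confidence / 80% power
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2
    return int(n) + 1  # round up

# Example: 3% baseline conversion rate, hoping to detect a 20% relative lift.
print(sample_size_per_variant(0.03, 0.20))  # on the order of 14,000 per variant
```

Notice how quickly the requirement grows: the smaller the lift you want to detect, the more visitors each variant needs, which is exactly why low-traffic sites struggle to run meaningful tests.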

If your site has low traffic numbers or it takes a long time to move people through your sales process, you’ll need to run your test for longer to get accurate results. That means your tests may fall prey to the risks mentioned earlier. For example, you may never be able to test a seasonal marketing campaign because your site won’t accumulate enough visits during the season to make the test worthwhile. 

Set up website analytics and metrics on your WordPress site before you decide to run any split tests. That’ll save you time and effort, and ensure you’ll get meaningful results. 

Scenario 4: You’re Already Running Split Tests Right Now

If you’re already running a split test on something right now, don’t start testing a different variation or, worse, change the parameters of your current test to “tweak” the results. Any time you change a running test, you risk making your results unreliable: you’ll have no idea which change gave you the lift in conversions or invalidated the hypothesis you were testing. 

Once you start a test, resist the temptation to make any changes. Instead, wait for the results to come in and take action once the test is done. 
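When the test does finish, check whether the winner is statistically meaningful rather than noise before acting on it. Here’s a minimal two-proportion z-test using only the standard library; the traffic figures are made up for illustration:

```python
import math

def two_proportion_z(conv_a, visitors_a, conv_b, visitors_b):
    """Z-score for the difference between two observed conversion rates."""
    p_a, p_b = conv_a / visitors_a, conv_b / visitors_b
    pooled = (conv_a + conv_b) / (visitors_a + visitors_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    return (p_b - p_a) / se

# Hypothetical results: variant B converted 390 of 10,000 visitors,
# variant A converted 300 of 10,000.
z = two_proportion_z(300, 10_000, 390, 10_000)
print(f"z = {z:.2f}")  # |z| > 1.96 means significant at the 95% level
print("Significant" if abs(z) > 1.96 else "Not significant")
```

If the z-score doesn’t clear the bar, treat the test as inconclusive rather than shipping the “winner.”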

Final Thoughts

Understanding when to run a split test and when not to is a critical part of your website’s success. Split tests can help you improve conversion rates and increase income and visitors, but only when you run them for the right reasons. 

Split tests can help you uncover insights about your website that you didn’t know about while saving you time, money, and effort. No matter what your business is or why you’ve got a website, split testing will help you improve and sustain it for the long term.