Marketers have once again stolen what isn’t theirs: the A/B test.
They have repackaged the scientific split test and turned it into their own piece of branding. Look, they've even hijacked the Wikipedia page. But at least they've kept the definition the same:
A/B testing is a term for a randomized experiment with two variants, A and B, which are the control and variation in the controlled experiment.
But that’s okay because things need updating. This is the second generation of online marketers we’re talking about. Imagine if we were still running TV advertising, or full page prints in Time magazine to sell software.
So where does the issue lie?
The third step of the A/B test. Here’s an example of an A/B test in content marketing:
Step one: Pick two titles to run in an A/B test
Step two: Measure which one gets more clicks
Step three: Compare results, and stick with the winner.
“What’s wrong with step three?” you may ask. Nothing; comparing results is good, but that should be step four.
Step three should look at the next issue: how the test affected your readership.
When you ran your A/B test, how did title B affect:
- Average time on the page
- Bounce rate
- Pages per session
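To make the comparison concrete, here is a minimal sketch of scoring both title variants on clicks *and* engagement, instead of clicks alone. The numbers, field names, and the `scorecard` helper are all invented for illustration; in practice these values would come from your analytics tool.

```python
# Hypothetical analytics data for the two title variants.
# A "winner" on click-through rate (CTR) can still lose on engagement.
variants = {
    "A": {"impressions": 1000, "clicks": 42,
          "avg_time_s": 95, "bounce_rate": 0.48, "pages_per_session": 2.1},
    "B": {"impressions": 1000, "clicks": 61,
          "avg_time_s": 40, "bounce_rate": 0.79, "pages_per_session": 1.2},
}

def scorecard(stats):
    """Return CTR alongside the engagement metrics, not instead of them."""
    return {
        "ctr": stats["clicks"] / stats["impressions"],
        "avg_time_s": stats["avg_time_s"],
        "bounce_rate": stats["bounce_rate"],
        "pages_per_session": stats["pages_per_session"],
    }

for name, stats in variants.items():
    print(name, scorecard(stats))
```

With the made-up numbers above, title B wins on CTR (6.1% vs 4.2%) but loses on every engagement metric, which is exactly the trap the post describes: the "winning" title pulled in clicks it couldn't deliver on.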
Just because title B got more clicks doesn’t mean it was more valuable to your readers. Did you ever consider that it misled them?
When we use the sumo sizing method to try experiments, we often go down a path of chasing attention we don’t deserve. That’s how we annoy our readers, and it’s the reason books like All Marketers Are Liars become bestsellers.
Not all A/B testing is bad, but for the sake of a vanity metric, we put at risk the very thing we worked so hard to win: trust.
How will your next experiment affect the trust of your reader?