Human beings aren’t perfect thinkers. We like to imagine ourselves as logical, straightforward problem solvers, but the reality is that most of the time we’re afflicted with hidden biases and misconceptions that skew our interpretations of even the most objective data. When we look at a set of data, or observe something in its natural environment, we form concrete judgments that then shape how we interact with it. And from a practical standpoint, we’re good at it; the human brain has evolved to detect patterns easily as a survival mechanism.
This mechanism is oversensitive and flawed, resulting in many of the biases I’m about to go over. In our everyday lives, they might not bear much impact, but in the realm of marketing and advertising, objective data is crucial. If these biases are affecting your interpretations of otherwise objective data sets, you could wind up presenting faulty information to your clients, or worse, adjusting your campaign to grow in the wrong direction.
Why Bias Compensation Matters
Fortunately, you don’t have to become a slave to your biases. There’s no way to reprogram your brain and avoid them altogether, but there are strategies you can put into place to make it harder for these biases to affect your work. Think of it as a kind of handicapping or adjustment system; for example, if you know the wind is blowing to the east on the golf course, you might align your shot to the west of where you want the ball to actually go. This doesn’t eliminate the wind as a factor, but it does help you get the results you need with only a minor adjustment. Otherwise, your shot will end up blowing far to the east of where you need it to land.
There’s one big problem with this golf metaphor: on the course, you’ll realize your shot is off target as soon as you complete it. In the marketing realm, if you interpret your metrics incorrectly, you may never find out.
Take a look at this optical illusion as an effective demonstration of how bias can mess with your mind:
(Image Source: Nerdist)
Compare the center square of the side panel to the center square of the top panel. Most people will argue that the side square appears to be a bright orange, while the top square appears to be a dull brown. If I stopped writing here, many of you would continue to believe that.
However, the reality is that both squares are exactly the same color. If you cover up the surrounding colored squares, which trick your brain into overcompensating for lighting conditions, you’ll see this to be true. This process is the kind of “bias correction” I’m talking about; without it, you’ll end up misinterpreting your data, but with it, you can come to a more accurate conclusion.
Types of Biases
There are two types of biases I’m going to cover in this guide, though the second isn’t technically a “bias” in the formal definition. Both can have a dramatic effect on how you see and interpret data, so you’ll need to account for both whenever you measure or report metrics for a given campaign.
(Image Source: Brain Bashers)
There are countless sub-types of biases, and I’ll be exploring some of the most relevant for modern marketers.
(Image Source: Connexin)
Let’s start by taking a look at some of the most common cognitive biases that can affect your interpretation of metrics.
This list is not comprehensive; there are a startling number of cognitive biases that can affect your reasoning, social behavior, and even your memory. However, I’ve captured the majority of biases that can affect how your mind finds, dissects, and interprets marketing results. In each subsection, I’ll describe the bias and detail strategies you can use to compensate for it.
Confirmation Bias

First up is confirmation bias, one of the most commonly recognized cognitive biases. This phenomenon holds that once an individual has settled on a specific belief, they will seek out and/or favor any information that leads them to “confirm” that belief, and avoid and/or demerit any information that contradicts that belief. For example, take the strange dress that became a sensation over social media a while back:
(Image Source: LinkedIn)
The center picture is the original, with the two on either side showcasing the dress with different lighting and filters. The middle picture generated responses describing it as either gold and white or blue and black. Users who encountered one description often saw the dress as being those colors, not realizing that the visual information in the photo was ambiguous.
In the context of a marketing campaign, this can happen when you’ve pre-formed a conclusion about one of your strategies. For example, you might assume that your new content strategy is doing well because you’ve invested a lot of time and money into it. You might then only look at data points that confirm this assumption; let’s say you’re getting a lot more comments and sparking new conversations. But you might ignore or overlook contradictory data points, such as lower organic traffic numbers.
To compensate for this, select which metrics you’ll measure to determine success before you even flesh out a strategy. Then, remain consistent with this set of metrics and remain as objective as possible in their analysis—even if the numbers contradict your instincts.
Selection Bias

Selection bias usually arises in surveys, which depend on an ample, random sample of participants in order to be considered unbiased and effective. A selection bias is any improper sampling procedure that leads to a pool of participants slanting the results in one direction or another. For example, if you only interview people in Idaho for a national-level survey, you’re going to receive answers that disproportionately represent Idaho residents.
If you’re conducting surveys for your marketing campaign (such as gathering data about your audience’s content preferences), the possible effects here are obvious—if you select a narrow or skewed pool of participants, your data will be inherently unreliable. But this also applies to data you might pull in Google Analytics.
For example, if you’re poking around to different sections, you might find that your “general” traffic visits an average of three internal pages before leaving. From this, you could form the conclusion that your site is effective at enticing people further in—but what about just your social traffic? If your social visitors often bounce after the first page, it could be an indication that your blog posts (or other social links) aren’t as effective at piquing that curiosity.
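The segmentation trap above is easy to reproduce. Here’s a minimal sketch with hypothetical session data (the channel names and page counts are illustrative, not a real Analytics export) showing how a healthy-looking aggregate can hide a weak segment:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical session-level data: (traffic channel, pages viewed per visit).
sessions = [
    ("organic", 4), ("organic", 3), ("organic", 5),
    ("social", 1), ("social", 1), ("social", 2),
]

# The aggregate looks healthy: roughly 2.7 pages per session...
overall = mean(pages for _, pages in sessions)

# ...but segmenting by channel shows social visitors bounce almost immediately.
by_channel = defaultdict(list)
for channel, pages in sessions:
    by_channel[channel].append(pages)

segmented = {ch: mean(vals) for ch, vals in by_channel.items()}
print(round(overall, 2), segmented)
```

The point isn’t the code itself; it’s that any conclusion drawn from the overall average silently selects for your dominant traffic source.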
The Anchoring Effect

The anchoring effect has everything to do with what you encounter before a certain event (or, in this case, a certain metric). Because our minds are wired for comparisons, whenever we hear a numerical value, we instantly compare future numerical values to it—even if those numbers are completely unrelated.
Take a look at the following cartoon as an example:
(Image Source: Wealth Informatics)
Both participants are essentially generating random numbers—the last digits of their SSNs. When asked what they’d estimate for an identical bottle of wine, the person with the higher number will generally estimate it to be a higher value.
This can happen in your metrics reporting, too. For example, let’s say you recently read an article that boasted a 300 percent improvement in ROI after making a simple change to a marketing campaign. If you notice a 30 percent growth rate in your own traffic, you might think it’s pretty low. Conversely, if you hear someone complain about a terribly low conversion rate—like a fraction of a percent—that 30 percent growth figure might start looking pretty good.
Irrational Escalation

Irrational escalation, sometimes known as escalation of commitment, is a bias that has less to do with how you report or interpret metrics, and more to do with what you do with your conclusions from there. Under this bias, individuals have a greater likelihood of taking some strong action if they’ve taken some related weak action in the past.
The typical example is the “dollar auction” game, in which a one-dollar bill is auctioned off before a group. Anybody can bid any amount they want for the dollar. At the end of the game, the winner gets the one-dollar bill for whatever amount they bid for it, but there’s one twist—the second-place finisher must pay his/her final bid to the auctioneer without getting the dollar in return. Invariably, bids escalate far beyond the dollar value of the one-dollar bill; this is because once you’re committed to a certain idea, or a certain strategy, it’s easy to incrementally invest just “a little bit more,” even if it becomes irrational at some point.
What’s the practical takeaway here? Let’s say you’ve invested in a certain marketing strategy for many months now, and you’ve seen decent results, but the past few months have been slow to the point that you’re barely breaking even on it. The irrational escalation bias would have you continue investing in it, since you’ve already come this far, even if there is no proof of future benefits. The only way to defeat this bias is to weigh the pros and cons of each strategy, even the ones you’re used to, with objective, preferably numerical evidence.
The Overconfidence Effect
All of us are desperately and irredeemably overconfident. I’m not talking about your self-esteem or your comfort levels in various social situations; I’m talking about your tendency to overestimate the accuracy of your own perceptions and judgments. Everyone believes they are better than average at making decisions and answering questions, in almost any scenario.
Because of this, marketers often believe they know more about data analysis than they actually do, and believe themselves to be better decision makers than they actually are. What happens is this: a marketer will look at the data, form a conclusion about it, and then stick with that conclusion without exploring any other possibilities. In general, there are too many unknowns for any one definitive conclusion to hold.
To compensate for this, bring more minds into your analysis and discussion. Each person will be overconfident about his/her own analytical ability, but together, you’ll be able to make up for each other’s weaknesses and come to a more uniform conclusion.
Essentialism

Essentialism is a complex cognitive bias that permeates our lives in profound, and sometimes horrible, ways. Its name derives from the root word “essence” because it reflects a natural human tendency to reduce complex topics and ideas down to their barest essence. This is important during the early stages of learning and development, when abstraction is difficult and acquisition is imperative, but later in life, it gives us the nasty tendency to categorize things, places, and people based on what we know about other things, places, and people. It’s at least partially responsible for stereotypes and prejudices.
In a far less serious offense, essentialism is also responsible for causing marketers to over-generalize or categorize certain types of metrics. For example, they might believe that bounce rate is inherently “bad” and therefore, bounce rates should always be lower—even though people bouncing might be a good thing if they aren’t a part of your target demographics to begin with.
There’s no easy way to stop your mind from wandering in this direction, but you can strive for neutrality by treating every metric as having both positive and negative traits; see each metric for what it is without trying to reduce it to a universally “good” or “bad” position. This is especially important for traits relating to user behavior, which is qualitative and at times, unpredictable.
(Image Source: Masmi)
Optimism Bias

I think we all know what optimism bias is like. We’ve all felt it in one application or another, and most of us still experience it throughout our daily lives. No, this has nothing to do with whether you consider yourself an “optimist” or “pessimist” in general—instead, it’s a well-documented psychological phenomenon that applies to most people.
The biggest effect here is that people inherently believe they are less likely than average to experience bad events, especially rare ones. Most people never think they’ll be robbed, or that their house will catch fire, or that they’ll lose their job—but these things happen to people every day.
In the marketing world, this usually shows up around PR disasters. Most brands never give a second thought to the idea that their social media statistics might be tanking because of a foolish comment they made some time earlier, or that a drop in organic traffic could be the result of a serious search penalty. The fact is, these things happen, even to smart, well-planned brands and strategies. Don’t count yourself out of the possibility here.
Group Attribution Errors
The fundamental group attribution error occurs when you see the behavior of a single person and immediately project that person’s traits onto the entire group. For example, at a bar you might see a group of people at a nearby table, one of whom is particularly obnoxious, yelling and screaming. Many would then immediately assume that the entire group is obnoxious, rather than just the one individual.
In the reporting sense, this can also apply, depending on how wide your measurements are and whether you use any instances of anecdotal evidence. For example, let’s say you wrote a knockout piece of content and a handful of users took to commenting actively on it. Generally, comments are a good sign that your piece was interesting or valuable enough for your readers to engage with, but can you make this assumption for the entire group, or was it just a handful of weirdos who you happened to snag?
This isn’t to say that small population samples are inherently useless—they can be valuable, and they can represent the whole. What’s important to remember is that they don’t always represent the whole, and you need to compensate for this by looking at bigger samples.
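One way to make the “small samples can mislead” point concrete is to look at confidence intervals. This is a minimal sketch using the normal approximation, with hypothetical engagement numbers; the function name and figures are illustrative:

```python
from math import sqrt

def proportion_ci(successes, n, z=1.96):
    """Approximate 95% confidence interval for an observed proportion
    (normal approximation; z=1.96 corresponds to 95% coverage)."""
    p = successes / n
    margin = z * sqrt(p * (1 - p) / n)
    return max(0.0, p - margin), min(1.0, p + margin)

# Hypothetical: 4 of 10 readers engaged vs. 400 of 1,000 readers engaged.
# The observed rate (40%) is identical; the certainty is not.
small = proportion_ci(4, 10)      # roughly (0.10, 0.70)
large = proportion_ci(400, 1000)  # roughly (0.37, 0.43)
```

With ten readers, the “true” engagement rate could plausibly be anywhere from about 10 to 70 percent—which is exactly why a handful of enthusiastic commenters doesn’t prove the whole audience loved the piece.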
(Image Source: The Rad Group)
The bottom line for most of these biases is that you shouldn’t take anything at face value, or trust your instincts too much. Your instincts are built on cognitive functions that evolved for survival, not for the logic and math of statistical analysis—which means your mind can’t be trusted with the latter. Treat everything with an extra degree of scrutiny.
Misconceptions and Misinterpretations
As if all those cognitive biases weren’t enough, there are cases where we don’t even define our metrics accurately. Forget confirmation bias—if you’re looking at one metric thinking it’s another, your numbers are wrong anyway. This section is designed to clear up some of the most common points of confusion for web traffic and social media metrics, but make no mistake—this is far from comprehensive. You owe it to yourself to double check your interpretation of every metric you measure; even one differing word can compromise an entire construct.
Google Analytics is free, easy to navigate, and reliable, but that doesn’t mean it’s always straightforward. Take a look at some of the discrepancies you might find here.
One common discrepancy comes from your own team: visits from you and your coworkers inflate your numbers. To start, head to the Admin tab and select “Filters.”
This will give you the opportunity to “create” a new filter; there are several filter types to choose from, but usually you’ll want to go for one that filters users based on IP address or ISP information. This will keep Analytics from tracking information from any of the users you specify.
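If you ever work with raw hit logs outside of Analytics, the same idea applies there. Here’s a minimal sketch of excluding an internal IP range before analysis; the addresses come from reserved documentation ranges and stand in for your real office network:

```python
import ipaddress

# Hypothetical internal range (203.0.113.0/24 is a documentation block,
# standing in for your office's actual addresses).
INTERNAL = ipaddress.ip_network("203.0.113.0/24")

# Hypothetical raw hits: two internal visits mixed in with real visitors.
hits = ["203.0.113.7", "198.51.100.23", "203.0.113.250", "192.0.2.14"]

# Keep only traffic that originates outside your own network.
external = [ip for ip in hits if ipaddress.ip_address(ip) not in INTERNAL]
```

Whether you filter in the Analytics UI or in code, the principle is identical: remove your own footprint before you measure anything.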
Next, there are a few social media metrics that require exploration.
(Image Source: Facebook)
As a general rule, the way you compare metrics to one another holds a lot of power over the conclusions you’ll eventually reach. For example, it’s critically important for you to take “apples to apples” measurements. If you’re going to evaluate your progress in a certain area, you need to replicate your measurement conditions as precisely as possible; for example, if you’re looking at the bounce rate for organic visitors over the course of a month, you can’t compare that to the bounce rate of social visitors over the course of a different month. This is akin to comparing apples to oranges. Allow only one variable between your compared metrics, such as the month in question or the type of traffic—when you introduce two, the comparison crumbles.
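The one-variable rule above can even be enforced programmatically in your own reporting scripts. This is a minimal sketch with hypothetical measurement records; the field names are illustrative:

```python
# Each measurement is tagged with the conditions it was taken under.
a = {"metric": "bounce_rate", "channel": "organic", "month": "2024-03", "value": 0.42}
b = {"metric": "bounce_rate", "channel": "social",  "month": "2024-03", "value": 0.61}
c = {"metric": "bounce_rate", "channel": "social",  "month": "2024-04", "value": 0.55}

def comparable(x, y, dims=("metric", "channel", "month")):
    """True when two measurements differ in at most one condition,
    i.e., the comparison is apples to apples."""
    return sum(x[d] != y[d] for d in dims) <= 1

comparable(a, b)  # True: same metric and month; only the channel differs
comparable(a, c)  # False: channel AND month differ -- apples to oranges
```

A guard like this is a cheap way to stop a two-variable comparison from sneaking into a client report.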
Recognize that your communicative ability has a strong bearing on how others interpret metrics. One wrong or misleading word about how a specific metric should be read could compromise a person’s interpretation of that metric for the foreseeable future. This is especially important with clients; you want them to have the clearest, most objective view possible, so remain diligent and consistent from the beginning to give them the full and accurate picture of your marketing metrics.
Utility and Value
There are two important takeaways regarding the utility and value of measurement and analysis I need to address. Thus far, my guide may have you believing that measurements are inherently inaccurate, or that they aren’t worth pursuing, but this is far from the case. Measurement and analysis are crucial if you want your business to stay alive. What truly matters is how you approach them:
First, your measurements are only worthwhile if they’re objective. And to make things worse, it’s incredibly hard to be objective (as you’ve seen in my list of cognitive biases). If you allow your instincts or your preconceived notions to take over, then your metrics become like a mirror—you only see what you want to see. Data should be a tool for you to answer important questions, not a means of self-affirmation.
Second, don’t base every decision on the numbers alone. The numbers are objective, that’s true, but thanks to modern technology, there are too many of them. Data can be manipulated to tell you almost anything, and thanks to human imperfection, it’s practically impossible to ever come up with a completely unbiased, objective conclusion about anything. What’s important here is maintaining a healthy degree of confidence; feel free to use your metrics and numbers to form conclusions, but keep a shade of doubt in the back of your mind. Analytics aren’t perfect; accept that.
Though my hope was to create a detailed and valuable guide, I know this is inherently not a comprehensive one. To create a truly comprehensive guide on human bias and the tendency for errors in marketing would require far more resources than I have and, quite possibly, more knowledge about the human mind than we currently hold.
If there’s one bottom-line takeaway from this guide, it’s this: no matter how reliable your data is, it still requires a human mind for interpretation, and human minds are fallible. You can reduce this fallibility (as you should), but you can’t eliminate it, so instead expect it, compensate for it, and don’t let it compromise your campaign.
Want more information on content marketing? Head over to our comprehensive guide on content marketing here: The All-in-One Guide to Planning and Launching a Content Marketing Strategy.