
A list of cognitive biases to watch out for in user research

Understanding the psychology behind our decision making can lead to more informed choices. Here are the top cognitive biases to avoid.

Illustration by Anita Goldstein.

Daniel Schwarz

3.8.2021

7 min read

Cognitive biases are thought patterns that cause errors in judgment and rationality. Being aware of these biases and how they work can help us as designers create better user experiences and, ultimately, better web designs.


However, designers can be victims of bias too, especially when it comes to user research. This article will take a look at a few biases that might cause designers to make bad design decisions.


Before we dive in, let’s first mention the status quo bias. It may not be the most important bias, but if we fail to recognize it, it can hinder all our other efforts.


The status quo bias is a reluctance to explore improvement or change. This bias can occur when the current state of affairs doesn’t seem all that bad, but it can also happen when we’re demotivated by a design brief or working environment, or are simply feeling burned out.


If you’re ready to uncover your own cognitive biases (yes, we all have them!) and make your research more reliable, then read on. (Read more on neurodiversity in web design.)



Bandwagon effect


The bandwagon effect is the tendency to do or believe something simply because others do.


To avoid falling victim to this effect, it’s best not to voice agreement with stakeholders. Instead, let the research findings do the talking and allow stakeholders to express their stance via anonymous voting.


The bandwagon effect: A single person with arrows pointing in the direction of a large group of people.
The bandwagon effect is the tendency to do or believe something simply because others do.


Groupthink


Groupthink is a similar but more dangerous effect where the desire for harmony results in the suppression of dissenting viewpoints. The result is often an awkward mishmash of viewpoints that doesn’t actually represent anyone’s honest, independent opinion.



Ambiguity effect


The ambiguity effect leads us to avoid options whose outcomes are unknown and stick with what feels familiar.


It’s natural to favor investing our time and resources where we believe the risk will be worth the reward. When the likelihood of a reward is more ambiguous, we become more hesitant to investigate.


However, in design, the majority of outcomes are unknown simply because we’re not our users. This means that almost every decision we make involves taking a leap into the unknown. It’s scary and there can be a lot of dead-ends, but luckily there are things we can do to lower the risks when deciding which opportunities to explore.


  1. Stakeholder voting: More votes means more reliability.

  2. Strategic research: Start off with research methods that favor speed over accuracy (for example, test low-risk or low-fidelity mockups before testing high-fidelity mockups).



The ambiguity effect: three amorphic shapes labeled as "ambiguity" and one perfect circle marked as "clarity"
The ambiguity effect is the tendency to avoid the unknown and prefer the familiar.


Curse of knowledge


The curse of knowledge is when those who are better informed find it difficult to empathize with the less informed.


One scenario that comes to mind relates to user experience. As designers, we often assume that our experience with the product mirrors our users’ experience, when in fact we’ve spent hours and hours inadvertently mastering it. So what seems well-executed to us might actually be unfamiliar to users at first.


In order to work around the curse of knowledge and to truly validate a solution, we must take our own personal experience with a grain of salt and listen, empathize, and test.



Hard–easy effect


The hard-easy effect is when we overestimate the user’s ability to carry out difficult tasks and underestimate their ability to carry out simple tasks. This relates to the curse of knowledge, where we’re at a disadvantage by being the masters of our own designs.


Similarly but more specifically, the planning fallacy is the tendency to underestimate the time required to complete a task.


We can overcome these cognitive biases with a type of UX research called performance testing, where users are assigned specific tasks and the results yield a mix of quantitative and qualitative insights. Maze and Useberry are two useful tools for conducting performance testing.



Additional reasons to conduct UX research:


  • Overconfidence effect: When we feel highly confident in our answers, we actually tend to be wrong a lot of the time. Logic can be deceptive and dangerous!

  • Reactive devaluation: There’s always the risk of devaluing ideas because of how we feel about the person who came up with them.


Adapting our mindset for research means being objectively mindful of the things we’ve learned and encouraging open, honest, free-thinking dialogue rather than steering it.


Next, let’s discuss biases to avoid during research.



Cognitive biases to avoid during research



Insensitivity to sample size


Insensitivity to sample size causes us to underestimate variations in small samples.

Over the years we’ve become accustomed to the notion that the minimum sample size for research is five, and that “ten is better.”


While it’s true that a larger sample size means more reliable data, we also need to find the optimal balance between reliable data and budget constraints.


It takes a mixture of intuition and experience to know how many user testers to recruit on a test-by-test basis, but considering that we tend to underestimate variations in small samples, it’s always worth recruiting a few more testers just in case.


Insensitivity to sample size: Graphs marking the number of users who are being researched, and an enlarged selection of just a few select users within that group
Insensitivity to sample size is the tendency to underestimate variations in small sample sizes.
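
To see how much small samples can swing, here’s a rough, hypothetical sketch in Python (not from the original article). It assumes a task that 70% of all users can really complete, then simulates many usability tests with five testers versus fifty and reports how widely the observed completion rates vary. The 70% rate, sample sizes, and run count are illustrative assumptions.

  import random

  random.seed(1)

  TRUE_COMPLETION_RATE = 0.7  # hypothetical "real" task completion rate

  def observed_rate(sample_size):
      # Simulate one usability test: count how many testers complete the task.
      successes = sum(random.random() < TRUE_COMPLETION_RATE for _ in range(sample_size))
      return successes / sample_size

  for n in (5, 50):
      rates = [observed_rate(n) for _ in range(1000)]
      print(f"n={n:>2}: observed completion rates ranged from {min(rates):.0%} to {max(rates):.0%}")

With only five testers, the observed rate can land almost anywhere between 0% and 100%, even though the underlying task never changed; that spread is exactly the variation we tend to underestimate.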


Observer-expectancy effect


The observer-expectancy effect is when our own expectations of an experiment influence the actions of recruited research subjects.

  • Example: “We think you’ll do this and that.”

  • Outcome: With this expectation in mind, the recruit is then (subconsciously) influenced to carry out the task in this way.


If we’re biased toward our own beliefs, we actively look for information that confirms them while disregarding contradictory information (also known as confirmation bias). We also run the risk of unintentionally influencing the results.


To avoid falling victim to these effects, refrain from projecting your expectations onto research subjects.



Anchoring/focusing effect


Anchoring is the tendency to over-focus on one snippet of information (usually the first snippet we come across) and frame all subsequent snippets in relation to this anchored one. This can then cause us to misconstrue the value of all snippets of information.


When we over-focus on any snippet of information, we run the risk of failing to understand the information holistically.


Although we should be mindful of the things that we learn while conducting user interviews so that we can ask the right follow-up questions, we should also avoid overthinking the feedback until we’re ready to synthesize it objectively.


A nice way to avoid falling down this rabbit hole is to set an agenda — script the interview beforehand and follow it loosely.



Gambler’s fallacy


The gambler’s fallacy is the dangerous mistake of thinking that past results affect the probability of future results. For instance, if five out of five research studies lead to a unanimous result, that doesn’t mean that five additional studies won’t completely turn the tables.


Never quit a research study prematurely. If you recruit ten test subjects, make sure to test all ten of them.



Compassion fade


When dealing with user feedback, large amounts of anonymous feedback can seem less important than small amounts of feedback from identifiable users. This is known as compassion fade.


While it’s important to empathize with identifiable users, all data should be observed objectively. Otherwise, we might end up spending irrational amounts of time solving issues that affect only a small number of individuals while neglecting to solve those affecting a larger number of people.



Law of the instrument


The saying goes, “If all you have is a hammer, everything looks like a nail.”


The law of the instrument refers to the bias towards familiar tools or methods, even if they aren’t right for the task at hand.


For example, designers often use heatmaps to diagnose issues when really they’re meant for identifying issues.


Another example is using surveys to validate changes that are better validated using A/B tests. To overcome this, diversify your toolbox and learn when you should and shouldn’t use each tool.



Law of the instrument: A design program and a mouse cursor
Law of the instrument is the bias towards familiar tools or methods, even if they aren’t right for the task at hand.


Additional cognitive biases to avoid during user interviews:


  • Ostrich effect: When encountering negativity, we shouldn’t bury our heads in the sand (a trait mistakenly attributed to ostriches). Instead, we can frame the challenge as an opportunity.

  • Empathy gap: To avoid underestimating the strength of feelings, ask users to rate how they feel on a scale of one to ten.

  • Survivorship bias: In a research context, this is the tendency to concentrate on those that ‘survived’ a usability test, even though those that didn’t make it may have just as many insights to share. This bias can also exist in reverse.



Cognitive biases to avoid when synthesizing research data


Obtaining useful data from research is one thing, but are we sidestepping our biases while synthesizing the research too?


Let’s dive into some more biases.



Stereotyping


Stereotyping is when we attribute assumed characteristics to users, which can lead us to design for users or problems that don’t exist.


As an example, stereotyping a ‘stressed mother’ as somebody who’s ‘stressed because of motherhood’ can make us think that the issue is motherhood, leading us away from the real issue at hand.


When synthesizing research, note only what you explicitly observe. Make no assumptions.



Illusory correlation


Similar to stereotyping, illusory correlation is when we inaccurately identify a relationship between two unrelated things. Rare coincidences tend to be more memorable, so we assume that because we’re able to recall the instance from memory, then it must be true or important.


Synthesizing data is like doing a jigsaw puzzle. Pieces can sometimes appear to fit together, but don’t. Again, try to avoid making assumptions.



Framing effect


The framing effect is the risk of reaching different conclusions depending on how the research data is framed. To combat this, try to frame the data in multiple ways.


For even better results, ask colleagues and stakeholders to do the same.



Framing Effect: Two cups, one reading "only 10% fat" and the other reading "90% fat free"
Framing Effect is the risk of reaching different conclusions depending on how the research data is framed.


Confirmation bias


Arguably one of the most well-known cognitive biases, confirmation bias (or similarly, expectation bias), causes us to focus only on information that aligns with our expectations.


To overcome this bias, expectations should always be set aside. Otherwise, we run the risk of selectively hearing only the feedback we want to hear, which defeats the purpose of conducting research in the first place.



Confirmation Bias: Two overlapping circles, one reading "Existing beliefs" and the other reading "Facts and evidence." The overlapping area between them marks the information we focus on.
Confirmation Bias is the tendency to focus only on information that aligns with our expectations.


Additional biases that affect the synthesis of research:


  • Clustering illusion: This is when we overestimate the significance of streaks or anomalies in large samples of data, which can cause us to miscalculate what the data really means.

  • Illusory truth effect: It’s easier to believe something that’s simpler or has been stated multiple times. With that in mind, we should consciously embrace complexity and be wary of recurring feedback: a claim isn’t necessarily true just because many people repeat it.

  • Information bias: This is when we seek new information in the hope of changing the outcome of an experiment. When facing a clearly negative outcome, avoid seeking alternative information that you know deep down won’t turn things around.



Conclusion


As designers, we want to make awesome decisions. However, we must make sure that these decisions are grounded in rationale. In fact, they should make sense not only from one viewpoint but from many, and this means learning to identify our cognitive biases.


It’s impossible to sidestep cognitive bias completely, but as long as we’re mindful of the way we act and think, and how we interpret the information we see and hear, then our research should be reliable and thorough, at the very least.

