We are all driven by our personal biases – whether it’s in choosing which organization to work for or which school to send our children to, or in something as simple as choosing an outfit to wear to a casual evening out. We look for evidence that supports our choices. Unfortunately, cognitive biases or reasoning errors make us ignore other compelling evidence that could have led us to make different decisions.
In the world of security leaders, it is no different. Cybersecurity decisions at all levels of an organization are impacted by cognitive biases. On the other side of the fence, cybercriminals have been found to exploit cognitive biases when planning their attack strategies. It is, therefore, necessary to factor in the effect of cognitive bias while building an organization's security strategy.
Building a strategy to protect against cyber threats requires understanding and prioritizing efforts to address existing or potential threats. Our efforts to build harmony between the best characteristics of humans and the best characteristics of technology to tackle cybersecurity challenges depend on understanding and overcoming bias.
Let’s look at three common cognitive biases, explore how they impact different areas of cybersecurity decision making, and finally identify strategies for mitigating the negative impact of cognitive bias.
Using top-of-mind info to set security policy (Availability bias)
Availability bias impacts what cybersecurity experts perceive as high priority threats. The more often someone encounters specific information, the more readily accessible it is in their memory. If the news cycles focus on ransomware or specific types of external threats, that type of threat will be top-of-mind, which may drive leaders to overestimate the likelihood of being targeted with such an attack. In reality, reports seen on the news may not even apply to their industry or may be an extreme outlier.
This is availability bias in action, where a high-profile breach could cause enterprises to ignore or downplay the threats posed by malware, poor patching processes or the data-handling behaviour of employees. Relying on what’s top of mind is a common human decision-making tool but can lead to faulty conclusions. At an organizational level, availability bias can influence the allocation of resources and can lead to a misinterpretation of risk.
Using people’s common attributes to set security policy (Representativeness Bias)
Grouping people together based on specific characteristics or attributes can be both convenient and effective, but it also introduces the risk of representativeness bias. This occurs when we erroneously group people (or other things) together based on qualities that are considered normal or typical for that group. For instance, if you made the statement, "older people are riskier users because they are less technologically savvy than their younger counterparts," you would likely observe affirmative nods from around the room. However, current research suggests that younger people are actually far more likely to share passwords for certain services, and they often reuse passwords across domains. If sharing a streaming service log-in is ultimately the same as sharing banking information or corporate information due to reused credentials, the younger user is far riskier than the older user.
Overcoming representativeness bias through an understanding of individual human behaviour is critical to security solutions that want to address human error and/or human risk factors in protecting data.
Purchase decisions affected by the framing of words (Framing Effect)
The framing effect is a type of cognitive bias triggered by how a choice is worded. The way information is presented, or framed, shapes our purchasing decisions. People are biased towards options framed as positive, "sure thing" outcomes. They are also more willing to choose riskier, or more expensive, options when faced with a potential loss. The outcome of the framing effect in the cybersecurity industry is that decision-makers may choose overkill solutions that address specific, low-probability risks. While all-or-nothing security may seem like a sure thing, and a way to avoid risks, bloated solutions can negatively impact employees' ability to actually do their jobs.
Framing effects are somewhat fragile, and their impact depends on the one-sided nature of the phrasing of a question. Buyers of security solutions can overcome the impact of framing effects by slowing down and thinking more analytically about the problems they are trying to solve, and the suggested efficacy of the solutions offered.
How to minimize bias in cybersecurity decision making
These biases are only a small sample of how the cybersecurity industry is shaped by human decision making. To address the impact of cognitive bias, we must focus on understanding people and how they make decisions, at both the individual and organizational level, in the cybersecurity industry. This means raising awareness of common cognitive biases across organizations and within our security teams to better identify situations where critical decisions are susceptible to the negative impact of mental shortcuts.
Beyond awareness, analytics can also help remedy some cognitive biases and their accompanying concerns. For instance, instead of relying on anecdotal or stereotypical assumptions that certain user groups are riskier than others, use of behavioural analytics can help by building a data-driven understanding of human behaviour and risk. The ability to apply security controls without relying on broad, inflexible rules (whether for a specific group, or for an entire agency) also solves the problem of overkill cyber solutions that may seem appealing due to availability bias and framing effects.
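To make the contrast with representativeness bias concrete, the idea above can be sketched in a few lines of Python. This is a hypothetical, minimal illustration, not a real behavioural-analytics product: the event data, metric, and thresholds are invented, and the `flag_anomalies` helper simply compares each observation against that individual user's own baseline rather than against a demographic stereotype.

```python
from collections import defaultdict
from statistics import mean, pstdev

# Hypothetical event log: (user, metric) pairs, e.g. files downloaded
# per day. A real behavioural-analytics system would combine many signals.
events = [
    ("alice", 4), ("alice", 5), ("alice", 6), ("alice", 5),
    ("bob", 2), ("bob", 3), ("bob", 2), ("bob", 40),  # bob's one-day spike
]

def per_user_baselines(events):
    """Group observations by individual user, not by demographic group."""
    by_user = defaultdict(list)
    for user, value in events:
        by_user[user].append(value)
    return by_user

def flag_anomalies(events, threshold=3.0):
    """Flag observations that deviate from that user's OWN average behaviour."""
    by_user = per_user_baselines(events)
    flags = []
    for user, values in by_user.items():
        mu = mean(values)
        sigma = pstdev(values) or 1.0  # guard against zero variance
        for v in values:
            if abs(v - mu) / sigma > threshold:
                flags.append((user, v))
    return flags

print(flag_anomalies(events, threshold=1.5))  # only bob's spike is flagged
```

The design point is that risk is scored per individual: bob's spike stands out against his own history, while alice's routine variation does not, so no coarse group rule ("users like X are risky") is ever applied.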
The bottom line is that cognitive biases shape our cybersecurity decisions from the keyboard to the boardroom, and these decisions ultimately determine the effectiveness of our cybersecurity solutions. By improving our understanding of biases, it becomes easier to identify and mitigate the impact of flawed reasoning and decision-making conventions. More tangibly, understanding user behaviour at the individual level can also help minimize the degree to which cognitive biases drive an organization’s security posture.
By Margaret Cunningham, Principal Research Scientist, Forcepoint