Strategies

4 Areas Where Infosec Facts and Fiction Clash: Mind the Gap Pt. 3

There are gaps in security programs between what we think is going on and what’s really going on. In this final part of our trilogy, we examine the possible causes of these gaps and solutions to close them.
May 21, 2019
7 min. read

As we’ve seen in this series, security defenders’ perception of a security program can differ from the reality. Part 1 examined three key gaps that lead to incomplete risk management processes. Part 2 explored the gap in critical areas of perception of risk and defense between security leadership and security technicians, and how it can lead to misaligned resources, unnecessary blame-storming, and diluted effectiveness. In this final installment, we discuss possible causes for these gaps—and explore solutions to close them.

To summarize the gaps we’ve discussed so far, we have:

  1. Performing a deficient risk analysis by not doing a complete asset inventory
  2. Acting on biased threat analysis after being swayed by alarming headlines
  3. Failing to select the controls that would mitigate the most likely and impactful risks
  4. Allowing perceptions of risk and defense to diverge between security leadership and technicians

These four gaps can weaken security programs and represent potential holes that attackers could exploit.

Why do these gaps exist?

The TL;DR is bias. The long version, which follows, explores three very human mental shortcuts that can obscure or exaggerate the facts.

Availability heuristic

The first is the availability heuristic1, which causes people to overestimate the likelihood of something they keep hearing about. This is natural in an age when mass media and social media constantly feed us sensationalized stories about rare and terrifying events. By definition, most news reports on extraordinary events. Scary, overwhelming threats get more attention and lodge more firmly in our memories. Over time, our brains respond to frequent news of catastrophic threats, such as advanced, state-sponsored hackers wielding zero-day exploits bolted onto unstoppable malware, by concluding that such threats are very likely to happen, skewing our analysis of risk. In reality, simple, common attacks are the more credible threat. When Advanced Persistent Threats (APTs) do occasionally storm the gates, they mostly rely on spear phishing (just like most criminals), which is already highly effective. Even the NSA has reported that no zero-day exploits were used in any high-profile breaches over the last 24 months2.

Drunkard’s Search

The second is the Drunkard’s Search, which gets its name from this old joke:

A cop sees a drunkard groping around under a streetlight and asks what he’s doing. The drunkard says he’s looking for his car keys. When the cop asks where he lost them, the drunkard points to a nearby parking lot. Confused, the cop asks why he’s searching here under the lamp. The drunkard replies, "This is where the light is."3

This bias refers to the fact that people tend to search for what they need in the easiest places rather than in the places most likely to yield useful results, even if the “easy” places end up requiring more work overall. In security, this could mean using outdated threat lists or compliance requirements as a guide. Combine this with the powerful influence of the news cycle, and you can see why APTs are seen as bigger problems than the more common threats of phishing or credential theft. You can also see why some security practitioners prefer to use data from their own past experiences, which is easily available, rather than pursue more rigorous risk analysis techniques.

McNamara fallacy

A cousin to the Drunkard’s Search is the McNamara fallacy4, which refers to focusing on a few preferred measurements while ignoring other data. It’s named after U.S. Secretary of Defense Robert McNamara, who measured success in the Vietnam War solely by enemy losses rather than considering whether the larger, less quantifiable goals of the conflict were being achieved. When the McNamara fallacy is at work, hard-to-measure factors are dismissed as unprovable or unimportant. This is an easy trap to fall into with security. Many factors, such as attacker capabilities or potential vulnerabilities, can be very hard to quantify, but that doesn’t mean they should be excluded from a risk analysis. You can also see how security professionals, whether they’re deep in the weeds or drawing on decades of experience in one particular industry, can be misled into thinking theirs is the only viable perspective in a security program.

Closing those bias gaps

Given how easily these biases can lead us astray, how can security professionals become more effective? The first step is simply being aware of the biases and their effects. Beyond that, here are some other ways to help close the gaps.

Improving inventory

We’ve talked about the importance of an accurate, up-to-date inventory to a comprehensive risk analysis. One useful tip is to aggregate and integrate data from internal IT operational and security tools such as host inventory assessment scanners, network scanners, endpoint agents, change control databases, configuration managers, and Cloud Access Security Brokers (CASBs). You can also find good ideas within NIST Special Publication 1800-5C, which includes a guide on “IT Asset Management” with advice on “physical and virtual assets and provid[ing] management with a complete picture of what, where, and how assets are being used.”5 The security industry is also starting to address this problem with new products for cybersecurity-focused inventory6 and asset management7.
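To make the aggregation idea concrete, here is a minimal Python sketch of merging asset records from several tools into one inventory and flagging assets that only one tool can see. All tool names, fields, and records here are hypothetical placeholders, not any particular product’s API:

```python
from collections import defaultdict

# Hypothetical exports from three sources; in practice these would come from
# your scanner, endpoint agent, and change control database APIs or CSV dumps.
network_scan = [{"host": "web01", "ip": "10.0.0.5"}, {"host": "db01", "ip": "10.0.0.9"}]
endpoint_agents = [{"host": "web01", "os": "Ubuntu 18.04"}, {"host": "laptop42", "os": "Windows 10"}]
change_db = [{"host": "db01", "owner": "dba-team"}]

def merge_inventories(**sources):
    """Merge per-tool asset lists into one record per host,
    tracking which tools saw each asset."""
    merged = defaultdict(lambda: {"seen_by": set()})
    for tool, records in sources.items():
        for record in records:
            asset = merged[record["host"]]
            asset.update({k: v for k, v in record.items() if k != "host"})
            asset["seen_by"].add(tool)
    return merged

inventory = merge_inventories(
    network_scan=network_scan,
    endpoint_agents=endpoint_agents,
    change_db=change_db,
)

# Assets visible to only one tool are the interesting gaps: they may be
# unmanaged, unmonitored, or missing from your other systems of record.
for host, asset in inventory.items():
    if len(asset["seen_by"]) == 1:
        print(f"{host}: only seen by {sorted(asset['seen_by'])[0]} -- investigate")
```

The real work in such a pipeline is entity resolution (deciding that a scanner’s IP address and an agent’s hostname refer to the same machine); keying on hostname, as above, is the simplest possible stand-in for that step.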

Quantify opinion into trustworthy data

There may not always be hard data available for the decisions you need to make today, but most organizations can tap subject matter experts (SMEs) in security, IT operations, and business requirements. You can turn their opinions into usable data, while counterbalancing biases and guesstimates, using what’s called “calibrated measurements.” You simply ask for an SME’s opinion, and also ask for a measure of their confidence in that answer8. For example, an SME might tell you, “The likelihood of attack on our organization from a SQL injection is 75%, but I’m only 50% certain of my answer.” This calibration process helps surface the bias and inaccuracy that come from relying on qualitative data. Polling SMEs with calibrated measurements does involve training them9, some process10, and a bit of time to get going, but it’s one of the best ways to harness pockets of expertise within your organization.
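As a sketch of how calibrated answers might be pooled, here’s a small Python example. The confidence-weighted average below is our own simplification for illustration; the calibration literature cited above offers more rigorous aggregation methods:

```python
# Hypothetical calibrated SME answers to one question:
# "How likely is a successful SQL injection attack on us this year?"
estimates = [
    {"sme": "app-sec lead", "likelihood": 0.75, "confidence": 0.50},
    {"sme": "dba",          "likelihood": 0.40, "confidence": 0.80},
    {"sme": "consultant",   "likelihood": 0.60, "confidence": 0.90},
]

def pooled_likelihood(estimates):
    """Confidence-weighted average, so low-confidence answers count for less."""
    total_weight = sum(e["confidence"] for e in estimates)
    return sum(e["likelihood"] * e["confidence"] for e in estimates) / total_weight

print(f"Pooled likelihood: {pooled_likelihood(estimates):.2f}")  # ~0.56 here
```

Recording the confidence separately also lets you score SMEs over time (for example, with Brier scores) and see whose 90% really means 90%.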

Addressing governance and informational gaps

Defined roles and prescribed duties within an organization eliminate ambiguity and confusion, so it’s in CISOs’ best interests to clearly define and communicate them, especially when it comes to things like risk ownership and responsibility. Mismatches in perception and security understanding shrink when named roles are made specifically responsible for a task, and when processes define how those roles gather and share information with other teams. A helpful tool for this is a RACI diagram11.

RACI diagrams are named after the four commonly assigned duties: Responsible (who will do the work), Accountable (who is in trouble if the work isn’t done correctly), Consulted (who provides information for the work), and Informed (who is kept in the loop on the work). The duties are arranged in a matrix, with tasks as rows, duties as columns, and the assigned roles in the cells. Here is an example of a RACI diagram for the application inventory function within an organization:

Task | Responsible for | Accountable for | Consulted on | Informed about
Identify the apps | IT | Senior Leadership | Asset owners | CISO
Value the apps | Asset owners | Senior Leadership | IT | CISO
Assess risk to the apps | CISO | Asset owners | IT | Senior Leadership
Decide on acceptable risk level for apps | Senior Leadership | Asset owners | CISO, Legal, Auditors | IT
Implement risk mitigation on apps | IT | CISO | Asset owners | Senior Leadership

These assignments may be different for your organization; the example is here for illustrative purposes (but we won’t be offended if you use it verbatim). You also don’t need to use these specific responsibilities; the goal is to ensure that every key aspect of risk management is clearly communicated to all parties involved.
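One nice side effect of keeping the RACI matrix as data rather than as a slide is that you can sanity-check it automatically. Here’s a minimal Python sketch (the matrix mirrors the example table above; the “exactly one Accountable per task” rule is standard RACI practice):

```python
# The example RACI matrix as data: each task maps RACI duties to roles.
raci = {
    "Identify the apps": {"R": ["IT"], "A": ["Senior Leadership"],
                          "C": ["Asset owners"], "I": ["CISO"]},
    "Value the apps": {"R": ["Asset owners"], "A": ["Senior Leadership"],
                       "C": ["IT"], "I": ["CISO"]},
    "Assess risk to the apps": {"R": ["CISO"], "A": ["Asset owners"],
                                "C": ["IT"], "I": ["Senior Leadership"]},
    "Decide on acceptable risk level for apps": {"R": ["Senior Leadership"], "A": ["Asset owners"],
                                                 "C": ["CISO", "Legal", "Auditors"], "I": ["IT"]},
    "Implement risk mitigation on apps": {"R": ["IT"], "A": ["CISO"],
                                          "C": ["Asset owners"], "I": ["Senior Leadership"]},
}

def check_raci(raci):
    """Flag common RACI mistakes: a task nobody is responsible for,
    or accountability shared across more than one role."""
    problems = []
    for task, duties in raci.items():
        if not duties.get("R"):
            problems.append(f"{task}: no one is responsible")
        if len(duties.get("A", [])) != 1:
            problems.append(f"{task}: exactly one role must be accountable")
    return problems

for line in check_raci(raci) or ["RACI matrix looks consistent"]:
    print(line)
```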

Wrap up

There are many ways that security professionals can drift away from running an efficient and effective security program, often because of common human biases and misunderstandings. We’ve presented a few techniques that we think can help close these gaps, saving time and money in building a strong security program. In the security profession, we must always keep striving to improve our methods and practices—because our opponents are doing the same.

Authors & Contributors
Raymond Pompon (Author)
Footnotes

1 https://en.wikipedia.org/wiki/Availability_heuristic

2 https://www.fedscoop.com/nsa-no-zero-days-were-used-in-any-high-profile-breaches-over-last-24-months/

3 https://en.wikipedia.org/wiki/Streetlight_effect

4 https://en.wikipedia.org/wiki/McNamara_fallacy

5 https://www.nccoe.nist.gov/publication/1800-5/VolC/

6 https://blog.jeremiahgrossman.com/2018/03/my-next-start-up-bit-discovery.html

7 https://www.darkreading.com/axonius-unsexy-tool-wins-rsac-innovation-sandbox-/d/d-id/1334055

8 https://en.wikipedia.org/wiki/Calibrated_probability_assessment

9 https://www.slideshare.net/tonymartinvegue/expert-estimation-and-calibration-training-siracon-2019

10 https://www.rsaconference.com/writable/presentations/file_upload/grc-w05-how_to_measure_anything_in_cybersecurity_risk.pdf

11 https://en.wikipedia.org/wiki/Responsibility_assignment_matrix
