
Improving Security by Ignoring Vulnerabilities

Lori MacVittie
Published November 05, 2018

Generally speaking, the statement "ignore vulnerabilities" is not something you expect to hear from a security company. After all, vulnerabilities are responsible for breaches of such magnitude that they fill our feeds for months with post-mortem commentary, analysis, and recommendations.

And you certainly don't see "ignore vulnerabilities" paired with the notion of "improving security." 

Now you have - though as with most advice, it comes with caveats and qualifications.

You certainly shouldn't ignore all vulnerabilities, but it turns out there is a class of vulnerabilities you can safely ignore right now - or at least deprioritize for a rainy day. I stumbled upon this concept while reading the 2018 State of Open Source Vulnerability Management from WhiteSource Software.

In addition to some very interesting statistics, the paper puts forth the idea that open source vulnerabilities can be grouped into two categories: ineffective and effective.

The premise of WhiteSource's categorization is that some vulnerabilities are ineffective - i.e., they are not exploitable because the vulnerable code is never invoked by custom code. Being able to analyze and differentiate, so the story goes, means security teams and developers can focus on the vulnerabilities deemed effective, reducing time and effort while improving the overall security of the application.

For example, consider a custom application that relies on an open source component containing a vulnerable function. Under WhiteSource's definition, the vulnerable function in this example might be declared "ineffective" because it is never invoked by the custom application. Astute readers will note that the vulnerable function could still be invoked by a function in an open source component (either another component or the same one) and thus be rendered effective. When I asked WhiteSource about this, they expanded on their categorization, noting that it takes this possibility into account. If the vulnerable code is invoked either from custom code or indirectly via another open source component, it is labeled "effective." Conversely, if there is no path - direct or indirect - that invokes the vulnerable function, it is labeled "ineffective."
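
To make the distinction concrete, here is a minimal sketch in Python. Every name in it - the component, its functions, and the flaw itself - is invented for illustration; the point is only that reachability from the application's code, whether direct or via another library function, is what determines the label.

import subprocess

# --- Imagine these functions live in an open source component ---

def parse_input(data):
    # Harmless helper that the application actually uses.
    return data.strip().lower()

def run_legacy_command(cmd):
    # Hypothetical vulnerable function: shell=True permits command
    # injection when cmd includes attacker-controlled input.
    return subprocess.run(cmd, shell=True, capture_output=True)

def refresh_cache(source):
    # Library-internal caller of the vulnerable function.
    return run_legacy_command("fetch " + source)

# --- Custom application code ---

def handle_request(user_input):
    # The only library function the application invokes is parse_input, so
    # no path reaches run_legacy_command: the flaw is "ineffective."
    return parse_input(user_input)

def rebuild(source):
    # Adding a call like this one opens an indirect path
    # (rebuild -> refresh_cache -> run_legacy_command),
    # and the very same flaw would now be labeled "effective."
    return refresh_cache(source)

if __name__ == "__main__":
    print(handle_request("  Hello World  "))  # -> "hello world"

Nothing about the flaw itself changes between the two cases; only the existence of a call path does.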

Given that WhiteSource's research determined not only that 96.8% of developers rely on open source components, but also that 7.5% of all projects are vulnerable, being able to prioritize which vulnerabilities to focus on would certainly be a boon. WhiteSource further found that a whopping 64% of open source products contain only ineffective vulnerabilities, which they posit can be safely ignored.

Now, while I'm not convinced that we should just blithely ignore vulnerabilities in any code because they aren't actively invoked, I do see value in using such a distinction to prioritize vulnerability management. By focusing on vulnerable code that is actively invoked, developers and security professionals can immediately improve the overall security of an application. This also makes better use of senior developers, who, WhiteSource's report found, spend more time on average addressing vulnerabilities than junior developers do.

Some sort of prioritization method needs to be in place. WhiteSource stated that nearly 3,500 open source vulnerabilities were reported in 2017, an increase of 60% over 2016. Not all of those 3,500 reported vulnerabilities affect every application or organization, but we should remember that these numbers are accretive. That is, the 3,500 are new vulnerabilities added to an ever-growing running total.

Needless to say, there are a lot of vulnerabilities in code - custom and open source. Being able to prioritize remediation based on whether they are "effective" or "ineffective" is in line with emerging security strategies that score risk based on existential threat in addition to other factors. The existential threat of an ineffective vulnerability is nearly non-existent. That said, ignoring ineffective vulnerabilities may not be the best long-term approach, because such vulnerabilities may eventually become effective. Changes to custom code as features are added or enhanced, as well as changes over time to the open source components themselves, can open a path that invokes a previously unreachable vulnerable function. This is one of the reasons why source code analysis specifically for vulnerabilities should take place continuously, or at the very least during the final build.
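
As a rough illustration of what such analysis involves, the sketch below runs a breadth-first reachability check over a pre-built call graph. The graph, entry points, and function names are hypothetical, and real tools must derive the graph from source or bytecode and cope with dynamic dispatch, reflection, and the like - this is not any particular vendor's method, just the underlying idea.

from collections import deque

def effective_vulnerabilities(call_graph, entry_points, vulnerable_funcs):
    # Breadth-first walk of the call graph from the application's entry
    # points; any vulnerable function we can reach is "effective,"
    # anything we cannot reach is "ineffective."
    seen = set()
    queue = deque(entry_points)
    while queue:
        func = queue.popleft()
        if func in seen:
            continue
        seen.add(func)
        queue.extend(call_graph.get(func, ()))
    return vulnerable_funcs & seen

# Hypothetical call graph (caller -> callees) extracted by static analysis.
call_graph = {
    "app.handle_request": ["lib.parse_input"],
    "app.rebuild": ["lib.refresh_cache"],
    "lib.refresh_cache": ["lib.run_legacy_command"],
}

# Only handle_request ships in this build; rebuild does not.
print(effective_vulnerabilities(
    call_graph,
    entry_points={"app.handle_request"},
    vulnerable_funcs={"lib.run_legacy_command"},
))  # -> set(), so the flaw is "ineffective" in this build

Re-running a check like this on every build is what keeps an "ineffective" label from silently going stale as the code and its dependencies change.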

But in the interest of meeting deadlines and using developer time efficiently, pushing the "ineffective" vulnerabilities to the back of the queue so the "effective" vulnerabilities can be addressed immediately might just be one of the better ways developers can improve security right now.