If you aren’t aware of Drupalgeddon 2, then you’ve either been living off the grid or don’t use the popular content management system (CMS). For those unaware, Drupalgeddon 2 refers to two severe security flaws in the Drupal CMS that enable remote code execution (RCE) of dangerous proportions. The two vulnerabilities are the darling du jour of cryptominers seeking cheap and easy resources to exploit in their quest to hit it big in the cryptocurrency mining game.
After the publication of two severe security flaws in the Drupal CMS, cybercrime groups have turned their sights on this web technology in the hopes of finding new ground to plant malware on servers and make money through illegal cryptocurrency mining.
Their efforts and expectations were fully rewarded, as the two vulnerabilities—CVE-2018-7600 and CVE-2018-7602—left over one million websites vulnerable to hacks if they didn’t receive immediate updates.
—Bleeping Computer, “Drupal Sites Fall Victims to Cryptojacking Campaigns,” May 9, 2018
I’m not going to say anything more about that except if you’re running a vulnerable version of Drupal, get off the Internets and patch right now.
This is the equivalent of a cross-site scripting (XSS) attack: injecting a malicious script into a trusted page in the hopes that victims will never notice the “bad script” that quietly begins consuming resources in the background. Crypto-jackers have simply taken advantage of a vulnerability to inject the script at the host rather than on the fly.
Now, when the script is injected is almost irrelevant to the discussion. The fact that it’s in a trusted page is what’s important. Because, you see, once it’s in the page, the browser is going to load it and evaluate it. Because it’s not smart enough to distinguish between scripts you put there and scripts someone else put there. Browsers are like honey badgers—they don’t care.
The exploitation of this unconditional trust is not new, and there are all sorts of app security mechanisms designed to protect against the insertion as well as the execution of a script after the fact. See, there’s an HTTP header called Content-Security-Policy (CSP) that restricts the browser to loading and evaluating resources only from sources you specify. It’s based on whitelisting principles, and you can get very granular with resource types and allowed sites if you so desire.
Most don’t, according to BuiltWith, a site that tracks the technologies websites use. Based on its latest data, a paltry 0.2% of sites on the Internet include CSP headers.
Digging further, 8.4% of the Quantcast Top 10,000 have used CSP headers. Which sounds better until you do the math. That’s only 840 sites. By vertical, shopping and business are the most likely to use CSP to protect visitors (and themselves) while news, sports, and governments are much, much less likely to employ this option.
The question is, why? Why is this well-documented security header rarely used to protect apps and their visitors?
The answer may be that we’re relying on developers to insert the header. It’s not a default option, after all. It must be explicitly added to every HTTP response.
So why don’t they?
The bulk of applications today—especially web applications—rely on externally sourced and loaded resources. Fonts, images, scripts for social login, scripts to include your tweet stream, scripts to load data dynamically from back-end services. Scripts, scripts, scripts. According to HTTP Archive, which tracks so many interesting tidbits about the performance and composition of web apps over time, the median desktop website requests 20 external scripts. For mobile, the median is 19.
To take advantage of the protections offered by CSP—mainly, preventing XSS or unauthorized script from loading—developers would need to list every one of the domains from which those external scripts would be loaded.
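To make that concrete, here’s a small Python sketch that assembles a policy value from a list of trusted script origins. The domains are invented for the example, not taken from any real site:

```python
# Hypothetical sketch: building a CSP value from an allowlist of
# script origins. The domain names below are made-up examples.

def build_csp(script_sources):
    """Assemble a Content-Security-Policy value that allows scripts
    only from the page's own origin plus the given origins."""
    directives = [
        "default-src 'self'",
        "script-src 'self' " + " ".join(script_sources),
    ]
    return "; ".join(directives)

allowed = [
    "https://cdn.example.com",        # e.g., a CDN for your own bundles
    "https://widgets.example.net",    # e.g., a social widget provider
]
print(build_csp(allowed))
# default-src 'self'; script-src 'self' https://cdn.example.com https://widgets.example.net
```

With a median of 20 external scripts per site, that list gets long, and every third-party script added by marketing means another entry.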
There are three different ways to get this header into a response:
1. Add it to the web server (or proxy) configuration.
2. Set it programmatically in application code, on every response.
3. Embed an equivalent meta element (http-equiv="Content-Security-Policy") in the HTML itself.
The first option tends to make this security solution the responsibility of Ops. The latter two throw the responsibility on the developers.
None of them, based on available data, are used very often.
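For the code-based route, a minimal sketch might look like the following. It assumes a WSGI application, and both the policy value and the names are illustrative:

```python
# Sketch: WSGI middleware that stamps a Content-Security-Policy header
# onto every response. The policy value here is an illustrative
# assumption, not a recommendation.

CSP_VALUE = "default-src 'self'; script-src 'self'"

def csp_middleware(app):
    """Wrap a WSGI app so each response carries a CSP header."""
    def wrapped(environ, start_response):
        def start_with_csp(status, headers, exc_info=None):
            # Drop any pre-existing CSP header, then add ours.
            headers = [h for h in headers
                       if h[0].lower() != "content-security-policy"]
            headers.append(("Content-Security-Policy", CSP_VALUE))
            return start_response(status, headers, exc_info)
        return app(environ, start_with_csp)
    return wrapped

def demo_app(environ, start_response):
    """A trivial app, just to show the middleware in action."""
    start_response("200 OK", [("Content-Type", "text/html")])
    return [b"<html></html>"]

app = csp_middleware(demo_app)
```

Most frameworks offer a similar middleware hook; the point is that it’s one interception point, not a per-page chore. But the allowlist still has to live somewhere, and keeping it current is the real cost.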
From a developer’s perspective, I understand why that might be. Maintaining a list of trusted sources and ensuring they’re included in a security header is boring. It’s tedious and falls into the category of yak shaving (tasks developers do that aren’t directly related to coding, like configuring servers and systems and security headers).
But that’s not necessarily the reason CSP headers aren’t used. It’s more likely that they introduce technical and operational debt. Embedding the header in the web server configuration means maintaining a unique configuration per application. That adds operational debt and introduces yet another gate that must be passed on the way to production. Using code-based solutions incurs technical debt, because any change to the trusted sources of scripts automatically means a change to code.
Or it could be that security standards don’t include CSP headers because the majority of security professionals aren’t also HTTP experts.
They should be. Or at least a couple of them should be. Given that the majority of apps—whether mobile or web, apps or APIs—rely on HTTP as their de facto transport layer, some amount of HTTP protocol-fu should be a requirement.
Regardless, from a security pro’s perspective, I’m appalled that such a simple solution is casually ignored. No, it’s not perfect. No, it isn’t 100% guaranteed. But gosh darn it, it might just stop the Drupalgeddon 2 cryptominer script (or others like it) from loading and executing because it relies on an externally loaded script that is not trusted by the developer.
The Need for AppSecOps
This is why security—at least that portion of security that focuses on app protection and defense—must shift left and become intimately familiar with applications and how to secure them. Not access to them. Not access to their host OS. Not networking. Not systems.
Apps. Layer 7 and above. HTTP. JSON. HTML. Web sockets. The whole kit and caboodle.
We need AppSecOps. You can call them SecDevOps or DevSecOps or Bob if you want. Whatever you want to call them, we need them. We need security pros with an affinity for apps and a nose for operational solutions to protect and secure apps against threats, both imminent and active. There’s as pressing a need for this cross-discipline professional as there was for DevOps and site reliability engineering (SRE).
The data tells us this need is existential. Thirty percent of the top breaches involved attacks on web applications. Sixty-two percent were exploits of vulnerabilities. And seventy-seven percent were carried out by botnets, not individuals.
We have developers that specialize in data science, in embedded systems, in hardware, and in UX. But we don’t seem to have the same level of specialization in security. We have network security, physical security, and general security, but we don’t yet have specialization in app security at the app layer. We lock the doors to the network and then leave the windows in apps wide open. And it’s through those windows that the bulk of attack activity is occurring today. Troy Mursch’s Bad Packets Report found nearly 50,000 websites infected with cryptomining scripts. Obviously, we want to prevent the infection in the first place but barring that, let’s stop them from executing.
Security needs to shift left into the app development cycle. That means security professionals who can ride along from dev to deployment to find solutions that can protect and defend apps against modern threats. It means security professionals who speak HTTP and HTML as well as they speak the language of compliance. We need people who are focused on app protection from an application point of view.
We need AppSecOps. And we need them sooner rather than later.
If you’d like to learn more about Content Security Policy headers:
If you want to check if your site uses Content Security Policy headers, try one of these sites:
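Or, if you’d rather check by hand, a few lines of Python can inspect a response for the header. This is a sketch; the live-fetch part is left commented out so it stays offline:

```python
# Sketch: check a set of response headers for a Content-Security-Policy.
# The lookup is case-insensitive, as HTTP header names are.

def find_csp(headers):
    """Return the Content-Security-Policy value, or None if absent."""
    for name, value in headers.items():
        if name.lower() == "content-security-policy":
            return value
    return None

# To check a live site, something like this would work:
#   import urllib.request
#   with urllib.request.urlopen("https://example.com") as resp:
#       print(find_csp(dict(resp.headers.items())))

sample = {"Content-Type": "text/html",
          "Content-Security-Policy": "default-src 'self'"}
print(find_csp(sample))                          # default-src 'self'
print(find_csp({"Content-Type": "text/html"}))   # None
```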