In July 2020, the FBI Cyber Division issued Flash Alert AC-000129-TT reporting that malware had been found in the software used to calculate China’s value-added tax (VAT).1 However, the Chinese State Taxation Administration requires companies to install this particular software to operate within China. Third-party applications are already risky, but here a pre-infected application was part of a regulatory mandate.
We’ve already seen supply-chain breaches of SolarWinds and Codecov, where advanced attackers planted back doors into trusted applications. Granted, these are extreme cases, but plenty of other similar situations could lead to just as bad an outcome. For example, what about a trustworthy application suddenly becoming remotely exploitable through a zero-day vulnerability like Log4j or Heartbleed?
Just like malevolent insiders, these applications start out innocuous but can unexpectedly become malicious. One of the safest ways to treat any application (other than not running it) is to act as if it could fall under the control of a malicious actor. Let’s analyze this in more detail.
Analyzing the Insider Application
Let’s start by spelling out the specific scenarios of how a trusted application turned malicious could damage an organization.
First, if an application itself turns malicious, what can it do? We can assume the application has direct access to the host operating system it’s running on, so all the data the application can read, write, or process is now subject to copying or corruption. That is the first impact. From there, we can derive the other likely impacts of a single application acting as maliciously as it can. This yields the following list of potential harmful effects:
- Unauthorized data access: Copy or corrupt whatever data it can access as part of its authorized permissions, including any cached credentials stored on that system. In these cases, zero-trust defenses will not constrain these already-approved existing access paths.
- Data exfiltration: Send any stolen data back to malicious owners, wherever they reside.
- Modify self: Receive new code to update itself for new capabilities.
- Replicate and infect: Try to copy itself or a version of itself to other reachable hosts or turn itself into a proxy server for remote attackers.
When we speak of data flows to untrusted hosts, we should assume directional flow is irrelevant. This is because attackers can easily create a covert channel for exfiltration within an outbound data request. We can also assume return traffic could contain commands or code updates.
All of these scenarios are unpleasant for the targeted organization. However, other than unauthorized data access, every scenario requires some kind of network access, either to the internal network or to the Internet. It’s safe to assume that a malicious application would use the network access granted to it as part of its installation. In terms of lateral movement, infection, and internal system compromise, we consider executing code on a trusted system to include all possible risky outcomes, since the machine is now under attacker remote control.
This leads us to our second step in analysis: a way to characterize these applications instead of treating them all the same.
Characterizing Application Risk by Connectivity Requirements
One way to characterize an application is based on its network and resource connectivity requirements. First, let’s consider the default configuration. Organizations that haven’t fully adopted the least-privilege principle will grant applications open network access. This access is often at least open east-west (within its own data center) with full outbound Internet access. This means a malicious insider application is at liberty to attack adjacent services and then exfiltrate the stolen data. We need to do better.
But what network access rights should we grant an application? Although a lot of applications have unique requirements, they can be broken down into general access categories based on their general purpose. Table 1 lists some access categories with descriptions of their network access permissions.
|Required Network Access|Functionality of Application|
|---|---|
|Single-site Internet access|Applications that need to auto-update or download data from a single Internet destination.|
|Multisite Internet access|Applications that need to download data from many Internet sources, such as aggregator or look-up tools.|
|Read access to noncritical inside hosts|Applications that need read permission on many internal noncritical hosts, such as nonproduction monitoring tools.|
|Full access to noncritical inside hosts|Applications that need read and write permission on many internal noncritical hosts, such as production monitoring tools.|
|Read access to critical inside hosts|Applications that need read permission on many internal critical hosts, such as production monitoring tools.|
|Full access to critical inside hosts|Applications that need read and write permission on many internal critical hosts, such as production automation tools.|
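The ordering in Table 1 can be treated as a rough risk scale. As an illustrative sketch (the category names and the strict ordering are our own assumptions, not a formal taxonomy), an application's risk tier can be driven by the broadest access it requires:

```python
from enum import IntEnum

class AccessCategory(IntEnum):
    """Access categories from Table 1, ordered roughly by increasing attack surface."""
    SINGLE_SITE_INTERNET = 1
    MULTISITE_INTERNET = 2
    READ_NONCRITICAL = 3
    FULL_NONCRITICAL = 4
    READ_CRITICAL = 5
    FULL_CRITICAL = 6

def riskiest(required: list) -> AccessCategory:
    """An application's risk tier is driven by the broadest access it needs."""
    return max(required)

# A hypothetical monitoring tool that reads from critical hosts and
# fetches updates from one vendor site:
app_needs = [AccessCategory.SINGLE_SITE_INTERNET, AccessCategory.READ_CRITICAL]
print(riskiest(app_needs).name)  # READ_CRITICAL
```

Treating the categories as an ordered scale makes it easy to drive review policy from them, for example requiring extra approval for anything at or above read access to critical hosts.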
When we speak of the application’s access to a host, we can speak of “read access,” which means it can copy data off that host, and “write access,” meaning it can change data on that host. In the case of write access, we should assume this results in full control of the host since, by writing to another system, a malicious application could potentially change local settings, inject commands, or even insert new executable code.
Regarding the difference between noncritical and critical internal hosts, we assume that the organization has segregated them with security controls.
With Table 1, it’s easy to see how the potential exposure (or attack surface), and therefore risk, increases with both the number and type of access an application needs to function. The riskiest case is therefore the application that needs full control on those internal critical hosts as well as access to many Internet sources.
Now that we’ve characterized our applications, least-privilege network access rights become clearer. We can also take our analysis to the next level and look at other kinds of controls to mitigate an application that’s become malicious.
Securing the Malicious Application
Let’s look at what we think are the most effective controls for an application inside the network that’s been turned rogue. We’ve broken down these controls to address each specific threat in the attack escalation.
Detective Controls for Malicious Applications
These are the controls that alert and log when a normal application begins to act maliciously. Obviously, malware authors are quite skilled at evading these controls using tricks like exploiting unpatched internal vulnerabilities, stealing or forging keys2, or taking advantage of blind spots in the architecture. But that doesn’t mean defenders shouldn’t deploy these kinds of tools anyway. Every piece of friction we put in the path of an attacker may be the one that locks them out.
The question to answer is: what’s in that application? The key is to analyze and verify as much as possible about the code and libraries before deployment. Processes like Software Bill of Materials (SBOM) and the sigstore project3 can provide details on the major components and their origins. Even enforcing built-in tools like code signing is a straightforward way to stop the execution of unknown application code. Change-control detection tools, like the integrity scanner Tripwire, can alert system owners if an application’s code or configuration unexpectedly changes.4
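At its core, a Tripwire-style integrity check is just hashing a trusted baseline of the application's files and comparing against it later. A minimal sketch (paths and layout are hypothetical, and a production tool would also protect the baseline itself from tampering):

```python
import hashlib
import json
from pathlib import Path

def hash_file(path: Path) -> str:
    """SHA-256 of a file's contents, read in chunks to handle large files."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_baseline(app_dir: str, baseline_file: str) -> None:
    """Record a trusted snapshot of every file under the application directory."""
    baseline = {str(p): hash_file(p) for p in Path(app_dir).rglob("*") if p.is_file()}
    Path(baseline_file).write_text(json.dumps(baseline, indent=2))

def verify(app_dir: str, baseline_file: str) -> list:
    """Return files that changed, appeared, or disappeared since the baseline."""
    baseline = json.loads(Path(baseline_file).read_text())
    current = {str(p): hash_file(p) for p in Path(app_dir).rglob("*") if p.is_file()}
    changed = [p for p in baseline if current.get(p) != baseline[p]]
    changed += [p for p in current if p not in baseline]
    return changed
```

Running `verify` on a schedule (and alerting on any nonempty result) is the essence of what the commercial tools automate.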
And finally, standard security controls like antivirus (AV) applications all the way up to the more sophisticated extended detection and response (XDR) systems can detect unusual malicious activity as well as known malware components.
Preventive Controls to Slow Down Malicious Application Activity
Applications that become malicious may try to move deeper into the operating system or sideways to attack adjacent systems or services. Therefore, limit their actions to the bare minimum. Controls to do this can be as basic as built-in operating system tools, such as Microsoft Defender SmartScreen or Apple Gatekeeper, that restrict loading of untrusted code or libraries.5 Many cyber-attackers also abuse native tools for “living-off-the-land” (LOTL) attacks. So lock down and closely monitor any preinstalled or built-in tools that an attacker can subvert. If this is too much work, you can isolate these applications in a sandbox6 or restricted container. You can even investigate container signing to ensure container integrity.7
A less operationally expensive option is to employ network segmentation by using network access controls like internal firewalls and DMZs to reduce the application’s communication paths to the bare minimum. Consider restricting internal services such as legacy authentication and service-call protocols. Often these kinds of controls are built into both on-premises network switches and cloud-based virtual networks. To reduce the internal attack surface, limit access to open file shares and internal network services. Lastly, combining least-privilege and zero-trust approaches can mean that systems won’t trust this application just because it’s on a local network.
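The segmentation logic above amounts to a default-deny allowlist: each application may reach only the flows it was explicitly approved for. A sketch of that policy shape (the application names, hostnames, and ports here are entirely hypothetical):

```python
# Hypothetical east-west/egress allowlist, keyed by application identity.
# Each entry is the set of (destination host, destination port) pairs the
# application was approved for at installation time.
ALLOWED_FLOWS = {
    "vat-app":       {("update.vendor.example", 443)},
    "log-collector": {("siem.internal.example", 6514)},
}

def flow_permitted(app: str, dest_host: str, dest_port: int) -> bool:
    """Default-deny: any flow not on the allowlist is blocked (and should be logged)."""
    return (dest_host, dest_port) in ALLOWED_FLOWS.get(app, set())
```

In practice this policy lives in internal firewalls, switch ACLs, or cloud security groups rather than application code, but the decision logic is the same.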
Scenarios Using These Controls
When, where, and how should you apply these controls? That is going to vary, but laying out some example scenarios can help make things clearer.
The Potentially Compromised Business Application
This scenario describes applications like the one mentioned at the beginning, the Chinese VAT application that turned out to be malicious. The average enterprise may be running hundreds of these kinds of simple but necessary applications. Despite the risk, many of these applications may be considered critical to the business’s function.
The risk these applications present goes beyond their leaking the data entrusted to them. Put controls in place to ensure that the application doesn’t become an infiltration point into the rest of the organization. Comparatively speaking, these kinds of applications shouldn’t be too hard to lock down.
If the application needs Internet access (and carefully scrutinize that request), then lock its outbound access to the specific addresses and ports it requires. If it needs to auto-update its software, see if it’s possible to do this under controlled and logged processes.
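A controlled, logged update process can be as simple as refusing any update whose hash doesn't match a value published out of band by the vendor, and recording every attempt. A minimal sketch (the function and field names are illustrative; real deployments would verify a cryptographic signature, not just a hash):

```python
import hashlib

def apply_update(update_bytes: bytes, expected_sha256: str, log) -> bool:
    """Accept an update only if its SHA-256 matches the expected value
    obtained out of band, and log the attempt either way."""
    actual = hashlib.sha256(update_bytes).hexdigest()
    if actual != expected_sha256:
        log(f"update REJECTED: got {actual}, expected {expected_sha256}")
        return False
    log(f"update accepted: {actual}")
    # ...stage the verified bytes for a controlled install window...
    return True
```

The point is that the application never self-updates silently: every update passes through a verification gate and leaves an audit trail.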
Ensure the application itself has limited access to internal networks and resources, if any at all. Again, make least-privilege mode the default access, reviewing and verifying each connection request as necessary. Lastly, constrain access to the application for users and backup servers to only the necessary services, with access rights configured in a one-way direction. Any administrative interfaces should require multifactor authentication.
Since this kind of application represents the most basic level of risk in this spectrum of potential insider applications, this also represents the minimum level of controls to apply. The other scenarios embody higher levels of risk, so they require additional controls.
The Potentially Compromised Logging Application
What happens when a highly useful and popular open-source application is compromised? We recently saw this with the Log4j/Log4Shell vulnerability. Shrewd security professionals already expected this could happen with Internet-facing open-source software tools like Apache Struts, but Log4j sat deep within protected infrastructure. Compromising Log4j by sending it poisoned data was not expected, although still quite predictable given that every data input into a system is an attack surface.
The kind of access these applications require is straightforward: they need data pushed to them from the monitored systems. Therefore, this access should be unidirectional with limited access back to internal trusted systems. The logging system will likely need a way to send updates and alerts, so you can also define and restrict it to just the necessary pathways. The riskiest proposition is that these logging applications need to “pull,” or go and read, data from trusted internal systems. In those cases, treat them like our next risky application.
The Potentially Compromised Monitoring Application
Applications that monitor internal systems are inherently riskier because they need access rights to many internal resources. In some cases, these applications may require access to trusted internal applications holding critical and confidential data.
As before, the key is to limit that access as much as possible to the specific ports and protocols required. Authorization is also important, so only give read access to the internal resources, not write or modify access. This may mean setting additional permissions on files and directories. Don’t forget those LOTL attacks because many of those native utilities and tools may have available access pathways.
For systems that require access to critical systems, one design to consider is splitting the monitoring application into two instances: one for noncritical systems with its own set of controls, and a second for critical-system monitoring with a higher set of controls. This lowers the operational and audit overhead by placing the more stringent oversight only on the instance with critical-system access.
Another security architecture consideration is disallowing all Internet access to these kinds of applications. If they do become malicious, the applications should be blocked from exfiltrating stolen data as well as receiving commands from remote attackers.
The Potentially Compromised Automation Application
This is the worst-case scenario—the application that manages all IT infrastructure. It has access to everything and can make changes to anything. It’s a risk we’ve talked about at F5 Labs before, when Lori MacVittie commented in The New Insider Threat: Automation Frameworks:
"Lacking controls on frameworks designed to scale and manage the infrastructure necessary for critical apps, the potential for significant damage to be wrought is huge."
When considering different solutions, there are many tradeoffs. A popular automation framework is both more likely to be well debugged in terms of vulnerabilities and more likely to be a target of advanced attackers looking to attack via the supply chain.
All of the previously discussed controls should be considered here, especially disallowing direct Internet access to internal systems. Even access to email servers and external DNS look-ups can be used for data exfiltration, so consider restricting that access as well.
By design, automation applications need administrative access to nearly every system in the infrastructure. This means bringing full logging and audit processes to bear to keep an eye on the systems. Audit and logging systems should also be exempt from automation management so their records can’t be tampered with or erased. If anything ever changes on the automation system, especially with configuration and the software, the appropriate teams need to be notified.
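One way to make those audit records tamper-evident, as a sketch: chain each record to the hash of the previous one, so that editing or deleting any earlier entry invalidates every hash after it (the record fields here are illustrative):

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder "previous hash" for the first record

def append_record(chain: list, event: str) -> None:
    """Each record commits to the hash of the previous one, so deleting or
    editing an earlier entry breaks every hash that follows it."""
    prev = chain[-1]["hash"] if chain else GENESIS
    payload = json.dumps({"event": event, "prev": prev}, sort_keys=True)
    chain.append({
        "event": event,
        "prev": prev,
        "hash": hashlib.sha256(payload.encode()).hexdigest(),
    })

def verify_chain(chain: list) -> bool:
    """Recompute every hash; any tampering with an earlier record shows up here."""
    prev = GENESIS
    for rec in chain:
        payload = json.dumps({"event": rec["event"], "prev": prev}, sort_keys=True)
        if rec["prev"] != prev or rec["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        prev = rec["hash"]
    return True
```

Shipping the chain (or just its latest hash) to a system outside the automation framework's control is what keeps an attacker with admin access from quietly rewriting history.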
Locking down applications in the event they are, or become, malicious is additional work, but it is work worth doing. We have already seen that applications can be poisoned with malware. For a defender, deploying a compromised asset is nearly as unpleasant as dealing with an application that becomes compromised later. The latest Cyentia/Kenna research has shown that 95 percent of an organization’s assets have at least one highly exploitable vulnerability.8 It would be prudent to take this thought a step further and look at vulnerabilities like Log4j or even Heartbleed, where core software components were found to have severe remote-execution zero-day vulnerabilities. Therefore, fully embracing the “assume breach” principle also means treating all applications as potential malicious insiders.