
Container Security Basics: Workload

  Jordan Zebor

  Lori MacVittie

Published June 19, 2019

If you're just jumping into this series, you may want to start at the beginning: 
Container Security Basics: Introduction
Container Security Basics: Pipeline
Container Security Basics: Orchestration

Workload is a fairly recent term that is often used to describe applications but can also refer to infrastructure services. That’s important, because there can be a variety of ‘workloads’ running in your container clusters that aren't necessarily coming from your developers. The use of free and open-source services for a variety of purposes within container environments is growing. That’s actually true across IT operations, which was the number one category for downloads of free and open source software in the past year.

So when we say workload security, we mean any piece of software you’ve downloaded, developed, or deployed into your container environment. Think Consul. Think Kubernetes itself. Think Prometheus and Elasticsearch. Think NGINX and Istio.

And then also put on that list any API gateways, caches, and ingress controllers you’ve deployed to support your Kubernetes environment. This is where a thorough bill of materials is important, and why taking an inventory on a regular basis is critical to maintaining a secure environment.
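To make the inventory idea concrete, here is a minimal sketch of normalizing the image references running in a cluster into an inventory you can diff between audits. In practice the list of references would come from your orchestrator; here it is hard-coded for illustration, and the function names are our own, not any particular tool's API.

```python
from collections import Counter

def parse_image(ref: str):
    """Split 'registry/repo:tag' into (name, tag); default tag is 'latest'."""
    name, sep, tag = ref.rpartition(":")
    if not sep or "/" in tag:  # no tag present, or the ':' was a registry port
        return ref, "latest"
    return name, tag

def inventory(image_refs):
    """Count each (image, tag) pair so repeated audits can be compared."""
    return Counter(parse_image(r) for r in image_refs)
```

Running this regularly and diffing the result against the previous run surfaces workloads that appeared in the cluster without going through your pipeline.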

Once you’ve got your list, it’s time to address key workload security issues.

  • 1. Authentication is not optional
    If you’ve been following along through this entire series, this is old hat to you. It’s one of the common themes across all aspects of container security: lock the front door.

    All of these services run as workloads in a cluster. That often means default credentials or an “open” initial configuration for accessing APIs and data. Remember that workload security covers every component, piece of software, and application service necessary for application functionality and operation.
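A minimal sketch of "locking the front door" in code, assuming a bearer-token scheme: the token value and the default-credential list here are illustrative, not from any specific product, and a real deployment would pull the expected token from a secret store rather than a constant.

```python
import hmac

# Illustrative only: in practice this comes from a secret store, never a default.
EXPECTED_TOKEN = "s3cr3t-from-vault"

# Well-known default pairs that ship with many services (sample, not exhaustive).
DEFAULT_CREDENTIALS = {"admin:admin", "root:root", "admin:changeme"}

def is_default_credential(user: str, password: str) -> bool:
    """Flag credential pairs that should never survive initial deployment."""
    return f"{user}:{password}" in DEFAULT_CREDENTIALS

def authorize(headers: dict) -> bool:
    """Reject any request that lacks a valid bearer token."""
    token = headers.get("Authorization", "").removeprefix("Bearer ").strip()
    # Constant-time compare to avoid leaking the token via timing differences.
    return hmac.compare_digest(token, EXPECTED_TOKEN)
```

The point is less the specific mechanism than the posture: no workload API in the cluster should answer an unauthenticated request.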
  • 2. Malicious content is malicious
    Even after you’ve locked the door by requiring credentials and applying access control, you’ve still got to worry about malicious content. That’s true of applications, microservices, and operational services in use. All workloads that present an interface to a user (whether operator or consumer) are potentially at risk.

    This is where scanning and acting is a must. Finding vulnerabilities in the application layer, whether in HTTP handling or in application logic, is step one. Once you’ve found them, it’s time to do something about them. If it’s a custom app, send it back to development. If it’s a third-party component, determine whether or not a patched/upgraded version exists. Third-party components are often delivered in container images and, as noted by Snyk in its 2019 State of Open Source Security report, 44% of them have known vulnerabilities for which there are newer and more secure base images available.

    Let's say it again: running a scan does nothing to improve security if you don't remediate.

    If you had a hole in your wall that let would-be burglars slip past the locked door, you’d patch it. So patch the virtual holes in your virtual walls.

    The use of a web application firewall or API security gateway can provide a means to remediate when vulnerabilities cannot immediately be addressed by developers or have yet to be resolved by third-party providers.

    This is best summed up as "screen your calls," because it’s about inspecting and evaluating requests to any workload before accepting them.
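The scan-then-remediate flow described above can be sketched as a small triage function. The finding fields (`package`, `severity`, `fixed_version`, `origin`) are assumptions for illustration, not any particular scanner's schema:

```python
# Turn raw scan findings into concrete next steps, highest severity first,
# instead of letting the scan report sit idle.

SEVERITY_ORDER = {"critical": 0, "high": 1, "medium": 2, "low": 3}

def remediation_plan(findings):
    """Map each vulnerability finding to a remediation action."""
    plan = []
    for f in findings:
        if f["origin"] == "custom":
            action = "send back to development"
        elif f.get("fixed_version"):
            action = f"upgrade to {f['fixed_version']}"
        else:
            action = "virtual patch at WAF/API gateway until a fix ships"
        plan.append((f["package"], f["severity"], action))
    # Start remediation where the risk is greatest.
    plan.sort(key=lambda p: SEVERITY_ORDER[p[1]])
    return plan
```

The fallback branch is the "screen your calls" option from above: when no fix exists yet, a WAF or API security gateway buys time without leaving the vulnerability exposed.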

  • 3. Shared resources mean shared risk
    As with traditional virtualization, containers are not completely isolated from one another; but unlike VMs, containers on a node ultimately share the same host OS. That means vulnerabilities in the shared OS are shared vulnerabilities.

    [Figure: container vs. VM isolation]

    If an attacker is able to exploit a vulnerability in OS components, they can compromise one or more workloads. Those vulnerabilities might be exposed thanks to a failure to 'lock the door' or 'screen your calls'. It's not a far-fetched scenario, as we’ve seen with CVE-2019-5736, in which a runc vulnerability at the OS layer sent the Internets into a panic.

    The core security principle here was famously stated by Dan Walsh, Red Hat's SELinux and container security guru: "Containers don't contain."

    It's important to note that all network traffic with services and users outside the node must traverse the host OS. Networking between pods and containers on a node is accomplished via virtual networking, with virtual bridges and clever use of iptables. But ultimately, traffic has to leave that physical node, and that means processing at the host OS. Agents, plug-ins, and other daemons may be observing or capturing that traffic for security or visibility purposes. Those are potential points of compromise that you should add to the inventory you started compiling after reading about Pipeline Security.
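You can see the shared-kernel point for yourself: run the snippet below in any two containers on the same node and both report the identical kernel release, because there is no guest kernel to report. A kernel CVE is therefore shared by every workload on that node.

```python
import platform

def shared_kernel_release() -> str:
    """Return the kernel release (equivalent to `uname -r`).

    Inside a container this is the HOST's kernel version, not a
    container-specific one, which is exactly the shared-risk point.
    """
    return platform.release()
```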

  • 4. Logging sensitive details
    You may not have expected a discussion of container security to include a warning about logs, but here it is, because sensitive information sometimes ends up in log files. Authorization tokens, encryption provider keys, credentials: all can be inadvertently logged or displayed by workloads on stderr, often in the course of tracking down authentication errors. In general, you should discourage, and disallow if you can, the logging of secrets to the system.

    To help developers stay aware of this risk, consider providing a logging design guide that specifies what is and is not acceptable to write to the log.
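As a backstop to that design guide, a scrubbing filter can mask obvious secrets before a record ever reaches stderr. This is a minimal sketch using Python's standard `logging` module; the regex patterns are illustrative, not exhaustive, and a filter is no substitute for not logging secrets in the first place.

```python
import logging
import re

# Matches 'password=...', 'token: ...', 'api_key=...' and similar (sample patterns).
SECRET_PATTERN = re.compile(
    r"(?i)\b(authorization|token|password|api[_-]?key)\b\s*[:=]\s*\S+"
)

class RedactSecrets(logging.Filter):
    """Scrub secret-looking values from log records before they are emitted."""

    def filter(self, record: logging.LogRecord) -> bool:
        record.msg = SECRET_PATTERN.sub(
            lambda m: f"{m.group(1)}=[REDACTED]", str(record.msg)
        )
        return True  # keep the record, just scrubbed
```

Attach it with `logger.addFilter(RedactSecrets())` so every handler on that logger sees only the redacted message.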

Much of workload security in a container context is the same as that of any other application workload running in your production environment. Control access and require strong authentication, watch for malicious content, and be aware of shared and platform-level vulnerabilities.