HTTP Rising: Telemetry, Tracking, and Terror in Container Environs

F5 Ecosystem | December 04, 2017

HTTP is ubiquitous. Your television speaks HTTP. Your phone. Your tablet. Your car. If it’s a device with networking, it probably speaks HTTP as fluently as you speak your native language.


HTTP is a flexible thing. Unlike its networking neighbors – TCP and IP – it is almost limitless in the information it can carry from point A to point B. While IP and TCP are required to adhere to very strict, inflexible standards that define – to the bit – what values can be used, HTTP takes a laissez-faire approach to the data it carries. Text. Binary. JSON. XML. Encrypted. Plain-text.

Like honey-badger, HTTP don’t care. It will carry it all – and more.

One of the ways in which HTTP is constantly flexing its, well, flexibility is in its rarely-seen-by-users headers. This is the meta-data carried by every HTTP request and response. It shares everything from content type to content length to authorization tokens to breadcrumbs that tattle on who you are and where you’ve been – whether you want it to or not.
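To make that concrete, here is a minimal sketch (Python’s standard library, nothing exotic) of the meta-data that rides along with an ordinary HTTP exchange; the token and request ID values are placeholders, not anything specific:

# A minimal sketch of the header meta-data carried on a routine request/response.
from http.client import HTTPSConnection

conn = HTTPSConnection("example.com")
conn.request("GET", "/", headers={
    "Accept": "application/json",        # what the client can consume
    "Authorization": "Bearer <token>",   # an authorization token (placeholder)
    "X-Request-ID": "abc-123",           # a breadcrumb for correlating logs
})
resp = conn.getresponse()

# The response headers carry the other half of the conversation:
# Content-Type, Content-Length, cookies, caching hints, and so on.
for name, value in resp.getheaders():
    print(f"{name}: {value}")
conn.close()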

This is important to note because as we’ve seen in the container space, HTTP headers are growing as a mechanism not only to transport data between clients and services, but as a means to share the meta-data that makes these fast-moving environments scale so very efficiently.

Of growing note is the notion of a service mesh and, with it, the addition of custom HTTP headers that carry operational information. This blog from Buoyant – the company behind one of the two leading open-source service mesh implementations – illustrates the reliance on HTTP headers for sharing the telemetry needed to correlate traces, which in turn helps untangle the highly complex set of transactions across services that make up a single HTTP request and response pair.

For those not interested in reading the entire aforementioned blog, here’s the most relevant bit – highlighting is mine:

While we at Buoyant like to describe all of the additional tracing data that linkerd provides as “magic telemetry sprinkles for microservices”, the reality is that we need a small amount of request context to wire the traces together. That request context is established when linkerd receives a request, and, for HTTP requests, it is passed via HTTP headers when linkerd proxies the request to your application. In order for your application to preserve request context, it needs to include, without modification, all of the inbound l5d-ctx-* HTTP headers on any outbound requests that it makes.

It should be noted that the referenced custom HTTP headers are only some of those used for sharing telemetry in these highly distributed systems. As noted in the blog, the l5d-sample header can be used to adjust tracing sample rates. So headers are not only being used to share information, they’re being used to provide operational control over the system.
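As a rough illustration only (Flask and requests are my choices here, not anything prescribed by Buoyant), a service in the mesh might preserve those inbound l5d-ctx-* headers unmodified on its own outbound calls, and perhaps nudge the sample rate while it’s at it:

# A sketch, not Buoyant's code: copy inbound l5d-ctx-* headers onto outbound
# requests so the traces can be wired together. Service names are made up.
from flask import Flask, request
import requests

app = Flask(__name__)

def propagated_headers():
    """Collect the inbound linkerd context headers, unmodified."""
    return {
        name: value
        for name, value in request.headers.items()
        if name.lower().startswith("l5d-ctx-")
    }

@app.route("/orders")
def orders():
    headers = propagated_headers()
    # Per the blog, l5d-sample can adjust the tracing sample rate for a request;
    # the value here is purely illustrative.
    headers["l5d-sample"] = "1.0"
    resp = requests.get("http://inventory.default.svc/items", headers=headers)
    return resp.text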

Let that sink in for a moment. HTTP headers are used to control behavior of operational systems. Remember this, it will be important in a couple of paragraphs.

Rather than separate the control plane from the data plane, in this instance both planes are transported simultaneously, and it falls to the endpoints to separate out form from function, as it were. As this particular solution relies on a service-mesh concept – in which every inbound and outbound request from a service passes through a proxy – this is easily enough accomplished. The proxy can filter out the operational HTTP headers and act on them before forwarding the request (or response) on to its intended recipient. It can also add any operational instructions as well as insert telemetry to help match up the traces later.
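A toy version of that proxy behavior might look something like this; the header prefixes, ports, and upstream address are invented purely for the example:

# A toy per-service proxy that separates operational headers from application
# traffic before forwarding. All names and addresses here are illustrative.
from http.server import BaseHTTPRequestHandler, HTTPServer
import urllib.request

OPERATIONAL_PREFIXES = ("l5d-",)        # headers the proxy acts on itself
UPSTREAM = "http://127.0.0.1:9000"      # the service sitting behind the proxy

class MeshProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        ops, app_headers = {}, {}
        for name, value in self.headers.items():
            bucket = ops if name.lower().startswith(OPERATIONAL_PREFIXES) else app_headers
            bucket[name] = value
        app_headers.pop("Host", None)   # let the client library set Host for the upstream

        # Act on the operational headers (record telemetry, adjust sampling, ...)
        # then forward only the application headers to the intended recipient.
        upstream_req = urllib.request.Request(UPSTREAM + self.path, headers=app_headers)
        with urllib.request.urlopen(upstream_req) as upstream_resp:
            body = upstream_resp.read()
            status = upstream_resp.status
        self.send_response(status)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), MeshProxy).serve_forever()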

Application networking, too, is becoming a common thing in container environments. While it’s always been a thing (at least for those of us in the world of programmable proxies), it’s now surfacing with greater frequency as the need for flexibility grows. Ingress controllers are, at their core, programmable proxies that enable routing based not only on IP addresses or FQDNs, but on application-specific data most commonly carried by HTTP headers. Versioning, steering, scaling. All these functions of an ingress controller are made possible by HTTP and its don’t-care attitude toward headers.
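Stripped to its essence, that routing decision is just a lookup keyed on a header; the header name and backend pools below are hypothetical:

# A hypothetical header-based routing decision of the kind an ingress controller
# makes. The header name and backend pools are invented for illustration.
ROUTES = {
    "v1": "http://app-v1.default.svc",   # stable version
    "v2": "http://app-v2.default.svc",   # canary receiving steered traffic
}

def pick_backend(headers: dict) -> str:
    version = headers.get("X-App-Version", "v1")   # default to the stable pool
    return ROUTES.get(version, ROUTES["v1"])

# e.g. a request tagged for the canary:
print(pick_backend({"X-App-Version": "v2"}))       # -> http://app-v2.default.svc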

Sadly, HTTP headers are also their own attack vector. It is incumbent upon us, then, to carefully consider the ramifications of relying on HTTP headers not only to share operational data but to control operational behavior. HTTP headers are a wildcard (seriously, read the BNF) and universally text-based in nature. This makes them not only easy to modify but also easy to manipulate into carrying malicious commands that are consumed by a growing number of intermediate and endpoint devices and systems.

If that does not terrify you, you haven’t been paying attention.

Luckily, the use of HTTP headers to carry both control and data planes is primarily limited to containerized systems. This means they are generally tucked away behind several public-facing points of control that afford organizations the ability to mitigate the threat of their overly generous nature. An architectural approach built around a secure inbound (north-south) path can provide the necessary protection against exploitation. No, we haven’t seen anyone try that. Yet. But we’ve seen enough breaches already thanks to HTTP headers that it’s better to be safe than sorry.
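One small piece of that protection, sketched in Python purely to illustrate the idea (the allow-list and value checks are assumptions, not a prescription): scrub operational headers at the point of control before anything downstream acts on them.

# A defensive sketch: allow-list operational headers and sanity-check values at
# the edge. The specific names and limits here are assumptions for illustration.
import re

ALLOWED_OPERATIONAL = {"l5d-sample", "l5d-ctx-trace"}    # expected control headers
SAFE_VALUE = re.compile(r"^[A-Za-z0-9+/=._-]{1,256}$")   # no CR/LF, bounded length

def scrub(headers: dict) -> dict:
    clean = {}
    for name, value in headers.items():
        lname = name.lower()
        if lname.startswith("l5d-") and lname not in ALLOWED_OPERATIONAL:
            continue                                     # drop unexpected control headers
        if not SAFE_VALUE.match(value):
            continue                                     # drop header-injection attempts
        clean[name] = value
    return clean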

HTTP is rising, not only as the primary protocol for apps, services, and devices, but as the vehicle for telemetry, tracking, and transport of operational commands. It’s an exciting time, but we need to temper that “we can do anything” with “but let’s do it securely” if we’re to avoid operational disasters.


About the Author

Lori Mac Vittie
Distinguished Engineer and Chief Evangelist

