There are fifteen gazillion statistics about how pervasive open source software is across enterprises in every industry. Apps are composed of more than 80% open source components, and the internet basically runs on open source software like NGINX.
But there are just as many open standards: standards developed and polished through an open source, community-driven approach that yields incredible ecosystems of supporting products, projects, and infrastructure.
OpenTelemetry is one of those efforts, and it has become the standard for generating, ingesting, and processing operational data, a.k.a. telemetry. In the Observability Innovation Report 2023, nearly one-third (32%) of respondents say OpenTelemetry support is required in vendor products, and 50% say it is very important. Slightly more than one-third (36%) of respondents use OpenTelemetry within their organization.
Standardizing telemetry is critical because observability relies on data points from the entire IT stack. That means network metrics, server logs, and traces, all of which come from vastly different types of infrastructure and systems. There is no single source of truth; even a simple application has too many moving parts to guarantee you can gather all the data needed to observe the state of the app at a given point in time. Standardizing the way telemetry is generated is one way to normalize digital signals and ensure analysis can leverage all the appropriate data points to deliver accurate, actionable insights.
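To make the normalization idea concrete, here is a minimal sketch in Python. The record shape below is hypothetical, loosely inspired by the notion of a common signal model, and is not the actual OpenTelemetry data model; it simply shows how signals from very different sources can be mapped into one shape that a single analysis code path can consume.

```python
import json
import time

# Hypothetical common record shape -- NOT the real OpenTelemetry data model,
# just an illustration of why a shared schema simplifies analysis.

def normalize_metric(name, value, source):
    """Map a raw metric sample into the common record shape."""
    return {
        "signal": "metric",
        "name": name,
        "timestamp": time.time_ns(),
        "attributes": {"source": source},
        "body": value,
    }

def normalize_log(message, severity, source):
    """Map a raw log line into the same common record shape."""
    return {
        "signal": "log",
        "name": "log.record",
        "timestamp": time.time_ns(),
        "attributes": {"source": source, "severity": severity},
        "body": message,
    }

# Signals from very different parts of the stack...
records = [
    normalize_metric("network.bytes_in", 1_048_576, source="edge-lb"),
    normalize_log("upstream timed out", severity="ERROR", source="app-server"),
]

# ...can now be filtered and correlated with one code path.
errors = [r for r in records if r["attributes"].get("severity") == "ERROR"]
print(json.dumps(errors, indent=2))
```

Because both signals share one schema, the filter at the end works without caring whether a record started life as a metric or a log line.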
But even standardizing telemetry does not solve all the challenges associated with reaching the holy grail of full-stack observability.
One of the big hairy problems of dealing with operational data is its volume. The digital signals organizations rely on to keep them abreast of potential problems with performance or attempted attacks are generated faster and more furiously than any other kind of data. We know this on a close and personal level because at F5, we have adopted OpenTelemetry as our standard across our portfolio. The nature and role of our products, like BIG-IP and NGINX, in delivering and securing applications and digital services means that significant volumes of data such as metrics and logs are generated for a variety of reasons. Transporting and processing that data is a significant portion of the cost associated with telemetry pipelines.
To address that challenge, Distinguished Engineer Laurent Quérel got involved with Apache Arrow and began working with the OpenTelemetry project to increase its efficiency with high telemetry volumes.
Our benchmark results show Apache Arrow provides significant advantages for transporting and processing telemetry data, particularly when it can be grouped into batches of several hundred entities or more. The columnar organization of the data enhances compressibility, and this memory layout greatly improves processing speed by optimizing the use of various cache levels and SIMD instructions. Furthermore, the Arrow ecosystem serves as an excellent complement to OpenTelemetry, enhancing its integration with query engines, stream processing pipelines, and specialized analytics file formats.
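The compressibility point can be illustrated with a toy experiment: the same batch of telemetry serialized row by row versus column by column. This uses plain JSON and zlib from the Python standard library, not Apache Arrow's actual binary format, so treat it as a sketch of the principle rather than a benchmark.

```python
import json
import zlib

# Build a batch of 500 metric entities with highly repetitive structure,
# which is typical of telemetry: one metric name, near-sequential timestamps.
N = 500
names = ["http.server.request.duration"] * N
timestamps = [1_700_000_000 + i for i in range(N)]
values = [round(0.25 + (i % 10) * 0.01, 2) for i in range(N)]

# Row-oriented: one self-describing record per entity; keys and the metric
# name are interleaved with the varying timestamps and values.
rows = [{"name": n, "ts": t, "value": v}
        for n, t, v in zip(names, timestamps, values)]
row_bytes = json.dumps(rows).encode()

# Column-oriented: each field stored contiguously; similar values sit
# next to each other, which is what makes the layout compress well.
columns = {"name": names, "ts": timestamps, "value": values}
col_bytes = json.dumps(columns).encode()

row_compressed = zlib.compress(row_bytes)
col_compressed = zlib.compress(col_bytes)

print(f"row-oriented:    {len(row_bytes)} B raw, {len(row_compressed)} B compressed")
print(f"column-oriented: {len(col_bytes)} B raw, {len(col_compressed)} B compressed")
```

With repetitive fields grouped together, the compressor finds long runs of identical bytes, so the columnar layout ends up smaller both raw and compressed. Arrow applies the same idea with a binary columnar layout that also enables the cache and SIMD benefits described above.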
You can read more about Apache Arrow and Laurent’s work in the first of two articles on our experiences with the technology on the Apache Arrow site.