Telemetry: A Different Kind of Data

Lori MacVittie
Published February 24, 2020

As an industry, we are willing to debate just about any aspect of technology. Our forte, however, seems to be found in arguing over terminology. Jargon. Words.

Cloud. DevOps. SDN. Open. Controversy still clings to the definitions of these terms like dog hair to a black couch.

You use your similes and I'll use mine, thanks much.

Today we're here to talk about a term that's bound to set off arguments for another decade or so: telemetry. Isn't it just data, after all? Are we just using "telemetry" because it sounds sexier than "data"?

No. Not at all.

Ultimately both data and telemetry are organized bits of information. To use them interchangeably is not a crime. But the reality is that if you want to be accurate, there is a difference. And that difference will become increasingly important as organizations march into the data economy.

Telemetry is derived from two Greek words: "tele" and "metron," which mean "remote" and "measure". According to Wikipedia, "telemetry is the collection of measurements or other data at remote or inaccessible points and their automatic transmission to receiving equipment for monitoring."

This is the reason we see so much operational data referred to as telemetry—because it's being collected (remotely) and transmitted to a different system. The existence of telemetry data is not new. It's been an innate byproduct of every network and application service for as long as they've existed. Network and application monitoring have used agents and protocols to collect telemetry for decades. Its value has mainly been in troubleshooting issues in the data path. 
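That collect-and-transmit pattern can be sketched in a few lines. This is a minimal illustration, not any particular agent or protocol; the source name, metric name, and values are hypothetical, and a real agent would push the payload over a protocol such as syslog, SNMP, or HTTP rather than just serializing it.

```python
import json
import time

def collect_measurement(source: str, metric: str, value: float) -> dict:
    """Package a measurement taken at a remote point as a telemetry record."""
    return {
        "source": source,         # where the measurement was taken
        "metric": metric,         # what was measured
        "value": value,           # the measurement itself
        "timestamp": time.time()  # when it was measured
    }

def transmit(record: dict) -> str:
    """Serialize the record for automatic transmission to receiving
    equipment. Here we only produce the wire payload."""
    return json.dumps(record)

# Hypothetical example: an edge proxy reporting request latency
payload = transmit(collect_measurement("edge-proxy-1", "request_latency_ms", 42.0))
```

The key point the definition makes is in the two halves: the measurement happens at the remote point, and the transmission to a separate receiving system is automatic.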

But as business progresses through digital transformation and the line between business process and technology continues to blur, telemetry from across the data path will provide insights into both technical and business problems. As organizations are increasingly reliant on applications to execute business—internally and externally, with customers and partners—the telemetry that will be of greatest value is that generated from the application services that make up the data path.

Look at that path and you'll find at least one application service providing scale and security, and often closer to ten.

Each application service—and the platforms upon which they are deployed—has valuable information about the state of a given customer experience. Everything from characteristics about the user platform (device type, location, network) to time spent at every individual "hop" along the data path can be used to troubleshoot incidents, identify malicious actors, and detail performance problems. This is not "customer" data or "corporate" data; it is operational data. It is telemetry.
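To make the "time spent at every hop" idea concrete, here is a small sketch of how per-hop telemetry might be used to pinpoint a performance problem. The hop names and timings are hypothetical, invented for illustration.

```python
# Hypothetical time spent (in ms) at each service along a data path
hops = [
    ("cdn", 12.0),
    ("waf", 3.5),
    ("load_balancer", 1.2),
    ("app_server", 870.0),
]

# Total time across the path, and the hop contributing the most to it
total_ms = sum(t for _, t in hops)
slowest_hop = max(hops, key=lambda h: h[1])
```

With records like these, an operator can see at a glance that the app server, not the network services in front of it, is responsible for a slow customer experience.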

To truly take advantage of that data, however, requires that we find a way to capture and then analyze the massive volume of telemetry that can come from application services in the data path. That is where cloud comes in.

Why Cloud Is Critical to Leveraging Telemetry

Today, only some telemetry is captured because to keep it all would require more storage space than is available.

The amount of telemetry that is—and could be—emitted is overwhelming. Most systems cannot store more than a few weeks—or even days—of it. Often, telemetry is aggregated into time series to save space. But even that cannot eliminate the burden on storage. Eventually, older data must be deleted to make room for newer, more relevant telemetry.
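The "delete the oldest to make room for the newest" behavior is essentially a bounded buffer. A minimal sketch, using an arbitrary capacity of three records for illustration:

```python
from collections import deque

# A buffer that holds only the newest MAX_RECORDS telemetry records;
# older records are evicted automatically as new ones arrive.
MAX_RECORDS = 3
buffer = deque(maxlen=MAX_RECORDS)

# Ingest five records into a buffer that can hold three
for t in range(5):
    buffer.append({"t": t, "metric": "cpu_utilization", "value": 0.5})

# Only the three newest records (t = 2, 3, 4) survive
```

Real systems apply the same principle with retention policies measured in days or weeks rather than record counts, but the trade-off is identical: history is sacrificed for capacity.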

This is why you tend to find advanced analytics services hosted in a public cloud. The capacity of cloud compute and storage coupled with machine learning provides the technological foundations needed to collect, store, and process massive quantities of telemetry. With a robust enough set of telemetry, advanced analytics will be able to provide actionable insights to organizations by discovering patterns and relationships between seemingly disparate data points.

But to get there, application services need to emit as much telemetry as a cloud-based repository can ingest, and that telemetry needs to come from as many points across the data path as possible. The more information gathered from across a customer experience (the data path), the more valuable it will be to the systems searching for the patterns and relationships that yield actionable insights, improving both the customer experience and business performance.