When I was studying security in graduate school, I also took classes on design and human-computer interaction. I quickly realized that I was better suited for security, but I learned about a design principle that resonated deeply with me, one that I use daily to make sense of the world. That is the principle of affordance—while it doesn’t introduce anything truly new to cybersecurity, it does organize existing ideas in a way that I find both actionable and easy to communicate.
In brief, an affordance is a signal from the environment to an actor that they can take a particular action in a particular place or on a particular object. The term was originally coined by James Gibson, a specialist in ecological psychology, in his 1966 book, The Senses Considered as Perceptual Systems.1 It was modified and repurposed for industrial design by Donald Norman in his seminal book The Design of Everyday Things.2 Norman took Gibson’s original idea about the relationship between the environment and the actor and added the perceptual element. Norman’s work went on to have an enormous impact on industrial design and, eventually, user experience and digital design as well.
The best example I know of an affordance is the rectangular metal plate that indicates which side of a door to push if it doesn’t have a knob. This example incorporates all of the facets that make the concept of affordance specific and useful: it signals an action we can take (open the door) and signals how to do it (push on the side that isn’t close to the hinges). It helps us distinguish between the tool (the door) and the perception of the tool (the plate itself has no moving parts and isn’t critical to operating the door). It also captures that the signaling function of the affordance is influenced as much by social and cultural factors as by technical ones.3
That Vulnerability Is So 2019
So what does all of this have to do with attacking web applications? To attack an application, a malicious actor first needs to perceive an opportunity for action, that is, recognize an affordance. If, from a design standpoint, an app is a collection of affordances that signal what actions users can take, then one significant difference between a malicious actor and a benevolent one is the span of the set of affordances they perceive. You probably see where I’m going with this. We already have a name for malicious affordances: vulnerabilities.
If we think about vulnerabilities in this way, as a matter of action signaling, then malicious actors are, in their own malicious way, members of our audience. Applications are engineered to function, but they are designed to signal. The specific ways we design apps tell our audience how we expect them to act. When we release applications with vulnerabilities, we are also inadvertently telling this other subset of our audience how they can interact with our application. The problem is that we haven’t yet recognized that, when we release a vulnerability and send signals about potential actions, we must also modify our expectations about how our audience will act.
This might all sound completely obvious to anyone who’s worked in security for any length of time. But we consistently see applications built (or, increasingly, hastily stapled together) on the assumption that everyone who interacts with the app is going to (1) play nice and (2) stay at the user interface level. We consistently acknowledge the existence of only the affordances we want to believe in. This means that we are constantly surprised when attackers go beneath the user interface and inspect applications at deeper layers, such as the source code or the HTTP and TCP/IP protocols.
Threat Landscape … or Audience?
The point is that attackers are also members of our audience, and the entire application stack, from bare metal to the most inconsequential plugin, is the affordance space. Since we design an application with a focus on signaling potential actions to users, shouldn’t we recognize that a certain type of audience member perceives, and acts on, a different set of signals? We don’t go to the meanest, filthiest honky-tonk bar around and get surprised when someone throws a bottle. We should not be surprised that a certain kind of audience member always shows up on our networks when those networks are connected to the Internet.
There is, of course, a population that specializes in this perceptual problem: penetration testers. If we think of vulnerabilities as affordances, then penetration testers are actually a sort of design focus group. This niche specializes in recognizing potentials for action against any surface, at every level of abstraction within an application, and communicating that affordance space back to application owners in an actionable way.
My design courses also emphasized the importance of consulting user experience and design specialists as early in the process as possible—just like security. True penetration testing needs a finished, functioning, and fully integrated application to ensure that it covers all of its bases. However, thinking about vulnerabilities as partly a signaling problem implies that we need to consider the potential for malicious human behavior from the earliest stages of business logic planning. This is the practice known as threat modeling, and thinking about it in terms of affordances would, I believe, help coordinate the process between the many different kinds of experts responsible for building complex applications.
Another ramification is that we should be thinking of vulnerabilities as action potentials (that is, impact) first, and coding flaws or misconfigurations second. Of course, we need to know the exact nature of a flaw—which line in which file—to fix the problem. However, fixating on the problem’s location and how it got there can also obscure the actions it lets an attacker take, which is how we get to the problem of three low-risk vulnerabilities combining into one high-risk attack vector. Similarly, while verbose error messages or code comments are useful for debugging during development, they inadvertently signal more than is necessary when they make it into production code.
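To make the error-message point concrete, here is a minimal sketch of the two signaling postures. The handler names and response shape are my own for illustration, not tied to any particular framework: the verbose handler affords the attacker information (exception types, file paths), while the production handler logs details server-side and returns only an opaque reference.

```python
import logging
import uuid

def handle_error_verbose(exc: Exception) -> dict:
    # Debug-friendly, but leaks internals: the exception type and
    # message (often containing file paths or query fragments)
    # go straight into the response body.
    return {"status": 500, "error": f"{type(exc).__name__}: {exc}"}

def handle_error_production(exc: Exception) -> dict:
    # Keep the details in the server-side log; return only a generic
    # message plus an opaque reference ID the user can quote to support.
    ref = uuid.uuid4().hex[:8]
    logging.error("ref=%s %r", ref, exc)
    return {"status": 500, "error": "Internal error", "ref": ref}

try:
    raise FileNotFoundError("/srv/app/secrets/db.cfg not found")
except Exception as e:
    leaky = handle_error_verbose(e)
    safe = handle_error_production(e)

print(leaky["error"])  # reveals the path and the exception type
print(safe["error"])   # reveals nothing about internals
```

The difference is purely one of signaling: both handlers cope with the same failure, but only one tells the audience where to push next.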
The perception aspect also brings in the question of deception. Deception is a defense principle that cybersecurity practitioners have been experimenting with for decades, from honeypots and honeytokens to network tarpits. Most deception capabilities that I know of remain tactical in scope, but by considering deception in the design phase, we could intentionally sow deeper and broader confusion below the user interface level than we currently can. App defenders have been dreaming of turning their applications into labyrinths for years—taking a designer’s approach to everything, not just the user interface, would make apps into an attacker’s nightmare.
All of these ideas—deception, threat modeling, impact analysis—have been around for years. But by thinking in terms of affordances, specifically in terms of attacker perception, we can start to see malevolent and benevolent user experiences as two sides of the same coin. This lets us treat vulnerability management, availability, and application architecture as linked aspects of the same essential problem, which is managing risk while connected to the largest, filthiest, meanest dive bar of them all.