F5 Labs recently featured a CISO-to-CISO blog post by an experienced auditor, Kyle Robinson, discussing how most organizations fail audits. I’ve been through quite a few audits myself, including a number by the author of that blog. Here are six ways to avoid the common audit failures he spelled out.
Until you’ve got backing from the C-levels, don’t even bother talking to an auditor or building a security program. Without executive blessing, everything you do will be as difficult as nailing jelly to a tree. So, how do you get that backing?
A big part of a CISO’s job is to ensure alignment of the security program with the organization’s objectives. This is a two-way street: the program must be right for the organization, and the organization must feel right about the program. Security programs that are forced down the throat of the organization end up being spit back out. For a CISO, this means discovering what an organization values and then explaining cyber risks and controls in those terms. F5 Labs has featured several CISO-to-CISO blogs on this topic, including how to:
Ultimately, the goal is to make sure the security program is prioritized because it reflects the priorities of the organization. If the CISO can tie risk, and consequently the audit program, to what the organization cares about, then getting buy-in from the top is easy.
In an audit, the burden is on you to provide the evidence that you’ve done the right things. There is no presumption of innocence, and auditors rarely take you at your word. If you did not document something, then an auditor will assume you didn’t do it. Documentation shows consistency (we’ve done it this way every time) and constancy (we’ve been doing it that way for years). It also creates a pathway for training and automation as things are clearly spelled out as opposed to being known only to one key person (who will soon be quitting).
Luckily, documentation isn’t hard to do if you write it as you go along. Start with high-level policies describing your intentions (for example, all users will be authenticated before being allowed access), write standards to define how things should be done (access authentication must consist of passwords of at least 10 characters changed every 90 days), and then procedures to describe how those standards will be implemented (Windows will be configured in the following manner to implement our authentication standard…).
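Once a standard is written down this concretely, it can also be checked automatically, which supports the training-and-automation point above. Here is a minimal sketch in Python, assuming the example standard’s hypothetical thresholds (10-character minimum, 90-day rotation); the function name and inputs are illustrative, not part of any real tool:

```python
from datetime import date

# Hypothetical thresholds taken from the example standard above.
MIN_PASSWORD_LENGTH = 10
MAX_PASSWORD_AGE_DAYS = 90

def meets_standard(password_length: int, last_changed: date, today: date) -> list[str]:
    """Return a list of violations of the example authentication standard."""
    violations = []
    if password_length < MIN_PASSWORD_LENGTH:
        violations.append(f"password shorter than {MIN_PASSWORD_LENGTH} characters")
    if (today - last_changed).days > MAX_PASSWORD_AGE_DAYS:
        violations.append(f"password older than {MAX_PASSWORD_AGE_DAYS} days")
    return violations

# A password changed on June 1 is 92 days old by September 1 -> violation.
print(meets_standard(12, date(2017, 6, 1), date(2017, 9, 1)))
```

A check like this turns the standard from tribal knowledge into something any auditor (or new hire) can verify against evidence.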
Wherever humans are involved, mistakes and oversights will happen. It’s folly to assume otherwise. There are two simple techniques to manage this: reduced scope and defense-in-depth.
For audits, the scope is everything. If something is not in scope, the auditor will not need to look at it. For example, by leveraging the principle of Least Privilege1 you could remove administrator rights from everyone except the handful of IT professionals who absolutely need them. Then you have fewer people who need to be background checked, use strong authentication, and follow change controls. Fewer moving parts, fewer chances of failure.
Defense-in-depth means adding additional controls to back up the controls that could fail. In Kyle’s audit failure example, an admin forgot to disable a user on that user’s last day of work. A secondary control could be added: a periodic review of user accounts against HR records to ensure they match up. I did this weekly in my last job until we built an automated link between HR and our user account tool.
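That secondary control boils down to a set difference: accounts in the directory that have no matching active employee in HR. A minimal sketch, with hypothetical sample data standing in for what would really come from your directory (AD/LDAP) and HR system:

```python
# Hedged sketch of the weekly reconciliation described above. The data
# sources here are hard-coded for illustration; in practice they would be
# pulled from the directory service and the HR system of record.

def orphaned_accounts(directory_accounts: set[str], hr_active_employees: set[str]) -> set[str]:
    """Accounts that exist in the directory but belong to no active employee."""
    return directory_accounts - hr_active_employees

active_accounts = {"alice", "bob", "carol"}   # from the directory
hr_roster = {"alice", "carol"}                # bob left last week
print(orphaned_accounts(active_accounts, hr_roster))  # {'bob'}
```

Anything this check flags is a candidate for immediate disabling, catching exactly the forgotten-termination failure from Kyle’s example.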
Since everything you do in security should be based on risk, a complete risk assessment is a must. But what is a good risk assessment? Some people confuse a list of failure scenarios with a risk assessment. Stating that a DDoS attack could cripple your organization is not a risk statement, it is a statement of impact. Risk statements must include the probability of the threat occurring, such as: “It is highly likely in the next year that we will experience a DDoS attack that cripples our Internet services.” Conversely, the chance of a threat occurring, by itself, is not a risk statement either. Receiving lots of password guessing attacks against your SSH services is not a risk. However, if you say there is a high likelihood of an SSH attack succeeding with an attacker gaining access to confidential data, that is an actionable risk statement. Lastly, statements about the weakness of existing controls are not risk statements, either. Saying your firewall is obsolete is not a risk unless you give it context, such as, “The firewall cannot detect web application attacks, which are common and highly damaging to the website.” F5 Labs provides more information on doing robust risk assessments in this blog: Can Your Risk Assessment Stand Up Under Scrutiny?
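The pattern above can be made mechanical: a statement only qualifies as a risk when it names a threat, a likelihood, and an impact. A small illustrative sketch (the class and field names are my own, not from any risk framework) that forces all three elements to be present:

```python
from dataclasses import dataclass

# Illustrative only: encoding the three elements the text says every
# risk statement needs -- threat, likelihood, and business impact.
@dataclass
class RiskStatement:
    threat: str      # what could happen
    likelihood: str  # how probable it is
    impact: str      # what it means to the business

    def __str__(self) -> str:
        return f"It is {self.likelihood} that {self.threat}, {self.impact}."

ddos = RiskStatement(
    threat="we will experience a DDoS attack in the next year",
    likelihood="highly likely",
    impact="crippling our Internet services",
)
print(ddos)
```

Leaving any field empty makes it obvious you have only an impact statement, a threat statement, or a control complaint, not a risk.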
One of the principal tenets of security is to assume breach, which means to expect your security system to fail at some point and plan accordingly. A corollary to that concept is to assume control failure. It’s one thing to deploy a control like a new security policy or strong authentication. It’s another for it to be used properly to reduce risk and meet audit objectives. The best way to know if it is functioning properly is to test it regularly. This is where you can leverage internal auditors, external consultants, vulnerability scanners, and penetration testers. Any large system is going to have occasional lapses, so follow up and keep improving. It’s better for you to test something and find it failing than for an auditor or attacker to do the same.
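Regular control testing lends itself to a simple harness: name each control check, run them all on a schedule, and report whatever fails. A minimal sketch under obvious assumptions (the checks here are stand-in lambdas; real ones would query your directory, firewall, or vulnerability scanner):

```python
from typing import Callable

def run_control_tests(checks: dict[str, Callable[[], bool]]) -> list[str]:
    """Run each named control check and return the names of those that failed."""
    return [name for name, check in checks.items() if not check()]

# Hypothetical checks -- in practice each would inspect a live system.
failures = run_control_tests({
    "no orphaned admin accounts": lambda: True,   # stand-in: control passing
    "password policy enforced":   lambda: False,  # stand-in: control failing
})
print(failures)  # ['password policy enforced']
```

Scheduling something like this (and acting on its output) is how you find a lapsed control before an auditor or attacker does.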
When I was in school, I saw there were really two ways to pass the final exam: study all semester long and do the homework, or slack off all semester and then cram at the last minute the night before the exam. Some people approach their audits in the same manner. The ones who cram before the auditor arrives are the ones who have weak security most of the time, except for when they know it’s going to be examined.
Putting aside the huge security risk this creates, you can also consider this a significant compliance risk. Even if an organization is currently subjected to only periodic point-in-time audits, there is an ever-growing likelihood that unexpected audits and assessments could suddenly spring up. Right now, there are dozens of situations that could entail a surprise visit from regulators. The FTC has become notorious for performing pop quizzes on U.S. organizations with potential security problems.2 Many business contracts and third-party agreements contain “right to audit” clauses that allow business partners to conduct or demand independent assessments of security programs.3 I’ve seen contractual clauses that give the audited organization only 48 hours’ warning before the audit team arrives onsite with their checklists and magnifying glasses. With the prevalence of third-party security failures and regulatory guidance regarding testing them,4 such audits are only going to increase. It’s safe to assume that even if you aren’t being audited now, you will be soon.
MODIFIED: Sep 01, 2017