It’s inevitable: every organization depends on externally developed applications to some degree. Increasingly, these apps are web-based and accessed over the Internet. As part of a forthcoming report on protecting applications, F5 commissioned a survey with Ponemon in which we asked security professionals what percentage of their applications (by category) were outsourced. The top answers were:
- Financial apps, such as banking and ecommerce: 45%
- Document management and collaboration: 41%
- Office suites: 40%
- Backup and storage: 32%
Since these are externally hosted applications, it stands to reason that organizational data is now being stored, processed, and accessed outside of the organization’s defensive perimeter. Therefore, in that same survey, we asked what percentage of those applications also store critical data. The results were:
- Financial apps, such as banking and ecommerce: 36%
- Document management and collaboration: 60%
- Office suites: 52%
- Backup and storage: 67%
So, essential apps are holding essential data, and they’re out of direct oversight. Obviously, it’s not enough to just trust that things are going to be fine.
Assessing the Security of a Third-Party App
Assessing the security of your outsourced applications requires multiple steps. The first is to assess the security of the organization itself. A key way to look at the organization is through its third-party audits and security assessments. We explored that in our article Can Audits Help Us Trust Third Parties?
The second step is to look at the security of the app itself, but before we delve deeply into that, it’s worth looking at how the third party handles vulnerability submissions from outside. If the vendor’s response to reports of security holes is to ignore the submitter or, worse, sue the submitter,1 then you know that security isn’t going to be a high priority for them. If they are open to security submissions, is there a formal mechanism for collecting and scoring discovered vulnerabilities? Is there a dedicated department or a named point of contact? Even better, is there a bug bounty program? At the very least, you want to make sure that externally discovered holes are investigated and risk analyzed.
How Does the Vendor Handle Vulnerabilities?
As a customer, you should expect to have some say in the vulnerability remediation process. One thing to inquire about is how the organization prioritizes discovered vulnerabilities. Do they use a standardized risk-based process? Common Vulnerability Scoring System v3.0 (CVSSv3)?2 OWASP risk rating?3 Or is the process arbitrary? Do you get a vote in the prioritization? How are status updates on remediations communicated? Are all open vulnerabilities available for your review? What are the escalation paths if you’re not satisfied with this process? Ideally, all of this is spelled out in the contract or support agreement for the greatest clarity among all parties.
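A standardized, risk-based prioritization can be as simple as ordering the open vulnerability list by CVSS base score and reporting each one’s qualitative severity. The severity bands below come from the CVSS v3.0 specification; the vulnerability entries themselves are hypothetical, purely for illustration:

```python
# Sketch of CVSS-based triage. Severity bands are per the CVSS v3.0
# qualitative severity rating scale; the vulnerability data is invented.

def cvss_severity(score: float) -> str:
    """Map a CVSS v3 base score (0.0-10.0) to its qualitative rating."""
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

def prioritize(vulns):
    """Order open vulnerabilities so the highest-scoring are addressed first."""
    return sorted(vulns, key=lambda v: v["cvss"], reverse=True)

vulns = [
    {"id": "VULN-101", "cvss": 5.3},   # e.g., information disclosure
    {"id": "VULN-102", "cvss": 9.8},   # e.g., unauthenticated remote code execution
    {"id": "VULN-103", "cvss": 3.1},   # e.g., weak cache headers
]

for v in prioritize(vulns):
    print(v["id"], v["cvss"], cvss_severity(v["cvss"]))
```

A vendor with a process like this can answer your status questions directly from the prioritized list, which is exactly the transparency you want written into the agreement.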
Regarding testing, are you (as a customer) allowed to run vulnerability and penetration tests against the application or hire external firms to do this? If so, can you share the results with other customers? Can you disclose the findings to your auditors or regulators? These are all important things to have ironed out before committing to a critical external application contract.
Assessing a Vendor’s App Security Program
Now that we have some idea of how the vendor handles security vulnerabilities in their app, it’s time to look at how they prevent them from occurring in the first place. Beyond the aforementioned review of their audit reports, you’ll want to assess their secure development process. Many development programs aren’t audited, at least not for security, so you may be on your own. Here are some characteristics of a development process that includes a strong security effort:
- Developer security training – Programmers should be very familiar with web app risks like the OWASP Top 10 as well as common mitigation strategies such as input sanitization and session management.
- Security standards – Developers know and follow published policies and procedures for secure development. An excellent example is the OWASP Secure Coding Practices.4
- Threat modeling – Attacks against the app are mapped out and neutralized in the development design. This is where you can generate “abuse cases” which describe attacker behavior you intend to prevent.
- Code review – There is a review (either automated or manual) of new code before it is compiled into the production application.
- Static application security testing (SAST) – More advanced than a code review, this is security-specific scanning of the codebase for potential security problems.
- Security functionality testing – Security features are tested for use and abuse cases (see threat modeling) to ensure they function as expected.
- External code library vulnerability management – Many security problems come not from core code but from third-party libraries and frameworks. These should also be tested and patched on a regular basis.
- Vulnerability / Dynamic Application Security Testing (DAST) – Compiled, running code is tested against simulated hacks.
This list doesn’t include the application vulnerability management processes mentioned earlier such as vulnerability identification, prioritization, tracking, and mitigation.
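To make one of these practices concrete, take the mitigation strategies mentioned under developer security training. A classic OWASP Top 10 risk is injection, and the standard mitigation is parameterized queries rather than string concatenation. This sketch uses an in-memory SQLite table invented for illustration:

```python
# Sketch: parameterized queries as an injection mitigation (OWASP Top 10).
# The users table and its rows are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

def find_user(conn, name):
    # Unsafe alternative: f"SELECT ... WHERE name = '{name}'" would let
    # an input like "' OR '1'='1" be parsed as SQL and return every row.
    # Safe: the driver binds the value; it is never interpreted as SQL.
    cur = conn.execute("SELECT name, role FROM users WHERE name = ?", (name,))
    return cur.fetchall()

print(find_user(conn, "alice"))           # [('alice', 'admin')]
print(find_user(conn, "' OR '1'='1"))     # [] -- injection attempt finds nothing
```

A code review or SAST pass on the vendor’s side should flag the unsafe string-built variant; asking how they catch exactly this pattern is a good probe of how real their program is.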
Assessing a Vendor’s Operational Security Program
Beyond the application development program, it’s also important to know how IT operations deploys, manages, monitors, and maintains the application. Key operational practices related to application security include:
- Application inventory and version tracking – What versions are Internet-visible, holding what data, and subject to what compliance requirements?
- Application hardening – Default accounts are removed and debugging messages are turned off.
- Standardized deployment process – A thorough change control process (automated or manual) is in place to ensure the right code is deployed and works as expected.
- Formal patching process – Patching follows a reasonable cadence and includes emergency procedures for when zero-day attacks strike.
- Least privilege access – How many people are given access to the production application environment, including any third-parties?
- Application security monitoring – Not just monitoring error events but also security events, including watching for brute forcing attacks.
- Application reliability – Making sure uptime mechanisms are in place, including data backup as well as a denial-of-service response strategy.
- Application lifecycle tracking and review – Ensuring that old versions of the application are upgraded or retired.
If an organization does have an audit report, it’s likely that some or most of these processes will be assessed as control objectives.
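As a concrete example of the security-event monitoring item above, a minimal brute-force detector just counts failed logins per source IP inside a sliding time window. The window, threshold, and IP address here are illustrative assumptions, not recommendations:

```python
# Sketch: flag an IP once it exceeds a threshold of failed logins
# within a sliding window. All parameters are illustrative.
from collections import defaultdict, deque

WINDOW_SECONDS = 300   # look back 5 minutes
THRESHOLD = 5          # failures tolerated before alerting

failures = defaultdict(deque)  # ip -> timestamps of recent failures

def record_failure(ip, ts):
    """Record a failed login; return True if the IP should be flagged."""
    q = failures[ip]
    q.append(ts)
    while q and ts - q[0] > WINDOW_SECONDS:
        q.popleft()               # drop failures that aged out of the window
    return len(q) > THRESHOLD

# Simulated log stream: one IP hammering the login endpoint every 2 seconds.
alerts = [record_failure("203.0.113.9", t) for t in range(0, 12, 2)]
print(alerts)   # the sixth failure trips the alert
```

A real deployment would feed this from authentication logs and pair the alert with a response such as rate limiting or account lockout, but the windowed-counter idea is the core of most brute-force monitoring.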
Security Features of the Application Itself
One area we haven’t focused on yet is the security features of the app itself, starting with how users log in. Does the app support enhanced authentication beyond a local username and password? Single sign-on, multifactor authentication, and federation support are all basic expectations for application access. And regarding access, what do the password reset mechanisms look like? Hopefully they don’t rely on something weak like secret questions.
Beyond access, what does the encryption look like? What forms of transport encryption are in place and are they up to date? Are the TLS certificates issued from a reputable authority? Are good TLS practices in place such as HTTP Strict Transport Security (HSTS) and Online Certificate Status Protocol (OCSP) stapling? Beyond transport encryption, is user data stored in an encrypted format?
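Some of these TLS hygiene checks can even be automated. As one small example, here is a sketch that validates a Strict-Transport-Security (HSTS) header value offline. The one-year minimum max-age is a commonly recommended floor (and the preload-list requirement), not a protocol mandate, and the sample header values are invented:

```python
# Sketch: offline validation of an HSTS header value.
# The one-year max-age floor is a common recommendation, not an RFC 6797
# requirement; sample headers are hypothetical.

def parse_hsts(header):
    """Split a Strict-Transport-Security header into its directives."""
    directives = {}
    for part in header.split(";"):
        part = part.strip().lower()
        if not part:
            continue
        name, _, value = part.partition("=")
        directives[name] = value or True   # valueless directives become True
    return directives

def hsts_ok(header, min_age=31536000):
    """Return True if max-age meets the minimum (default: one year)."""
    d = parse_hsts(header)
    try:
        return int(d.get("max-age", 0)) >= min_age
    except (ValueError, TypeError):
        return False

good = "max-age=63072000; includeSubDomains; preload"
weak = "max-age=86400"
print(hsts_ok(good), hsts_ok(weak))   # True False
```

The same spirit applies to the other checks: certificate issuers, OCSP stapling, and protocol versions can all be inspected from the outside with standard tooling, so there is little excuse for not verifying them before signing.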
All of these questions are important to answer before you find yourself wedded to a critical outsourced application that holds critical data. It’s just as important to revisit these questions at least annually to ensure threats are being mitigated to an acceptable level. In fact, it would be a good idea to keep raising the bar on security expectations because attackers are always evolving to work around our defenses. These kinds of assessments are critical not only to maintaining the security of your outsourced apps but to improving the app community as a whole. If enough customers insist that a vendor improve their security, then everyone benefits. Whatever you do, don’t assume app security is “just taken care of.” You rarely get what you expect—you get what you inspect.
More to come in our Application Protection Report July 25th
As noted in the intro, this article draws on a Ponemon survey conducted for our forthcoming Application Protection Report, which will be published here on F5 Labs on July 25, 2018!