Why Every Federal Agency Needs An Enterprise Application Strategy And How To Build One

Introduction

IT executives within U.S. federal agencies must walk a fine line to deliver more value while still reducing waste and minimizing losses due to cyberattacks. On the one hand, IT leaders must embrace the latest technology to unlock greater efficiencies, deliver more value, and save taxpayer money. On the other hand, they face an evolving and expanding threat landscape requiring significant resources to mitigate. In short, with every initiative and every decision, they must both innovate and protect. To navigate these seemingly opposing goals, federal IT leaders should develop an enterprise application strategy, tailored to their organizational mission.

Ushering in the Age of Applications

In the F5 2019 State of Application Services report, 65% of respondents said that their organizations are in the midst of a digital transformation. Digital transformation is the use of technology to enable profound improvements in organizational performance and user productivity.

In short, applications are becoming the business or mission itself.

To take advantage of this radical shift, organizations are making sweeping changes in pursuit of new business and IT operational models, like:

  • Relying on SaaS services such as cloud-based email and productivity tools
  • Containerizing custom applications to simplify management and reduce cost
  • Using cloud security services to detect attacks quickly and apply policy based on organizational rules

Applications are being delivered to many different types of devices. Users accessing applications no longer reside within traditional enterprise environments; they are mobile or working from home. The delivery of business-critical functions like CRM, ERP, and everything in between must allow for access outside the traditional enterprise network.

The 2018 F5 Labs Application Protection Report revealed that the average public sector organization uses 680 applications, 32% of which are considered mission critical.

On top of this tremendous growth in applications, organizations are deploying those applications into new architectures and services. The 2019 State of Application Services research indicates that 14% of organizations surveyed have made containers the default application workload isolation approach and 87% of respondents are adopting a multi-cloud architecture. The Office of Management and Budget (OMB) itself is advocating a "Cloud Smart Strategy"1 to migrate to a safe and secure cloud network.

Understand the rapidly expanding threat surface area

Government and commercial organizations aren’t the only ones using technology to reduce friction, supply new services, and enhance value. Cybercriminals, militant groups, and nation-state threat actors are also innovating their cyberattack capabilities at breakneck speed. As we’ve gotten better at protecting our networks and infrastructure, cyberattackers are shifting their sights to softer targets.

These soft targets are used to infiltrate high value assets that may store personnel records or classified information. F5 Labs research shows that 86% of all cyberattackers either target applications directly or steal user identities, usually by phishing.2

Attackers are learning that it's sometimes easier to attack indirectly. One way is by going after less important applications, such as those supporting IoT devices, and escalating access from there. Such was the case when criminals hacked a U.S. casino through a lobby aquarium thermometer and used it as a foothold into the high roller database.3 Similarly, the devastating Target breach of 2013 began as an exploit of Target's HVAC vendor and ended with a breach of the point of sale system and a loss of 40 million credit cards.4

Left unmanaged, open source and other third-party components and services can also introduce risk. The Sonatype State of the Software Supply Chain 2018 notes that one in eight open source component downloads contains a known security vulnerability.5 Organizations, including those in the public sector, take advantage of open source because it speeds up development and delivery. However, these same organizations often fail to include open source components in security scans and review processes. These components become part of the application portfolio and should be treated with the same scrutiny as custom code. This includes components on both the server side (e.g., NPM packages, libraries) and the client side.

With applications now playing a more critical role in fulfilling an organization's goals, the growth of these apps is outpacing the ability of most organizations to scale their operations. The traditional model of security at the perimeter does not scale and is limited in its effectiveness at preventing threats. Once a soft target has been compromised, high value assets are at risk. New architectures and deployment models further challenge traditional security measures. Apps are being staged in dispersed locations and are using multi-cloud application services to improve performance. Meanwhile, the threat surface area associated with applications is expanding exponentially.

Continuous monitoring close to applications and soft targets is important to provide security while maintaining performance. Intelligent cloud security services can work with the application to mitigate threats quickly and protect sensitive data. Small sets of client data can be sent to the cloud and compared against millions of stored artifacts to identify potential threats, for example to determine whether a client is a bot or an actual user.
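
As a rough illustration of that pattern, the sketch below sends a small client fingerprint to a cloud reputation service and acts on the returned score. The endpoint URL, request fields, and thresholds are all assumptions for illustration, not a specific vendor's API.

```python
# Hypothetical sketch: send a small client fingerprint to a cloud reputation
# service and act on the returned bot score. All names and thresholds here
# are placeholders, not a real provider's API.
import json
import urllib.request

REPUTATION_URL = "https://threat-intel.example.gov/v1/score"  # placeholder endpoint

def classify_client(ip: str, user_agent: str, tls_fingerprint: str) -> str:
    """Return 'human', 'bot', or 'unknown' based on a cloud-supplied score."""
    payload = json.dumps({
        "ip": ip,
        "user_agent": user_agent,
        "tls_fingerprint": tls_fingerprint,
    }).encode("utf-8")
    req = urllib.request.Request(
        REPUTATION_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        score = json.load(resp).get("bot_score", 0.0)  # 0.0 = human-like, 1.0 = bot-like
    if score >= 0.8:
        return "bot"
    if score <= 0.2:
        return "human"
    return "unknown"
```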

A fraud reduction service can work with an application to interact with malicious actors, gathering additional data and preventing large-scale fraud. The full fraud profile can be delivered to the organization's continuous monitoring system to take action: prevent the spread of the attack, determine the location of compromised systems, and provide steps to remove the threat from infected machines. Analysts can then pass the information to operations for further analysis.

Utilize multi-cloud and SaaS to your advantage

As if things weren’t complex enough, federal agencies are moving to multi-cloud and SaaS deployments to leverage greater flexibility, gain higher availability, reduce vendor lock-in, and—in some cases—take advantage of lower costs. The quandary is: how do they deliver a standardized, secure, and seamless application experience across hybrid, multi-cloud, and SaaS architectures without exceeding their budget allocation?

Agencies may use their perimeter solution at trusted internet connection points to secure connectivity to SaaS and multi-cloud environments. This can cause performance issues, since agencies have a limited number of connection points. Public-facing applications are required to pass through these points for security, which introduces latency. The benefits of scalable, high-performance applications are eliminated because traffic is required to "trombone" through these points for security.

Perimeter solutions also rely on static signatures. If a new threat emerges and has been profiled by a cloud security provider, that intelligence is not automatically relayed to the perimeter system to prevent the attack. Signature updates may take a day or a week, and high value assets may be compromised before the signature is finally developed and applied to the perimeter security system.

Standardizing application services

Standardizing on multi-cloud and SaaS application services—the solutions put in place to secure, manage, and optimize your applications—reduces the operational complexity that comes along with a multi-cloud architecture.

For example, it's easier to manage—and more powerful to have—a service that functions at an enterprise application level, such as code-based invocation of APIs and event-driven systems, than a service that is platform-specific, such as having to use a vendor-specific message queuing service. Enabling one set of features across all applications cuts down on operational overhead. By adopting a standard platform for as many application services as possible, organizations can leverage more automation and reuse more code to realize consistent, predictable, and repeatable operational processes.
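
As a concrete illustration of that principle, the sketch below wraps queuing behind one interface so application code does not depend on any particular vendor's service. The class names, environment labels, and print statements are illustrative assumptions, not real SDK calls.

```python
# Illustrative sketch of standardizing an application service behind one
# interface so application code is not tied to a vendor-specific queue.
from abc import ABC, abstractmethod

class MessageQueue(ABC):
    """The single interface application teams code against."""
    @abstractmethod
    def publish(self, topic: str, message: bytes) -> None: ...

class OnPremQueue(MessageQueue):
    def publish(self, topic: str, message: bytes) -> None:
        print(f"[on-prem broker] {topic}: {len(message)} bytes")  # stand-in for a real broker call

class CloudVendorQueue(MessageQueue):
    def publish(self, topic: str, message: bytes) -> None:
        print(f"[cloud vendor queue] {topic}: {len(message)} bytes")  # stand-in for a real SDK call

def get_queue(environment: str) -> MessageQueue:
    # Operations selects the backend per environment; application code is unchanged.
    return CloudVendorQueue() if environment == "cloud" else OnPremQueue()

queue = get_queue("cloud")
queue.publish("audit-events", b'{"user": "jdoe", "action": "login"}')
```

Because every application publishes through the same interface, security and monitoring controls can be attached once to the shared service rather than re-implemented per platform.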

Developing an Enterprise Application Strategy

In an age in which many born-in-the-cloud organizations don’t even have an IT department, traditional IT architectures and operational processes fall noticeably short of application developers and DevOps teams’ expectations. The desire to drive applications out faster often leads to bypassing traditional networking and security teams along with the associated security and operational processes. Indeed, protecting the enterprise application portfolio has as much to do with people and processes as it does with technology.

To manage performance and, more importantly, risk across this multi-cloud sprawl, organizations are desperate for solutions. Wherever the applications live, solutions must support the deployment of consistent policies, manage threats, provide visibility, and allow for monitoring of app health and performance. A poor-fitting solution could easily diminish any benefits from a digital transformation if the innovation and agility of application development and DevOps teams are hindered.

Given the increasing value—and risk—profile of applications, every federal agency needs to develop an enterprise application strategy that addresses how applications in the enterprise portfolio are built/acquired, deployed, managed, and secured.

Step 0: Align application strategy goals with the organization mission

Summary: Use this opportunity to create an enterprise application strategy that aligns with the mission and goals of your federal agency.

The whole point of digital transformation is to replace unwieldy and manual processes with efficient, data-rich applications. Therefore, the overarching goal of an enterprise application strategy should be to directly enhance, speed up, and protect the organization’s digital capabilities as they relate to delivering against the mission. Any applications or associated application services that are incongruent to this goal should be deprioritized. High priority applications should be given additional resources and security, while deprioritized assets can use shared resources.

This alignment also means taking the status quo into consideration, which includes looking carefully at the current enterprise data strategy, compliance requirements, and the overall risk profile of the organization.

In many cases, the constraints imposed by these different sources and the impact on the agility of app development teams is likely not well understood. Your enterprise application strategy should clarify the balance your agency is willing to strike between the often-competing forces of innovation, agility, and risk.

Step 1: Build an application inventory

Summary: Before you start modernizing, you need to build a complete application inventory.

When it comes to an enterprise application strategy, most teams aren’t lucky enough to start afresh. Nearly everyone in the IT industry inherits a technology architecture that is a result of decades of disparate systems mashed on top of legacy systems coaxed to keep functioning. This problem can be especially severe within the U.S. federal government, as compared to the commercial sector. It is rarely easy to cleanly migrate these incongruent pieces of technology to a desired target state. Therefore, more discovery and analysis must be done.

While it may sound overly simple, in order to adequately protect something, you must first know that it exists, and then be able to accurately monitor its health. And yet, with few exceptions, most organizations are not able to report with confidence the number of applications they have in their portfolio, let alone if those applications are healthy and secure. The F5 Labs 2018 Application Protection Report discovered that 62% of IT security leaders have low or no confidence in knowing about all the applications in their organization.

What to include in your application inventory

An application inventory is the most foundational element of any application strategy. This is a catalog of all applications, whether delivered internally, laterally (e.g., to other government entities), or externally (e.g., to the public), that includes the following (a sketch of one inventory entry as a simple data structure follows the list):

  • A description of the function the application or digital service performs
  • The origin of the application (e.g., custom developed, packaged software, or third-party service)
  • The key data elements the application requires access to or manipulates
  • Other services the application is communicating with
  • Open source and other third-party components that are part of the application
  • The individual(s) or group(s) accountable for the application
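
A minimal sketch of one inventory entry, expressed as a data structure, is shown below. The field names and example values are illustrative and should be matched to your agency's own catalog schema.

```python
# Sketch of a single application inventory entry; fields mirror the list above.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ApplicationInventoryEntry:
    name: str
    description: str               # the function the application or digital service performs
    origin: str                    # e.g., "custom", "packaged software", "third-party service"
    data_elements: List[str]       # key data the application accesses or manipulates
    dependencies: List[str]        # other services the application communicates with
    third_party_components: List[str] = field(default_factory=list)  # open source and other components
    owners: List[str] = field(default_factory=list)                  # accountable individuals or groups

tax_portal = ApplicationInventoryEntry(
    name="tax-filing-portal",
    description="Public-facing tax filing and status lookup",
    origin="custom",
    data_elements=["PII", "tax history"],
    dependencies=["identity-service", "payment-gateway"],
    third_party_components=["openssl", "express"],
    owners=["Revenue Systems Branch"],
)
```
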
How to build the inventory

Building the application inventory for the first time is often painstaking and time-consuming work. One approach for smoking out rogue applications is to make the application inventory an allowlist: applications that aren't on the allowlist simply do not get access to enterprise resources (e.g., the network). To chase down applications outside of your organization, a tool like a cloud access security broker (CASB) can be very helpful. CASBs sit between your users and the Internet, monitoring and reporting on all application activity. They can not only tell you which applications your employees use most (and how they access them) but also give insight into shadow IT application usage.
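
The allowlist idea reduces to a very small check, sketched below under the assumption that the allowlist is populated from the inventory entries described earlier; the application names are examples only.

```python
# Sketch of treating the inventory as an allowlist: applications not in the
# catalog simply do not get access to enterprise resources.
inventory_allowlist = {"tax-filing-portal", "identity-service", "payment-gateway"}  # built from the inventory

def is_allowed(app_name: str) -> bool:
    """Only cataloged applications get network or resource access."""
    return app_name in inventory_allowlist

for app in ("tax-filing-portal", "rogue-reporting-tool"):
    print(app, "->", "allow" if is_allowed(app) else "deny and flag for review")
```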

By employing a DevOps architecture model and integrating your applications into it, you can simplify the inventory process moving forward. When an application's priority is determined and it is placed within your multi-cloud or SaaS environment, developers can package the deployment using open source tools (e.g., Ansible, GitHub). These tools then manage the deployment, including its security services, patch management, and code. Inventory information is centralized and can be provided quickly, so the application can live in a cloud or SaaS environment and still be identified by the organization.

FedRAMP considerations

Every moment spent ensuring the accuracy of your application inventory has a direct positive impact on your ability to quickly define FedRAMP system boundaries. It also gives you a much more accurate way to quickly and precisely identify the responsible party for an application's infrastructure, or even an individual component of an application, when asked to do so by the certifying body.

Lastly, your FedRAMP efforts require that this is a continuous exercise, possibly more than once a year. It’s a constant process that involves keeping an eye on what applications and data repositories are in play, monitoring what users need to do, and evaluating how your development environments are evolving.

Step 2: Assess cyber risk for each application

Summary: Take a look at each application’s individual risk level and combine that with all applicable compliance considerations when determining security measures.

Cyber risk is a significant and growing concern for IT leaders within the U.S. government. To combat it, you should begin by evaluating cyber risk for each of your applications.

Each application in your inventory should be examined for four primary types of cyber risk:

  1. Leakage of sensitive internal information (e.g., military secrets)
  2. Leakage of sensitive customer/user information (e.g., personnel records, tax history)
  3. Tampering of data or applications
  4. Denial of service to data or applications

With cyber risk, the importance of a digital service should be measured by the financial or reputational impact of a cyberattack in each of the categories above. Different organizations will assign different levels of potential loss to the same kinds of services, so each organization should make its own estimates based on its defined mission. FISMA, for example, asks you to determine agency-level risk to the mission or business case, but it is often practical to also examine risks at the application level to prepare for when your mission's compliance standards inevitably deepen.
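
One lightweight way to make these estimates comparable across the portfolio is to score each application against the four risk types. The sketch below is illustrative; the scales, weights, and example numbers are assumptions each agency would calibrate to its own mission impact analysis.

```python
# Illustrative per-application risk scoring across the four risk types above.
RISK_TYPES = ["internal_leakage", "user_data_leakage", "tampering", "denial_of_service"]

def application_risk_score(impact: dict, likelihood: dict) -> float:
    """Combine impact (0-5) and likelihood (0-1) per risk type into one comparable score."""
    return sum(impact.get(r, 0) * likelihood.get(r, 0.0) for r in RISK_TYPES)

score = application_risk_score(
    impact={"internal_leakage": 5, "user_data_leakage": 4, "tampering": 3, "denial_of_service": 2},
    likelihood={"internal_leakage": 0.2, "user_data_leakage": 0.5, "tampering": 0.3, "denial_of_service": 0.6},
)
print(f"composite risk score: {score:.1f}")  # used to rank applications for remediation
```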

Compliance considerations

Sometimes included with organizational risk calculations are the risks of non-compliance with applicable rules, guidelines, and contracts. Federal entities are steeped in regulations and standards, and these should all be accounted for when evaluating applications and digital services. Establishing an application-linked model for assessing cyber risk helps ease the process of satisfying the CDM/ConMon requirements of FedRAMP compliance by letting you narrow the boundaries to manageable and appropriately granular groups of services.

Simplifying compliance with SaaS

Using SaaS services can help reduce organizational compliance concerns, since much of the application risk and compliance responsibility rests with the SaaS provider. This lets an organization concentrate on its custom applications.

Look at integrating cloud security services with the application to protect user accounts, financial data, and personal information. Cloud security services draw on millions of data points that the application can use to determine whether a client is malicious or an actual user. Because these data points are continuously updated, new threats can be identified, rules can be applied to the application in real time, and the services can work with an existing enterprise security strategy.

Step 3: Determine which application services are needed

Summary: Take stock of what application services—the solutions that run your applications behind the scenes—are necessary to your organization.

Applications rarely stand alone, so along with the application inventory, the application services running your apps should be managed and tracked. Application services are packaged solutions for application builders that improve the speed, mobility, security, and operability of an application. App services bestow several important benefits to the application workload:

  • Speed: The performance of an application workload and the ability to deliver quickly.
  • Mobility: The easy movement of an application workload from one physical or logical hosting site to another.
  • Security: The protection of the application workload and the data associated with it.
  • Operability: The assurance that an application workload is easy to deploy, easy to keep running, and easy to troubleshoot if it fails.

A good way to locate dependent application services is by examining the Controls section of the FISMA or FedRAMP System Security Plan for your environment. This will often point out the presence of both security-focused application services and other services that depend on them.

While every application can benefit from application services, not every application requires the same application services.

Common application services include:

  • Load balancing
  • DNS delivery
  • Global server load balancing
  • Web application firewalls
  • DDoS prevention/protection
  • Application monitoring and analytics
  • Identity and access management
  • Application authentication
  • API gateways
  • Container ingress and egress control
  • SSL encryption
Balancing cost, security, and performance

All application services involve some level of cost, both directly (in terms of the service itself) and indirectly (in terms of operational upkeep). Low priority applications can use shared capabilities, which provide security while maintaining performance, to help reduce cost. High value assets warrant dedicated resources because they are critical to the organization's productivity.

It's worth noting that many app services are specifically designed to support narrow categories of applications. For example, only applications serving IoT devices need an IoT gateway. Applications delivered in a traditional architecture don't require app services targeting containerized environments, so ingress control and service mesh application services may not be applicable.

In some cases, new application services may need to be acquired to ensure compliance or to reduce risk. Even when the minimum baseline controls technically meet compliance standards, resist the temptation to stop there; insist on selecting application services that enhance your ability to deal with cyber risk.

Step 4: Define application categories

Summary: Logically group applications based on their type, priority, and requirements.

Once the application inventory is complete, the next step is to group your applications into logical categories based on the characteristics that need different management and application services approaches (e.g., access to sensitive data, exposure to more threats).

Once categorized, the enterprise application policy should specify the performance, security, and compliance profiles that are to be applied to different application types, based on the criticality and enterprise classification of the application itself.

We recommend starting with four basic tiers.

Tier 1

Application characteristics
Applications that you consider high value assets: mission critical digital services that collect and transform sensitive data

Required application services
Load balancing, global server load balancing, web application firewall, DDoS protection/prevention, bot detection, SSL encrypt and decrypt, user identity and access management, application / service identity and authentication, application visibility/monitoring

Additional characteristics
Application services are placed close to the application and incorporated within the application deployment. Use intelligent cloud services to protect sensitive data such as user accounts, credit card information, and personal information.


Tier 2

Application characteristics
Mission critical digital services that provide access to sensitive data

Required application services
Load balancing, global server load balancing, web application firewall, DDoS protection/prevention, bot detection, SSL encrypt and decrypt, user identity and access management, application/service identity and authentication, application visibility/monitoring

Additional characteristics
Use intelligent cloud services to protect sensitive data such as user accounts, credit card information, and personal information.


Tier 3

Application characteristics
Mission critical digital services that do not collect or provide access to sensitive data

Required application services
Load balancing, global server load balancing, DDoS protection/prevention, application visibility/monitoring

Additional characteristics
You can use shared services to help reduce cost.


Tier 4

Application characteristics
Other digital services

Required application services
Load balancing, application visibility/monitoring

Additional characteristics
You can use shared services to help reduce cost.
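
The tier definitions above lend themselves to being encoded as policy data, so the required application services can be applied consistently and audited automatically. The sketch below is one illustrative way to do that; the service names simply mirror the tiers above.

```python
# Sketch of the four tiers as a policy lookup from tier to required application services.
BASELINE = ["load balancing", "application visibility/monitoring"]

FULL_STACK = BASELINE + [
    "global server load balancing", "web application firewall", "DDoS protection/prevention",
    "bot detection", "SSL encrypt and decrypt", "user identity and access management",
    "application/service identity and authentication",
]

TIER_SERVICES = {
    1: FULL_STACK,                                                     # high value assets, sensitive data
    2: FULL_STACK,                                                     # mission critical, access to sensitive data
    3: BASELINE + ["global server load balancing", "DDoS protection/prevention"],
    4: BASELINE,                                                       # other digital services, shared resources
}

def required_services(tier: int) -> list:
    """Return the application services an app in this tier must be deployed with."""
    return TIER_SERVICES[tier]

print(required_services(3))
```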

 

The value in categorization

As the threats facing applications vary based on the environment in which they are hosted, this categorization can be further expanded to differentiate based on deployment environment (e.g., on-premises, public cloud).

Prioritizing your goals in this fashion also helps you easily pre-classify the applications you deploy into the proper FISMA/FedRAMP levels. Spending a little time here developing a structure for your mission goals allows you to spend much less time talking to an auditor later.

No organization has enough resources to do everything they want in an acceptable timeframe. By prioritizing your applications, you can take a triage approach to what apps need bolstering with application services, which apps should be modernized or replaced, and which apps aren’t worth the effort. For apps in the latter category, make sure they’re segmented off in your network and avoid the scenario in which an innocuous IoT thermostat leads to a full network breach. This process also includes looking at new applications that could unlock new value streams and should therefore be either developed internally or sourced from a third-party.

Step 5: Define parameters for application deployment and management

Summary: Develop deployment and consumption parameters.

A foundational part of any IT strategy has always been deployment and operational management, and a modern enterprise application strategy adds a few new twists (e.g., the importance of the end-user experience). This includes looking at:

  • Which deployment architectures are supported (e.g., hybrid cloud, multi-cloud)
  • Deployment model options for each of the application categories
  • Which public clouds can serve as access points for applications
  • To what extent public cloud native services can be leveraged versus third-party

Different applications have different needs in terms of deployment and consumption models. During this phase of developing your application strategy, you should strive to gain a clear understanding of the different deployment options, each of which might have different consumption models, cost impacts, and compliance/certification profiles.
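
To make those options explicit, the deployment parameters can be written down per application category, as in the sketch below. The architectures, cloud constraints, and tier assignments shown are illustrative assumptions, not recommendations.

```python
# Illustrative sketch of deployment parameters captured as policy data per tier.
DEPLOYMENT_POLICY = {
    # tier: (allowed architectures, allowed cloud environments, cloud-native services allowed?)
    1: (["on-premises", "hybrid cloud"], ["agency-approved regions only"], False),
    2: (["hybrid cloud", "multi-cloud"], ["agency-approved regions only"], False),
    3: (["multi-cloud"], ["any FedRAMP-authorized cloud"], True),
    4: (["multi-cloud", "SaaS"], ["any FedRAMP-authorized cloud"], True),
}

def deployment_options(tier: int) -> dict:
    """Return the deployment parameters an application in this tier must follow."""
    architectures, clouds, native_ok = DEPLOYMENT_POLICY[tier]
    return {"architectures": architectures, "clouds": clouds, "cloud_native_services": native_ok}

print(deployment_options(2))
```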

In selecting deployment models, it is also prudent to inventory available skills and talent to factor into the decision. For example, choosing to deploy on AWS when you have insufficient in-house talent to manage it and lack access to contract-based skills can slow you down and introduce risk.

Never forget that your deployment and management mechanisms may themselves be subject to authorization, whether under FISMA or FedRAMP ATO/P-ATO, whichever your agency uses as the standard for your mission.

Step 6: Clarify roles and responsibilities

Summary: Establish clear lines of accountability for each element of your enterprise application strategy.

In addition to articulating your goals and priorities, the enterprise application strategy should also include elements around roles and responsibilities.

You should know:

  • Who has decision rights around optimizing and securing the application portfolio (e.g., technology selection, application disposition, user access management)?
  • Who has Privileged User Access to each application?
  • Who is responsible for deployment, operations, and upkeep of each application in the various environments?
  • Who is responsible for compliance with the enterprise application policy?
  • Who is going to monitor for compliance to the enterprise application strategy goals? And who will they report metrics to?
  • Who is going to monitor vendors—including open source and third-party component/service providers—for compliance?
  • Who is going to ensure all applications and application services are accounted for (as applications and services continue to change and be added/removed)?

These responsibilities could fall to an individual, to multi-departmental committees, or even to entire departments. Regardless, they should be spelled out clearly. They may, in fact, need even more definition than your compliance regime requires.

More advanced organizations will adopt operational processes and automation to assign these accountabilities early in the development process, at the time of application inception. In a multi-cloud world of hundreds or even thousands of applications supporting critical functions, the application strategy and corresponding policies should establish clear lines of accountability.

Enforcing the Enterprise Application Strategy

Once the enterprise application strategy is developed, to serve its purpose, it must be enforced. Enforcement mechanisms should include “hard” guardrails built into the automation of processes (e.g., user access control, code vulnerability scans at check-in) as well as “soft” measures such as employee training and capability- or awareness-building.
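
As one example of a hard guardrail, the sketch below blocks a check-in when a dependency matches a known-vulnerable version. The vulnerability feed entries and manifest format are placeholders for whatever your scanning tooling actually provides.

```python
# Minimal sketch of a check-in gate: fail the pipeline if any dependency
# matches a known-vulnerable version. Feed entries and versions are examples.
import sys

KNOWN_VULNERABLE = {("lodash", "4.17.15"), ("log4j-core", "2.14.1")}  # example feed entries

def scan_manifest(dependencies: dict) -> list:
    """Return the (name, version) pairs that match the vulnerability feed."""
    return [(name, ver) for name, ver in dependencies.items() if (name, ver) in KNOWN_VULNERABLE]

findings = scan_manifest({"express": "4.17.1", "lodash": "4.17.15"})
if findings:
    print("Blocking check-in; vulnerable components:", findings)
    sys.exit(1)  # a non-zero exit code fails the automation step
```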

Implement robust access controls

Your access control policy should support the operational roles and responsibilities defined in the enterprise application strategy and extend to all applications, both on premises and in the cloud. Special attention should be paid to privileged users because of the risk they pose to the application; their administrative or root permissions make them attractive targets for sophisticated APTs.

Special measures recommended for privileged users include:

  • Privileged users should always be in separate groups. They should be defined as "high risk" inside your access control solution and subject to security controls you might not choose to implement for all users or all application classification tiers.
  • Multiple factors of remote authentication should be required. If access attempts are made with valid credentials but fail the second authentication requirement (as when attackers have collected valid credentials from a breach where credentials were shared), or come from a location not physically possible based on the last valid login (such as a successful login in the U.S. two hours before the same user attempts to log in from eastern Europe), the account should be locked until further security review is completed (a minimal sketch of this check appears after this list).
  • Administrative access should only be authorized for appropriate, trained personnel who require this level of access on a regular basis to perform their job. Any temporary access granted for emergencies or special projects should be placed in a different user group with automated use monitoring that reminds system administrators if they forget to remove the access. Reviews of access appropriateness should be performed regularly and completed independently of the team responsible for the application, or for granting access to it, to avoid conflicts of interest. If a privileged user does not access an account for an extended period, whether they truly require the access should be questioned.
  • Proper accounting of all privileged user access to the applications should be logged. This includes any changes made by the user account.   
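
The impossible-travel check referenced above can be expressed compactly. The sketch below is illustrative; the speed threshold and the login records are assumptions that a real access control or SIEM platform would supply.

```python
# Minimal sketch of an "impossible travel" lockout check between two logins.
from dataclasses import dataclass
from datetime import datetime
from math import radians, sin, cos, asin, sqrt

@dataclass
class LoginEvent:
    user: str
    timestamp: datetime
    latitude: float
    longitude: float

MAX_PLAUSIBLE_SPEED_KMH = 900  # roughly the speed of a commercial flight

def distance_km(a: LoginEvent, b: LoginEvent) -> float:
    """Great-circle (haversine) distance between two login locations."""
    lat1, lon1, lat2, lon2 = map(radians, (a.latitude, a.longitude, b.latitude, b.longitude))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def should_lock_account(previous: LoginEvent, current: LoginEvent) -> bool:
    """Lock if the user would have had to travel impossibly fast between logins."""
    hours = (current.timestamp - previous.timestamp).total_seconds() / 3600
    if hours <= 0:
        return True
    return distance_km(previous, current) / hours > MAX_PLAUSIBLE_SPEED_KMH

prior = LoginEvent("admin1", datetime(2021, 1, 11, 9, 0), 38.9, -77.0)      # Washington, D.C.
attempt = LoginEvent("admin1", datetime(2021, 1, 11, 11, 0), 50.45, 30.52)  # eastern Europe, two hours later
print("lock account:", should_lock_account(prior, attempt))  # True
```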

Getting this level of visibility and automation around access control in the cloud can be challenging and costly, as these features are generally not available natively. It is, however, possible with third-party licensing, and given the importance, it's an investment well worth making.

Continuously train employees and relevant stakeholders

With the steady growth in applications, and the abundance of data available in the media that attackers use to figure out what applications to target and who has access to them, security awareness training has never been more important.

As spear phishing is the modus operandi of adversaries, phishing training should be a major focus. The 2018 F5 Labs Phishing and Fraud Report found that training employees more than 10 times can reduce phishing success from 33% down to 13%. Yet security awareness training is rarely conducted often enough, or with the right material. Canned awareness training services designed to check compliance boxes run the risk of employees not understanding their role in information security and not feeling a personal sense of duty to it. If the goal is to reduce the risk of a breach, frequent training, personalized to your organization, is the way to go.

There is no downtime for attackers, so employees must always remain vigilant. A continuous culture of curiosity should be the norm for all organizations, especially in the federal space or for any business supplying the federal government with products and services. Employees should be aware that they are targets because of their access to applications and data. They should also be aware of how that access or data is used by adversarial nation-states or sold by for-profit cybercriminals (and then purchased by adversaries).

Conclusion

To ensure success in their digital transformations, all organizations should adopt an enterprise application strategy and corresponding policy, and train employees on them. Within the federal space, with its large mixture of legacy, hybrid, and modern applications, this is especially critical. Your success in delivering a reliable, secure, and authorized digital service requires it.

Applications are the heart of any organization’s digital transformation and, with the rapid change in the way software is developed and deployed, they are both an organization’s greatest source of value and the greatest source of vulnerability. The application strategy and policy components outlined here provide the essential foundations to secure any organization’s digital aspirations. With the risk profile of their application portfolio only increasing with each day, organizations must move quickly to formalize their strategy and policy.


Published January 11, 2021