Let’s begin slightly off-topic by discussing the world’s favorite video on-demand service, Netflix. Between 2009 and 2016, Netflix saw a staggering increase in subscribers, from 12 million to just over 48 million, adding 1.43 million new customers in the last three months of 2016 alone. If you were lucky enough to be a chief stakeholder in Netflix during that period, you were probably laughing all the way to the bank. But if you were part of the IT team tasked with providing the infrastructure to support this explosive growth, you may well have been scratching your head.
Scaling their own data centers to meet the rise in demand simply wasn’t feasible given the capital and operational expenditure involved, not to mention the management headaches that would have arisen. So, around that same time, they took a step into the unknown and embarked on a seven-year project to transition all operations to the AWS cloud, enabling them to leverage the seemingly unlimited scalability the cloud vendor could offer. And as you may know, they recently completed this feat.
‘Job done,’ some might think… but they’d be wrong. You see, the ability to scale up to match increased demand over the long term is one thing, but what happens when that demand fluctuates greatly in the short term? A recent survey showed that during the internet rush hours (7–11 p.m.), Netflix was responsible for 35.2% of all downstream traffic in the U.S. Unsurprisingly, outside of these hours it accounted for a much smaller share, causing the cyclic oscillations in demand shown below in Figure 1, borrowed from this post.
To combat this huge variation in demand, Netflix implemented its own auto scaling engine, Scryer, which provisioned enough AWS EC2 instances to handle rush-hour demand while revoking any instances deemed surplus to requirements. This ensured a premium quality of service for users during peak hours while minimizing computational overhead during off-peak periods.
While this is just an example and bears no specific connection to F5’s solutions, it gives an appreciation of the need for auto scaling in this day and age, and leads us to the heart of this post: F5’s new auto scaling Web Application Firewall (WAF) solution.
On a related note, did you know that roughly 40% of data breaches in 2016 were the result of application layer attacks? Although demand for an application may vary day to day and hour by hour, one thing stays constant: the need to ensure its protection. What good is a solution that scales to support varying throughputs if the security of the application and its data is jeopardized in the process? F5’s new auto scaling WAF solution for AWS provides the enterprise-grade security that applications require regardless of traffic levels, while ensuring you only provision, and pay for, the public cloud resources you need.
At its core, the WAF solution is powered by the industry-proven and ICSA-certified BIG-IP ASM and BIG-IP LTM, which, when combined, provide comprehensive protection against sophisticated attack vectors including L7 DDoS attacks, OWASP Top 10 threats, and malicious bot attacks. Using automated learning capabilities, dynamic profiling, and risk-based policies, BIG-IP ASM is also able to impose additional security precautions to prevent even the most complex attacks from reaching application servers.
But what about the auto scaling component? Everything required to run the solution has been bundled into an AWS CloudFormation template, from BIG-IP Virtual Editions (VEs) and S3 buckets to Auto Scaling groups and CloudWatch alarms, so that these services interact with one another automatically to deliver a completely autonomous solution. Before launch, the template requires a few parameter inputs from the user, such as the minimum number of VE instances that should be operational at any time, or the throughput thresholds that, when crossed, tell the Auto Scaling group to launch or revoke instances (typically 80% and 20% of the maximum instance throughput). VE instances that are spun up are spread across AWS Availability Zones, as shown in Figure 2, to further increase availability.
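The threshold behavior described above is essentially a hysteresis rule: scale out when per-instance throughput climbs above the high-water mark, scale in when it falls below the low-water mark, and otherwise hold steady. A minimal Python sketch of that logic follows; the function and parameter names are purely illustrative and not part of the F5 template, which implements these thresholds via CloudWatch alarms rather than application code:

```python
def scaling_decision(throughput_mbps, instance_count,
                     max_instance_mbps=200,
                     high_pct=0.80, low_pct=0.20,
                     min_instances=2):
    """Decide whether to add or remove a VE instance.

    Mirrors the 80%/20% thresholds described above: scale out when
    per-instance throughput exceeds 80% of instance capacity, scale in
    when it falls below 20% (but never below the minimum instance count).
    """
    per_instance = throughput_mbps / instance_count
    if per_instance > high_pct * max_instance_mbps:
        return "scale_out"   # launch an additional VE
    if per_instance < low_pct * max_instance_mbps and instance_count > min_instances:
        return "scale_in"    # revoke a surplus VE
    return "hold"            # demand within the band; do nothing

# Example: 350 Mbps aggregate demand across two 200 Mbps instances
# gives 175 Mbps per instance, above the 160 Mbps (80%) threshold.
print(scaling_decision(350, 2))  # scale_out
```

In the real deployment the two comparisons map to two CloudWatch alarms attached to the Auto Scaling group; keeping the scale-in threshold well below the scale-out threshold prevents the group from oscillating as traffic hovers near a single boundary.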
The solution has been designed and fully tested by F5 experts following BIG-IP and AWS best practices to simplify the deployment experience and enable users to deploy their F5 platforms confidently. BIG-IP ASM instances within the solution are deployed either with pre-configured security policies that have been crafted by F5, or with customized policies that are specific to individual applications.
And that’s all there really is to it: a complete WAF solution that scales to match an application’s demand, ensuring its continued protection against the most complex of layer 7 attacks. Deployable directly from the AWS Marketplace in a matter of clicks, the whole setup can be implemented and operational within an AWS VPC in less than an hour.
At this point I’ll add that although we’re primarily focused on the auto scaling WAF solution here, standalone WAF images can also be deployed through the same AWS Marketplace offering to protect applications with more predictable and consistent traffic flows. Both the auto scaling and standalone solutions leverage pay-as-you-go (PAYG) VE instances with flexible throughput options of 25 Mbps, 200 Mbps, or 1 Gbps per instance.
For more information, check out the autoscale WAF repository on GitHub, which provides greater technical insight into how the CloudFormation template operates, or visit the Marketplace page here.