BLOG

Why Public Cloud is Embracing FPGAs and You Should Too

Lori MacVittie
Published March 06, 2017

In case you didn’t see the announcement amidst the rush of consumer-facing ads and releases during the 2016 holiday season, Amazon made quite a stir when it announced it was embracing hardware in the form of FPGA-equipped EC2 instances. As Deepak Singh, General Manager for the Container and HPC division within AWS, noted: “There is a certain scale where specialized hardware and infrastructure make a lot of sense and for those who need special infrastructure, we think FPGAs are one clear way to go.”

Singh lays out a number of use cases where such “special infrastructure” is particularly useful, including security and machine learning, while nodding to the most widespread use of specialized hardware today: graphics acceleration.

The use of FPGAs and special hardware – often called purpose-built or custom-built – is not new, even in the data center. The advantages of hard-wiring certain functions are well understood. A network switch is, at its core, a purpose-built piece of hardware: it does one set of things, and it does them at high speed and at scale. FPGAs, too, are not unusual in the data center. Many security devices – particularly those dedicated to DDoS protection – employ FPGAs configured to inspect inbound traffic quickly enough, and at sufficient scale, to detect and reject incoming DDoS attacks.
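
As a toy illustration of the kind of per-packet logic such devices hard-wire, here is a minimal sketch in Python (the window and budget below are invented for illustration; a real appliance implements logic like this in gates, at line rate):

```python
# Toy sketch of the per-source rate check a DDoS appliance might hard-wire:
# count packets per source in a rolling window and reject sources that
# exceed a budget. The window and budget below are invented examples.
import time
from collections import defaultdict

WINDOW_SECONDS = 1.0
MAX_PACKETS_PER_WINDOW = 1_000   # hypothetical per-source packet budget

counts = defaultdict(int)
window_start = time.monotonic()

def allow(source_ip):
    """Return False once a source exceeds its per-window packet budget."""
    global window_start
    if time.monotonic() - window_start >= WINDOW_SECONDS:
        counts.clear()               # roll over to a fresh counting window
        window_start = time.monotonic()
    counts[source_ip] += 1
    return counts[source_ip] <= MAX_PACKETS_PER_WINDOW
```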

Entire markets exist to provide chips and boards designed to handle the complexity of the cryptographic processing required for SSL and TLS, both of which are used to secure web apps and APIs, enable remote access, and secure connections with the cloud. As threats have evolved and security solutions have adjusted to meet them, targeted cryptographic processing in hardware has become invaluable to achieving the speed and scale needed to remain competitive while protecting sensitive consumer and corporate data.
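
To make that cost concrete, here is a minimal sketch in Python (using the third-party cryptography package; the key size and iteration count are arbitrary assumptions) that times the RSA private-key operation at the heart of a classic TLS key exchange. It is exactly this per-handshake work that crypto offload hardware absorbs:

```python
# Time software-only RSA private-key decryption, the per-handshake cost of
# classic RSA key exchange in TLS. Requires "pip install cryptography";
# the printed numbers are illustrative, not benchmarks.
import time
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
ciphertext = key.public_key().encrypt(b"pre-master secret", oaep)

N = 200
start = time.perf_counter()
for _ in range(N):
    key.decrypt(ciphertext, oaep)    # the expensive private-key operation
per_op = (time.perf_counter() - start) / N

print(f"~{per_op * 1000:.2f} ms per private-key op, "
      f"~{1 / per_op:.0f} handshakes/sec on one core")
```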

The common thread between the traditional use of hardware and FPGAs and their use in the public cloud is scale, i.e. capacity. But there are also performance and cost consequences (of the good kind) that make hardware appealing to public cloud providers. The use of FPGAs (particularly those that are reconfigurable by their users) and purpose-built hardware offers three distinct (but related) benefits that make them a good choice for public cloud, and reason to consider the same for your private cloud (or traditional data center) initiatives.

1. Speed. The ability of hardware to perform a function faster and with fewer resources is inarguable. A hardwired function executes without the internal latency of loading and running the code that replicates that function in software. Purpose-built hardware can execute the highly complex mathematical functions required for encryption and decryption faster than that delivery guy from Jimmy John’s. Not kidding.

For enterprises and private cloud, this means faster apps for customers, which improves overall engagement (and, one hopes, conversion) rates, leading to higher satisfaction externally and greater productivity internally. Speed helps address one of the three prime components of operational risk: performance.

2. Scale. Scale, as noted by Singh, is one of the primary drivers of FPGA and purpose-built hardware research and adoption. It is, in part, enabled by speed. Consider a server akin to a table with a limited number of chairs (capacity). The faster someone can sit down and eat, the more people can be fed. The same relationship exists between connection capacity (which determines how many users can be served by a single resource) and the speed with which transactions execute. Aiding in scale is what the industry calls “offload”. Offload is a simple way to describe the shift of processing burden from general-purpose CPUs to purpose-built hardware, which leaves general compute resources available to process other functions, thereby increasing overall speed and thus capacity. (A back-of-the-envelope sketch of this relationship follows below.)

For enterprises and private cloud, this means doing more with less, which enables IT to grow less disruptively along with the business and reduces the complexity of the networks required to support that growth. Scale helps address a second prime component of operational risk: availability.
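
Here is that table-and-chairs relationship as a back-of-the-envelope sketch (every figure below is a hypothetical assumption, not a measurement): when offload shortens the time each transaction occupies a connection, the same concurrency limit serves proportionally more users.

```python
# Little's law: concurrency = arrival_rate x time_in_system, so throughput
# at a fixed connection limit scales inversely with per-transaction time.
# Every figure here is a hypothetical assumption for illustration.
MAX_CONCURRENT = 10_000                 # the "chairs" (connection capacity)

def throughput(service_time_s):
    """Transactions/sec sustainable within the concurrency limit."""
    return MAX_CONCURRENT / service_time_s

software_only = throughput(0.050)       # 50 ms per transaction in software
with_offload = throughput(0.005)        # 5 ms with crypto work offloaded

print(f"software only: {software_only:,.0f} transactions/sec")
print(f"with offload:  {with_offload:,.0f} transactions/sec "
      f"({with_offload / software_only:.0f}x)")
```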

3. Cost. By improving speed and scale, the cost per transaction (and thus per user) is lowered. Lowering costs means a faster return on investment, but more importantly it lowers the cost per customer (and conversely improves the revenue per user). Service providers know this impacts key performance indicators like ARPU (Average Revenue Per User). A cloud provider, which also relies on volume (scale) to produce profit, knows that increasing ARPU is an important part of the business.

For enterprises and private cloud, this means better margins on customer-facing digital sales and a strong cost-benefit case when weighed against the productivity gains for internal apps.
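
To see how speed and scale roll up into cost per transaction, consider a hypothetical worked example (every number here is invented for illustration): faster, higher-capacity hardware means fewer servers for the same peak load.

```python
# Hypothetical cost-per-transaction comparison. Fewer servers are needed
# for the same peak load when each one handles more. All numbers invented.
import math

PEAK_TPS = 50_000                 # peak transactions/sec to be supported
MONTHLY_TXNS = 2_000_000_000      # transactions handled per month

def cost_per_million_txns(tps_per_server, server_cost_per_month):
    servers = math.ceil(PEAK_TPS / tps_per_server)
    return servers * server_cost_per_month / MONTHLY_TXNS * 1_000_000

general_purpose = cost_per_million_txns(tps_per_server=2_000,
                                        server_cost_per_month=900)
with_offload = cost_per_million_txns(tps_per_server=10_000,
                                     server_cost_per_month=1_400)

print(f"general purpose: ${general_purpose:.2f} per million transactions")
print(f"with offload:    ${with_offload:.2f} per million transactions")
```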

It goes without saying that employing FPGAs and purpose-built solutions to address the need for security also makes apps safer, which addresses the third component of operational risk: security. The advantage of hardware is that it does so without sacrificing speed (it can actually make apps faster) or scale. As you’re building out your own private cloud infrastructure, consider purpose-built hardware and FPGA-enabled platforms as a path to a more robust environment.