Second-generation FPGA-powered Amazon EC2 instances (F2) are now available.
The new F2 instances, available in two sizes, are designed to accelerate workloads in genomics, multimedia processing, big data, satellite communication, networking, silicon simulation, and live video. They feature AMD EPYC (Milan) processors with up to 192 cores, up to eight AMD Field-Programmable Gate Arrays (FPGAs) with High Bandwidth Memory (HBM), and up to eight TiB of SSD-based instance storage.
A Brief Overview of FPGAs
Before digging into the details, here is a quick refresher on the FPGA programming model, dating back to the first generation of FPGA-powered Amazon Elastic Compute Cloud (Amazon EC2) instances.
An FPGA, or Field-Programmable Gate Array, is one of the more intriguing paths to a custom hardware solution. Unlike a purpose-built chip, which is designed with a specific function in mind and then hardwired to deliver it, an FPGA offers far greater flexibility: it can be programmed in the field, after it has been plugged into a socket on a circuit board. Every FPGA contains a fixed, finite number of basic logic gates.
Programming an FPGA is "simply" a matter of connecting those gates to form the desired logical operations (AND, OR, XOR, and so on) and storage elements (flip-flops and shift registers). Unlike a CPU, which is essentially serial (with a few parallel elements) and has fixed-size instructions and data paths (typically 32 or 64 bits), an FPGA can be configured to perform many operations in parallel, and the operations themselves can be of almost any width, large or small.
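As a loose software analogy (real FPGA designs are written in a hardware description language such as Verilog or VHDL, not Python), the model above can be sketched as a set of independent "lanes", each with its own operation and its own arbitrary bit width, all of which would fire in the same clock cycle in hardware:

```python
# Toy model of FPGA-style parallelism: many independent, arbitrary-width
# operations evaluated together. Illustrative only -- not how FPGAs are
# actually programmed.

def make_lane(op, width):
    """Return a function that applies `op` and truncates to `width` bits."""
    mask = (1 << width) - 1
    return lambda a, b: op(a, b) & mask

# Three lanes of different widths; in hardware they all run in parallel.
lanes = [
    make_lane(lambda a, b: a & b, 1),    # 1-bit AND gate
    make_lane(lambda a, b: a ^ b, 13),   # 13-bit XOR -- widths need not be 32 or 64
    make_lane(lambda a, b: a + b, 48),   # 48-bit adder (wraps on overflow)
]

inputs = [(1, 1), (0b1010101010101, 0b0101010101010), (2**47, 2**47)]
outputs = [lane(a, b) for lane, (a, b) in zip(lanes, inputs)]
print(outputs)  # [1, 8191, 0] -- note the 48-bit adder wrapping to 0
```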
Since then, AWS customers have used F1 instances to host a wide range of applications and services. For highly parallelizable, compute-intensive workloads, the new F2 instances offer an even better home thanks to their upgraded FPGAs, increased processor power, and higher memory bandwidth.
Each AMD Virtex UltraScale+ HBM VU47P FPGA features 2.85 million system logic cells and 9,024 DSP slices (delivering up to 28 TOPS of DSP compute performance when processing INT8 values). The FPGA Accelerator Card attached to each F2 instance provides 16 GiB of High Bandwidth Memory and 64 GiB of DDR4 memory per FPGA.
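As a rough back-of-envelope check on that 28 TOPS figure (the per-slice throughput and clock frequency below are assumptions for illustration, not published specifications): if each DSP slice performs two INT8 multiply-accumulates, i.e. four operations, per cycle, a clock around 775 MHz gets you to roughly 28 TOPS:

```python
# Back-of-envelope INT8 throughput estimate (assumed values, not specs).
dsp_slices = 9_024
ops_per_slice_per_cycle = 4   # assumption: 2 INT8 MACs = 4 ops per cycle
clock_ghz = 0.775             # assumed DSP clock; actual Fmax varies by design

tops = dsp_slices * ops_per_slice_per_cycle * clock_ghz / 1_000
print(f"{tops:.1f} TOPS")  # ~28.0 TOPS
```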
Inside the F2
Third-generation AMD EPYC (Milan) processors power F2 instances. Compared to F1 instances, they provide up to three times as many processor cores, twice as much system memory and NVMe storage, and up to four times the network bandwidth. Each FPGA has 16 GiB of High Bandwidth Memory (HBM) with up to 460 GiB/s of bandwidth. Here are the instance sizes and specs:
| Instance Name | vCPUs | FPGAs | FPGA Memory (HBM / DDR4) | Instance Memory | NVMe Storage | EBS Bandwidth | Network Bandwidth |
|---|---|---|---|---|---|---|---|
| f2.12xlarge | 48 | 2 | 32 GiB / 128 GiB | 512 GiB | 1,900 GiB (2 x 950 GiB) | 15 Gbps | 25 Gbps |
| f2.48xlarge | 192 | 8 | 128 GiB / 512 GiB | 2,048 GiB | 7,600 GiB (8 x 950 GiB) | 60 Gbps | 100 Gbps |
The high-end f2.48xlarge instance supports the AWS Cloud Digital Interface (CDI) for reliably streaming uncompressed live video between applications, with instance-to-instance latency as low as 8 milliseconds.
Developing Applications for FPGA
The AWS EC2 FPGA Development Kit includes the tools you need to develop, simulate, debug, compile, and run your hardware-accelerated FPGA applications. You can launch the FPGA Developer AMI on a memory-optimized or compute-optimized instance for development and simulation, then use an F2 instance for final debugging and testing.
The kit supports a range of accelerator languages, programming paradigms, and debugging options. The end result is an Amazon FPGA Image (AFI) that packages your custom acceleration logic together with the AWS Shell, which implements access to the PCIe bus, FPGA memory, interrupts, and external peripherals. You can deploy AFIs to as many F2 instances as you like, share them with other AWS accounts, or publish them on AWS Marketplace.
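For example, once a synthesized design checkpoint has been uploaded to Amazon S3, an AFI can be registered through the EC2 `CreateFpgaImage` API. This minimal boto3 sketch builds the request parameters; the bucket, key, and image name below are placeholders, not real resources:

```python
def afi_request(bucket, dcp_key, name):
    """Build the parameters for ec2.create_fpga_image().

    `bucket` and `dcp_key` point at the tarball containing your design
    checkpoint (DCP) in S3; creation logs are written back to the bucket.
    """
    return {
        "Name": name,
        "Description": f"AFI built from s3://{bucket}/{dcp_key}",
        "InputStorageLocation": {"Bucket": bucket, "Key": dcp_key},
        "LogsStorageLocation": {"Bucket": bucket, "Key": "afi-logs/"},
    }

# Placeholder values -- substitute your own bucket, key, and name.
params = afi_request("my-dcp-bucket", "dcp/my_design.tar", "my-accelerator")

# In a credentialed environment you would then run:
#   import boto3
#   resp = boto3.client("ec2").create_fpga_image(**params)
#   afi_id = resp["FpgaImageId"]
```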
Before moving to F2 instances, you must upgrade your development environment to use the latest AMD tools, then rebuild and validate any existing applications that run on F1 instances.
Examples of FPGAs in Operation
Here are a few examples of how F1 and F2 instances support specialized, highly demanding workloads:
AstraZeneca — A multinational pharmaceutical and biotechnology corporation, AstraZeneca built the world's fastest genomics pipeline, processing over 400K whole-genome samples in under two months using thousands of F1 instances. They plan to use Illumina DRAGEN on F2 to improve performance at lower cost and accelerate the diagnosis, treatment, and discovery of diseases.
Satellite Communication — Satellite operators are moving from rigid, costly physical infrastructure (modulators, demodulators, combiners, splitters, and so on) to software-defined, FPGA-powered solutions. These systems can be reconfigured in the field, using the digital signal processing (DSP) elements on the FPGA to support new waveforms and adapt to evolving needs. Key F2 features that enable processing multiple complex waveforms in parallel include support for up to 8 FPGAs per instance, ample network bandwidth, and support for the Data Plane Development Kit (DPDK) via Virtual Ethernet.
Analytics — NeuroBlade's SQL Processing Unit (SPU) delivers market-leading query-throughput efficiency and faster query processing on F2 instances, and integrates with Apache Spark, Presto, and other open-source query engines.
Things to Keep in Mind
Finally, here are a few things to know about F2 instances:
AWS Regions: F2 instances are available today in the US East (N. Virginia) and Europe (London) AWS Regions, with availability in additional regions planned.
Operating Systems: F2 instances are Linux-only.
Options for Purchase: F2 instances are available in the following configurations: On-Demand, Spot, Savings Plan, Dedicated Instance, and Dedicated Host.