Amazon launched the compute-optimized Amazon EC2 C6a instances in February 2022. These instances were powered by third-generation AMD EPYC (Milan) processors and clocked at up to 3.6 GHz.
New Amazon EC2 C7a Instances
New compute-optimized Amazon EC2 C7a instances are now generally available. Powered by 4th Generation AMD EPYC (Genoa) processors with a maximum frequency of 3.7 GHz, they deliver up to 50% more performance than their predecessors, the C6a instances. This improvement in performance lets you lower your total cost of ownership, consolidate workloads, and speed up data processing.
These performance gains make C7a instances ideal for compute-intensive workloads such as high-performance web servers, batch processing, ad serving, machine learning, multiplayer gaming, video transcoding, and high-performance computing (HPC) applications such as scientific modeling.
C7a instances support the AVX-512, VNNI, and brain floating point (bfloat16) instruction sets. They use Double Data Rate 5 (DDR5) memory, which enables fast access to data held in memory and delivers 2.25 times more memory bandwidth than previous-generation instances, for lower overall latency.
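To confirm that these instruction sets are exposed inside a running instance, you can inspect the CPU flags reported by the Linux kernel. The snippet below is a minimal sketch, assuming a Linux guest where /proc/cpuinfo lists the AVX-512, VNNI, and bfloat16 extensions under their usual kernel flag names.

```python
# Minimal sketch: check /proc/cpuinfo on a running Linux instance for the
# instruction-set extensions mentioned above. Flag names follow the Linux
# kernel's conventions; run this on the instance itself.
WANTED = {"avx512f", "avx512_vnni", "avx512_bf16"}

def cpu_flags(path="/proc/cpuinfo"):
    """Return the set of CPU feature flags reported for the first core."""
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

flags = cpu_flags()
for feature in sorted(WANTED):
    print(f"{feature}: {'present' if feature in flags else 'missing'}")
```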
A newly introduced medium instance size with 1 vCPU and 2 GiB of memory lets you right-size your workloads more precisely. C7a instances scale up to a maximum of 192 vCPUs and 384 GiB of memory. The specifications are as follows; a sample launch command appears after the table.
| Name | vCPUs | Memory (GiB) | Network Bandwidth (Gbps) | EBS Bandwidth (Gbps) |
|------|-------|--------------|--------------------------|----------------------|
| c7a.medium | 1 | 2 | Up to 12.5 | Up to 10 |
| c7a.large | 2 | 4 | Up to 12.5 | Up to 10 |
| c7a.xlarge | 4 | 8 | Up to 12.5 | Up to 10 |
| c7a.2xlarge | 8 | 16 | Up to 12.5 | Up to 10 |
| c7a.4xlarge | 16 | 32 | Up to 12.5 | Up to 10 |
| c7a.8xlarge | 32 | 64 | 12.5 | 10 |
| c7a.12xlarge | 48 | 96 | 18.75 | 15 |
| c7a.16xlarge | 64 | 128 | 25 | 20 |
| c7a.24xlarge | 96 | 192 | 37.5 | 30 |
| c7a.32xlarge | 128 | 256 | 50 | 40 |
| c7a.48xlarge | 192 | 384 | 50 | 40 |
| c7a.metal-48xl | 192 | 384 | 50 | 40 |
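As a quick illustration of launching one of these sizes, here is a minimal sketch using the boto3 SDK; the AMI ID and region are placeholder assumptions you would replace with your own values.

```python
# Minimal sketch: launch a single c7a.medium with boto3 (assumed installed
# and configured with credentials). The AMI ID below is a placeholder.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="c7a.medium",        # the new 1 vCPU / 2 GiB size
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```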
Previous-generation instances could attach a maximum of 28 EBS volumes; C7a instances raise this limit to 128. In addition, C7a instances offer improved networking of up to 50 Gbps and increased EBS bandwidth of up to 40 Gbps.
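If you want to take advantage of the higher attachment limit programmatically, attaching a volume is a single API call. The sketch below uses boto3's attach_volume; the volume ID, instance ID, and device name are hypothetical.

```python
# Minimal sketch: attach an EBS volume to a running instance with boto3.
# The IDs and device name are placeholders for illustration only.
import boto3

ec2 = boto3.client("ec2")
ec2.attach_volume(
    Device="/dev/sdf",                      # placeholder device name
    InstanceId="i-0123456789abcdef0",       # placeholder instance ID
    VolumeId="vol-0123456789abcdef0",       # placeholder volume ID
)
```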
C7a instances offer AMD Secure Memory Encryption (SME) and new AVX-512 instructions, which accelerate workloads such as encryption and decryption algorithms, convolutional neural network (CNN) based algorithms, financial analytics, and video encoding. For a higher level of protection, C7a instances support AES-256, compared with AES-128 on C6a instances.
These instances are built on the AWS Nitro System and support Elastic Fabric Adapter (EFA) for applications such as high-performance computing and video processing, which benefit from lower network latency and highly scalable inter-node communication.
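To request an EFA interface at launch, you can specify it on the instance's network interface. The sketch below uses boto3; the AMI, subnet, and security group IDs are placeholders, and it assumes an EFA-capable size such as c7a.48xlarge.

```python
# Minimal sketch: launch a C7a instance with an EFA-enabled network
# interface via boto3. All resource IDs below are placeholders.
import boto3

ec2 = boto3.client("ec2")
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",               # placeholder AMI ID
    InstanceType="c7a.48xlarge",                   # assumed EFA-capable size
    MinCount=1,
    MaxCount=1,
    NetworkInterfaces=[{
        "DeviceIndex": 0,
        "SubnetId": "subnet-0123456789abcdef0",    # placeholder subnet
        "Groups": ["sg-0123456789abcdef0"],        # placeholder security group
        "InterfaceType": "efa",                    # request an EFA interface
    }],
)
print(response["Instances"][0]["InstanceId"])
```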