Cloud-native workloads benefit from Micron 96GB DDR5 and 4th Gen AMD EPYC CPUs
Micron recently announced that its high-performance DDR5 RDIMM solutions are now available to support memory-hungry artificial intelligence (AI), data analytics, and other computationally demanding workloads.
Working together, Micron and AMD set out to improve high performance computing (HPC) workloads by pairing Micron DDR5 with the cutting-edge capabilities of 4th Gen AMD EPYC processors. Since then, both companies have made great strides.
In January 2023, additional capacities were validated, including 24GB, 48GB, and 96GB DDR5 DIMMs. This blog post demonstrates the impressive performance of the new 96GB DDR5 memory paired with 4th Gen AMD EPYC CPUs.
Building on each company's strengths
By working with AMD, Micron was able to take advantage of the greater power efficiency and cloud-native computing capabilities of the latest AMD EPYC processors. These enhancements support sustainability objectives and align with the key metrics used across the data centre sector, chief among them performance per watt.
The following are this combination’s main benefits:
Advanced performance: AMD EPYC 9754 processors are designed to handle the demands of cloud-native applications. With up to 128 physical cores per processor and large L3 caches (up to 384MB per CPU on top 4th Gen EPYC parts), they provide a high degree of parallel computing capacity.
This capability offers the scalability that cloud-native apps require and allows many tasks to execute efficiently at once.
High memory bandwidth: Micron DDR5 memory modules are rated for impressive speeds of up to 51.2 GB/s, allowing quick data access and transfer within the system. This high bandwidth makes large datasets easy to handle and enables the rapid data processing that cloud-native applications require.
Modern process technology: Micron's cutting-edge 1-beta (1β) process node offers a number of advantages. It delivers a 15% improvement in power efficiency, allowing more computing power while reducing energy use.
It also delivers a 35% increase in bit density over the previous generation (1-alpha), with a 16Gb-per-die capacity that enables larger memory capacities and better system performance.
Improved data integrity and dependability: the error correction code (ECC) parity embedded in Micron DDR5 memory maintains data integrity by detecting and fixing memory errors.
This capability is essential for cloud-native applications that manage significant volumes of sensitive data, adding an extra layer of defence against potential data corruption. Including ECC parity makes the system more reliable and stable overall.
Performance and energy efficiency: AMD's newest 128-core processor puts a strong emphasis on energy efficiency, providing remarkable power savings while supporting cloud-native workloads.
The CPU offers broad x86 hardware and software compatibility as well as proven RAS (Reliability, Availability, and Serviceability) characteristics. In our tests, it delivered a 2.68x increase in performance per watt over the previous generation.
Combining the strength of AMD EPYC 9754 processors, fast and efficient Micron DDR5 memory, and the reliable ECC parity feature yields an ideal solution for cloud-native applications.
This setup enables high performance computing, efficient data processing, large memory capacities, and dependable operation, all of which are crucial for cloud-native applications in contemporary data centre deployments.
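As a sanity check on the bandwidth figure above: a standard DDR5 channel is 64 bits (8 bytes) wide, so peak theoretical bandwidth is simply the transfer rate times 8 bytes. The 51.2 GB/s "up to" figure corresponds to DDR5-6400 (an assumption about the rated speed, since the test system described below runs DDR5-4800):

```python
# Peak theoretical bandwidth of one 64-bit (8-byte) DDR5 channel:
# transfer rate in MT/s times 8 bytes per transfer.
def ddr5_peak_bandwidth_gbs(transfer_rate_mts: int) -> float:
    """Return peak channel bandwidth in GB/s for a given DDR5 transfer rate."""
    return transfer_rate_mts * 8 / 1000

print(ddr5_peak_bandwidth_gbs(6400))  # 51.2 GB/s, the "up to" figure quoted above
print(ddr5_peak_bandwidth_gbs(4800))  # 38.4 GB/s, for the DDR5-4800 test configuration
```

This also shows why the DDR5-4800 DIMMs used in the benchmark below deliver 38.4 GB/s per channel rather than the headline 51.2 GB/s.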
Setting up and evaluating cloud based, in memory data storage
To mimic a workload that closely reflects Micron's real cloud-native IT environment, we chose the Redis YCSB Proofpoint Workload D. With 250 million rows of 2KB records each, the database as a whole measures 925 GB.
With an emphasis on speed and scale, the testing strategy involved running 64 instances, each with one Redis server and four clients. Performance was measured in operations per second (ops/s), and we scaled up the workloads while ensuring that latency stayed the same as or lower than the previous generation's.
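YCSB Workload D is a "read latest" mix: roughly 95% reads and 5% inserts, with reads skewed toward recently inserted keys. A minimal sketch of that access pattern, using a plain Python dict as a stand-in for Redis so it runs without a server (the operation mix, skew, and key scheme here are illustrative assumptions, not the exact YCSB implementation):

```python
import random

RECORD_SIZE = 2048  # 2KB records, as in the benchmark described above

def run_workload_d(store: dict, num_ops: int, preload: int = 1000) -> dict:
    """Mimic YCSB Workload D: ~95% reads / 5% inserts, reads favour recent keys."""
    for i in range(preload):
        store[f"user{i}"] = b"x" * RECORD_SIZE
    next_id = preload
    stats = {"reads": 0, "inserts": 0}
    for _ in range(num_ops):
        if random.random() < 0.95:
            # "Read latest": sample an exponentially skewed offset back
            # from the most recently inserted key.
            offset = min(int(random.expovariate(1 / 50)), next_id - 1)
            _ = store[f"user{next_id - 1 - offset}"]
            stats["reads"] += 1
        else:
            store[f"user{next_id}"] = b"x" * RECORD_SIZE
            next_id += 1
            stats["inserts"] += 1
    return stats

print(run_workload_d({}, num_ops=10_000))
```

In the actual benchmark, the YCSB clients drive this pattern against real Redis servers and report the aggregate ops/s and latency shown in the table below.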
| | Testing with DDR4 | Testing with DDR5 |
| --- | --- | --- |
| Processor | Dual-socket 3rd Gen AMD EPYC 7763, 64 cores at 3.7 GHz | Single-socket 4th Gen AMD EPYC 9004, 128 cores at 3.7 GHz |
| Memory capacity | DDR4-3200, 1 DIMM per channel, 1 TB | DDR5-4800, 1 DIMM per channel, 1.15 TB |
| Memory DIMM | 64GB | 96GB |
| Software stack | Alma 9, Linux kernel 5.14 | Alma 9, Linux kernel 5.14 |
| Power consumption | 321 watts | 161 watts |
| Operations per second (ops/s) | 739,655 | 978,191 |
| Ops/s per watt | 2,262 | 6,064 |
| Latency | 0.19 ms | 0.14 ms |
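The headline improvements can be reproduced directly from the table rows (a quick sanity check on the quoted figures, using only values from the table):

```python
# Values taken from the benchmark table above.
ddr4_ops, ddr5_ops = 739_655, 978_191
ddr4_ops_per_watt, ddr5_ops_per_watt = 2262, 6064

# ~32% more operations per second on the DDR5 / 4th Gen EPYC system.
throughput_gain = ddr5_ops / ddr4_ops - 1
print(f"Throughput gain: {throughput_gain:.0%}")  # 32%

# ~2.68x better performance per watt, the figure quoted in the text.
perf_per_watt_ratio = ddr5_ops_per_watt / ddr4_ops_per_watt
print(f"Perf/watt ratio: {perf_per_watt_ratio:.2f}x")  # 2.68x
```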
Results
During the test, 64 Redis instances were running with 1 billion records loaded, delivering a throughput of 978,191 ops/s. With an average read latency of 0.14 ms, this result represents a remarkable 32% improvement over the previous generation. Notably, a single 4th Gen AMD EPYC powered system in our tests used 47% less power than a dual-socket DDR4 system powered by 3rd Gen AMD EPYC processors.
The newest high-core-count, energy-efficient AMD EPYC CPUs work in conjunction with Micron DDR5 memory and run at lower voltage levels, increasing performance per watt by 2.68x.
Conclusion
Although we examined an in-memory database, other cloud-native applications may see comparable performance. Cloud-native workloads are typically containerized, built on microservices, and use modern DevOps techniques for continuous integration and delivery.
Workloads built for the cloud are designed to make the most of cloud-native platforms for container orchestration, serverless computing, and managed databases, with high performance, availability, and resilience.
Businesses and end users consuming these workloads via public clouds could significantly reduce their total cost of ownership (TCO) compared with their current instances or infrastructure.