Intel Data Center Chips are enhanced with new ultrafast memory.
DDR5 MRDIMMs introduce a new module design that improves system performance and data transfer speeds. To let applications surpass DDR5 RDIMM data rates, multiplexing combines several data signals for transmission over a single channel, expanding bandwidth without requiring more physical connections.
How Intel and industry partners cleverly doubled the memory bandwidth of conventional DRAM modules, letting top-end Xeon CPUs ship with a plug-and-play solution.
System memory, or DRAM, is essential to performance even though Intel’s main product emphasis is on the processors, or brains, that power computers. In servers, this is particularly true, since the number of processor cores has increased faster than the memory bandwidth (i.e., the memory bandwidth available per core has decreased).
This mismatch has the potential to be a bottleneck in heavy-duty computing tasks like weather modeling, computational fluid dynamics, and certain forms of artificial intelligence.
What Is an MRDIMM
After years of work with industry partners, Intel engineers have found a way to break through that constraint. They have developed a technique that has produced the fastest system memory ever and is expected to become a new open industry standard. The newly released Intel Xeon 6 data center CPUs are the first to take advantage of this new memory, known as MRDIMM, for improved performance in the most plug-and-play way possible.
According to Intel’s Xeon product manager in the Data Center and AI (DCAI) division, “a significant percentage of high-performance computing workloads are memory-bandwidth bound,” and those are exactly the workloads MRDIMMs stand to help.
This is the story of the DDR5 Multiplexed Rank Dual Inline Memory Module (MRDIMM for short). It sounds almost too good to be true.
Bringing Parallelism to System Memory, with Friends
What Are RDIMMs
As it happens, the memory modules most widely used for data center workloads, called RDIMMs, already contain parallel resources, much like modern multi-core processors. They just aren’t used in parallel.
“Most DIMMs have two ranks, for performance and capacity,” says Vergis, a senior principal engineer in memory pathfinding in DCAI. “It’s the sweet spot.”
One way to conceptualize ranks is as follows:
Banks: one set of memory chips on a module belongs to one rank, and the rest belong to the other. With RDIMMs, data can be stored and retrieved across the ranks independently, but not simultaneously.
On an MRDIMM, a small multiplexer (mux) buffer consolidates the electrical load of the two ranks, allowing the interface to run faster than an RDIMM’s. And because both ranks can now be read in parallel, memory bandwidth increases as well.
This jump, which would normally take several generations of memory technology to achieve, results in the fastest system memory ever built: peak bandwidth climbs by roughly 40%, from 6,400 megatransfers per second (MT/s) to 8,800 MT/s.
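The idea behind the mux buffer can be sketched in a few lines of Python. This is a conceptual illustration only: real multiplexing happens in hardware at the signal level, and the rank and word names here are made up for the example.

```python
# Conceptual sketch of rank multiplexing (not real DDR5 signaling).
# "r0w0" means "rank 0, word 0"; the names are illustrative only.

def rdimm_read(rank0, rank1):
    """RDIMM-style access: the ranks are read one after the other,
    so the channel carries one rank's data at a time."""
    return rank0 + rank1

def mrdimm_read(rank0, rank1):
    """MRDIMM-style access: the mux buffer interleaves both ranks,
    so each channel cycle alternates between them. Both ranks are
    effectively read in parallel from the host's point of view."""
    interleaved = []
    for a, b in zip(rank0, rank1):
        interleaved.extend([a, b])
    return interleaved

rank0 = ["r0w0", "r0w1", "r0w2"]
rank1 = ["r1w0", "r1w1", "r1w2"]

print(rdimm_read(rank0, rank1))   # sequential: all of rank 0, then rank 1
print(mrdimm_read(rank0, rank1))  # multiplexed: words alternate per cycle
```

The same data is delivered either way; what changes is that the multiplexed stream keeps the channel busy with both ranks at once. As for the headline number, 8,800 MT/s versus 6,400 MT/s works out to a 37.5% gain in peak transfer rate, consistent with the “roughly 40%” figure above.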
Same Standard Memory Module, Just Faster
At this point you may be wondering: wait, is Intel getting back into the memory business? No. Although Intel began as a memory company and invented technologies like EPROM and DRAM, it has over the years exited its various memory product businesses, some of them quite well-known.
What makes the MRDIMM unique is its ease of use. It requires no motherboard changes, since it uses the same connector and form factor as a standard RDIMM (even the tiny mux chips fit into previously unoccupied spaces on the module).
MRDIMMs also retain the reliability, availability, and serviceability (RAS) and error-correcting capabilities of RDIMMs. Data integrity is preserved no matter how the distinct requests are multiplexed across the data buffer, Vergis notes.
All of this means that data center customers can choose MRDIMMs when ordering a new server, or later pull a server from the rack and swap its RDIMMs for MRDIMMs, and enjoy the improved performance without changing a single line of code.
Xeon 6 + MRDIMM
MRDIMMs do require a compatible CPU, and the Intel Xeon 6 processor with Performance-cores (P-cores), code-named Granite Rapids and released this year, is the first on the market.
In recent independent testing, two otherwise identical Xeon 6 systems (one with MRDIMMs, the other with RDIMMs) were compared. The MRDIMM-equipped machine completed up to 33% more work.
AI workloads that run readily on Xeon, including small language models (SLMs) and conventional deep learning and recommendation systems, benefit from the bandwidth improvement MRDIMMs provide.
Leading memory suppliers have released MRDIMMs, and more memory manufacturers are expected to follow. With support from OEMs like NEC, high-performance computing facilities such as the National Institute for Fusion Science and the National Institute for Quantum Science and Technology are adopting Xeon 6 with P-cores because of MRDIMMs.