The Compute Express Link (CXL) has emerged as the dominant architecture for pooling and sharing connected memory devices. It was developed to support heterogeneous memory with different performance characteristics and to allow special-purpose processors to sit near memory.
During the Monday seminars at the 2022 FMS, the Open Memory Interface (OMI) group and the OpenCAPI Consortium that spawned OMI said that they were going to become part of the CXL Consortium. In particular, leaders from OMI and CXL met to say: “we announce that OCC and CXL are entering an agreement, which if approved and agreed upon by all parties, would transfer the OpenCAPI and OMI specifications and OpenCAPI Consortium assets to the CXL consortium.” The image below shows a beefy OMI differential DDR module that was being passed around at the FMS.
OMI offers a combination of faster, higher-capacity memory directly connected to a server CPU with low latency that could replace DDR or HBM memory. We can call this near memory. CXL is an interface running over the PCIe bus that provides arbitrated access to heterogeneous memory. CXL has somewhat higher latency and allows pooling of memory that can be shared between CPUs. We can call this far memory. CXL was developed to support volatile memory such as DRAM combined with persistent memory such as Intel’s Optane.
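To make the near/far distinction concrete, here is a minimal sketch (not a real CXL or OMI API; all names, capacities, and latency figures are hypothetical) of a two-tier placement policy: hot pages live in low-latency near memory, and overflow spills to higher-latency far memory behind a CXL-style pool.

```python
# Illustrative model of near vs. far memory tiering.
# NOTE: latencies and class names are hypothetical, for illustration only.

NEAR_LATENCY_NS = 100   # e.g., direct-attached DDR/OMI ("near" memory)
FAR_LATENCY_NS = 300    # e.g., CXL-attached pooled memory ("far" memory)

class TieredMemory:
    def __init__(self, near_capacity):
        self.near_capacity = near_capacity
        self.near = {}   # hot pages, low latency
        self.far = {}    # overflow pages, higher latency

    def write(self, page, data):
        # Place pages in near memory while capacity allows, else spill to far.
        if page in self.near or len(self.near) < self.near_capacity:
            self.near[page] = data
        else:
            self.far[page] = data

    def read(self, page):
        # Returns (data, modeled access latency in nanoseconds).
        if page in self.near:
            return self.near[page], NEAR_LATENCY_NS
        return self.far[page], FAR_LATENCY_NS

mem = TieredMemory(near_capacity=2)
for p in range(3):
    mem.write(p, f"page-{p}")
print(mem.read(0))   # served from near memory
print(mem.read(2))   # spilled to far memory, higher modeled latency
```

Real tiering policies (in an OS or hypervisor) migrate pages between tiers based on access frequency; this sketch only shows the placement and latency asymmetry.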
The joining of OMI with CXL provides a solution for near and far memory, going beyond the original plan for CXL.
Several of the larger SSD companies announced or talked about flash-based SSDs with the CXL interface. It seems that NAND flash will try to fill some of the niches that 3D XPoint (Optane) was going to fill.
Kioxia talked about NAND flash for memory expansion with CXL as well as NAND-based SSDs for performance and capacity storage. The company’s XL-Flash Storage Class Memory SSDs provide higher endurance and performance. The SSD they showed was an NVMe drive, but in their keynote presentation they talked about providing both BiCS Flash and XL-Flash with CXL interfaces to address various workloads. This is shown in the image below. XL-Flash is currently SLC, but the company said that XL-Flash with MLC is coming.
In their keynote talk, FADU showed plans for a PCIe Gen6 CXL SSD by 2024, see below.
Samsung introduced a memory-semantic SSD based upon the CXL protocol for AI/ML applications, shown below. The product provides lower latency using an internal DRAM cache and a larger data space using NAND flash, all over a CXL interface. Small IOs are served out of the DRAM and normal IOs are fed from the NAND flash. Samsung showed a 20X improvement in random read performance compared to a regular PCIe 4.0 SSD.
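The read path described above can be sketched as a simple tier-selection rule: small IOs that hit the internal DRAM cache are served at DRAM speed, while everything else comes from flash. This is only an illustration of the idea; the size cutoff, function names, and latency numbers are assumptions, not Samsung's actual design.

```python
# Illustrative sketch of a memory-semantic SSD read path: small IOs
# served from an internal DRAM cache, larger (or uncached) IOs from
# NAND flash. Cutoff, names, and latencies are all hypothetical.

SMALL_IO_BYTES = 4096   # assumed cutoff for a "small" IO
DRAM_READ_US = 1        # hypothetical DRAM-cache read latency
NAND_READ_US = 80       # hypothetical NAND read latency

def read(offset, length, dram_cache):
    """Return (serving tier, modeled latency in microseconds)."""
    if length <= SMALL_IO_BYTES and offset in dram_cache:
        return "dram", DRAM_READ_US   # small, cached IO: DRAM speed
    return "nand", NAND_READ_US       # everything else comes from flash

dram_cache = {0: b"hot page"}         # offsets currently held in DRAM
print(read(0, 512, dram_cache))       # small cached IO -> DRAM tier
print(read(0, 1 << 20, dram_cache))   # large IO -> NAND tier
```

The large random-read gain Samsung cited follows from this asymmetry: a random small-read workload with a high DRAM hit rate rarely pays the NAND latency.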
Samsung was also showing a CXL memory expander with 512GB capacity.
Marvell is also supporting CXL in its controllers with a vision of full data center composability as shown below. Their presentation also hinted at combined data access using NVMe and CXL, both over the PCIe bus.
Marvell went further to talk about a composable infrastructure with NVMe-oF and CXL as shown below.
SK hynix was showing a CXL memory expander as well as an Elastic CXL memory solution FPGA prototype in their booth, see below. SK hynix announced the development of the company’s first DDR5-based CXL samples.
Microchip talked about their PM8596 Smart Memory Solutions product, which supports the Open Memory Interface (OMI, now part of the CXL Consortium). This product received an FMS award in 2019. The company also announced its first CXL smart memory controller for data center applications. The SMC 2000 controller family delivers DDR4/5 memory bandwidth and capacity expansion for AI/ML and other data-intensive applications. The image below is from their keynote talk.
Silicon Motion didn’t announce a CXL product, but their keynote indicated that CXL is on their radar.
Intel’s Debendra Das Sharma gave a keynote talk focused on CXL. He spoke about interconnects and load-store IO, the role of PCIe/CXL, and the Universal Chiplet Interconnect Express (UCIe), see below. He invited more participants to join the UCIe Consortium.
The following image shows what a CXL-enabled environment makes possible versus PCIe alone. CXL allows expanding memory beyond what DDR supports and sharing it between CPUs. CXL (and perhaps OMI, now part of the CXL Consortium) combined with DDR provide the memory requirements for advanced data processing applications.
A CXL switch allows pooling and sharing memory between CPUs as shown below. This is enabled by CXL 2.0.
Debendra announced the release of the CXL 3.0 specification. The image below shows the CXL 3.0 enhancements. They include a doubling of bandwidth to 64GT/s with no additional latency, protocol enhancements allowing peer-to-peer access to HDM memory, and composable systems with spine/leaf architectures at the rack/pod levels. He speculated that over time CXL could become the only external memory attach point and could also be used for on-package memory.
Besides these keynote talks, CXL was covered in multiple FMS sessions and in demonstrations in the exhibits area. In a real sense, CXL was one of the biggest developments at the 2022 FMS.
The 2022 FMS was dominated by CXL, used for DRAM and also NAND flash devices. OpenCAPI and OMI joined the CXL consortium. All the major flash memory companies announced or said they were working on CXL NAND-flash devices. The controller companies at the conference said that they were supporting CXL interface products. Intel announced CXL 3.0 and talked about a future where CXL would be the ubiquitous memory fabric.