Samsung has just released a second-generation SmartSSD, aimed at the nascent computational storage market. Even if you aren’t aware of SmartSSDs, you should be. Although members of the Storage Networking Industry Association (SNIA) have been discussing the concept for years, it’s yet to gain much traction. Nevertheless, the concept has a strong foundation.
Computational storage is one of many ideas, including Nvidia's DPUs (data processing units) and Intel's IPUs (infrastructure processing units), that aim to offload non-revenue-generating tasks from server CPUs in data centers. Computing and storage have been separated in data centers for years, but a growing number of data center architects believe that significant power is wasted shuttling data back and forth between storage and the CPUs that process it. Why not process the data where it's stored and so reduce the latency, power consumption, and cost of moving it around? As data sets grow ever larger, this question becomes ever more important.
According to the SNIA Computational Storage Web page:
“Computational Storage solutions typically target applications where the demand to process ever-growing storage workloads is outpacing traditional compute server architectures. These applications include Artificial Intelligence (AI), big data, content delivery, database, Machine Learning (ML) and many others that are used industry-wide.”
Samsung’s first-generation SmartSSD was introduced in 2020. Externally, the drive looked like every other Samsung NVMe SSD. Internally, it paired a Xilinx Kintex FPGA and 4 Gbytes of SDRAM with the drive’s Samsung SSD controller chip and 4 Tbytes of NAND storage. The idea was to configure custom processors within the SmartSSD’s embedded Kintex FPGA to offload and accelerate data-processing workloads, eliminating the transfer of all that data back and forth over the data center’s internal network. Only the final results of the processing would be returned to the host server.
Clearly, workloads such as a database search can greatly benefit from this type of offloading. Further, each new storage drive added to the system also adds to the computational capabilities, because each SmartSSD has its own Kintex FPGA. In other words, computational resources scale with storage resources when SmartSSDs are used for storage. Early demo applications using the first-generation Samsung SmartSSDs showed performance improvements of as much as one to two orders of magnitude. Because of its clear performance and energy-efficiency advantages, Samsung’s first-generation SmartSSD received an Innovation Award at CES 2021.
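To see why a selective scan benefits so dramatically, consider how much data crosses the storage link in each case. The sketch below is purely illustrative (the record size, data set, and function names are invented for this example, not Samsung's actual software stack): a conventional scan ships every record to the host, while an in-storage scan applies the filter on the drive and returns only the matches.

```python
# Illustrative comparison of bytes moved over the storage link for a
# selective scan. All names and sizes here are hypothetical.

RECORD_SIZE = 512  # bytes per record (illustrative)

def host_side_scan(records, predicate):
    """Conventional path: every record crosses the link to the host CPU."""
    bytes_moved = len(records) * RECORD_SIZE
    matches = [r for r in records if predicate(r)]
    return matches, bytes_moved

def in_storage_scan(records, predicate):
    """Computational-storage path: the drive's FPGA applies the predicate
    locally, so only matching records cross the link."""
    matches = [r for r in records if predicate(r)]
    bytes_moved = len(matches) * RECORD_SIZE
    return matches, bytes_moved

records = list(range(1_000_000))        # stand-in for a table scan
predicate = lambda r: r % 1000 == 0     # selective filter: 0.1% hit rate

_, host_bytes = host_side_scan(records, predicate)
_, drive_bytes = in_storage_scan(records, predicate)
print(f"host-side scan moved  {host_bytes:,} bytes")   # 512,000,000
print(f"in-storage scan moved {drive_bytes:,} bytes")  # 512,000
```

With a 0.1% hit rate, the in-storage path moves a thousandth of the data, which is consistent with the order-of-magnitude gains the early demos reported; highly selective scans benefit most.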
Less than two years later, Samsung is taking a second cut at computational storage by introducing a second-generation SmartSSD. This time, the SmartSSD augments the internal SSD with an AMD-Xilinx Versal “adaptive SoC,” which incorporates far more processing power than the earlier Kintex FPGAs. While Kintex FPGAs are manufactured using a 28nm process technology, Versal devices are manufactured with a far more advanced 7nm process technology, so the FPGA fabric in Versal devices can be as much as twice as fast, depending on the hardware instantiated in the FPGA. That advantage alone will deliver a significant performance boost for Samsung’s second-generation SmartSSD. In addition, Versal SoCs combine the FPGA fabric with a dual-core Arm Cortex-A72 application processor and a dual-core Arm Cortex-R5F real-time processor for additional compute performance.
However, improved performance by itself will not guarantee success for the second-generation SmartSSD. One of the major obstacles to widespread adoption of computational storage is the effort required to rewrite data center workload applications to take advantage of the added computational resources in the SmartSSDs. Software developers are reluctant to make such changes for specialized and proprietary computing resources that may not be available in all possible data center environments. To overcome this hesitation, Samsung is leveraging work being done by SNIA’s Computational Storage Technical Work Group, which published the “Computational Storage Architecture and Programming Model v0.9” and the “Computational Storage API v0.8 rev 0” for public comment in June. Adhering to standards like these should help computational storage find a foothold in the expanding architecture of future data centers.
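The call pattern such a standard describes can be sketched as follows. To be clear, none of the class or method names below come from the SNIA Computational Storage API; they are hypothetical stand-ins meant only to illustrate the flow an application would follow: discover a computational storage device, name a function that runs on it, and execute that function near the data so only results return to the host.

```python
# Hypothetical sketch of the computational-storage call pattern.
# These names are invented for illustration and are NOT the SNIA API.

class ComputationalStorageDevice:
    """Stand-in for a SmartSSD exposing offloadable functions."""

    def __init__(self, name, functions):
        self.name = name
        self.functions = functions  # named functions the drive can run
        self.blocks = {}            # simulated on-drive storage

    def write(self, lba, data):
        self.blocks[lba] = data

    def execute(self, func_name, lbas, **params):
        """Run a named function on-device; only results leave the drive."""
        func = self.functions[func_name]
        return [func(self.blocks[lba], **params) for lba in lbas]

# A counting function that could be instantiated in the drive's FPGA fabric.
drive = ComputationalStorageDevice(
    "smartssd0",
    {"count_matches": lambda blk, needle: blk.count(needle)},
)
drive.write(0, b"error ok error ok")
drive.write(1, b"ok ok ok")

# The host asks the drive to scan; only per-block counts come back.
print(drive.execute("count_matches", [0, 1], needle=b"error"))  # [2, 0]
```

Because the application addresses the drive through a standard abstraction rather than vendor-specific hooks, the same code could in principle target any conforming device, which is precisely the portability argument behind the SNIA effort.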