
Use Cases Where DPUs Shine

Tim Lieber


In DPUs Decoded: Redefining Efficiency in Data Processing, we explained how DPUs contain a set of features that can offload work from CPUs and GPUs, thereby decreasing costs in the data center.

Across the board, a DPU performs the same work or more than its CPU and GPU cousins, at lower cost, with less complexity, or at lower power. Whether we’re talking about scale-out applications, real-time data processing, edge computing, network offload, or storage offload, DPUs excel at tasks that CPUs and GPUs aren’t necessarily good at. In an efficient data center, the three processor types can form a highly synergistic relationship.

Let’s look at what makes the Kalray DPU stand out as an x86 offloading engine.

Kalray’s DPU is based on its MPPA® (Massively Parallel Processor Array) architecture, which provides distinctive capabilities in performance, acceleration, real-time processing, programmability, low power, heterogeneous multiprocessing, and security.

The following is just a selection of the use cases where Kalray DPUs excel in performance and efficiency.


DPU as NIC Replacement 

One use case embeds the Kalray DPU card in a server configured with no NICs (or SmartNICs), taking their place. The Kalray card provides both network processing and CPU storage offload, fulfilling all the software-defined network functions expected of the NIC(s) and completely managing all storage processing functions involving data protection (erasure coding, data distribution, etc.), data reduction (compression, deduplication), and data security. In short, it provides all the quality-of-service (QoS) functions required of a software-defined storage environment.
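Erasure coding, one of the data-protection functions named above, can be illustrated with a minimal sketch: a single XOR parity block lets any one lost data block be rebuilt from the survivors. This toy Python example shows only the principle; real implementations (Kalray’s included) use stronger codes across many devices.

```python
def xor_parity(blocks):
    """XOR equal-sized blocks together to form one parity block."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return bytes(parity)

def rebuild_missing(surviving_blocks, parity):
    """Any single lost block is the XOR of the survivors and the parity."""
    return xor_parity(surviving_blocks + [parity])

data = [b"ABCD", b"EFGH", b"IJKL"]   # three data blocks
parity = xor_parity(data)            # one parity block of protection overhead
# Lose block 1, then rebuild it from the other two blocks plus parity:
assert rebuild_missing([data[0], data[2]], parity) == b"EFGH"
```

Offloading this per-byte XOR (and its heavier cousins) is exactly the kind of repetitive, data-parallel work that is cheap on a DPU and wasteful on a host CPU.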


DPU as Fibre Channel HBA Replacement

A similar use case to the NIC replacement is to use the Kalray DPU as a Fibre Channel (FC) HBA replacement. Where in the NIC replacement use case the DPU is the interface to the data center fabric, in the FC HBA case the DPU is the server’s interface to the storage fabric. In a very efficient scenario, the two use cases can be combined, using one Ethernet port for connection to the data center fabric and the other for connection to the storage fabric. If two Kalray DPU cards are used, the server has full front-end and back-end redundancy.


DPU as Sole Processor in Storage Array 

Another use case is the Kalray DPU as the only processor in a storage array, converting a bunch of dumb devices into a software-defined storage array that provides disaggregated storage pools to the data center over a common fabric shared by all storage and server functions. The Kalray DPU can also replace the CPU complex (CPU, memory, PCIe cards) in a storage server. In both scenarios, the Kalray DPU delivers the same or more functionality at lower cost, half the power consumption, and similar or better performance. As an extra bonus, Kalray devices are good at data analysis, which enables computational storage functions where data is analyzed in the array: only the results of the analysis are moved, not the entire data set.
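The data-movement saving behind computational storage can be sketched in a few lines (hypothetical data set and query, for illustration only): the aggregation runs where the data lives, and only a tiny summary crosses the fabric.

```python
# Hypothetical data set living inside the storage array.
records = [{"sensor": i % 8, "value": i * 0.5} for i in range(100_000)]

def query_in_array(data, sensor_id):
    """Run the analysis next to the data; return only the small result."""
    values = [r["value"] for r in data if r["sensor"] == sensor_id]
    return {"count": len(values), "mean": sum(values) / len(values)}

# Only this small dict crosses the fabric, not all 100,000 records.
summary = query_in_array(records, sensor_id=3)
assert summary["count"] == 12_500
```

The host receives a handful of bytes instead of the full data set, which is where the fabric-bandwidth and host-CPU savings come from.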


All DPUs Are NOT Created Equal

What would happen if Kalray DPUs were installed in all three use cases above? The data center could move to a single fabric where client and storage traffic are separated by network technology (VLANs) rather than by a separate Fibre Channel fabric. Consider the cost savings of eliminating all FC from the data center. In addition, having Kalray DPUs in both servers and storage arrays allows data services to be distributed wherever it makes sense from a performance or topology perspective. The CPUs and GPUs would no longer perform any data services.
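As a concrete sketch of VLAN-based traffic separation on a converged Ethernet fabric (interface names, VLAN IDs, and addresses are hypothetical), a single Linux port can carry client and storage traffic as two 802.1Q tagged sub-interfaces:

```shell
# One physical port, two logically isolated networks via 802.1Q tags.
ip link add link eth0 name eth0.100 type vlan id 100   # client traffic
ip link add link eth0 name eth0.200 type vlan id 200   # storage traffic
ip addr add 10.0.100.5/24 dev eth0.100
ip addr add 10.0.200.5/24 dev eth0.200
ip link set eth0.100 up
ip link set eth0.200 up
```

The same tagging discipline on the switches keeps the two traffic classes isolated without a second physical fabric.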

There are benefits to having specific resources within the DPU configured the way they are, and these benefits manifest as greater opportunities to optimize total cost of ownership (TCO): better IOPS/W or better MB/s/$ than other DPUs or CPUs.
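As a back-of-envelope illustration of those two metrics (all figures hypothetical, not measured Kalray or competitor numbers): at equal delivered work, lower power improves IOPS/W and lower cost improves MB/s/$.

```python
# Hypothetical comparison, for illustration of the TCO metrics only.
def iops_per_watt(iops, watts):
    return iops / watts

def mbps_per_dollar(mb_per_s, cost_usd):
    return mb_per_s / cost_usd

dpu = {"iops": 2_000_000, "watts": 50,  "mbps": 12_000, "cost": 1_500}
cpu = {"iops": 2_000_000, "watts": 200, "mbps": 12_000, "cost": 4_000}

# Same delivered work, so the efficiency ratios favor the lower-power,
# lower-cost offload engine.
assert iops_per_watt(dpu["iops"], dpu["watts"]) > iops_per_watt(cpu["iops"], cpu["watts"])
assert mbps_per_dollar(dpu["mbps"], dpu["cost"]) > mbps_per_dollar(cpu["mbps"], cpu["cost"])
```

With these made-up numbers the DPU delivers 40,000 IOPS/W versus 10,000 for the CPU, and 8 MB/s per dollar versus 3; the point is the shape of the comparison, not the specific values.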

When efficiency, performance, and TCO matter – choose the hardware that supports your goals.


Next Up: Ahead of the Pack: How Kalray DPUs Deliver True Data Acceleration

Tim Lieber

Lead Solutions Architect, Kalray

Tim Lieber is Lead Solutions Architect at Kalray, working with product management and engineering. His role is to innovate product features that use the Kalray MPPA DPU to solve data center challenges with solutions that improve performance and meet aggressive TCO efficiency targets. Tim has been with Kalray for approximately four years and has worked in the computing and storage industry for more than 40 years in innovation, leadership, and architectural roles.
