Question AMD acquires Pensando

sandorski

No Lifer
Oct 10, 1999
70,316
5,888
126
Interesting. I wonder if their non-CPU Chip has something to do with it. Adding it to the Epyc line might be the plan. Although having a Complete/Near Complete Cloud solution could permanently keep AMD in the Market and drive more Epyc sales. Perhaps future Epyc processors will be designed specifically to run this particular Software, giving another way to achieve max performance.
 

Saylick

Diamond Member
Sep 10, 2012
3,644
8,222
136
STH has a good write up:

Edit: Just wanted to drop some of my own thoughts here.

Looks like this acquisition is a play to get into the DPU market much quicker than Xilinx could. Pensando already has major cloud vendors as their customers, and their DPUs are more or less analogous to Mellanox's and Intel's offerings. Pensando's biggest pitch is that they are trying to democratize the cloud market, i.e. making it so that small-time businesses can compete against AWS, the 800 lb gorilla, by giving them a competitive DPU and software stack against Nitro. As for DPUs, the differences between companies aren't all that large in my opinion, just choice of processor and data packet processing pipeline. Looks like Pensando uses a P4-based programmable pipeline, built on an open source language that was created specifically for packet processing, while Mellanox's BlueField-3 uses what they call DOCA, which I imagine is not open source, i.e. proprietary. ServeTheHome says that Nvidia hopes to position DOCA as the CUDA for DPUs. Intel's Mount Evans SoC also supports the open source P4 pipeline.
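
To make the "programmable match-action pipeline" idea concrete, here is a minimal Python sketch of the model P4 exposes (my own toy illustration, not Pensando's actual P4 code; the table keys and actions are invented):

Code:
# Toy illustration of the match-action model used by P4-programmable pipelines.
# Plain Python for intuition only; real P4 programs run in hardware against
# parsed packet headers at line rate.
from dataclasses import dataclass

@dataclass
class Packet:
    src_ip: str
    dst_ip: str
    dst_port: int

# A "table" maps a match key to an action, much like a P4 table entry.
acl_table = {
    ("10.0.0.5", 22): "drop",       # block SSH to this host
    ("10.0.0.5", 443): "forward",   # allow HTTPS
}

def apply_acl(pkt: Packet) -> str:
    """Look the packet up in the table and return the chosen action."""
    return acl_table.get((pkt.dst_ip, pkt.dst_port), "forward")  # default action

print(apply_acl(Packet("192.168.1.9", "10.0.0.5", 22)))   # -> drop
print(apply_acl(Packet("192.168.1.9", "10.0.0.5", 443)))  # -> forward

The point of P4 is that the tables, match keys and actions themselves are what you program, rather than being fixed functions baked into the silicon.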

It appears that Google is a big supporter of P4, which explains why Intel pushed for its support on their Mt. Evans chips. By and by, AMD is clearly gunning for Intel's slice of the Cloud market with this purchase. Meanwhile, Nvidia are off doing their own thing by having more and more AI/inferencing capability baked into their future DPU chips, not surprisingly.
 
Last edited:

marees

Senior member
Apr 28, 2024
578
639
96
New AMD Pensando Salina DPU wants to power the next generation of AI infrastructure

Pensando Salina DPU​

As with previous generations, AMD's latest DPU is split into two parts: the front-end, which delivers data and information to an AI cluster, and the back-end, which manages data transfer between accelerators and clusters.

The new Pensando Salina DPU is the company's third-generation release and promises 2x the performance, bandwidth and scale of the previous generation.

AMD says it can support 400G throughput, meaning faster data transfer rates than ever before, a huge advantage as companies around the world look for quicker and more efficient infrastructure to keep up with AI demands.
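
For a rough sense of scale, here is a back-of-the-envelope calculation (my own numbers, not AMD's) of what a 400G line rate means for moving data:

Code:
# What 400 Gb/s of line rate means in round numbers (illustrative only).
line_rate_bits = 400e9                 # 400 gigabits per second
bytes_per_sec = line_rate_bits / 8     # ~50 GB/s
print(f"{bytes_per_sec / 1e9:.0f} GB/s")

dataset_bytes = 1e12                   # e.g. a 1 TB checkpoint or data shard
print(f"~{dataset_bytes / bytes_per_sec:.0f} s to move 1 TB")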

From Wikipedia:
A data processing unit (DPU) is a programmable computer processor that tightly integrates a general-purpose CPU with network interface hardware.[1] Sometimes they are called "IPUs" (for "infrastructure processing unit") or "SmartNICs".[2] They can be used in place of traditional NICs to relieve the main CPU of complex networking responsibilities and other "infrastructural" duties; although their features vary, they may be used to perform encryption/decryption, serve as a firewall, handle TCP/IP, process HTTP requests, or even function as a hypervisor or storage controller.[1][3]

These devices can be attractive to cloud computing providers whose servers might otherwise spend a significant amount of CPU time on these tasks, cutting into the cycles they can provide to guests.[1]
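
A quick, purely illustrative calculation shows why that CPU time matters: at high line rates the per-packet cycle budget on a single host core is small (the packet size and clock speed below are assumptions of mine, not figures from the article):

Code:
# Rough per-packet CPU budget at a high line rate (illustrative numbers only).
line_rate_bits = 100e9               # a single 100 GbE port
pkt_bits = 1500 * 8                  # MTU-sized packets
pps = line_rate_bits / pkt_bits      # ~8.3 million packets per second
print(f"{pps / 1e6:.1f} Mpps")

cpu_hz = 3e9                         # one 3 GHz core
cycles_per_pkt = cpu_hz / pps        # ~360 cycles per packet on that core
print(f"{cycles_per_pkt:.0f} cycles per packet")

A few hundred cycles per packet is not much once virtual switching, firewalling and encryption are layered on, which is exactly the CPU time a DPU reclaims for guest workloads.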

More on DPUs:
https://www.servethehome.com/?s=Dpu
 
Reactions: Tlh97 and Mahboi

marees

Senior member
Apr 28, 2024
578
639
96

AMD launches new DPUs to boost AI efficiency and network performance in data centres​

AMD has launched a new range of data processing units (DPUs) to reduce strain on networks for data centre operators.

Unveiled at the company’s Advancing AI event in San Francisco, the Pensando Salina 400 and Pensando Pollara 400 are designed to enhance AI workload efficiency by improving network routing to avoid traffic congestion.

AMD’s Salina 400 is designed for front-end networks. Featuring 16 Arm N1 cores, the DPU is designed for hyperscalers, enabling them to support intelligent load balancing to utilise the complete bandwidth while minimising network congestion.
The new Salina 400 DPU optimises front-end networks, enabling performance to remain efficient during intense workloads such as AI training.


The new Pollara 400 networking adapter, meanwhile, is designed for back-end networks.
The Pollara 400 NIC is the first-ever adapter designed to support the UEC standard for AI and high-performance computing data centre interconnects. Developed by the Ultra Ethernet Consortium, the standard is seen as an alternative to InfiniBand, an interconnect largely used by hardware rival Nvidia.


Sitting at the heart of both of AMD’s new networking solutions is its P4 engine, a compact, fully programmable unit designed to optimise network workloads.
The P4 engine is capable of supporting 400 gigabits per second (Gb/s) line-rate throughput while multiple services run concurrently on the device.

The devices are essentially designed to ensure AI workloads in data centres powered by vast arrays of GPUs operate at peak efficiency. The hardware manages network congestion to avoid performance degradation, re-routing traffic around network failures and recovering quickly from occasional packet loss.
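
Below is a minimal sketch of the kind of congestion-aware path selection being described (my simplification, not AMD's or the UEC's actual algorithm; path names and utilisation figures are invented):

Code:
# Toy model of congestion-aware path selection with failover.
# Real DPU/UEC logic reacts to telemetry at microsecond timescales; this only
# illustrates steering traffic away from congested or failed links.
paths = {
    "path_a": {"healthy": True,  "utilization": 0.92},
    "path_b": {"healthy": True,  "utilization": 0.35},
    "path_c": {"healthy": False, "utilization": 0.10},  # failed link
}

def pick_path(paths: dict) -> str:
    """Choose the least-utilised healthy path; skip failed links entirely."""
    healthy = {name: p for name, p in paths.items() if p["healthy"]}
    if not healthy:
        raise RuntimeError("no usable path")
    return min(healthy, key=lambda name: healthy[name]["utilization"])

print(pick_path(paths))  # -> path_b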


https://www.capacitymedia.com/article/amd-launches-new-dpus


Soni Jiandani, SVP and general manager of the network technology solutions group at AMD, said in a press briefing that the Ethernet-based standard can scale to millions of nodes, compared to the foundational architecture of InfiniBand, which is not poised to scale beyond 48,000 nodes “without making dramatic and highly complex workarounds.”

The Pollara 400 is also programmable, enabling it to support further UEC-developed standards as they are released.

“Salina 400 and the Pollara 400 are solving the challenges for both front-end and back-end networks, including faster data ingestion, secure access, intelligent load balancing, congestion management and fast failover, and loss recovery,” Jiandani said.
 
Reactions: Tlh97 and Mahboi

marees

Senior member
Apr 28, 2024
578
639
96
The DPU Market Landscape:

DPUs are at a really interesting crossroads. They are immensely popular in hyper-scale data centers.

  • Fungible was acquired at a low cost by Microsoft.
  • Intel has yet to move its Intel E2000 / Mount Evans DPU/IPU beyond Google in an appreciable way.
  • AMD Pensando is out and is mostly focused on the networking side at the moment, with an integration even with VMware.
  • NVIDIA BlueField-3 cards are much better than BlueField-2, but they are in short supply given the demand for NVIDIA’s AI servers and how NVIDIA is pushing those with its GPUs.
  • Marvell has the Octeon 10 DPU in card form now.
https://www.servethehome.com/talking-dpus-this-week-with-achronix/
 

marees

Senior member
Apr 28, 2024
578
639
96
Intel vs AMD vs Nvidia:

Pensando, Intel and Nvidia each approach DPUs from a different perspective. Pensando, chaired by former Cisco CEO John Chambers, leans toward networking; Nvidia toward artificial intelligence; and Intel toward machine learning and real-time network telemetry.

The Pensando packet processor uses an ARM core to control and supervise packet paths while allowing smartNIC manufacturers to use the specialized P4 programming language to tailor the data plane for customers' use cases.
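
A rough sketch of that control-plane/data-plane split, building on the match-action idea earlier in the thread (class, rule and address names are mine, standing in for the Arm core programming a hardware P4 table):

Code:
# Sketch of the control-plane / data-plane split on a SmartNIC/DPU.
# The Arm core (ordinary software, the "slow path") installs entries;
# the P4-programmed data plane (the "fast path") matches them per packet.
class DataPlaneTable:
    """Stand-in for a hardware match-action table programmed via P4."""
    def __init__(self):
        self.entries = {}

    def install(self, match_key, action):
        self.entries[match_key] = action          # control plane writes entries

    def lookup(self, match_key):
        return self.entries.get(match_key, "punt_to_control_plane")

nat_table = DataPlaneTable()

# Control plane decides policy occasionally...
nat_table.install(("10.0.0.7", 8080), ("203.0.113.4", 80))

# ...the data plane applies it for every packet at line rate.
print(nat_table.lookup(("10.0.0.7", 8080)))  # -> translated address/port
print(nat_table.lookup(("10.0.0.9", 22)))    # -> miss, punt to control plane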

Intel started pursuing a similar technique with the 2019 acquisition of Barefoot Networks, a chipmaker for switches.

Nvidia's smartNIC technology is the programmable BlueField DPU. The company has partnered with VMware to deliver its ESXi hypervisor on BlueField.

Distributing services in smartNICs to boost application performance started with public cloud providers like AWS and Microsoft Azure. AMD, Intel and Nvidia want to work with enterprises that plan to mimic the architecture in their private clouds.

https://www.techtarget.com/searchne...-up-DPU-competition-with-Pensando-acquisition
 

marees

Senior member
Apr 28, 2024
578
639
96
AMD vs Nvidia

Speed
AMD has one major advantage over NVIDIA: speed. At the time that we started writing this in May 2023, NVIDIA BlueField-2 was only supported on 25GbE cards while AMD Pensando is supported on 100GbE cards.

Architecture
NVIDIA is effectively accelerating Open vSwitch; its BlueField-2 DPU architecture pairs a ConnectX-6 with an Arm/accelerator co-processor linked by a PCIe switch.

The AMD Pensando Elba has a fast NoC and runs traffic through its P4 engines, with some offload engines and its Arm complex as almost ancillary features. There is an Arm Cortex-A72 complex with memory and its own onboard storage.

User Experience
The user experience is VERY different between NVIDIA and AMD. With NVIDIA, one logs into the DPU almost like a Raspberry Pi (using Ubuntu) with a giant 100GbE NIC attached.

With AMD Pensando you can log into the DPU, but the way you are really meant to program the part is through the P4 engines.
AMD has a lightweight Linux distribution at its base, with API tools on top that let one do the packet processing in P4.

If this sounds a bit tricky, it is. To be clear, AMD is currently targeting larger customers such as VMware, or hyper-scalers to enable many machines to effectively use the silicon. NVIDIA has a model that feels more like its Jetson boxes where if you have a card, you can log in and start playing. Indeed, we already did a piece on Building the Ultimate x86 and Arm Cluster-in-a-Box just for fun because NVIDIA’s model allows for this.

That is why VMware is so important here. Bringing DPU support to VMware means that VMware hides the complexity of learning P4, and instead one buys a system with a Pensando DPU and VMware, and there is a single toggle to enable high-performance UPT networking using the DPU. It is actually the same toggle that lets you select NVIDIA BlueField-2 and AMD Pensando DPUs. From a VMware end-user perspective, the process is very similar.

https://www.servethehome.com/hands-...a-secret-lab-arm-nvidia-dell-vmware-esxi-upt/
 
Reactions: Tlh97

marees

Senior member
Apr 28, 2024
578
639
96
On VMware integration:

VMware administrators are familiar with one of the biggest basic networking challenges in a virtualized environment. One can use the vmxnet3 driver for networking and get all of the benefits of having the hypervisor manage the network stack. That provides the ability to do things like live migrations but at the cost of performance. For performance-oriented networking, many use pass-through NICs that provide performance, but present challenges to migration.

With UPT (Uniform Pass Through) we get the ability to do migrations with higher performance levels close to pass-through.

The challenge with UPT is that we need the hardware and software stack to support it, but since AMD Pensando DPUs are supported, we finally have that.

https://www.servethehome.com/hands-...secret-lab-arm-nvidia-dell-vmware-esxi-upt/2/


Final Words​

Taking a step back, requiring a P4 programmable packet processing engine, a complex of Arm cores, onboard memory, NAND, and running VMware ESXio on the NICs is a lot to just enable UPT performance. Just looking at the cards, one can see just how much is needed to accomplish that task. On the other hand, this is really the start.

After using both BlueField-2 and Pensando in VMware, and also seeing how one logs into the cards and what can be done on there, I have a fairly strong opinion about both. If you just want something that is easy to log into and play around with, NVIDIA BlueField-2 is the hands-down winner. We have shown this many times, but the experience is like logging into a fancy Raspberry Pi with a 100GbE NIC attached over a PCIe switch. AMD Pensando, on the other hand, is the DPU that has roughly 4x the performance as we were writing this piece, since it has 100GbE networking for VMware versus 25GbE for BlueField-2 (this will change over time). NVIDIA has BlueField-3 now, although it is not yet supported for VMware, so things will change over time. Perhaps the most intriguing part is that while there is a huge gap in a non-P4 developer’s experience logging into the cards and running things, VMware hides all of that. VMware is doing all of the low-level programming, to the point where vastly different architectures can be used with the only difference being selecting one drop-down menu item for either solution.

The vision of adding DPUs to VMware is simple. Order servers with DPUs today, and you can use them as standard NICs. As you are ready, you can bring a cluster’s capabilities online with NSX and UPT to increase the manageability and performance of your VM networking. VMware is going to keep building functionality in the future and has talked about things like vSAN/ storage, bare metal management, and other capabilities coming to DPUs. VMware is building those capabilities so you do not need to have DPU or P4 programming experts on staff. If you are a hyper-scaler, that is not an issue, but for most enterprises, it is easier to wait for VMware to deliver these capabilities.

In 3-5 years, you are going to want DPUs in your environment as VMware adds each incremental feature set. If you are installing servers without supported DPUs today, then your cluster will not get those same features.

https://www.servethehome.com/hands-...secret-lab-arm-nvidia-dell-vmware-esxi-upt/5/
 
Reactions: Tlh97

marees

Senior member
Apr 28, 2024
578
639
96
Summary: (I still haven't figured out how this fits into AMD's hardware stack)

DPUs used to be called “offload engines.” For busy information technology and networking groups struggling to keep pace with the explosion of data and the need to move large chunks of it regularly, AMD offers an excellent approach to DPUs.

Stop using servers to do things they’re not good at: How DPUs can change the data game​


Servers are effective at many things but were never designed for the rigors of software-defined networking, storage and security. Servers excel in processing massive amounts of data, but they should not be performing functions such as network address translation, telemetry, firewall, storage-related services and, down the road, artificial intelligence functions such as AllReduce.

This is where data processing units, or DPUs, can add significant value. DPUs disaggregate data processing from servers and SDNs, freeing servers to do what they are best at. There are currently many DPU providers, but they are not equal. The AMD Pensando DPU is a fully programmable, Ethernet-based platform that delivers, at scale, cloud, compute, network, storage and security services.
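
The AllReduce mentioned above is the collective operation used in distributed training: every worker ends up with the element-wise sum of all workers' gradient vectors. Here is a minimal, unoptimised Python reference of what it computes (real implementations use ring or tree schedules over the network, which is what makes them candidates for NIC/DPU offload):

Code:
# Minimal reference for an all-reduce (sum): after the operation, every worker
# holds the element-wise sum of all workers' vectors. Unoptimised, for clarity.
def allreduce_sum(worker_vectors):
    length = len(worker_vectors[0])
    total = [sum(vec[i] for vec in worker_vectors) for i in range(length)]
    return [list(total) for _ in worker_vectors]  # each worker gets the sum

# Three workers, each holding a local gradient vector:
grads = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
print(allreduce_sum(grads))  # every worker now holds [9.0, 12.0]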

The Pensando distributed services card was one of the first DPUs to support VMware vSphere 8, formerly Project Monterey. The company says this partnership aims to help companies cut ops costs with unified management of workloads while offloading processing from CPUs and adding a layer of security by isolating infrastructure services from tenants.

https://siliconangle.com/2024/02/05...gs-theyre-not-good-dpus-can-change-data-game/

One of the interesting questions regarding DPUs going forward is support for Ethernet. There has been great debate in the industry regarding when and if Ethernet would displace InfiniBand. The latter is fast and lossless and has been the gold standard for connectivity within compute clusters, but it also presents scheduling issues, security challenges, and management problems.

Considering the high demand for AI training and inferencing among hyperscalers, programmable congestion control, standardization on transport, telemetry, and support for both scale-up and scale-out are key requirements; Ethernet now presents a better option. For Pensando, AMD uses Ethernet, the winner in every other part of networking. Ethernet has all the features that InfiniBand lacks and now has the market acceptance that you need in a technology, with support from the likes of Amazon Web Services Inc., Alibaba Cloud, Google Cloud, Meta Platforms Inc., Oracle Corp. and others.

Also, a new standard for Ethernet is on the horizon – Ultra Ethernet, which closes any performance gap between Ethernet and InfiniBand. AMD is a key contributor to the Ultra Ethernet Consortium. In addition to AMD, the consortium has many strong supporters, including Cisco Systems Inc., the market leader in networking; Broadcom Inc. and Intel Corp., the largest silicon manufacturers; Arista Networks Inc., the leader in high-performance networking; and Hewlett Packard Enterprise Co., one of the biggest computing vendors.
 
Reactions: Tlh97

marees

Senior member
Apr 28, 2024
578
639
96
Why DPUs?

“In the same way GPUs accelerated graphical capabilities in entertainment and gaming applications, DPUs can help organizations create architectures to support next-gen reliability, performance, and security requirements. Think of scaling these next-gen applications to support hundreds of thousands of active users, and the potential advantage is using SmartNICs and DPUs to ensure high performance, reduce compute costs, and lower energy consumption.”

“With the explosion of data, demand for speed, and complex analytics, the CPUs at data centers can be heavily constrained. By offloading data storage and optimizing the network, the DPU frees the CPU power for mission-critical applications. In other words, adding DPUs is like de-bottlenecking at data centers, providing more fire power and performance with the same CPU architecture.”

“In an increasingly data-centric world, offloading critical infrastructure services allows for businesses to have more control of their storage, computer processing, and management. . . . And the strategy of offloading and isolation certainly will help fortify cybersecurity.”

“The exponential generation of organizational data has made data collection, input, processing, and output extremely costly. This has led to the greater adoption of DPUs that bring the data processing closer to the data. A data-centric architecture avoids dated paradigms of moving data to where the compute is located and alternatively moves the compute function to the data.”

https://www.cio.com/article/401773/...ges-of-deploying-dpus-in-the-data-center.html

According to Steve Guilford (LinkedIn: Steven Guilford), CEO of AsterionDB, “Offloading network and security services to a dedicated DPU has distinct advantages in the AsterionDB architecture. We turn middle-tier devices into elastic security isolation components that focus specifically on networking requirements without any concern for data or business logic. This allows for the implementation of a generic API interface to centralized resources in the AsterionDB architecture. Generic APIs typically reveal a single entry-point. This single entry point thusly does not reveal any information to an infiltrator, which obviously greatly enhances security.”

Adam Stern (@iv_cloudhosting), Founder and CEO of Infinitely Virtual, said that when deployed wisely and well, DPUs can prove to be a godsend. “Operations performed in silicon are faster and, in the main, safer than software-driven tasks,” he added.

Nikolay Ganyushkin (LinkedIn: Nikolay Ganyushkin), CEO and co-founder of Acure, sees the rise of DPUs in the context of a number of other recent trends.
“We see it now in the development of corporate strategies and organizational structure, where agile teams greatly reduce the overhead of administrative staff,” he said. “We also see this in the development of enterprise applications, where we are moving from monolithic structures to microservice architectures that are much more stable and easier to scale. We see the same trend in hardware: DPUs allow you to move the center of decision-making to the periphery and provide reduced loads on critical infrastructure areas.”
 

marees

Senior member
Apr 28, 2024
578
639
96
Pensando vs. Xilinx

The compute engines in DPUs can be used to offload networking (sometimes with literal packet processors) and storage workloads from the CPU, while the software stack in the platform offers software-defined networking, storage and security services along with policy and management functions.

In Pensando’s case, its SoC includes 16 Arm cores and a P4 MPU that delivers Pensando’s services at 400Gb/sec. It is installed in any standard server with the company’s Distributed Services Card, providing the services at the server edge and network visibility via its hardware bi-directional flow streaming and traffic monitoring. The Policy and Services Manager delivers lifecycle management, security and visibility throughout the software stack, as well as integration to analytics, orchestration and management tools through APIs.

With Pensando, AMD will deliver a high-performance collection of silicon and software that can support a range of services simultaneously without the enterprise having to do any FPGA-like programming.

https://www.nextplatform.com/2022/04/04/amd-makes-a-big-dpu-move-with-1-9-billion-bid-for-pensando/

The programmability and SmartNIC capabilities of Pensando’s platform strike a similar chord to what Xilinx offers. Norrod notes Xilinx SmartNICs, saying they’ve been adopted by some of AMD’s larger enterprise customers. However, he says, the approaches taken by each company are different.

“Both solutions have a set of hard-coded accelerators for very common features,” he says. “The difference is, for anything else, with Pensando, we’ve got a programmable, highly customizable P4 packet processor that can do many, many things and can be easily programmed in software to do those many things. Xilinx has very high performance and, in Xilinx, you can do a lot of that and you can actually do it at even higher performance, but you have to program the FPGA. The way that we look at this is, for the ultimate in performance for a narrow set of functions, we can address that with the Xilinx technology and offer the pinnacle of performance for certain classes of workloads and certain customers.”

It also comes at a time when Intel – with its infrastructure processing unit (IPU) and SmartNICs – and Nvidia (BlueField DPUs and DPU-based SmartNICs) both are expanding their portfolios. What they don’t have is the system software that Pensando delivers or the broader ecosystem, Jiandani says. Along with Aruba, Hewlett Packard Enterprise, and VMware and the cloud providers, the vendor also works with Splunk, ServiceNow, Guardicore and Elasticity to help make Pensando foundational to enterprises’ and cloud providers’ zero-trust network architecture.
 
Reactions: Tlh97

marees

Senior member
Apr 28, 2024
578
639
96
NIC vs DPU vs FPGA



Although many may run Linux and have their own CPU cores, the function of a SmartNIC is to alleviate the burden from the host CPU as part of the overall server. In that role, SmartNICs differ from DPUs, as DPUs seem to be more focused on being independent infrastructure endpoints.

When we surveyed what is being called a “DPU” today, offload and programmability are certainly key capabilities. The big difference was that vendors are designing the DPU in the spirit of the AWS Nitro platform to be infrastructure endpoints. Those infrastructure endpoints may attach storage to the network directly (e.g. with the Fungible products), may be a secure onramp to the network (e.g. with the Pensando DSC products/ Marvell Octeon products), or may be more general-purpose endpoints that deliver compute, network, and storage securely to and from the overall infrastructure.

The category we are currently calling Exotic NICs are solutions that generally have enormous flexibility. Often, that flexibility is enabled by utilizing large FPGAs. With FPGAs, organizations can create their own custom pipelines for low latency networking and even applications such as AI inferencing being part of the solution without needing to utilize the host CPU.

https://www.servethehome.com/dpu-vs-smartnic-sth-nic-continuum-framework-for-discussing-nic-types/
 

marees

Senior member
Apr 28, 2024
578
639
96

More on DPUs:

A DPU is designed for disaggregating the infrastructure and application resources in the data center. The DPU is designed to be an infrastructure endpoint that exposes network services to a server and its devices while, at the same time, securely exposing the server and device capabilities to the broader infrastructure.

Key characteristics that DPUs share (see the sketch after this list) include:

  • High-speed networking connectivity (usually multiple 100Gbps-200Gbps interfaces in this generation)
  • High-speed packet processing with specific acceleration and often programmable logic (P4/ P4-like is common)
  • A CPU core complex (often Arm or MIPS based in this generation)
  • Memory controllers (commonly DDR4 but we also see HBM and DDR5 support)
  • Accelerators (often for crypto or storage offload)
  • PCIe Gen4 lanes (run as either root or endpoints)
  • Security and management features (offering a hardware root of trust as an example)
  • Runs its own OS separate from a host system (commonly Linux, but the subject of VMware Project Monterey ESXi on Arm as another example)
https://www.servethehome.com/dpu-vs-smartnic-sth-nic-continuum-framework-for-discussing-nic-types/
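
Read as a spec, that checklist maps onto a simple record like the sketch below (the field names and example values are mine, loosely based on the Pensando parts discussed earlier in the thread, not an official datasheet):

Code:
# The DPU checklist above expressed as a simple spec record (illustrative only).
from dataclasses import dataclass
from typing import List

@dataclass
class DpuSpec:
    network_ports_gbps: List[int]   # multiple 100-200 Gbps interfaces
    packet_pipeline: str            # P4 / P4-like programmable logic
    cpu_complex: str                # Arm or MIPS core cluster
    memory: str                     # DDR4, DDR5 or HBM
    accelerators: List[str]         # e.g. crypto, storage offload
    pcie: str                       # Gen4, run as root or endpoint
    hardware_root_of_trust: bool    # security/management features
    own_os: str                     # Linux, or ESXi on Arm (Project Monterey)

pensando_like = DpuSpec(
    network_ports_gbps=[100, 100],
    packet_pipeline="P4",
    cpu_complex="Arm Cortex-A72 complex",
    memory="DDR4",
    accelerators=["crypto", "storage offload"],
    pcie="Gen4 endpoint",
    hardware_root_of_trust=True,
    own_os="Linux",
)
print(pensando_like.packet_pipeline)  # -> P4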

 

marees

Senior member
Apr 28, 2024
578
639
96
Created a thread in Networking forum (where it belongs IMO)

I still can't wrap my head around the fact that AMD, Intel, Nvidia & Microsoft are now competing with Cisco in networking

 