Question What is the bandwidth between chiplets in EPYC 7742?

t1gran

Member
Jun 15, 2014
34
1
71
Does anyone know the bandwidth between chiplets (via I/O die) in EPYC 7742?

And just to make it clear, there are 2 types of connections in EPYC 7742:

1) Infinity Fabric - connects the 8 chiplets (CCDs) to the I/O die
2) PCIe 4.0, 128 lanes - connects the I/O die to the motherboard (the other socket, memory, etc.)

Is that correct? Or are there maybe additional memory channels as well?
 
Reactions: misuspita

nicalandia

Diamond Member
Jan 10, 2019
3,331
5,282
136

Going by that, and the fact that the CCXs no longer talk to each other directly but go through the I/O die, I would say we can start from the first-gen Infinity Fabric die-to-die performance numbers.


Die-to-die, same package: 42.6 GB/s, and Infinity Fabric 2 roughly doubles this, to about 85-90 GB/s.
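If it helps, here's the back-of-envelope math behind that estimate as a small Python sketch. The 42.6 GB/s figure and the 2x IF1-to-IF2 scaling are just the ballpark numbers from this thread, not official AMD specs:

```python
# Rough sketch of the estimate above, using only numbers quoted in this thread.
# Assumption: Naples' die-to-die IFOP link delivers ~42.6 GB/s and IF2 in Rome
# roughly doubles the per-link bandwidth (ballpark figures, not specifications).
naples_ifop_gb_s = 42.6        # GB/s, die-to-die on the same package (Zen 1)
if2_scaling = 2.0              # assumed IF1 -> IF2 per-link improvement

rome_ccd_to_iod_gb_s = naples_ifop_gb_s * if2_scaling
print(f"Estimated CCD <-> I/O die bandwidth: {rome_ccd_to_iod_gb_s:.1f} GB/s")
# -> ~85 GB/s, in line with the 85-90 GB/s ballpark above.
```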
 

t1gran

Member
Jun 15, 2014
34
1
71
Die-to-die, same package: 42.6 GB/s, and Infinity Fabric 2 roughly doubles this, to about 85-90 GB/s.
I've read this review, among others, while searching for an answer to my question, but doesn't 85-90 GB/s seem too small for die-to-die bandwidth when the 8-channel DDR4-3200 bandwidth (between the I/O die and the motherboard's DDR4-3200 DIMMs) is 297 GB/s?

Besides, the per-lane transfer rate doubled with PCIe 4.0 (vs. PCIe 3.0), and Infinity Fabric now only supports PCIe 4.0. But it's quite a different interface in itself, connecting the 8 chiplets (CCDs) to the I/O die, isn't it?
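For reference, the theoretical DDR4-3200 numbers work out as below (simple peak math only; treating the 297 GB/s figure as a two-socket result is my guess):

```python
# Peak-bandwidth arithmetic for context (theoretical numbers, not measurements).
channels = 8                  # memory channels per EPYC 7742 socket
transfers_per_s = 3200e6      # DDR4-3200: 3200 MT/s
bytes_per_transfer = 8        # 64-bit data bus per channel

per_channel = transfers_per_s * bytes_per_transfer / 1e9   # GB/s
per_socket = per_channel * channels
print(f"Per channel: {per_channel:.1f} GB/s")   # 25.6 GB/s
print(f"Per socket:  {per_socket:.1f} GB/s")    # 204.8 GB/s
# A single socket tops out at 204.8 GB/s theoretical, so the 297 GB/s quoted
# above presumably comes from a two-socket measurement.
```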
 

t1gran

Member
Jun 15, 2014
34
1
71
Also:
1) Nvidia NVSwitch's bandwidth between two GPUs is 300 GB/s
2) Quadro GV100 memory bandwidth is up to 870 GB/s.

How can AMD's die-to-die bandwidth be so small (85-90 GB/s)?
 
Last edited:

Gideon

Golden Member
Nov 27, 2007
1,712
3,932
136
Also:
1) Nvidia NVSwitch's bandwidth between two GPUs is 300 GB/s
2) Quadro GV100 memory bandwidth is up to 870 GB/s.

How can AMD's die-to-die bandwidth be so small (85-90 GB/s)?
They don't want to blow their entire power budget on chiplet-to-chiplet communication. As it is, their Infinity Fabric links run at an 18 GT/s transfer rate, not the theoretical maximum of 25.6 GT/s used on the MI50/MI60 GPUs.
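To put that in proportion, a tiny sketch using only the two rates mentioned above:

```python
# Quick ratio from the transfer rates quoted above (nothing more than that).
rome_if_gt_s = 18.0       # GT/s, IF link rate used on EPYC Rome
max_if_gt_s = 25.6        # GT/s, theoretical maximum, as used on MI50/MI60

print(f"Rome runs its IF links at {rome_if_gt_s / max_if_gt_s:.0%} of the max rate")
# -> ~70%; the remaining headroom is left on the table to keep link power in check.
```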

Some discussion about the matter here:
 

moinmoin

Diamond Member
Jun 1, 2017
4,994
7,765
136
Interesting topic.
I've read this review, among others, while searching for an answer to my question, but doesn't 85-90 GB/s seem too small for die-to-die bandwidth when the 8-channel DDR4-3200 bandwidth (between the I/O die and the motherboard's DDR4-3200 DIMMs) is 297 GB/s?
Giving full 8-channel DDR4-3200 bandwidth to every single die would be insane.

Note that while Rome is UMA, each group of CCDs is technically still linked to specific memory controllers (though the latency difference is nowhere near as high as on Naples).

From WikiChips:
"Due to the performance sensitivity of the on-package links, the IFOP links are over-provisioned by about a factor of two relative to DDR4 channel bandwidth for mixed read/write traffic. They are bidirectional links and a CRC is transmitted along with every cycle of data. The IFOP SerDes do four transfers per CAKE clock."
(As Gideon mentions above, the over-provisioning in IFv2 moved from ~100% to ~40%.)

Also, the usable IMC bandwidth can be increased beyond your quoted 297 GB/s by enabling 4 NUMA nodes per socket, which makes each 2-channel memory controller local to its two adjacent CCDs. From the same AT article:
"AMD can reach even higher numbers with the setting "number of nodes per socket" (NPS) set to 4. With 4 nodes per socket, AMD reports up to 353 GB/s. NPS4 will cause the CCX to only access the memory controllers with the lowest latency at the central IO Hub chip."

The same text also explains why even more bandwidth than this isn't necessary:
"Those numbers only matter to a small niche of carefully AVX(-256/512) optimized HPC applications. AMD claims a 45% advantage compared to the best (28-core) Intel SKUs. We have every reason to believe them but it is only relevant to a niche."
 
Reactions: IEC and t1gran