Discussion Zen 5 Speculation (EPYC Turin and Strix Point/Granite Ridge - Ryzen 9000)

Page 777
igor_kavinski
Jul 27, 2020
19,613
13,477
146
Not sure why AVX512 should be special in this regard.
It's possible if the FP units crunch through data so fast that they waste much of their time waiting for data from RAM because the L3 cache is too small to hold enough data to keep them fed and busy. If this is true, the X3D parts will be something special in AVX-512 workloads.
 

Fjodor2001

Diamond Member
Feb 6, 2010
3,989
440
126
It's possible if the FP units crunch through data so fast that they waste much of their time waiting for data from RAM because the L3 cache is too small to hold enough data to keep them fed and busy. If this is true, the X3D parts will be something special in AVX-512 workloads.
Yes, but this assumes the amount of data per time unit that is being processed is big. Which is not the case for many MT workloads. And not the case for many AVX512 workloads either.

E.g. a typical case is video encoding, where you're processing quite a limited amount of data per time unit, but doing lots of calculations on that limited data. Then you need high MT performance, but not high memory bandwidth.

That said, I agree that for MT workloads that process more data per time unit, X3D parts will help.
 

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
26,062
15,199
136
I'm doubtful any consumer Zen 6 implementation will enjoy full speed AVX 512.
Everybody keeps denying the change AVX-512 brings to applications. First it was a PURE AVX-512 benchmark showing 98% faster than Zen 4. Now we have at least one application, actually used in its field, where it is 30-40% faster. See the benchmarks in the DC forum here. These are not imaginary gains. Scientific and other areas should see these huge gains. This being the case, I doubt it will die at Zen 5.
 

yottabit

Golden Member
Jun 5, 2008
1,482
513
146
DDR5 is roughly twice as fast as DDR4. Are the new cores on DDR5 twice as fast as the latest cores on DDR4? Do they require twice the memory bandwidth?
Yes, at least when running in AVX-512 mode - Zen5 is twice the throughput of Zen4, let alone Zen3 which only had AVX2!
Depends on what kind of workload and data you're using AVX512 for. If the amount of data being processed is limited, it'll be kept in cache, which is fast and has very high memory bandwidth. Same as for other operations, BTW. Not sure why AVX512 should be special in this regard.
It's special because it's chungus. AVX is all about loading huge amounts of data and processing it quickly. Then the more cores you add into the mix, the more likely it is that applications that weren't memory bandwidth dependent before will become memory bandwidth dependent.

Why do you think even Zen4 Epyc has 12 memory channels per socket?

Source for this claim?
My source is the sporadic lackluster benchmark results.

Actually the best source is the breakdown from the y-cruncher developer, who microbenchmarked to show 100% uplift for AVX-512 Zen5 over Zen4 when membw was not a factor. Obviously, we aren't seeing those gains in most real applications that take advantage of AVX512. http://www.numberworld.org/blogs/2024_8_7_zen5_avx512_teardown/

The next best source will be the performance of these same applications on Turin with more membw. My guess is a lot of the software that is struggling on the 9950X will perform great on Zen5 Epyc & Threadripper.

Also, not all MT workloads use pure AVX512 anyway. In fact, it's probably very few that do.

Right, but I'd argue almost all AVX512 workloads use MT. Once we see more apps compiled with AVX512, we'll see more lackluster gains from 9950x.

Because for ideal MT workloads, you can double performance by doubling number of cores.
**ideal MT workloads that aren't memory bandwidth bound, of which there will be less and less as you add more cores

I get it - there's plenty of cases where more cores is better and not much memory bandwidth is needed (encryption, encoding, CPU based photorendering, hypervisor spam)

But for each of those there's plenty of cases where it is membw bound too, so the already niche idea of "moar cores" becomes increasingly so when combined with the ridiculous high throughput AVX512 9950x has.

Anyway, my main point is it would just generally be "imbalanced". 16 P-cores is already a lot for a non-HEDT part with only 2 channels of memory. Then there is also the power budget to worry about.

Base clocks will drop, and for software that isn't perfect MT scaling but also isn't smart enough to not spawn a bunch of worker threads, performance will suffer.
 
Last edited:
Reactions: marees

Fjodor2001

Diamond Member
Feb 6, 2010
3,989
440
126
Yes, at least when running in AVX-512 mode - Zen5 is twice the throughput of Zen4, let alone Zen3 which only had AVX2!
That's a cherry-picked special case. What's the average perf increase when comparing the fastest DDR4 Zen core vs fastest DDR5 Zen core?

And how much more memory bandwidth does the latter need? Is it really twice as much? If not, it should be possible to increase core count above 16 on DDR5, since we already had 16 cores on DDR4.
It's special because it's chungus. AVX is all about loading huge amounts of data and processing it quickly. Then the more cores you add into the mix, the more likely it is that applications that weren't memory bandwidth dependent before will become memory bandwidth dependent.

Why do you think even Zen4 Epyc has 12 memory channels per socket?
If the active dataset that is being processed is limited, it'll fit in cache which has very high memory bandwidth, so it won't be a problem. As igor_kavinski mentioned, X3D parts will help too, for those workloads where the active dataset is bigger.

Right, but I'd argue almost all AVX512 workloads use MT. Once we see more apps compiled with AVX512, we'll see more lackluster gains from 9950x.
That does not mean all MT workloads are pure AVX512 though. Far from it.

And even for pure AVX512 workloads, memory bandwidth does not have to be a problem anyway if the active dataset is limited, as mentioned above.
**ideal MT workloads that aren't memory bandwidth bound, of which there will be less and less as you add more cores

I get it - there's plenty of cases where more cores is better and not much memory bandwidth is needed (encryption, encoding, CPU based photorendering, hypervisor spam)

But for each of those there's plenty of cases where it is membw bound too, so the already niche idea of "moar cores" becomes increasingly so when combined with the ridiculous high throughput AVX512 9950x has.

Anyway, my main point is it would just generally be "imbalanced". 16 P-cores is already a lot for a non-HEDT part with only 2 channels of memory. Then there is also the power budget to worry about
I don't think 16 cores is a lot for a Zen5 CPU. We've been on 16 cores since Zen2 on DT. Intel DT CPUs are already on 24 cores, while also using DDR5.

Regarding power budget I agree though. But then the better solution is to use P + E cores. E.g. 8P cores for max ST perf, then X amount of E cores for max MT perf. Where the E cores are designed for lower power consumption, and optimal perf/watt.
 

yottabit

Golden Member
Jun 5, 2008
1,482
513
146
Regarding power budget I agree though. But then the better solution is to use P + E cores. E.g. 8P cores for max ST perf, then X amount of E cores for max MT perf. Where the E cores are designed for lower power consumption, and optimal perf/watt.
I think we both agree on this one. I’m only arguing against 24-32 P-cores on dual-channel consumer desktop. IMO even 16 is too many (and I say this as an owner of a 5950X, 5900X, and 7840HS).

I think 8P + E spam is probably the best of both worlds for consumer desktop, provided the scheduler works.

That being said, before the inter-CCX latency debacle of Granite Ridge I did like the simplicity of the “homogeneous” design. But I think Intel’s approach has proven heterogeneous can be done effectively (so long as you don't let your chips fry).

I only bring up AVX512 so much because it was such a hyped feature for Zen5. I realize it’s not very widely used, at least today (thanks largely to Intel dropping it), but it represents the most extreme potential case of membw saturation.
 

Abwx

Lifer
Apr 2, 2011
11,517
4,303
136
.......................
Regarding power budget I agree though. But then the better solution is to use P + E cores. E.g. 8P cores for max ST perf, then X amount of E cores for max MT perf. Where the E cores are designed for lower power consumption, and optimal perf/watt.

That doesn't work and is less efficient, as proved by Intel's designs. Best is to put the P cores' unused resources to work thanks to SMT; that's the very reason SMT exists, as it makes better use of the existing execution resources.

For instance, Zen 5 uses 16 front ends that drive 96 ALUs to do the work within a 200W total power budget; comparatively, Intel will use 24 front ends to drive at least 112 ALUs for 250W total power, and still, if ARL were also using N4P it would be at 300W.
 
Last edited:

Nothingness

Diamond Member
Jul 3, 2013
3,031
1,971
136
If the active dataset that is being processed is limited, it'll fit in cache which has very high memory bandwidth, so it won't be a problem. As igor_kavinski mentioned, X3D parts will help too, for those workloads where the active dataset is bigger.
...
And even for pure AVX512 workloads, memory bandwidth does not have to be a problem anyway if the active dataset is limited, as mentioned above.
I partly agree with you. But for sure some computational programs such as y-cruncher and Prime95 are limited by main memory BW.

The way I consider it is simple: if you're able to process twice as much data per cycle, you need to double the BW unless you were not BW limited before. BTW people in HPC often use a mem byte/op metric.

This is where I agree with you: not all programs need that extra BW. But the use of full width AVX-512 makes you hit the memory wall twice faster.
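The bytes/op argument above can be put in numbers. A quick sketch, with illustrative figures (5 GHz clock, 0.5 bytes streamed per FLOP, 16 vs 32 FP32 FLOPs/cycle for half-width vs full-width SIMD) that are assumptions for the example, not vendor specs:

```python
# Required streaming bandwidth for a core with a given per-cycle FP
# throughput and arithmetic intensity (bytes fetched per FLOP).
def required_bw_gbs(flops_per_cycle, clock_ghz, bytes_per_flop):
    return flops_per_cycle * clock_ghz * bytes_per_flop

# Half-width SIMD core vs full-width AVX-512 core (illustrative numbers).
narrow = required_bw_gbs(16, 5.0, 0.5)   # 40.0 GB/s
wide = required_bw_gbs(32, 5.0, 0.5)     # 80.0 GB/s

# Doubling per-cycle throughput doubles the bandwidth demand,
# unless the code wasn't bandwidth-limited to begin with.
print(narrow, wide)
```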
 

MS_AT

Senior member
Jul 15, 2024
207
497
96
About why Zen5 SIMD is memory bottlenecked I will let myself cite Y-Cruncher author:
AIDA64 measures Zen5's memory bandwidth to be about 60 GB/s. To the untrained eye, this may seem like a lot. But when you break it down, it becomes clear that it is suffocatingly insufficient for Zen5's computational power.


  • 60 GB/s divided across 16 cores becomes 3.75 GB/s per core.
  • 3.75 GB/s divided by ~5 GHz CPU clock becomes 0.75 bytes/cycle.
  • 0.75 bytes/cycle divided by 512-bit load becomes 0.0117 loads/cycle.
  • 1/0.0117 loads/cycle = 85.3 cycles per load.
  • Zen5's 4 x 512-bit execution width means 4 x 85.3 = ~340 instructions/load.

In plain English:


A loop that streams data from memory must do at least 340 AVX512 instructions for every 512-bit load from memory to not bottleneck on memory bandwidth.
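The chain of divisions in the quote can be checked in a few lines. This sketch just reproduces the post's arithmetic with the figures it assumes (60 GB/s measured bandwidth, 16 cores, ~5 GHz, 64-byte loads, 4 x 512-bit pipes):

```python
# Back-of-the-envelope check of the bandwidth breakdown above.
mem_bw_gbs = 60.0        # AIDA64-measured memory bandwidth (per the post)
cores = 16
clock_ghz = 5.0
load_bytes = 64          # one 512-bit load
simd_pipes = 4           # Zen5: 4 x 512-bit execution width

bw_per_core = mem_bw_gbs / cores               # 3.75 GB/s per core
bytes_per_cycle = bw_per_core / clock_ghz      # 0.75 bytes/cycle
loads_per_cycle = bytes_per_cycle / load_bytes # ~0.0117 loads/cycle
cycles_per_load = 1 / loads_per_cycle          # ~85.3 cycles per load
insns_per_load = simd_pipes * cycles_per_load  # ~341 instructions per load

print(f"{cycles_per_load:.1f} cycles/load, ~{insns_per_load:.0f} AVX512 instructions per memory load")
```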
 

Timmah!

Golden Member
Jul 24, 2010
1,510
824
136
Right, but I'd argue almost all AVX512 workloads use MT. Once we see more apps compiled with AVX512, we'll see more lackluster gains from 9950x.
If it's AVX-512 that has to be used for CPU cores to use up the available DDR5 bandwidth, then we are cool, cause there are pretty much no notable apps on desktop that use it. And I have serious doubts we are going to see more of them compiled anytime soon, especially if Intel does not even support it on desktop anymore.

AVX-512 has been available on desktop CPUs since, what, 2017 and Skylake-X? Yet no adoption of it happened.
 

Nothingness

Diamond Member
Jul 3, 2013
3,031
1,971
136
AVX-512 has been at this point available on desktop cpus since what, 2017 and Skylake-X? Yet no adoption of it happened.
Given how little Intel supports AVX-512 in the consumer market, how could it have been different? Intel sucks at supporting their own extensions. It took them 10 years to have AVX2 on all of their CPUs.
 
Reactions: lightmanek

MS_AT

Senior member
Jul 15, 2024
207
497
96
AVX-512 has been at this point available on desktop cpus since what, 2017 and Skylake-X? Yet no adoption of it happened.
It's been available on consumer desktop since Rocket Lake, so 2021, and for one generation only, as Alder Lake and its derivatives don't have it due to the E cores. Then Zen4 appeared and maintained availability, but from a different vendor. Skylake-X provided support only on HEDT machines, and its implementation was troublesome due to AVX offsets, AVX-512 "cold start" and other issues, so if you only sprinkled a few AVX512 instructions into otherwise scalar code you would hurt performance. On laptops you also had Tiger Lake and probably Ice Lake, but Ice Lake was short-lived there. And since Rocket Lake was a real failure, not many people bought it, so to be honest we should count consumer availability as starting with Zen4.
 

MS_AT

Senior member
Jul 15, 2024
207
497
96
It's special because it's chungus. AVX is all about loading huge amounts of data and processing them quick. Then the more cores you add into the mix, the more likely it is applications that weren't memory bandwidth dependent before will become memory bandwidth dependent
Actually that's only one side of the coin. The other is saving power at the front end. If you can find a few independent operations of the same type, you can execute them all with one instruction. To give a contrived example: with a 512b register you can do 16 32b float additions. You have 2 instructions to load the operands, one to do the arithmetic operation (the addition) and one to store the result. That's 4 instructions to decode. In pure scalar code you would need to decode 64 instructions in an unrolled loop to do the same work. [This contrived example assumes the uop cache does not exist, but even if the ops were served from the uop cache, each load takes at least 4 cycles to complete, and so on.] That's why AVX512 gave a noticeable benefit to Zen4, at lower power draw than when using AVX2 or below. Also remember that for floating point arithmetic operations it does not matter whether you add only scalar values or the full width of 16 values in the case of 32b floats; the latency is the same.
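The contrived example above can be tallied directly. This sketch just counts decoded instructions for the load/load/add/store pattern, vectorized versus fully unrolled scalar:

```python
# Adding 16 pairs of FP32 values: instruction counts to decode.
REG_BITS = 512
ELEM_BITS = 32
lanes = REG_BITS // ELEM_BITS        # 16 fp32 lanes per 512-bit register

# Per the example: 2 loads + 1 add + 1 store.
PATTERN = 2 + 1 + 1

vector_insns = PATTERN               # one pass handles all 16 lanes
scalar_insns = lanes * PATTERN       # unrolled scalar: same pattern per element

print(vector_insns, scalar_insns)    # 4 vs 64 instructions for the front end
```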
 

CouncilorIrissa

Senior member
Jul 28, 2023
520
1,995
96
The problem with high core count parts on mainstream platforms is that they raise the power delivery requirements for motherboard manufacturers, because the CPU needs relatively high all-core boost clocks to make sense to begin with. Which in turn means the general public would need to pay more for motherboards to essentially subsidise this small portion of the desktop market, itself a relatively small market. Which is why I think AMD is reluctant to increase core counts on desktop: not only would such an SKU serve a relatively small niche of workloads that scale to high core counts AND don't need memory bandwidth, it would also require everyone else to pay for it.

I maintain that nT score is a cope metric that is only brought up because this gen does not provide a meaningful 1t increase in client workloads other than JS, which is admittedly pretty important, but not enough on its own for most people.
 

marees

Senior member
Apr 28, 2024
371
427
96
This I think you'll have to clarify.

A P-core consumes less power than an E-core? And you are talking about ARL DT?
I am not referring to watts (energy / second) but total calories (total energy for the task)

I feel like a loser when my task takes more time on an e-core compared to p-core

(I understand this might change for upcoming e-core)
 
Reactions: igor_kavinski

LightningZ71

Golden Member
Mar 10, 2017
1,783
2,137
136
About why Zen5 SIMD is memory bottlenecked I will let myself cite Y-Cruncher author:
The y-cruncher example is essentially a worst case scenario. It's also next to impossible to achieve. In reality, MOST, but not all, AVX-512 workloads that are not purely synthetic will not be constantly streaming the maximum amount of data continuously. They will digest chunks, manipulate it, test the results, then store the results of the manipulation or the findings of the test, then either wait on the non AVX-512 portion of the code to do things, or move on to the next chunk of data.

32MB of L3 for 8 cores is plenty for most tasks, and represents as much or more L3 per core than any Intel AVX-512 enabled product ever produced. The X3D parts will have 3x that amount. Yes, main memory bandwidth is limiting in synthetic or academic scenarios, but it isn't the end of the story.
 

vanplayer

Junior Member
May 9, 2024
23
49
51
There would be a core latency patch by the end of August, likely released with new chipset X870/B860. Typical AMD that release software after hardware launch. LOL.
I guess the chipset with the corrected latency microcode couldn't launch at the same time as the CPU due to failed management. If there's any special reason, I think AI is influencing these companies' decisions: it's the first time AMD has launched mobile APUs before desktop, and the Zen5 APU's design looks like an all-in-on-AI approach.
 
Reactions: marees

Hotrod2go

Senior member
Nov 17, 2021
349
233
86
I am not a project management expert, but if the information that ZEN 5 has been developed by several completely different teams and faced reworks and delays is correct, it is a miracle that it even powers on.

The inter-CCD latency problem may be somehow strongly related to some hardware flaw or peculiarity, which will make fixing it impossible or very difficult.

I wonder how many engineers will need to work on this "flopper" even after its release to make it less bad. It may have flopped many times already during the development.

As an owner of a 5700G CPU I wonder why these CPUs are not much more popular in PCs; how are the 8x00G CPUs selling now?
Ordered an 8600G yesterday. I await its arrival with much delight for my use case in one of my rigs.
 

JustViewing

Senior member
Aug 17, 2022
216
382
106
It's possible if the FP units crunch through data so fast that they waste much of their time waiting for data from RAM because the L3 cache is too small to hold enough data to keep them fed and busy. If this is true, the X3D parts will be something special in AVX-512 workloads.
Most likely because L3 bandwidth hasn't increased compared to Zen4. In Zen6 they may increase it to 64B/cycle.
 
Reactions: marees