OK, that's a valid point. But couldn't AMD support the higher-core-count (supposedly 24C/48T or 28C/56T) CPUs on just the X670E/X870E mobos? Those are expensive to begin with, so they should already have the necessary power delivery components in place. Not everyone is paying $400 for a mobo, but those who do should get something in return for their dollars, such as support for higher core counts.
> Something went terribly wrong? That's a rather selective view on the matter.

This is all, of course, my opinion; maybe "terribly" is too harsh. However, if you look at the TDP-limited non-X Zen 4 parts (e.g. the 7700 or 7900), I don't think Zen 5 is that impressive: ~10% better performance (maybe a bit more in applications, less in gaming) with ~10% better perf/watt. When everybody posting here says this was rushed and is just waiting for a magical driver fix, you know this is a (relatively) disappointing product. You could make the case that this is a server-oriented chip and it's amazing for that, but I'm not 100% convinced.
> They are launch prices. Ryzen 7000 supply still seems to be in the channel; let's see for how long.

I agree that launch prices are supposedly better than Ryzen 7000's, and you could argue they're even better when considering inflation; but as things stand, Ryzen 9000 will (IMO) have a difficult time gaining adoption. In gaming, it flat-out gets beaten by the 7800X3D. The obvious question is what happens first: will Ryzen 7000 fade out of the market, or will Ryzen 9000 get a price cut? I think the latter will happen sooner, which is why it would have been better to just launch at a lower price.
> Research and Development often needs to try out moonshot architectures and ideas to see if they can at least be salvaged for future use. The constant struggle (in life, too) is finding the right balance: not too hot and not too cold. That's the real challenge. In the case of Bulldozer, they shot too far in one direction, and a wrong one.

I will say that Zen 1 and Ryzen were great, and I even owned a Ryzen 1700 PC. However, calling Bulldozer "just a lesson" is a rose-tinted "winner's" look at history. AMD was (IIRC) close to bankruptcy after Bulldozer. Zen 1 was great, especially for productivity (although still slower than Kaby Lake in gaming and general ST), but it didn't flat-out beat Intel everywhere.
If AMD benefitted from it and it resulted in Ryzen, then in the long term it was merely a lesson.
> I will say that Zen 1 and Ryzen were great, and I even owned a Ryzen 1700 PC. However, calling Bulldozer "just a lesson" is a rose-tinted "winner's" look at history. AMD was (IIRC) close to bankruptcy after Bulldozer. Zen 1 was great, especially for productivity (although still slower than Kaby Lake in gaming and general ST), but it didn't flat-out beat Intel everywhere.

It's a very fair point.
> I'm curious how the average IPC gain for Zen 5 with SMT and AVX-512 disabled compares to Zen 4 with SMT, in the context of Lion Cove (without AVX-512 and SMT) gaining an average of +14% over Redwood Cove with SMT. Would anyone undertake such a test?

Since I use AVX-512 and it helps many things, as does SMT, why would anyone want to? That's like saying "let's remove two tires from this car and test-drive it".
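For anyone attempting the comparison asked about above, the gen-on-gen number needs to be IPC rather than a raw score, which means pairing retired-instruction and cycle counts from hardware counters. A minimal sketch of the arithmetic, assuming Linux `perf stat`-style output (the counter values below are invented for illustration, not real Zen 4/Zen 5 data):

```python
import re

# Made-up `perf stat`-style output for one run; real counts would come from
# something like `perf stat -e instructions,cycles -- <benchmark>`.
SAMPLE = """
 12,345,678,901      instructions
  4,567,890,123      cycles
"""

def ipc(perf_output: str) -> float:
    """Extract instruction and cycle counts and return instructions-per-cycle."""
    counts = {}
    for line in perf_output.splitlines():
        m = re.match(r"\s*([\d,]+)\s+(instructions|cycles)\b", line)
        if m:
            counts[m.group(2)] = int(m.group(1).replace(",", ""))
    return counts["instructions"] / counts["cycles"]

print(f"IPC: {ipc(SAMPLE):.2f}")
```

Run the same workload pinned to one core on each CPU, with SMT toggled off however your platform allows, and the ratio of the two IPC figures is the gain being asked about.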
> Since I use AVX-512 and it helps many things, as does SMT, why would anyone want to? That's like saying "let's remove two tires from this car and test-drive it".

I wouldn't say that's a good analogy. You won't get very far if you take the tires off your car.
> 17% for ST.

Are you sure SMT and AVX-512 don't work in the single-core tests?
Ryzen 9 9950X & 9900X im Test: Benchmarks in neuen Anwendungstests / Intel-Instabilität und AGESA-Updates sind die (neue) Tagesordnung (www.computerbase.de)
> Are you sure SMT and AVX-512 don't work in the single-core tests?

No SMT in the ST tests; it's explicitly mentioned, otherwise that would be an MT test restricted to two threads. Just look at the scores: that's one-thread perf. AVX-512 is used only in GB, and still with very little consequence; if you remove GB from this chart it's 18%.
In SPECint2017, the average IPC increase is approximately +10%.
In SPECfp2017, the average IPC increase is approximately +22%.
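For context on how such averages are formed: SPEC-style summary figures are geometric means of per-subtest ratios, not arithmetic means, so one outlier subtest (say, a single AVX-512-heavy test) moves the average less than a simple mean would suggest. A quick sketch with made-up per-subtest gains:

```python
from math import prod

def geomean(ratios):
    """Geometric mean, the way SPEC-style summaries combine subtest ratios."""
    return prod(ratios) ** (1.0 / len(ratios))

# Hypothetical per-subtest gen-on-gen speedups (illustrative numbers only)
gains = [1.05, 1.02, 1.31, 1.08, 1.04, 1.11]
print(f"arithmetic mean gain: {sum(gains) / len(gains) - 1:+.1%}")
print(f"geometric mean gain:  {geomean(gains) - 1:+.1%}")
```

The geometric mean always lands at or below the arithmetic one, which is worth remembering when comparing a quoted SPEC average against a single benchmark's delta.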
> Also, the improvement in CB R20/R23 and CB 2024 is only 11-12% and 15% respectively, despite 23.8% in SPECfp; so either SPEC is not always representative, or Cinebench is flawed in Intel's favour, and the second option is about 100% sure looking at those numbers.

One should always favour SPEC over Cinebench. SPEC is free from influence and it's THE industry standard for CPUs.
> Problem is, influencers gave "celebrity status" to the core uarch guys, as if the rest don't make a difference. Mike Clark, the uarch chief architect and leader of the core roadmap, will absolutely say the Zen 5 core is great, or that in simulation it is great, etc. Nobody bothered to ask Sam Naffziger, the fabric and chiplet lead, what is up with Infinity Fabric or the chiplet tech in Zen 5. Nobody sought out the SoC guys, or the product guys, to ask why their chip performed the way it did. Mahesh Subramony was a lead SoC guy for Strix, but nobody asked him anything; everybody asked only Mike Clark. The product guys assemble all the IPs together to make the final purchasable product, so they are definitely responsible for the final performance of the product, not just the uarch folks.

(I'd rather not respond with a market response in the arch thread.)
...A point I didn't make there, because that thread is meant to stay as factual as I can keep it, is that Zen 5 may actually make more sense than I originally thought.

If I'm being direct about it, gamers are becoming an echo chamber of pointless complaints. I run Zen 3, and I am willing to bet anyone that if I switched to Zen 4 or Zen 5, none of my games would run significantly better. They'd all go from running fine to running finer. Gamers willing to dish out $300, let alone $650, to move from Zen 3 to Zen 5, or from Alder Lake to RPL/ARL, are actually a small minority. Outside of the Eternally Displeased Nerd Empire, the number of people who will actually change their CPU for a "better framerate" is about the same as the number of rats in a cat lady's house.

I personally know ONE guy, a fairly rabid gamer, who will change his Zen 2 CPU to Zen 5. I don't have tons of friends, but I think it's pretty telling...
From the interview with Mike Clark on this very site:

> George Cozma: You know, for a single thread of it, let's say you're running a workload that only uses one thread on a given core. Can a single thread take advantage of all of the front-end resources, and can it take advantage of both decode clusters and the entirety of the dual-ported op cache?
>
> Mike Clark: The answer is yes,

So, indeed, a single thread can use both decoders (in the right circumstances, i.e. around branches).
So at this point, I feel like if the two decode clusters do get used for a single thread, it happens rarely enough that it’s not worth mentioning.
You make it sound like it's Game of Thrones and everyone's sharpening their CPU's corners into shurikens so they can stab each other. Chill.
> As I put it in the Zen 5 Info thread, it's difficult to claim that Zen 5 is really a disappointment; it depends on where you look.

Just a note: it is in the architecture thread because, as an engineer, I am looking for answers, that is all. There is no disappointment or elation in that thread. I am not really interested in whether it is good for gaming or market share, etc.; I could not care less about those.
> Like the post mentioned above, I am not impressed with the answers.

Understandable.
> I have 2K+ Zen 4 cores running Linux, and I sure as hell don't care about Windows or gaming.

What do you have all that compute for?
> So at this point, I feel like if the two decode clusters do get used for a single thread, it happens rarely enough that it's not worth mentioning.

Why does this remind me of that slide from the RDNA3 launch, and of how software would "fix" it over time...
I must say I am not impressed with Mike Clark, between his lousy answer on the decoders and then later telling Dr. Cutress how software will catch up and how the Zen 6/7 guys will get credit for the work Zen 5 did.
> Also, the improvement in CB R20/R23 and CB 2024 is only 11-12% and 15% respectively, despite 23.8% in SPECfp; so either SPEC is not always representative, or Cinebench is flawed in Intel's favour, and the second option is about 100% sure looking at those numbers.

And care to explain how exactly it's flawed in Intel's favour? Here is the Cinebench 2024 review by C&C: https://chipsandcheese.com/2023/10/22/cinebench-2024-reviewing-the-benchmark/ - it contains profiling data; while not exactly about Zen 5 or Raptor Lake, it should help you along the way.
> Also, the improvement in CB R20/R23 and CB 2024 is only 11-12% and 15% respectively, despite 23.8% in SPECfp; so either SPEC is not always representative, or Cinebench is flawed in Intel's favour, and the second option is about 100% sure looking at those numbers, and at Maxon officially endorsing Intel since CB 11.5.

The difference might simply be due to CB not using enough AVX-512, while a correctly compiled SPEC FP will.
> Like the post mentioned above, I am not impressed with the answers.

As with other CPU/GPU products that fall short of expectations, there is only a single statement you want to hear: "Our internal goal was X% on the trace/test set Y, and we achieved only Z%, mainly due to 'foo'."
> I mean by selecting a convenient scene: all renders using the same software won't yield the same result ratios between two CPUs, since that depends on the exact distribution of arithmetic ops used; and that's independent of the fact that R15, R20 and R23 all use Embree, which is Intel's in-house renderer.

You mean this Embree?
> That would be an explanation, but then why such a gap between R15 and R23? FTR, R15 uses up to SSE 4.2 and R20/23 are no different, so where is this big difference coming from? Notice that they changed the scene from R15 to R20/23; why didn't they simply use the same scene? Another scene won't use exactly the same distribution of arithmetic ops, and we can see when running the bench that not all tiles render in the same time, which means some tiles require heavier computation of some sort.

Since Cinebench is meant to test how your CPU will perform with Maxon software, I guess they add features over time, and it would be pointless for them to keep a scene that no longer reflects what the actual product behind the benchmark does. Still, R15 is available; you can use it for CPU comparisons if you'd like.
> As with other CPU/GPU products that fall short of expectations, there is only a single statement you want to hear: "Our internal goal was X% on the trace/test set Y, and we achieved only Z%, mainly due to 'foo'."

The topic is really simple. It is like asking an engineer why flip-flops instead of SRAM. It is not about debating whether SRAM is better or worse.
> You mean this Embree?

CB 2024 no longer uses Embree; it uses Maxon's own renderer, so it's not comparable to R23. One more time: how did Intel gain more than 10% in ST from R15 to R23? The fact that we're talking about ST rules out the cache explanation, and the X3D is no better than the regular chip; moreover, this gain is uniform against every AMD CPU comparison up to Zen 4.
[Attachment: Embree benchmark results chart - source: https://www.phoronix.com/review/amd-ryzen-7900x-7950x-linux/8]
Disclaimer for those unwilling to read the C&C piece: CB 2024 makes essentially no use of SIMD; most instructions execute scalar ops. The Embree score above, by contrast, most likely does.
Once again, if you claim with 100% certainty that the benchmark is flawed towards Intel, it would be nice if you could back this up with some relevant metrics. It might be that Intel is helped by larger private caches, since R24 is more memory-bound, and since the binary is compiled for the lowest common denominator, wider execution units don't exactly get to shine. But this doesn't mean the benchmark is malicious.
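The scalar-vs-SIMD question is checkable with FP-op counters rather than argued from scores. A toy calculation of the relevant metric (the counter values are invented, and the actual event names differ per CPU generation; check `perf list` on your machine):

```python
# Illustration of gauging SIMD usage from retired-FP-op counters instead of
# benchmark scores. Both counts below are made up for the example.
scalar_fp_ops = 9_200_000_000  # retired scalar FP ops (hypothetical count)
packed_fp_ops =   400_000_000  # retired packed/vector FP ops (hypothetical count)

simd_share = packed_fp_ops / (packed_fp_ops + scalar_fp_ops)
print(f"SIMD share of retired FP ops: {simd_share:.1%}")
```

A low share like this would support the C&C finding that CB 2024 is mostly scalar; a hand-vectorized renderer such as Embree would show the opposite split.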