> And that code has infinite ILP that can always fill the full set of functional units.

Precisely.
> The problem is that such considerations make the assumption that coding and compiler output is optimal, which we all know it very often is anything but.

We know AMD has been investing heavily in compilers for some time. To be fully blunt, this testing is yet another gimped ES (double FP latency, and no NOP fusion, which should be worth keeping), and Strix is already gimped with half-width SIMD and a cut-down L3.
Yeah, the Bulldozer comparisons are stupid because Bulldozer was drastically behind Intel on single-thread perf. That's not the case here - not even close.
If they actually managed a huge jump in MT from replicated frontends, and also a small ST bump in the same gen, without blowing out area, that's interesting.
> We know AMD has been investing heavily in compilers for some time. To be fully blunt, this testing is yet another gimped ES (double FP latency, and no NOP fusion, which should be worth keeping), and Strix is already gimped with half-width SIMD and a cut-down L3.

AMD would be screaming from the rooftops if they had 90% of Firestorm's IPC and clocks of 5+ GHz. They aren't.
Final Si and software is what matters, and regardless of how Z5 is balanced, the fact is that it really is a big departure from the past.
The GPD data is promising for a higher IPC uplift than observed here, and it also points to very strong nT perf via similar or higher SMT yield.
I'm committed to the 3.65 GHz Strix GB6 run being legit; that shows <10% less IPC than Firestorm, which is where I'm hoping things end up.
I really don't care if I'm wrong anymore, AMD has already made fools of everyone.
> AMD would be screaming from the rooftops if they had 90% of Firestorm's IPC and clocks of 5+ GHz. They aren't.

Are they on sale yet?
But that was a dismissal in the "April-launched, $1k, 32% IPC Zen 5" era.
Anyway, introducing heavy reliance on SMT is pretty much in contrast to that ARM vendor's "SMT is bad" claim.
> AMD would be screaming from the rooftops if they had 90% of Firestorm's IPC and clocks of 5+ GHz. They aren't.

Don’t forget the low pricing and bringing in X3D much earlier.
> Don’t forget the low pricing

If the pricing leaks are true, that could be for reasons irrespective of relative performance.
> and bringing in X3D much earlier.

It comes when it comes; nobody knows the originally intended release, or whether the ultimate release is early or not.
> Are they on sale yet?

No, but we hear rumours of aggressive pricing. Meaning the thing is mid. Other than the iGPU, maybe.
Have they released pricing yet?
> Have they done a proper breakdown of IPC gains and the uArch as a whole yet?

They showed something, and the benchmark selection was dodgy, to say the least.
> The answer is no, and why would they show all of their cards already when they were patiently waiting for X Elite to go public for a proper comparison?

Why wouldn't they? They could've finished the SDXE off before it was released. Qualcomm teased their 3.2k Geekbench run on a mythical devkit chip a while ago. AMD showing up with 3400-3500 would've instantly rendered the competition DOA.
> CES 2025 is the assumption, not the fact.

The assumption is September, ~2 months after vanilla Zen 5 and before ARL-S.
> AMD would be screaming from the rooftops if they had 90% of Firestorm's IPC and clocks of 5+ GHz. They aren't.

AFAIK Intel and AMD don't compare themselves to Apple in PR.
> The assumption is September, ~2 months after vanilla Zen 5 and before ARL-S.

Now it is; both it and the former assumption are backed up by thin air.
Engineering samples are always within margin of error of final performance, or so people like to believe.

[View attachment 101629]
Wait, how do I get L1 levels of ifetch bandwidth out of the L3 region?
This is nonsense.
> Engineering samples are always within margin of error of final performance, or so people like to believe.

The result is nonsense: it sustains L1 levels of ifetch bandwidth out of the L3 region, which is physically impossible since L3 bandwidth was literally untouched.
> The result is nonsense: it sustains L1 levels of ifetch bandwidth out of the L3 region, which is physically impossible since L3 bandwidth was literally untouched.

That calls into question the entire benchmark suite.
> That calls into question the entire benchmark suite.

Well, there’s only a month left. Odds are this is going to be among the most thorough looks at Zen 5 outside of, probably, AnandTech and ChipsAndCheese.
> I don't see what the problem is with the graphs.

You're not getting L1 bandwidth out of the L3 region under any circumstances.
> You're not getting L1 bandwidth out of the L3 region under any circumstances.

What is referred to as "Bandwidth" in the graph is just the rate of instruction fetch per cycle.
That's the point.
> What is referred to as "Bandwidth" in the graph is just the rate of instruction fetch per cycle.

That's how i$ bandwidth is measured, lmao.
> That's how i$ bandwidth is measured, lmao.

And it falls off a cliff at 16384 KB in all cases; your point?
> And it falls off a cliff at 16384 KB in all cases; your point?

Use glasses, idk.
> I still don't see how any IPC increase is Bulldozer 2.

It has per-thread decoder clusters (a better approach, though), so it can't be Bulldozer 2 or Piledriver 2; it's clearly Steamroller 2. Or Excavator 2 in Strix (cut-down cache, compacted SIMD unit).
> It has per-thread decoder clusters (a better approach, though), so it can't be Bulldozer 2 or Piledriver 2; it's clearly Steamroller 2. Or Excavator 2 in Strix (cut-down cache, compacted SIMD unit).

When the Ryzen details were announced, with all the upgraded front-end resources explaining the 40% IPC increase, I wondered at the time how much better the Bulldozer family would have performed if it had gotten the same upgrades.