> My predictions were far more accurate than Kepler/Adroc's 40% crew.

Actually, according to your line in the "Granite Ridge speculation" spreadsheet, those troublemakers are still somehow about as close as you. Kepler_L2 overstated Zen 5 Granite Ridge performance by 1.21x.

| Name | IPC | FMax (GHz) | Comment |
| --- | --- | --- | --- |
| Jayzen | 5% | 5.1 | The hype train has derailed |

And AMD's 1.16x estimate exceeds your Zen 5 Granite Ridge estimate by 1.23x. It's possible you may be the only negative person who is as far off as Kepler_L2 et al.
> Actually, according to your line in the "Granite Ridge speculation" spreadsheet, those troublemakers are still somehow about as close as you.
> Kepler_L2 overstated Zen 5 Granite Ridge performance by 1.21x.
> And AMD's 1.16x estimate exceeds your Zen 5 Granite Ridge estimate by 1.23x.
> It's possible you may be the only negative person who is as far off as Kepler_L2 et al.

8 days till we get the real numbers
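As an aside, the 1.21x and 1.23x figures quoted above can be reproduced with simple ratio arithmetic. A minimal sketch, assuming the "40% crew" refers to a claimed 1.40x uplift and that "performance" combines IPC with clocks against a 5.7 GHz Zen 4 baseline (both are assumptions, not stated outright in the thread):

```python
# Sketch of how the thread's overstatement ratios can be derived.
# Assumptions (not stated in the thread): the "40% crew" claimed a 1.40x
# gain, and "performance" = IPC uplift x clock ratio vs a 5.7 GHz Zen 4 FMax.

def overstatement(claimed: float, actual: float) -> float:
    """How many times larger one gain estimate is than another."""
    return claimed / actual

amd_official = 1.16   # AMD's claimed Zen 5 IPC uplift
kepler_claim = 1.40   # the "40% crew" prediction
print(round(overstatement(kepler_claim, amd_official), 2))    # -> 1.21

# Jayzen's line: +5% IPC at 5.1 GHz, vs the assumed 5.7 GHz baseline
jayzen_estimate = 1.05 * (5.1 / 5.7)
print(round(overstatement(amd_official, jayzen_estimate), 2))  # -> 1.23
```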
> It's possible you may be the only negative person who is as far off as Kepler_L2 et al.

See, being negative and cynical automatically means you're smarter than everybody else.
> And AMD's 1.16x estimate exceeds your Zen 5 Granite Ridge estimate by 1.23x.

It's 5-10% in most benchmarks, besides AMD's cherry-picked tests. And 5.1 GHz is far closer to what these chips will actually run at in real-world conditions than people's delirious 6 GHz fantasies.
> geekbench. Need I say more.

Geekbench is good if you know what to look for and how to analyse it. Remember, AMD used 2 sub-tests from GB to calculate their IPC for Zen 5.
> We are in the era of diminishing gains.
> In 2018, when ARM unveiled the Cortex-A76 with a colossal >50% IPC gain, they prophesied that going forward there would be only smaller gains. It has proved true: we haven't seen such a large IPC gain from them since.

Had. Zen 1 and Golden Cove.
> Had. Zen 1 and Golden Cove.

Zen 1 was the first Zen architecture, and that was 2017. Golden Cove was 19%, not 50%. So I guess no one has reached that level of IPC again and won't for a long time.
> Zen 1 was the first Zen architecture, and that was 2017. Golden Cove was 19%, not 50%. So I guess no one has reached that level of IPC again and won't for a long time.

Golden Cove came less than a year after Cypress Cove.
> 8 days till we get the real numbers

9600X and 9700X reviews are on Wednesday, actually.
> 9600X and 9700X reviews are on Wednesday, actually.

IEC said the 14th ????
> IEC said the 14th ????

9600X and 9700X embargo is 8/7, with release on 8/8. 9900X and 9950X embargo is 8/14, with release on 8/15.

> 9600X and 9700X embargo is 8/7, with release on 8/8.
> 9900X and 9950X embargo is 8/14, with release on 8/15.

That's what I thought, but IEC said the embargo on reviews was the 14th.
> That's what I thought, but IEC said the embargo on reviews was the 14th.

IEC said:
> That's what I thought, but IEC said the embargo on reviews was the 14th.

He said the 2-chiplet embargo lifts on the 14th. I think he meant the 9900X and 9950X, which are technically 3 chiplets, but only 2 compute chiplets. It wouldn't make any sense for a product embargo to lift several days after the product is on the shelves.
> Who is IEC?

A user on this forum; go back a page.
> Man I hate Geekbench...

You are not the only one. It has ports on all major desktop and mobile operating systems, though, which makes it popular for cross-platform comparisons. So we're stuck with it, for now.
> Yes, there is some variability in SPEC. Obviously the main source is the compiler and the optimization options, which can change the score dramatically. Other factors that create variability are OS state (official scores are often run on recently booted systems), memory-allocation libraries, huge-TLB tweaks in the OS, etc.

How are compiler and optimization options related to 'run-to-run variance'? A few weeks ago I posted several runs of GB done one after another with wildly different ST scores (a 200-point difference between the highest and lowest). This alone shows that the GB test runtime is too short for the CPU to boost properly to the max (and it's not a CPU problem; turbo-reactive boosting is detrimental to performance in most real-world(tm) scenarios).
> This alone shows that the GB test runtime is too short for the CPU to boost properly to the max.

C&C measured that a Zen 4 core can go from idle to max boost in 11 ms. I'm not that familiar with Geekbench, but I assume any workload lasts at least 1 s, so the time to reach boost clock can be ignored, as 99% of the test duration should run at the maximum possible boost.
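The proportion argued here is easy to check. A quick sketch using C&C's 11 ms figure and the 1 s sub-test duration assumed above:

```python
# If the core reaches max boost in ~11 ms (C&C's Zen 4 measurement) and a
# sub-test runs for ~1 s (assumed), the ramp covers only ~1% of the run,
# so the ramp alone cannot account for a large score swing.
ramp_ms = 11      # idle-to-max-boost time
test_ms = 1000    # assumed sub-test duration
print(f"ramp fraction: {ramp_ms / test_ms:.1%}")  # -> ramp fraction: 1.1%
```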
> How are compiler and optimization options related to 'run-to-run variance'?

I was answering your question about SPEC.
> You could try to set a fixed clock value

On an X3D CPU? Good joke. (Also, if you read a few posts back, that's precisely what I suggested for people trying to compare different CPUs and architectures, instead of playing haruspex with the .gb6 clocks.) In any case, I'm not talking about how a user can 'fix' the benchmark; I'm saying that the way Primate Labs does it now can lead to huge run-to-run variance in ST scores.
> I was answering your question about SPEC.

As I said, almost everything you've described is not related to run-to-run variance. What I meant is: is there a discernible variance in SPEC results if you just run the suite (same binary, without recompiling, rebooting, or changing anything, basically) several times back to back?
> Is there a discernible variance in SPEC results if you just run the suite (same binary, without recompiling, rebooting, or changing anything, basically) several times back to back?

If you don't understand how the scheduler and power settings affect back-to-back runs, you should read about it.
> If you don't understand how the scheduler and power settings affect back-to-back runs, you should read about it.

Oh, I understand it clearly, and I even have a custom profile for such situations. It's just that if power settings and the scheduler affect your ST performance benchmark this much, then clearly the benchmark is very flawed; there is no recourse to that. Also, I'm very surprised that I still don't have a clear answer to a purportedly simple question ('fiddle with power settings / scheduling' is clearly not an answer to 'what run-to-run variance is there with the SPECint/fp suite?').
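For what it's worth, the quantity being argued over here, how much back-to-back runs of the same binary actually vary, is easy to compute once you have the scores. A minimal sketch; the scores below are made up to match the ~200-point ST spread described earlier, not real Geekbench or SPEC results:

```python
from statistics import mean, stdev

def run_to_run_cov(scores: list[float]) -> float:
    """Coefficient of variation (stdev / mean) across repeated runs."""
    return stdev(scores) / mean(scores)

# Illustrative back-to-back ST scores with a ~200-point spread
# (invented numbers, shaped to match the spread described in the thread)
scores = [2900, 2980, 3050, 3100, 2950]
spread = max(scores) - min(scores)
print(f"spread = {spread} pts, CoV = {run_to_run_cov(scores):.2%}")
```

A CoV of a couple of percent across identical back-to-back runs is exactly the kind of variance the question above asks about; back-to-back SPEC scores can be fed through the same function for a direct comparison.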