With the release of Alder Lake less than a week away and the "Lakes" thread having turned into a nightmare to navigate, I thought it might be a good time to start a discussion thread solely for Alder Lake.
Never say never. But I grok. I play old games on my AMD APUs, to let my RTX and GTX cards rest. Not sure when value GPUs will return, so have to preserve the ones I have.
What facts have you used to come up with this conclusion?
Given we now KNOW Gracemont is Skylake level IPC w/o HT, how is 8 Skylake cores w/o HT at 48 watts not good? This is not speculating, this is not yesterday. Now these are facts. 8 Skylake Cores @3.7GHz without HT using 48W on the desktop is great.
Given we now KNOW Gracemont is Skylake level IPC w/o HT, how is 8 Skylake cores w/o HT at 48 watts not good? This is not speculating, this is not yesterday. Now these are facts. 8 Skylake Cores @3.7GHz without HT using 48W on the desktop is great. Do the math: 24 of them would use ~5950X power and be as or more performant in highly threaded apps that scale well with cores. The 5950X is still the performance/efficiency champ, and the E-cores alone could beat it in specific cases. Or am I missing something? I'm always willing to learn.
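To sanity-check that "do the math" claim, here's a minimal back-of-envelope sketch in Python (my own, not from the thread): the 48 W for 8 E-cores figure comes from the post above, while the ~140 W 5950X package figure and the assumption that E-core power scales roughly linearly with core count are hedged guesses, not measurements.

```python
# Back-of-envelope: would 24 Gracemont E-cores land near 5950X package power?
# Assumptions (not measurements): roughly linear power scaling with core count,
# and ~140 W as a typical 5950X all-core package power under heavy load.
ecore_cluster_watts = 48          # 8 E-cores @ 3.7 GHz, per the figure above
watts_per_ecore = ecore_cluster_watts / 8

hypothetical_ecores = 24
estimated_watts = hypothetical_ecores * watts_per_ecore

r5950x_package_watts = 140        # assumed ballpark; varies with workload/PBO

print(f"~{watts_per_ecore:.0f} W per E-core")
print(f"{hypothetical_ecores} E-cores ≈ {estimated_watts:.0f} W vs ~{r5950x_package_watts} W for a 5950X")
```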
Jay2c was amazed at how low the temps are, so there is obviously a bug there somewhere, and we will have to wait to see which one turns out to be closer to the truth. It's nothing new; how many years did AMD users think there was something wrong with their CPUs because software would show them the distance from max instead of the temps in degrees?
What is up with this chip running like a furnace? In Linus' video they used an NH-D15 and the thing was hitting 100C under stress tests...
Indeed. I think the 12900K is of very narrow appeal: bursty type loads like someone running Photoshop for work, weekly Premiere renders, etc. For heavy sustained compute, 5950X or TR builds just make more sense. For (very high end) gaming, the 12600/700 make more sense; for budget gaming, sticking with older Zen 2/Coffee Lake is advisable IMHO.
If you are on Zen 3, it's probably a wait and see what comes next scenario.
But if you are on any Intel-based system or an earlier Zen platform and want to throw a bunch of money at it (maybe you've already got a big GPU), then it is interesting.
The 12900k is not power efficient when you turn off the e-cores.
Two observations: 8+0 consumes more power than 8+8 (by 6 W), while also incurring a 38% deficit in CB R23.
I see now my phone decided not to send this image as context to what I'm talking about:
Are you sure about that?
Looks to me like at 3.7GHz they're exactly the same perf/W as Zen 3, except that's without taking into account any extra performance you can extract with SMT. ~5W per core for Gracemonts at 3.7GHz vs 6.1W per core average for Zen 3 at 3.775GHz.
For a small core, that's surprisingly low perf/W. I want to see what the power/perf scaling is like on Gracemont, but it seems pretty sub-par at 3.7GHz.
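To put the per-core perf/W comparison above into something concrete, here's a small Python sketch (mine, not from the reviews): the ~5 W and ~6.1 W per-core figures are the ones quoted in this post, while the scores are placeholders chosen so that Zen 3 sits roughly 20% ahead per core at iso-clock, consistent with the "Gracemont ≈ Skylake IPC" claim earlier in the thread; they are not real benchmark results.

```python
# Rough per-core perf/W comparison at ~3.7 GHz. Power figures are from the post;
# the scores are placeholders (Zen 3 assumed ~20% faster per core at iso-clock,
# per the Skylake-class-IPC claim), NOT measured results.
gracemont_watts_per_core = 5.0    # ~5 W per E-core @ 3.7 GHz
zen3_watts_per_core = 6.1         # per-core average @ 3.775 GHz

gracemont_score = 1000            # hypothetical per-core score
zen3_score = 1220                 # hypothetical, ~20% higher per-core performance

def perf_per_watt(score, watts):
    return score / watts

print(f"Gracemont: {perf_per_watt(gracemont_score, gracemont_watts_per_core):.0f} pts/W")
print(f"Zen 3:     {perf_per_watt(zen3_score, zen3_watts_per_core):.0f} pts/W")
# SMT uplift on Zen 3 (not modeled here) would tilt the comparison further its way.
```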
That's their biggest problem, they were probably hoping for a healthy 3 quarters of leadership in gaming, and that won't work due to the 3D cache stopgap solution AMD has thanks to TSMC.
Zen3D is just around the corner.
That's exactly the case. Personally I'll literally be choosing the most comfortable solution between the 12700K and 5900X, and the decision will come down to motherboard availability. I'm already committed to DDR4 and partially committed to mITX. Cooler compatibility is another issue, but Noctua has me covered on this front.
So for me it seems pretty close to a tie between the two platforms, and depending on your usage you might want to go with one over the other.
Spot on. I've said this before and I'll repeat it here. What AMD did by bringing the 5950X down to mainstream desktop, charging $800 for it while starving it of memory bandwidth (it's clear from ADL DDR4 tests that this many cores crave bandwidth in certain scenarios), was a move purely motivated by a petty claim to total desktop dominance, both at HEDT and mainstream. The consequence, however, is that AMD is going to be caught up in a core war it doesn't really want or need. What a lot of people are not concentrating on is that the 12600K is performing at 5800X levels at 5600X pricing. That performance (33%?) is a legit generational leap. At the halo level the impact is a bit muted by the 12900K's high power consumption, but the same can't be said of the budget level, where the performance gap is glaring. Intel has raised the performance/price bar and AMD has to respond, at a time when demand for chips is at its highest in decades.
What Intel really did is break AMD's MT capacity/capability reserve monopoly in this casual desktop segment, and I fully expect them to proceed to thoroughly destroying AMD in this (cap/cap) niche with those 8+32 Atom monsters in the future. AMD cannot afford to throw so many proper cores at casual desktop while keeping them fed with power and mem/interconnect bandwidth in the same socket as proper casual desktop CPUs and APUs.
Competition is shifting to peak ST performance, SOC level engineering with larger and faster caches, speedier interconnects.
Looks to me like at 3.7GHz they're exactly the same perf/W as Zen 3, except that's without taking into account any extra performance you can extract with SMT. ~5W per core for Gracemonts at 3.7GHz vs 6.1W per core average for Zen 3 at 3.775GHz.
Is there any source on the web that feeds proper voltage for 3.7GHz, and not those "1 P core ran at 5.3GHz @ 1.35V and 8 small cores consumed 666W due to being fed the same, while only requiring 1V instead" sources?
E core investigation is rather hard at the moment; the "best" is this sentence from Anandtech:
By contrast, in green, the E-cores only jump from 5 W to 15 W when a single core is active, and that is the same number as we see on SPEC power testing. Using all the E-cores, at 3.9 GHz, brings the package power up to 48 W total.
That is in PoV-Ray with the P cores idle and the package ramping up. HARD to speculate, but the E cores are a 3W affair @ 3.9GHz and Zen 3 is a 7W affair @ 3.9GHz when both are fed proper voltages.
How in the hell did you manage to extrapolate, from package power for the entire CPU going from 5 W to 15 W in a single-core workload, that we're looking at 3 W per core at 3.9 GHz?
I'm genuinely interested in what kind of maths were involved here.
When one core is loaded, we go from 7 W to 78 W, which is a big 71 W jump. Because this is package power (the output for core power had some issues), this does include firing up the ring, the L3 cache, and the DRAM controller, but even if that makes 20% of the difference, we’re still looking at ~55-60 W enabled for a single core. By comparison, for our single thread SPEC power testing on Linux, we see a more modest 25-30W per core, which we put down to POV-Ray’s instruction density.
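For anyone trying to follow the arithmetic being argued over here, a minimal Python sketch of the package-delta method (my own illustration, not Anandtech's): the 5→15 W and 48 W E-core numbers and the 7→78 W P-core numbers come from the quotes above, while the fraction of the first-core jump attributed to uncore (ring, L3, memory controller) is an assumption, which is exactly why different posters land on different per-core figures from the same data.

```python
# Estimating per-core power from package-power deltas (back-of-envelope only).
# The uncore_share values below are assumptions, not measurements.

def per_core_estimate(idle_w, one_core_w, all_cores_w, n_cores, uncore_share):
    """Split the package-power increase into a fixed uncore portion (paid once
    when the first core wakes up) and a per-core portion spread over all cores."""
    first_core_delta = one_core_w - idle_w          # includes ring/L3/IMC wake-up
    uncore_w = first_core_delta * uncore_share      # assumed fixed overhead
    core_total_w = all_cores_w - idle_w - uncore_w  # what's left for the cores
    return core_total_w / n_cores

# E-cores @ 3.9 GHz: 5 W idle -> 15 W with one core -> 48 W package with all eight
for share in (0.2, 0.5, 0.7):
    est = per_core_estimate(5, 15, 48, 8, share)
    print(f"E-core estimate with {share:.0%} of the jump as uncore: ~{est:.1f} W/core")

# Single P-core in POV-Ray: 7 W idle -> 78 W; Anandtech pegs ~20% of that as uncore
p_core_w = (78 - 7) * (1 - 0.20)
print(f"Single P-core estimate (~20% uncore): ~{p_core_w:.0f} W")
```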
Is this the year 3535? Did I take a wrong turn in Albuquerque? While I agree with the sustained-compute part, I do expect every single guy who needs sustained compute in the form of compiles, renders (quite a few of those guys are already on GPUs) and other CPU-friendly HPC has already noticed how good TR and EPYCs are. They are out of the discussion here, irrelevant to casual desktop.
Now, for what is the casual desktop - as in a personal workstation, where peak speed is of top importance - the 12900K is the fastest chip for day-to-day work in IDEs, in the browser, in the Adobe suite, in CADs, etc. There is just no escaping this fact: in reviews other than the JEDEC-burdened Anandtech one, this is the fastest chip right now.
On top of this strong, mostly ST-driven performance, it can also provide an amazing burst of MT performance. So when you need to recompile, or run some workload in WSL or under virtualization in general, the capability is right there.
The power at high MT load is irrelevant, these CPUs won't be running anything sustained other than stress tests.
What Intel really did is break AMD's MT capacity/capability reserve monopoly in this casual desktop segment, and I fully expect them to proceed to thoroughly destroying AMD in this (cap/cap) niche with those 8+32 Atom monsters in the future. AMD cannot afford to throw so many proper cores at casual desktop while keeping them fed with power and mem/interconnect bandwidth in the same socket as proper casual desktop CPUs and APUs.
Competition is shifting to peak ST performance, SOC level engineering with larger and faster caches, speedier interconnects.
HEDT chips are a completely different matter and a different segment, and everyone who needs them already has them.
Even if Z3D obliterates them in sterile game benches, the 12900K will run games on the P cores, and thanks to the new scheduler the E cores will act as a completely different CPU, so you can game without any loss due to OBS or whatever else you run alongside. Not even the 5950X does that, because there it all looks like one single pool of cores, so background tasks still hurt performance by a lot.
That's their biggest problem, they were probably hoping for a healthy 3 quarters of leadership in gaming, and that won't work due to the 3D cache stopgap solution AMD has thanks to TSMC.
An observation from the Computerbase review.
When limited to 88 W, a 5800x is 2% faster than a 12900k in multi-threaded performance.
If we then add in 8 E cores to the 12900k and compare that to adding in 4 or 8 Zen 3 cores on top of the 5800x:
The advantage for Zen 3 grows to 8% for adding 4 Zen 3 cores or 16% for adding 8 Zen 3 cores. In other words, in this power limited scenario, you get a greater performance increase with an additional 4 Zen3 cores than adding an additional 8 E cores for ADL. It would be interesting to see if this comparison changes at all at even lower power levels.
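To make the shape of that comparison explicit, here's a small Python sketch (mine): the scores are made-up stand-ins for the multi-threaded results at the 88 W limit described above, chosen only so the stated 2% / 8% / 16% gaps fall out; the structure of the calculation is the point, not the numbers.

```python
# Hypothetical multi-thread scores at an 88 W package limit.
# These values are placeholders, not real results; they are scaled so the
# 2% / 8% / 16% deltas described above drop out.
scores_88w = {
    "12900K 8P+0E": 10000,
    "5800X (8c)":   10200,   # ~2% ahead of 8P+0E
    "12900K 8P+8E": 13000,
    "5900X (12c)":  14040,   # ~8% ahead of 8P+8E
    "5950X (16c)":  15080,   # ~16% ahead of 8P+8E
}

def advantage(a, b):
    """Percentage by which configuration a outscores configuration b."""
    return (scores_88w[a] / scores_88w[b] - 1) * 100

print(f"5800X vs 8P+0E: {advantage('5800X (8c)', '12900K 8P+0E'):+.0f}%")
print(f"5900X vs 8P+8E: {advantage('5900X (12c)', '12900K 8P+8E'):+.0f}%")
print(f"5950X vs 8P+8E: {advantage('5950X (16c)', '12900K 8P+8E'):+.0f}%")
```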
Is this the year 3535? Did I take a wrong turn in Albuquerque?
Casual desktop means the guys that buy i3 and below - simple things and some easy gaming. What you describe is already HEDT, and what you consider HEDT is server/workstation.
It's funny you see it this way, as I read Intel's move as still being reactionary. In terms of being able to pack more cores, think of the following relative size ratios between the current cores involved: Golden Cove 4 units, Zen 3 2 units, Gracemont 1 unit. In terms of silicon area, 8+32 Coves+Monts would be roughly equivalent to 32 Zen cores. The more Intel gets engaged in this core count war, the more it scales up to Zen versus Gracemont. It won't be pretty; Zen is a nimble, flexible core.
What Intel really did is break AMD's MT capacity/capability reserve monopoly in this casual desktop segment, and I fully expect them to proceed to thoroughly destroying AMD in this (cap/cap) niche with those 8+32 Atom monsters in the future.
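A quick sanity check of that area arithmetic in Python (the per-core size ratios are the poster's rough estimates above, not die measurements):

```python
# Relative core-area units from the post: Golden Cove ≈ 4, Zen 3 ≈ 2, Gracemont ≈ 1.
GOLDEN_COVE, ZEN3, GRACEMONT = 4, 2, 1

hybrid_8_plus_32 = 8 * GOLDEN_COVE + 32 * GRACEMONT   # hypothetical 8+32 part
print(f"8+32 Coves+Monts: {hybrid_8_plus_32} area units")          # 64 units

equivalent_zen3_cores = hybrid_8_plus_32 // ZEN3
print(f"Roughly equivalent to {equivalent_zen3_cores} Zen 3 cores of silicon")  # 32
```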
You're quite right there, @Saylick. I didn't have high hopes for Alder Lake myself. And even with those leaks in the weeks leading up to today's release, I was left feeling underwhelmed as more info about power and heat came out. Reaching parity with a one-year-old release while using more power, or beating it while using power levels that make air cooling fiddly at the higher end of the spectrum, feels almost like a DOA platform at this point. Even if future BIOS releases improve performance, it would still be at parity or hold only a slight edge, which makes it not worth the investment, at least not at current hardware prices and with relatively immature DDR5.
All eyes on AMD and what Zen3 3D will offer up. I think I'll stick with the cheap 10th gen I bought for now. I can't see myself buying a newer Intel platform until Intel manages to deliver a wallop of a performance stride compared to AMD.
This feels more and more like historic Intel: coming out with a half-baked idea and pushing the power on it, hoping to beat AMD, who cruised along with a better product. Up until AMD fumbled, of course.