Client isn't getting higher core count CCDs.
You don't need more cores.
You're not getting more of anything since none of you have the money to pay for it.
That's wayyy too expensive.
Well, if AMD is taking Intel's route now, there's no reason to buy them. Might as well default back to Intel with the next rig in a couple of years. Maybe they will get their stuff together in the meantime.

> It benefits quite literally everyone since you're getting a more efficient, more functional platform.

A more efficient platform is always nice, but if that higher efficiency does not translate into higher performance, it does nothing for me. On desktop, unlike mobile, I am not power constrained.

The lockstep thing with mobile does nothing for me either. That benefits AMD, not me. I don't need them to be the same.
> I guess it makes sense to wait for the die size of the Zen 5 8-core CCD before speculating about Zen 6.

I ain't speculating anything.

> Maybe when 16 core is under 100 mm2 die size, which may not happen until a generation after Zen 6.

Client. Is not. Getting. 16 core. CCD.

> I wonder how receptive AMD would be to having 2 different CCDs, one 8-core and one 16-core.

Why waste tapeouts on an extremely niche product no one would care about.

> But if AMD manages to gain more client share in the next 1-2 years, it might be profitable to release such a CCD, which would, by definition, be lower volume...

They already own the DIY space, and OEM boxes don't need high core counts at all.

> Maybe too expensive under one assumption of costs, and maybe not too expensive under a different assumption of costs.

There is only one assumption of cost, and it's dollars per die area yielded.

> BTW, at some point, the cost of die savings on the expensive most advanced node of the base die vs. the cost of stacking + cheaper N6 SRAM will cross over.

Cost is the least limiting factor in anything SoIC-X.
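To put "dollars per die area yielded" in concrete terms, here's a quick sketch using a standard gross-dies-per-wafer approximation and a Poisson yield model. The wafer price, die sizes, and defect density below are made-up illustrative numbers, not AMD or TSMC figures:

```python
import math

def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> int:
    """Approximate gross dies per wafer, with a simple edge-loss correction."""
    r = wafer_diameter_mm / 2
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def yielded_die_cost(die_area_mm2: float, wafer_cost_usd: float,
                     defects_per_cm2: float = 0.1) -> float:
    """Cost per *good* die under a Poisson yield model."""
    yield_frac = math.exp(-defects_per_cm2 * die_area_mm2 / 100)  # mm^2 -> cm^2
    good_dies = dies_per_wafer(die_area_mm2) * yield_frac
    return wafer_cost_usd / good_dies

# Illustrative only: $17k wafer, hypothetical ~70 mm^2 8-core CCD
# vs. a hypothetical ~140 mm^2 16-core CCD.
small = yielded_die_cost(70, 17_000)
big = yielded_die_cost(140, 17_000)
print(f"8-core-sized die: ${small:.2f}, 16-core-sized die: ${big:.2f}, "
      f"ratio {big / small:.2f}x")
```

The takeaway from the sketch: cost per good die grows a bit faster than linearly with area, since yield drops and edge loss worsens as dies get bigger.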
You don't need that.
> It benefits quite literally everyone since you're getting a more efficient, more functional platform.

And how does that translate into faster computing, which is exactly what I am looking for? In what way is it more functional, and why can't there be a separate desktop part sharing said uncore with mobile for that higher efficiency, while having its own more performant "core" part? And that achieved not just by virtue of a higher TDP?

> Who cares, luggables and mobile workstations are more relevant.

To whom? I'm talking DIY desktop, specifically.
If there are 16-core server CCDs, even if they would not want to offer more than those 16 cores on desktop, surely having one of those would be more beneficial to me than having 2x 8-core chiplets.

EDIT: Or does this new uncore mean a return from chiplets back to a monolithic design, with both cores and uncore on a single die again? This is not quite clear to me. If yes, consider the above irrelevant.
> And how does that translate into faster computing?

More watts go towards your raw compute.

> Or does this new uncore mean a return from chiplets back to a monolithic design, with both cores and uncore on a single die again?

No lol, don't even think about that.

> To whom?

To people that make those chips and the wider market at large.

> If there will be 16-core server CCDs...

It's not compatible with client designs.
> More watts go towards your raw compute.

And that would result in what? Higher clocks? Cause you are clearly saying it's not more cores.
It's magic!
> And that would result in what? Higher clocks?

Duh.

> Cause you are clearly saying it's not more cores.

More cores aren't necessary.
> Zen 4 is more efficient than Intel's desktop CPUs - at 65W. At more than 200W, they perform similarly.

Normalizing 300W CPU ovens is an issue in itself.

> Normalizing 300W CPU ovens is an issue in itself.

Not in the desktop segment. And 300W is a needless exaggeration.
> The 14900K is set at 350W out of the box; Computerbase displayed no other number in their tests since they ran them at stock settings, contrary to the 14700K, which is limited to 253W at stock.

That's just Intel taking it to absurd levels to take the performance crown over AMD. And even then, it's not really that much of an issue, is it? Since Zen 4 CPUs still can't beat it in absolute performance, despite being more efficient.
> Not in the desktop segment.

Yes, there.

> And 300W is a needless exaggeration.

They really run that out of the box.
> That's just Intel taking it to absurd levels to take the performance crown over AMD. And even then, it's not really that much of an issue, is it? Since Zen 4 CPUs still can't beat it in absolute performance, despite being more efficient.

Pretty sad when they have a 360-watt, 64-core server part that can make a 14900K whine even on LN2 at 900 watts (or whatever it was), and can whup any server part on air (using Threadripper at 980 watts) vs. even LN2 Intel at 1900 watts.
> It's the other way around: those 350W are not enough to beat a stock 7950X in MT. Despite 3 benches out of 9 favouring Intel, they only manage to marginally edge out the 130W-limited 7950X3D by a few percent, and even more barely the 7950X at 142W.

Well, they are inferior when it comes to efficiency, we already knew that. Yet they are pretty much on par with them in absolute performance, and they are generally better for gaming - you need 3D cache on the AMD side to match/beat them.
Intel Core i9-14900K, i7-14700K & i5-14600K im Test: Anwendungs-Benchmarks - www.computerbase.de
> Pretty sad when they have a 360-watt, 64-core server part that can make a 14900K whine even on LN2 at 900 watts (or whatever it was), and can whup any server part on air (using Threadripper at 980 watts) vs. even LN2 Intel at 1900 watts.

Well, the mobile segment suits them, as it's power-limited. It's exactly the place where they shine thanks to their efficiency advantage. But how does that help on the desktop? They just pump more power into it and match the performance. It's not great, but it solves the problem. That's why people keep buying it: despite everything, it performs on par with AMD, and that's what matters.
> Actually, I thought AMD's latest $1000 laptop handily beat Intel's best?? At least the competitor.

I mean, they have to try and pick some laptop part that is not even a competitor to make themselves look good.
My point anyway, if it's not obvious already: it's all nice that the 7950X can match it at 142W PPT, but the important thing is that there is really no point in allowing more than that, as the performance increase beyond it would be negligible. It's not like a 7950X at 350W is going to beat the living crap out of a 350W 14900K when it can already match it at 142W, is it?
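For what it's worth, the arithmetic behind that point, assuming (per the Computerbase numbers above) roughly equal multi-threaded performance at stock for both parts:

```python
# Illustrative perf-per-watt arithmetic using the stock power limits quoted
# in the thread. Assumes roughly equal multi-threaded scores, as reported.
ppt_7950x_w = 142      # stock Package Power Tracking limit for the 7950X
limit_14900k_w = 350   # out-of-the-box limit reported for the 14900K

# With performance normalized to 1.0 for both, efficiency scales as 1/power:
efficiency_ratio = limit_14900k_w / ppt_7950x_w
print(f"At equal performance, the 142W part is ~{efficiency_ratio:.1f}x "
      f"more efficient")
```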
> And how does that translate into faster computing, which is exactly what I am looking for? In what way is it more functional...

The cIOD power is taken into account for the package power limit. As an example, if a Threadripper processor has a 350W limit, the cIOD is probably taking up ~60W, leaving the cores ~290W to consume. If a 7800X3D is using 60W while gaming, there's a really solid chance that more than 1/3 of it is the cIOD (especially if you're running a DDR5-6000 EXPO kit).
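The budget split described there is just subtraction; here's a tiny sketch with the thread's rough figures (the ~60W cIOD draw is the poster's estimate, not a measured number):

```python
def core_power_budget(package_limit_w: float, ciod_w: float) -> float:
    """Power left for the cores once the client I/O die's draw is
    subtracted from the package power limit."""
    return package_limit_w - ciod_w

# Threadripper example from the post: 350 W limit, ~60 W cIOD
# -> ~290 W left for the cores.
print(core_power_budget(350, 60))

# 7800X3D gaming example: 60 W package draw, more than 1/3 of it cIOD.
ciod_share = 60 * (1 / 3)
print(f"cIOD share: >{ciod_share:.0f} W of the 60 W while gaming")
```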
> ...and increase max theoretical bandwidth

*to a single CCD, but yeah.

> over IFoP is somewhere between 1-2 memory channel bandwidth.

2 read, 1 write, and they do some write coalescing/compression on GMI3 anyway.

> But with the new type of connection between CCD and IOD (which looks like an Active Silicon Bridge), the connection could accommodate a full 8, 12, or 16 memory channels' worth of bandwidth.

You're still gonna be limited by whatever the SDP count is on the fabric side of things.

> With better latency too.

Ehhh, negligible difference.
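Back-of-envelope check of the "1-2 memory channels over IFoP" figure. The GMI3 link widths and fabric clock below are commonly cited Zen 4 numbers, treated here as assumptions rather than confirmed specs:

```python
# Rough sanity check: IFoP link bandwidth vs. one DDR5-6000 channel.
fclk_ghz = 2.0             # assumed Infinity Fabric clock (1:1 with DDR5-6000)
read_bytes_per_clk = 32    # assumed GMI3 read width per link
write_bytes_per_clk = 16   # assumed GMI3 write width (the 2:1 read/write split)

ddr5_channel_gbs = 6.0 * 8                       # DDR5-6000: 6 GT/s x 8 B = 48 GB/s
ifop_read_gbs = fclk_ghz * read_bytes_per_clk    # 64 GB/s of read bandwidth
ifop_write_gbs = fclk_ghz * write_bytes_per_clk  # 32 GB/s of write bandwidth

print(f"IFoP read: {ifop_read_gbs:.0f} GB/s = "
      f"{ifop_read_gbs / ddr5_channel_gbs:.2f} DDR5-6000 channels")
```

Under these assumptions the read side lands at roughly 1.3 channels' worth, which is consistent with the "between 1 and 2 memory channels" claim above.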
> What is this "you dont need this, you dont need that" nonsense anyway?

For instance, this "nonsense" is that you don't need 24 very wide cores on a small socket when they are available to you on the appropriate big socket.