> 9950X3D X2 V-cache per CCD
This would be cool, but do you have any sources on that?
> An 8 Z5, 16 Z5c desktop SKU might make sense if Intel comes on strong. It would be memory bandwidth limited for a lot of tasks, but not all tasks are bandwidth constrained. The shape of the 3nm Z5c die might not work well on AM5, though.
An 8x Z6 with 3D V-cache + 16x Z6c would be very nice, if AMD can make that happen.
Are there significant benefits to be had with more V-cache? Isn't 64 MB good enough for most games?
No one knows. If it doesn't happen then you may have the answer.
My uneducated guess is that it would help *more* games but wouldn't help much more for the games that already benefit from it.
It seems I misunderstood the question. I assumed FlameTail was asking about a single CCD with 2x V-cache, or more cache in general (since the L3 cache on Zen 5 is denser than before, it stands to reason they could include more, possibly up to 96 MB).
IIRC, in this GN video they were shown a functioning 5950X3D x 2, if you will. I'm not about to rewatch it to try to find it, though. And no, there certainly weren't benchmarks. However, it does show that AMD knows how much games would benefit vs the loss of frequency.
And then there was the first announcement, where they demonstrated a 5900X3D x 2. They've made them. For some reason or another they decided they weren't worth it. Back then the frequency hit was significant, so that likely played a role.
> It seems I misunderstood the question. I assumed FlameTail was asking about a single CCD with 2x V-cache, or more cache in general (since the L3 cache on Zen 5 is denser than before, it stands to reason they could include more, possibly up to 96 MB).
I don't see any point to both CCDs having V-cache unless they solve the frequency problem.
The biggest bottleneck right now is DIMMs. We need LPCAMM2 for Zen 6.
> I'm sure that AMD knew how mediocre Zen 5 was when they were deciding TDPs. Why did they choose 65W for them? To cripple them even more? It lost a lot of benchmarks to the 7600X/7700X because of that low TDP. I have a couple of ideas why:
> 1. They wanted at least something good out of this gen, so they leaned on the "power efficiency" story (forgetting that 65W Zen 4 parts exist).
> 2. They wanted X3D to appear a lot more powerful than the regular chips.
> 3. OEMs and SIs asked for 65W chips to be there right at launch, so they can put them in crappy B840 motherboards.
> 4. They drank their own Kool-Aid and decided TDPs before knowing performance?
> Most likely the first one, but it was interesting to think about.
Power efficiency shouldn't matter for desktop chips.
> Can easily bypass the power limit via BIOS, don't see what the fuss is.
I don't really see it either. The performance increase, especially in games, is basically nothing. Only heavy multi-core workloads benefit, and efficiency tanks. I think AMD made the right choice here.
> I'm sure that AMD knew how mediocre Zen 5 was when they were deciding TDPs. Why did they choose 65W for them? To cripple them even more?
Almost 90W of real power draw for an 8-core CPU is not extraordinarily low. It has been shown that for gaming, higher power limits do not bring any extra performance.
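The ~90W figure for a "65W" part follows from AMD's widely reported socket power convention: on AM4/AM5, the PPT (Package Power Tracking) limit is roughly 1.35x the rated TDP. A quick sketch of the arithmetic (the 1.35 factor is the commonly cited convention, not an official formula guaranteed for every SKU):

```python
# On AM4/AM5 the PPT (package power) limit is widely reported as about
# 1.35x the rated TDP, which is why a "65W" part can legitimately draw
# close to 90W at the socket.

def ppt_from_tdp(tdp_watts: float, factor: float = 1.35) -> float:
    """Estimate the PPT (package power) limit from a rated TDP."""
    return tdp_watts * factor

for tdp in (65, 105, 120, 170):
    print(f"{tdp:>4} W TDP -> ~{ppt_from_tdp(tdp):.0f} W PPT")
# 65 W -> ~88 W, and 170 W -> ~230 W, matching the advertised
# limits of the 65 W and 170 W Ryzen desktop parts.
```

This also explains why raising the limit in BIOS is easy: PPT is just a firmware-configurable number, not a physical property of the silicon.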
> Power efficiency shouldn't matter for desktop chips.
I strongly disagree. Especially since GPUs have become power pigs over the last few years, I'll take as much power savings as I can get.
> That's not how it works. A sufficiently large sample size of "bizarre" performers is more robust than a very small sample size of "looks about right" performers.
Why? Have you looked at their benchmark choices?
> The x86-64 baseline FP instruction set is SSE2; x87 can be used from x64 but isn't recommended and isn't normally used at all. AVX/AVX2 has some support, but since it isn't supported on all CPUs sold even today, support is quite minimal. AVX-512 is supported by pretty much nothing. AMD probably didn't know the SIMD workload distribution when they started the Zen 5 design, and Intel was still backing AVX-512 strongly back then. But even with AVX-512, the main desktop performance priority is 128-bit SIMD; giving up 128-bit performance for wider vectors is just the wrong bet from AMD. Intel is going in the opposite direction: their E-core straight-up doubled its 128-bit FP resources, and Lion Cove increased its 256-bit FP units. Zen 5 seems to face quite tough competition from Intel.
While this sounds plausible, I don't suspect they can't adjust their roadmap accordingly.
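The trade-off in the quoted post can be put in rough numbers. A back-of-envelope sketch (the pipe layouts below are illustrative assumptions for the argument, not verified die configurations of any specific core):

```python
# Back-of-envelope peak FP32 FLOPs/cycle for different SIMD pipe layouts.
# The point: four 128-bit pipes and two 256-bit pipes have the same
# theoretical peak, so widening vectors only pays off if total lane count
# is kept or grown -- trading away 128-bit units is a net loss for code
# that never issues the wider vectors.

def peak_fp32_flops_per_cycle(pipes: int, width_bits: int, fma: bool = True) -> int:
    lanes = pipes * (width_bits // 32)   # parallel FP32 lanes per cycle
    return lanes * (2 if fma else 1)     # an FMA counts as 2 FLOPs

configs = {
    "4 x 128-bit": (4, 128),
    "2 x 256-bit": (2, 256),
    "2 x 512-bit": (2, 512),
}
for name, (pipes, width) in configs.items():
    print(f"{name}: {peak_fp32_flops_per_cycle(pipes, width)} FLOPs/cycle")
```

So a core that swaps four 128-bit pipes for two 512-bit pipes doubles its peak on AVX-512 code but gains nothing, or loses issue flexibility, on the SSE2/128-bit code that still dominates desktop software.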
> I'm sure that AMD knew how mediocre Zen 5 was when they were deciding TDPs. Why did they choose 65W for them? To cripple them even more? It lost a lot of benchmarks to the 7600X/7700X because of that low TDP. I have a couple of ideas why:
> 1. They wanted at least something good out of this gen, so they leaned on the "power efficiency" story (forgetting that 65W Zen 4 parts exist).
> 2. They wanted X3D to appear a lot more powerful than the regular chips.
> 3. OEMs and SIs asked for 65W chips to be there right at launch, so they can put them in crappy B840 motherboards.
> 4. They drank their own Kool-Aid and decided TDPs before knowing performance?
> Most likely the first one, but it was interesting to think about.
These CPUs seem to be targeted at OEMs and work-related tasks, where efficiency holds more value than a little extra performance.
> And that fabric and IOD — seems they stopped working on it because that one single guy went on paternity leave.
Probably still being validated. AMD development is SLOW, both hardware and software. I thought the Zen 5 AGESA would be in good shape since they were releasing Zen 5 almost a year after it was ready, yet it seems they are not done with their AGESA updates and will keep refining them. Any bets that we will see one released on 14th or 15th Aug for the 9950X?
> We will know their design rationale, at least some idea, in the upcoming Hot Chips.
TBH we already know the design rationale. They wanted a new forward-looking base design: >4-wide decode, 6 ALUs, 8-wide dispatch, and 512-bit-capable FP bandwidth.
> Probably still being validated. AMD development is SLOW, both hardware and software. I thought the Zen 5 AGESA would be in good shape since they were releasing Zen 5 almost a year after it was ready, yet it seems they are not done with their AGESA updates and will keep refining them. Any bets that we will see one released on 14th or 15th Aug for the 9950X?
I think the answer is simpler: they wanted to move faster to 2.5D and 3D structures, so they invested a lot in those, but getting those parts economically viable for the consumer market (in terms of cost) is another matter, and it's taking longer than initially expected. Likewise, it's very probable that Zen 5 was conceived for the N3 process, allowing more transistors to be spent on improving 1T performance, but cost and availability forced them to use N4.
> AMD would have been better off doing the core-wars thing they promised generations ago. 8-core, 16-core, 24-core and 32-core Zen 5 would have solved things this generation. Ultra low power for efficiency. I think AMD will fix a lot of the problems with Zen 5 via BIOS updates. Zen 5 will definitely see a Zen 5+ silicon upgrade to N3P or some variant in 2025.
> The reviews out there are very, very bad for Zen 5. Add to it that Arrow Lake is all-new silicon that will dramatically reduce power consumption with 20A.
How many times does this idea come up? It clearly doesn't work. To go to higher core counts you need quad-channel RAM, and that's too expensive for a mainstream platform.
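The bandwidth argument is easy to put in numbers. A quick sketch, assuming a typical dual-channel DDR5-6000 AM5 configuration (64-bit, i.e. 8-byte, channels):

```python
# Arithmetic behind "higher core counts need more memory channels":
# total DRAM bandwidth is fixed by the channel configuration, so
# per-core bandwidth shrinks linearly as cores are added.

def dram_bandwidth_gbs(channels: int, mt_per_s: int, bytes_per_channel: int = 8) -> float:
    """Peak DRAM bandwidth in GB/s for a given channel config."""
    return channels * mt_per_s * bytes_per_channel / 1000

bw = dram_bandwidth_gbs(channels=2, mt_per_s=6000)   # dual-channel DDR5-6000
for cores in (8, 16, 24, 32):
    print(f"{cores:>2} cores: {bw:.0f} GB/s total, {bw / cores:.1f} GB/s per core")
# Dual-channel DDR5-6000 is 96 GB/s total: 12 GB/s per core at 8 cores,
# but only 3 GB/s per core at 32 cores.
```

Hence the quad-channel point: doubling channels to 4 restores per-core bandwidth at 32 cores to roughly the 16-core dual-channel level, but at workstation-platform cost.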
> I'm sure that AMD knew how mediocre Zen 5 was when they were deciding TDPs. Why did they choose 65W for them? To cripple them even more? It lost a lot of benchmarks to the 7600X/7700X because of that low TDP. I have a couple of ideas why:
> 1. They wanted at least something good out of this gen, so they leaned on the "power efficiency" story (forgetting that 65W Zen 4 parts exist).
> 2. They wanted X3D to appear a lot more powerful than the regular chips.
> 3. OEMs and SIs asked for 65W chips to be there right at launch, so they can put them in crappy B840 motherboards.
> 4. They drank their own Kool-Aid and decided TDPs before knowing performance?
> Most likely the first one, but it was interesting to think about.
I think it might be to incentivise the higher core-count parts this gen: lower prices per core, but a better ASP.
> How many times does this idea come up? It clearly doesn't work. To go to higher core counts you need quad-channel RAM, and that's too expensive for a mainstream platform.
The least they could do is give dissimilar CCDs to the 9900X, as in one 8-core CCD + one 4-core CCD, so at least it wouldn't get ignored so much by gamers. There is only one local retailer in the UAE who got Ryzen 9000 CPUs, and they are already out of the 9700X. The 9600X and 9900X are still in stock.