That sounds extremely dysfunctional. The obviously suboptimal (in terms of area and performance) dual-CCX config is what I'd expect if AMD had decided to sacrifice quality on the altar of cadence, not something I'd expect given the opposite. Or do you mean the Zen5 core rather than the...
As nice as 8 CUs sounds, AMD isn't just giving us that on desktop out of the goodness of their hearts. Aside from the AI theory, it could make sense if the same IOD is being reused for something else where some GPU grunt makes more sense.
Nah. You always aim for the crown and never stop, no matter what hand you have to play or what kind of architectural deficit you're suffering.
If Nvidia had cancelled GF100, that generation would have resulted in something like a straight duopoly.
If Nvidia had abandoned the high end with...
The right answer.
Yet unless AMD is able to satisfy the market with a sufficiently large quantity of units at (or at the very least near) MSRP over the coming months, the optics win won't go as far as it could have, and if/when AMD finds itself in a similar situation in the future...
It feels like there's tension here because I can see us getting a G7 version or a 32GB version in the future, but I have a hard time seeing both. It would create a confusing product stack, while the main thing G7 would do for a theoretical 32GB "ghetto prosumer" SKU is balloon cost and/or shrink...
None right now, but there are obviously hypotheticals. There's a scenario where NPUs make sense to free up GPU resources when games start to incorporate local AI models while *also* wanting to look pretty. Even if the GPU has enough compute to spare, memory capacity is tricky.
Now, whether on...
GB multi-core scores suck because Primate Labs decided to make them suck.
Phoronix scores are awesome because when you take an enormous number of samples the central limit theorem kicks in and makes them awesome.
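Snark aside, the averaging part does do real work. A quick toy sketch (all numbers made up) of how the spread of a reported score shrinks roughly like 1/sqrt(n) when you average n runs:

```python
# Toy sketch (all numbers made up): averaging n noisy benchmark runs shrinks
# the spread of the reported score roughly like 1/sqrt(n).
import numpy as np

rng = np.random.default_rng(0)
true_score, run_noise = 100.0, 10.0  # hypothetical per-run mean and stddev

for n in (3, 30, 300):
    # Simulate 10,000 "review cycles", each reporting the mean of n runs.
    reported = rng.normal(true_score, run_noise, size=(10_000, n)).mean(axis=1)
    print(f"n={n:3d}  spread of reported score = {reported.std():.2f}")
# Expect roughly 5.8, 1.8, 0.6 -- i.e. run_noise / sqrt(n).
```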
PPA gets sort of messed up as a metric when you decide to go with slower memory and compensate with big globs of cache. Apples and oranges versus parts that make the opposite decision.
Which of course only makes RDNA4 more impressive than naive PPA would suggest.
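For concreteness, a toy comparison (numbers entirely made up) of how a naive perf / (power × area) figure treats a part that spends die area on cache to get by with slower memory versus one that leans on faster memory instead:

```python
# Toy comparison, numbers entirely made up: a naive perf/(W*mm^2) metric
# dings the design that spends die area on cache, even though the cheaper
# memory it enables never shows up in a die-level figure.
def naive_ppa(perf: float, watts: float, area_mm2: float) -> float:
    return perf / (watts * area_mm2)

cache_heavy = dict(perf=100, watts=220, area_mm2=350)   # big L3/IF$, slower memory
memory_heavy = dict(perf=100, watts=220, area_mm2=300)  # lean cache, faster memory

for name, part in (("cache-heavy", cache_heavy), ("memory-heavy", memory_heavy)):
    print(f"{name:12s} naive PPA = {naive_ppa(**part):.5f}")
# Same perf and power, but the cache-heavy part "loses" on PPA purely
# because its die is ~17% larger.
```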
That's one chunky GPU if it's ~200mm² on 3nm and the structures are even close to accurate.
8+MB of L2 on the iGPU is... interesting, to say the least.
But now that I think about it, the NPU is definitely not going to be that close to the perimeter and there isn't any room to move it without...
EDIT: Totally misread what you wrote, somehow. Still, will keep this up since there's some fun speculation in it.
How sure are you that it's a 4-core CCX, rather than just knowing there are 4 LP cores and inferring a 4-core CCX from that?
Because the intersection of your 4-core 16MB CCX leak...
Nah.
If the CCD has a cache die by default, then 24MB is still consuming way too much die space. I'd expect something close to 0.
The two most likely scenarios are that they keep L3 at 32MB or go up to 48MB.
Yeah, LP branding is assumed. But are those just Zen6c derivatives with a bit of optimization, or a whole other thing?
The degree to which they are useful for added compute rather than simple background tasks will mostly come down to cache hierarchy.
Cool.