> Well, please don't tell me they are still sticking with that Xbox Game Bar solution...

Well, that's sad. I was really considering one as a good all-rounder. I have zero interest in the Game Bar nonsense, nor in leaving productivity performance on the table (as Phoronix shows with the 9700X vs. the 9800X3D).
> Well, please don't tell me they are still sticking with that Xbox Game Bar solution...

The 9950X already has that scheduling solution, as opposed to the 7950X. We knew from the 9950X release that we could expect the same for the 9900X3D and 9950X3D, regardless of whether they had equal CCDs or not. This should not be a surprise.
> The 9950X already has that scheduling solution, as opposed to the 7950X. We knew from the 9950X release that we could expect the same for the 9900X3D and 9950X3D, regardless of whether they had equal CCDs or not. This should not be a surprise.

And for Zen 5, some outlets showed it to degrade performance. Yay.
> Well, that's sad. I was really considering one as a good all-rounder. I have zero interest in the Game Bar nonsense, nor in leaving productivity performance on the table (as Phoronix shows with the 9700X vs. the 9800X3D).

Wait till reviews before crossing it out; if the X3D CCD is better binned than it is in the 9800X3D, then keeping everything scheduled to the X3D CCD should be the best universal policy.
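For concreteness, here is a minimal sketch of what "keeping everything scheduled to the X3D CCD" amounts to when done by hand, assuming Linux and assuming CCD0 (logical CPUs 0-7, SMT siblings 16-23) is the X3D CCD; the CPU numbering is an illustrative assumption, not something confirmed in this thread:

```python
# Pin an already-running game process to the (assumed) X3D CCD on Linux.
# ASSUMPTION: CCD0 is the X3D CCD and maps to logical CPUs 0-7 with
# SMT siblings 16-23; verify with `lscpu --extended` on a real system.
import os
import sys

X3D_CPUS = set(range(0, 8)) | set(range(16, 24))

def pin_to_x3d(pid: int) -> None:
    """Restrict the given thread/process ID to the X3D CCD's CPUs."""
    # NOTE: sched_setaffinity targets a single thread ID; a process with
    # existing worker threads needs this per TID (what `taskset -a -p` does).
    os.sched_setaffinity(pid, X3D_CPUS)
    print(f"PID {pid} now limited to CPUs {sorted(os.sched_getaffinity(pid))}")

if __name__ == "__main__":
    pin_to_x3d(int(sys.argv[1]))  # e.g. python pin_x3d.py 12345
```

This is roughly what the chipset driver plus Game Bar combination is supposed to automate for detected games; the complaints above are about how reliably that detection works.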
> Well, please don't tell me they are still sticking with that Xbox Game Bar solution...

AMD is making the choice between the 9950X3D and the RTX 5090 easy for me, then.
How hard would it be for AMD to create something Zen 5-like with different decode logic (i.e., one that decoded ARM instructions instead of x86)?
> Most CPUs have a backend that's largely divorced from the ISA that it understands. It's certainly feasible, but there would still be a considerable amount of work to do, and unless they hire a new engineering team to do it, it would take time away from what their engineers are doing now. Then they would need to pay for the masks for the new chip and have customers willing to purchase enough of the chips to justify all of the costs of bringing it to market.
>
> Suppose they managed to make an SoC that's 10% better than anything else on the Android market. Does anyone care? They don't have a modem as good as Qualcomm's, and Samsung will still use its own SoCs. AMD's reward would be trying to eke out a small piece of a market that's already operating on thin margins. They'd need a lot of volume.

The strong point of an AMD ARM SoC would be the GPU. RDNA is a full-fledged desktop-class GPU architecture, which is much better than mobile GPU architectures such as Mali and Adreno.
The updated design was introduced by Mark Papermaster, AMD's chief technology officer, and he began by pointing out that the changes culminated from its collaboration with Samsung, which licenses AMD's graphics tech for the Exynos range of smartphone and tablet processors.
"A lot of the techniques are ideal for notebooks," he said. "They're ideal to give you that same great Radeon graphics experience, but at a much-reduced power and a much higher efficiency."
> AMD is making the choice between the 9950X3D and the RTX 5090 easy for me, then.

IMO, that's a broader issue with AMD's product planning. They are very good at penny-pinching, and mostly that's a very smart choice. However, sometimes you just need that no-compromise halo SKU for mindshare, even when it's low volume.
> The strong point of an AMD ARM SoC would be the GPU. RDNA is a full-fledged desktop-class GPU architecture, which is much better than mobile GPU architectures such as Mali and Adreno.

Does anyone really use or care about a high-end GPU in smartphones, beyond a meager 5% of "smartphone AAA gamers" (lol)?
> Does anyone really use or care about a high-end GPU in smartphones, beyond a meager 5% of "smartphone AAA gamers" (lol)?

Even AAA games on iPhones suck. Handheld gaming needs physical controls and active cooling.
> Even AAA games on iPhones suck. Handheld gaming needs physical controls and active cooling.

Agreed. The Switch is proof of this.
> Does anyone really use or care about a high-end GPU in smartphones, beyond a meager 5% of "smartphone AAA gamers" (lol)?

Yep, but this hypothetical SoC is for PCs, not for handhelds, IMO. It's a case where there is no actual demand on the market, no actual use case.
Why not get an actual mobile gaming device like a Steam Deck? No point in smartphone gaming lol.
> Most CPUs have a backend that's largely divorced from the ISA that it understands. It's certainly feasible, but there would still be a considerable amount of work to do, and unless they hire a new engineering team to do it, it would take time away from what their engineers are doing now. Then they would need to pay for the masks for the new chip and have customers willing to purchase enough of the chips to justify all of the costs of bringing it to market.
>
> Suppose they managed to make an SoC that's 10% better than anything else on the Android market. Does anyone care? They don't have a modem as good as Qualcomm's, and Samsung will still use its own SoCs. AMD's reward would be trying to eke out a small piece of a market that's already operating on thin margins. They'd need a lot of volume.

Excellent point.
> Let's just buy Mediatek.

Excellent answer! Besides the obvious not-serious answer, if AMD one day really wanted to get into the smartphone business (highly doubtful indeed), that's what I'd do if I were AMD.
I just saw some leaks this morning; can't find the ref now, it was VideoCardz I think. Anyway, the rumor is the 12- and 16-core SKUs will have the old V-Cache configuration, so there may be less overclocking headroom than the 9800X3D.
Comments are a salt mine. "No 3D for both CCDs = NO SALE!" OK, lol. That place is like a who's who of banned trolls from every message board.
Come join me in conspiracy corner: the people who make a big deal out of this and "Zen 5%" were never going to buy AMD. They are the ones who spam the hype incessantly. I grant you there is some hopium in the mix, but it is 1 part hopium, 4 parts Haterade.
> Well, the debate will likely be the same as before. Why bother if it is the same hassle of core parking/priority issues, Xbox Game Bar requirements, etc.? For most people it will be the basic choice of: I need cores/threads over gaming performance, so I might as well get the regular 16-core, which will be cheaper and still fast enough in gaming. The only people for whom it really matters are those who run one box, for whatever reason. For streamers who use one system, maybe that is more effective, but for raw game performance the 8-core chip will be equal and cheaper, with fewer hassles on the software side.

Guys, the rumors and speculations thread is over here.
> Last time, the 7950X3D's X3D CCD had a 200 MHz higher frequency than the 7800X3D, IIRC. Right now the 9800X3D at 5200 MHz is able to match the 9950X's CCD at 5700 MHz. It stands to reason that the 9950X3D's X3D CCD will also be better binned than the 9800X3D's. If we assume a quite modest 200 MHz higher out-of-the-box clock, it will be able to beat the 9950X at 5700 MHz. Therefore the X3D CCD will be the universally better CCD for the majority of applications, as the other CCD will likely not exceed 5700 MHz out of the box.
>
> Now, these are ifs and maybes, so we need to wait for reviews, but at least it does seem plausible.

Well, one would no longer have to give hints about which threads prefer cache and which threads prefer clock. But one would still have to give hints about which threads are to be blessed to run on the good CCD and which shall take the back seat. Though in many cases such information is already provided, via scheduling priorities and scheduling classes (interactive work versus batch jobs).
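Putting rough numbers on that binning argument (an illustrative back-of-the-envelope calculation using the figures from the quoted post, not measured data): if a 5.2 GHz X3D part matches a 5.7 GHz non-X3D CCD in games, the extra cache is worth roughly a 9-10% frequency-equivalent uplift there, so even a modest 200 MHz bump would put the X3D CCD ahead across the board.

```python
# Back-of-the-envelope: frequency-equivalent value of V-Cache in games.
# ASSUMPTION (from the post above): 9800X3D @ 5.2 GHz ~= 9950X CCD @ 5.7 GHz.
x3d_clock     = 5.2   # GHz, 9800X3D boost
non_x3d_clock = 5.7   # GHz, 9950X best-CCD boost

cache_uplift = non_x3d_clock / x3d_clock - 1
print(f"V-Cache worth ~{cache_uplift:.1%} of clock in cache-sensitive games")
# -> ~9.6%

# Speculative +200 MHz bin for the 9950X3D's X3D CCD:
x3d_binned = x3d_clock + 0.2                  # 5.4 GHz
effective  = x3d_binned * (1 + cache_uplift)  # clock-equivalent vs non-X3D CCD
print(f"5.4 GHz X3D CCD ~ a {effective:.2f} GHz non-X3D CCD in such games")
# -> ~5.92 GHz, i.e. ahead of the 5.7 GHz frequency-optimized CCD
```

This only holds for cache-sensitive workloads, of course; in cache-insensitive code the comparison collapses back to raw clocks.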
> Stolen from the Ryzen 9000X3D series review thread:

If the product is built so that the CCD with V-Cache is also the faster of the two CCDs, doesn't that make the scheduling much easier? Because the V-Cache CCD with the fastest cores will be allocated first?
Apropos "the only people where it really matters": There might be 2 or 3 more people than you thought of. —
The inhomogeneous EPYC 4584PX and 4484PX are an embarrassment, although it's easy to see why they came to be like this. Wouldn't 2025 be a good year for AMD to take the next step and release *homogeneous* EPYC 4585PX/4485PX? AMD could even slap a big markup on it, as some server applications do profit from the extra cache notably. But I don't hold my breath.
> Well, one would no longer have to give hints about which threads prefer cache and which threads prefer clock. But one would still have to give hints about which threads are to be blessed to run on the good CCD and which shall take the back seat. Though in many cases such information is already provided, via scheduling priorities and scheduling classes (interactive work versus batch jobs).

(However, even with homogeneous CCDs, there remain corner cases in which the scheduler needs hints: namely, which threads should be grouped into a single cache domain because of intense data sharing between those threads.)
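To make that last kind of hint concrete, here is a minimal Linux sketch (assuming logical CPUs 0-7 form one CCD/L3 cache domain; that layout is an illustrative assumption, not something from this thread) that pins two data-sharing threads into the same cache domain:

```python
# Group two data-sharing worker threads into one (assumed) cache domain.
# ASSUMPTION: logical CPUs 0-7 share one L3; verify with `lscpu --extended`
# or /sys/devices/system/cpu/cpu*/cache/index3/shared_cpu_list.
import os
import threading
import queue

CCD0 = set(range(0, 8))  # one L3 cache domain (assumed)

work = queue.Queue()
results = queue.Queue()

def worker(role: str) -> None:
    # Pin this thread (by kernel TID) into the shared cache domain, so
    # producer and consumer exchange data through the same L3.
    os.sched_setaffinity(threading.get_native_id(), CCD0)
    if role == "producer":
        for i in range(1000):
            work.put(i)
        work.put(-1)  # sentinel: no more items
    else:
        while (item := work.get()) != -1:
            results.put(item * 2)

threads = [threading.Thread(target=worker, args=(r,))
           for r in ("producer", "consumer")]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("consumed:", results.qsize(), "items, all within one cache domain")
```

Left unpinned on a dual-CCD part, these two threads could land on different CCDs and bounce the shared queue between two L3 caches; that relationship is exactly what the post above says the scheduler cannot reliably infer on its own.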
> The CCD with V-Cache is slower, or there is little reason to have both types.

Can you elaborate? I honestly don't follow your logic. If the V-Cache CCD is faster, then when gaming, that CCD will get allocated the threads, except when they spill over into the second CCD. When doing productivity work, all cores will be loaded. Having the V-Cache CCD be faster avoids the odd scheduling where a game would prefer the faster non-V-Cache CCD.
> Can you elaborate? I honestly don't follow your logic. If the V-Cache CCD is faster, then when gaming, that CCD will get allocated the threads, except when they spill over into the second CCD. When doing productivity work, all cores will be loaded. Having the V-Cache CCD be faster avoids the odd scheduling where a game would prefer the faster non-V-Cache CCD.

It isn't faster; it's 300-400 MHz slower. Peak 1T performance would suffer.
> It isn't faster; it's 300-400 MHz slower. Peak 1T performance would suffer.

What would you buy a 16-core V-Cache model to do that would require faster single-thread performance on the non-V-Cache CCD? What use case are you envisioning?
> What would you buy a 16-core V-Cache model to do that would require faster single-thread performance on the non-V-Cache CCD? What use case are you envisioning?

I browse the web from time to time. Making bloated web apps 5% faster at a lower cost than a dual-3D-CCD part is a win-win.
> I browse the web from time to time. Making bloated web apps 5% faster at a lower cost than a dual-3D-CCD part is a win-win.

I agree AMD is staying the course with one V-Cache CCD. However, having gaming run smoothly, which is the selling point of the V-Cache models in the first place, seems like a much better tradeoff than having the faster non-V-Cache CCD help with web apps.
Which is why AMD is staying the course.
> Also, it may not be important in the end, but you're making an assumption that they're not saving the best-binned CCDs for the higher-end 16-core V-Cache models.

No, I included that.