Win2012R2
Senior member
Dec 5, 2024
"Plus, 4LPP yield issues were solved"
Given reputation Samsung will most likely have to guarantee yields via charging per working chip rather than wafer.
"Given reputation Samsung will most likely have to guarantee yields via charging per working chip rather than wafer."
Given that was their contract with Nvidia, it's possible. But 4LPP has been used for multiple Exynos SoC successfully. I wouldn't worry about yields.
"multiple Exynos SoC"
Low power small mobile stuff, no?
"Samsung will most likely have to guarantee yields via charging per working chip rather than wafer."
SS FF yields are fine.
"It sure will be cheap, not good direction for AMD to go cheap again"
Cheap is good, and they've been tinkering with SF PDKs for a while.
"Given reputation Samsung will most likely have to guarantee yields via charging per working chip rather than wafer."
KGD deals have been required for TSMC at times too.
"Given reputation Samsung will most likely have to guarantee yields via charging per working chip rather than wafer."
Hey, sounds like something Intel should look into
"Olympic Ridge wasn't the name I heard for Z6 Desktop. Interesting
8CU RDNA 4 on the iOD for Desktop would be very, very impressive. Basically would turn any Z6 DT CPU into also a G series. With the extra of FSR4 support."
Doesn't make any sense though for Medusa Point to be using the older 3.5 µArch if the desktop package uses RDNA4.
"Hey, sounds like something Intel should look into"
They'll have to, and everyone knows they are desperate.
"I think I know why it's that big. The MS requirement for the copilot+ sticker is 40 tops of sparse fp16, right? 8 CU of RDNA4 hits that at ~2.5GHz.
They are probably using the GPU as the AI accelerator. This way they are at least not wasting any silicon on an accelerator that's never used."
It's 40 TOPs of INT8.
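For context on how those two figures line up, here is a rough back-of-the-envelope check. It assumes RDNA4's WMMA path does roughly 1024 dense FP16 ops per CU per clock, that 2:4 structured sparsity doubles that, and that INT8 runs at twice the FP16 rate; all of these rates are assumptions for illustration, not confirmed specs for the Zen 6 IOD.

```python
# Back-of-the-envelope TOPS estimate for an 8 CU RDNA4 iGPU.
# All per-CU rates below are assumptions, not confirmed specifications.
CUS = 8
CLOCK_GHZ = 2.5

FP16_DENSE_OPS_PER_CU_CLK = 1024  # assumed dense FP16 WMMA rate per CU per clock
SPARSITY_FACTOR = 2               # assumed 2:4 structured sparsity speedup
INT8_FACTOR = 2                   # assumed INT8 rate relative to dense FP16

fp16_dense_tops = CUS * FP16_DENSE_OPS_PER_CU_CLK * CLOCK_GHZ * 1e9 / 1e12
fp16_sparse_tops = fp16_dense_tops * SPARSITY_FACTOR
int8_dense_tops = fp16_dense_tops * INT8_FACTOR

print(f"FP16 dense:  {fp16_dense_tops:.1f} TOPS")   # ~20.5
print(f"FP16 sparse: {fp16_sparse_tops:.1f} TOPS")  # ~41.0
print(f"INT8 dense:  {int8_dense_tops:.1f} TOPS")   # ~41.0
```

Under those assumed rates, 8 CUs at ~2.5 GHz land at roughly 41 TOPS whether you count it as sparse FP16 or dense INT8, which is presumably why both framings of the 40 TOPS Copilot+ threshold keep coming up.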
"Doesn't make any sense though for Medusa Point to be using the older 3.5 µArch if the desktop package uses RDNA4."
Do not treat AMD roadmaps as gospel.
"As nice as 8 CUs sounds AMD isn't just giving us that on desktop from the goodness of their hearts. Aside from the AI theory, it could make sense if the same IOD is being reused for something else where some GPU grunt makes more sense."
More than likely, AMD is getting more Xtors per $ with Samsung 4LPP than with TSMC N4C. We heard about AMD booking Samsung 4nm capacity about a year ago, with many speculating that it was for a "Mendocino"-like processor. As badly as Samsung is floundering, they are likely desperate for customers.
"Here's an idea: How about releasing early RDNA5 silicon with Zen 6 so that it gets into the wild as early as possible and then they can keep fixing the bugs encountered in real world usage and by the time RDNA5 dGPU is ready to ship, the drivers are already fine-wined and ready to go!
@adroc_thurston , please do forward this idea to Lisa. Thank you"
Why would rdna 5 be ready for igpus that far ahead of DGPUs do you think?
"Why would rdna 5 be ready for igpus that far ahead of DGPUs do you think?"
Because a tiny iGPU should be easier/quicker to do than a full dGPU.
"Because a tiny iGPU should be easier/quicker to do than a full dGPU."
the opposite.
"the opposite."
That makes no sense. Once the CU is done, you just add them up. Fewer CUs should mean less time required for validation and performance scaling optimizations. And people won't come out with pitchforks if the iGPU underperforms a bit at launch. They can always improve the drivers later. Not as catastrophic as a dGPU underperforming.
"Once the CU is done, you just add them up"
Integrating the thingy takes a lot of time. Also there's a lot more to the GPU than just the shader core.
"Fewer CUs should mean less time required for validation and performance scaling optimizations"
That's really not the limiting factor for GPU validation lmao.
"They can always improve the drivers later"
Drivers are easy (well, for AMD and NV. They have the know-how and the established codebase). Hweng is hard. See RDNA3. Or Blackwell.
"Integrating the thingy takes a lot of time. Also there's a lot more to the GPU than just the shader core."
Can you please stop making up excuses and just pass my message to Lisa?
"Can you please stop making up excuses and just pass my message to Lisa?"
You ain't getting an iGP-first GFX shipment from AMD.
"I think I know why it's that big. The MS requirement for the copilot+ sticker is 40 tops of sparse fp16, right? 8 CU of RDNA4 hits that at ~2.5GHz.
They are probably using the GPU as the AI accelerator. This way they are at least not wasting any silicon on an accelerator that's never used."
That might be very reasonable, yes. A few additional CUs do not cost that much area. On desktop you get more benefit from a bigger iGPU, and power draw in ML/AI workloads is not limiting here (not with only 8 CU and >=88 W PPT). With 8 CU, the "DT APU" could also become obsolete, which would streamline the portfolio (so no Medusa Point for DT). The gaming performance should match or even exceed mobile Strix Point in raster as well (the 890M with 16 CU is barely faster than the 880M with 12 CU), which would be a huge uplift compared to Zen 4/5 DT. Another critical point here is SW support, and a bigger iGPU would have an edge over an NPU there as well (not only ML/AI workloads but application acceleration in general).
"I think I know why it's that big. The MS requirement for the copilot+ sticker is 40 tops of sparse fp16, right? 8 CU of RDNA4 hits that at ~2.5GHz.
They are probably using the GPU as the AI accelerator. This way they are at least not wasting any silicon on an accelerator that's never used."
Sorry for being a party-pooper, but IIRC MS explicitly requires these TOPS in the form of a dedicated NPU; otherwise I'm sure at least AMD would've preferred to go with some hybrid solution in Strix instead of adding that fat NPU.