Polaris is the name of the architecture. So... which Polaris are you talkin' 'bout, Shintai?
I think there is a very real chance that the bigger Polaris will have HBM1.
The name of the architecture is GCN 4, also called GCN 1.3 in layman terms.
Fury has 4 HBM stacks of 1GB each. Perhaps Polaris 11 (?) will have 6 of those HBM stacks. Eight HBM stacks may be a little too much, thinking of all those traces that need to be laid out on the interposer.
Which will arrive first? Polaris 10 (little Polaris) or Polaris 11 (big Polaris)?
I seem to recall that AMD already had Polaris 10 silicon working at the end of 2015.
Polaris 10 could be outfitted with GDDR5 for cost reasons.
I wasn't aware that AMD actually designated different GCN versions internally - this was something review sites have tacked on to differentiate different feature sets in newer chips.
Please correct me if I'm wrong.
Review sites use the term GCN 1.x. AMD documentation and developer tools use GCN 1 to 4.
GCN 1.3/GCN 4 is the equivalent of NVIDIA's Pascal architecture, while Polaris 10/11 are the equivalent of specific chips like GP106/GP104.
You need volume for it to be meaningful. If AMD can only sell a handful of HBM APUs, it doesn't matter how much they can charge.
There must be demand for $250-300 APUs before it is possible for AMD to actually release one. Before AMD can ask such prices, the APUs need a complete overhaul. After that they are no longer targeted at customers in 3rd world / developing countries, but at the same customers as Intel Iris Pro.
Before any of that can happen, Zen must first succeed.
It would be a new chip design. And not only would Zen have to deliver; Polaris would have to as well. And don't quote PR numbers, they are meaningless.
Mind you, we have already had an APU with a 7870-class IGP since late 2013. Don't act like it is something new and beyond reach.
An Apple design win alone could provide adequate volume. The first-generation Intel Iris Pro was basically a specialty product made for the 15" rMBP.
Polaris 10 is the bigger GPU meant for desktops.
Polaris 11 is the low-power/mobile GPU.
Yes, if Zen were to flop, then none of this matters. But I don't think it will. I expect roughly Sandy Bridge levels of IPC with clock speeds of 3.0-4.0 GHz on desktop parts (depending on core count and other factors) and lower clocks on laptop and server parts as usual. It's one thing to be a bit cautious about what AMD can do, it's another to not think they can match an Intel design from 2011 with a 2016 clean-sheet release. The fact that matching a 2011 Intel design will be reasonably competitive shows how much the market has stagnated in the last 5 years due to the lack of meaningful competition.
Mastering 3D stacking (good yields, good prices) will change everything in the RAM world.
I would hope for an HBM2 APU with memory built in, no system memory needed. It saves board space, simplifies things for OEMs, lets AMD control how much RAM is present to prevent gimping by single-channel solutions, etc.
Cheap OEM products can use parts with more disabled or bad HBM stacks. The price premium would be worth it for laptops or NUC-like devices. I also hope they get an Apple win to drive down costs so they can offer OEMs something like this.
This all depends on Zen and HBM2 availability and cost.
Perhaps if having two smaller dies (CPU SoC die and GPU die) vs. one large "one size fits all die" allows better yields or advantages in mix and match binning it could help justify the cost of the interposer for consumer APUs?
No, the interposer solution would never work due to cost. Even at, say, $5. Remember all the lower-end parts that would need it. Selling a $40-50 APU with a $5 interposer hurts.
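To make the point above concrete, here is a quick back-of-envelope sketch. All figures (the $5 interposer, the ASPs) are the poster's assumptions, not confirmed BOM data; the point is just that a flat interposer adder eats a much larger slice of a cheap APU's selling price than a premium part's.

```python
def interposer_overhead(asp: float, interposer_cost: float = 5.0) -> float:
    """Return the interposer cost as a fraction of the part's selling price.

    asp: average selling price of the APU in dollars (assumed figure).
    interposer_cost: flat cost of the interposer in dollars (assumed figure).
    """
    return interposer_cost / asp

# Same $5 interposer, very different impact across the product stack:
for asp in (40, 50, 150, 300):
    print(f"${asp} APU: interposer is {interposer_overhead(asp):.1%} of ASP")
```

On the assumed numbers, the $5 interposer is over 10% of a $40-50 APU's price but under 2% of a $300 part's, which is why the objection applies mainly to the low end.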
Also be careful with all your extra chip designs; they each cost a lot of money on their own in masks and design work. If you listen to the Intel/ASUS podcast you will also hear why multiple chip designs don't make economic sense.
And you wouldn't be able to fix bad RAM, and would have to replace the entire thing whenever a single "component" breaks down!
Golden! This would be a win for OEMs AND AMD....let's do this boys! Fk the consumer!!!!
Another benefit to separating the CPU and GPU is that consoles could be improved more incrementally. Say every 2-3 years they offer a new version that has the same CPU but a more powerful GPU to better handle graphics improvements that have happened. This is something I've thought was likely to happen, and recent rumors suggest both Microsoft and Sony are looking towards that even this gen.
If "mix and match" binning (i.e., pairing the best quad-core Zen die with the best GPU die) allows AMD to make better APUs, then maybe that extra $5 could be worth it?
This is in contrast to having dedicated APU dies, where some may have highest-bin CPU cores but a lower-bin iGPU, while others have lower-bin CPU cores with the highest-bin GPU.
P.S. As for all the lower-bin CPU SoC and GPU dies .... those can just become desktop parts (i.e., AM4 CPUs or desktop video cards). So the $5 interposer charge would not apply to those.
I think having an interposer could actually reduce the number of chip designs needed.
Example: instead of having a specific APU die, there would just be the desktop CPU SoC die + one of the dGPU dies.
Once you design something for an interposer, you are going to need it, even for the CPU or GPU alone. You will need more chip designs. Plus you need interconnects on every chip you want to use, which also adds cost and size.
You mean you would need a physical interface on the GPU and CPU that uses PCI-E x.x-based physicals, keying, and encoding (like most custom interconnects these days). I can see how that would be a very big problem.
Just to add to this raging debate (if it hasn't been posted already): the PlayStation 4 APU has eight Jaguar cores and a GPU only slightly below full Pitcairn (18 CUs compared to Pitcairn's 20, with the same 32 ROPs and 256-bit GDDR5 memory bus). And that's on a 28nm process. 14LPP allows roughly 2.2x the transistor density. Go down to 14LPP and replace the big GDDR5 controller with the smaller HBM2 controller, and you could almost certainly fit in eight Zen cores plus 40 GCN CUs (2560 shaders, the same count as the R9 290/390) and 64 ROPs. With execution resources this wide, you wouldn't have to crank up the CPU or GPU clock speeds too high to get competitive performance, thus saving power. (CPU clocks would of course have a turbo feature for lightly-threaded apps.)
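The shader arithmetic in that post can be sanity-checked in a few lines. This is only a sketch of the poster's reasoning: the 2.2x density figure and the 40-CU target are the post's assumptions, not spec-sheet data; the only hard constant is that every GCN CU contains 64 shaders.

```python
SHADERS_PER_CU = 64    # each GCN compute unit has 64 shaders (stream processors)

ps4_cus = 18           # PS4 APU GPU, on 28nm (from the post)
proposed_cus = 40      # hypothetical 14LPP APU proposed in the post
density_gain = 2.2     # claimed 28nm -> 14LPP transistor density gain (assumption)

print(f"PS4 GPU shaders:      {ps4_cus * SHADERS_PER_CU}")       # 1152
print(f"Proposed APU shaders: {proposed_cus * SHADERS_PER_CU}")  # 2560, matching R9 290/390

# If GPU area scaled linearly with CU count, the proposed GPU on 14LPP
# would take roughly this multiple of the PS4 GPU's die area:
print(f"Relative GPU area:    {proposed_cus / ps4_cus / density_gain:.2f}x")
```

The last figure comes out near 1.0x, which is the post's whole argument: at 2.2x density, a 40-CU GPU on 14LPP occupies about the same area the 18-CU GPU did on 28nm (ignoring the ROP increase and the memory-controller swap).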
Sources: Sony Is Working On A 'PlayStation 4.5'
http://kotaku.com/sources-sony-is-working-on-a-ps4-5-1765723053