Just posting my thoughts on the memory topic for Sienna again.
Looking at this chart from Micron below, G6X offers only a very slight decrease in energy per bit transferred over G6, but given the very significant increases in bandwidth (from 14 to 21 Gbps) and capacity (from 8 to 16 GiB),
total energy expended on the memory subsystem will increase significantly (30-35%).
Now consider HBM2E: the power savings are more pronounced than for G6X, granted it is run within spec (i.e. 3.2 Gbps, or ~820 GB/s over a 2048-bit bus).
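As a sanity check on those numbers, here's a back-of-the-envelope sketch in Python. The pin speeds and bus widths come from the figures above; the pJ/bit values are illustrative placeholders (Micron's chart shows only a slight per-bit improvement), not published figures.

```python
def bandwidth_gbs(gbps_per_pin: float, bus_width_bits: int) -> float:
    """Peak bandwidth in GB/s from per-pin rate (Gbps) and bus width."""
    return gbps_per_pin * bus_width_bits / 8

def io_power_w(pj_per_bit: float, gbps_per_pin: float, bus_width_bits: int) -> float:
    """Interface power in watts: energy per bit x bits moved per second."""
    bits_per_s = gbps_per_pin * 1e9 * bus_width_bits
    return pj_per_bit * 1e-12 * bits_per_s

# HBM2E within spec: 3.2 Gbps over a 2048-bit bus
print(bandwidth_gbs(3.2, 2048))   # 819.2 GB/s, the ~820 GB/s above

# G6 vs G6X on a 384-bit bus, with placeholder pJ/bit values
g6  = io_power_w(7.5, 14, 384)    # assumed ~7.5 pJ/bit for G6
g6x = io_power_w(7.0, 21, 384)    # assumed slightly lower pJ/bit for G6X
print(g6x / g6)                   # ~1.4: total I/O power still rises
```

Even with the per-bit energy reduced, the 1.5x jump in pin speed dominates, so total interface power goes up; the exact percentage depends on the actual chart values.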
This seems like a good design opportunity that AMD capitalized on with Sienna.
The memory subsystem should run at lower power than the VII's when kept within spec (3.2 Gbps).
Due to the increased capacity, Sienna could make do with two stacks instead of the VII's four. That reduces the complexity of the RDL, die bonding and interposer, so in the end some cost could be shaved off.
Sienna could also save premium die space. Recall that bigger dies are not only costlier, they are also more likely to be hit by defects at the same defect density, hurting yield. Versus a 384-bit G6X bus, a 2048-bit HBM2E interface could save around ~75 mm² of die space.
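To put a rough number on the yield point, here's a sketch using the standard Poisson yield model, Y = exp(-D*A). The defect density and the baseline die size are illustrative assumptions, not foundry or AMD figures; only the ~75 mm² saving comes from the estimate above.

```python
import math

def poisson_yield(defect_density_per_cm2: float, area_mm2: float) -> float:
    """Fraction of dies with zero defects under a Poisson defect model."""
    area_cm2 = area_mm2 / 100
    return math.exp(-defect_density_per_cm2 * area_cm2)

D = 0.1  # defects per cm^2 -- illustrative assumption
big   = poisson_yield(D, 505)        # assumed die size with a 384-bit G6X PHY
small = poisson_yield(D, 505 - 75)   # ~75 mm^2 saved by the 2048-bit HBM2E PHY
print(small / big)                   # ~1.08: roughly 8% more good dies
```

On top of the yield gain, a smaller die also means more candidates per wafer, so the cost benefit compounds.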
Regarding costs, the Radeon VII was available for 699 USD, and that was with four stacks. Considering that these top-end RDNA2 cards are going to sell for around 1K or probably even higher (if they perform well), HBM2E is much more justified than it was for the VII/V64/56.
For Sienna: less die area spent on G6 PHYs, less energy expended, memory cheaper than the VII's or at worst the same, a less complex PCB... sounds like a win on all counts imo.
Should help keep that last mile TBP in control.
Aug 2019
The new version of the high bandwidth memory standard promises greater speeds and feeds and that’s about it.
semiengineering.com
“When we built HBM2, we wanted to expand the market breadth the device could attack, but also add in two dimensions—capacity and more bandwidth,” said Joe Macri, corporate vice president and chief tech officer of the compute and graphics division at AMD. AMD is a major partner with Samsung in the development of HBM. “It’s still 1,024 bits wide, but doubled the frequency to two gigachannels and added Error Correction Code (ECC) to get into data center and AI and machine language, since the entire data center market is built on a trusted data model.”
With HBM2E, AMD, one of the co-developers of HBM, is turning the same levers again. “The only bits added to the interface were to increase addressability, but it’s the same interface, it just runs at a higher interface of 3.2 gigatransfers per second,” Macri said.
Dec 2019
Different approaches for breaking down the memory wall.
semiengineering.com
Three years ago, HBM cost about $120/GB. Today, the unit prices for HBM2 (16GB with 4 stack DRAM dies) is roughly $120, according to TechInsights. That doesn’t even include the cost of the package.
Both Hynix and Samsung can run HBM2E beyond the JEDEC-standard 3.2 Gbps. Samsung can run at a mind-boggling 4.1-4.2 Gbps, and Hynix at 3.6 Gbps. Besides the throughput increase, latency is really low at such speeds.