Saylick
Wasn't sure where to put this, but according to Greymon, Hopper is monolithic and there's currently not an MCM variant.
Quick little update. GH100, the biggest die, may not be in an MCM product but the MCM product may consist of two smaller Hopper family dies? I'm not sure why Nvidia wouldn't MCM the big die if it's possible to MCM the smaller dies. Power consumption limits? If that's the case, why have the big die to begin with?
I suppose it could be similar dies but the MCM die has additional logic for coherency. That sounds like a total waste but perhaps they were unsure they would be able to get it working correctly?
If the single die really has ~33% more FP32 shaders, a ~20% bigger die AND 25% more power draw, that doesn't sound so great once you factor in the shrink.
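Quick back-of-envelope on those deltas (all three inputs are rumor figures from this thread, not confirmed specs):

```python
# Back-of-envelope on the rumored generational deltas.
shaders = 1.33  # ~33% more FP32 shaders (rumor)
area    = 1.20  # ~20% bigger die (rumor)
power   = 1.25  # ~25% more power draw (rumor)

print(f"Shaders per mm^2: {shaders / area:.2f}x")   # ~1.11x
print(f"Shaders per watt: {shaders / power:.2f}x")  # ~1.06x
# TSMC quotes roughly 1.8x logic density for N7 -> N5, so an effective
# ~1.11x density gain from these numbers would indeed look poor.
```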
That sounds just plain wrong, especially coming from Samsung 8LPP. Hopefully, more info comes along at GTC.
GA100 is N7, so it's from N7 -> N5.
Oh, duh, yeah. Still, seems wrong - unless GH100 is physically smaller or has some major new functional unit included (or >> cache).
How can it be slightly less than 1000mm2 and monolithic?
Fabs work with NV to push the reticle limits to the max. Yields are probably poor, but for GPUs that expensive, it doesn't matter as much.
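For context, the single-exposure reticle field is the hard ceiling on a monolithic die; a quick check (the 26 x 33 mm field is the industry-standard limit, and GA100's 826 mm^2 is a published die size):

```python
# The single-exposure reticle field caps any monolithic die.
reticle_mm2 = 26 * 33  # standard lithography field size
ga100_mm2 = 826        # GA100's published die size

print(f"Reticle limit: {reticle_mm2} mm^2")                       # 858 mm^2
print(f"GA100 already uses {ga100_mm2 / reticle_mm2:.0%} of it")  # ~96%
# So "slightly less than 1000 mm^2" can't be a single monolithic die;
# anything monolithic has to fit under ~858 mm^2.
```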
NVIDIA's next-gen GeForce RTX 4090 could use up to an insane 850W+ (www.tweaktown.com)
The latest rumor is that the top card will draw 850W. I wonder how that would even work with current PC case designs. Maybe the stock FE cooler will be an AIO.
250W is about the max I consider reasonable, maybe 300W. Anything higher than that is just too much. It's not practical.
That's what we, as consumers, were taught for the past 16-17 years by NV & AMD (remember the GTX 8800 Ultra at 370W?), but the truth is technology has advanced since then. PSUs can deliver much more power these days, and do it quite reliably over the power rails to whatever hungry GPU is connected to them. Advances in cooling and fan technology allow more heat to be dissipated away from the hardware and out of the case, and silicon technology can produce densely packed chips with billions of transistors that together amount to 300-400W of operating power.
Just as we'll have to adjust to the new price norms, we'll need to adjust to more power-hungry hardware, because the number of transistors per mm2 is not going to go down any time soon.
Nvidia was hacked and the data stolen is out. This should give us plenty of information about future architectures.
That's assuming that the stolen data is legitimate. The only details we've seen are threats to release drivers and firmware, the evidence being likely fake leaked "code files" which tell us nothing we don't already know, written in a markup language that I can't even identify. This is all from people who can hardly put two sentences together.
I mean, at 400 watts and two hours a day of use, it would be costing me $75 AUD a year in power just for gaming. Then throw in the rest of the system, PSU inefficiency, and the fact that the video card would likely still use more power on the desktop (where it would easily run for another 10 hours a day), and you are starting to get into a pretty high cost per year... especially when you could turn down the resolution and use upscaling to get similar performance from a much cheaper card.
I would suggest that if a person is worried about the cost of electricity to game and use their computer, they shouldn't do either and should concentrate on the true necessities of life.
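The gaming-only arithmetic above checks out; a minimal sketch (the tariff is an assumed typical Australian residential rate, not a figure from the post):

```python
# Annual electricity cost of GPU gaming time alone.
watts         = 400
hours_per_day = 2
aud_per_kwh   = 0.26  # assumed tariff, not a quoted figure

kwh_per_year = watts / 1000 * hours_per_day * 365
print(f"{kwh_per_year:.0f} kWh/yr -> ${kwh_per_year * aud_per_kwh:.0f} AUD/yr")
# ~292 kWh/yr -> ~$76 AUD/yr, matching the ballpark above before adding
# the rest of the system, PSU losses, and desktop idle hours.
```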
Looks like at best we'll see a doubling in performance from AD102 over GA102, which is in line with all of the previous leaks. ~1.7x SM counts and some clock increases as well.
92 TF FP32 would be like 2.3-2.5x compute power. What's interesting is that the lower-tier parts don't get that much of an SM increase, so their performance increases won't be anywhere near as dramatic.
Right, but as you're aware, performance won't scale linearly with TFLOPS. The rumor mill says a 2x performance increase, which I think is realistic.
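The arithmetic behind those ratios, for anyone checking (full GA102's 84 SMs and ~1.86 GHz boost are published figures; AD102's 144 SMs and ~2.5 GHz clock are leak values, not confirmed):

```python
# Sanity check on the SM and TFLOPS ratios in this post.
def fp32_tflops(sms, clock_ghz, fp32_per_sm=128):
    # FP32 TFLOPS = lanes * 2 ops per FMA * clock (GHz) / 1000
    return sms * fp32_per_sm * 2 * clock_ghz / 1000

ga102 = fp32_tflops(84, 1.86)  # full GA102 -> ~40 TF
ad102 = fp32_tflops(144, 2.5)  # rumored AD102 -> ~92 TF

print(f"SM ratio: {144 / 84:.2f}x")            # ~1.71x, matching the leaks
print(f"Compute ratio: {ad102 / ga102:.2f}x")  # ~2.3x, matching the post
```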