blckgrffn
Diamond Member
They are just happy to see me is all.
AMD doesn't make big dies. They make big packages.
Ok, fine, but Turing could run them in parallel, at the same time on the same CUDA core. Later, in Ampere, an additional FP32-only datapath was added that could be active only if INT was not in use, and now they all support INT but only without FP32 - a clear regression from the core that Turing had.
It dual issues a vector FP instruction at the same time as a vector INT instruction.
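A minimal CUDA sketch (not from the thread; the kernel and array names are made up) of the kind of inner loop this argument is about: every FP32 multiply-add is interleaved with integer index math, which is the mix of work that Turing's separate INT and FP32 datapaths could issue concurrently.

// Illustrative only: interleaved INT index math and FP32 arithmetic per thread.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale_gather(const float *in, float *out, const int *idx, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // INT: thread index math
    if (i < n) {
        int j = idx[i] & (n - 1);                   // INT: masked gather index (n is a power of two)
        out[i] = in[j] * 1.5f + 0.25f;              // FP32: fused multiply-add
    }
}

int main()
{
    const int n = 1 << 20;
    float *in, *out;
    int *idx;
    cudaMallocManaged(&in,  n * sizeof(float));
    cudaMallocManaged(&out, n * sizeof(float));
    cudaMallocManaged(&idx, n * sizeof(int));
    for (int i = 0; i < n; ++i) { in[i] = 1.0f; idx[i] = i; }
    scale_gather<<<(n + 255) / 256, 256>>>(in, out, idx, n);
    cudaDeviceSynchronize();
    printf("out[0] = %f\n", out[0]);                // expect 1.750000
    cudaFree(in); cudaFree(out); cudaFree(idx);
    return 0;
}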
The demand is too damn high.
How is this such a paper launch if they ramped down 40 series, and it's on the same node?
There is no stock, far less than the 40 series, which was FAR more desirable by a MILE.
The demand is too damn high.
Alternatively, just read it as "Jarred needed to come up with three positives. The fact that this is a pretty weak positive tells you there's not a lot of overwhelmingly awesome things to say about the product."
A couple of responses later there is a note from Jarred about “something funny going on in the Blackwell drivers” limiting performance at lower resolutions, which wipes away all of the 5080's performance advantage over the 4080S.
Jarred at Tom's basically admits he needs to come up with positives about Nvidia:
I disagree. It would be like Ford releasing a new Mustang that was just like the previous Mustang, because that is exactly what happened.
You can disagree, but I think you're factually wrong. As recently as Ampere the x80 cards used the largest die. The size ratio between the largest die and the next step down is the largest it's ever been. The last time it was close to this large of a gap was Maxwell, but the 980 was on the GM200 die which was the largest die.
Over the past ten generations of NVidia cards, the x80 card was on the largest die in 6 of them. All 4 of the times it wasn't have come in the past 5 generations. In other words, the branding has been watered down over time.
Another way to look at this is where the x80 card sat in the product stack. Going back all the way to Fermi (the 400 series) it was the top card, and until the original TITAN arrived with Kepler the x80 was the top single-die GPU, with early x90 cards being dual-GPU cards.
They've just been doing it slowly enough that you haven't noticed. And for most of the period where this has been happening, AMD has offered no real competition, so no one seemed to care. The only recent generation where the x80 card was on the big die was when AMD had a competitive full-stack RDNA2 architecture.
GA103 with 8GB or 16GB? In either case it would have been soundly beaten by N21 chops. If it had been intended, Nvidia had plenty of reasons to call an audible.
IMO, the 3080 used the biggest die partially because of TTM issues and partially because of Samsung 8 nm yields. Probably originally intended to use GA103.
The 980 was not GM200, it was GM204 - only the 980 Ti and Titan were GM200.
If that overclock ends up being typical of what you can expect from 5080s, that would get pretty dang close to stock 4090 performance.
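For rough context (the numbers here are illustrative assumptions, not figures from this thread): if a stock 4090 leads a stock 5080 by something like 10-15% at 4K, then a +12% overclock puts the 5080 at 1.12x its stock performance, and 1.12 / 1.15 ≈ 0.97, i.e. within roughly 3% of the 4090.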
HUB tested an MSI 5080 and managed to OC it to 3.2 GHz, at least?
Really does seem like N4P would have helped quiet the haters a bit, even if there are other issues with Blackwell.
From Techpowerup's review of the Suprim:

Overclocking the RTX 5080 Suprim SOC worked very well, we gained +12% in real-life performance on top of the factory OC. This is much more than what we usually see on modern graphics cards.

And his impressions on overclocking:

Unfortunately NVIDIA is limiting the maximum overclocking for the GDDR7 memory chips to +375 MHz—usually NVIDIA doesn't have any OC limits. At first, I was wondering why NVIDIA left so much performance on the table, especially when the card's gen-over-gen gains are so small, but then I realized that they might want to build a RTX 5080 Super next year. The problem is that RTX 5080 already maxes out the GB203 GPU, so additional units can't be enabled, and they'll have to rely on increases to clock speeds only. Looking at our numbers, higher clocks and memory speeds, some firmware optimizations, maybe even faster GDDR7 chips can definitely yield +10% in mass production—RTX 5080 Super spotted.

You know, it seemed odd that the FE is rated for, what, 360 W, but pulls nothing near that in games. A Super with 3 GB chips and higher clocks makes a ton of sense. Nvidia is so lame.
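On the "3 GB chips" point, for reference (a back-of-the-envelope note, not from the thread): the 5080's 256-bit bus is fed by eight 32-bit GDDR7 packages, so today's 2 GB modules give 8 × 2 GB = 16 GB, and swapping in 3 GB modules would give 8 × 3 GB = 24 GB with no change to bus width.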