[vrworld] Pascal Secrets: What makes the Nvidia GeForce GTX 1080 so fast?


TheELF

Diamond Member
Dec 22, 2012
3,993
744
126
It could have gotten another 20% boost on top of its current performance if it had ACEs, but it doesn't have those, so there is no 20% boost to be had. Understand?

100+20 is more than 100, but 100+40 is even more than both 100 and 120, get it?

Yeah, I get it: because AMD runs at 100-40 in DX11, that must be why it's the better DX12 card...
/s
 

Vaporizer

Member
Apr 4, 2015
137
30
66
There is plenty of performance boost with Nvidia in the scenarios DX12 was made for, small cores vs. huge GPUs. Look up the benches. Of course you don't get a boost when you bench with a single-core monster that just brute-forces through everything on a single thread (DX11).
Interesting. This was already delivered by the Mantle API back in 2013. Back then, folks didn't care about the performance boost on weak CPUs. The green team especially cheered that there was only a low performance boost at the graphics limit. Seems like the tides and metrics have changed?
 

TheELF

Diamond Member
Dec 22, 2012
3,993
744
126
Interesting. This was already delivered by the Mantle API back in 2013. Back then, folks didn't care about the performance boost on weak CPUs. The green team especially cheered that there was only a low performance boost at the graphics limit. Seems like the tides and metrics have changed?

There still is only a low performance boost at the graphics limit: DX12 doesn't go past DX11 performance if DX11 performance is perfect, or as close to perfect as possible.
You are just able to reach the same limit with slower CPUs.
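That argument can be sanity-checked with a toy model (every millisecond figure below is made up purely for illustration): the frame rate is capped by whichever of the CPU and GPU takes longer per frame, so a lower-overhead API only helps when the CPU side was the bottleneck.

#include <algorithm>
#include <cstdio>

// Frame rate is capped by the slower of the per-frame CPU (driver + game)
// cost and the per-frame GPU cost.
static double fps(double cpu_ms, double gpu_ms) {
    return 1000.0 / std::max(cpu_ms, gpu_ms);
}

int main() {
    const double gpu_ms = 10.0; // hypothetical GPU cost per frame -> 100 fps ceiling

    // Slow CPU: DX11 single-threaded submission is the bottleneck;
    // DX12 spreads submission across cores and the GPU ceiling is reached again.
    std::printf("slow CPU, DX11: %.0f fps\n", fps(14.0, gpu_ms)); // ~71 fps, CPU-bound
    std::printf("slow CPU, DX12: %.0f fps\n", fps(6.0,  gpu_ms)); // 100 fps, GPU-bound

    // Fast CPU: DX11 already fits under the GPU cost, so DX12 gains nothing.
    std::printf("fast CPU, DX11: %.0f fps\n", fps(8.0,  gpu_ms)); // 100 fps either way
    return 0;
}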
 

tential

Diamond Member
May 13, 2008
7,355
642
121
There is plenty of performance boost with Nvidia in the scenarios DX12 was made for, small cores vs. huge GPUs. Look up the benches. Of course you don't get a boost when you bench with a single-core monster that just brute-forces through everything on a single thread (DX11).

Well then, provide some benches supporting your claim...
 

airfathaaaaa

Senior member
Feb 12, 2016
692
12
81
So? 20% worse for any single card... in 1440p.
You'd better go search for some 4K benchmarks for single-threaded games like BioShock.
Yeah, because games with GameWorks are the best example for measuring a card's potential D:

Not, let's say,
The Division,
Need for Speed,
and almost everything else that doesn't have GameWorks in it...
 

hawtdawg

Golden Member
Jun 4, 2005
1,223
7
81
"What makes it so fast"

Nothing. This is the most lackluster new GPU released on a new node (a full node jump, no less) in the history of GPUs.
 

kraatus77

Senior member
Aug 26, 2015
266
59
101
So? 20% worse for any single card... in 1440p.
You'd better go search for some 4K benchmarks for single-threaded games like BioShock.
3% less, stock vs. stock, and the Fury X isn't the only card AMD makes. I don't need anything more to make up my mind here. Or should we include AotS and DX12 too, to show NV's weakness? No, that would be unfair, right?
 

LTC8K6

Lifer
Mar 10, 2004
28,520
1,575
126
I think the 1080 will impress more on release. I expect more out of it with proper drivers, too.

I think NV pulled a bit of a surprise with the clock speeds as well.
 

swilli89

Golden Member
Mar 23, 2010
1,558
1,181
136
I think the 1080 will impress more on release. I expect more out of it with proper drivers, too.

I think NV pulled a bit of a surprise with the clock speeds as well.

Please! I hope you're not implying nVidia actually underplayed the 1080's performance; that would be so unlike them. They are the king of the mountain when it comes to hyperbole and playing up their products. If anything, nVidia would have accounted for future driver tweaks in their little performance chart, which, by the way, gave no indication as to which games were tested.

I tend to think the stock 1080 will actually do the opposite and regress in real life to something like 10-15% over the 980 Ti.

People seem to only want to compare against nVidia's last generation, but how will Pascal fare against the 390X and Fury? Do you really think it will top the Fury X at 4K? Notice how nVidia put VR in the spotlight and not 4K? I would think someone spending $700 on a video card would want 4K performance over anything else...
 
Arachnotronic

Mar 10, 2006
11,715
2,012
126
Please! I hope you're not implying nVidia actually underplayed the 1080's performance; that would be so unlike them. They are the king of the mountain when it comes to hyperbole and playing up their products. If anything, nVidia would have accounted for future driver tweaks in their little performance chart, which, by the way, gave no indication as to which games were tested.

That's a stretch.

I tend to think the stock 1080 will actually do the opposite and regress in real life to something like 10-15% over the 980 Ti.

I wouldn't bet on it.

People seem to only want to compare against nVidia's last generation, but how will Pascal fare against the 390X and Fury? Do you really think it will top the Fury X at 4K? Notice how nVidia put VR in the spotlight and not 4K? I would think someone spending $700 on a video card would want 4K performance over anything else...

I spend tons of money on graphics cards and I don't care about 4K. I have 144Hz 2560x1440 monitors and I want to take full advantage of them.

If you care about 4K and if GTX 1080 ultimately flops at 4K (doubtful) compared to AMD Fury, then I would recommend buying AMD Fury.
 

LTC8K6

Lifer
Mar 10, 2004
28,520
1,575
126
Please! I hope you're not implying nVidia actually underplayed the 1080's performance; that would be so unlike them. They are the king of the mountain when it comes to hyperbole and playing up their products. If anything, nVidia would have accounted for future driver tweaks in their little performance chart, which, by the way, gave no indication as to which games were tested.

I tend to think the stock 1080 will actually do the opposite and regress in real life to something like 10-15% over the 980 Ti.

People seem to only want to compare against nVidia's last generation, but how will Pascal fare against the 390X and Fury? Do you really think it will top the Fury X at 4K? Notice how nVidia put VR in the spotlight and not 4K? I would think someone spending $700 on a video card would want 4K performance over anything else...

FuryX competitor would be a 1080ti, I would think?

1080 would be in the 980 spot. One notch below the top dog card level.

We haven't seen the 980ti counterpart yet, the 1080ti.

Nevertheless, I think the 1080 will be more impressive as time goes by.
 

linkgoron

Platinum Member
Mar 9, 2005
2,335
857
136
FuryX competitor would be a 1080ti, I would think?

1080 would be in the 980 spot. One notch below the top dog card level.

We haven't seen the 980ti counterpart yet, the 1080ti.

Nevertheless, I think the 1080 will be more impressive as time goes by.

That's not how it works.
The 1080 is the new top dog, and it is launching at a $700 MSRP, which is higher than the 980 Ti's MSRP.

Either way, I agree with Arachnotronic - I bet the 1080's stock performance will be just a bit better than an after-market 980 Ti at 4K (i.e., 25% better than the stock 980 Ti, 5-10% better than after-market), which easily beats the Fury X. If it OCs well, it'll be much better.

The Fury X isn't even close to after-market 980 Tis.
https://www.techpowerup.com/reviews/Gigabyte/GTX_980_Ti_XtremeGaming/23.html
 

Lepton87

Platinum Member
Jul 28, 2009
2,544
9
81
"What makes it so fast"

Nothing. This is the most lackluster new GPU released on a new node (a full node jump, no less) in the history of GPUs.

Well, a full node would be 20nm; 16nm is a bit smaller than 20nm, but the biggest difference is FinFET vs. planar. Not everything is NV's fault: even though the die got smaller, the cost per transistor did not go down, so despite the huge difference in size between GP104 and GM200, the former is not actually cheaper to make. The card as a whole should be cheaper, though: a simpler PCB thanks to the pathetic 256-bit memory bus, and a cheaper power section. And yet the card is actually more expensive. That's just NV. But don't worry, in a year the 1080 is going to be 50% faster than the 980 Ti thanks to mature drivers... err, I mean thanks to completely ignoring the 980 Ti in new driver releases, making it comparable to a 390X in performance.
 

tential

Diamond Member
May 13, 2008
7,355
642
121
That's a stretch.



I wouldn't bet on it.



I spend tons of money on graphics cards and I don't care about 4K. I have 144Hz 2560x1440 monitors and I want to take full advantage of them.

If you care about 4K and if GTX 1080 ultimately flops at 4K (doubtful) compared to AMD Fury, then I would recommend buying AMD Fury.
Nvidia already loses to AMD at 4K in both single- and multi-card situations. Nvidia's terrible SLI scaling needs to be fixed, but it seems Nvidia is putting even fewer resources into that end. However, if AMD follows suit, then it gets interesting.
 

swilli89

Golden Member
Mar 23, 2010
1,558
1,181
136
I spend tons of money on graphics cards and I don't care about 4K. I have 144Hz 2560x1440 monitors and I want to take full advantage of them.

If you care about 4K and if GTX 1080 ultimately flops at 4K (doubtful) compared to AMD Fury, then I would recommend buying AMD Fury.

Yeah man, I mean if that's how you game then you want the 1080, which is undoubtedly better than anything else out there. Measurably faster than an overclocked 980 Ti at 1440p, and it runs cooler? No-brainer for you :thumbsup:

I honestly thought that IF nVidia had released their highest-end GP104 at $600-$700 it would blow everything out of the water, like 50% faster than the 980 Ti; that's my only gripe. Maybe it will scale well and, at 2.2GHz, truly dominate the high end until the Vega/P100 launch.

FuryX competitor would be a 1080ti.

I would look at it as: in June 2016 you can buy a $500 980 Ti, a $500 Fury X, or a $700 1080. Seems like the 1080 competes only with Fury and GM200.

Nvidia already loses to AMD at 4K in both single- and multi-card situations. Nvidia's terrible SLI scaling needs to be fixed, but it seems Nvidia is putting even fewer resources into that end. However, if AMD follows suit, then it gets interesting.

Actually nVidia has claimed they have made a big leap..

Nvidia reckons the SLI HB bridge doubles the maximum theoretical transfer bandwidth in comparison to current generation Maxwell architecture when in SLI. This obviously helps in the speed at which the two graphics cards can communicate and share data with one another, in theory leading to smoother frame times and better SLI scaling.

The downside is that this particular Nvidia branded bridge is only going to be available for its GeForce GTX 1080 graphics cards at the moment. No dice on the 1070s at the moment, but a standard SLI bridge will still suffice. I’d guess from Nvidia’s viewpoint they’re thinking if you want more power than a 1070 then you’re probably going to opt for a GTX 1080 anyway.

http://www.game-debate.com/news/?news=20148&graphics=GeForce%20GTX%201080&title=Nvidia%20SLI%20HB%20Bridge%20Doubles%20GTX%201080%20Transfer%20Rate%20For%20Better%20SLI%20Scaling%20Performance

So nVidia has done an interesting thing here and beefed up 1080 SLI while the 1070 makes do with the older version.
 

maddie

Diamond Member
Jul 18, 2010
4,787
4,771
136
This is the confusing part. They are the same die, one full, one cut. How is it possible to have 3- and 4-way SLI on the 1070 but be limited to 2-way on the 1080, no matter whether the single bridge is used?

Do you know that the 1070 models will have greater than 2-way SLI?
 

swilli89

Golden Member
Mar 23, 2010
1,558
1,181
136
This is the confusing part. They are the same die, one full, one cut. How is it possible to have 3 & 4 way SLI on 1070 and be limited to 2 way on 1080 no matter if the single bridge is used?

Do you know that the 1070 models will have greater than 2 way SLI?

No, I'm not sure about that. Seems like nVidia may be artificially differentiating the 1080 this way by locking out the 1070. Maybe it's unlockable? Or it could be actual electrical traces on the PCB that only the 1080 has.

Still, though, it's interesting to see a supposedly faster version of multi-GPU.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
So nVidia has done an interesting thing here and beefed up 1080 SLI while the 1070 makes do with the older version.

I think that NV actually has a real limitation with SLI HB. That link states NV is doubling the SLI bandwidth, yet it's limited to only two 1080 cards. We know that a 1080 has 2x SLI fingers. It's highly likely that NV's new SLI HB bridge gets 2GB/s because it combines both of the SLI fingers on 2 cards.



Since the cards require 2 separate SLI fingers working together to double the bandwidth, if you introduce a 3rd card, this type of setup wouldn't work. It means there is no flexibility at all in this type of arrangement.

It also means old SLI bridges won't work without a performance penalty; and of course NV will start selling these upgraded SLI bridges for a fee. In AMD's case, the upgraded XDMA was a free feature. :sneaky:

What makes it worse is that you have to buy a new SLI bridge depending on the mobo you have, because it's not PCIe.

With $70/$100 premiums for reference coolers plus $20-30 for an SLI bridge, it all starts to add up. More $$$ to NV, less flexibility for PC gamers.

It's also not as easy anymore to just buy the new SLI HB bridge. Because they are rigid and larger in size, you have to do research, as certain after-market cards may not have sufficient space between the SLI fingers and the extended AIB heatsink.




AIBs will need to think carefully now about how they design after-market heatsinks to not interfere with the rigid SLI HB bridge.

SLI BRIDGE (1st gen)
What is the function of the SLI connector?
The SLI connector is a proprietary link between GPUs that transmits synchronization, display, and pixel data. The SLI connector enables inter-GPU communication of up to 1GB/s, consuming no bandwidth over the PCI Express bus.
http://www.geforce.com/hardware/technology/sli/faq

By deduction, then: SLI HB = 2GB/s

vs.

AMD's CrossFire bridges were limited to 0.9GB/s:

"In AMD’s current CFBI implementation, which itself dates back to the X1900 generation, a CFBI link directly connects two GPUs and has 900MB/sec of bandwidth."
~ AT

Year 2013 AMD XDMA = 16GB/sec



Shockingly, after introducing NVLink, NV's Pascal GP104 has 8x less available bandwidth between 2 cards than AMD's 2013 XDMA does. This means AMD has more available bandwidth even in Quad-CF on an X99 platform, or a Z170 platform with a PLX chip, than NV has on a dedicated SLI HB bridge with 2 cards.

If PCIe 4.0 comes out, AMD's bandwidth would suddenly jump to 32GB/sec!
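As a quick sanity check on the arithmetic above (the 2GB/s SLI HB figure is the deduction from NV's FAQ, not an official spec, and the PCIe numbers are the usual approximate per-direction x16 rates):

#include <cstdio>

int main() {
    // Quoted/derived figures, in GB/s:
    const double sli_classic = 1.0;               // classic SLI bridge (NV FAQ)
    const double sli_hb      = 2.0 * sli_classic; // deduced above: HB gangs both fingers
    const double cf_bridge   = 0.9;               // old CrossFire bridge (AnandTech)
    const double pcie3_x16   = 16.0;              // what XDMA can use on PCIe 3.0
    const double pcie4_x16   = 2.0 * pcie3_x16;   // PCIe 4.0 doubles the per-lane rate

    std::printf("classic SLI vs CF bridge : %.1fx\n", sli_classic / cf_bridge); // ~1.1x
    std::printf("SLI HB vs classic SLI    : %.0fx\n", sli_hb / sli_classic);    // 2x
    std::printf("XDMA (PCIe 3.0) vs SLI HB: %.0fx\n", pcie3_x16 / sli_hb);      // 8x
    std::printf("XDMA on PCIe 4.0         : %.0f GB/s\n", pcie4_x16);           // 32
    return 0;
}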

In this entire subforum, you, glo, and RS post the most informative posts :thumbsup:

Thanks a lot! I am nowhere near as knowledgeable as some of the guys on this forum who deeply understand GPUs. I actually have to research and read up on the technology before I post, because often I don't even know how the technology works. I am not an electrical engineer either, and I don't work in the GPU industry. What I do have is a pretty good memory, which allows me to find key information quickly, since I recall reading it on website ABCD for product XYZ.

If you have an interest in tech overall, there are a lot of great articles that help you learn. For example, I just stumbled on this, so I'll read it to see if I can learn something new.
 

Mahigan

Senior member
Aug 22, 2015
573
0
0
Bolded part is not true.

The Maxwell cards do have Asynchronous Compute Engines - they just aren't as good at it as AMD cards are. No reason to suspect Nvidia would remove them from Pascal.

EDIT:

Look at the table near the bottom of this page.

Maxwell could, under CUDA, make use of an onboard ARM chip for the management of graphics and compute loads. This doesn't work for DX12, so that graph is wrong.
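For context, "async compute" at the API level just means submitting work on a separate compute queue alongside the graphics queue; whether the GPU actually overlaps the two (e.g. via AMD's ACEs) is a hardware scheduling detail the API does not expose. A minimal D3D12 sketch (Windows only, error handling omitted):

#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
#pragma comment(lib, "d3d12.lib")
using Microsoft::WRL::ComPtr;

int main() {
    // Default adapter; feature level 11_0 is enough for this illustration.
    ComPtr<ID3D12Device> device;
    D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));

    // Graphics work is recorded against a DIRECT queue...
    D3D12_COMMAND_QUEUE_DESC gfx = {};
    gfx.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ComPtr<ID3D12CommandQueue> gfxQueue;
    device->CreateCommandQueue(&gfx, IID_PPV_ARGS(&gfxQueue));

    // ..."async compute" is simply a second, COMPUTE-type queue on the same
    // device. Whether the GPU truly runs both concurrently is up to the
    // hardware scheduler, not something the application controls.
    D3D12_COMMAND_QUEUE_DESC comp = {};
    comp.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    ComPtr<ID3D12CommandQueue> computeQueue;
    device->CreateCommandQueue(&comp, IID_PPV_ARGS(&computeQueue));
    return 0;
}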
 

Mahigan

Senior member
Aug 22, 2015
573
0
0
Basically, what NVIDIA has done with Pascal is overclock Maxwell and split the SMs in two in order to boost compute efficiency. This will help push Pascal past the Fury X with ease under DX12 scenarios.

The question is, what about Vega? Vega will be quite a boost over Fiji.

We know that Polaris is only a mainstream GPU. It likely won't be competing in the same bracket as the 1080; heck, it might be closer to a 1070. Vega, on the other hand, will have two variants: Vega 10 and Vega 11.

Seems to me that buying a 1080 is a huge mistake, as AMD has pushed Vega forward to a Sept/Oct launch. If we assume mass availability of the 1080 by July, then 2 months later we're going to have a GPU that will bury the 1080 with relative ease. All Vega needs to do is bump up performance by 5 FPS over a Fury X in AotS to beat a 1080.

Of course there are other games, but I don't see NVIDIA retaining the performance crown for long, even if they release a 1080 Ti.

If Polaris 10 can deliver better-than-390X performance despite using GDDR5 on a 256-bit bus, then we're looking at Vega being quite powerful.
 

maddie

Diamond Member
Jul 18, 2010
4,787
4,771
136
No, I'm not sure about that. Seems like nVidia may be artificially differentiating the 1080 this way by locking out the 1070. Maybe it's unlockable? Or it could be actual electrical traces on the PCB that only the 1080 has.

Still, though, it's interesting to see a supposedly faster version of multi-GPU.

I think that NV actually has a real limitation with SLI HB. That link states NV is doubling the SLI bandwidth, yet it's limited to only two 1080 cards. We know that a 1080 has 2x SLI fingers. It's highly likely that NV's new SLI HB bridge gets 2GB/s because it combines both of the SLI fingers on 2 cards.

Since the cards require 2 separate SLI fingers working together to double the bandwidth, if you introduce a 3rd card, this type of setup wouldn't work. It means there is no flexibility at all in this type of arrangement.

[...]
We could be left with the interesting situation where a 1070 4-way SLI is a lot faster than a 1080 2-way SLI, and close to the same price.
 

flopper

Senior member
Dec 16, 2005
739
19
76
Basically, what NVIDIA has done with Pascal is overclock Maxwell and split the SMs in two in order to boost compute efficiency. This will help push Pascal past the Fury X with ease under DX12 scenarios.

The question is, what about Vega? Vega will be quite a boost over Fiji.

[...]

Vega will be an $800 card, too.
Polaris might still be the P/P card if the OC/IPC is good.
 

coercitiv

Diamond Member
Jan 24, 2014
6,403
12,864
136
Seems to me that buying a 1080 is a huge mistake, as AMD has pushed Vega forward to a Sept/Oct launch.
We haven't seen Pascal reviews, we haven't seen Polaris reviews, and AFAIK AMD has yet to go on record with Vega being a 2016 release, yet you reckon the 1080 is a bad buy. You built up so much credibility on this forum by patiently explaining how GPUs work, how (DX12) features are implemented, and how they affect performance, and now you choose to spend it lavishly on unsubstantiated claims.

I was disgusted by the way media outlets welcomed the 1080 and embraced the fallacy of comparing it to the Titan X when aftermarket 980 Ti models offered more performance for (relatively) less dough. I literally felt the need to cover my face in shame when "hardware enthusiasts" considered the jump in performance much better than what Intel is offering. But I wouldn't even dream of talking bad about this product until it's properly compared, at least with Maxwell.

Think about it again: you're advising people not to buy a product that has yet to be launched (and reviewed) in favor of a product that will launch even later (with no current specs available).
 