Why do GeForce 10 series cards use PCIe x16?

alphajoza

Junior Member
Dec 29, 2016
2
0
1
GeForce 10 series cards use PCIe 3.0 x16, and since PCIe lanes are limited on CPUs and motherboards, users are forced into a 2x x8 configuration if they want to run two GPUs. Some wonder what limitation (especially in bandwidth) the GPU will hit when forced to run at x8 instead of x16, but tests and benchmarks show there's almost no difference in performance when going from x16 to x8!

My question is: how is this possible? And if the GPU delivers the same performance even in x8 mode, why advertise it as PCIe 3.0 x16?
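
For reference, the theoretical numbers work out like this; a quick back-of-the-envelope sketch, assuming the PCIe 3.0 spec's 8 GT/s per lane and 128b/130b encoding:

```python
# Theoretical PCIe 3.0 bandwidth per the spec: 8 GT/s per lane,
# one bit per transfer, 128b/130b encoding overhead.
GTRANSFERS_PER_LANE = 8e9
ENCODING_EFFICIENCY = 128 / 130

def pcie3_bandwidth_gb_s(lanes):
    """Theoretical one-direction bandwidth in GB/s for a PCIe 3.0 link."""
    bits_per_second = lanes * GTRANSFERS_PER_LANE * ENCODING_EFFICIENCY
    return bits_per_second / 8 / 1e9

for lanes in (8, 16):
    print(f"x{lanes}: ~{pcie3_bandwidth_gb_s(lanes):.2f} GB/s")
# prints roughly: x8: ~7.88 GB/s, x16: ~15.75 GB/s
```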
 

AdamK47

Lifer
Oct 9, 1999
15,318
2,923
126
Both of my Titan Xp cards are running at PCI-E 3.0 X16. There is a small difference between X8 and X16.
 

ElFenix

Elite Member
Super Moderator
Mar 20, 2000
102,425
8,388
126
alphajoza said:
"My question is: how is this possible? And if the GPU delivers the same performance even in x8 mode, why advertise it as PCIe 3.0 x16?"
because the card's interface is a PCIe 3.0 x16 connector. the fact that PCIe auto-negotiates the number of lanes, and that GPUs are perfectly happy to run on fewer of them, doesn't change that.
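
As an aside, you can see for yourself what width a link actually trained at. A minimal sketch, Linux-only; the PCI address here is an assumption, find your card's with lspci:

```python
# Read the negotiated PCIe link width from sysfs (Linux).
from pathlib import Path

def link_width(pci_addr="0000:01:00.0"):  # address is a placeholder
    dev = Path("/sys/bus/pci/devices") / pci_addr
    current = (dev / "current_link_width").read_text().strip()
    maximum = (dev / "max_link_width").read_text().strip()
    return current, maximum

cur, mx = link_width()
print(f"link trained at x{cur} out of a possible x{mx}")
```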
 

sinisterDei

Senior member
Jun 18, 2001
324
26
91
Plus, it's marketing. It's been around since... well, forever. AGP at least.

"Well, our new card is AGP4X, it's up to twice as fast as AGP2X!" - When in reality, even back then AGP4X was a minor difference from 2X.

The story repeated at AGP8X vs 4X
And again for first gen PCIE vs AGP
And so on and so forth.

The issue isn't that the faster interface speeds aren't useful- they are. The issue is that a major reason the faster interface speeds were originally invented was to allow DMA (direct memory access) for the graphics card to talk to main system memory. At the time, video cards had relatively tiny amounts of memory and the idea of simply 'sharing' the main memory on the system sounded super amazing.

The problem was, even back then, onboard memory on graphics cards was way faster and lower latency than system memory; any card that actually resorted to using system RAM performed *horribly*. This can even be seen today: when you try to play a game on a card with too little VRAM and configure the settings too high, you get *horrible* hitching and hiccups in the framerate. Congratulations, you've found the spots where your GPU borrows system RAM to make up for its lack of VRAM!

And though system RAM has gotten much faster, and the interface has gotten faster, it hasn't mattered, for two reasons:
1. GPU memory has gotten much faster as well
2. GPUs come with lots *more* memory now, so why would they need to borrow system RAM?

So today, of course they say "Well, our card is PCIE 3.0 X16!" Because if they didn't, someone else would say "Our card is X16, theirs is only X8- obviously ours is faster!"

Though, not everyone does this. The AMD RX 460 cards are only x8.
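
To put rough numbers on the "borrowed system RAM is slow" point; a sketch assuming a GTX 1080's published 320 GB/s GDDR5X bandwidth, dual-channel DDR4-2400 system RAM, and the theoretical PCIe 3.0 figures:

```python
# Rough bandwidth comparison. All figures are one-direction, theoretical.
vram_gb_s = 320.0                    # GTX 1080 spec sheet (GDDR5X)
sys_ram_gb_s = 2 * 2400e6 * 8 / 1e9  # 2 channels x 2400 MT/s x 8 bytes = 38.4
pcie_x16_gb_s = 15.75
pcie_x8_gb_s = 7.88

# A GPU reaching over PCIe for "borrowed" system RAM is capped by the
# link itself long before DDR4 becomes the bottleneck.
print(f"VRAM vs x16 link: ~{vram_gb_s / pcie_x16_gb_s:.0f}x faster")
print(f"VRAM vs x8 link:  ~{vram_gb_s / pcie_x8_gb_s:.0f}x faster")
```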
 

Headfoot

Diamond Member
Feb 28, 2008
4,444
641
126
Please allow Sanic to reply on my behalf:



It's really just that simple. Faster = better
 
May 11, 2008
20,055
1,290
126
sinisterDei said:
"Plus, it's marketing. ... Though, not everyone does this. The AMD RX 460 cards are only x8."

In an absolute sense, it is true that faster link speeds reduce latency when sending data over a serial interface like PCIe. And there are indeed situations where other factors slow the system down more than PCIe latency does, which is what the x8 vs x16 results show.

But there is now so much VRAM that almost everything is preloaded into it before a level is played. The largest sets of data, like textures, are already present in VRAM, and compressed too, while the list of draw calls sent to the GPU each frame is much smaller. There are of course tricks like compressing data before sending it over PCIe and letting the GPU decompress it, but that is only really useful while loading a level, and with static data. I do wonder whether compression and decompression can now be done fast enough to be useful for a draw call as well. Is compressing and decompressing less time consuming than the current PCIe latency? Is that possible?
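
As a toy model of that question; the 2:1 ratio and 50 GB/s decompression rate below are made-up placeholders, not measurements:

```python
# Compare sending data raw vs compressed-then-decompressed over a PCIe link.
def raw_transfer_ms(size_gb, link_gb_s):
    return size_gb / link_gb_s * 1e3

def compressed_transfer_ms(size_gb, link_gb_s, ratio, decompress_gb_s):
    # smaller payload over the link, then decompression time on the GPU
    return (size_gb / ratio / link_gb_s + size_gb / decompress_gb_s) * 1e3

size = 0.5  # 512 MB of asset data
print(f"raw over x8:            {raw_transfer_ms(size, 7.88):.1f} ms")
print(f"2:1 compressed over x8: {compressed_transfer_ms(size, 7.88, 2.0, 50.0):.1f} ms")
# Compression wins only while decompression costs less time than the link
# time it saves, which is why it pays off for bulk level loads far more
# than for the small payloads attached to individual draw calls.
```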
 