Ryzen's poor performance with Nvidia GPUs. Foul play? Did Nvidia know?

May 11, 2008
20,055
1,290
126
Before, Intel had no real incentive to go beyond 4 cores for mainstream because the software was just not there and they could make a lot of money with 4 or fewer cores. Of course there has been software that can make use of multiple cores, but not much of it.
Single-threaded performance was always the most important thing, always highlighted as such by reviewers, users and programmers, and that notion became a self-fulfilling prophecy.
Nvidia is not to blame for creating a driver that delivers its best performance with 4 or fewer cores.
Now that AMD is pushing for moar than foar, it becomes clear that Nvidia has to do some work, which they will have finished by the time moar than foar becomes popular.
I mean, everybody screams foul, but an i7-6700K, i7-7700K, i5-6600K or i5-7600K has always been promoted as the best option for gaming, not, for example, a 6900K.
So, this is a non-issue. Nvidia will deliver; they always do.
Nvidia is no Intel; Nvidia constantly pushes the edge of performance to higher levels (just as AMD always does, but on a tiny budget), and does so without the mob practices Intel is known for from the past.

Move along people, nothing to see...
 

Topweasel

Diamond Member
Oct 19, 2000
5,436
1,655
136
Best thing about this thread:
A GTX 1060 and Intel combination is much faster than a Ryzen and RX 480 combination, and yet people are blaming nVidia for AMD's new mediocre processor.
God. This is part of the issue that the Intel guys just cannot look past.

No one is saying the RX 480 is a faster card, or even that in most cases a 480 with a Ryzen will be a better setup than a comparable Nvidia card. There are examples where that might happen, and those examples prove the issue, but they don't magically make the 480 or Ryzen better in most games.

The "mediocreness" was always expected Ryzen. Nobody here is expecting Ryzen to do better in games than a 7700 unless it well threaded. The problem is the numbers looked a lot worse than they should have on an IPC and clockspeed level. Which ended being corner cases that really drug the numbers down. When analyzing those it became apparent that DX12 isn't running right. Those issues exist even on 6900. What has become apparent is that Nvidia's software scheduler was optimized for the 7700 core configuration. It's the only setup that see's consistent performance increases when switching to DX12, and the 6900 and R7 see similar drops in performance. When compared to similarly set up reviews from months prior (pcgamehardware.de) you can see that DX12 scaled past 4c in these games. Now the DX12 numbers matches the 6900 with 4 cores disabled.

Basically, to erase the overhead/penalty of DX12 on the most common configuration used with their video cards, they have capped thread scheduling to that setup, and on top of that you still have the overhead that the 7700K (6700, 4700, and so on) doesn't see. It doesn't make the 480 a better card. It doesn't mean a 480 and Ryzen is better than the 1060, though I would suggest that a 480 and Ryzen is a safer bet than a 1060 with Ryzen, even if the 1060 is generally faster. What it does mean is that Ryzen owners and Intel HEDT owners are artificially capped and penalized for their configurations in DX12, and if that isn't fixed and Vega is competitive, it is going to leave a lot of egg on Nvidia's face.
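
To make "capping thread scheduling to that setup" concrete, here is a purely hypothetical C++ sketch of how a driver-side submission pool could end up tuned for a 4c/8t part. The class name, the cap value and everything else here are invented for illustration; it is not a claim about what Nvidia's driver actually contains.

Code:
// Hypothetical illustration only; the names and the cap are invented for this post.
#include <algorithm>
#include <atomic>
#include <thread>
#include <vector>

class SubmitWorkerPool {
public:
    // Scale with the machine, but never past a cap tuned for 4c/8t parts.
    // On a 16-thread Ryzen 7 or an 8c/16t 6900K, the extra hardware threads
    // simply go unused by the pool's own submission work.
    explicit SubmitWorkerPool(unsigned tunedCap = 8)
    {
        unsigned n = std::min(std::max(1u, std::thread::hardware_concurrency()), tunedCap);
        for (unsigned i = 0; i < n; ++i)
            workers.emplace_back([this] {
                while (!stop.load()) {
                    // a real pool would drain per-thread command queues here
                    std::this_thread::yield();
                }
            });
    }

    ~SubmitWorkerPool()
    {
        stop.store(true);
        for (auto& t : workers) t.join();
    }

private:
    std::atomic<bool> stop{false};
    std::vector<std::thread> workers;
};

int main()
{
    SubmitWorkerPool pool;  // spins up at most 8 workers regardless of core count
}

If something shaped like that were tuned for the 7700K's core count, it would behave exactly like what the reviews show: fine on 4c/8t, no benefit (or a penalty) on anything wider.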
 

AMDisTheBEST

Senior member
Dec 17, 2015
682
90
61
As of now, I believe you are correct.

In that Nvidia is realizing that what they have known for a while is now coming back to haunt them. But I think this is a much bigger issue than many may think. This is not a glitch or an error. It is because of Fermi and CUDA and Nvidia resting on their laurels and being caught out by AMD's development of their 64-bit unified driver and Crimson suite. (From the sound of it, Nvidia is going to have to write all-new driver modules when Volta spins anyway.)

AMD's software advancements are in part because of their HSA development, which is a bit of an irony here, because 4 years or so ago ATI/AMD (Radeon) was getting hammered by fans and gamers alike about their drivers, glitches, jitter, IQ in games, etc. Heck, even the load time for the Catalyst suite was an issue. But that was before the gamer, Dr. Su, took the helm! (I think a meme of Jen-Hsun Huang with his pants down would be a satirical insert here.)


Again, I do not think Nvidia's driver can be rewritten easily. Perhaps someone more knowledgeable can chime in, but I believe a complete rewrite of NV's unified suite is needed. Does anyone in the know have an idea of exactly how difficult rewriting their unified drivers would be?
There are plenty of Linux fans and enthusiasts who have written open-source drivers for Nvidia cards on Linux. They all perform like shit compared to the official proprietary Nvidia drivers. Same story for AMD.
 

InfoFront

Junior Member
Jun 23, 2010
4
1
81
There are plenty of Linux fans and enthusiasts who have written open-source drivers for Nvidia cards on Linux. They all perform like shit compared to the official proprietary Nvidia drivers. Same story for AMD.

The AMD open source Linux drivers are actually getting pretty good.
 

R0H1T

Platinum Member
Jan 12, 2013
2,582
162
106
Before, Intel had no real incentive to go beyond 4 cores for mainstream because the software was just not there and they could make a lot of money with 4 or fewer cores. Of course there has been software that can make use of multiple cores, but not much of it.
Single-threaded performance was always the most important thing, always highlighted as such by reviewers, users and programmers, and that notion became a self-fulfilling prophecy.
Nvidia is not to blame for creating a driver that delivers its best performance with 4 or fewer cores.
Now that AMD is pushing for moar than foar, it becomes clear that Nvidia has to do some work, which they will have finished by the time moar than foar becomes popular.
I mean, everybody screams foul, but an i7-6700K, i7-7700K, i5-6600K or i5-7600K has always been promoted as the best option for gaming, not, for example, a 6900K.
So, this is a non-issue. Nvidia will deliver; they always do.
Nvidia is no Intel; Nvidia constantly pushes the edge of performance to higher levels (just as AMD always does, but on a tiny budget), and does so without the mob practices Intel is known for from the past.

Move along people, nothing to see...
Well, you could always argue that if there was no hardware, the game/software makers wouldn't be incentivized to program for more cores; kind of a chicken-and-egg thing. You can easily see how things have moved in the Android realm, and multithreading is a big part of the reason why it's so successful, though it's also partly out of necessity with big.LITTLE and the many cores that phone makers deliberately push in that space.

Perhaps, but it doesn't mean they'll do it voluntarily; they generally do things at their own pace and not unless there's a major (consumer) backlash.
 

JimmiG

Platinum Member
Feb 24, 2005
2,024
112
106
Nvidia has something to gain.

They have a monopoly on high-end and upper-midrange GPUs. If you want a high-end GPU (GTX 1070+) you have to buy Nvidia. Gimping the performance of their GPUs with AMD CPUs won't hurt their own sales at all, since consumers have no alternative. It will only hurt AMD, which is definitely beneficial to Nvidia.

Gimping the performance of their midrange (1050-1060) GPUs slightly won't affect sales of those significantly. Most of the sheeple in the market for a mid-range or lower GPU simply extrapolate down the monstrous performance of the high-end GPUs. There's some logical gymnastics that takes place in consumers' minds, something like: 1080 Ti > all, therefore Nvidia > all. I digress, but the point is that Nvidia can gimp the performance of their mid-range GPUs and it won't matter. Case in point: the RX 480 is a better value than the GTX 1060, but the 1060 will outsell the hell out of the RX 480 any day.

It's all in the minds of consumers. If Nvidia GPUs become known as "The GPUs that don't work well with AMD CPUs", and AMD start selling a lot of CPUs, it's going to be a problem for Nvidia.

No driver developer at Nvidia would deliberately sabotage performance with AMD CPUs in order to try to indirectly hurt AMD's GPU division via their CPU division. Hurting AMD's CPU division (or even GPU division) isn't a goal for Nvidia; gaining market share is. Sabotaging their own GPU performance on one CPU platform does not cause Nvidia to gain market share, it causes their products to become less competitive in the market. These are two big corporations run by grown-up professionals, not by adolescent fanboys.

Regarding the 1060, it's selling more because there are more board vendors marketing 1060s than 480s, Nvidia has traditionally been known for better drivers (see above), and in DX9-11 it usually performs better (especially at the time the two cards were reviewed).
 
Last edited:

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
So, how many of these Ryzen reviews are going to be redone with AMD GPUs instead? Probably very few, since they already made all their launch noise. And it's a problem the review sites should have already known about. But that doesn't make grist for the mill, as it were.
How about we just have someone do an in-depth review of this "syndrome" first? I'd be curious to see to what extent Ryzen is being handicapped by nVidia drivers that aren't properly optimized for it yet.
 
Reactions: scannall

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
This is stupid, Nvidia has nothing to gain from this. I could make the same argument in reverse, "there is foul play in AMD's drivers making Intel look bad," and it would make more sense.

Now, the issue here is very simple: either Nvidia's drivers are not CCX-aware or they just scale badly over 4 cores. Also, I think they are still using SSE2?

I'd like to see that argument. I think you give your debating skills too much credit.

I think the most obvious reason is that nVidia wasn't included in the loop during Ryzen's development. And it's understandable why, too.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
They kind of have to. I am sure, with the Titan Xp now and the Titan X before it being more than $1k, that a large portion of their users (not the largest, but a significant portion) are Intel HEDT owners, so 6850K, 6900K, and a handful of 6950X guys. Knowing that the cards are handicapped on their systems isn't going to help Nvidia in the long run (why spend a grand in the future on cards from a manufacturer that snubbed your superior CPU choice? I imagine a lot of these owners are the Beemer or Merc equivalent of PC enthusiasts). Eventually these guys will get past the "it's just a Ryzen optimization issue" stage.

Then, if Vega is even 75% of what people think it's going to be, its scaling in DX12 will be marvelous, and that is going to put pressure on Nvidia to fix DX12 scaling even if it means they can't hide the performance penalty that AMD doesn't have in DX12. This whole issue starts from there: Nvidia did everything they could to hide the fact that there is a penalty in DX12 that their competitor doesn't see.

They definitely have to. If you are going to charge a premium for a product, it had better offer a premium experience. It's not good that they aren't delivering premium performance with the latest APIs. Let their performance lag on the latest CPUs too, and that could really PO a lot of their premium customers.
 

scannall

Golden Member
Jan 1, 2012
1,948
1,640
136
How about we just have someone do an in-depth review of this "syndrome" first? I'd be curious to see to what extent Ryzen is being handicapped by nVidia drivers that aren't properly optimized for it yet.
That's reasonable as well, though it seems to be a general issue, as it affects Intel processors with more than 4 cores too. A deep dive into it would be a good thing for some smart and capable tech site to do. ;-)
 
Reactions: 3DVagabond

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
No one is saying the RX 480 is a faster card, or even that in most cases a 480 with a Ryzen will be a better setup than a comparable Nvidia card. There are examples where that might happen, and those examples prove the issue, but they don't magically make the 480 or Ryzen better in most games.

I haven't compared GPUs. Why would it be fair to combine an Intel CPU with an AMD GPU when nVidia delivers better performance? It isn't.

The "mediocreness" was always expected Ryzen. Nobody here is expecting Ryzen to do better in games than a 7700 unless it well threaded. The problem is the numbers looked a lot worse than they should have on an IPC and clockspeed level. Which ended being corner cases that really drug the numbers down. When analyzing those it became apparent that DX12 isn't running right. Those issues exist even on 6900. What has become apparent is that Nvidia's software scheduler was optimized for the 7700 core configuration. It's the only setup that see's consistent performance increases when switching to DX12, and the 6900 and R7 see similar drops in performance. When compared to similarly set up reviews from months prior (pcgamehardware.de) you can see that DX12 scaled past 4c in these games. Now the DX12 numbers matches the 6900 with 4 cores disabled

Basically, to erase the overhead/penalty of DX12 on the most common configuration used with their video cards, they have capped thread scheduling to that setup, and on top of that you still have the overhead that the 7700K (6700, 4700, and so on) doesn't see. It doesn't make the 480 a better card. It doesn't mean a 480 and Ryzen is better than the 1060, though I would suggest that a 480 and Ryzen is a safer bet than a 1060 with Ryzen, even if the 1060 is generally faster. What it does mean is that Ryzen owners and Intel HEDT owners are artificially capped and penalized for their configurations in DX12, and if that isn't fixed and Vega is competitive, it is going to leave a lot of egg on Nvidia's face.

The issue isn't nVidia and DX12. It is you. You haven't understood what DX12 is. This is a huge problem here. A starting point is this presentation from nVidia, in which they explain what their DX11 driver does and what a DX12 application has to do: http://www.gdcvault.com/play/1023517/Advanced-Rendering-with-DirectX-11
 

crashtech

Lifer
Jan 4, 2013
10,554
2,138
146
I haven't compared GPUs. Why would it be fair to combine an Intel CPU with an AMD GPU when nVidia delivers better performance? It isn't.



The issue isn't nVidia and DX12. It is you. You haven't understood what DX12 is. This is a huge problem here. A starting point is this presentation from nVidia, in which they explain what their DX11 driver does and what a DX12 application has to do: http://www.gdcvault.com/play/1023517/Advanced-Rendering-with-DirectX-11
As if we're all going to watch an hour-long presentation that is most likely skewed towards those presenting it. Why don't you explain it yourself in text, here?
 

Topweasel

Diamond Member
Oct 19, 2000
5,436
1,655
136
I haven't compared GPUs. Why would it be fair to combine an Intel CPU with an AMD GPU when nVidia delivers better performance? It isn't.



The issue isn't nVidia and DX12. It is you. You haven't understood what DX12 is. This is a huge problem here. A starting point is this presentation from nVidia, in which they explain what their DX11 driver does and what a DX12 application has to do: http://www.gdcvault.com/play/1023517/Advanced-Rendering-with-DirectX-11

I don't have to compare DX12 between AMD and Nvidia to see that Nvidia has changed something. An AMD setup is extremely hard not to GPU-bottleneck.

http://www.pcgameshardware.de/Battl...attlefield-1-Technik-Test-Benchmarks-1210394/

BF1, 720p multiplayer, on a 6900K with a GTX 1080:

DX11, all 8c/16t = 160.3 avg / 147 min
DX12, all 8c/16t = 156.8 avg / 137 min

DX11, 4c/8t = 139.0 avg / 129 min
DX12, 4c/8t = 126.4 avg / 113 min

This is what we expect on Nvidia video cards: performance that scales with core count, with a CPU overhead drop under DX12. That was over 6 months ago. Fast forward to Computerbase.de's review, which is also done in DX11 and DX12 multiplayer at 720p with the same settings, but with a Titan X (Pascal) instead of a 1080.

https://www.computerbase.de/2017-03.../#diagramm-battlefield-1-dx11-multiplayer-fps

DX11 6900K = 143.8 avg
DX11 6950X = 129.6 avg
DX11 6850K = 120.9 avg
DX11 7700K = 116.4 avg
DX11 4770K = 84.4 avg

DX12 6900K = 122.4 avg
DX12 6950X = 120.9 avg
DX12 6850K = 122.0 avg
DX12 7700K = 127.6 avg
DX12 4770K = 88.4 avg

You can see the point there. Don't try to compare numbers directly; there are too many variables in the testing between the two sites. A while ago pcgameshardware.de found scaling in DX11 and DX12 with the 1080 and the drivers they were using. Fast forward to late February: DX11 still scales with cores, not as well, but by an observable amount. Clock speed can play a part (which is why the 6850K and 6950X are close together, and the 6900K, being the best compromise between cores and speed, is leading everything). The 7700K shows its prowess compared to the lowly Haswell. I wish they had included a 4c Broadwell to compare with the BW-E line.

The DX12 numbers then make the point pretty obvious. First, all the Broadwell-E chips are basically equal, within statistical error margin, with the differences roughly lining up with their clock speeds but not enough frame difference to fully account for them. The 7700K, a 4c chip, actually gains performance when switched to DX12, which historically has not been the case with DX12 on Nvidia, and is not the case here for the non-4c8t setups. The 4770K, older but still 4c/8t, also saw a jump in performance, minor compared to the 7700K, but something none of the other chips saw.
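
If you want to sanity-check that reading, here is a trivial bit of C++ that only restates the ComputerBase averages quoted above as DX11-to-DX12 percentage swings; the numbers are copied from the list above, nothing new is added.

Code:
#include <cstdio>

int main()
{
    // Averages copied from the ComputerBase BF1 720p multiplayer figures quoted above.
    struct Result { const char* cpu; double dx11; double dx12; };
    const Result results[] = {
        {"6900K", 143.8, 122.4},
        {"6950X", 129.6, 120.9},
        {"6850K", 120.9, 122.0},
        {"7700K", 116.4, 127.6},
        {"4770K",  84.4,  88.4},
    };

    for (const Result& r : results)
        std::printf("%s: %+.1f%% going from DX11 to DX12\n",
                    r.cpu, (r.dx12 - r.dx11) / r.dx11 * 100.0);
    // Prints roughly: 6900K -14.9%, 6950X -6.7%, 6850K +0.9%, 7700K +9.6%, 4770K +4.7%
}

Every Broadwell-E part loses or stays flat under DX12 while both 4c/8t parts gain, which is exactly the pattern described above.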

Now you see the shape of the current issue. When you look at computerbase.de's reviews and at the testing with Ryzen (side comment: the Ryzen 1800X sees almost exactly the same percentage drop from DX11 to DX12 as the 6900K), it becomes pretty obvious when this effect is happening. But I didn't want to make this about Ryzen. This is just Nvidia 6 months ago versus Nvidia now, on all-Intel systems. It's useful for Ryzen because it clues us in on the corner cases, and with AMD cards we can eliminate game issues, assuming we can eliminate GPU bottlenecking. That last part is extremely hard considering AMD's lineup. That's where we have to thank AdoredTV for his RX 480 CrossFire analysis with Tomb Raider. His conclusion was wrong, but his theory was right: Tomb Raider was creating an edge case that didn't reflect actual CPU performance in the game, and by using the two AMD cards in CrossFire we could see that Tomb Raider in DX12 was scaling with cores in most locations and Nvidia wasn't. It wasn't perfect because it didn't include a 6900K, and specifically a 6900K run at several core configurations; he was only trying to prove that Tomb Raider shouldn't be used. But that test was the final kick in figuring out what was actually happening.

Edit: When we remove CPU arch, game implementation, Windows version, and even the GPU arch itself from the equation, and the only major change is the driver version, where do you think the problem is?
 

tamz_msc

Diamond Member
Jan 5, 2017
3,865
3,729
136
As if we're all going to watch an hour-long presentation that is most likely skewed towards those presenting it. Why don't you explain it yourself in text, here?
Don't bother. He posted the exact same thing in the Ryzen discussion thread and failed to come up with answers when I asked the same thing you did.
 
May 11, 2008
20,055
1,290
126
Well, you could always argue that if there was no hardware, the game/software makers wouldn't be incentivized to program for more cores; kind of a chicken-and-egg thing. You can easily see how things have moved in the Android realm, and multithreading is a big part of the reason why it's so successful, though it's also partly out of necessity with big.LITTLE and the many cores that phone makers deliberately push in that space.

Perhaps, but it doesn't mean they'll do it voluntarily; they generally do things at their own pace and not unless there's a major (consumer) backlash.

I see your point, but with games, for example, this is not always the case. Often we see games arrive with a next-gen 3D engine that needs consumer CPU and GPU hardware that is not available yet to allow all the eye candy at high resolutions or in crowded multiplayer levels. Operating systems have supported multiple cores forever, Windows included. Android became popular because Google provided an excellent OS ecosystem with excellent support, able to run on smartphones from multiple manufacturers (killing any chance of a monopoly), which were single-core ARM derivatives at first. Add the clever use of the capacitive multi-touch screen and a whole new market that was open to multiple manufacturers. ARM sold licenses to all of them, so nobody could enforce an architecture monopoly.
ARM does the hard part, designing a CPU that is very powerful but still suited for mobile use, and provides complete solutions for short time to market, which translates into a reduced need for big R&D budgets. It is no wonder that the ARM architecture is popular.
 
Reactions: DarthKyrie

crashtech

Lifer
Jan 4, 2013
10,554
2,138
146
Consumer-level hexacores have been available since 2010, and have since gained ever more popularity with users that spend the most on PC hardware, so the excuses that Nvidia doesn't need to cater to more than four cores "yet" fall a bit flat; at the very least there shouldn't be a penalty for more cores, and there have been many years to realize and rectify the situation.
 
Reactions: DarthKyrie

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
Edit: When we remove CPU arch, game implementation, Windows version, and even the GPU arch itself from the equation, and the only major change is the driver version, where do you think the problem is?

The problem is "you". You have learned something today. But this knowledge has been available since the release of the game.
Look at these Battlefield 1 DX12 tests from November:
Gamernexus: http://www.gamersnexus.net/game-ben...cpu-benchmark-dx11-vs-dx12-i5-i7-fx?showall=1
Techspot: http://www.techspot.com/review/1267-battlefield-1-benchmarks/page4.html
Computerbase: https://www.computerbase.de/2016-10...ramm-battlefield-1-auf-dem-i7-6700k-1920-1080

You see the same behaviour in Deus Ex, too.
 

Topweasel

Diamond Member
Oct 19, 2000
5,436
1,655
136
The problem is "you". You have learned something today. But this knowledge has been available since the release of the game.
Look at these Battlefield 1 DX12 tests from November:
Gamernexus: http://www.gamersnexus.net/game-ben...cpu-benchmark-dx11-vs-dx12-i5-i7-fx?showall=1
Techspot: http://www.techspot.com/review/1267-battlefield-1-benchmarks/page4.html
Computerbase: https://www.computerbase.de/2016-10...ramm-battlefield-1-auf-dem-i7-6700k-1920-1080

You see the same behaviour in Deus Ex, too.

No one is disputing that Nvidia does poorly in DX12; that's the cost of the architecture. But that isn't what is being pointed out to you.
 

w3rd

Senior member
Mar 1, 2017
255
62
101
How about we just have someone do an in-depth review of this "syndrome" first? I'd be curious to see to what extent Ryzen is being handicapped by nVidia drivers that aren't properly optimized for it yet.

Ryzen isn't being handicapped... NVidia is.

NVidia's software doesn't scale on 6- and 8-core Intel or AMD CPUs. It is most noticeable on Ryzen because AMD hasn't released a 4-core part yet.
 

w3rd

Senior member
Mar 1, 2017
255
62
101
The problem is "you". You have learned something today. But this knowledge has been available since the release of the game.
Look at these Battlefield 1 DX12 tests from November:
Gamernexus: http://www.gamersnexus.net/game-ben...cpu-benchmark-dx11-vs-dx12-i5-i7-fx?showall=1
Techspot: http://www.techspot.com/review/1267-battlefield-1-benchmarks/page4.html
Computerbase: https://www.computerbase.de/2016-10...ramm-battlefield-1-auf-dem-i7-6700k-1920-1080

You see the same behaviour in Deus Ex, too.

Plz read post #75 (page 3 of this thread).

You seem to be misunderstanding the entire situation here. This has nothing to do with DX12 vs DX11, or anything of that nature. It is about how inefficient Nvidia's drivers are in Windows 10 on 6- and 8-core machines versus how Radeon Software does on 6- and 8-core machines.
 

Shivansps

Diamond Member
Sep 11, 2013
3,873
1,527
136
I'd like to see that argument. I think you give your debating skills too much credit.

I think the most obvious reason is that nVidia wasn't included in the loop during Ryzen's development. And it's understandable why, too.

It's not really an argument, it's just an example of how I could reverse this BS without any fact or technical reason to support it, and it would still make more sense than the OP's argument.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
Their driver does poorly on Intel CPUs with more than 4 cores as well. It's a general issue, and one that's a little embarrassing, since nVidia has supposedly always had better drivers than AMD.
You seem to be misunderstanding the entire situation here. This has nothing to do with DX12 vs DX11, or anything of that nature. It is about how inefficient Nvidia's drivers are in Windows 10 on 6- and 8-core machines versus how Radeon Software does on 6- and 8-core machines.

LOL where do you guys get this stuff?

Do you just make things up as you go along? NVidia drivers scaling poorly on CPUs with more than four cores/threads, eh? Then by gosh, how do you explain this? As far back as Kepler, NVidia's drivers scaled with HT enabled on a 3770K whereas AMD's driver choked:



Perhaps something a bit more modern then? How about a GTX 1080 scaling all the way to 10 cores/20 threads in Ghost Recon Wildlands:



And just a few months ago, Computerbase.de did a test on CPU scaling with a Titan X Pascal, and look at what they found.



The truth is, you guys have no idea what you're talking about. NVidia's driver scales wonderfully on CPUs with high core/thread counts, and NVidia's drivers have been native 64-bit for years.

Also, CPU scaling has more to do with the game itself than the drivers. If a game is programmed to only use four threads, then no amount of driver trickery will change that.
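
A trivial, made-up example of what I mean (a hypothetical job system, not any real engine): if the game's own worker count is hard-coded, the extra cores sit idle no matter what the GPU driver does.

Code:
#include <cstdio>
#include <thread>
#include <vector>

int main()
{
    const unsigned gameWorkers = 4;  // hypothetical engine choice, baked in when the game shipped
    const unsigned hwThreads = std::thread::hardware_concurrency();  // e.g. 16 on an 1800X, 8 on a 7700K

    std::vector<std::thread> jobs;
    for (unsigned i = 0; i < gameWorkers; ++i)
        jobs.emplace_back([i] { std::printf("game worker %u busy\n", i); });
    for (auto& t : jobs) t.join();

    std::printf("the game uses %u of %u hardware threads; no driver can raise that number\n",
                gameWorkers, hwThreads);
}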
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
Also, CPU scaling has more to do with the game itself than the drivers. If a game is programmed to only use four threads, then no amount of driver trickery will change that.

Actually, it has to do with both the game and the drivers. While drivers can't magically make a game scale beyond what it's programmed to do, a game can't fix poor drivers either.
 

Topweasel

Diamond Member
Oct 19, 2000
5,436
1,655
136
LOL where do you guys get this stuff?

Do you just make things up as you go along? NVidia drivers scaling poorly on CPUs with more than four cores/threads, eh? Then by gosh, how do you explain this? As far back as Kepler, NVidia's drivers scaled with HT enabled on a 3770K whereas AMD's driver choked:



Perhaps something a bit more modern then? How about a GTX 1080 scaling all the way to 10 cores/20 threads in Ghost Recon Wildlands:



And just a few months ago, Computerbase.de did a test on CPU scaling with a Titan X Pascal, and look at what they found.



The truth is, you guys have no idea what you're talking about. NVidia's driver scales wonderfully on CPUs with high core/thread counts, and NVidia's drivers have been native 64-bit for years.

Also, CPU scaling has more to do with the game itself than the drivers. If a game is programmed to only use four threads, then no amount of driver trickery will change that.
He was wrong; the problem is specific to DX12 on Nvidia video cards. Just earlier today I posted an example of previous scaling, and of the fact that it stopped somewhat recently, in probably one of the best examples of well-threaded gameplay.
 
Reactions: w3rd