AMD 6000 reviews thread


BFG10K

Lifer
Aug 14, 2000
22,709
2,997
126


Wow, even the number 3 card has 16 GB VRAM and is faster than the 2080 Ti. And the $1000 6900 XT matches the $1500 3090 in performance.

The 3000 parts don't look so hot now.

Post reviews edit:
It's astonishing what AMD have managed to achieve with both the Ryzen 5000 and the Radeon 6000, especially given the absolutely minuscule R&D budget and resources compared to nVidia/Intel. Lisa Su is definitely the "Steve Jobs" of AMD with such a remarkable turnaround.

6900XT:
(It's absolutely amazing to see AMD compete with the 3090)


 
Last edited:

Mopetar

Diamond Member
Jan 31, 2011
8,144
6,842
136
What happens when there is a game that you really want to play but whose performance you know sucks on your card? Right now that seems to be the issue with AMD cards, except that the games where this happens are relatively old, so the new hardware can brute-force its way to high enough frame rates. But what happens when the issue is with a newer game?

Isn't the whole point of not fixing these issues in older games so that the developers have more time to fix/address them in newer titles instead? You yourself say it's an issue with older APIs, which will eventually fall out of use in favor of the newer APIs where this problem doesn't exist. You even point out that a quasi-solution to the problem exists in that newer hardware can just brute-force the problem, which is essentially free.

But let's just put all of that aside for the sake of argument. The number of older titles is massive compared to the number of new ones that are coming out over a given time frame. How do you even prioritize which to fix with the limited time budget you have? At what point does performance on newer cards just become good enough that it isn't worth the time investment?


PurePC had some CPU utilization charts:




Also why does one chart show 16 GB RAM and the other 32 GB? I get that the review is showing benchmarks for two different memory setups (and the presentation is atrocious since they're in separate charts with different scales, which makes the pictorial representation kind of useless, but that's a complete aside), but some of the translation isn't very good. It doesn't necessarily look like he's doing a detailed analysis of CPU utilization in a controlled way.

You'd want to show the same system configurations with the only difference being the GPU, at exactly the same spots/times in a reasonably controlled benchmark. It's hard enough knowing whether I reasonably understand the information being presented or the conclusions being reached from that data due to the translation alone, but this really doesn't help me have any confidence in the claims presented by the author.
 

Leeea

Diamond Member
Apr 3, 2020
3,795
5,549
136
Tamz_msc:

One review site on the internet is not going to mean much. Anybody can say anything they want on the internet. The thing is, everyone else is saying it is fine.

Cherry picking a benchmark from a 2nd review site does not mean much either. We can cherry pick benchmarks all day to claim anything we want.

It is also not unknown for drivers on release to have issues. Ex: RTX 3000 series graphics cards are unusable at stock settings due to frequent crashing! Pitchforks and torches! (Wait, a driver update just fixed the "unfixable" "capacitor" issue....)

Your approach here is making people call BS.
But even if you're right, it does not matter. AMD can update drivers just like Nvidia, and we have seen them do it in the past.



If you're like me and play old games a lot, check out d912pxy. It converts DX9 to DX12 on the fly, which can result in massive performance gains on both Nvidia and AMD in some cases. DX11 games usually have a DX9 mode with no visual quality loss. It is even compatible with ReShade!

Also, check out ReShade; it is game-changingly amazing.

Lastly, I love old games, and I adore my big texture packs. The 3080's 10 GB of memory is far scarier to me than any so-called fixable driver issue on AMD.
 
Last edited:

Mopetar

Diamond Member
Jan 31, 2011
8,144
6,842
136
Lastly, I love old games, and I adore my big texture packs. The 3080's 10 GB of memory is far scarier to me than any so-called fixable driver issue on AMD.

I probably wouldn't worry about it for older games all that much. Keep in mind that some of these were released in or before a time when 4 GB was standard in a top end graphics card. They also faced other system limitations (PCIe bus speeds) that are less of a bottleneck and didn't have nearly as good memory compression as modern cards.

Consider Skyrim, which is probably one of the most popular older games (just over nine years old now!) that gets a lot of support with high-res texture packs, modding, and the like. It came out before Kepler and GCN, which means it was running on cards that topped out at 2 GB unless you were running SLI/Xfire or had one of the multi-GPU cards.

If the technology and approach used by the PS5 becomes available and pervasive in PC gaming, it also reduces the need for large quantities of VRAM, since the needed assets can be fetched from disk and brought into memory so quickly that you shouldn't see jittery performance outside of some extenuating circumstances. The PS5's SSD has enough raw throughput to fill the VRAM of a 3080 in under 2 seconds, if the card can take it in that fast.
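As a back-of-the-envelope check on that figure (assuming the PS5's advertised ~5.5 GB/s of raw SSD throughput and the 3080's 10 GB of VRAM):

$$t \approx \frac{10\ \mathrm{GB}}{5.5\ \mathrm{GB/s}} \approx 1.8\ \mathrm{s}$$

With the ~8-9 GB/s effective rate Sony quotes for compressed streams, it would be closer to one second.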

Obviously we're not there yet, but Microsoft is apparently at work bringing DirectStorage to PC, which will provide an API offering PC developers functionality similar to what they'll be getting on the new consoles. The ever-decreasing cost of SSDs and the adoption of NVMe to get around bandwidth limitations are going to help as well. I think that by the time all of this comes to fruition and reaches wider adoption, we'd be around the point where the 3080 runs into a VRAM wall, as the next generation of games developed for the new consoles starts to push up against a 10 GB memory limitation.
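To make the idea concrete, here's a rough, hypothetical sketch of what enqueueing a DirectStorage read might look like. Since the PC API hasn't shipped, every interface name and struct field below is an assumption based on Microsoft's public materials and may well differ in the final release:

```cpp
// Hypothetical DirectStorage usage sketch. NOT a shipping API: interface
// names, struct fields, and constants below are assumptions and may
// differ from whatever Microsoft finally releases on PC.
#include <dstorage.h>     // assumed SDK header
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void StreamChunk(ID3D12Device* device, ID3D12Resource* destBuffer)
{
    ComPtr<IDStorageFactory> factory;
    DStorageGetFactory(IID_PPV_ARGS(&factory));

    ComPtr<IDStorageFile> file;
    factory->OpenFile(L"assets/textures.pak", IID_PPV_ARGS(&file));

    // A queue ties NVMe reads to a D3D12 device so data lands in
    // GPU-visible memory without a round trip through the CPU heap.
    DSTORAGE_QUEUE_DESC queueDesc = {};
    queueDesc.SourceType = DSTORAGE_REQUEST_SOURCE_FILE;
    queueDesc.Capacity   = DSTORAGE_MAX_QUEUE_CAPACITY;
    queueDesc.Priority   = DSTORAGE_PRIORITY_NORMAL;
    queueDesc.Device     = device;
    ComPtr<IDStorageQueue> queue;
    factory->CreateQueue(&queueDesc, IID_PPV_ARGS(&queue));

    // One request: read a 64 MB chunk from the pack file straight into
    // the destination GPU buffer.
    DSTORAGE_REQUEST request = {};
    request.Options.SourceType      = DSTORAGE_REQUEST_SOURCE_FILE;
    request.Options.DestinationType = DSTORAGE_REQUEST_DESTINATION_BUFFER;
    request.Source.File.Source      = file.Get();
    request.Source.File.Offset      = 0;
    request.Source.File.Size        = 64 * 1024 * 1024;
    request.Destination.Buffer.Resource = destBuffer;
    request.Destination.Buffer.Offset   = 0;
    request.Destination.Buffer.Size     = 64 * 1024 * 1024;

    queue->EnqueueRequest(&request);
    queue->Submit();  // completion is signaled asynchronously (fence/event)
}
```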
 

Leeea

Diamond Member
Apr 3, 2020
3,795
5,549
136
Mopetar, I agree you're correct about VRAM for now; I disagree that it will remain that way in the future.


It was a Skyrim texture pack that pushed me to upgrade away from my RX 580 4 GB. Specifically, loading the beautiful city mods on top of the 2K texture packs + ReShade pulls >4 GB. In Skyrim you can, however, compromise down to the 1K texture packs and not notice too much. Except for the sky. But the game is so epic, and I spent so much time in it, that I just wanted the best visuals. I ended up purchasing a Vega 56 8 GB just for that one game.

One of the things d912pxy can do is leave assets cached in VRAM until they get evicted. The more VRAM you have, the better it is.

Another set of games I ran into issues with is the Total War series with texture packs; they had my RX 580 4 GB down to <4 FPS. Total War: Empire specifically has the most wonderful texture packs. Each regiment gets their historical uniforms, with lots of variation between soldiers! Beautiful, but unplayable without the VRAM once the battles scale up.


Right now most of the mod-pack guys aim for about 8 GB, so the 3080 will be fine. But having been burnt hard by VRAM before, I guess I am leery of what that will look like 18 months from now. The mod-pack guys are likely going to start aiming for 16 GB because users are going to have that to burn.
 
Last edited:
Reactions: Tlh97 and dr1337

Mopetar

Diamond Member
Jan 31, 2011
8,144
6,842
136
I don't doubt that VRAM usage is going to increase. After all, software is a gas: it expands to fill its container. I'm not the type of person who's ever really run a lot of older games with high-res texture packs, and I'll be honest that talking about Skyrim has kind of made me want to fire it up again, but I'm not sure the progression to 16 GB will happen that quickly or that there's much to necessitate it.

Probably the best way to get an estimate of how soon we should expect it to happen is by looking back at how the amount of VRAM available on cards has progressed over time. We haven't even reached a point of wide availability of 16 GB GPUs since they're only just starting to trickle out now. I suppose there are a few people who have had a Radeon VII or a Titan card for some time, but that's a tiny number of people. Based on current rumors, most people who want any of the new cards in that class are unlikely to get one until March at the soonest.

When you bought an RX 580 4 GB, you were buying an amount of memory that had first started appearing in high-end cards about 5 years earlier: Polaris 20 cards came out in early 2017, and the earliest cards to launch with at least 4 GB of VRAM were the first high-end GCN cards (the 6 GB 7970) in early 2012, followed by high-end Kepler (the 4 GB 670 and 680 models) a few months later.

Over the next two years 4 GB would become more mainstream: the GTX 700-series had 4 GB options in cards all the way down to the 730, and AMD had the option available in all of their midrange cards, and even a version of the R7 240 that offered 4 GB. By 2015, 4 GB was becoming the standard option as opposed to a premium choice, and the move to 16nm in 2016 meant that you had to get down to the $100 product level for both AMD and Nvidia before you could even buy a card (let us not speak of the 1060 3 GB) with less than 4 GB.

I don't know precisely when the move to 8 GB minimum for textures occurred, but it was presumably sometime after you got your RX 580. Either way, we can just look at the adoption curve for 4 GB and see that it took at least 5 years, and probably closer to 6, before texture packs moved beyond that point and made it obsolete. I suspect that a large part of that had to do with the availability and adoption of 4K resolution, but I don't see anything replacing that for a while still.

The lack of 16 GB in most Nvidia cards will also likely keep the progression from starting for a while longer, and with the previous Ti cards being much closer to 10 GB than 16 GB, I don't think there will be as great a push to go much beyond that. AMD's Navi 22 cards, which will probably be in the $300-$450 range and sell in far greater quantities than Navi 21, will have 10 GB and 12 GB variants, which will further make that range a popular target.

But if you're looking to invest in a ~$700 GPU (well assuming you can get one at MSRP right now) then it's not wrong to want a little peace of mind that lets you know your investment will last you a while. Even if it just gives you a lot of extra room to cache textures so that you don't need to load anything from disk twice I suppose there's some benefit to it. Maybe talk to other people in that community or some of the people who produce the textures to see what their thoughts are since it's going to be a much more informed opinion.
 
Last edited:

tamz_msc

Diamond Member
Jan 5, 2017
3,865
3,729
136
Isn't the whole point of not fixing these issues in older games so that the developers have more time to fix/address them in newer titles instead? You yourself say it's an issue with older APIs, which will eventually fall out of use in favor of the newer APIs where this problem doesn't exist. You even point out that a quasi-solution to the problem exists in that newer hardware can just brute-force the problem, which is essentially free.
First of all, DX11 being an "old" API doesn't mean that it's an obsolete API. It is still the go-to choice of API on Windows for most non-AAA games, and even some recent AAA games. It isn't going to fall out of favour and get replaced with DX12 anytime soon, because of the complexities inherent in making a good implementation of the latter.

Secondly, you cannot brute-force all DX11 titles with all cards. For example, Sekiro runs at a near-flawless 60 FPS on entry-level NVIDIA cards of today like the GTX 1060, but drops to 40 FPS on an RX 580 in places where the GTX 1060 still manages to stay above 50. So until AMD fixes its drivers' behaviour with higher-level APIs in the less popular but still relevant games, brute-forcing isn't a "free" alternative like you claim.
But let's just put all of that aside for the sake of argument. The number of older titles is massive compared to the number of new ones that are coming out over a given time frame. How do you even prioritize which to fix with the limited time budget you have? At what point does performance on newer cards just become good enough that it isn't worth the time investment?
That's AMD's problem to figure out; there is no need to rationalize the fact that the driver team doesn't do the best job when it comes to priorities. If the customer finds out that this is a recurring problem with AMD cards, then it's just another reason that'll disincentivize them from buying an AMD card.
Also why does one chart show 16 GB RAM and the other 32 GB? I get that the review is showing benchmarks for two different memory setups (and the presentation is atrocious since they're in separate charts with different scales, which makes the pictorial representation kind of useless, but that's a complete aside), but some of the translation isn't very good. It doesn't necessarily look like he's doing a detailed analysis of CPU utilization in a controlled way.

You'd want to show the same system configurations with the only difference being the GPU, at exactly the same spots/times in a reasonably controlled benchmark. It's hard enough knowing whether I reasonably understand the information being presented or the conclusions being reached from that data due to the translation alone, but this really doesn't help me have any confidence in the claims presented by the author.
Did you look at the other link I posted where they test an i5-10400F and a Ryzen 5 3600 with the RTX 2060 Super and RX 5700 XT? If you didn't, or are not confident about their numbers, then what about AnandTech's 3DMark API overhead test article from 2015? Granted, it's more than five years old and this test needs a revisit with modern hardware, but the claim that AMD drivers have higher DX11 overhead isn't just me or some others making noise without reason.
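For the curious, the gist of such an overhead test is easy to sketch. Assuming an already-initialized D3D11 immediate context with trivial pipeline state bound, something like the following times the CPU-side submission path, which is where driver overhead lives; this is a simplified illustration, not 3DMark's actual method:

```cpp
// Crude draw-call overhead sketch: issue many tiny draws and time the
// CPU side. With trivial GPU work, the driver's submission path is the
// bottleneck, which is what the 3DMark API overhead test exercises.
#include <chrono>
#include <d3d11.h>

double DrawsPerSecond(ID3D11DeviceContext* ctx, int numDraws = 100000)
{
    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < numDraws; ++i)
        ctx->Draw(3, 0);          // one tiny triangle per call
    ctx->Flush();                 // hand buffered work to the driver/GPU
    auto t1 = std::chrono::steady_clock::now();
    return numDraws / std::chrono::duration<double>(t1 - t0).count();
}
```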
 

tamz_msc

Diamond Member
Jan 5, 2017
3,865
3,729
136
Tamz_msc:

One review site on the internet is not going to mean much. Anybody can say anything they want on the internet. The thing is, everyone else is saying it is fine.

Cherry picking a benchmark from a 2nd review site does not mean much either. We can cherry pick benchmarks all day to claim anything we want.

It is also not unknown for drivers on release to have issues. Ex: RTX 3000 series graphics cards are unusable at stock settings due to frequent crashing! Pitchforks and torches! (Wait, a driver update just fixed the "unfixable" "capacitor" issue....)

Your approach here is making people call BS.
But even if you're right, it does not matter. AMD can update drivers just like Nvidia, and we have seen them do it in the past.



If you're like me and play old games a lot, check out d912pxy. It converts DX9 to DX12 on the fly, which can result in massive performance gains on both Nvidia and AMD in some cases. DX11 games usually have a DX9 mode with no visual quality loss. It is even compatible with ReShade!

Also, check out ReShade; it is game-changingly amazing.

Lastly, I love old games, and I adore my big texture packs. The 3080's 10 GB of memory is far scarier to me than any so-called fixable driver issue on AMD.
If it's a fixable driver issue, why then does AMD not fix it? It has been a problem for years, ever since DCLs became a thing with Civilization V.

And no, I'm not cherry-picking results from the Internet. PCGH.de shows the same thing, as do a few others.

And finally, using a wrapper is a band-aid fix that has its own problems, like breaking exclusive fullscreen, unsupported games, crashes, and whatnot.
 

Timorous

Golden Member
Oct 27, 2008
1,759
3,275
136
If it's a fixable driver issue, why then does AMD not fix it? It has been a problem for years, ever since DCLs became a thing with Civilization V.

And no, I'm not cherry-picking results from the Internet. PCGH.de shows the same thing, as do a few others.

And finally, using a wrapper is a band-aid fix that has its own problems, like breaking exclusive fullscreen, unsupported games, crashes, and whatnot.

Why doesn't NV fix the driver issue in WoW Shadowlands with RT that makes the 3090 perform worse than the 2080Ti?
 

KompuKare

Golden Member
Jul 28, 2009
1,197
1,507
136
Consider Skyrim, which is probably one of the most popular older games (just over nine years old now!) that gets a lot of support with high-res texture packs, modding, and the like. It came out before Kepler and GCN, which means it was running on cards that topped out at 2 GB unless you were running SLI/Xfire or had one of the multi-GPU cards.
In the case of Skyrim it used to be very hard work getting lots of mods to run together without crashing, but then the Wabbajack modlist organiser came along.
And for Skyrim SE (SE being basically the Bethesda rewrite for PS4 and Xbox One, which almost incidentally gave us the 64-bit PC version), many of the modlists, like Living Skyrim which I played, require 8 GB VRAM.
Now, if a similarly moddable game comes around (maybe Bethesda finally releasing the follow-up), there is no way a 10 GB card isn't going to be limiting.
 
Last edited:

tamz_msc

Diamond Member
Jan 5, 2017
3,865
3,729
136
How popular is WoW vs some random old DX11 game?
How important is RT in WoW Shadowlands, which just released a week ago? Besides, the improvement to visual fidelity it brings is so underwhelming that it's not even worth turning on in the first place, even with capable hardware. Why don't you give NVIDIA the benefit of the doubt and see whether a game/driver fix comes out in the future?

In the meantime, AMD has effectively stated that they will never implement command lists in their drivers, instead relying on the game to use a single thread to do all the draw-call submissions, which will result in performance issues in games that do other things with their draw-call thread. And the number of games where this happens is not insignificant, even if they are relatively old.

AMD's issue with (lack of) multithreaded DX11 draw calls has been ongoing for a number of years; in comparison, reduced RT performance in a week-old game isn't even worth talking about in the same breath.
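For readers who haven't touched the API, here's a minimal sketch (assuming an already-created device) of the DX11 deferred contexts / command lists being argued about. Whether recording actually runs concurrently in the driver is reported by the DriverCommandLists capability bit, which is the crux of the NVIDIA-vs-AMD difference:

```cpp
// Minimal sketch of DX11 deferred contexts ("DCLs"): a worker thread
// records draw calls into a command list, the render thread replays it.
// If the driver reports DriverCommandLists = FALSE (as AMD's does), the
// runtime emulates this in software and the multithreading gains vanish.
#include <d3d11.h>

bool DriverSupportsCommandLists(ID3D11Device* device)
{
    D3D11_FEATURE_DATA_THREADING caps = {};
    device->CheckFeatureSupport(D3D11_FEATURE_THREADING,
                                &caps, sizeof(caps));
    return caps.DriverCommandLists == TRUE;
}

// Runs on a worker thread: record draws without touching the
// immediate context.
ID3D11CommandList* RecordDraws(ID3D11Device* device)
{
    ID3D11DeviceContext* deferred = nullptr;
    device->CreateDeferredContext(0, &deferred);

    // ... bind pipeline state and issue draws on `deferred` exactly as
    // you would on the immediate context ...
    deferred->Draw(36, 0);   // e.g. one cube's worth of vertices

    ID3D11CommandList* list = nullptr;
    deferred->FinishCommandList(FALSE, &list);  // bake the recording
    deferred->Release();
    return list;
}

// Runs on the render thread: the only part that must stay serialized.
void Replay(ID3D11DeviceContext* immediate, ID3D11CommandList* list)
{
    immediate->ExecuteCommandList(list, FALSE);
    list->Release();
}
```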
 

tamz_msc

Diamond Member
Jan 5, 2017
3,865
3,729
136
Triple-digit FPS in those titles shown; do you have anything that is borderline unplayable because of said DX11 driver woes? I'm sure if anything new comes out in DX11, the performance will be where it needs to be.
It's not about whether something is unplayable, but about where it performs relative to competing cards. It won't affect high-end cards that much for sure, but that doesn't mean the problem can be brushed aside either.
 

Hitman928

Diamond Member
Apr 15, 2012
6,340
11,280
136
How important is RT in WoW Shadowlands, which just released a week ago? Besides, the improvement to visual fidelity it brings is so underwhelming that it's not even worth turning on in the first place, even with capable hardware. Why don't you give NVIDIA the benefit of the doubt and see whether a game/driver fix comes out in the future?

In the meantime, AMD has effectively stated that they will never implement command lists in their drivers, instead relying on the game to use a single thread to do all the draw-call submissions, which will result in performance issues in games that do other things with their draw-call thread. And the number of games where this happens is not insignificant, even if they are relatively old.

AMD's issue with (lack of) multithreaded DX11 draw calls has been ongoing for a number of years; in comparison, reduced RT performance in a week-old game isn't even worth talking about in the same breath.

From what I understand, DCL is super buggy and badly supported in DX11, which is why AMD does not support it.
 
Reactions: Tlh97 and Leeea

Mopetar

Diamond Member
Jan 31, 2011
8,144
6,842
136
Secondly, you cannot brute-force all DX11 titles with all cards.
You misunderstand the point. You won't get good performance with older cards like Polaris. The point is that even though drivers may not be optimal, newer and future cards will be able to deliver adequate performance in older titles due to more powerful hardware as opposed to a more optimized driver.

Did you look at the other link I posted where they test an i5-10400F and a Ryzen 5 3600 with the RTX 2060 Super and RX 5700 XT? . . . but the claim that AMD drivers have higher DX11 overhead isn't just me or some others making noise without reason.

No, just the images you posted from the one translated article. It highlights the importance of not having a sloppy presentation, though. Even if you're right, no one is going to care to listen to what you have to say.

How important is RT in WoW Shadowlands, which just released a week ago?

I don't care about it or RT effects, but for all I know more people are playing it than the old games. Different people have different things that are important to them. What's to stop them from giving you the same treatment for their issues as you seem to have for theirs?

In the meantime, AMD has effectively stated that they will never implement command lists in their drivers, instead relying on the game to use a single thread to do all the draw-call submissions, which will result in performance issues in games that do other things with their draw-call thread. And the number of games where this happens is not insignificant, even if they are relatively old.

AMD's issue with (lack of) multithreaded DX11 draw calls has been ongoing for a number of years; in comparison, reduced RT performance in a week-old game isn't even worth talking about in the same breath.

Well, don't buy an AMD card then if they're not going to address what you think is important. If I want to go back and play old Bethesda titles, I'll just do it on a modern system that will get adequate performance despite the lack of drivers to deal with their bad game engine.

I'm not even sure the WoW issue is an Nvidia problem, but assuming it were, I'd rather they focus on the new games I'm playing now than on the ones I might pull off the shelf for a bit in a few years.
 

tamz_msc

Diamond Member
Jan 5, 2017
3,865
3,729
136
You misunderstand the point. You won't get good performance with older cards like Polaris. The point is that even though drivers may not be optimal, newer and future cards will be able to deliver adequate performance in older titles due to more powerful hardware as opposed to a more optimized driver.
That future hardware will be powerful enough to brute-force through performance issues is no consolation for less-than-adequate driver optimization compared to NVIDIA.
From what I understand, DCL is super buggy and badly supported in DX11, which is why AMD does not support it.
NVIDIA does it just fine, though the underlying work they did to support it must have been non-trivial. I just wish that AMD did the same, which by the looks of it is not happening any time soon.
 

Hitman928

Diamond Member
Apr 15, 2012
6,340
11,280
136
That future hardware will be powerful enough to brute-force through performance issues is no consolation for less-than-adequate driver optimization compared to NVIDIA.

NVIDIA does it just fine, though the underlying work they did to support it must have been non-trivial. I just wish that AMD did the same, which by the looks of it is not happening any time soon.

To get DCLs to work you have to do a lot of work on a per-game basis, from what I understand. I don't think anyone sees that as the way forward.

I expect AMD to focus far more on DX12/Vulkan support, just as they have for the last few years, and that includes the support they offer to studios. Almost everyone is moving to DX12/Vulkan, and the vast majority of those that aren't are games that run fine on a potato. The number of demanding new games that don't have DX12/Vulkan code paths is becoming very small.
 
Reactions: Tlh97 and Leeea

DrMrLordX

Lifer
Apr 27, 2000
22,126
11,814
136
Guys, AMD is finally getting on board the whole founders edition bait-n-switch game.

Classic AMD: copies NV a generation or two late and gets to invite all the same cynicism and scrutiny, without the shield of brand image and rabid fanboys.

We may see them copying Intel and nVidia depending on how things go.

Step up your game and find more that favor Nvidia! Maybe start dropping some showing the weakness of AMD's RT....I'm still looking for a 6800XT and maybe it'll drive down the prices!

That's . . . a unique perspective. Perhaps all the anti-AMD trolls we've had over the years have merely been posters hoping to buy discounted AMD hardware. Clever. Ditto for the anti-Intel/nVidia trolls.

You guys seem to be confusing what MSRP is. The MSRP of the AMD reference card is $650 (base model). That's not the MSRP for AIBs.

It is sort of a baseline that consumers expect to pay for a given hardware feature set. When the major differences between a reference card and a card from an AIB are the cooling solution and MAYBE a different board/VRM layout, you kind of wonder: how is AMD selling at $X while the AIB can't?
 
Reactions: Tlh97 and Leeea

Yosar

Member
Mar 28, 2019
28
136
106
That future hardware will be powerful enough to brute-force through performance issues is no consolation for less-than-adequate driver optimization compared to NVIDIA.

NVIDIA does it just fine, though the underlying work they did to support it must have been non-trivial. I just wish that AMD did the same, which by the looks of it is not happening any time soon.

Sorry, but you clearly have no clue what you're talking about. The first mention of overhead in drivers shows you clearly have no clue (and quoting purepc.pl, these guys really don't know what they're talking/writing about).
Unfortunately you don't know how a GPU works. So here is a link from someone who knows much more than you, and who actually cared to explain it to guys like you.


Basically there is no overhead in AMD's drivers. AMD's scheduler is placed in hardware, and nVidia's in drivers. Because of this, quite the contrary, there is overhead in nVidia's drivers. And thanks to this overhead, when a game is mostly single-threaded, nVidia cards work better.
The rest is in the link. Learn something, maybe.
In other words, there is no problem to be fixed, and AMD won't fix it. Maybe you were asleep lately, but we are already in the era of Vulkan and DX12, where single-threaded games belong to the past.
 
Last edited:

Mopetar

Diamond Member
Jan 31, 2011
8,144
6,842
136
That future hardware will be powerful enough to brute-force through performance issues is no consolation for less-than-adequate driver optimization compared to NVIDIA.

If it's an old enough title it won't matter. You eventually reach a point where the card runs it faster than the monitor can display frames. That happens eventually regardless of how good the driver support is or isn't and what your display supports. Go look at the low-resolution benchmarks AT did (kind of as a joke) for the Ryzen 5000-series review. Even some of the modern titles run in excess of 400 FPS at the kinds of resolutions many older games were built for. No one's monitor can handle that, and there are still a lot of people running old monitors capped at 60 Hz. At that point no one cares about how much faster it could have been if only there were better drivers.

Also, the people who don't play any of those older games at all won't lose sleep over it, just like they don't over the poor fuel economy of a car they don't own.

NVIDIA does it just fine, though the underlying work they did to support it must have been non-trivial. I just wish that AMD did the same, which by the looks of it is not happening any time soon.

Maybe you should contact their marketing department. I'm sure consumers would feel better if there were an "Amazing drivers for 10-year-old games!" sticker on the box.

Or better yet, go tell the board at AMD that Lisa Su is doing a terrible job because there isn't better driver support for older games. I'm sure they'd love to hear about how much better AMD could be doing if only gamers who likely don't own AMD cards (I mean, if you cared about the game back in the day you probably grabbed an Nvidia card due to the better performance, since AMD would have had these same driver problems back then) could get better performance in these older games. I'm sure no one on the driver team has anything better to do with their time, and a good shout from management will get the lazy gits sorted out.
 
Reactions: Tlh97 and Leeea

tajoh111

Senior member
Mar 28, 2005
309
331
136
That future hardware will be powerful enough to brute-force through performance issues is no consolation for less-than-adequate driver optimization compared to NVIDIA.

NVIDIA does it just fine, though the underlying work they did to support it must have been non-trivial. I just wish that AMD did the same, which by the looks of it is not happening any time soon.

PC Magazine reviewed the 6800 and 6800 XT, and they found the 6800 series underperformed in older games vs the RTX 3080 series. The funny thing is all of these titles used to be very AMD-friendly: three were AMD-sponsored, and Hitman: Absolution just ran really well on AMD hardware.
 
Reactions: Tlh97 and Leeea

Mopetar

Diamond Member
Jan 31, 2011
8,144
6,842
136
PC Magazine reviewed the 6800 and 6800 XT, and they found the 6800 series underperformed in older games vs the RTX 3080 series. The funny thing is all of these titles used to be very AMD-friendly: three were AMD-sponsored, and Hitman: Absolution just ran really well on AMD hardware.

Back when most of those games were released they would have been optimized for GCN. Is it surprising that any company doing a lot of tuning to wring out extra performance might not get the same results on a newer architecture, particularly if there aren't drivers to automatically handle translating those old GCN optimizations into whatever works best on the new architecture?

I'd bet that's the case, since the 5700 XT is about on par with the 2060 SUPER in all of those titles and it usually beats that card soundly in benchmarks of more modern titles. It still doesn't change my point though. Look at the Bioshock Infinite benchmarks: 158 FPS in 4K. Are there even 4K displays that go above 120 Hz? The only one that's particularly concerning is Hitman, just because the framerate is down in the 40s.
 

tamz_msc

Diamond Member
Jan 5, 2017
3,865
3,729
136
Sorry, but you clearly have no clue what you're talking about. The first mention of overhead in drivers shows you clearly have no clue (and quoting purepc.pl, these guys really don't know what they're talking/writing about).
Unfortunately you don't know how a GPU works. So here is a link from someone who knows much more than you, and who actually cared to explain it to guys like you.


Basically there is no overhead in AMD's drivers. AMD's scheduler is placed in hardware, and nVidia's in drivers. Because of this, quite the contrary, there is overhead in nVidia's drivers. And thanks to this overhead, when a game is mostly single-threaded, nVidia cards work better.
The rest is in the link. Learn something, maybe.
In other words, there is no problem to be fixed, and AMD won't fix it. Maybe you were asleep lately, but we are already in the era of Vulkan and DX12, where single-threaded games belong to the past.
That video has been thoroughly debunked on the Beyond3D forums - by actual game developers. I don't usually bother with empty vessels making noises, but here you go for reference:
It's hard to even comment on this with people not having a clue about what a scheduler does and which scheduler does what (yes, there's more than one). Fermi vs. Kepler scheduling has been discussed around here quite a bit already. It has to do with how warps are scheduled after a kernel dispatch or draw call is issued. That is to say, after a shader instruction is executed for a group (warp) of 32 pixels/vertices/threads, when can an SM pick up the next instruction from this same warp. This was done in hardware in the case of Fermi and has been done in software since. The way this software scheduling works is that the shader compiler emits control codes/hints within the shader code. You can read more about it here. This is a compile-time job. Once the shader is compiled, the CPU doesn't do anything anymore.
 