[bitsandchips]: Pascal to not have improved Async Compute over Maxwell


2is

Diamond Member
Apr 8, 2012
4,281
131
106
Lel, I own a Titan X. Before that, a 970; before that, 3x 680s. I have to go back to a 5870 for AMD.

So, ah, try again.

But hey, if you don't believe me, you only need to see the Fury's benchmarks @ 4K. They mostly don't choke.

4GB was never a choke point for a 980 either, but that didn't stop the AMD fans from crying foul about it. Do games suddenly use more texture memory with an NVidia card? What you're saying still doesn't make any sense.

Let me clear it up since you're talking about benchmarks... Why is 4GB enough for a Fury X but not enough for a 980? "because it's HBM" makes no sense. You need what you need.
 

nurturedhate

Golden Member
Aug 27, 2011
1,762
761
136
4GB was never a choke point for a 980 either, but that didn't stop the AMD fans from crying foul about it. Do games suddenly use more texture memory with an NVidia card? What you're saying still doesn't make any sense.

Let me clear it up since you're talking about benchmarks... Why is 4GB enough for a Fury X but not enough for a 980? "because it's HBM" makes no sense. You need what you need.

Please provide a post from someone who posts here frequently that proves your stance of "AMD fanboys say 4GB isn't enough"
 

thesmokingman

Platinum Member
May 6, 2010
2,307
231
106
He's just arguing with himself. Someone somewhere said this, so I must argue with you!


Don't feed the...
 

Det0x

Golden Member
Sep 11, 2014
1,063
3,113
136
Can you link me to these documented facts? I'm quite interested, because I've stayed away from this whole GW debacle. But to actually say something, one must understand it.

From my minimal knowledge, a "GW" title normally adds one or two visual features, e.g. HairWorks or VXAO. These will run better on nVIDIA GPUs (well, duh?) but won't be optimised for competing products. Also, most of the time these features can be turned off (need citation - correct me if I am wrong), or there is an alternative option that produces the same or a similar effect, e.g. SSAO. Also, GW features aren't the core of the engine itself but more like a small extension that could be bypassed if the developer wants to produce similar visual effects(?).

Now unless the game cannot have these turned off and the features are forced to run on the GPU (or are just purely built from ground up to work better with a particular architecture), are they really gimping the performance of their competition or are they just providing more features/options for their customers?

I've always assumed that most games (and most of them are ported from the console) just have a few tacked-on GW visual features that, if turned off, will just leave the game in a vanilla state. Is this a correct assumption?

Nvidia, stop being a DICK @ https://www.youtube.com/watch?v=ZcF36_qMd8M
 

Spanners

Senior member
Mar 16, 2014
325
1
0
Please provide a post from someone who posts here frequently that proves your stance of "AMD fanboys say 4GB isn't enough"

Yes, please do. It's pretty easy to win an argument with yourself. I don't recall any AMD fans "crying foul" about the 980's VRAM either. It would have been an odd argument to make, considering that (excluding some custom cards) the top AMD cards at the time had 4GB as well.
 

2is

Diamond Member
Apr 8, 2012
4,281
131
106
Please provide a post from someone who posts here frequently that proves your stance of "AMD fanboys say 4GB isn't enough"

Member call-outs aren't allowed. Not that it matters; the guy I'm debating with isn't denying that happened. He in fact acknowledged it, but said "HBM is the reason," which does not make any sense.
 

thesmokingman

Platinum Member
May 6, 2010
2,307
231
106
I've always assumed that most games (and most of them are ported from the console) just have a few tacked-on GW visual features that, if turned off, will just leave the game in a vanilla state. Is this a correct assumption?

No, it's generally wrong. Games with GW tend to run badly at release and well into their lifespan. If it were something that could just be turned off like that, there wouldn't have been this much grief over it, ya think?

You completely ignored my reply to your FC question. The facts are right there. FC4 was the poster child for GW. FC Primal is FC4 w/o any GW. FC4 ran like total ass at release and for months; AMD could not even get a handle on it for like a year lol. You realize the code is a black box, so the dev could not even help AMD. Is this good for gaming and consumers? FC Primal, on the other hand, runs fantastically on all GPUs.

What's the difference? Is it still just the developers' fault? You seem to circle around to that same assumption every time. Are you playing possum here?
 

C@mM!

Member
Mar 30, 2016
54
0
36
Member call-outs aren't allowed. Not that it matters; the guy I'm debating with isn't denying that happened. He in fact acknowledged it, but said "HBM is the reason," which does not make any sense.

Let me make it simple for you, since reasoning and logic fail you.

When you're coming to market with twice the memory bandwidth, you're less likely to heap shit on it.

If I remember correctly, however, many reviewers did comment at release that they felt the 4GB limit held the card back slightly at 4K, but that it wasn't overtly noticeable.
 

thesmokingman

Platinum Member
May 6, 2010
2,307
231
106
Let me make it simple for you, since reasoning and logic fail you.

When you're coming to market with twice the memory bandwidth, you're less likely to heap shit on it.

If I remember correctly, however, many reviewers did comment at release that they felt the 4GB limit held the card back slightly at 4K, but that it wasn't overtly noticeable.


Hmm, the 4GB on Fury was always questionable. Not many believed that it wouldn't be a problem. AMD said it wouldn't be a problem; we didn't believe them. Then the benches came out and it competed well with the 980 Ti at 4K. All the grumbling subsided and ppl moved on to other things. Whoever you're having this debate with is pretty much making shit up as he goes along. Don't feed it.
 

2is

Diamond Member
Apr 8, 2012
4,281
131
106
Let me make it simple for you, since reasoning and logic fail you.

When you're coming to market with twice the memory bandwidth, you're less likely to heap shit on it.

If I remember correctly, however, many reviewers did comment at release that they felt the 4GB limit held the card back slightly at 4K, but that it wasn't overtly noticeable.

HBM's memory bandwidth advantage is between the GPU and the VRAM, not between the card and system memory. So if you need more than 4GB of VRAM, your performance tanks regardless of HBM. Having HBM isn't going to make your access to system RAM any faster than a card with GDDR5.
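To make that hierarchy concrete, here is a minimal sketch (the link figures are rough assumptions, with PCIe 3.0 x16 standing in for the card-to-system-RAM path):

```python
# Rough sketch of the point above: HBM vs GDDR5 only changes the
# GPU<->VRAM link; the card<->system-RAM link is the same PCIe bus
# either way. All bandwidth figures are rough assumptions.

links_gbps = {
    "GPU <-> VRAM (GDDR5, e.g. 980)":     224.0,
    "GPU <-> VRAM (HBM, e.g. Fury X)":    512.0,
    "card <-> system RAM (PCIe 3.0 x16)":  15.75,  # same for both cards
}

for link, bw in links_gbps.items():
    print(f"{link:36s}: {bw:6.2f} GB/s")

# Once the working set spills past 4GB, the slow third link gates both
# cards equally, so the HBM advantage on the first link stops mattering.
```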

You aren't simplifying anything; you're just continuing to show you don't actually know what HBM is.

I'm not even saying 4GB is or is not enough. I'm pointing out the argument bias and goalpost shifting, using the 4GB debate that happened with the 980 and then again with the Fury X as an example of it.
 
Last edited:
Feb 19, 2009
10,457
10
76
I'm not even saying 4GB is or is not enough. I'm pointing out the argument bias and goalpost shifting, using the 4GB debate that happened with the 980 and then again with the Fury X as an example of it.

In the interest of science, please find examples of people bashing the 970 and 980 for having 4GB. I find it hard to believe people can be that silly.

On related VRAM talk, I can find many examples of people saying the 390/X didn't need 8GB, or that it wouldn't be useful compared to 4GB. Including from myself.
 

Spanners

Senior member
Mar 16, 2014
325
1
0
I'm not even saying 4GB is or is not enough. I'm pointing out the argument bias and goalpost shifting, using the 4GB debate that happened with the 980 and then again with the Fury X as an example of it.

Yes, but you've still shown nothing to back up the claim that "AMD fans" here said the 980's 4GB was insufficient or a bottleneck. You said the person you were debating agreed with you, but I can't find any post of his in this thread where he states that. It would be hypocritical, but I don't recall it happening.

Just link the thread and page if you are worried about calling people out. If there was a 980 "debate" around 4GB (and I certainly don't want to imply that I've read every thread on here), then that should be simple.
 

2is

Diamond Member
Apr 8, 2012
4,281
131
106
Yes, but you've still shown nothing to back up the claim that "AMD fans" here said the 980's 4GB was insufficient or a bottleneck. You said the person you were debating agreed with you, but I can't find any post of his in this thread where he states that. It would be hypocritical, but I don't recall it happening.

Just link the thread and page if you are worried about calling people out. If there was a 980 "debate" around 4GB (and I certainly don't want to imply that I've read every thread on here), then that should be simple.

C@mM!'s very first reply to me in this thread was stating why people didn't make the same arguments with Fury X having 4GB. You haven't found it because you didn't look.
 

thesmokingman

Platinum Member
May 6, 2010
2,307
231
106
C@mM!'s very first reply to me in this thread was stating why people didn't make the same arguments with Fury X having 4GB. You haven't found it because you didn't look.


lol you baited him into an argument that doesn't exist!
 

C@mM!

Member
Mar 30, 2016
54
0
36
lol you baited him into an argument that doesn't exist!

Probably my fault. The Fury's 4GB buffer and negligible performance difference stopped me from switching from a Titan X to a Fury, so my own preconceived bias on memory amounts, whilst trying to be nuanced, led me down a shit parade.

GG 2IS, well trolled.
 

Spanners

Senior member
Mar 16, 2014
325
1
0
C@mM!'s very first reply to me in this thread was stating why people didn't make the same arguments with Fury X having 4GB. You haven't found it because you didn't look.

I already said I read his posts. Is this the post you think is agreeing with you that AMD fans said that the 980's 4GB wasn't enough?

Namely because most people understood that with HBM1, it wasn't possible to cram more on, and the bandwidth gain minimised the impact of 4GB at 1440p+.

And people bitched about the 970 due to 512MB of that 4GB buffer dropping to a 64-bit bus due to the way Nvidia sliced the die.

Because I'm seeing him continue the debate about why, or why people thought, Fury and HBM are different (from the 980). If you think that's him agreeing with the previously mentioned comments, then I just don't agree.

Not the part of my post I was really interested in you replying to, though.
 
Last edited:

2is

Diamond Member
Apr 8, 2012
4,281
131
106
Probably my fault. The Fury's 4GB buffer and negligible performance difference stopped me from switching from a Titan X to a Fury, so my own preconceived bias on memory amounts, whilst trying to be nuanced, led me down a shit parade.

GG 2IS, well trolled.

Hey, look on the bright side. Now you know where HBM's bandwidth advantage is applicable and where it is not.
 

C@mM!

Member
Mar 30, 2016
54
0
36
Hey, look on the bright side. Now you know where HBM's bandwidth advantage is applicable and where it is not.

No, you still can't wrap your head around the fact that a buffer that empties faster can fill sooner, but whatever.
 

2is

Diamond Member
Apr 8, 2012
4,281
131
106
No, you still can't wrap your head around the fact that a buffer that empties faster can fill sooner, but whatever.

Yeah? And where is it going to refill its buffer from? We'll take this one step at a time, for your benefit.
 

Cookie Monster

Diamond Member
May 7, 2005
5,161
32
86
No, it's generally wrong. Games with GW tend to run badly at release and well into their lifespan. If it were something that could just be turned off like that, there wouldn't have been this much grief over it, ya think?

You completely ignored my reply to your FC question. The facts are right there. FC4 was the poster child for GW. FC Primal is FC4 w/o any GW. FC4 ran like total ass at release and for months; AMD could not even get a handle on it for like a year lol. You realize the code is a black box, so the dev could not even help AMD. Is this good for gaming and consumers? FC Primal, on the other hand, runs fantastically on all GPUs.

What's the difference? Is it still just the developers' fault? You seem to circle around to that same assumption every time. Are you playing possum here?

Reread your post. Half of what you wrote doesn't even make sense, because you have no idea yourself on the issue at hand. It comes off as a bunch of incoherent ranting that doesn't mean anything, nor does it have any substance or value. I'm not calling you out, and no offence by the way, but I'm not going to think about a reply when I have to spend time and effort deciphering what you're saying.

I mean, for starters, FC Primal isn't FC4 without godrays, because I see many more differences than that... plus it still uses godrays, if you didn't know (volumetric fog godrays). And SSAO. Actually, it just doesn't provide options for nVIDIA GW features like enhanced godrays and HBAO+. I'd also think that because it performs better, the game engine too must have had some optimizations, but that's something else. AMD not being able to fix their problem is somehow tied in with GW and nVIDIA?

From memory, the game itself produced good framerates, but AMD hardware had issues with the framerate being choppy at release, which I believe was a game bug? And obviously using nVIDIA options resulted in lower performance... the thing is, why does an AMD user HAVE to use enhanced godrays (a GW feature)? What did you expect to happen here? Why not just use an nVIDIA card if one wants to use nVIDIA features?

It's like not many people like GW features (if AT forums are indicative of the general gaming populace) or the visuals they provide, yet they blame the IHV because performance is low when these features are turned on. But why are they on, if one doesn't like the features or the benefits they provide in the first place??? I'm trying to wrap this around in my head, and it makes no sense to me at all to be so angered. It is definitely an issue if it cannot be turned off or disabled, but that doesn't seem to be the case, because the game devs aren't stupid (or are they?) and there are options to turn these things off most of the time.

So what other games are like this, other than FC4?

I keep hearing nonsensical stuff about GW causing bad performance and bugs, yet I see many cases where you can just turn them off... regardless of being a black box or not, if you don't use the feature then it shouldn't affect the rest of the game. Let me rephrase this: it's like many seem to believe that the whole source code for the game is sabotaged to some degree, yet from what I know and have read/researched, these features are just a small, negligible part of the whole game code, where those specific parts are like a black box (so no one can see what's actually going on) but can be bypassed if not used (obviously). Plus, how do we know it ain't a game/driver bug? Benefit of the doubt because it uses GW? It's a very strange thought process.
 

C@mM!

Member
Mar 30, 2016
54
0
36
Yeah? And where is it going to refill its buffer from? We'll take this one step at a time, for your benefit.

This would make sense if you were dropping the buffer fully.

However, it usually doesn't work like that; it will only drop what isn't needed for the coming pipeline. So the faster your card feeds the pipeline, the faster it can clear the buffer to start streaming new data in.

Otherwise, your example would mean there's no difference if you cut a 4GB 980 down to a 64-bit bus.

And just so you don't try to keep twisting this (and as mentioned earlier), I am well and truly aware that having to drop back to system RAM, or, gasp, the hard drive, is slower. However, you keep ignoring that the cards are not having to dump full buffers with today's workloads, and thus the bottleneck isn't as severe (which, with the Fury, is to the point of being negligible).
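For what it's worth, the arithmetic behind that incremental argument is easy to sketch (the PCIe figure and the per-frame swap size are assumptions, not measurements):

```python
# Toy sketch of the incremental-streaming argument: a card rarely
# dumps its whole buffer, so the bus only carries the small slice
# that changes each frame. All figures are illustrative assumptions.

PCIE_GBPS = 15.75  # assumed effective PCIe 3.0 x16 bandwidth, GB/s

def stream_time_ms(gigabytes: float, link_gbps: float = PCIE_GBPS) -> float:
    """Milliseconds spent moving `gigabytes` over the bus."""
    return gigabytes / link_gbps * 1000.0

full_dump   = stream_time_ms(4.0)   # worst case: replace the full 4 GB
incremental = stream_time_ms(0.25)  # hypothetical ~256 MB per-frame swap

print(f"full 4 GB refill: {full_dump:6.1f} ms (a long, visible stall)")
print(f"256 MB increment: {incremental:6.1f} ms (about one 60 fps frame)")
```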

And at that, I'm done.

So, about that Pascal and async compute, huh?
 

2is

Diamond Member
Apr 8, 2012
4,281
131
106
This would make sense if you were dropping the buffer fully.

However, it usually doesn't work like that; it will only drop what isn't needed for the coming pipeline. So the faster your card feeds the pipeline, the faster it can clear the buffer to start streaming new data in.

Otherwise, your example would mean there's no difference if you cut a 4GB 980 down to a 64-bit bus.

And just so you don't try to keep twisting this (and as mentioned earlier), I am well and truly aware that having to drop back to system RAM, or, gasp, the hard drive, is slower. However, you keep ignoring that the cards are not having to dump full buffers with today's workloads, and thus the bottleneck isn't as severe (which, with the Fury, is to the point of being negligible).

And at that, I'm done.

So, about that Pascal and async compute, huh?

When you're having to rely on a 30-40GB/s pipe (along with much, much higher latency) for your texture data instead of 224GB/s (980) or 512GB/s (Fury X), the benefits of HBM over GDDR5 that you're mentioning are going to range from virtually non-existent to non-existent. Both HBM and GDDR5 cards are going to just sit there waiting for more texture data. It's only "negligible" because, luckily, 4GB has proven to be enough for now.
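Plugging those figures into a quick back-of-envelope sketch shows the scale of the gap (the 1 GB overflow size is a made-up example):

```python
# Time to move 1 GB of texture data over each path, using the
# bandwidth figures quoted above. The overflow size is hypothetical.

paths_gbps = {
    "system RAM pipe (~35 GB/s)": 35.0,
    "GDDR5 on a 980 (224 GB/s)":  224.0,
    "HBM on a Fury X (512 GB/s)": 512.0,
}

overflow_gb = 1.0
for path, gbps in paths_gbps.items():
    print(f"{path:28s}: {overflow_gb / gbps * 1000.0:6.2f} ms")
```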

And yes, it's a damn shame about that async compute. NVidia has its work cut out for it if true (which it appears to be). It's still rumors now, but there are too many of them to ignore at this point.
 

coercitiv

Diamond Member
Jan 24, 2014
6,403
12,864
136
I keep hearing nonsensical stuff about GW causing bad performance and bugs, yet I see many cases where you can just turn them off... regardless of being a black box or not, if you don't use the feature then it shouldn't affect the rest of the game.
It doesn't affect the gaming experience, but it does affect reviews. People use reviews and performance indexes to make purchasing decisions. You can take it from here.

Plus, how do we know it ain't a game/driver bug? Benefit of the doubt because it uses GW? It's a very strange thought process.
It's not a strange thought process: if you can turn off said features, you can compare performance in both cases (on/off) and evaluate their impact. Or do you reckon the game/driver bug manifests itself only with GW features turned off?

It's good to remain skeptical in the face of all the GW doom & gloom this forum may paint, but it ain't so good to create a thinking pattern that discards any evidence to the contrary.
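That on/off comparison is mechanical enough to sketch in a couple of lines (the FPS figures below are invented, not benchmark results):

```python
# Minimal version of the on/off methodology: run the same scene with
# a GW feature toggled and attribute the delta to the feature.
# The FPS numbers are invented for illustration.

def feature_cost_pct(fps_off: float, fps_on: float) -> float:
    """Percent of frame rate lost when the feature is enabled."""
    return (fps_off - fps_on) / fps_off * 100.0

print(f"hypothetical HairWorks cost: {feature_cost_pct(72.0, 51.0):.1f}%")
```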
 

The Alias

Senior member
Aug 22, 2012
647
58
91
When you're having to rely on a 30-40GB/s pipe (along with much, much higher latency) for your texture data instead of 224GB/s (980) or 512GB/s (Fury X), the benefits of HBM over GDDR5 that you're mentioning are going to range from virtually non-existent to non-existent. Both HBM and GDDR5 cards are going to just sit there waiting for more texture data. It's only "negligible" because, luckily, 4GB has proven to be enough for now.

And yes, it's a damn shame about that async compute. NVidia has its work cut out for it if true (which it appears to be). It's still rumors now, but there are too many of them to ignore at this point.

Let's make this painfully simple. Let's say you're in Whiterun: both the Fury X and the 980 Ti are going to load the city in its entirety. The only difference is that the 980 Ti loaded textures all the way to Paarthurnax, while the Fury X only loaded textures to the lookout where you fight the first dragon. Both load what's needed just fine; the 980 Ti just loads more of what's not needed.
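A toy version of that analogy, with invented numbers, just to make the claim concrete:

```python
# Toy model of the Whiterun analogy: stream the nearest textures
# first until VRAM is full. Density and view distance are invented.

GB_PER_KM = 0.8   # assumed texture data per km of draw distance
NEEDED_KM = 4.0   # distance the current view actually requires

for card, vram_gb in [("Fury X (4 GB)", 4.0), ("980 Ti (6 GB)", 6.0)]:
    radius_km = vram_gb / GB_PER_KM  # nearest-first fill radius
    verdict = "covers the view" if radius_km >= NEEDED_KM else "must stream mid-view"
    print(f"{card}: textures loaded out to {radius_km:.1f} km -> {verdict}")
```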
 

2is

Diamond Member
Apr 8, 2012
4,281
131
106
Let's make this painfully simple. Let's say you're in Whiterun: both the Fury X and the 980 Ti are going to load the city in its entirety. The only difference is that the 980 Ti loaded textures all the way to Paarthurnax, while the Fury X only loaded textures to the lookout where you fight the first dragon. Both load what's needed just fine; the 980 Ti just loads more of what's not needed.

Let me make it painlessly simple. It wasn't a 4 vs 6 GB discussion. It was 4GB HBM vs 4GB GDDR5 when you NEED more than 4GB.

Edited for more simplicity
 
Last edited: