Geforce GTX 1060 Thread: faster than RX 480, 120W, $249


antihelten

Golden Member
Feb 2, 2012
1,764
274
126
Thanks for asking. I have a PhD in math and a Master's in CS.
Don't waste my time linking stupid papers written by some students.

A professor with 187 publications is "some student", seriously mate just stop digging.

Read and try to understand my example above of why the arithmetic mean is nonsense. Claiming that of 2 equal graphics cards, one is 25% faster is ridiculous.

Both the harmonic mean and the arithmetic mean are inconsistent with respect to samples of relative quantities and their reciprocals.
For consistency you would want the following property: 1/(mean(q1,q2)) = mean(1/q1,1/q2). This translates to wanting consistent results whether you ask how much faster A is than B or how much slower B is than A.
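The consistency property above is easy to check numerically. A minimal Python sketch (the ratios are made up for illustration, not real benchmark data) showing that the geometric mean satisfies 1/mean(q1,q2) = mean(1/q1,1/q2) while the arithmetic mean does not:

```python
import math

def geo_mean(xs):
    # n-th root of the product of the samples
    return math.prod(xs) ** (1.0 / len(xs))

def arith_mean(xs):
    return sum(xs) / len(xs)

# Hypothetical relative performance of card A vs card B in two games
ratios = [2.0, 0.5]                  # A twice as fast in game 1, half as fast in game 2
inverse = [1.0 / q for q in ratios]  # the same data from B's point of view

# Geometric mean: 1/mean(q) == mean(1/q) -> consistent
print(1.0 / geo_mean(ratios), geo_mean(inverse))      # 1.0 1.0

# Arithmetic mean: 1/mean(q) != mean(1/q) -> inconsistent
print(1.0 / arith_mean(ratios), arith_mean(inverse))  # 0.8 1.25
```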

But I have the feeling I am talking to a wall in your case.

I understand your example perfectly well, and I am perfectly aware that in extremely contrived scenarios such as that, it is easy to make an arithmetic mean give faulty/useless results. What you don't seem to understand is that the same thing is just as possible with geometric means.

Since we are not dealing with such an extreme scenario, it doesn't matter which mean we use.
 

zinfamous

No Lifer
Jul 12, 2006
110,810
29,564
146
Because there is really no straightforward way of determining what is or isn't appropriate, so by using a mixture of settings you should hopefully even out any outliers.

Because garbage data is garbage data. The only way to account for outliers is to eliminate them from your data set. That is what real statisticians do with outliers.


As I just said above, this is actually a strength of meta-analysis, not a weakness.

No it isn't, see above. Any meta-analysis that contains widely variable numbers reflecting identical values is a meta-analysis that chooses to include bad or misleading data, which no statistician considers a strength. Garbage in, garbage out. Further, tossing outliers into that pool is amateur hour. There is no analysis that "compensates for outliers." This is why you simply toss them out on the high end and the low end. The N of your data set doesn't really matter, because outliers will always skew it.


And how do you determine what is bad data?

Well, that's the real problem, isn't it? It always depends on your subject and the question you are trying to answer. I am a biologist and a lot of our work is bioinformatics, though I'm more a molecular biologist...data kinda puts me to sleep. :\
But I can say that testing manufactured chunks of metal like this, which behave the way humans design them to, within set/expected limits, is incredibly uncomplicated and really only requires some extremely basic analysis, doesn't it?

You ask yourself what kind of reporting is relevant to these websites: user experience, right? What is the most valid data that will most accurately inform their purchasing decisions, right? I'm intrigued by the fact that I don't have to deal with living cells or fully developed organisms that, for whatever bastard reasons, simply don't behave the way they are supposed to when I arrest their heart development or inject a construct that, when meeting optimal conditions, should see a population of flies developing with legs growing out of their heads instead of antennae. You know, when that doesn't happen like it is supposed to, it is frustrating, but that is life.

This is different. Hardware has human imposed limits blah blah now I've gone off on a tangent.

Here, you take a collection of products from two vendors representing various designs. You standardize a class of games that reflects popularity (real use), and where popularity doesn't completely cover it, you add other games with various APIs that can best reflect both types of cards. For each card and each game, you choose the API in that specific usage scenario that maximizes each card's performance. The set variables would be your resolution, quality settings, this and that...it's really quite simple.

A worthwhile N in this kind of test is not the total number of games; it would be the total number of chips per card tested, for each game. In this case, with each card best representing itself in each game (say DX11 is better for nVidia in the same game where DX12 favors AMD--it is wholly unreasonable to test both cards with the same API in that game; that is garbage data), you test multiple chips of the same card in each game benchmark to average out binning issues--just as you have a bit of a process issue in manufacturing. I know that I have the same kind of random output when it comes to testing piles and piles of cells or individual mice and flies, or whatever.

Toss in whatever thermal/noise/power draw/OC-potential metrics are relevant to each usage period within games.

Take TPU (seeing as this seems to be the one people are hating on), they currently have 2 games in their benchmark suite with DX12, but the DX12 implementation for both those games is more or less broken.

Do you still think they should use DX12 in their testing then?



And as I said above TPU already tests in the API that best represents the cards, so what's the problem?

So they chose 2 games with DX12 (while there are currently ~7), and yet those 2 games have broken DX12 implementations compared to the others? Yet when you see something like this, you suggest that maybe DX12 as a whole is an invalid variable, and not TPU's specific selection of the 2 broken DX12 games...and for what reason would TPU do this?

I am puzzled by the way you interpret this situation. I doubt you will be able to sufficiently explain your reasoning behind that, but I will try.


As for the final, bolded claim: Either you really aren't paying attention or you are outright lying.

The 2 most obvious examples:
--They chose the known broken DX12 implementation of RoTTR--which is terrible for both cards compared to DX11--yet they do it anyway? How does this support your claim?
--They do not test Vulkan in Doom, which not only favors both cards over DX11 but highly favors AMD. How does this support your claim?

Further, they still do not use AMD's updated and current drivers, which are already known to boost the 480 ~3-4% in most cases above the numbers that they still report. Why do you think they do this?
 
Last edited:

Thala

Golden Member
Nov 12, 2014
1,355
653
136
A professor with 187 publications is "some student", seriously mate just stop digging.

I challenge you to write a letter/email to the above professor, whom you seem to trust so much, and ask him what mean he suggests for our case! He will give you precisely the answer I gave you.

but what you don't seem to understand is that the same thing is just as possible with geometric means.

Yes, for other unrelated metrics geometric means might not be the correct choice. For our particular case it is the only choice, as the others are, at the very least, inconsistent.

Since we are not dealing with such an extreme scenario, it doesn't matter which mean we use.

I give up arguing with you. You seem to be learning-resistant.
 
Last edited:

raghu78

Diamond Member
Aug 23, 2012
4,093
1,475
136
Take TPU (seeing as this seems to be the one people are hating on), they currently have 2 games in their benchmark suite with DX12, but the DX12 implementation for both those games is more or less broken.

Do you still think they should use DX12 in their testing then?

And as I said above TPU already tests in the API that best represents the cards, so what's the problem?

Sorry, Hitman DX12 is stable and AMD cards benefit significantly from the DX12 version. RoTR DX12, on the other hand, regresses performance for Nvidia cards from the DX11 version, while for AMD cards the perf is almost the same after the latest patch. RoTR is the worst DX12 game out there. All of the other DX12 games work very well for AMD cards and show good gains vs DX11; RoTR is a Gameworks title and has a broken DX12 implementation.

Anyway, W1zzard has agreed that he will be moving Hitman to DX12 and adding Doom Vulkan. So that should address a few of the concerns with their game suite.

https://www.reddit.com/r/Amd/comments/4tz24g/rx_480_strix_oc_8_gb_review_techpowerup/

"I rebench from time to time, last I rebenched in May. A full rebench takes me around two weeks plus time to figure out benchmark scenes etc.
After that rebench there was 1080, then Computex, then 1070, 480, 1060. Impossible to stop all graphics card review work for two weeks. It's something I'll do asap, and yes, will include DOOM Vulkan, Hitman DX12. Unsure about Warhammer DX12 which is Beta."
 

Enigmoid

Platinum Member
Sep 27, 2012
2,907
31
91
Oh please, don't try to argue this; you are going to lose this argument since apparently you have no math background. It could not be more wrong to use the arithmetic mean.

Want to see what happens with the arithmetic mean?

Assume we have benchmarks of 2 games:
1) RX480: 50 fps GTX1060: 100fps
2) RX480: 100fps GTX1060: 50 fps

Intuitively we would rate both cards equal, right?

Now let's look at how TPU calculates:

Code:
RX480   GTX1060   100*GTX/RX
50      100       200
100     50        50

arith mean: 125
geom mean: 100


According to the arithmetic mean (and also TPU!!) the GTX1060 would be 25% faster on average over both (hypothetical) games....cool, huh?

And you can flip this around and get the same for the 480.
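That flip is easy to check numerically. A small Python sketch of the TPU-style relative-percentage calculation, using the same hypothetical fps from the example above:

```python
import math

# The hypothetical two-game benchmark from the example above
rx480   = [50, 100]   # fps per game
gtx1060 = [100, 50]

# Per-game performance of each card relative to the other
rel_1060 = [g / r for g, r in zip(gtx1060, rx480)]  # [2.0, 0.5]
rel_480  = [r / g for g, r in zip(gtx1060, rx480)]  # [0.5, 2.0]

arith = lambda xs: sum(xs) / len(xs)
geo   = lambda xs: math.prod(xs) ** (1.0 / len(xs))

# Arithmetic mean claims EACH card is 25% faster than the other
print(arith(rel_1060), arith(rel_480))  # 1.25 1.25

# Geometric mean gives the consistent answer: both cards are equal
print(geo(rel_1060), geo(rel_480))      # 1.0 1.0
```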

I don't think this is how TPU calculates their performance summary.
 
Feb 19, 2009
10,457
10
76
Sorry, Hitman DX12 is stable and AMD cards benefit significantly from the DX12 version. RoTR DX12, on the other hand, regresses performance for Nvidia cards from the DX11 version, while for AMD cards the perf is almost the same after the latest patch. RoTR is the worst DX12 game out there. All of the other DX12 games work very well for AMD cards and show good gains vs DX11; RoTR is a Gameworks title and has a broken DX12 implementation.

Anyway, W1zzard has agreed that he will be moving Hitman to DX12 and adding Doom Vulkan. So that should address a few of the concerns with their game suite.

https://www.reddit.com/r/Amd/comments/4tz24g/rx_480_strix_oc_8_gb_review_techpowerup/

"I rebench from time to time, last I rebenched in May. A full rebench takes me around two weeks plus time to figure out benchmark scenes etc.
After that rebench there was 1080, then Computex, then 1070, 480, 1060. Impossible to stop all graphics card review work for two weeks. It's something I'll do asap, and yes, will include DOOM Vulkan, Hitman DX12. Unsure about Warhammer DX12 which is Beta."

Interesting that he doesn't like Warhammer DX12 because it's a beta... I've been putting many hours into it in DX12 mode, no issues. The extra performance is nice, especially MIN FPS in big battles.

For a beta, it's better than some other DX12 implementations, for sure.
 

sirmo

Golden Member
Oct 10, 2011
1,014
391
136
Sorry, Hitman DX12 is stable and AMD cards benefit significantly from the DX12 version. RoTR DX12, on the other hand, regresses performance for Nvidia cards from the DX11 version, while for AMD cards the perf is almost the same after the latest patch. RoTR is the worst DX12 game out there. All of the other DX12 games work very well for AMD cards and show good gains vs DX11; RoTR is a Gameworks title and has a broken DX12 implementation.

Anyway, W1zzard has agreed that he will be moving Hitman to DX12 and adding Doom Vulkan. So that should address a few of the concerns with their game suite.

https://www.reddit.com/r/Amd/comments/4tz24g/rx_480_strix_oc_8_gb_review_techpowerup/

"I rebench from time to time, last I rebenched in May. A full rebench takes me around two weeks plus time to figure out benchmark scenes etc.
After that rebench there was 1080, then Computex, then 1070, 480, 1060. Impossible to stop all graphics card review work for two weeks. It's something I'll do asap, and yes, will include DOOM Vulkan, Hitman DX12. Unsure about Warhammer DX12 which is Beta."
Good news. DX12 and Vulkan generally improve performance on Pascal cards as well, so it should be a welcome change for everyone, including those looking to see if an upgrade from Maxwell to Pascal is worth it.
 

sirmo

Golden Member
Oct 10, 2011
1,014
391
136
Interesting that he doesn't like Warhammer DX12 because it's a beta... I've been putting many hours into it in DX12 mode, no issues. The extra performance is nice, especially MIN FPS in big battles.

For a beta, it's better than some other DX12 implementations, for sure.
How do you enable it? My option is grayed out, and I am both on Win10 and have an RX 480; I hope it's not because my Windows is the Enterprise edition.

DX12 works fine in AOTS and Doom screams under Vulkan.
 
Feb 19, 2009
10,457
10
76
How do you enable it? My option is grayed out, and I am both on Win10 and have an RX 480; I hope it's not because my Windows is the Enterprise edition.

DX12 works fine in AOTS and Doom screams under Vulkan.

Just the latest patch on Steam:



I'm on regular Win 10 Pro.
 

Thala

Golden Member
Nov 12, 2014
1,355
653
136
And you can flip this around and get the same for the 480.

Precisely. Therefore using the arithmetic mean is inconsistent. With the arithmetic mean you can make an argument for both cards being 25% faster than the other.
Yay, someone gets it finally.
However, with the geometric mean you will get the consistent answer that both cards are equally fast.

I don't think this is how TPU calculates their performance summary.

Unfortunately TPU does the calculation using the arithmetic mean (precisely as I did in my example).
See here.
 

raghu78

Diamond Member
Aug 23, 2012
4,093
1,475
136
Precisely. Therefore using the arithmetic mean is inconsistent. With the arithmetic mean you can make an argument for both cards being 25% faster than the other.
Yay, someone gets it finally.
However, with the geometric mean you will get the consistent answer that both cards are equally fast.



Unfortunately TPU does the calculation using the arithmetic mean (precisely as I did in my example).
See here.

Yeah, I clearly understand what you are saying.

https://en.wikipedia.org/wiki/Geometric_mean

"A geometric mean is often used when comparing different items—finding a single "figure of merit" for these items—when each item has multiple properties that have different numeric ranges.[1] For example, the geometric mean can give a meaningful "average" to compare two companies which are each rated at 0 to 5 for their environmental sustainability, and are rated at 0 to 100 for their financial viability. If an arithmetic mean were used instead of a geometric mean, the financial viability is given more weight because its numeric range is larger—so a small percentage change in the financial rating (e.g. going from 80 to 90) makes a much larger difference in the arithmetic mean than a large percentage change in environmental sustainability (e.g. going from 2 to 5). The use of a geometric mean "normalizes" the ranges being averaged, so that no range dominates the weighting, and a given percentage change in any of the properties has the same effect on the geometric mean. So, a 20% change in environmental sustainability from 4 to 4.8 has the same effect on the geometric mean as a 20% change in financial viability from 60 to 72."

The case you pointed out is a classic one where the arithmetic mean falls apart.
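The normalization the quote describes is easy to demonstrate. A short Python sketch using the companies example from the quoted passage (the ratings are the ones given there):

```python
import math

geo = lambda xs: math.prod(xs) ** (1.0 / len(xs))

# Two properties with very different ranges:
# environmental sustainability on 0-5, financial viability on 0-100
base   = geo([4.0, 60.0])
env_20 = geo([4.8, 60.0])   # +20% environmental sustainability (4 -> 4.8)
fin_20 = geo([4.0, 72.0])   # +20% financial viability (60 -> 72)

# Both 20% changes move the geometric mean by the same factor, sqrt(1.2),
# so neither range dominates the weighting
print(env_20 / base, fin_20 / base)  # ~1.0954 ~1.0954
```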
 

antihelten

Golden Member
Feb 2, 2012
1,764
274
126
because garbage data is garbage data. the only way to account for outliers is to eliminate them from your data set. That is what real statisticians do with outliers.

No it isn't; "real" statisticians will first try to determine the nature of an outlier, and only if it can be determined to be due to experimental error or similar issues will they eliminate it.

A data point being an outlier doesn't make said data point faulty or garbage.

no it isn't see above. Any metanalysis that contains widely variable numbers reflecting identical values is a metanalysis that chooses to include bad or misleading data which no statistician considers a strength. garbage in, garbage out. Further, tossing in outliers within that pool is amateur hour. There is no analysis that "compensates for outliers." This is why you simple toss them out on the high end and the low end. The N of your data set doesn't really matter, because outliers will always skew it.

This is all fine if you actually have a way of determining which data is "garbage", but as I said above, just because a data point is an outlier doesn't automatically make it garbage.

Well, that's the real problem, isn't it? It always depends on your subject and the question you are trying to answer. I am a biologist and a lot of our work is bioinformatics, though I'm more a molecular biologist...data kinda puts me to sleep. :\

snip

This is basically what I have been saying all along, how do you determine what is bad data?

Until someone invents a clear and straightforward standard for how to test GPUs that everyone can agree on, we don't really have a way to determine what is or isn't bad data. And as long as we can't clearly identify the bad data, doing meta reviews like what 3DCenter does is the best way to at least minimize the impact of any potential bad data.

So they chose 2 games with DX12 (while there are currently ~7), and yet those 2 games have broken DX12 implementations compared to the others? Yet when you see something like this, you suggest that maybe DX12 as a whole is an invalid variable, and not TPU's specific selection of the 2 broken DX12 games...and for what reason would TPU do this?

I am puzzled by the way you interpret this situation. I doubt you will be able to sufficiently explain your reasoning behind that, but I will try.

I never said that DX12 as a whole is an invalid variable, no idea where you got that idea from.

Nor did I ever say that TPU's selection of DX12 games was optimal. I simply said that, given the games they chose, using DX11 is the correct thing to do.

As for the final, bolded claim: Either you really aren't paying attention or you are outright lying.

The 2 most obvious examples:
--They chose the known broken DX12 implementation of RoTTR--which is terrible for both cards compared to DX11--yet they do it anyway? How does this support your claim?
--They do not test Vulkan in Doom, which not only favors both cards over DX11 but highly favors AMD. How does this support your claim?

Further, they still do not use AMD's updated and current drivers, which are already known to boost the 480 ~3-4% in most cases above the numbers that they still report. Why do you think they do this?

Mea culpa, I'm afraid this is a case of me not paying attention. I genuinely thought that TPU tested Tomb Raider in DX11 and not DX12. As for them not using Vulkan in DOOM, there is a very good reason for this: DOOM isn't part of their benchmark suite, so that has nothing to do with my "claim".

As far as drivers go, sure, it would be nice if they used the new drivers, but then again they are still quite new and not even Anandtech is using them yet. No reason to crucify TPU over that.

I challenge you to write a letter/email to above professor, which you seem to trust so much, and ask him what mean he is suggesting for our case! He will give you precisely the answer i gave you.

Yes, for other unrelated metrics geometric means might not be the correct choice. For our particular case it is the only choice as others are at least inconsistent.

I give up arguing with you. You seem to be learn resistent.

Yes, I'm sure that if I write a letter to said professor, he will give me an answer that contradicts his own publications.

And I can easily give you an example with the exact same kind of metrics (relative fps, 2 GPUs, 2 games) where the geomean gives clearly incorrect answers compared to the arithmetic mean.

Also you might want to ease up on the personal attacks.

Sorry, Hitman DX12 is stable

snip...

No it isn't, not unless they have fixed all the texture issues (which don't exist in DX11).
 

Thala

Golden Member
Nov 12, 2014
1,355
653
136
And I can easily give you an example with the exact same kind of metrics (relative fps, 2 GPUs, 2 games) where the geomean gives clearly incorrect answers compared to the arithmetic mean.

Making an exception to my last statement because I cannot resist this offer. Please present your easy example. Always interested in extending my collection of common fallacies.
 

antihelten

Golden Member
Feb 2, 2012
1,764
274
126
Making an exception to my last statement because I cannot resist this offer. Please present your easy example. Always interested in extending my collection of common fallacies.

Code:
           GPU A        GPU B
Game 1     100 FPS     5 FPS
Game 2     0.1 FPS     5 FPS

Using the relative values, the geomean would put GPU B as 58% faster than GPU A, even though GPU B is completely unplayable in both games, whereas GPU A is perfectly playable in Game 1. The arithmetic mean correctly pegs GPU A as the better GPU (although at 901% faster, the exact value is quite arbitrary).

So if you were using the geomean to recommend which GPU was better, you would make a lot of people very sad.
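The numbers in that example can be reproduced in a few lines of Python (the fps values are the hypothetical ones from the table above):

```python
import math

# Hypothetical fps from the table above
gpu_a = [100, 0.1]
gpu_b = [5, 5]

a_over_b = [a / b for a, b in zip(gpu_a, gpu_b)]  # [20.0, 0.02]
b_over_a = [1.0 / q for q in a_over_b]            # [0.05, 50.0]

geo   = lambda xs: math.prod(xs) ** (1.0 / len(xs))
arith = lambda xs: sum(xs) / len(xs)

# Geometric mean: B comes out ~58% "faster" than A
print(geo(b_over_a))    # ~1.581

# Arithmetic mean: A comes out 901% faster than B
print(arith(a_over_b))  # 10.01
```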
 

zinfamous

No Lifer
Jul 12, 2006
110,810
29,564
146
No it isn't, "real" statisticians will first try and determine the nature of an outlier, and only if it can be determined to be due to experimental error or similar issues will they eliminate them.

A data point being an outlier doesn't make said data point faulty or garbage.

Right, it doesn't. But if you are trying to determine any sort of mean in a large data set, then outliers skew your data.

This is why real statisticians toss them out when performing such analysis.

...this makes me wonder what type of data you are normally working with.

I never said that DX12 as a whole is an invalid variable, no idea where you got that idea from.
Then perhaps you can clarify for me what you could possibly mean by this part of your post:
Take TPU (seeing as this seems to be the one people are hating on), they currently have 2 games in their benchmark suite with DX12, but the DX12 implementation for both those games is more or less broken.

Do you still think they should use DX12 in their testing then?


Nor did I ever say that TPU's selection of DX12 games was optimal. I simply said that, given the games they chose, using DX11 is the correct thing to do.

again, not true, because:

Mea culpa, I'm afraid this is a case of me not paying attention. I genuinely thought that TPU tested Tomb Raider in DX11 and not DX12. As for them not using Vulkan in DOOM, there is a very good reason for this: DOOM isn't part of their benchmark suite, so that has nothing to do with my "claim".

You got it. And I think I am confusing their Doom use with another group. It is still strange that they ignore such a popular and important game.
As far as drivers goes, sure it would be nice if they used the new drivers, but then again they are still quite new and not even Anandtech are using them yet. No reason to crucify TPU over that.
Agreed. To TPU's credit, someone here posted the update that scheduling around releases and traveling was bad for him and that he is currently working on updating those benches. I'd be curious to see whether some members here would stop using TPU results exclusively if they stop showing what they hope they would show.
 
Last edited:

antihelten

Golden Member
Feb 2, 2012
1,764
274
126
Right, it doesn't. But if you are trying to determine any sort of mean in a large data set, then outliers skew your data.

This is why real statisticians toss them out when performing such analysis.

...this makes me wonder what type of data you are normally working with.

Of course outliers skew your data (otherwise they wouldn't really be outliers), but no, statisticians do not just throw them out before having determined their nature.

If your outliers are not due to measurement errors, then tough shit: you go back to the drawing board and figure out how to perform your experiment in such a way as to get around them. You most certainly do not throw them away and pretend they don't exist; that's just bad science.

And I work with both biological data (electrophysiology) and computational data (molecular modelling).

then perhaps you can clarify for me what you could possibly be inferring from this part of your post:

"Take TPU (seeing as this seems to be the one people are hating on), they currently have 2 games in their benchmark suite with DX12, but the DX12 implementation for both those games is more or less broken.

Do you still think they should use DX12 in their testing then?"

I meant exactly what it says: that TPU shouldn't use the DX12 API for the 2 games they test.

Nowhere in that quote do I say anything about them not using DX12 in other games (which are not currently part of their benchmark suite), and as such I obviously didn't make any claims about DX12 as a whole.

"Nor did I ever say that TPU's selection of DX12 games was optimal. I simply said that, given the games they chose, using DX11 is the correct thing to do."

again, not true, because:

"Mea culpa, I'm afraid this is a case of me not paying attention. I genuinely thought that TPU tested Tomb Raider in DX11 and not DX12. As for them not using Vulkan in DOOM, there is a very good reason for this: DOOM isn't part of their benchmark suite, so that has nothing to do with my "claim"."

How in the world is my claim that TPU should use DX11 for both games invalidated by the fact that they currently use DX12 for Tomb Raider, or the fact that they don't currently test DOOM?
 

Thala

Golden Member
Nov 12, 2014
1,355
653
136
Using the relative values, the geomean would put GPU B as 58% faster than GPU A, even though GPU B is completely unplayable in both games, whereas GPU A is perfectly playable in Game 1. The arithmetic mean correctly pegs GPU A as the better GPU (although at 901% faster, the exact value is quite arbitrary).

Sorry, this is rich. Now, because you cannot find an example where the geomean does not give the correct and consistent answer to the question of how much faster A is than B on average, you have introduced 2 unrelated metrics: "playable" and "better". That was not the question at all.
The correct answer is that GPU B is 58% faster than GPU A (assuming you took the mean correctly; I didn't bother to check). And the arithmetic mean again gives a wrong and inconsistent answer. The irony is that, due to the inconsistency of the arithmetic mean, you come to both conclusions at once: that A is faster than B and that B is faster than A (as Enigmoid correctly concluded further up in this very thread). Thanks for proving my point.
I close my case here.
 
Last edited:

antihelten

Golden Member
Feb 2, 2012
1,764
274
126
Sorry, this is rich. Now, because you cannot find an example where the geomean does not give the correct and consistent answer to the question of how much faster A is than B on average, you have introduced 2 unrelated metrics: "playable" and "better". That was not the question at all.
The correct answer is that GPU B is 58% faster than GPU A (assuming you took the mean correctly; I didn't bother to check).
I close my case here.

So basically, your argument boils down to this:

"The geomean is always correct for comparing GPUs as long as no one cares about piddly little things like playability."

Erm, yeah, ok, I guess I can let you have that one.
 

Brunnis

Senior member
Nov 15, 2004
506
71
91
Sorry, this is rich. Now, because you cannot find an example where the geomean does not give the correct and consistent answer to the question of how much faster A is than B on average, you have introduced 2 unrelated metrics: "playable" and "better". That was not the question at all.
The correct answer is that GPU B is 58% faster than GPU A (assuming you took the mean correctly; I didn't bother to check). And the arithmetic mean again gives a wrong and inconsistent answer. The irony is that, due to the inconsistency of the arithmetic mean, you come to both conclusions at once: that A is faster than B and that B is faster than A (as Enigmoid correctly concluded further up in this very thread). Thanks for proving my point.
I close my case here.
Just wanted to thank you for spotting this issue and arguing against using the arithmetic mean. The geometric mean is clearly the only method that makes sense here. I would be very surprised if W1zzard didn't switch to this method.
 
Feb 19, 2009
10,457
10
76
I've noticed some sites test Witcher 3 and find the 1060 beating the 480 by a big margin; I wonder what's going on, when side-by-side comparisons with video show them really close:

https://www.youtube.com/watch?v=uzyUVQHzDwk

https://www.youtube.com/watch?v=lBlCUV6TwiY

^ These aren't nobody reviewers either; the 2nd is Digital Foundry, testing in the toughest area in the city. And the first is from the expansion, again in the city.

Do these reviews which show a big gap run with HairWorks & HBAO+ on or something?
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
I'm surprised to see the Asus card compared to those leaked 480 Nitro graphs. Assuming those aren't fake, they showed the 480 performing a bit better than the 1060.

In that case, if it's true, it doesn't seem like the Asus 480 would be worth the price increase.

If I'm buying AMD, I'm getting XFX, Sapphire, or PowerColor, depending on the specific model.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
Because there is really no straightforward way of determining what is or isn't appropriate, so by using a mixture of settings you should hopefully even out any outliers.



As I just said above, this is actually a strength of meta analysis not a weakness.



And how do you determine what is bad data?

Take TPU (seeing as this seems to be the one people are hating on), they currently have 2 games in their benchmark suite with DX12, but the DX12 implementation for both those games is more or less broken.

Do you still think they should use DX12 in their testing then?



And as I said above TPU already tests in the API that best represents the cards, so what's the problem?

Why would mixing bad data with good data give you a better result than simply trusting the good data?
 

antihelten

Golden Member
Feb 2, 2012
1,764
274
126
Why would mixing bad data with good data give you a better result than simply trusting the good data?

Because, for the umpteenth time, there is no generally agreed-upon standard by which one can easily categorize good reviews versus bad reviews.

There may be a few cases here and there where we can easily conclude that a subpar method was used, like when Computerbase.de uses DX12 in Tomb Raider for AMD and DX11 for Nvidia, even though DX11 is the best API for both (at least until they start testing with the new patch). TPU using an open-case test setup is also problematic, albeit arguably to a lesser degree (since there may actually be consumers out there who also use open cases).

The problem is that every review out there probably suffers from issues like the above to a lesser or greater degree, and unless someone is willing to trawl through every single one of them, identify every single potential issue, and then develop a weighting system based on this, the safest thing is to simply go with something like 3DCenter's analysis.
 