3DVagabond
Lifer
- Aug 10, 2009
- 11,951
- 204
- 106
Because, for the umpteenth time, there is no generally agreed-upon standard by which one can easily categorize good reviews versus bad ones.
There may be a few cases here and there where we can easily conclude that a subpar method was used, like when Computerbase.de uses DX12 in Tomb Raider for AMD and DX11 for Nvidia, even though DX11 is the best API for both (at least until they start testing with the new patch). TPU using an open-case test setup is also problematic, albeit arguably to a lesser degree (since there may actually be consumers out there who also use open cases).
The problem is that every review out there probably suffers from issues like the above to a greater or lesser degree, and unless someone is willing to trawl through every single one, identify every potential issue, and then develop a weighting system based on that, the safest thing is to simply go with something like 3DCenter's analysis.
Which is why I'd argue that you can't simply lump them all together. You actually have to take the time and make the effort to look at them individually.
The way TPU does it has its value. Is it definitive? No. The way [H] does it has its value. I don't believe you can average their results, though, and end up with a better answer. Each tells you something different about the hardware.
And if you think you get frustrated having to repeat yourself, imagine how hard it is to read you reposting the same thing over and over, as if saying it enough will make it right.