Originally posted by: BFG10K
There is no mojo going on. The GeForce 8800 GT and 9600 GT use the same stream processors.
The following are synthetic shader and fillrate tests. Note the sizable difference between the 8800 GT and 9600 GT.
What should be noted is that the 8800 GT and 9600 GT have identical memory bandwidth. Real-world games do not simply run a Perlin noise function or fill polygons. They fetch a lot of textures, render targets and so on, and these get swapped in and out of memory. With AA, bandwidth consumption goes up even more.
In general, games consume more bandwidth than shader power. This is why the GeForce 9600 GT performs 'almost' like an 8800 GT.
nVidia's answer is all well and good, but it ignores the following:
[*]The 9600 GT has higher core and memory clocks than the 8800 GT, and it has more than half the 8800 GT's SPs to begin with (see the rough numbers after this list).
[*]The 9600 GT tests were run on newer drivers than the rest of the cards.
[*]People overclocking the 8800 GT's memory are seeing next to no performance gain compared to overclocking the core/shader.
[*]Despite all of this, in shader-intensive games the 8800 GT is 30% faster than the 9600 GT in some situations.
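For reference, here's a back-of-the-envelope comparison of the two cards' theoretical numbers. This is a rough sketch using the commonly cited reference specs; vendor boards often ship factory-overclocked, which may account for the clock discrepancy above.

[code]
# Rough theoretical spec comparison for the reference boards.
# Clock figures are the commonly cited reference specs (assumption;
# retail boards often ship overclocked).

def bandwidth_gbs(mem_clock_mhz, bus_width_bits):
    """Peak memory bandwidth in GB/s for DDR-type memory (2 transfers/clock)."""
    return mem_clock_mhz * 2 * (bus_width_bits / 8) / 1000

def shader_gflops(sp_count, shader_clock_mhz):
    """Peak shader throughput in GFLOPS, counting MADD+MUL as 3 flops/SP/clock."""
    return sp_count * shader_clock_mhz * 3 / 1000

cards = {
    # name: (SPs, shader clock MHz, memory clock MHz, bus width bits)
    "8800 GT": (112, 1500, 900, 256),
    "9600 GT": (64, 1625, 900, 256),
}

for name, (sps, sclk, mclk, bus) in cards.items():
    print(f"{name}: {bandwidth_gbs(mclk, bus):.1f} GB/s memory bandwidth, "
          f"{shader_gflops(sps, sclk):.0f} GFLOPS shader")
[/code]

On reference clocks both cards land at 57.6 GB/s, while the 8800 GT has roughly 60% more theoretical shader throughput (504 vs. 312 GFLOPS), which is exactly the gap the two sides of this argument are fighting over.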
Expreview took the new 174 WHQL drivers that the 9600 GT runs on and put them on an 8800 GT (G92). They boost the 8800 GT's performance as well. The "mystery magical filters" are driver-side and applicable to all G92s now, possibly via a modded driver INF...
Now we're getting somewhere, as this is quite interesting. I'd like to see an 8800 GT compared to the 9600 GT using these drivers (assuming they function correctly on an 8800 GT); that will give us the true performance gap to work with.
As a hypothetical example, those drivers could enable ROP compression that was thus far inactive, hence the 8800 GT isn't benefiting like the 9600 GT is, despite having the same core.
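If that hypothesis is right, a toy calculation shows why a compression toggle could plausibly move real-world numbers. The 2:1 effective compression ratio and the overdraw figure below are assumptions for illustration only, not measured figures for G92/G94:

[code]
# Hypothetical illustration of why ROP colour compression matters with AA.
# The 2:1 effective ratio and overdraw of 3 are assumptions, not measured
# figures for these chips.

WIDTH, HEIGHT = 1920, 1200
BYTES_PER_PIXEL = 4          # 32-bit colour
AA_SAMPLES = 4               # 4x multisampling stores 4 samples per pixel
OVERDRAW = 3                 # assumed average writes per screen pixel
FPS = 60

raw_bytes = WIDTH * HEIGHT * BYTES_PER_PIXEL * AA_SAMPLES * OVERDRAW * FPS
compressed_bytes = raw_bytes / 2   # assumed 2:1 effective compression

print(f"Uncompressed colour traffic: {raw_bytes / 1e9:.1f} GB/s")
print(f"With 2:1 compression:        {compressed_bytes / 1e9:.1f} GB/s")
[/code]

Even with rough numbers, halving several GB/s of colour-buffer traffic frees a meaningful slice of a shared 57.6 GB/s budget once AA multiplies the framebuffer writes, which would fit the pattern of the 9600 GT punching above its shader weight in bandwidth-bound scenes.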