Originally posted by: Nebor
Just a diehard fan, thanks. Besides, I like playing things the way they're meant to be played.... (TM)
Originally posted by: GTaudiophile
Originally posted by: UlricT
you guys saw the benches for the 51.75Dets on AmdMB? Seems like some pretty decent improvements in PS 2.0... somewhere around 35% increase
It needs to be like a 100% increase...
Also, let's not forget the CAT 3.8s...
Originally posted by: Duvie
Originally posted by: GTaudiophile
Originally posted by: UlricT
you guys saw the benches for the 51.75Dets on AmdMB? Seems like some pretty decent improvements in PS 2.0... somewhere around 35% increase
It needs to be like a 100% increase...
Also, let's not forget the CAT 3.8s...
At this point it is probably a foregone conclusion that ATI will own the benchmarks in this game regardless of how many rabbits Nvidia can pull out of its arse, I mean hat!!!! Those leads are too large to overcome with hardware issues in the way.... If the drivers give more fps, playability at higher res is ensured on systems that are not 2.8GHz-or-faster P4Cs or top-of-the-line Barton chips. I mean, out of all its supposed DX9 cards, Nvidia will have only one card with playable rates in HL2 at normal gaming resolutions. That is sad. If they can get a bit more fps without a major visual sacrifice and get the 5600 and 5200FX cards to run better, then they can salvage this, take their Doom3 winner, and pray and wait for the next NV line....
I know some whining Nvidiot will say those are all the highest settings... blah, blah, blah... I am not a gamer, but when I do play a game at my friend's house I like all the eye candy... the realism... what the software designers intended... can't we count on Nvidia to make cards that play games the way they were meant to be played???
Valve refers to this new NV3x code path as a "mixed mode" of operation as it is a mixture of full precision (32-bit) and partial precision (16-bit) floats as well as pixel shader 2.0 and 1.4 shader code. There's clearly a visual tradeoff made here, which we will get to shortly, but the tradeoff was necessary in order to improve performance.
Am I the only one to be really really excited by what Anand said?
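To make the "mixed mode" tradeoff Anand describes concrete, here is a minimal Python sketch (illustrative only, not from the article or the game's shaders) of what dropping from full-precision 32-bit floats to partial-precision 16-bit floats costs. It round-trips a value through each IEEE-754 format using the standard library:

```python
import struct

def round_to_fp16(x: float) -> float:
    """Round x through IEEE-754 half precision (16-bit) --
    the 'partial precision' float format used in mixed mode."""
    return struct.unpack('e', struct.pack('e', x))[0]

def round_to_fp32(x: float) -> float:
    """Round x through IEEE-754 single precision (32-bit) --
    the 'full precision' format."""
    return struct.unpack('f', struct.pack('f', x))[0]

# A value a shader might compute, e.g. a scaled texture coordinate
value = 1000.1

fp32_result = round_to_fp32(value)  # keeps roughly 7 decimal digits
fp16_result = round_to_fp16(value)  # keeps roughly 3: 1000.1 collapses to 1000.0

print(fp32_result, fp16_result)
```

Near 1000, half-precision values are spaced 0.5 apart, so the fractional part is simply lost; that kind of quantization is why the mixed-mode path carries a visible quality tradeoff.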
Next, Gabe listed the tradeoff in pixel shader instruction count for texture fetches. To sum this one up, the developers resorted to burning more texture (memory) bandwidth instead of putting a heavier load on computations in the functional units. Note that this approach is much more similar to the pre-DX9 method of game development, where we were mainly memory bandwidth bound instead of computationally bound. The fact that NVIDIA benefited from this sort of an optimization indicates that the NV3x series may not have as much raw computational power as the R3x0 GPUs (whether that means that it has fewer functional units or it is more picky about what it can execute and when is anyone's guess).
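The "burn texture bandwidth instead of computation" idea can be sketched outside of shader code. Here is a hypothetical Python illustration (the function and table here are my own example, not Valve's actual shaders): instead of evaluating an expensive math function per pixel, precompute it once into a table and do a cheap lookup, the way a shader would sample a small 1D texture:

```python
# ALU-heavy path: compute a specular falloff curve per pixel.
def specular_computed(n_dot_h: float, exponent: float = 32.0) -> float:
    return n_dot_h ** exponent

# Bandwidth-heavy path: bake the curve into a table once (a 1D texture,
# in shader terms), then replace the per-pixel power with a fetch.
TABLE_SIZE = 256
specular_table = [(i / (TABLE_SIZE - 1)) ** 32.0 for i in range(TABLE_SIZE)]

def specular_fetched(n_dot_h: float) -> float:
    # Nearest-neighbor "texture fetch"; real hardware would also filter.
    index = round(n_dot_h * (TABLE_SIZE - 1))
    return specular_table[index]

# The fetched value closely approximates the computed one; the cost has
# moved from the functional units to memory bandwidth -- the pre-DX9
# style of optimization Anand refers to.
error = abs(specular_computed(0.9) - specular_fetched(0.9))
print(error)
```

This is exactly the swap that favors a GPU with weaker arithmetic throughput relative to its memory subsystem.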
Originally posted by: Regs
Most of which explains why Valve had to make so many trade-offs: it was not because of the drivers, but because of the engineering method used to conserve die space.
And here, Nebor, is a passage for you that sums up why core clock does not mean everything:
Next, Gabe listed the tradeoff in pixel shader instruction count for texture fetches. To sum this one up, the developers resorted to burning more texture (memory) bandwidth instead of putting a heavier load on computations in the functional units. Note that this approach is much more similar to the pre-DX9 method of game development, where we were mainly memory bandwidth bound instead of computationally bound. The fact that NVIDIA benefited from this sort of an optimization indicates that the NV3x series may not have as much raw computational power as the R3x0 GPUs (whether that means that it has fewer functional units or it is more picky about what it can execute and when is anyone's guess).
Originally posted by: thatsright
Could anyone give me their opinion on how my Radeon 9700 NON-Pro would do?? My core clock runs at 276MHz, which is slower than the 9600 Pro's, but it has more pipelines, etc. I get about 15,920 in 3DMark2001SE and around 4,450 in 3DMark2003. Of course, none of the graphs in Anand's review show a 9700np card, but it seems like the game will rely more on chip architecture and raw GPU speed.
Anyone's opinions?
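A back-of-the-envelope comparison (illustrative only, assuming the standard configurations: 8 pixel pipelines on the 9700 non-Pro, 4 on the 9600 Pro): theoretical pixel fillrate is pipelines times core clock, so the slower-clocked 9700np still comes out ahead.

```python
def fillrate_mpixels(pipelines: int, core_mhz: int) -> int:
    """Theoretical pixel fillrate in megapixels/sec: pipes x clock."""
    return pipelines * core_mhz

# Assumed specs: 9700np with 8 pipes at the poster's 276MHz core,
# 9600 Pro with 4 pipes at 400MHz.
r9700np = fillrate_mpixels(8, 276)    # 2208 Mpix/s
r9600pro = fillrate_mpixels(4, 400)   # 1600 Mpix/s

print(r9700np, r9600pro)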
Originally posted by: rickn
Originally posted by: Skibby9
I dunno... I am excited that there is finally new software (hopefully not bloatware) that brings high-end systems to their knees... I mean come on-- I have been doing fine with my 1333MHz T-bird for so long now that there has been no real impetus for me to upgrade. Seeing a 2.8CGHz machine get smoked sure makes me look forward to AMD64/WinXP64 a whole lot more than otherwise.
Think about it this way-- if the 2.8 got 100FPS and the 3.06 got 103 and the 3.2 106, then would you really be looking forward to the next increment or two of cpu speed bumps?
In any case, those who chose ATI dx9 parts are in for a hell of a show if the ATI demos are any indication of what is to come. I really love the natural light demo!
Animusic was my favorite