Originally posted by: Extelleron
Originally posted by: Dillybob
It takes only about as much power as the 8800 GTX, is comparable to two 9800 GTXs (or one 9800 GX2), but it's a single GPU, has PhysX (ported through CUDA), and completely slams current cards. And c'mon... the 8800 GTX launched at close to the same price. All the ATI fanboys need to stop complaining. I've had nothing but problems with ATI's cards; Nvidia's hardware is much more stable, and it doesn't need experimental new memory or 800 stream processors crammed in to get there (which is what ATI has done in an attempt to catch up).
EDIT: Sorry, I had been reading an older page when I wrote this.
I have owned both nVidia and ATI cards, and I have not had any significant problems with either. The only real problem I can remember was with my 8800GTS 640MB: the Vista drivers were very immature for several months in early 2007. The second most significant issue I've had (though it is nothing big to complain about) is actually with the GTX 280. I had to use RivaTuner to force the card to stay at Perf 3D clocks all the time, because it would not ramp up to the right clocks during 3D. The card is also odd in terms of overclocking: the core and shader domains seem to be linked more tightly than on G80/G92, and I can't get my core to 720MHz without also raising the shader domain to 1512MHz.
I'm not going to get into the SP argument. nVidia's SPs and AMD's SPs are two different things, and they cannot be compared one-to-one: AMD's SPs are less efficient per unit but much more efficient per unit of die area. The raw number of units matters to nobody; the die size is what matters to the manufacturer. So by any measure that counts, RV770 is actually the more efficient design.
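To see why counting SPs is meaningless across architectures, here is a rough theoretical-peak sketch. The clocks and per-SP throughput figures are the commonly cited launch specs (my assumptions, not from the post above): GTX 280 has 240 scalar SPs at 1296MHz with MAD+MUL dual-issue (3 flops/clock), while the HD 4870's RV770 has 800 superscalar SPs at 750MHz doing a MAD (2 flops/clock).

```python
# Illustrative peak-FLOPS comparison; specs are the commonly cited
# launch figures, not measurements.

def peak_gflops(num_sps, shader_mhz, flops_per_sp_per_clock):
    """Theoretical single-precision peak in GFLOPS."""
    return num_sps * shader_mhz * flops_per_sp_per_clock / 1000.0

# GTX 280: 240 scalar SPs at 1296 MHz, MAD + MUL dual-issue.
gtx280 = peak_gflops(240, 1296, 3)   # ~933 GFLOPS

# HD 4870 (RV770): 800 superscalar SPs at 750 MHz, MAD per clock.
hd4870 = peak_gflops(800, 750, 2)    # ~1200 GFLOPS

print(f"GTX 280: {gtx280:.0f} GFLOPS, HD 4870: {hd4870:.0f} GFLOPS")
```

Real-world throughput depends heavily on how well each architecture keeps its units fed, which is exactly why the raw SP count tells you nothing on its own.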
GDDR5 is not "experimental", it is a new memory standard. nVidia pairs GDDR3 with a 512-bit bus; AMD pairs GDDR5 with a 256-bit bus. Both are valid approaches: nVidia's significantly increases the cost of the PCB, while AMD's increases the cost of the memory (by how much, we do not know). For some reason nVidia seems to have fallen in love with GDDR3. Skipping GDDR4 made sense, because GDDR4 was a dud: it increased latency and did not raise frequency by much. But GDDR5 is a serious step forward, and IMO nVidia would have been better served by 256-bit + GDDR5 on GT200 than by the current setup.
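The bandwidth math shows why both approaches are valid: peak bandwidth is just bus width (in bytes) times the effective transfer rate. The transfer rates below are the commonly cited launch figures (my assumptions, not from the post): GTX 280's GDDR3 at 2214 MT/s effective, HD 4870's GDDR5 at 3600 MT/s effective.

```python
# Peak memory bandwidth sketch for the two bus/memory combinations.
# Transfer rates are commonly cited launch specs (assumptions).

def bandwidth_gbs(bus_width_bits, transfer_rate_mtps):
    """Peak bandwidth in GB/s: (bytes per transfer) * (mega-transfers/s)."""
    return (bus_width_bits / 8) * transfer_rate_mtps / 1000.0

gtx280 = bandwidth_gbs(512, 2214)  # 512-bit bus + GDDR3, ~141.7 GB/s
hd4870 = bandwidth_gbs(256, 3600)  # 256-bit bus + GDDR5, ~115.2 GB/s

print(f"GTX 280: {gtx280:.1f} GB/s, HD 4870: {hd4870:.1f} GB/s")
```

A 256-bit board with GT200's memory clocked like the HD 4870's GDDR5 would have landed in the same ballpark as the 512-bit GDDR3 setup, on a much simpler and cheaper PCB, which is the trade-off the post is arguing about.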