ElFenix-
"programmable t&l that renders previous t&l obsolete and almost worthless"
Polygons still have to be transformed and lit; that isn't going away with DX8, the GeForce3, magic, or any combination of the three. Hardware T&L on the GF and GF2 is proving its worth right now and will still be useful in games like Doom3. In fact, if you look at those benchmarks carefully you will see that the "flexible" GF3 T&L has trouble keeping up with the horrible "static" GF2U's T&L (honestly). Flexibility comes at a price, even if it is worth it overall.
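For anyone fuzzy on what "transformed and lit" actually means, here is a rough sketch of the per-vertex math a T&L unit grinds through whether it is fixed-function or programmable. Made-up names, not any real API, and real units do far more (full lighting equations, clipping, etc.):

```cpp
// Rough sketch of the per-vertex work a T&L unit does, fixed-function or
// programmable. Illustrative names only, not any real API.
struct Vec3 { float x, y, z; };
struct Vec4 { float x, y, z, w; };
struct Mat4 { float m[4][4]; };

// Transform: object space -> clip space, one 4x4 matrix multiply per vertex.
Vec4 transform(const Mat4& mvp, const Vec3& v) {
    Vec4 o;
    o.x = mvp.m[0][0]*v.x + mvp.m[0][1]*v.y + mvp.m[0][2]*v.z + mvp.m[0][3];
    o.y = mvp.m[1][0]*v.x + mvp.m[1][1]*v.y + mvp.m[1][2]*v.z + mvp.m[1][3];
    o.z = mvp.m[2][0]*v.x + mvp.m[2][1]*v.y + mvp.m[2][2]*v.z + mvp.m[2][3];
    o.w = mvp.m[3][0]*v.x + mvp.m[3][1]*v.y + mvp.m[3][2]*v.z + mvp.m[3][3];
    return o;
}

// Light: the diffuse N-dot-L term at the heart of fixed-function lighting.
float diffuse(const Vec3& n, const Vec3& l) {
    float d = n.x*l.x + n.y*l.y + n.z*l.z;
    return d > 0.0f ? d : 0.0f;
}
```

That work has to happen per vertex no matter which chip runs it; programmability just changes who writes the math.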
Soccerman-
"so, that means 3D speed/power doesn't depend on nVidia, after all, the companies can set their clocks to whatever the hell they want.."
That would be a good point, except that nVidia DOES let vendors use different clock rates. Check the nVidia site and you'll see they list the GF as having 480MPixel performance even though the Herc version hit 520MPixels. The TNT2 Ultra series boards offered a rather large gap between the fastest available clocks and the default nVidia specs. Apple is currently shipping GF2MXs clocked above their PC counterparts, and Hercules shipped their GF2s overclocked versus the default nVidia settings (and then had to roll the speed back, but not because of nVidia).
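For the curious, those fillrate numbers fall straight out of clock times pipelines. Assuming the GF's four pixel pipes, 480 vs 520 MPixels implies roughly 120MHz default versus 130MHz on the Herc board (the clocks here are inferred from those figures, not quoted specs):

```cpp
// Fillrate = core clock x pixel pipelines, which is why a vendor overclock
// shows up directly against nVidia's own spec sheet. Clocks are inferred
// from the 480/520 MPixel figures, assuming four pipes.
#include <cstdio>

int main() {
    const int pipes     = 4;
    const int nvidiaMHz = 120;    // 4 * 120 = 480 MPixel/s (nVidia spec)
    const int hercMHz   = 130;    // 4 * 130 = 520 MPixel/s (Herc board)
    printf("nVidia default: %d MPixel/s\n", pipes * nvidiaMHz);
    printf("Hercules:       %d MPixel/s\n", pipes * hercMHz);
    return 0;
}
```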
Orbius-
"I'd argue that without T&L those games were pretty good anyway"
I'd argue that Half-Life was still a good game running in software mode.
"but 2 games a feature does not make"
There are many more, I'm just listing two GOTY winners. Ignoring that, how many people here think "Voodoo1" and in the same thought have Tomb Raider and/or GLQuake come to mind? I know I sure as he!l do.
"Also with the amount of technology 'borrowed' from Ati, I think Nvidia should stamp an 'Ati' logo on one of their chips."
Name a single feature. You might want to take a look at the patents on those "new" features. The actual portions that make "Hyper-Z" technology work were mostly patented right around the launch of the Voodoo1, and it wasn't by ATi (or 3dfx for that matter).
KarlHungus-
"I honestly doubt the drivers are causing the "low" numbers."
They are definitely having an impact. Look at those charts you posted the links to: only a 4% drop moving from 16bit to 32bit running 1600x1200 Quaver benches? The GF2 Ultra is nowhere near its peak theoretical 16bit or 32bit performance at that setting, so the raw power edge that it has shouldn't be a factor.
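Run the napkin math and you can see why a 4% gap is suspicious. Assuming the usual Z read + Z write + color write per pixel (no texture traffic or overdraw counted, so illustrative only), the Ultra's 7.36GB/s can roughly feed its 1000MPixel peak at 16bit but comes nowhere close at 32bit, so a true fill/bandwidth limit should show a far bigger 16-to-32bit drop than 4%:

```cpp
// Napkin math: bandwidth the GF2 Ultra needs at peak fill. Assumes one Z
// read, one Z write, and one color write per pixel (depth matching color
// depth), no texture traffic, no overdraw -- illustrative only.
#include <cstdio>

int main() {
    const double bandwidthGBs = 7.36;   // 230MHz DDR x 128bit bus
    const double fillMPix     = 1000.0; // 250MHz x 4 pipelines
    const int depths[2] = {2, 4};       // 16bit and 32bit, bytes per pixel

    for (int bpp : depths) {
        double needGBs = fillMPix * 1e6 * (3.0 * bpp) / 1e9;
        printf("%2d-bit: need ~%.1f GB/s for peak fill (card has %.2f)\n",
               bpp * 8, needGBs, bandwidthGBs);
    }
    return 0;  // ~6 GB/s at 16bit vs ~12 GB/s at 32bit
}
```

If raw bandwidth were the wall at 16x12, 32bit would collapse relative to 16bit. It doesn't, which points at the drivers.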
Soccerman, Part 2-
"the question is, what will ATi bring to the table with their next card?"
I hope they can do it, but I have very little faith. ATi in the high end, despite what many seem to believe, has not been a threat to nVidia for more than a total of three or four months in the last several years (and part of that was the Rage128 briefly edging out the TNT). The Radeon, for all it does well, was simply manhandled by the GF2 Pro and Ultra, which have been widely available for some time now. The Pro is also priced in the sub-$300 range, which has been considered the "mass market high end" price for video cards. Perhaps ATi will shock the doubters (myself included), but they have yet to show a long-term commitment to competing in the high end 3D gaming hardware market.
"we only NOW see the benefits to their architecture"
What benefits over the GeForce series of boards? I have been of the mindset that the Radeon's features best the GF's for gaming purposes, but it seems that developers think differently. Carmack has already gone on record stating that Doom3 will look better on a GF than on a Radeon; what benefits of the Radeon are we supposed to see?
"so I'm wondering.. will they pile on T&L performance, or work a bit on increasing efficiency of the rasterizer (tile based rendering?), or what?"
They need to work on both areas badly. Look at the Radeon in bandwidth limited situations: the norm is for it to lose out to the GTS despite having a bandwidth edge and all of the "Hyper-Z" technology. Their T&L performance is horrid, in some situations only one third that of a GeForce1 SDR, and that area could get worse moving to a more flexible unit (check out the GF2U compared to the GF3 in raw poly throughput).
"I wonder why Kyro has up their sleeves btw.. I hope they don't go bitboys and try to add a whole bunch of these programmeable pixel shaders etc.. I want that thing ASAP! add them later!"
IT (Kyro is just one board) needs to have all of the programmable pixel shaders to match DX8 specs. With the X-Box on the horizon, having a fully DX8 compliant part isn't a luxury, it is a requirement. I have much higher hopes that they will offer some competition for nVidia than I do for ATi, although the ArtX acquisition could put ATi in a competitive situation.
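That compliance is also something a developer can trivially test for, which is why "mostly DX8" won't cut it. A sketch of the standard Direct3D 8 caps check (treating ps.1.1 as the baseline is my assumption):

```cpp
// Gate a DX8 rendering path on real pixel shader support, using the
// standard Direct3D 8 caps query from the DX8 SDK headers. Requiring
// ps.1.1 as the baseline is an assumption on my part.
#include <d3d8.h>

bool HasDX8PixelShaders(IDirect3DDevice8* device) {
    D3DCAPS8 caps;
    if (FAILED(device->GetDeviceCaps(&caps)))
        return false;
    return caps.PixelShaderVersion >= D3DPS_VERSION(1, 1);
}
```

A part that fails that one check gets the DX7 code path, full stop.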
Zero-
"My radeon didn't look half bad."
Were you reading the same review? In actual game benches the GF3 was absolutely obliterating the Radeon's scores (though they kindly didn't show them head to head). Try 1024x768 32bit color Quake3 with 2X FSAA enabled: the GF3 is roughly 500% faster than the Radeon 64MB. Even ignoring FSAA and sticking with just plain old high resolution (16x12 32bit), the GF3 is in the 200% to 300% faster range. I can't wait till we see numbers with release drivers, in games, on boards that aren't an Ultra (a $500 MSRP piece itself, folks) to compare.
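To be clear on how I'm using the percentages: "500% faster" means six times the frame rate, not five. Hypothetical numbers purely to show the math, not the review's actual scores:

```cpp
// "N% faster" = (fast / slow - 1) * 100. Frame rates below are made up
// to illustrate the arithmetic, not the review's measured scores.
#include <cstdio>

int main() {
    double radeonFps = 20.0, gf3Fps = 120.0;  // hypothetical 2X FSAA scores
    printf("GF3 is %.0f%% faster\n", (gf3Fps / radeonFps - 1.0) * 100.0);
    // prints "GF3 is 500% faster" -- i.e., six times the frame rate
    return 0;
}
```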