Alf-
It seems there are a few misconceptions as to what exactly I was saying.
"It depends on the system in question, which is part of my original point. Also, the performance difference between PCI66, AGP 1x, 2x and 4x is not even a factor here."
In geometry throughput, it absolutely is key. Run a V3 against a V5 on an AGP 2X system and the V3 will push roughly double the polygon throughput the V5 can. The bandwidth is limiting the amount of vertex data that can make it to the card.
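To put rough numbers on that, here is a back-of-the-envelope sketch of the vertex throughput ceiling each bus imposes. The transfer rates and the 32-byte vertex size are illustrative assumptions, not measured figures for any particular card.

```python
# Upper bound on vertices/sec the bus alone allows. Assumes a 32-byte
# vertex (position + normal + color + one texcoord) and ignores command
# overhead and texture traffic, so real throughput is lower.
VERTEX_BYTES = 32

def max_vertices_per_sec(bus_mb_per_sec):
    # bus_mb_per_sec: nominal peak bandwidth in MB/s
    return bus_mb_per_sec * 1_000_000 // VERTEX_BYTES

# Nominal peak rates (assumed): PCI66/AGP 1x ~266 MB/s, AGP 2x ~533, AGP 4x ~1066.
for bus, mb in [("PCI66", 266), ("AGP 1x", 266), ("AGP 2x", 533), ("AGP 4x", 1066)]:
    print(f"{bus}: ~{max_vertices_per_sec(mb) / 1e6:.1f} M vertices/s")
```

The point being that doubling the bus rate roughly doubles the ceiling on vertex data reaching the card, which is exactly where a bandwidth-starved board falls behind.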
"I?ve benchmarked it myself. There is a lead in 16 bit; that I will concede. But if you up the color depth to 32-bit (which nVidia has spent well over a year evangelizing as the only valid color depth) the lead drops dramatically. I?d hardly call that crushing the V5 by any stretch of the imagination."
Leave specular lighting enabled (hardware T&L checked) for both boards, which was my original point. You can reduce image quality and run a lot quicker with the V5, but still not as fast as a hardware T&L board, and you are still dealing with inferior visual quality. Even using the old Det2s, which cached vertex data poorly, the GF2 was still significantly faster than the V5 when utilizing high quality lighting. Rev's numbers-
1024x768 32bit color-
V5 - 38.09
GF2 - 82.04
Lower the resolution down to 800x600 still using 32bit color-
V5 - 42.55
GF2 - 113.38
I would say those qualify as noticeable differences in 32bit. Link, though you have to scroll through a few pages as Rev uses an odd setup-
http://www.voodooextreme.com/reverend/Reviews/nVidia_GF2GTS/7.html
"Also, if you?ll check most of what has been said about FSAA, you?ll see that FSAA is overwhelmingly favored as the most dramatic image quality enhancement. I?ve got a bout 20 links to sites saying just that, giving the Voodoo5 the undisputed image quality crown. Since this is a fairly well-established stance, do I really need to post them?"
I've been using FSAA for years, 4x4 at resolutions up to 8000x8000, using software-based render engines for visualization packages. FSAA is a hack, period. Enabling FSAA reduces image detail; the best solution would be to push a native resolution that exceeds the level at which the human eye can detect the artifacts. 8000x8000 on an average display would do it. We can't handle that with current monitors, but when we can, FSAA will be of very little use.
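To show why supersampling trades detail for smoothness, here is a minimal sketch of 4x4 ordered-grid supersampling (the exact sample pattern is an assumption; hardware varies): render at 4x the target resolution in each axis, then box-filter each 4x4 block down to one pixel.

```python
# Box-filter downsample of a 4x4-supersampled grayscale image.
def downsample_4x4(pixels, width, height):
    # pixels: row-major list of values at the supersampled resolution
    out_w, out_h = width // 4, height // 4
    out = []
    for oy in range(out_h):
        for ox in range(out_w):
            total = 0
            for sy in range(4):
                for sx in range(4):
                    total += pixels[(oy * 4 + sy) * width + (ox * 4 + sx)]
            out.append(total / 16.0)  # average of 16 samples
    return out

# One bright sample among 15 dark ones is averaged nearly away: edges get
# smoothed, but genuine sub-pixel detail is erased by the same filter.
block = [255] + [0] * 15   # a single 4x4 input block
print(downsample_4x4(block, 4, 4))  # -> [15.9375]
```

The averaging that hides jaggies is the same averaging that blurs fine detail, which is the sense in which FSAA is a workaround rather than a fix.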
"3DMark 2000 defaults to 16-bit color. Test in 32-bit mode using hardware T&L then select SSE and check out the difference for yourself."
SSE checked enables low quality diffuse lighting; hardware T&L uses superior quality specular lighting. Because of the simplicity of the bench you can't tell the difference, but games like MDK2 display it quite nicely. How about legit comparisons like ViewPerf, GLPerf and Indy3D, where the CPU isn't allowed to cheat its way to victory?
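The cost gap between the two lighting models is easy to see in the per-vertex math. This is a hedged sketch of standard fixed-function-style diffuse versus Blinn-Phong specular terms, not any vendor's actual pipeline:

```python
import math

def normalize(v):
    m = math.sqrt(sum(c * c for c in v))
    return tuple(c / m for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def diffuse(n, l):
    # Lambert term: a single dot product per vertex. Cheap on any CPU.
    return max(dot(n, l), 0.0)

def specular(n, l, view, shininess=32.0):
    # Blinn-Phong: an extra vector add, a normalize (sqrt + divides),
    # and a pow() per vertex on top of the dot product. This is why
    # specular lighting hits a software path far harder than diffuse.
    h = normalize(tuple(a + b for a, b in zip(l, view)))
    return max(dot(n, h), 0.0) ** shininess
```

Benchmarking only the diffuse path on the CPU against the full specular path in hardware is comparing two different workloads, which is the objection to the SSE-checked numbers.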
"Run Test Drive 6 in 32-bit color."
Software: 32 FPS (Athlon 550). Hardware: 92 FPS (GeForce1).
"That's not the only thing. People have seen T&L in action and the crown is still going to the Voodoo5 for image quality. That's not 3dfx PR there, that's direct feedback from the community."
You don't think that is PR? Anyone here with both a V5 and a GF/Radeon board, run Evolva. Give the V5 every advantage it can muster: 4x FSAA, 32bit color, LOD bias tweaks. Then set the GF/Radeon for bump mapped 16bit without FSAA and try to say the V5 doesn't look horrible by comparison. The titles you want to push are those utilizing T&L for increased FPS instead of increased polygon complexity, as Evolva does; that is where the V5, and any other non-T&L board, starts looking really bad.
"But even forgetting that, the lack of title support for T&L is not an insignificant factor by any means. The 100 titles that nVidia has cited have yet to make it onto the scene nearly a year later."
Depends on your definition. Any game that utilizes native OpenGL support for either lighting or transforms benefits in some way from hardware T&L. This isn't just nVidia; this is also ATi and soon Matrox. The "support" has been overhyped, but now the big two are both on board.
"T-Buffer, on the other hand, delivers FSAA, is here, now, and with our implementation you should be able to apply this new technology to just about every 3D game you own as well as just about any 3D game you might buy in the future. There is no lack of title support there. That is far from a feature without merit (and far from the only feture we have going for us with the V5)."
FSAA is the only feature of T-Buffer that is going to be widely used unless a small miracle happens. And for that, the GF/GF2 and Radeon also offer FSAA; the edge 3dfx has there is noticeable but still marginal.
"Check out Tom?s Hardware for information about the Detonator 3 drivers."
What was I supposed to be looking at? Some people are supposedly having problems; how is that different than any driver release? I can assure you that 3dfx driver revisions create their own share of headaches, as evidenced on this board. Perhaps not as many as nVidia's, but you guys don't release drivers every other week either.
"The GF2?s T&L is NOT balanced, that?s a documented fact and has been since the original GeForce. People have commented on the fact that the Lighting engine limits the Transform (for example, some of the Radeon reviews)."
Then no CPU I have ever seen is balanced. Utilizing high quality specular lighting brings CPUs to their knees very quickly; I deal with it every day. Unlit geometry is rather simplistic by comparison. I would say again to try one of the industry standard benches that utilize identical lighting techniques for both software and hardware and see what the results are: on average the GF1 is an order of magnitude faster than a 600 MHz Athlon, roughly ten times the throughput.
"Q3 only uses the Transform. Depending on your system, T&L can make a difference in the very few titles that have T&L options, to that point I will agree. However, even with the partial T&L titles on the market now, we have a ways to go before we see any appreciable utilization of this feature and when that point comes, a more balanced solution will be required."
From what we have heard over and over on this aspect, it sounds like you guys are planning on having a RISC-based CPU (or perhaps a modular unit combined with the rasterizer?) on board the Rampage to handle T&L instead of true dedicated hardware. That would fall in line with the PR spin, but from a performance standpoint it leaves much to be desired. Improving the lighting pipelines is one thing; compromising polygon throughput to enable offloading of IK onto the graphics chip, along with other traditionally CPU-side calculations, is something else entirely. Check out the ten-CPU MIPS IR2 stations from SGI having a hard time keeping up with the unbalanced GeForce in certain situations. Flexibility is often not the best solution. I have a dedicated MPEG2 decoder even though my system can handle it with ease. Dedicated hardware is almost always superior for specific tasks.