Ok, demos.
They optimize the geometry mostly. I don't feel like going into specifics.
Sampling proximity (the same thing that gives RGSS an edge in eliminating aliasing artifacts also creates more noticeable FSAA artifacts, i.e. haloing and blurring). Perhaps I should have used "detail" instead of "accuracy"; it may be better expressed that way.
There is no reason this would be true.
The only thing that changes with different sampling positions is where your maximum color variations go, i.e. at which angle.
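To make the sampling-position point concrete, here is a minimal sketch of 4x sample placement inside one pixel, comparing an ordered grid with a rotated grid. The rotation angle and offsets are illustrative assumptions, not any vendor's actual pattern:

```python
import math

def ordered_grid_4x():
    # 2x2 ordered grid: samples at the centers of four sub-pixels.
    return [(0.25, 0.25), (0.75, 0.25), (0.25, 0.75), (0.75, 0.75)]

def rotated_grid_4x(angle_deg=26.6):
    # Rotate the ordered grid about the pixel center so that no two
    # samples share an x or y coordinate; this is what improves
    # coverage of near-horizontal and near-vertical edges (the RGSS
    # advantage), while the averaged result is what smears detail.
    a = math.radians(angle_deg)
    cx = cy = 0.5
    out = []
    for x, y in ordered_grid_4x():
        dx, dy = x - cx, y - cy
        out.append((cx + dx * math.cos(a) - dy * math.sin(a),
                    cy + dx * math.sin(a) + dy * math.cos(a)))
    return out

print(rotated_grid_4x())
```

Note that the ordered grid has only two distinct x positions across its four samples, while the rotated grid has four, which is the whole "which angle" difference in a nutshell.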
Near the end of the thread it seemed to be agreed upon that, on a mathematical basis at least, higher res could be proven to be superior (though I don't recall who was still left in the discussion; it had dwindled by that point).
No, I pretty clearly recall that the conclusion was that an anti-aliased image was more accurate, while high resolution provided more detail.
They are still CPU limited. Using MDK2 or TD6 as examples (covers both major APIs): while my FPS are quite a bit better with hardware T&L, I am completely CPU bound by game code. I have tested extensively, and the edge that my GF1 provides increases on a percentage basis when I overclock the CPU, though the numbers don't budge when I OC or UC the core. Both games, particularly TD6, are composed nearly entirely of static vertices, so that of course has an impact. With TD6, I'm seeing nearly a 400% increase using hardware T&L over software, and I have an Athlon 550. Upping it to 600MHz using the FSB (I don't have a GF), it increases to over 400%. The GF T&L unit is still scaling.
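For anyone unsure how those percentages are being computed, here is the arithmetic; the frame rates below are made-up placeholders, not the poster's actual numbers:

```python
def pct_increase(hw_fps, sw_fps):
    # Percentage increase of hardware T&L over software T&L.
    return (hw_fps - sw_fps) / sw_fps * 100.0

# e.g. software T&L at 20 fps vs hardware T&L at 95 fps:
print(pct_increase(95, 20))  # 375.0 -> "nearly a 400% increase"
```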
That is interesting, though I'm a bit perplexed. Not that I doubt you, but recall what a big deal was made before because in TD6 a CPU was faster than T&L. That aside, I do find your results interesting. There must be a very small fill-rate requirement in that game for that to be happening. One thing that does bother me about the whole thing, though, is that scaling of T&L with the CPU. Why is the T&L engine so dependent on the CPU? The idea of T&L is to offload the CPU, and there is really no technical reason that scaling should be happening. The situation is really fishy (not that I doubt you, but the app).
For MDK2, if you run hardware mode on a GF compared to a V5 at low res (just to leave it strictly T&L and eliminate fill rate), the GF/GF2 is quite a bit faster than the V5, on the order of ~250% using Rev's numbers with older drivers. That's with a ~850MHz CPU, although the comparison unfortunately relies on the OpenGL drivers of each board, both of which have improved significantly in the four months since those numbers were run.
Yeah, it is the old-drivers thing. 3dfx (trying to keep myself separate here) made a lot of optimizations to it. Also, you could try WickedGL; you might get further performance beyond the latest V5 drivers.
Ben, you present one of the big issues with T&L and future games, which is something I've pointed at, though not enough. When it comes to future games, there are going to be a lot more texture passes and more depth complexity. Because of this you'll hit an early fill-rate limit, and so T&L won't get you any benefit there. Again, though, I'd reference our old articles at B3D on the subject, as they go deep into this.
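The fill-rate-limit argument can be sketched with back-of-the-envelope numbers; the resolution, overdraw, pass count, and fill rate below are hypothetical, not measurements from any specific card or game:

```python
def max_fps(width, height, depth_complexity, passes, fillrate_mpixels):
    # Pixels that must be drawn each frame: screen size times
    # overdraw (depth complexity) times texture passes per pixel.
    pixels_per_frame = width * height * depth_complexity * passes
    # Frame rate ceiling imposed purely by fill rate.
    return fillrate_mpixels * 1_000_000 / pixels_per_frame

# A 1024x768 scene with overdraw of 3 and 2 texture passes on a
# 333 Mpixel/s part:
print(max_fps(1024, 768, 3, 2, 333))
```

At roughly 70 fps in this example, adding passes or overdraw drops the ceiling fast, so a faster T&L unit buys nothing once the card is pinned against that wall.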