Looking through various tech articles on the net, I've noticed there is massive variation in the quoted polygon throughput figures for hardware accelerators and the modern consoles.
Firstly, there is usually no mention of the texturing or lighting used to derive the figure, which surely affects the outcome greatly. Does 83 million triangles per second mean a single texture and one light? Does it account for texture compression, or for several light sources?
Secondly (this applies only to the PC accelerators), is the quoted figure a theoretical maximum for the card alone, or was it actually achieved in a PC? If so, what was the spec? Can modern PCs actually supply these cards with enough data to reach that maximum? This leads me to my next question.
Are PCs becoming more and more of a bottleneck to the latest accelerator cards? Is the AGP port being pushed to its maximum? If not now, soon? (My own rough attempt at the arithmetic is below.)
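For what it's worth, here is my back-of-envelope sketch of the bus limit. All the numbers in it are my own assumptions (the AGP 4x peak figure and the vertex size), not quoted specs, but the point stands: if every vertex has to cross the bus rather than sit in local video memory, bandwidth alone caps the triangle rate well below the headline figures.

/* Back-of-envelope: triangles/sec if the AGP bus is the limit.
   All numbers are assumptions for illustration, not quoted specs. */
#include <stdio.h>

int main(void)
{
    double agp_bandwidth   = 1066e6; /* assumed: AGP 4x peak, ~1066 MB/s */
    double bytes_per_vertex = 32.0;  /* assumed: position + normal + colour + uv */
    double verts_per_tri    = 3.0;   /* worst case: no vertex sharing (no strips) */

    double tris_per_sec = agp_bandwidth / (bytes_per_vertex * verts_per_tri);
    printf("Bus-limited rate: ~%.0f million triangles/sec\n", tris_per_sec / 1e6);
    return 0;
}

Under those assumptions it works out to roughly 11 million triangles per second, which suggests the 80-million-plus figures must assume strips, smaller vertices, or geometry already resident on the card.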
Does each successive nVidia card improve on the last by an equal amount, or by more? If more, is that why the cards get more expensive with each release?
Have there been any tests to investigate the above? Is there such a thing as an accurate polygon throughput figure?
Can software be written solely to test a machine's total throughput? Has it been done? (See my crude attempt at a sketch below.)
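To half-answer my own question, something like the following seems possible: a plain OpenGL + GLUT loop that pushes triangles in immediate mode and counts them. This is only a sketch under my own assumptions: the triangles are untextured and unlit, so it measures a best case, and immediate mode deliberately forces every vertex across the bus each frame. The batch size, window size, and 5-second run are arbitrary choices of mine.

/* Crude triangle-throughput test: immediate-mode OpenGL + GLUT.
   Untextured, unlit, tiny triangles -- a best case, not a game workload. */
#include <stdio.h>
#include <stdlib.h>
#include <GL/glut.h>

#define TRIS_PER_FRAME 100000  /* arbitrary batch size */

static long g_tris = 0;
static int  g_start_ms = 0;

static void display(void)
{
    int i;
    glClear(GL_COLOR_BUFFER_BIT);
    glBegin(GL_TRIANGLES);  /* immediate mode: every vertex crosses the bus */
    for (i = 0; i < TRIS_PER_FRAME; i++) {
        /* tiny pseudo-random triangles scattered in clip space */
        float x = (float)(rand() % 2000 - 1000) / 1000.0f;
        float y = (float)(rand() % 2000 - 1000) / 1000.0f;
        glVertex2f(x,         y);
        glVertex2f(x + 0.01f, y);
        glVertex2f(x,         y + 0.01f);
    }
    glEnd();
    glutSwapBuffers();

    g_tris += TRIS_PER_FRAME;
    {
        int elapsed = glutGet(GLUT_ELAPSED_TIME) - g_start_ms;
        if (elapsed > 5000) {  /* report after ~5 seconds */
            printf("~%.1f million triangles/sec\n",
                   (double)g_tris / elapsed / 1000.0);
            exit(0);
        }
    }
    glutPostRedisplay();
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
    glutInitWindowSize(640, 480);
    glutCreateWindow("tri throughput");
    g_start_ms = glutGet(GLUT_ELAPSED_TIME);
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}

Of course the per-vertex rand() calls mean the CPU is doing work too, which is partly the point: a test like this measures the whole machine feeding the card, not the card alone.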
Or is polygons per second simply a meaningless figure?
Just for kicks, can anyone make an educated guess at the realistic throughput of the fastest consumer PC, in polygons per second (textured, with a single light)?
Many thanks
Dom.