I'm sure we all remember the days of PowerVR, and more recently the Dreamcast and Kyro products. Also recall the GigaPixel technology acquired by 3dfx and now (apparently) held by nVidia. Like it or not, the technology has the respect of at least some in the industry. However, it has been shunned by others, notably Tim Sweeney, lead programmer for the people who brought you Unreal Tournament.
As far as I can tell, the arguments against it center on the fact that deferred rendering involves scene capture: the entire scene is captured and processed (hidden surfaces removed, and so on) before any actual rendering takes place, whereas traditional architectures work on a stream of data. Traditional architectures end up doing more work overall, but they can start doing it sooner.
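To make the ordering concrete, here's a toy model in C++ (everything in it is made up for illustration; real hardware is of course far more involved). "Triangles" are just 1D spans with a depth, and the point is simply that the immediate-mode path shades every fragment as it streams in, while the deferred path captures the scene, resolves visibility, and then shades each covered pixel once:

```cpp
#include <cstdio>
#include <vector>

// Toy "triangle": a span of pixels [x0, x1) at some depth.
struct Tri { int x0, x1; float z; };

int main() {
    const std::vector<Tri> scene = { {0, 8, 0.9f}, {2, 6, 0.5f}, {4, 10, 0.1f} };
    const int W = 12;

    // Immediate mode: shade every fragment as it arrives,
    // even fragments that a later triangle will overdraw.
    std::vector<float> zbuf(W, 1.0f);
    int shaded_imm = 0;
    for (const Tri& t : scene)
        for (int x = t.x0; x < t.x1; ++x) {
            ++shaded_imm;                      // shading work spent here,
            if (t.z < zbuf[x]) zbuf[x] = t.z;  // possibly wasted
        }

    // Deferred: the whole scene is captured first, visibility is
    // resolved per pixel, and only then is each pixel shaded -- once.
    int shaded_def = 0;
    for (int x = 0; x < W; ++x) {
        bool covered = false;
        for (const Tri& t : scene)
            if (x >= t.x0 && x < t.x1) covered = true;
        if (covered) ++shaded_def;             // one shade per visible pixel
    }

    std::printf("immediate shades %d fragments, deferred shades %d\n",
                shaded_imm, shaded_def);       // 18 vs. 10 for this scene
}
```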
Scene capture has a few negative aspects of its own. First, it uses additional memory on the card: all those triangles need to be stored somewhere, and as triangle counts increase, so do the memory requirements. Second, the entire scene must be processed before it is rendered, and again there is more work to be done at higher polygon counts. So it is precisely when polygon counts are high that deferred rendering meets its bottleneck.
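As a back-of-the-envelope on the memory point (the 32 bytes per captured triangle is an assumed figure for illustration, not any real card's format), the scene buffer grows linearly with triangle count:

```cpp
#include <cstdio>
#include <initializer_list>

int main() {
    // Assumed storage per captured triangle; real formats will differ.
    const long bytes_per_tri = 32;
    for (long tris : {10000L, 100000L, 1000000L})
        std::printf("%8ld triangles -> %5.1f MB of scene buffer\n",
                    tris, tris * bytes_per_tri / (1024.0 * 1024.0));
}
```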
On the other hand, traditional architectures also have problems with high polygon counts. Their slow-down is actually not associated with the number of polygons itself, but with the higher overdraw factors that usually come into play in environments with such high polygon counts. Where deferred renderers draw only what is seen, traditional renderers (put simply) draw everything.
It seems that the issue comes down to triangle rate vs. fill rate. Will traditional renderers have enough pixel-pushing power to make up for the fill rate they waste on overdraw compared to deferred renderers? Will triangle counts grow quickly enough to rule out a deferred renderer at the high end, or will they move slowly enough to allow a deferred renderer to excel?
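Here's a crude cost model of that trade-off; every constant in it is an assumption I made up so the curves have a shape, not a measurement of anything. The immediate-mode part pays an overdraw multiple on the whole frame, while the deferred part pays an extra per-triangle capture cost but shades each pixel once:

```cpp
#include <cstdio>

int main() {
    const double pixels   = 1024.0 * 768.0; // frame resolution
    const double cost_pix = 1.0;            // cost to shade one pixel
    const double cost_bin = 4.0;            // assumed per-triangle cost of
                                            // scene capture and sorting
    const double overdraw = 3.0;            // assumed depth complexity

    // (Per-triangle setup that both architectures pay is left out.)
    for (double tris = 1e4; tris <= 1e7; tris *= 10) {
        const double immediate = pixels * overdraw * cost_pix;
        const double deferred  = tris * cost_bin + pixels * cost_pix;
        std::printf("%9.0f tris: immediate=%9.0f deferred=%9.0f\n",
                    tris, immediate, deferred);
    }
}
```

With these made-up numbers the deferred renderer wins easily at low triangle counts and loses badly at ten million; where the real crossover sits is exactly the open question.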
So far we have not seen a high-performance implementation of a deferred renderer. Even so, recent products have left a favorable impression on the marketplace. My question is: where do you see the future taking us?
Also feel free to discuss any other advanced 3D-graphics-related developments here.
Marty