TuxDave
Originally posted by: Velk
Originally posted by: TuxDave
Originally posted by: Velk
Originally posted by: CTho9305
Consoles using the Cell will have the advantage of only having to render for TV resolutions - at most 1080 lines, while PCs will be rendering at up to 1600x1200
1920x1080 (1080i) > 1600x1200
That doesn't stop the original article from being silly though - reading a patent and then giving real-world performance predictions for a product built off it is more or less akin to reading performance specs in tea leaves.
Just to be a little more accurate.
1920x1080 > 1600x1200 > 1080i
That 'i' is a kicker.
Not at all - to be able to display alternating interlaced fields you still need to work at the full resolution in the first place. You'd have a point if it were, say, television broadcasting, but for a games console you still need to do all the work of calculating a 1920x1080 image in order to render the interlaced half-frames for 1080i.
Of course, if you are going solely on the basis of how many pixels are displayed in a given image, then yes, 1080i is a lower resolution than 1600x1200, but we were talking about the work required to produce the image in the first place. 1080p would probably have been a better example, given it works on both levels.
Well, the point I was trying to make is that getting 60fps at 1080p should be harder than at 1080i. Here's how I broke it down. Say there are X pixels per frame and you go SLI with 2 GPUs (or even dual core, since that's the 'in' thing nowadays). For 1080p, each GPU needs to render X/2 pixels in 1/60 of a second. On the other hand, if you output at 1080i, each GPU should only need to render X/2 pixels in 2/60 of a second, since a full frame only has to be finished once every two fields. So in the second case you can get away with a processor that has half the performance. Correct me if I'm wrong - I'm not familiar with the details of graphics processing, but that's logically how I think about it.
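Here's a rough sketch of that arithmetic in Python, just to put numbers on it. Everything in it is my own back-of-the-envelope assumption (the two-GPU split, the variable names), and it only counts output pixels per second - it doesn't model the full-resolution scene work Velk is describing, which is the same either way.

```python
# Back-of-the-envelope fill-rate arithmetic for the 1080p vs 1080i argument above.
# Illustrative only: counts pixels written per second, ignores geometry/scene work.

PIXELS_PER_FRAME = 1920 * 1080   # "X" in the post above
GPUS = 2                         # hypothetical SLI / dual-core split

# 1080p60: a full X-pixel frame must be ready every 1/60 of a second
p60_pixels_per_sec_per_gpu = (PIXELS_PER_FRAME / GPUS) * 60

# 1080i60: each field is half the lines, so a full X-pixel frame
# only has to be finished every 2/60 of a second
i60_pixels_per_sec_per_gpu = (PIXELS_PER_FRAME / GPUS) * 30

print(f"1080p60: {p60_pixels_per_sec_per_gpu / 1e6:.1f} Mpixels/s per GPU")
print(f"1080i60: {i60_pixels_per_sec_per_gpu / 1e6:.1f} Mpixels/s per GPU")
# 1080p60 works out to roughly twice the per-GPU pixel rate of 1080i60,
# which is the "half the performance" claim above.
```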