Originally posted by: dunno99
Quick question for everyone here:
Why do people believe that they need more RAM for higher resolutions? The difference between 1600x1200 32-bit output and 1920x1200 32-bit output is only about 1500 kilobytes (slightly less than 1.5MB). Even if you account for what the OS reserves for the output buffer, double buffering for your game, and additional deferred rendering targets, that's only about 6 times more, which is roughly 9MB. Why do people think that games out there will actually come within 9MB of the limit and therefore spill onto the PCIe bus? From my understanding, most of that RAM is used for textures. If you don't change the quality of the textures, the game would therefore only take about 9MB more going from 1600x1200 to 1920x1200. Am I missing something?
Originally posted by: HostofFun

What you're saying is true for 2D, but 3D is a whole different ballgame.
Originally posted by: dunno99

And how exactly is it different for 3D? Last I recall, everyone is still using a monitor that is 2D. All textures are still pretty much 2D (no one seriously uses 3D textures...except academics). The output buffers are still 2D as well. The only things that are 3D are the geometric coordinates, normals, etc. of objects, and those are usually stored in main memory, not graphics memory.
Originally posted by: TheSnowman

The output is 2D, but the calculations are done on a 3D framebuffer, typically with 32 bits of depth on the z-axis. That is what separates 3D graphics from 2D rendering. Also, anti-aliasing samples multiply the size of the backbuffer, and of course HDR means more bits dedicated to color depth as well.

And anyway, 1600x1200 isn't even 17% less than 1920x1200. When people say you need more RAM for higher resolutions, they are referring to differences like 1600x1200 compared to 1024x768, where the former is over double the latter, and that difference can easily require over 50MB of RAM. Yeah, textures still use the most VRAM by far. However, if textures are already pushing a card close to its limit, then running a higher resolution requires either turning the texture resolution down a notch so that it all fits, or living with the framerate hitches that come with texture swapping.
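To put rough numbers on that, here is a simplified sketch of render-target memory: a front buffer plus a multisampled back buffer and depth buffer, with FP16 HDR color assumed for the example figures. It ignores driver padding, resolve surfaces, and anything vendor-specific, so treat the results as ballpark only:

```python
# Simplified render-target footprint: front buffer + multisampled back buffer
# + multisampled depth buffer. Real drivers add padding, resolve surfaces,
# etc., so these are ballpark figures only.
def render_target_bytes(width, height, color_bytes=4, depth_bytes=4, samples=1):
    pixels = width * height
    front = pixels * color_bytes            # displayed buffer, not multisampled
    back = pixels * color_bytes * samples   # AA multiplies the back buffer
    depth = pixels * depth_bytes * samples  # and the depth buffer
    return front + back + depth

MB = 1024 ** 2
# 4x AA with FP16 HDR color (8 bytes per sample), as an assumed example:
low = render_target_bytes(1024, 768, color_bytes=8, samples=4)
high = render_target_bytes(1600, 1200, color_bytes=8, samples=4)
print((high - low) / MB)   # ~60 MB difference between the two resolutions
```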
Originally posted by: dunno99

The calculations are done in 3D, but when the results are stored in memory, they're still stored in 2D space (usually with 24 bits for the depth buffer and 8 bits for the stencil buffer). Nothing really separates 2D and 3D graphics output other than the fact that you have a depth buffer (2D resolution * bits-per-pixel = memory taken) and some projected intermediary data. Sure, anti-aliasing takes up more memory, but that's usually only memory for the framebuffer/depthbuffer, nothing else.

The whole point is that people on these forums are asking whether or not they need to get the higher-memory card for their 1920x1200 monitor instead of their old 1600x1200 one. Even if that's not the case, I don't think I've seen any posts about upgrading from 1024x768 in the last few months on these forums, since I don't think most people who are wondering about the difference between 256MB and 512MB were gaming at 1024x768 to begin with. I think the bottom line is that graphics cards are limited by their shading power when going to higher resolutions, rather than by the amount of onboard memory. And in some instances, one *may* encounter the borderline case where the card has to start hitting the PCIe bus...but that varies from game to game, and with the settings within the game.
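As a rough illustration of that borderline case, here is the same style of estimate for 1920x1200 with a 24+8-bit depth/stencil buffer and 4x AA, measured against a hypothetical 256MB card; the remaining budget for textures and geometry is purely an illustrative leftover, not a measured figure:

```python
# Rough budget check for 1920x1200 with 4x AA on a hypothetical 256MB card.
# Depth/stencil is 24+8 bits = 4 bytes per pixel; whatever is left over is
# the budget for textures, geometry, etc. Purely illustrative numbers.
MB = 1024 ** 2

def render_target_bytes(width, height, color_bytes=4, depth_stencil_bytes=4, samples=1):
    pixels = width * height
    front = pixels * color_bytes
    back = pixels * color_bytes * samples
    depth_stencil = pixels * depth_stencil_bytes * samples
    return front + back + depth_stencil

card_memory = 256 * MB
targets = render_target_bytes(1920, 1200, samples=4)
print(targets / MB)                  # ~79 MB of render targets
print((card_memory - targets) / MB)  # ~177 MB left for textures, geometry, etc.
# If textures alone already want more than that leftover, the card starts
# paging over the PCIe bus, which is the borderline case described above.
```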