Matthias99
Diamond Member
- Oct 7, 2003
Originally posted by: BenSkywalker
Why is it "not close to reasonable"? Isn't this along the lines of what Doom3 does for its lighting?
No, D3 does a Z pass to determine straight-line visibility; child's play in comparison to what you are talking about.
Ah, you're right. They "cheat" (in a sense) by using hardware-supported shadow volumes for the visibility tests.
If the point you're concerned about on the surface can 'see' the light, the light can 'see' the point on the surface (barring single-sided surfaces between them, but most dynamically lit game engine objects are 'solid'). It shouldn't matter if you trace light->surface or surface->light for visibility and lighting calculations (although one or the other may be faster in a particular case, if you don't have to invert any of the vectors).
You are ignoring reflection, refraction and scatter.
Seriously, do any current game engines 'really' calculate reflected or refracted light from dynamic sources, or attempt to do much beyond approximated ambient lighting? I've seen some tech demos (like rthdribl) that do these things (as well as shader-based HDR), but they run so slowly as to be useless in a game engine with any of today's cards.
I'm talking about trying to determine 'is there an unobstructed line of sight between point A and point B'. Something that, I would think, has to be done for any nontrivial shader/fragment-based dynamic lighting system.
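To be concrete, here's a toy Python version of that test, with spheres as the only occluder type (the function name, points, and blocker here are all invented for illustration). It also shows the symmetry from my earlier point: A->B and B->A give the same answer.

```python
def occluded(a, b, spheres):
    """Return True if the segment from point a to point b is blocked
    by any sphere given as ((cx, cy, cz), radius)."""
    ax, ay, az = a
    dx, dy, dz = b[0] - ax, b[1] - ay, b[2] - az
    seg_len2 = dx * dx + dy * dy + dz * dz
    for (cx, cy, cz), r in spheres:
        # Project the sphere center onto the segment, clamped to [0, 1].
        t = ((cx - ax) * dx + (cy - ay) * dy + (cz - az) * dz) / seg_len2
        t = max(0.0, min(1.0, t))
        px, py, pz = ax + t * dx, ay + t * dy, az + t * dz
        # Compare squared distances to avoid a square root.
        if (cx - px) ** 2 + (cy - py) ** 2 + (cz - pz) ** 2 < r * r:
            return True
    return False

light = (0.0, 10.0, 0.0)
surface_point = (0.0, 0.0, 0.0)
blocker = [((0.0, 5.0, 0.0), 1.0)]   # a sphere sitting between them

# Symmetric: surface->light and light->surface agree.
print(occluded(surface_point, light, blocker))   # True
print(occluded(light, surface_point, blocker))   # True
print(occluded(surface_point, light, []))        # False
```

A real engine would test against triangles or a BSP/octree rather than raw spheres, but the question being asked per pixel is exactly this one.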
Why do you need "six or eight different full frame sized textures"? You need one full-frame buffer (presumably 24bpp) to store the dynamic lighting contribution for each pixel (or you might be able to work directly with the framebuffer). But you only need one per-pixel bitmap (with 1bpp) to mark the pixels that will be illuminated.
As long as your hardware supports conditionals you can do it that way; SM2.0 doesn't support conditionals. You can't exit the shader based on a simple flag; THAT IS THE PROBLEM. What you would have to do is create a shader per pixel and then store that for each frame.
I was under the impression that you didn't have to run a pixel shader on every single pixel on the screen (which you seem to be saying you do have to do). Admittedly, my knowledge of shaders stops somewhere before knowing how you would actually implement this in a program. If you can't do this... how do pixel shaders for things like water work? Certainly they're not running that shader on the pixels that are not part of the water... or are those sorts of things always done with vertex shaders? Or do they actually create a new set of shaders per frame as you are suggesting (and if so, wouldn't that indicate that this is still a viable technique?)
I did find some links that discussed ways of doing this in SM2.0 (mostly by hacking up the depth buffer and sorting out pixels that way), so it definitely seems like it should be possible.
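For the curious, here's roughly how I understand those depth-buffer tricks to work, sketched as a 1-D Python toy (the function names and numbers are mine, not from those links): a cheap first pass forces the depth of the pixels you want skipped, and the fixed-function depth test then rejects them before the expensive lighting shader ever runs, so the shader itself never branches.

```python
FAR = 1.0  # depth value of the far plane

def depth_mask_pass(width, skip):
    """Build a 1-D 'depth buffer': force depth 0.0 at pixels the later
    shading pass should skip; everything else stays at the far plane."""
    return [0.0 if x in skip else FAR for x in range(width)]

def shading_pass(depth_buffer, quad_depth=0.5):
    """Draw a full-width 'quad' at quad_depth with a LESS depth test.
    Pixels whose depth was forced to 0.0 fail the test and are rejected
    before the shader runs."""
    shaded = []
    for x, d in enumerate(depth_buffer):
        if quad_depth < d:        # fixed-function depth test, not a shader branch
            shaded.append(x)      # the expensive lighting shader would run here
    return shaded

depth = depth_mask_pass(8, skip={2, 5})
print(shading_pass(depth))   # [0, 1, 3, 4, 6, 7]
```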
With multiple lights, you either need one bitmap per light (and then do one pass per light to sum up the contributions), or you could just 'tag' each pixel that is visible to one or more lights, then calculate the contribution of each light to each tagged pixel (which may turn out to be 0 for some of the light/pixel combinations, and would be 'wasting' GPU time).
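A quick toy version of the one-pass-per-light approach (made-up 4-pixel "frame" and contribution values, just to show the accumulation):

```python
def light_pass(bitmap, contribution, accum):
    """One additive pass: add this light's contribution only at pixels
    whose visibility bit is set for this light."""
    for i, visible in enumerate(bitmap):
        if visible:
            accum[i] += contribution[i]
    return accum

vis_a = [1, 1, 0, 0]                  # pixels that can see light A
vis_b = [0, 1, 1, 0]                  # pixels that can see light B
contrib_a = [0.5, 0.5, 0.5, 0.5]      # light A's per-pixel intensity
contrib_b = [0.25, 0.25, 0.25, 0.25]  # light B's per-pixel intensity

frame = [0.0] * 4
light_pass(vis_a, contrib_a, frame)   # pass 1: light A
light_pass(vis_b, contrib_b, frame)   # pass 2: light B
print(frame)   # [0.5, 0.75, 0.25, 0.0]
```

The 'tagged' variant would use a single bitmap (vis_a OR vis_b) and evaluate both lights at every tagged pixel, trading wasted math for fewer passes.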
All requiring conditionals, which you can't do with SM2.0. We are not talking about fully programmable hardware here; SM3.0 isn't fully programmable either, it just removes some serious restrictions.
It only requires the capability of running a (non-branching) shader on a specified subset of pixels. You do not actually need conditionals within the shader if you can do this.
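A toy sketch of what I mean, with the per-pixel mask applied outside the shader (all names and numbers are made up; a real implementation would use the stencil or depth buffer as the mask):

```python
def double_brightness(p):
    # A straight-line 'shader': no conditionals, just math.
    # min() stands in for a hardware saturate here, not a branch.
    return min(1.0, p * 2.0)

def run_shader_masked(mask, shader, framebuffer):
    # The subset selection happens outside the shader (think stencil test),
    # so the shader body itself never needs to branch.
    return [shader(p) if bit else p for p, bit in zip(framebuffer, mask)]

fb = [0.1, 0.2, 0.3, 0.4]
mask = [1, 0, 1, 0]   # pixels the shader should run on
print(run_shader_masked(mask, double_brightness, fb))   # [0.2, 0.2, 0.6, 0.4]
```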
Other people would probably say that being more 'future-proof' for "only" a 10% increase in cost is obviously worth it. There's nothing wrong with this view; you're basically 'betting' that SM3.0 will turn out to be worth it at some point in the future.
It is a certainty that SM3.0 will be a major feature in the gaming industry; it looks like every next-gen console is going to be built around that exact feature set. No emerging gaming technology has had that level of support across the board to date.
I agree that it will be adopted more widely than SM2.0, but I still think the timeframe will be on the order of a few years.