I was pretty lukewarm about it initially, but it sounds like it's pretty impressive. I wondered if there wasn't some tessellation-like auto-generated geometry, but figured that would be woeful, since I thought geometry throughput was one of the aspects of GPU hardware that wasn't really increasing at the pace of the rest. It sounds like they're mostly bypassing that issue by doing it in software, and then it does a good job of culling. Being able to bring in highly detailed models and then cull them based on the scene is impressive (although them touting the number of identical models makes me feel like a lot of the benefit will come from reusing the same assets multiple times). It's also something that I don't think would have been possible without the SSD. And now I wonder if the best way forward wouldn't be to add an SSD socket to video cards, or NAND directly.
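Something like the classic screen-space-error test gives a feel for how a software pipeline can afford to drag in huge models and still cull or coarsen them aggressively. A rough C++ sketch (the struct, thresholds, and numbers here are just mine for illustration, not how the engine actually does it):

```cpp
#include <cmath>
#include <cstdio>

// Illustrative only: decide whether a cluster of triangles needs full detail
// by checking how large its geometric error would appear on screen.
struct Cluster {
    float distance;        // distance from camera to cluster center, in meters
    float geometricError;  // max deviation of a simplified version, in meters
};

// Projected error in pixels for a perspective camera.
float screenSpaceError(const Cluster& c, float verticalFovRadians, float screenHeightPx) {
    float pixelsPerMeter = screenHeightPx / (2.0f * c.distance * std::tan(verticalFovRadians * 0.5f));
    return c.geometricError * pixelsPerMeter;
}

int main() {
    Cluster nearCluster{2.0f, 0.01f};   // 1 cm of error, 2 m away
    Cluster farCluster{200.0f, 0.01f};  // same error, 200 m away
    const float fov = 1.0472f;          // ~60 degrees
    const float height = 2160.0f;       // 4K vertical resolution

    std::printf("near cluster: %.2f px of error\n", screenSpaceError(nearCluster, fov, height));
    std::printf("far cluster:  %.2f px of error\n", screenSpaceError(farCluster, fov, height));
    // If the error is under ~1 px, a much coarser version of the cluster is
    // indistinguishable, so the full-detail geometry never has to be rasterized.
    return 0;
}
```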
And this seems to actually show that Primitive Shaders were everything AMD hyped them to be. I might be reading that wrong (and they're saying that when the hardware rasterizer is faster, they use primitive shaders). But now we have to consider compute performance, so would Vega actually be a really good architecture for this?
I'm guessing they were developing that since they didn't know for sure what the ray-tracing hardware would be (but they knew what GPU hardware they had to work with otherwise). And same for Crytek. I wonder how much benefit they'll get from dedicated ray-tracing hardware.
This is the eternal problem with jagged low-detail meshes and normal maps that only hold up at the right view angles.
Displacement mapping is a partial solution, but not an ideal one by any means.
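The difference is easy to see in code: a normal map only swaps the normal used for shading, while displacement actually moves vertices, which is why silhouettes and grazing angles give normal maps away. A minimal CPU-side sketch (the types and scale factor are made up for illustration):

```cpp
struct Vec3 { float x, y, z; };

// Normal mapping: the position never moves; only the shading normal is replaced,
// so the silhouette stays as flat and jagged as the underlying low-poly mesh.
Vec3 shadeNormal(const Vec3& /*geometricNormal*/, const Vec3& normalFromMap) {
    return normalFromMap;  // tangent-to-world transform omitted for brevity
}

// Displacement mapping: the vertex itself is pushed along its normal by a height
// sampled from a map, so the silhouette genuinely changes (at the cost of real geometry).
Vec3 displaceVertex(const Vec3& position, const Vec3& normal, float heightSample, float scale) {
    return { position.x + normal.x * heightSample * scale,
             position.y + normal.y * heightSample * scale,
             position.z + normal.z * heightSample * scale };
}
```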
As far as game development goes, this will be a big boon in terms of simplifying production pipelines, but it will force developers to be very economical about geometry and texture size in storage. That probably means maintaining a library like Quixel Megascans (or just using Megascans itself), selecting only the assets they absolutely need for the game, and reusing those assets heavily in varying ways.
This ties in well with the storage decompression/compression co-processing hardware in both new consoles, but compression will only go so far, and I fear the co-processing hardware won't be terribly flexible once new compression formats appear.
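To make that flexibility worry concrete: a fixed-function decompressor only helps for the formats it was built for, while anything newer falls back to the CPU. A generic sketch with zlib standing in for "some compression format" (nothing here reflects the consoles' actual formats or APIs):

```cpp
#include <zlib.h>
#include <cstdio>
#include <string>
#include <vector>

int main() {
    // Some asset data; real game data compresses far less nicely than repeated text.
    std::string asset(64 * 1024, 'A');

    // CPU-side compression/decompression: flexible (any future format could be
    // handled this way), but it costs CPU time the hardware block would have saved.
    uLongf compressedSize = compressBound(asset.size());
    std::vector<Bytef> compressed(compressedSize);
    compress(compressed.data(), &compressedSize,
             reinterpret_cast<const Bytef*>(asset.data()), asset.size());

    uLongf restoredSize = asset.size();
    std::vector<Bytef> restored(restoredSize);
    uncompress(restored.data(), &restoredSize, compressed.data(), compressedSize);

    std::printf("original: %zu bytes, compressed: %lu bytes\n", asset.size(), compressedSize);
    return 0;
}
```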
It's not like they've been running N64 models with crazy levels of bump mapping, so the disparity wasn't that big. Plus, I believe geometry hardware was languishing compared to the overall improvements going into the rest of the GPU, so there were reasons they were balancing geometry that way. And yeah, there's still going to be a tradeoff where they have to juggle constraints. I could definitely see that: have a certain set of base objects and work them into different configurations (not unlike what they used to do with tiles back in the 2D era, where they'd have simple blocks and then use them multiple times in different ways to build more complex things, like stone, bricks, etc.).
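That tile analogy maps pretty directly onto instancing: store the detailed base asset once and keep only a cheap transform per placement. A toy sketch (the names are invented, just to show the idea):

```cpp
#include <cstdio>
#include <vector>

// One base asset, stored a single time...
struct Mesh { int triangleCount; };

// ...and many cheap placements of it, each just a transform.
struct Instance {
    float position[3];
    float rotationY;
    float scale;
};

int main() {
    Mesh brickWallSegment{20'000};  // detailed asset, stored once
    std::vector<Instance> placements;

    // Build a long wall out of the same segment, like reusing a 2D tile.
    for (int i = 0; i < 50; ++i)
        placements.push_back({{i * 2.0f, 0.0f, 0.0f}, 0.0f, 1.0f});

    std::printf("unique triangles stored: %d\n", brickWallSegment.triangleCount);
    std::printf("triangles on screen:     %zu\n",
                static_cast<size_t>(brickWallSegment.triangleCount) * placements.size());
    return 0;
}
```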
This is showing what the big increase in overall data throughput (which is the next big thing in computing) can bring. It's also highlighting one of our current issues that could become a big problem as this next generation pushes it: getting that data into the system in the first place. Game sizes are going to continue to balloon; Blu-rays can hold a lot, but getting the data off them is a hassle (long load/install times), and downloading 100 GB+ games takes a long time. Plus, flash is still expensive, so while it enables all this, it's also a limitation: we're not going to see capacities increase exponentially to keep up, and there are only so many games you can install before the drive is full.
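Just to put rough numbers on the download side (the connection speeds here are arbitrary examples):

```cpp
#include <cstdio>

int main() {
    const double gameSizeGB = 100.0;                   // a 100 GB game
    const double gameSizeGigabits = gameSizeGB * 8.0;  // bytes -> bits

    const double linkMbps[] = {25.0, 100.0, 500.0};    // example connection speeds
    for (double mbps : linkMbps) {
        double seconds = (gameSizeGigabits * 1000.0) / mbps;  // Gb -> Mb, then divide by Mbps
        std::printf("%5.0f Mbps -> %.1f hours\n", mbps, seconds / 3600.0);
    }
    // Roughly: ~8.9 hours at 25 Mbps, ~2.2 hours at 100 Mbps, ~27 minutes at 500 Mbps.
    return 0;
}
```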
I now see why they're putting a fairly large amount in; I expected they would put in a lot less (mainly using it as a buffer, or enough to hold a single game at a time, where they'd load the game in fully and then purge it when you wanted to play something else and needed to make space). They'll also need to protect the write endurance somewhat.
I wonder if someone might make a MIDI fontset type of thing, where they build a library of assets and games just pull a lot of their base assets from there. I kind of think that's already what this is about, as there are super-high-quality asset libraries that companies can license or buy assets from and use instead of having to build so many new assets for each new game. But I was thinking about how Sony said they'd keep a certain amount of game data on the SSD so that it doesn't have to be rewritten constantly, which improves overall responsiveness. What if they dictated a library of base assets (that they'd tune for size, usability, etc.) and said games should use those?
They'd also need to develop a program that consistently generates geometry on the fly from those base assets, so it could create buildings out of bricks, etc. Think of it like Minecraft, where the world is procedurally generated from base assets, but at modern gaming quality. I think that's kind of what No Man's Sky was going for. This also reminds me a bit of... I forget the one company, but they had the "infinite detail" thing where I think they talked about using base "atoms" to build more complex objects. And I believe they were talking about pulling assets from photorealistic libraries, where instead of needing to pull all the assets separately, you feed it simplified information and it builds the asset out of the base building blocks.
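A tiny sketch of that "build it from base blocks" idea: ship a shared library of base assets plus a seed and some rules, and the game regenerates the same layout deterministically instead of storing the finished geometry (asset names and rules are invented here):

```cpp
#include <cstdio>
#include <random>
#include <string>
#include <vector>

// The shared "fontset" of base assets every game could pull from.
const std::vector<std::string> kBaseAssets = {"brick", "stone", "timber", "plaster"};

// Deterministically lay out a small wall from base blocks: same seed, same wall,
// so only the seed and the rules ship with the game, not the built geometry.
std::vector<std::string> generateWall(unsigned seed, int length) {
    std::mt19937 rng(seed);
    std::uniform_int_distribution<size_t> pick(0, kBaseAssets.size() - 1);

    std::vector<std::string> wall;
    for (int i = 0; i < length; ++i)
        wall.push_back(kBaseAssets[pick(rng)]);
    return wall;
}

int main() {
    for (const auto& block : generateWall(/*seed=*/42, /*length=*/8))
        std::printf("%s ", block.c_str());
    std::printf("\n");
    return 0;
}
```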