What I fundamentally love about gaffes like this is that they open the door to competition, innovation, and far better prices.
New architectures. New approaches. New offerings. Better prices. Although I have an interesting compute workflow, I am skipping the GeForce 20 series cards, solely because of what I feel is an extortionate price. Being interested in the tech but absolutely disgusted with the price, I did a lot of research into how the range of algorithms behind hybrid ray tracing works, and into how the GPU acceleration works: how divergent rays cause performance to tank and how tricks with cache coherence can mask this; how the current tech really is hybrid ray tracing, in that it uses ray-tracing cores alongside the traditional rasterizer pipeline; and how the raw images are quite noisy, with lots of hacks still needed to make it look like it's doing its job. The key is denoising and intelligent sampling. This is a 1st-gen architecture that will evolve significantly, and a good deal of software groups won't even have support until mid-2019. So this lands Nvidia in 7nm competition with whatever AMD provides.
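To make the noise point concrete, here's a toy sketch of my own (nothing to do with Nvidia's actual pipeline) of why a one-sample-per-pixel Monte Carlo render looks so grainy: the error of the pixel estimate only shrinks with the square root of the sample count, which is exactly why the denoiser ends up doing so much of the heavy lifting.

```python
import random
import statistics

def shade_pixel(true_value=0.5, n_samples=1, rng=random):
    """Toy Monte Carlo pixel: each sample is the true radiance plus
    uniform noise, standing in for the variance of random ray paths."""
    samples = [true_value + rng.uniform(-0.5, 0.5) for _ in range(n_samples)]
    return sum(samples) / n_samples

def noise_level(n_samples, trials=2000, seed=42):
    """Standard deviation of the pixel estimate across many renders."""
    rng = random.Random(seed)
    estimates = [shade_pixel(n_samples=n_samples, rng=rng)
                 for _ in range(trials)]
    return statistics.stdev(estimates)

# Error shrinks roughly as 1/sqrt(N): 1 sample per pixel is about
# 4x noisier than 16 samples, not 16x.
print(noise_level(1), noise_level(16))
```

Going from 1 to 16 samples per pixel only cuts the noise by ~4x, so at real-time ray budgets you either eat the grain or denoise it away.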
Upon doing a lot of research, I found that Pascal can already do gigarays' worth of 'meme rays' per second; it largely depends on the benchmark. Pascal is not stuck at megarays. Microsoft's DirectX ray tracing fallback code is quite garbage: a person on another forum managed to double performance just by making 3-4 lines of code changes involving shared memory. Pascal already does gigarays in similar benchmarks. Whereas Jensen claims the Quadro RTX 6000 does 10 gigarays, one rendering company is on record saying it does 3.2 gigarays. What Jensen was referring to were primary rays [the initial batch of rays that flood into a scene]. After they hit objects, they diverge further and further into secondary rays, and so on. This divergence causes a huge performance hit. Nvidia made architectural changes with caching etc. to mask this, but it is still a hit. In the real world, you don't slam 10 gigarays per second at one static object that is parallel to your viewport. This is why real-world performance is much lower than the quote.
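A quick toy model (again my own illustration, not any real driver or benchmark) of why primary and secondary rays behave so differently on a GPU: primary rays through a camera are nearly parallel, so neighboring threads march through the same parts of the scene and the caches stay hot, while diffuse-bounce secondary rays scatter across the whole hemisphere and neighboring threads go their separate ways.

```python
import math
import random

def primary_ray(px, py, width=64, height=64, fov_deg=40.0):
    """Camera ray through pixel (px, py): neighbors point almost the same way."""
    half = math.tan(math.radians(fov_deg) / 2)
    x = (2 * (px + 0.5) / width - 1) * half
    y = (1 - 2 * (py + 0.5) / height) * half
    n = math.sqrt(x * x + y * y + 1)
    return (x / n, y / n, -1 / n)

def diffuse_bounce(rng):
    """Secondary ray after a diffuse hit: uniform over the hemisphere,
    so neighboring pixels' rays head off in unrelated directions."""
    z = rng.random()
    phi = 2 * math.pi * rng.random()
    r = math.sqrt(max(0.0, 1 - z * z))
    return (r * math.cos(phi), r * math.sin(phi), z)

def mean_pairwise_dot(dirs):
    """1.0 = perfectly parallel (coherent) rays; near 0 = fully divergent."""
    n = len(dirs)
    total = sum(sum(a * b for a, b in zip(dirs[i], dirs[j]))
                for i in range(n) for j in range(i + 1, n))
    return total / (n * (n - 1) / 2)

rng = random.Random(1)
primaries = [primary_ray(px, 32) for px in range(64)]      # one scanline
secondaries = [diffuse_bounce(rng) for _ in range(64)]      # after one bounce
print(mean_pairwise_dot(primaries))    # close to 1: coherent
print(mean_pairwise_dot(secondaries))  # far lower: divergent
```

One diffuse bounce is enough to destroy the coherence, which is why the headline gigaray figure (all primary rays) says so little about path-traced workloads.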
So, I read a lot about the algorithm. I went through Nvidia's own publications and found lots of things I didn't like. I view this as a great 1st-version architecture. Current Pascal cards are already great performers for my needs. If the prices of GeForce 20 weren't so extortionate, I wouldn't have hesitated to buy one and sample it. Now I am looking forward to 7nm in 2019 and what the competition has to offer, because along the way I discovered that AMD cards support this too, and do it in the gigarays of primary rays. Apple did a demo showing a friggin' iPad doing megarays in the box scene Jensen demo'd; Apple has their own framework for ray tracing atop Metal.
Everyone is going to be getting in the game, and Nvidia wasn't first to it. The real offerings come next year. With no big software support anywhere until 2019, there is no reason to be clamoring over these cards. I game on a Maxwell card with only 2GB and even that performs well enough. I have 1080s, but they are for compute only, and I couldn't imagine caring a single bit if I gamed on one.
This is peak hubris and a money grab, and I will not support it with my money. I am also hopeful about AMD's open source driver initiatives and Vulkan support finally maturing for compute, because right now it's a mess, and it's why I sold off the Vega card I bought. I look forward to Intel and many others entering the game for AI compute, and that's exactly what's in the pipeline. On the traditional GPU side, future cards will look nothing like they do now, and there will be many more competitors besides AMD/Nvidia in the game, thank god.
Later this week, I will be parsing Nvidia's official detailing of their microarchitecture. I can't wait to see what kind of tricks they're pulling to hit these numbers. As for the reviews of these cards next week, I'm not holding my breath. I don't think Nvidia cares, because their motive ATM is to sell off their huge oversupply of Pascal cards, and so they priced the new cards in a way that doesn't compete at all and keeps Pascal selling well into the end of the year. If Nvidia doesn't re-deliver GeForce 20 on 7nm with better pricing before this time next year, they're screwed. I hope they are arrogant enough, like Intel, to think they can do otherwise. They need to be brought back down to earth, and competition is good for that, as we have already seen in the CPU market.
Ray tracing and tensor cores belong in an MCM architecture on a different die. Whoever gets their act together with such an implementation is going to destroy the market.