ub4ty
Senior member
The problem here is that he wasn't bashing their product. He stated something pretty straightforward and sound. Terminating his sponsorship is one thing... being petty and asking for your goofy gear back is a whole other level. This is what happens when you have too many fratboi chads on your business/marketing team who think they can treat anyone however they like because their fee-fees got hurt. This is not how you run a business in today's age, where these locker-room-level stunts make it to the front page and cast a big negative shadow on you as a company.

I saw that, but I don't see an issue with them asking him to return it, seeing as he was selected to be a sponsor for them. You wouldn't want someone who sponsors your company to do anything that would hurt sales.
I agree with his logic 100%, though. Why are people preordering without any info? He's right about that.
I'll come right out and say it... I was in the midst of having an application processed for one of their programs, and I'm sort of glad I didn't get in. I don't like the feeling of being censored or restricted, especially when the opinions/ideas are grounded and reasonable. As a result of observing this fiasco and others, I'm going to do everything I possibly can to have zero contracts with hardware vendors. I have, after all, bought all of my hardware on my own dime up until now, with zero support.
I'm currently having a discussion with other informed individuals on a different platform, and the consensus is the following:
They achieve this functionality via a hybrid overlay solution / hybrid pipeline.
Tensor cores and ray-trace cores are packed into the SM; the ray-trace cores sit where the prior double-precision floating-point logic used to be. The hardware generates a BVH data structure and uses it both to guide traditional rasterization and to feed the ray-trace cores in parallel. The ray-trace cores calculate a series of intersections and produce a quite noisy image. This noisy image is overlaid on the traditional rasterizer pipeline's output, and then tensor cores are used to do "AI / meme-learning-based denoising": https://research.nvidia.com/sites/default/files/publications/dnn_denoise_author.pdf
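A toy sketch of that flow helps make it concrete. Everything below is my own placeholder code, not Nvidia's API: the stage functions are stubs, and a crude box blur stands in for the trained denoiser from that paper.

```python
import numpy as np

H, W = 180, 320  # toy resolution

def build_bvh(scene):
    # Stand-in for the BVH build that guides both pipelines.
    return scene

def rasterize(scene):
    # Stand-in for the traditional raster pipeline: a clean base frame.
    return np.full((H, W), 0.5)

def trace_rays(bvh, rays_per_pixel=1):
    # Stand-in for the RT cores: intersection tests at a low ray budget,
    # which is exactly why the raw output is so noisy.
    rng = np.random.default_rng(0)
    return (rng.random((H, W)) < 0.1).astype(float)  # noisy "shadow" mask

def tensor_denoise(base, noisy):
    # Stand-in for the tensor-core denoiser: a crude 5-tap box blur
    # replaces the trained network, then the result is fused ("overlaid")
    # onto the rasterized frame.
    blur = (noisy
            + np.roll(noisy, 1, axis=0) + np.roll(noisy, -1, axis=0)
            + np.roll(noisy, 1, axis=1) + np.roll(noisy, -1, axis=1)) / 5.0
    return base * (1.0 - 0.5 * blur)

scene = None
bvh = build_bvh(scene)
base = rasterize(scene)              # on hardware, these two stages...
noisy = trace_rays(bvh)              # ...run in parallel,
frame = tensor_denoise(base, noisy)  # ...but this stage is a serial tail
print(frame.shape, float(frame.min()), float(frame.max()))
```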
The performance penalty comes from extending the graphics pipeline with a new stage: the tensor-core denoising pass. The ray-trace core calculations run in parallel with the rasterizer pipeline, and the raw result is quite noisy and ugly. The real magic happens in fusing this with the rasterized image on the tensor cores. When not being used to denoise the ray-tracing output, the tensor cores are repurposed to support DLSS.
There are still issues with pixel flicker and noise in the image. There are also issues with ghosting when they don't do per-frame ray-trace rendering and 'hold-over' shadows persist. They'll dial this in over time. The Star Wars demo had a 45 ms per-frame render time using the ray-trace cores, which is somewhere around 22 FPS. So you either choose higher-quality ray-trace results at lower FPS, or higher FPS with lower-quality / non-per-frame ray-trace results. I could imagine a slider for adjusting this functionality.
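The arithmetic on that demo number, plus what a quality slider might look like if ray-trace cost scales roughly linearly with the ray budget (an assumption on my part; the numbers are illustrative, not measured):

```python
frame_ms = 45.0                       # Star Wars demo ray-trace time
print(f"{frame_ms} ms/frame -> {1000.0 / frame_ms:.1f} FPS")  # ~22.2 FPS

# Hypothetical quality slider: scale the ray budget, assume cost scales
# with it.
for quality in (1.0, 0.5, 0.25):
    ms = frame_ms * quality
    print(f"quality {quality:.2f}: {ms:5.2f} ms -> {1000.0 / ms:5.1f} FPS")
```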
The reason there's a lack of details and benchmarks is that performance is going to be all over the place, and the details of how this works are quite complicated. I consider this a beta-level dev board for gaming applications integrated into a 1080 Ti. The speedups will come from GDDR6, the transition from 16 nm to 12 nm, and the architectural changes they made to the caching structure to allow for a hybrid ray-trace/rasterizer pipeline. When you turn RTX off, the traditional GPU pipeline probably gains access to a larger cache space that would otherwise be dedicated to the ray-trace pipeline. You will take an FPS hit when you turn ray tracing on because the graphics pipeline has new stages for the "AI denoising" and upsampling of the low-res ray-tracing results. You cannot use the ray-tracing output without denoising it on the tensor cores. While the ray-trace op runs in parallel with the rasterizer pipeline, the post-processing denoise does not.
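A back-of-the-envelope timing model of that serialization, with made-up stage times, shows why enabling RTX costs FPS even though the ray casting itself overlaps the raster work:

```python
# All times in ms and purely illustrative.
t_raster, t_rt, t_denoise = 10.0, 12.0, 5.0

rtx_off = t_raster                        # plain pipeline
rtx_on = max(t_raster, t_rt) + t_denoise  # RT overlaps raster, but the
                                          # denoise pass is a serial tail
print(f"RTX off: {rtx_off:4.1f} ms -> {1000 / rtx_off:5.1f} FPS")
print(f"RTX on:  {rtx_on:4.1f} ms -> {1000 / rtx_on:5.1f} FPS")
```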
I'm torn on whether to purchase one of these cards and dedicate resources to evaluating its functionality. I suspect it will be some time before it's supported in Vulkan, and I don't expect CUDA 10 to be released immediately. I also suspect there's a good amount of functionality they're going to disable relative to the Quadro variants.
On pricing, it's simple... A 2080 costs as much as an entry-level Quadro now. Nvidia was pissed because people were using 1080 Ti FEs in data centers and render farms, so they changed the EULA to combat it. No one cared, so they got rid of the reference blower design to discourage use in a server environment and jacked the price up to Quadro levels. An entry-level Quadro RTX now costs $2,300, and the 2080 now hilariously costs the same as an entry-level Quadro P4000 ($800). They couldn't manhandle a market that was rejecting their prior price premiums, so now they're jawboning it forcefully. This has the handprint of an over-aggressive business-development group all over it, and Jensen and the technical staff had better rein these jokers in before they severely harm Nvidia's brand and future success.
For the gamer, one must decide whether it's worth investing in something Nvidia will likely obsolete in 8 months with 7 nm. This is a first try at an architecture, and it's no doubt going to see huge revisions of the SM over time. Few games will support this, and it's very immature in its capabilities. For professional renderers, this is a godsend feature (as they aren't under real-time compute constraints, except for convenience in previews). In final render, they can let this thing run as long as they want to produce a more refined image, and it's much faster than CPUs. The problem comes down to the marketing, however... AMD can produce 400 Megarays on its current architecture and has demoed this in their ProRender real-time ray tracing.
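To put a number on the "let it run as long as they want" point: Monte Carlo ray-tracing noise falls off roughly as 1/sqrt(samples), so an offline renderer just keeps accumulating until the image is clean. A minimal single-pixel demonstration (the ground-truth value and noise level are made up):

```python
import numpy as np

rng = np.random.default_rng(42)
true_radiance = 0.3   # made-up ground truth for one pixel

for n in (1, 16, 256, 4096):
    # Each sample is a noisy estimate; averaging shrinks the error ~1/sqrt(n).
    samples = true_radiance + rng.normal(0.0, 0.2, size=n)
    estimate = samples.mean()
    print(f"{n:5d} samples: estimate {estimate:.4f}, "
          f"error {abs(estimate - true_radiance):.4f}")
```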
Overall, I'm disgusted with the lack of decency Nvidia has been exhibiting as of late.
You have the Partner program fiasco.
You have this: https://www.hardocp.com/news/2018/0...ship_for_stance_against_preordering_hardware/
You have this: https://www.techspot.com/news/72545-nvidia-geforce-eula-change-prevents-data-centers-using.html (I know this is why they nixed blowers on the GeForce cards.)
You have the ridiculous pricing.
You have this ridiculous blackout regarding details.
And god knows what else is lurking under these lofty specs.
If it turns out that the ray-trace cores only do megarays, just like AMD's GPUs, and their gigaray quote comes from the tensor-core upsample, they will have officially lost all decency in my book. So I'm waiting it out. If this gigaray nonsense is a farce, there's no reason to go with them over AMD, and with AMD opening up their software stack and having the same Vulkan compatibility, they're who I'll invest resources in. Lastly, you can already do ray tracing on current Nvidia GPUs; it's just slower. For dev purposes, I'm going to focus on doing exactly that with Pascal. For gaming, I use Maxwell and have no performance issues. I'll probably upgrade my gaming rig in 2020, when this idiocy comes back down to earth.
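Some quick math on why that distinction matters. Nvidia's marketing figure for the 2080 Ti is 10 Gigarays/s; at a real-time budget that's a very different per-pixel ray count than AMD's demoed 400 Megarays/s, assuming both figures are honest, which is exactly what's in question here:

```python
pixels_1080p = 1920 * 1080
fps = 60

for name, rays_per_sec in (("Nvidia RTX claim", 10e9),
                           ("AMD ProRender demo", 400e6)):
    rpp = rays_per_sec / fps / pixels_1080p
    print(f"{name}: {rpp:5.1f} rays/pixel/frame at 1080p/60")
# ~80 vs ~3.2 rays per pixel -- a 25x gap. If the gigaray figure is
# really upsampled megarays, that gap largely evaporates.
```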