If your hardware takes five years to be fully utilized, you didn't design a forward-looking piece of hardware; you designed a dud.
Oh look, our OpenGL drivers were so rubbish that if we move all the work onto the developers' shoulders this game gets 50% faster! Wow! What an achievement :)
Small...
I thought Sontin was being sarcastic after people went crazy accusing id & NVIDIA of sabotaging the new Doom game on some AMD parts. I can't otherwise make rational sense of what he wrote :)
If you understand what preemption is then you know it makes no sense to say it's used to run concurrent tasks.
Let me tell you how concurrent tasks run on Pascal. It's really easy. You see, some SMs are idle and the HW scheduler decides to schedule some independent compute (or graphics) work on...
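To make that concrete, here's a minimal CUDA sketch (a toy of my own, not from anyone's post; kernel names and sizes are invented) of two independent workloads launched in separate streams, which the HW scheduler is free to overlap on whatever SMs are idle, with no preemption involved:

#include <cuda_runtime.h>
#include <math.h>

// Two independent toy workloads standing in for "graphics" and "compute".
__global__ void graphicsLikeWork(float* data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] = data[i] * 0.5f + 1.0f;
}

__global__ void computeWork(float* data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] = sqrtf(fabsf(data[i]));
}

int main() {
    const int n = 1 << 20;
    float *a, *b;
    cudaMalloc(&a, n * sizeof(float));
    cudaMalloc(&b, n * sizeof(float));

    // Separate streams express independence. The scheduler may run
    // both kernels concurrently if SMs are free; nothing is preempted.
    cudaStream_t s0, s1;
    cudaStreamCreate(&s0);
    cudaStreamCreate(&s1);

    graphicsLikeWork<<<n / 256, 256, 0, s0>>>(a, n);
    computeWork<<<n / 256, 256, 0, s1>>>(b, n);

    cudaDeviceSynchronize();
    cudaStreamDestroy(s0); cudaStreamDestroy(s1);
    cudaFree(a); cudaFree(b);
    return 0;
}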
It's not even propaganda. He doesn't really know what he's talking about. Perhaps he thinks modern games spend half of the frame time blitting stuff around.
I was talking about the new Polaris primitive discard unit, not primitive discarding in general. As I mentioned earlier in this thread, conservative rasterization is the cornerstone of countless rendering algorithms, which is why it's such an important feature.
You are not reading the slides correctly. A GCN instruction runs for 4 cycles; on each cycle 16 vector lanes are processed, for a total of 64 lanes. This is the smallest unit of computation on GCN and it's called a wavefront. Wavelets are completely different objects :)
An NVIDIA SM's smallest...
It's not the same. GCN's logical SIMD width is 64 vector lanes, not 16. NVIDIA's SIMD width is 32 vector lanes, which makes it less likely than GCN to suffer from so-called SIMD divergence issues. On the other hand, it's highly unlikely this is what makes NVIDIA HW vastly more efficient than GCN. The...
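To see what SIMD divergence means in practice, here's a toy CUDA kernel (illustrative only, my own example): lanes of a 32-wide warp that take different sides of a data-dependent branch execute both paths one after the other, with inactive lanes masked off. A 64-wide GCN wavefront needs twice as many lanes to agree on the branch, so it's statistically more likely to get split:

// Toy divergence example: if in[] has mixed signs within one warp,
// the warp executes path A, then path B, masking lanes off each time.
__global__ void divergent(const float* in, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    if (in[i] > 0.0f)
        out[i] = in[i] * 2.0f;  // path A: lanes where the test is true
    else
        out[i] = -in[i];        // path B: the remaining lanes, afterwards
}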
Oh boy, ignorance is bliss. Those DX11 titles use NVAPI to provide support for features not supported on DX11, the very same features that are officially supported in DX12. Five seconds on Google is all you need to confirm those are DX12 features, but please, keep trolling.
Intel supports both...
Those features are NOT available on DX11; they were introduced in DX12, and GCN does not support them.
Conservative rasterization is a cornerstone of computer graphics and it's used in hundreds of algorithms. Saying that it's useless because it is used in some algorithm you don't like because...
Also AMD still doesn't support very important DX12 features like conservative rasterization and rasterizer ordered views. Unfortunately even Polaris doesn't support them.
Before making yet another long, tedious and ultimately subjective recommendation, I'd suggest you actually wait for the benchmarks.
If you want to continue reading tea leaves instead, it'd be better not to leave out VR apps from your considerations, where the gap between the 1060 and the 480 is likely to be...
I love the fact that when AMD advertises features that don't seem to exist in the shipping product, it's because they haven't enabled them yet, not because they are lying.
You see... we must understand AMD cannot possibly lie, they just don't enable. It's yet another thing they forgot to do before...
Most statements NVIDIA made about Pascal were true or quite close to the truth. AMD has lied left and right about the 480, but please go ahead and troll some more.
I trust NVIDIA's marketing perf numbers, which were spot on for the 1080 and 1070, much more than AMD's convoluted and nonsensical perf...
So all AMD has to do is remove support for fast double-precision math on their gaming-oriented GPUs? This is naive at best.
Clearly there is much more to NVIDIA's HW power efficiency that we don't necessarily know about and might never know.
Disowning? If it were NVIDIA, people (not saying you) would yell "lies!!". They delivered big time on the price front because they couldn't deliver on anything else. It was clear from moment one it was positioned this way because there was no other option. Of course there is nothing wrong with...
You don't understand, and you are drinking the Kool-Aid from AMD's slides. The "foveated rendering" they mention was pioneered by NVIDIA in their VR SDK, and it runs in a single pass on Maxwell without using the slow geometry shader path. The 480 doesn't support this in a single fast pass according...
Not true. AMD simply fixed an issue with their architecture that console developers work around in SW by doing culling in compute. It's common sense, and I am really surprised AMD HW did not support this before the 480.
There is nothing cheap about what AMD is doing, quite the opposite. It's smart to get rid of polygons that don't contribute anything to the image as early as possible in the pipeline, to avoid bubbles/stalls.
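For context, the SW workaround console developers use (mentioned in the previous post) looks roughly like the sketch below. It's a hedged illustration, the buffer layout and names are mine, and a real implementation would also cull against the view frustum, do small-primitive tests, etc.: a compute pass that drops back-facing and zero-area triangles and compacts the survivors' indices before the actual draw.

#include <cuda_runtime.h>

// Illustrative compute culling pass: discard triangles that cannot
// contribute to the image before the rasterizer ever sees them.
// Vertex positions are assumed to already be in 2D screen space.
__global__ void cullTriangles(const float2* pos,      // screen-space positions
                              const uint3* tris,      // one index triple per triangle
                              uint3* visibleTris,     // compacted survivors
                              unsigned int* visibleCount,
                              int numTris) {
    int t = blockIdx.x * blockDim.x + threadIdx.x;
    if (t >= numTris) return;

    uint3 tri = tris[t];
    float2 a = pos[tri.x], b = pos[tri.y], c = pos[tri.z];

    // Signed doubled area: <= 0 means back-facing or degenerate,
    // assuming counter-clockwise front faces.
    float area2 = (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
    if (area2 <= 0.0f) return;  // culled early: no bubble/stall downstream

    // Compact the survivors; visibleCount later feeds an indirect draw.
    unsigned int slot = atomicAdd(visibleCount, 1u);
    visibleTris[slot] = tri;
}

A fixed-function discard unit does essentially the same test in HW, earlier in the pipeline and without burning shader cycles.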
From that slide we can only tell that it supports what GCN already supports: assigning a different viewport to each polygon, which doesn't allow doing everything in one pass, since polygons can straddle multiple viewports. This is worse than Maxwell!
The 9 viewports method is simply AMD catching up...
Apple is a major supporter of open standards?!? How do you come up with this stuff? Even rocks know Apple is killing OpenCL in favor of compute on Metal and they are transitioning away from pretty much any open standard to their own APIs for gfx and compute. Bye bye OpenGL, OpenCL, etc.
Using out-of-order execution in massively parallel applications is not a very smart thing to do. OOOE speculatively extracts parallelism out of a serial program. It tends to be power hungry, expensive and complicated. It doesn't really make sense in real-time graphics, where parallelism is widely...
Perf/W directly translates into performance on power-limited parts (i.e. everything today). So it does matter, a lot! Every good developer already knows that in the vast majority of cases optimizing for performance or for power leads to the same decisions.
They solve nothing, because we know nothing about those features. Also, if a game throws visible pixel-sized polygons at the screen, no culling accelerator would help, especially given that even DX11 already allows culling patches straight in the shader.
From all the material leaked so far there is no mention of any new GCN4 feature like the primitive discard accelerator, improved color compression, etc. I find this rather strange, and I wonder if AMD is completely de-emphasizing the technical aspect. I really hope this is not the case.
Putting down $30 extra for an investment that is supposed to last 2+ years is a no-brainer. Whether games effectively use 8GB now is irrelevant, as they will use more and more memory.