AMD Mantle Currently Causes Washed Out Graphics Compared to Direct X in Battlefield 4
This would render every single benchmark done so far completely invalid.
Are you serious? The pics posted at WCCFtech: Mantle on the left and DX on the right:
DX is on the left (the image that is using Fraps) and Mantle is on the right.
Observe the two images below, both screenshots from Battlefield 4. The first one is from Mantle and the second from Direct X.
Agreed. When all the Mantle bugs are fixed, then we will have our actual results.
Are you really serious? He wrote it wrong, lol. Mantle doesn't use Fraps; you cannot use any OSD with Mantle, since it is not D3D. Ask the site, here is what they wrote:
There is also an image quality difference, too. I got so frustrated with people comparing different views that I took these. I'm sure they will eventually fix this, I hope, because Mantle is destroying the beautiful blue skies! :'(
That might not work. I know people who behave similarly in real life; it's best to leave them as they are, and this is the wrong forum to elaborate on that further. He at least creates motivation to bring up more undeniable facts. But that might be the plan, to keep people busy, hehe.
I think it's time for you to find another hobby.
You're constantly wrong. You're constantly negative. You've posted nearly 9000 times since April 2012 and you barely contribute anything worthwhile.
Give it a thought, would you?
Full explanation from a source that isn't that horrible WCCFtech:
www.extremetech.com/gaming/175998-battlefield-4-amds-mantle-causes-washed-out-foggy-graphics-compared-to-directx
I've seen reviews as recent as this month, and it's not reflected in them. Until the top sites we typically use reflect those numbers, it's a one-off fluke in my book. You can believe it if you want, but it doesn't make sense to me.
I have a question:
BF4 does a lot of DirectCompute work. Does DirectCompute work with Mantle, or are they converting it to OpenCL?
I think, from all the data we can see, there is something from a uarch perspective that is giving NVidia a big advantage that isn't seen in other games. People with agendas are then using that performance to try to claim Mantle is worthless vs. DX11.x/Win8.1/insert junk here, yet they can't explain 780 vs. 290X on a 7850K compared to 780 vs. 290X on a 4770K.
So the question is: what is causing that difference? I'm pretty sure Kepler supports some buffer formats at full rate that AMD only does at half rate. Kepler also has a setup-rate advantage, and that would align with NVidia's performance scaling on the 7850K vs. the 4770K. GCN and Kepler are similar-ish in their memory hierarchies, but given that memory bandwidth is pretty equal, I think that's less likely.
Any other ideas?
We've already gone over this several times in this thread. It's the drivers.
And it's not just BF4 either. Game engines that can utilize multiple threads give NVidia the advantage, as NVidia's drivers are better tuned for multicore processors.
Only two engines that I'm aware of can use 8 threads: Frostbite 3 and CryEngine 3. In both BF4 and Crysis 3, NVidia has a significant lead in CPU-limited scenarios using Direct3D. The Mantle pathway is still more efficient, though, and will no doubt be improved upon:
But it's too late for Mantle to gain a solid foothold in the industry. AMD needed Mantle to address their shortcomings under Direct3D, but I doubt they ever intended for it to become a replacement for Direct3D.
If Mantle failed when it came to developer adoption, at least it would shine a very bright light on the problem of multithreading in Direct3D and force Microsoft to come up with better ideas.
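The multithreading problem these posts circle around can be sketched in miniature: in a D3D11-style model, recording work can be split across threads (deferred contexts), but submission still funnels through one immediate context. This toy Python sketch is purely conceptual; the `record_commands`/`render_frame` names are invented for illustration and do not correspond to any real graphics API:

```python
import threading
from queue import Queue

def record_commands(draws, out):
    # Worker thread: record a "command list" for its slice of draws.
    # This is the part a D3D11-style deferred context parallelizes.
    out.put([("draw", d) for d in draws])

def render_frame(all_draws, num_threads=4):
    """Record draw calls on several threads, then submit serially,
    mirroring the D3D11 model: parallel deferred contexts feeding one
    immediate context, which remains the serialization point."""
    q = Queue()
    chunks = [all_draws[i::num_threads] for i in range(num_threads)]
    workers = [threading.Thread(target=record_commands, args=(c, q))
               for c in chunks]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    # Single-threaded submission: the bottleneck that thin APIs like
    # Mantle attack by making command buffers cheap to build and submit.
    submitted = []
    while not q.empty():
        submitted.extend(q.get())
    return submitted

frame = render_frame(list(range(1000)))
print(len(frame))  # 1000: every recorded draw reaches submission
```

However well the recording half scales, the serial submit at the end caps the benefit, which is why driver-side multithreading alone only goes so far.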
I'd ask for another site giving those numbers, but I know there isn't one, so...
No, you haven't, and no, it isn't... I notice that you haven't explained the bit I pointed out, either.
Pffft, he doesn't know what he's talking about... "Yeah, you can get some minor gains in a few applications, but it's typically stuff like eating *two* submission threads for barely double-digit performance improvements (in CPU-bound cases)... not really that exciting, and it doesn't scale at all beyond 2. It's clearly something that needs a fundamental rethinking, a la Mantle."
We all know repi is a lying shill :awe: "We support it, but it is fundamentally broken in DX. Check my slide #34 from 3 years ago:"
Drivers can get in the way of performance, but they don't magically create higher-than-peak performance as determined by the hardware. The simple fact is that Mantle scales so well on low-end CPUs when NVidia doesn't (magical DX multithreading working so well!), yet scale CPU performance high enough and they end up in front, while AMD stops scaling at not much more than a 7850K.
Some time ago, I did some testing with my Tahiti to determine some of its L2 properties. One of the things I noticed immediately was that Tahiti is already bandwidth-bound when you just alpha blend to a 4x16-bit float HDR render target, even if you are not sampling any textures at all (just writing solid-color triangles). Fast GDDR5, a wide 384-bit bus, a total of 264 GB/s of bandwidth, and a simple untextured blend is able to consume all of it. I roughly calculated that it would require more than 500 GB/s of bandwidth to reach full Tahiti fill rate in this scenario. So I split the viewport into 128x128 tiles and rendered each tile separately. This doubled the performance (reaching the maximum theoretical fill rate). A 128x128 tile of 4x16-bit float color + 32-bit depth is 196 KB in memory. It easily fits in the 768 KB L2 of Tahiti, and thus all the blending passes and depth reads/writes occur completely inside the L2 (no memory bandwidth used at all). It seems that cache optimizations are very important for this new breed of GPUs. Even simple things such as sorting objects by screen-space XY location (in addition to depth) could bring nice reductions in backbuffer bandwidth usage (= big performance gains for the bandwidth-starved APUs).
It's hard to say how the large L2 caches of GCN (and Fermi/Kepler) affect the performance of current-generation games. Current-generation consoles do not have large general-purpose GPU caches, so most developers have surely not analyzed their L2 cache behavior or changed their rendering methods to exploit the L2 caches of the recent hardware. Cache optimization was (and still is) one of the major performance improvements for CPU code. If you don't design around caches, you still get some gains from them, but the biggest gains require careful design. So I would expect to see the biggest gains in new games that are designed from the ground up for the new GPUs.
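The tile-footprint and bandwidth arithmetic in the Tahiti anecdote above can be checked with a few lines. The 32-ROP / 925 MHz figures and the 16-bytes-per-blended-pixel cost are my assumptions for Tahiti, not numbers from the post:

```python
# Tile footprint: 128x128 pixels of 4x16-bit float color (8 B/px)
# plus 32-bit depth (4 B/px).
tile_px = 128 * 128
tile_bytes = tile_px * (4 * 2 + 4)   # 196,608 B ~= 196 KB
assert tile_bytes < 768 * 1024       # fits inside Tahiti's 768 KB L2

# Bandwidth needed to alpha blend at full fill rate: assuming 32 ROPs
# at 925 MHz (~29.6 Gpix/s), blending a 4x16F target costs a read plus
# a write of each 8-byte pixel.
fill_rate = 32 * 925e6               # pixels/s (assumed figure)
blend_bw = fill_rate * (8 + 8)       # ~474 GB/s for color traffic alone;
                                     # depth read/write pushes the total
                                     # past the quoted 500 GB/s
print(tile_bytes, blend_bw / 1e9)
```

Either way, the required bandwidth is far beyond the 264 GB/s the memory bus provides, which is exactly why keeping the tile resident in L2 doubled the measured fill rate.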
I doubt it is related only to better threading; I would assume it is related to lower CPU overhead as well. Nvidia seems to handle high draw-call counts better than AMD under DirectX 11.
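The draw-call-overhead point can be made concrete with a toy model: if the CPU pays a fixed cost per draw call, the driver/API with the lower per-call cost wins precisely in the CPU-bound regime. All the costs below are made-up illustrative numbers, not measurements of any real driver:

```python
def cpu_frame_ms(draw_calls, per_call_us, dispatch_overhead_ms=1.0):
    """Toy model: CPU cost of a frame = a fixed dispatch overhead plus
    a constant cost per draw call. Real drivers are far messier."""
    return dispatch_overhead_ms + draw_calls * per_call_us / 1000.0

# Hypothetical per-call CPU costs in microseconds (illustrative only):
dx11_us = 25.0     # chatty API, per-call validation in the driver
mantle_us = 2.0    # thin API, state pre-baked into command buffers

for calls in (1_000, 10_000, 50_000):
    dx = cpu_frame_ms(calls, dx11_us)
    mtl = cpu_frame_ms(calls, mantle_us)
    print(f"{calls:>6} draws: DX11 {dx:7.1f} ms, Mantle {mtl:6.1f} ms")
```

Under this model, a slow CPU driving many draw calls is crushed by the high per-call cost while the thin path barely notices, which matches the observation that Mantle's gains are largest on low-end CPUs and shrink as the CPU gets faster.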