Io-tech.fi ran some tests with different memory settings. Included are speeds from 2933 MHz up to 3600 MHz, plus The Stilt's 3466 MHz settings that are included in the Crosshair VII Hero's BIOS. The article is in Finnish, but there's no need to translate anything, as the pictures tell everything...
They did improve nicely with memory for sure. But if you look at that last review, it shows that there are still huge gains to be made by setting subtimings manually. Ashes of the Singularity didn't get the ~30% boost we've seen tighter subtimings deliver; faster memory alone hardly made any difference...
Even if Intel could magically summon a fast GPU for gamers, moving to such a card would be risky for consumers, as Intel has a really horrible reputation when it comes to their drivers. And unfortunately that reputation changes very slowly.
Even if Intel ends up bringing great GPUs to market, even for gaming, they will have to fight the driver battle. AMD has been fighting for a long time to clear its name over bad drivers, and that past still haunts it even today. Intel's drivers have an even worse reputation...
Well, is Skylake-X's "poor" gaming performance really from the changed cache layout, or is it simply that they currently have slow thread-to-thread communication latencies (which can be lowered by overclocking the cache)?
Off-topic. I really hate to whine, but would it be possible to add spoilers in...
It could be because core-to-core ping times are rather slow. Moving from Ring to Mesh seems to be causing problems, especially in games. In Ryzen's case we learned that when certain threads were running on different CCXs, there was a very clear performance penalty. At least in Zen's case, latency...
Some more impressive scaling with fast memory and low latencies in Ashes of the Singularity's CPU-focused test.
https://bbs.io-tech.fi/threads/amd-ryzen-7-am4-b350-x370-kellotukset-ja-kokemukset.14849/page-54#post-1016073
Summary of the most critical clock speeds:
2400 MHz: 40.4
3200 MHz: 45.2
3200...
If you look at the test posted on the previous page, you can see things being completely opposite. In all honesty, different applications and games react differently to changes, and there's no blanket answer as to which one is more important. Best, of course, is more bandwidth with lower latency.
True, these few individual tests are all we've got so far. Hopefully when new BIOS versions with the new AGESA come out officially, we'll see some sites actually retest Ryzen memory.
Here is another test, this time from Witcher 3 by Keketin...
The Stilt's point with the graph was to show the balance between bandwidth and latency. You can clearly see latency makes more difference than bandwidth. These LL data sets also have subtimings changed that weren't possible before the latest AGESA updates, AFAIK.
Yeah, but keep in mind that the scaling is the same in scenes where Ryzen was single-thread limited and couldn't reach 60 fps. We're pretty much seeing 25% scaling from the 2400 MHz launch timings to 3200 LL. So in a scene where you had 50 fps before, you would get over 60 now. Of course not all games...
Careful when making assumptions across games before actually checking their threading details: which thread is bottlenecking, and when. If your game is bound by the main game-logic thread, bigger driver overhead won't have a real impact on fps, since you're still bottlenecked by the main game thread...
They have absolutely no reason to sabotage Ryzen's parade on purpose. AMD's CPU department isn't their direct competitor, and Ryzen sells very well without their optimized drivers. If they can't get their drivers working well with Ryzen, it will only mean more Vega sales in the future. No...
PCPer tested memory speed effects on ping times between cores across CCXs.
https://www.pcper.com/reviews/Processors/Ryzen-5-Review-1600X-and-1500X-Take-Core-i5/CCX-Latency-Testing-Pinging-between-t
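The idea behind that kind of CCX latency test can be sketched as a simple ping-pong microbenchmark: pin two threads to two cores and time how long a round trip of a token takes. The sketch below is a hypothetical Linux version using Python events; the GIL and event overhead dwarf the actual hardware latency PCPer measured, so treat the numbers as illustrative of the method only.

```python
import os
import threading
import time

ROUNDS = 1000

def ping_pong(core_a: int, core_b: int) -> float:
    """Average round-trip time (seconds) between threads pinned to two cores."""
    a_turn, b_turn = threading.Event(), threading.Event()

    def responder():
        os.sched_setaffinity(0, {core_b})  # pid 0 = the calling thread (Linux)
        for _ in range(ROUNDS):
            a_turn.wait()   # wait for the "ping"
            a_turn.clear()
            b_turn.set()    # send the "pong"

    t = threading.Thread(target=responder)
    t.start()
    os.sched_setaffinity(0, {core_a})
    start = time.perf_counter()
    for _ in range(ROUNDS):
        a_turn.set()        # ping
        b_turn.wait()       # wait for pong
        b_turn.clear()
    elapsed = time.perf_counter() - start
    t.join()
    return elapsed / ROUNDS

if __name__ == "__main__":
    second = 1 if os.cpu_count() > 1 else 0
    print(f"avg round trip: {ping_pong(0, second) * 1e6:.1f} us")
```

On real hardware you would sweep every (core_a, core_b) pair; the cross-CCX pairs are the ones whose latency should drop as memory (and thus fabric) clock rises.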
Yes, but this is a bit out of context. The recent discussion was that Nvidia's driver doesn't work that well under DX12 (especially on Ryzen). If you compare an i7-7700K and a Ryzen, you are testing the driver stack under different conditions (8 fast threads versus 16 slower threads). Same with...
I just can't understand why they are all benchmarking Ryzen against the i7-7700K when they should benchmark it against the i7-6900K. That's the only valid comparison if you really want to see how the driver stack works in a multithreaded environment where you actually have room for all those threads.
Yes, though I wouldn't be surprised if their DX11 driver overhead on Ryzen were bigger than on Intel counterparts; after all, their code is heavily optimized for Intel. And it's very likely that AMD will also still find extra efficiency. You don't tune to 100% efficiency in a few months.
Digital Foundry's benchmark shows that Nvidia's DX12 works just fine with the i7-6900K; it's actually faster than the i7-7700K in RotR:
4:04 RotR.
So the problem seems to be mostly with Ryzen.
This of course would mean that all those 6 threads would have to be heavy. Witcher 3, for example, runs 16 threads, but most of them are rather light.
For example, audio threads are always very light in games.
I already mentioned you can do it with Process Hacker.
http://processhacker.sourceforge.net
Go to the application's properties and find the Threads tab. Or simply press the Threads button in the top menu after selecting the application. Setting affinity works the same way as setting it for apps.
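What Process Hacker does through the GUI can also be done programmatically. A minimal sketch of the same idea on Linux (where `os.sched_setaffinity` with pid 0 pins the calling thread, not the whole process; on Windows the equivalent would be `SetThreadAffinityMask`):

```python
import os
import threading

def run_pinned(target, cpu: int, *args) -> threading.Thread:
    """Start `target` on a new thread pinned to a single logical CPU."""
    def wrapper():
        os.sched_setaffinity(0, {cpu})  # pid 0 = this thread only (Linux)
        target(*args)
    t = threading.Thread(target=wrapper)
    t.start()
    return t

def report():
    print("worker allowed on CPUs:", os.sched_getaffinity(0))

run_pinned(report, 0).join()
# The main thread's affinity mask is untouched; only the worker was pinned.
print("main thread allowed on CPUs:", os.sched_getaffinity(0))
```

This is exactly the distinction discussed below: pinning one thread versus setting affinity for the whole process.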
Are you really setting affinity for individual threads, or simply forcing the whole application to run on those cores? I'm talking about pinning specific game threads to specific cores separately. If you just set affinity for the application, the Windows scheduler can still bounce them around the CCXs.
Looncraz, could you possibly run some benchmarks with custom affinities where you bind threads to different cores? You can do that easily with Process Hacker.
I'm especially interested in seeing the performance uplift when you bind the game's main thread and the driver thread to one CCX.
http://wccftech.com/amd-ryzen-performance-negatively-affected-windows-10-scheduler-bug/
Surprised that wccftech only had an article about the issue now, when we had the discussion here a week ago?
The Dota 2 Vulkan test looks like... well, something isn't really right... right?
Can we even be sure that affinities 0-7 are for CCX1 and 8-15 are for CCX2? They could be 0-3 & 8-11 for CCX1 and 4-7 & 12-15 for CCX2.
Or even a rotating layout: 0-3 for CCX1, 4-11 for CCX2, and 12-15 again for CCX1.
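The practical consequence of that ambiguity is which affinity bitmask actually keeps a thread inside one CCX. A small hypothetical illustration of the two layouts mentioned above (the CPU index sets are assumptions, not a verified Ryzen mapping):

```python
def mask_for(cpus) -> int:
    """Build an affinity bitmask from a set of logical CPU indices."""
    mask = 0
    for cpu in cpus:
        mask |= 1 << cpu
    return mask

# Layout 1: CCX1 = logical CPUs 0-7 (contiguous)
linear_ccx1 = mask_for(range(0, 8))                               # == 0x00FF
# Layout 2: CCX1 = logical CPUs 0-3 and 8-11 (interleaved)
interleaved_ccx1 = mask_for(list(range(0, 4)) + list(range(8, 12)))  # == 0x0F0F
print(hex(linear_ccx1), hex(interleaved_ccx1))
```

If you assume the contiguous layout but the interleaved one is real, a "one CCX" mask of 0x00FF would actually straddle both CCXs, which is exactly the failure mode being worried about here.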
I would like to see how many threads different games actually run. Games have different ways of detecting thread counts: a game might create 16 threads for a 6900K but only 8 threads for an R7.
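Counting a running game's threads is easy to check yourself. Process Hacker shows this on Windows; a hedged Linux sketch of the same check reads the thread list from `/proc`:

```python
import os

def thread_count(pid: int) -> int:
    """Number of threads a process currently runs (Linux: /proc/<pid>/task)."""
    return len(os.listdir(f"/proc/{pid}/task"))

# Inspecting our own process as a stand-in for a game's PID:
print(thread_count(os.getpid()))
```

Sampling this while a game loads and during play would show whether it sizes its thread pool to the detected core count, as speculated above.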
I would really love to see more testing done with specific affinities. And I don't even mean general affinity, but thread-specific affinity: run all 8 cores and 16 threads, but bind threads to specific cores (especially critical threads like the game's main thread and the driver thread).
I don't really think it's a question of whether Ryzen scales with memory bandwidth, but rather whether the game scales with memory bandwidth. It's the same with Intel's CPUs.
Here is memory scaling on the 1800X in Arma 3.
source:
https://www.io-tech.fi/artikkelit/ylikellotustesti-ilmalla-ryzen-7-1800x-1700x/
What does it actually mean in real use when Windows sees the caches wrongly (as seen in Coreinfo)? Someone with more technical knowledge could expand on this a bit.
That memory bandwidth efficiency is incredible though.
http://www.guru3d.com/articles_pages/amd_ryzen_7_1800x_processor_review,13.html
High latency might cause problems though.