Paratus
I'm beginning to feel better about maybe picking up an RX Vega 56.
It seems that there's a substantial CPU limitation on NVIDIA cards at both 1080p and 1440p, (compare the 1080 Ti's advantage at those resolutions to its advantage at 4K) so does this mean that Vega has less CPU overhead than Pascal cards?
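The comparison suggested above (a card's advantage at low resolutions vs at 4K) can be turned into a rough rule of thumb. A minimal sketch, with made-up fps figures purely for illustration, not numbers from any benchmark in this thread:

```python
def likely_cpu_bound(fps_low_res, fps_high_res, tolerance=0.05):
    """If the frame rate barely drops as resolution (GPU load) rises,
    the CPU, not the GPU, is probably the bottleneck at the low res."""
    return (fps_low_res - fps_high_res) / fps_low_res <= tolerance

# Hypothetical numbers: a card that loses almost nothing going from
# 1080p to 1440p is CPU-limited there, not GPU-limited.
print(likely_cpu_bound(120.0, 118.0))  # prints True  (CPU-bound)
print(likely_cpu_bound(120.0, 90.0))   # prints False (GPU-bound)
```

If the 1080 Ti's lead over the 1080 only shows up at 4K, this is the pattern you would expect.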
BTW, a Turn10 developer stated that the reason for the 100% CPU usage on one or two cores is to reduce input latency, so it appears to be a design decision. But while multithreading might induce higher input latency, if done properly the penalty should be very minimal and be a nice trade off for a massive increase in framerates.
Hardly a good excuse when the Xbox One version runs at 60 FPS on a CPU with very low single-threaded performance, so that version is highly threaded. Also, as far as I know, Forza Horizon 3 was just like this game, with most of the load on one thread; they later patched it, and the game now has proper MT support with good scaling and runs way better... It really makes no sense.
No, it's lots of load on one CORE. They actually let one core run as many threads as it can before starting to send threads to the second core and beyond, and they let the thread with the most important work run at a higher priority than the rest, so that its workload is finished the quickest. I guess for them it makes sense, since desktop CPUs are way faster than the console's, so the input-lag benefit probably seemed worth it; it crapped things up for everyone else, though.
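The scheduling idea described above, where the most important job on a core is always serviced before less critical work, can be modeled as a simple priority queue. A toy sketch; the job names and priorities are invented for illustration:

```python
import heapq

def run_core(jobs):
    """Drain jobs in priority order; lower number = more important.
    Models one core servicing its highest-priority work first."""
    heapq.heapify(jobs)
    order = []
    while jobs:
        _, name = heapq.heappop(jobs)
        order.append(name)
    return order

# Hypothetical workload: the input/simulation job outranks everything
# else on the core, so it always completes first.
jobs = [(2, "audio"), (0, "input+simulation"), (1, "render-submit")]
print(run_core(jobs))  # prints ['input+simulation', 'render-submit', 'audio']
```

The downside, as noted above, is that everything lands on one core: work that could run in parallel on other cores waits in line instead.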
"Funny how this is a DX12 game by Microsoft, and it's exactly the opposite of what MS and DX12 promised to achieve."

This is exactly what they promised.
Forza 7 seems to love the Vega architecture. Surprisingly, AMD GPUs perform relatively worse at higher resolutions like 4K. Still, there is room for optimization, I guess. Nvidia also has some optimization work left in this game: the gap between the 1080 Ti and the 1080 is too small and indicates there is room for performance improvements. I still think Vega launched with very poor launch drivers. I am looking forward to the yearly major driver release in December to see if there are driver-based performance improvements for Vega across a wide range of games. RTG's execution in 2017 has been abysmal. Hopefully their 2018 execution is much better; otherwise Nvidia will crush AMD GPU sales even more with the Volta lineup.
"AMD can't even produce enough cards to sell, even with all of the negative chat about Vega - they are selling out every card."

This is not necessarily impressive. If they were only able to produce 1,000 cards, it would be easy to sell out. If Nvidia made 100,000 and sold 90,000, that's not "sold out", but guess who's doing better. The argument is entirely relative to how many cards were produced.
When the RX 580 is pushing better minimum fps than the 1080 Ti, you know there is some work yet to be done on NVIDIA's side.
And where do you see it in Forza 7? The DX12 path is broken for nVidia users. Forza 7 runs one core (and maybe a second) at 100% utilization. This is bad programming, because it will result in stuttering and leaves no headroom for unexpected jobs.
DX11 games like Project CARS 2 and F1 2017 show a more "DX12-like" utilization pattern than this game does...
"Or AMD's architecture is superior at DX12, because it is. We saw the same thing with BF4 and Mantle. The minimums were 80% higher than DX11. In fact, the minimums are where the next-gen APIs truly shine. It's just a smoother overall experience."

LOL, that's just nonsense; no amount of architecture can make up that much difference, you are just grasping at straws as usual. And no, DX11 on NV was faster than even AMD's Mantle in BF4.
"So Nvidia has 'CPU overhead' now?"

Yeah, could be in this game only! Rest assured either the game or the driver will be updated to fix this; NVIDIA actually has a marketing deal with Forza's developer.
"Nvidia released their Forza 7 drivers a week ago."

Don't know why you are so hung up on this particular point. NVIDIA relentlessly optimizes their drivers and never stops; they optimized Hitman's DX12 long after its release, and it's now slightly faster on the GTX 1080 vs Vega 64.
http://www.hardware.fr/articles/968-8/benchmark-hitman.html
"Don't act like AMD doesn't keep optimizing either. I didn't mention that because it's obvious that both companies do it. What matters is what we are discussing right now, on the data we have right now. That should be obvious."

Yeah, what matters is that you like to talk about anything other than the data we have now: some fantastical gains and driver improvements for Vega that have yet to materialize. I am not the one entrenched in that meme!
"GCN is quite literally built for these next-gen APIs. It's exactly why it generally can't beat Nvidia in DX11 even though it's much wider at every tier."

Again, that's just nonsensical crap! There is no such thing as an architecture "for next gen". GCN trailed Maxwell and Pascal even on basic features and power efficiency; it had no memory compression, tiled rasterization, or polygon throughput comparable to NVIDIA's, and it's a power hog of untold proportions. None of that is next-gen material.
"GCN+DX12 is why it can deal with this game's CPU utilization, and with other games that have high back-pressure async compute implementations. Nvidia just starts thrashing at certain thresholds. That's what we are seeing."

Nope, Forza 6 and Forza Horizon 3 had NVIDIA beating AMD quite easily; what we are seeing in Forza 7 is just an anomaly.
"So wait for unannounced future drivers, after the 'game ready' drivers they already released?"

Yep, that's what the example of Hitman I posted above serves for: NVIDIA optimizes long after release if need be. Want another example? Ashes of the Singularity! That game couldn't be any more favorable to AMD, yet NVIDIA competes quite nicely there. Again, Forza 7 is just an anomaly that WILL be corrected soon, because that's what NVIDIA does.
Any benchmark data to back up this claim? Turn 10 fixed the CPU hog bug well after release, so I'm not really interested in benchmarks conducted just after the game's release. I tested a GTX 1060 vs an RX 580 this spring and the AMD card was faster by ~5%.
This. People are reporting GTX 1070s running at only 80% utilization. The developer has stated they intentionally load up a single core, which is exactly what you aren't supposed to do in the DX12 model.
Developer screwed the pooch on this one.
People like you said the same about Hitman previously.
You are here to defend Nvidia and Intel on these forums. That's so painfully obvious.
Ad hominem much?

From the developer:

"Some users may notice that the game utilizes nearly 100% of one of their processor cores. This is expected behavior; we intentionally run in this manner so we can react as fast as possible in order to minimize input latency. Users on power-constrained devices, such as laptops and tablets, might want to use a Performance Target of '30 FPS (V-SYNC),' which will reduce processor usage and minimize power consumption."
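The trade-off in that developer note can be sketched in a few lines: a busy-wait loop reacts to input with minimal latency but pins a core at ~100%, while a frame-capped loop sleeps off the spare time and yields the core. A toy illustration, not code from the game; the function name and numbers are invented:

```python
import time

def frame_capped_loop(target_fps, frames):
    """Run a fixed number of frames, sleeping off any time left in each
    frame budget instead of busy-waiting. Returns total elapsed seconds."""
    frame_time = 1.0 / target_fps
    start = time.perf_counter()
    for _ in range(frames):
        frame_start = time.perf_counter()
        # ... simulate + render would happen here ...
        elapsed = time.perf_counter() - frame_start
        if elapsed < frame_time:
            time.sleep(frame_time - elapsed)  # yields the core to the OS
    return time.perf_counter() - start

# 10 frames at a 30 FPS cap should take roughly a third of a second,
# with the core mostly idle; a busy-wait version would finish the same
# frames while burning 100% of the core the entire time.
total = frame_capped_loop(30, 10)
```

This is essentially what the "30 FPS (V-SYNC)" performance target trades away: worst-case input latency grows by up to one frame budget, in exchange for far lower CPU usage and power draw.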