R9 380 equals R9 390?? Drivers??
380/960 benched at 1080p, others benched at 1440p/4k.
Green = 1080
Red = 1440
Blue = 4k
EDIT:
This is getting tiresome. If AMD uses its strengths, that's good (to you). If NV does, it's bad (to you). Because reasons.
I get it. People don't like the short end of the stick. The exact same thing is happening with Async Compute, just with the roles reversed. Yet, it's "apples to oranges, derps!!!!"
/facepalm
I'd love to see both companies duke it out using their strengths. And we're getting there.
EDIT:
How ironic. In another post you tell me "oh, it's possible" to defend your stance, but here it's "I don't think it's possible."
$599 for custom cards, and $379 for a (custom) GeForce GTX 1070. And sorry, I am not impressed, considering this is clearly yet another AMD showcase title and NVIDIA hasn't released drivers optimized for the game yet.
They are middle-of-the-road cards.
960, 970, 980, 390, 390X, 380. Missing the 380X.
I meant whatever the latest Doom game is (I thought it's called 3, or is it 4? lol, sorry, I'm out of the gaming scene nowadays).
I'm not sure, but it looks like possible thermal throttling on the 1080. Performance might be more consistent with better cooling (or maybe optimized drivers), but the Founders Edition reference cooler doesn't seem to be doing the job.
Where are these mythical custom cards? All we know for sure is reference cards for $100 more. Until they materialize, the 1080 is $699. And until it's benched, we don't know anything about the 1070.
You can be skeptical about the lack of optimizations on nVidia's part. That's fine. But everyone who wants to be honest knows AMD doesn't do anything to gimp nVidia's performance. And if there is any AMD code in the game, all nVidia has to do is shoot on over to GPUOpen, grab it, and optimize their drivers.
The 380 is beating the 980, and that's not even the 380X. And considering the 980 is EOL, who really thinks we are going to see nVidia optimize for it?
The 380 is embarrassing the 980 badly enough.
There is a huge ethical difference between overloading a game with unneeded tessellation (wasting energy in the process) and leveraging a technology like Async Compute to offload the CPU and lower game latency.
Compute shaders are a good idea; invisible or excessive tessellation has no benefit.
Careful, it's a really hard-to-read/misleading chart; there's no 380 vs 980 comparison there - the colours represent different resolutions.
Did I miss something? This looks amazing for a Total War game. The units with randomized details are wicked.

Another Gaming Evolved title, another graphical disaster. What is up with these AMD-sponsored games and their outdated graphics: https://www.youtube.com/watch?v=WAu7945zfPc
Anno 2205 looks much better and runs much better. Maybe these developers should care more about PC gamers and less about bribing.
You do know that Gaming Evolved is just a marketing name, right? It has nothing to do with GameWorks.
They could just say "it's DX12" and still be right, since DX12 naturally favors AMD.
DX12 favors AMD because their hardware is better designed for it. Unlike the feature set 12_1 hype and the "we've been working on DX12 with MSFT for 6 years" lies.
AMD either guessed the future better than nVidia when they were designing their uarch years in advance, or they knew what was coming and nVidia didn't. Odds are it's the second, and they were working on what we see now as DX12 before nVidia. Could be because of the Xbox One?

Well, assuming that GCN is tailored perfectly for DX12, you can't really say "they guessed"; it's being built around GCN.

Consoles have been about multicore and parallel execution for a long time. My guess is that GCN came from the work AMD has been doing with Sony and/or Microsoft for PlayStation and Xbox. Those guys also required a low-level API, and it's probably how Mantle came about as well.
So long as you can turn off whatever the offending feature is to get both vendors to the same baseline, I say game on, AMD/NV. I confess, after years of seeing AMD play Mr. Nice Guy (aka the technically inferior poor guy), I am heartily enjoying all the whining about "AMD-slanted games".
Let's not kid ourselves with the whole "Taking the high road" thing either. AMD finally has its ducks in a row and can turn the screws on NV. Is this Lisa "We're not a value brand" Su's doing? Whatever, I hope to see it continue.
NV has been playing chess while AMD has been playing checkers, nice to see them both playing the same game.
Do you honestly not see a difference between a feature that creates geometry that goes undisplayed but still has to be rendered (over-tessellation) and a feature that lets you better utilize otherwise idle parts of the GPU while it does other work?
One is only there to harm performance.
The other is there to add performance.
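To put that in concrete terms: at the D3D12 API level, "async compute" is nothing more exotic than submitting work on a second, compute-only command queue next to the graphics queue and letting the hardware overlap the two where it can. A minimal sketch (purely illustrative, not code from the game; error handling omitted):

```cpp
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main()
{
    // Create a device on the default adapter (error handling omitted for brevity).
    ComPtr<ID3D12Device> device;
    D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));

    // The usual graphics queue.
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ComPtr<ID3D12CommandQueue> gfxQueue;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));

    // A second, compute-only queue. Command lists submitted here can run
    // concurrently with the graphics queue on hardware that schedules the two
    // independently, filling shader units the graphics work leaves idle.
    // On hardware/drivers that serialize the queues the same code still runs
    // correctly, it just doesn't overlap.
    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    ComPtr<ID3D12CommandQueue> computeQueue;
    device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));

    // Where the workloads actually depend on each other, the two queues are
    // synchronized with an ID3D12Fence (queue->Signal / queue->Wait).
    return 0;
}
```

The point being: worst case the extra queue buys you nothing; it doesn't take anything away from cards that can't overlap it.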
Blame Nvidia for not following core DX12 specs and claiming they support it.
I see Maxwell claiming it has Async Compute support; ask Nvidia where the drivers are to enable it.
https://twitter.com/PellyNV/status/702556025816125440
Stop blaming developers for using core DX12 features to make their game run better. That is the whole point of using DX12: optimization!
It's not the exact same thing. With Async Compute there is at least a long end to the stick, and the people who have it get better performance. With NV's tessellation implementations that I and others are unhappy with, including locked sub-pixel tessellation factors or tessellated invisible water, there is only a short and a shorter end to the stick, and nobody wins. Everybody gets decreased performance for no in-game benefit.
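To make the sub-pixel point concrete, this is roughly the clamp an engine can apply when picking tessellation factors. A rough sketch with made-up names and constants; in a real engine this math would sit in the hull shader's patch-constant function rather than C++:

```cpp
#include <algorithm>

// edgeLengthPixels: projected length of the patch edge on screen.
// targetPixelsPerSegment: desired length of each tessellated segment in pixels.
float EdgeTessFactor(float edgeLengthPixels, float targetPixelsPerSegment = 8.0f)
{
    // Subdivide the edge into segments roughly targetPixelsPerSegment long.
    float factor = edgeLengthPixels / targetPixelsPerSegment;

    // Never create more segments than the edge has pixels; beyond that every
    // extra triangle is sub-pixel and pure overhead with no visible detail.
    factor = std::min(factor, edgeLengthPixels);

    // Factor 1 = no subdivision; 64 is the D3D tessellator's hardware maximum.
    return std::clamp(factor, 1.0f, 64.0f);
}
```

A fixed tessellation factor with no clamp like this is how you end up paying for triangles smaller than a pixel, or for water that is never even on screen.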
In another post I said what was possible? Which thread or which topic? A little context might help me understand why you think my posts contradict each other.
If my understanding of how Async Compute works is wrong, I'd like to understand why. I'm certainly not an expert, and I don't try to portray myself as one. That's why I phrased my response with "I'm not sure that it's possible." I am genuinely not sure, but that is how I understand it at the moment. If my impression of how Async Compute works is wrong, I'd like somebody to explain why it's wrong and how it does actually work.
@railven
You're trying too hard. Calling for Async Compute to be disabled is like asking for CPU comparisons to only ever use two cores.
Face it, Nvidia is behind on tech.
Good to see AMD get more devs on their side. They've got to build up a good library to start changing perception among regular gamers.
They need to get into more popular games, though. Hopefully Deus Ex turns some more heads for them. NV is running amok uncontested.
Yeah, I don't get it either. In the end, both technologies are used to make it possible for us to have nicer-looking games with any given card/engine. Both can be misused or used inefficiently if no thought is put into the implementation.