Maverick177
Well, NV has a massive user base; anything that keeps GeForce cards viable long term only causes NV to make less money. Cash grab, yo.
I doubt Maxwell will be gimped so much as compute-limited. Even in AMD-biased games, Maxwell seems to fall in line with GCN according to raw compute (the 280X closing in on the 970 due to having similar compute capabilities).
I believe games right now are utilizing Maxwell fairly well, so adding more brute force is probably the main focus behind Pascal. We'll see how that works out. Without ACEs it will probably be difficult to maintain peak efficiency in all situations, but if they can utilize the chip effectively without them, they could simply out-brute Polaris now and work on ACEs and other such additions later.
One other thing is the bad PR Nvidia has been getting lately. Enthusiasts are already fully aware of Nvidia GPUs losing steam against their competitors, not to mention the memory issue in the 970 and now the poor drivers; it would be dumb to recommend an Nvidia card at all unless CUDA were a necessity. I'm not sure Nvidia can hold back again this time and come away unscathed.
It's not like that at all.
NV's architectures were made to excel in the now, and the now is what matters most to people deciding which GPU to buy: they look at current benchmarks and see NV on top. They don't think about how the GPUs will stack up a year or two down the road.
Thus NV's architectures give them the dominant position; it's the winning design. It also promotes more frequent upgrades, which generates more revenue for NV. It's a good strategy.
That's two sides of the same coin. NV knows what's here now, while AMD can tell themselves that their designs are what games are going to want in the future. There are advantages to both.
Not for company profits, NV's approach is clearly superior for that.
AMD's approach is what resulted in my answer to guskline when he asked me whether I'd upgrade to Polaris 10 from my R9 290X: no, I won't, because the 290X is still kicking ass and I only paid $250 for it brand new.
That isn't good for AMD's bottom line, it's only good for me.
Also, we have to remember that AMD doesn't have the resources to replace their GPU lineup generation after generation, technology-wise. That's mainly down to their financials, and potentially their engineering resources too. nVIDIA, on the other hand, can keep doing this unless their financials take a huge hit quarter after quarter.
They've been milking the GCN architecture for a very long time now with many rebrands, and they're prolonging the life of these cards by other means: APIs that suit their architecture better, console wins, etc. I mean, if AMD had driver optimizations on the same level as nVIDIA's or better, would they really have pushed Mantle the way they did? I don't think so.
Tell me, though: if there were a clear upgrade path from both vendors, which would you go with?
And this isn't meant to be a loaded question. I just don't want to make it too leading.
No hesitation, AMD. Because I prefer for my gaming dollar to go as long as possible and AMD has proven GCN keeps on keeping on.
But the majority aren't as informed, may not hold that view, and/or don't keep their GPUs that long.
If by milking you mean that AMD pushed Mantle to cover for weaker drivers: wrong.
Game developers like DICE's Johan Andersson had asked for this for years; AMD simply built them the framework they asked for.
AMD created a revolution for gamers: DX12 and Vulkan, with Mantle as the base code for this new evolution of PC gaming, all running on AMD GCN tech. And we already know DX12 runs better on AMD hardware.
Why optimize for a broken API like DX11?
Do you repair your cellphone or just buy a new one?
Mantle from AMD created a new era for PC gamers.
They listened to what game developers wanted and gave it to them.
https://twitter.com/Battlefield/status/727241279734726656
I wonder, could BF5 be the first true Vulkan game? It seems like both companies are advertising it.
J. Andersson was saying they're trying to remove DX11 very aggressively by the summer, and that statement was from early April, so who knows. (He also said that bringing games over to Vulkan on other systems is proving difficult because they need to rewrite for the old WDDM system.)
I doubt it will be a Vulkan showcase; Vulkan still seems pretty fresh and needs some work from the API and driver perspective.
I do see it going DX12 at launch with a Vulkan patch later for Win7/8, but I doubt we'll see it on Linux. I also don't see them dropping DX11 support in favor of Vulkan-only, given the lack of support for older cards.
That said, DICE has always pushed the latest tech hard; they've been the reason for most of my upgrades, since I needed a newer video card to play. So I'd love to see them continue that and go Vulkan/DX12-only, but I don't see them doing it because of the lost sales.
This future-vs-present philosophy is simply not substantiated by the facts:
Cerny is convinced that in the coming years, developers will want to use the GPU for more than pushing graphics -- and believes he has determined a flexible and powerful solution to giving that to them.
"The vision is using the GPU for graphics and compute simultaneously," he said. "Our belief is that by the middle of the PlayStation 4 console lifetime, asynchronous compute is a very large and important part of games technology."
Cerny envisions "a dozen programs running simultaneously on that GPU" -- using it to "perform physics computations, to perform collision calculations, to do ray tracing for audio."
"The time frame when we were designing these features was 2009, 2010. And the timeframe in which people will use these features fully is 2015? 2017?" said Cerny.
"Our overall approach was to put in a very large number of controls about how to mix compute and graphics, and let the development community figure out which ones they want to use when they get around to the point where they're doing a lot of asynchronous compute."
Cerny expects developers to run middleware -- such as physics, for example -- on the GPU. Using the system he describes above, you can run at peak efficiency, he said.
"If you look at the portion of the GPU available to compute throughout the frame, it varies dramatically from instant to instant. For example, something like opaque shadow map rendering doesn't even use a pixel shader, it’s entirely done by vertex shaders and the rasterization hardware -- so graphics aren't using most of the 1.8 teraflops of ALU available in the CUs. Times like that during the game frame are an opportunity to say, 'Okay, all that compute you wanted to do, turn it up to 11 now.'"
Sounds great -- but how do you handle doing that? "There are some very simple controls where on the graphics side, from the graphics command buffer, you can crank up or down the compute," Cerny said. "The question becomes, looking at each phase of rendering and the load it places on the various GPU units, what amount and style of compute can be run efficiently during that phase?"
AMD in a heartbeat. It's not even a question. Between the money I saved going with a FreeSync screen over its G-Sync equivalent (even though I got a refurb XR341CK, which cuts that gap to "only" $300) and the expectation that I can get another year out of AMD cards, I can step up a bracket for roughly the same expected outlay. That's going to make a much bigger difference than any theoretical advantage NV holds, and it's why I bought a FreeSync screen.