"Acoustics" are very much determined by power consumption.
Tell that to my WC without fans, and yes it doesn't even need them.
"Acoustics" are very much determined by power consumption.
They're hemorrhaging market share. How's that for forward thinking?
The 290/290X still hold up well against the 970/980 despite being an entire year older. Even more so at higher resolutions.
You really think they are even aware of power consumption? It's all about the brand and marketing.
But AMD isn't selling many 290/290X despite pricing them below the 970/980, and no, it's not because everyone loves Nvidia. People given a choice will pick the lower-power solution if the price is anything close. On top of that, you can bet a huge high-power card like the 290 costs a lot to make, more than the lower-power Nvidia chips. So AMD can't really afford to sell cards that cost more to make for less money; I'd say that's a pretty major problem.
Then there's the whole gaming laptop market, which is big these days and which AMD has no share of at all once you go above APUs (where there is barely any money to be made). Another major problem. The whole world is going lower-power, higher-efficiency these days; if you want to sell more than desktop GPUs, you need to nail that.
300W says that AMD can't compete on efficiency; if they could, they wouldn't be releasing new cards using that much power.
People given a choice will pick the lower power solution if the price is anything close.
Was never true for AMD when they had the more efficient GPUs.
But AMD isn't selling many 290/290X despite pricing them below the 970/980, and no, it's not because everyone loves Nvidia.
Could you please provide a link to sales figures of the 290/290X so that I could check it out?
People given a choice will pick the lower-power solution if the price is anything close. On top of that, you can bet a huge high-power card like the 290 costs a lot to make, more than the lower-power Nvidia chips. So AMD can't really afford to sell cards that cost more to make for less money; I'd say that's a pretty major problem.
First off, I stand by what I said earlier. If a person doesn't lock themselves into a line of cards simply for the name on the box, they look at price and performance. Power consumption would only be a checkmark to ensure their current power supply has enough headroom, unless they're planning to install the card in an HTPC or some other thermally constrained environment.
Then there's the whole gaming laptop market, which is big these days and which AMD has no share of at all once you go above APUs (where there is barely any money to be made). Another major problem. The whole world is going lower-power, higher-efficiency these days; if you want to sell more than desktop GPUs, you need to nail that.
Again, could you provide some facts to back up the "barely any money to be made" and "big gaming laptop market" claims? You can actually make more money selling a large number of small-margin items than selling a few large-margin items. There's no way to know for sure unless we have some hard facts.
300W says that AMD can't compete on efficiency; if they could, they wouldn't be releasing new cards using that much power.
We have no idea if 300W is efficient or not because we have no performance numbers to compare it to. A 300W card that is 60% faster than a 290X would be efficient to me.
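Putting rough numbers on that perf-per-watt point: a minimal sketch, assuming a 290X at roughly 290W board power (my assumption, not a confirmed figure) and the hypothetical 300W card being exactly 60% faster.

```python
# Rough perf/W comparison, purely illustrative.
# Assumed numbers: 290X normalized to 100 "performance units" at ~290W board power;
# the hypothetical new card is 60% faster at 300W.
r9_290x_perf, r9_290x_watts = 100.0, 290.0
new_card_perf, new_card_watts = 160.0, 300.0

eff_290x = r9_290x_perf / r9_290x_watts    # ~0.345 perf/W
eff_new = new_card_perf / new_card_watts   # ~0.533 perf/W

print(f"290X: {eff_290x:.3f} perf/W")
print(f"300W card (+60%): {eff_new:.3f} perf/W "
      f"({eff_new / eff_290x:.0%} of the 290X's efficiency)")
```

Under those made-up numbers, the 300W card delivers roughly 50% more performance per watt; the wattage alone says nothing about efficiency.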
That's why it's kinda odd to ask whether NV users would buy a 300W product; they most certainly would if it's made by NV and provides good flagship performance. I never understood the backlash against high-power-usage high-end cards. If someone only wants a 150-175W card, it's always out there in the market. Just imagine a 300W Maxwell card: you take the most efficient perf/watt architecture and scale it up. High-end gamers get what they want and others get to enjoy the 970/980. What's not to like? (...)
Also, as mentioned, we already know that TDP rarely matches real-world power usage. Cards like the 7970/7970 GHz/280/280X/780/780 Ti have similar or identical TDPs but different power usage. You can also have cards like the 970/980 with marketing-driven TDPs that have nothing to do with reality for 99% of retail after-market cards. This idea that TDP somehow equates to real-world power consumption has always been wrong, because TDP does not actually tell us the maximum power usage of an ASIC.
Maybe Maxwell at 28nm can't scale up enough to make a meaningful improvement over Kepler that would justify its development, unless of course NV is willing to make a huge ~550mm² chip targeted only at gaming and some niche compute market where DP doesn't matter, like that dual-GK104 Tesla (or was it a Quadro?), and leave the compute market to GK110 for some more time. Historically that has never happened, though; their biggest chips have always been dual-purpose, by which I mean ever since CUDA became a thing.

GK110 is already at ~550mm², and that's the practical limit for the size of ASICs in products that don't cost thousands of dollars. If they do cost that much, the upper limit is in the upper 6xx mm² range*, but I remember only Intel and IBM going that high, and that's for chips that started at $2,461 and up (not counting die-harvested chips).

Is Maxwell even more efficient per mm² than Kepler? Remember that GM204 is just 10% faster than GK110 and it's at ~400mm² already, and that's without the 896 DP shaders that GK110 has. With the same DP performance, do you think it would be smaller than GK110? It would have to have even better DP performance to justify it. Do you think that 150mm² is all that GK110 dedicates to DP? I think not. (Some rough perf/mm² math below the footnote.)
* Some ancient nodes had chips as large as 1000mm² fabricated on them.
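A quick back-of-the-envelope on that perf/mm² question, using the rough figures from the post above (~550mm² GK110, ~400mm² GM204, ~10% gaming gap); the 150mm² DP discount is just the post's guess, not a measured number.

```python
# Very rough perf-per-area comparison of GM204 vs GK110, illustrative only.
# Figures taken from the post above: GM204 ~400 mm^2 and ~10% faster in gaming
# than GK110 (~550 mm^2); DP-area guess is an assumption.
gk110_area_mm2 = 550.0
gm204_area_mm2 = 400.0
gm204_rel_perf = 1.10   # GM204 gaming perf relative to GK110 = 1.0

perf_per_mm2_gk110 = 1.0 / gk110_area_mm2
perf_per_mm2_gm204 = gm204_rel_perf / gm204_area_mm2
print(f"GM204 perf/mm2 advantage: {perf_per_mm2_gm204 / perf_per_mm2_gk110:.2f}x")

# If we discount an assumed 150 mm^2 of GK110 as DP-only area,
# the gaming-only comparison narrows a lot:
gk110_gaming_area = gk110_area_mm2 - 150.0
print(f"vs gaming-only GK110 area: {perf_per_mm2_gm204 / (1.0 / gk110_gaming_area):.2f}x")
```

With those numbers the headline advantage is about 1.5x, but it shrinks to roughly 1.1x if you grant the full 150mm² DP discount, which is exactly the open question in the post above.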
You see, that's not how it works.
You can have a card that draws 400W and it could still be the most efficient card out there.
Huge power consumption doesn't mean inefficiency.
If the 390X uses power-efficient HBM, which in turn makes it run lean, will this carry over directly to the 380X, which will probably sport old GDDR5 (possibly a refreshed Hawaii)? Because when it comes to AMD products, I expect the influence to go from the bad product to the good one; there is heavy counter-marketing going on to ensure that.
300W suggests they haven't got a hyper-efficient new architecture and are just throwing more power at the problem, which isn't going to work.
The 980 beats your 690, and the 780 Ti is practically on par. If it couldn't beat the 690, it would be trash.
The 980 is not faster than the 690 except in a VRAM-limited scenario.
In many scenarios the 980 and 690 can be close in performance. Some of the recent console ports are very dependent on having more VRAM, and this can give the 980 a real edge... sure. This is a result of the next-gen console ports not being optimized. VRAM is becoming more and more important.
The 690 can be VRAM-limited, but there are plenty of reviews out there that show it's not slower than the 980 on average. If it weren't for the VRAM limitation, the 980 wouldn't have much at all over a 690. But because of it, depending on the games you play, the GTX 980 might be worth upgrading to. Honestly though, I think anyone with a 690 would be better off waiting for big Maxwell or a 390X, because those would be more of a true upgrade, not a 980.
My thinking was to get a 970 and go SLI later. Surely 970 SLI will be very capable even compared to a GM200/390X.
I expect 970 SLI to be faster than those cards, but worst case... just as fast.
If you're doing high-performance desktop gaming, you're not power efficient. Period. If you want to game on 250W total, go buy a console. If you want to save power, buy LED lightbulbs. Unless you've replaced all your 60W incandescents with 6W LEDs, arguing about video card consumption is a disingenuous "do as I say and not as I do." 60W -> LED = 10x improvement in power efficiency. Even switching from CFL bulbs to LEDs doubles your power efficiency. I guarantee you use your lights more often than your GPU. GPU power consumption matters for figuring out what size power supply you should buy.
Not really; two GPUs can be more power efficient because they can be clocked lower, compare the 590 with the 480. And nowadays multi-GPU really scales very well; in the past few months I've only had one "issue": in DA:3, SLI suddenly stopped working because they added a windowed fullscreen mode and SLI only works in true fullscreen. Otherwise scaling is very close to perfect. See for yourself:
In the real world, where enthusiasts care about performance and not a negligible increase in annual kWh, I'd take a 300W monster chip if it puts up the performance numbers, every day of the week. I'd buy a 400W monster if they could cool it and it put up the performance numbers. I really hope they release a HUGE single-chip card so I can get tons of performance without having to resort to SLI or CF, with all the issues those bring.
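For what it's worth, the lightbulb-vs-GPU comparison from the quote above is easy to put rough numbers on; the usage hours and the ~165W comparison card below are pure assumptions for illustration.

```python
# Back-of-the-envelope annual energy; all usage figures are illustrative assumptions.
HOURS_PER_YEAR_LIGHTS = 4 * 365   # assume lights on ~4 h/day
HOURS_PER_YEAR_GAMING = 2 * 365   # assume ~2 h/day of gaming
NUM_BULBS = 10

def annual_kwh(watts, hours):
    return watts * hours / 1000.0

incandescent = annual_kwh(60 * NUM_BULBS, HOURS_PER_YEAR_LIGHTS)   # ~876 kWh/year
led          = annual_kwh(6 * NUM_BULBS, HOURS_PER_YEAR_LIGHTS)    # ~88 kWh/year
gpu_delta    = annual_kwh(300 - 165, HOURS_PER_YEAR_GAMING)        # 300W card vs an assumed ~165W card

print(f"Lighting savings (incandescent -> LED): {incandescent - led:.0f} kWh/year")
print(f"Extra energy from the bigger GPU:       {gpu_delta:.0f} kWh/year")
```

Under those made-up usage patterns, swapping the bulbs saves several times more energy per year than the 300W card costs over a smaller one, which is the quoted poster's point.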
If you really want to talk about efficiency: given that SLI and CF do not scale perfectly, reaching high performance levels via multiple cards becomes less and less efficient as you add cards. A single card that is equivalent to two weaker cards will almost always be more efficient from a power-use perspective. For example, if you want to hit max quality in Crysis 3 at 4K, you will either need 3-4 weak cards or maybe 2 fast cards. The 2 fast cards will be more efficient than 4 weak cards because of the terrible scaling of the 3rd and 4th GPUs.
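That scaling argument can be made concrete with a toy calculation; the per-card performance, power, and scaling factors below are invented for illustration, not measured.

```python
# Toy perf/W comparison: 2 fast cards vs 4 weak cards, illustrative scaling only.
# Assume each fast card = 1.0 perf at 250W, each weak card = 0.5 perf at 150W,
# with diminishing multi-GPU scaling per additional card.
def total_perf(per_card_perf, scaling_per_card):
    # scaling_per_card[i] = fraction of a full card the (i+1)-th GPU contributes
    return per_card_perf * sum(scaling_per_card)

two_fast  = total_perf(1.0, [1.0, 0.9])                  # good 2-way scaling
four_weak = total_perf(0.5, [1.0, 0.9, 0.6, 0.4])        # 3rd/4th GPU scale poorly

two_fast_watts, four_weak_watts = 2 * 250, 4 * 150

print(f"2 fast cards: {two_fast:.2f} perf @ {two_fast_watts}W "
      f"-> {two_fast / two_fast_watts * 1000:.2f} perf/kW")
print(f"4 weak cards: {four_weak:.2f} perf @ {four_weak_watts}W "
      f"-> {four_weak / four_weak_watts * 1000:.2f} perf/kW")
```

With those assumptions the two fast cards come out well ahead on perf per watt, even though each weak card looks more "efficient" on paper.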
The 980 is not faster than the 690 except in a VRAM-limited scenario.
When we're supposedly so close to the 390X being released, it's probably a good idea just to wait and see what drops. If the 390X is as fast as some of the rumors state, it could be the ultimate pairing for an Oculus Rift, or even just good enough for 4K gaming. SLI/CrossFire usually isn't as good as a high-end single GPU when it comes to overall quality. The current 970/980/290/290X are good enough for 1440p gaming, but not for the next-gen experience; these products will leave you wanting more in 6 months.
@Lepton87 in AC Unity, SLI scales above 100%.
That game really utilizes multi-GPU well /trollface.
It is a well-known problem that SLI scaling was basically broken by Ubisoft in the latest patch (October 27).
No messing around with settings or custom SLI profiles can fix this issue (trust me, I have tried it all).
The only solution is to roll back to the previous 1.05.324 patch, by following the instructions in this tweak guide I wrote for Watch Dogs a while ago:
http://www.forum-3dcenter.org/vbulle...postcount=1712
But please remember that Watch Dogs is still quite CPU limited on most SLI systems below 4K resolutions, even with the old patch.
SLI is working. The reason this shows over a 100% performance increase is that the GTX 980 is unusually slower than it normally should be in this game. We think the GTX 980 is being held back for some reason; chalk it up to a game bug or other unoptimized nonsense, but the GTX 980 isn't running as fast as it should. This is quite normal for all the video cards in this game; we think the full power and full potential isn't being utilized, for whatever buggy or unoptimized reason.
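That explanation is easy to illustrate with toy numbers (the FPS figures below are invented, not benchmark results).

```python
# How "scaling above 100%" can show up when the single-GPU result is held back.
# Invented numbers for illustration only.
single_gpu_expected = 60.0   # fps the card "should" get in this game
single_gpu_actual   = 45.0   # fps it actually gets due to some game/driver issue
sli_actual          = 95.0   # fps with two cards

naive_scaling = sli_actual / single_gpu_actual    # ~2.11x -> "above 100% scaling"
real_scaling  = sli_actual / single_gpu_expected  # ~1.58x -> ordinary SLI scaling

print(f"Apparent scaling vs. the underperforming single card: {naive_scaling:.2f}x")
print(f"Scaling vs. what the single card should deliver:      {real_scaling:.2f}x")
```

In other words, the "above 100%" figure is an artifact of dividing by an artificially low single-GPU baseline, not evidence of magic SLI gains.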
Again, it's probably Ubisoft lol, but this is part of the reason I was scared of mGPU setups. Having $300-600+ worth of hardware that can't be utilized in major AAA games is ridiculous, especially if you're an NV user playing a GameWorks game.
Edit: Or, in the case of AC Unity, needing an mGPU setup to get your cards running at 100% of their capability.