why would you need to upgrade when you already got the rig in your sig?
Higher resolutions than 1080p, anti-aliasing, etc.
A LOT of games bring a GTX 780 Ti SLI to its knees at certain settings.
why would you need to upgrade when you already got the rig in your sig?
except, of course, the official products
I don't think we need any more confirmation that Bermuda and Fiji are 28nm GPUs with 2.5D stacking.
except, of course, the official products
nVidia showed us there are still performance and power savings to be milked out of 28nm. i would rather not upgrade to another 28nm GPU from my GTX 680, so it looks like i'm going to be stuck with it for quite some time.
This isn't actually entirely new; we saw something similar at AMD's launch of the Radeon R9 285 (Tonga) GPU. The R9 285 introduced something called "Lossless Delta Color Compression." With only a 256-bit memory bus, the R9 285 matched, and in some cases slightly exceeded, the performance of the AMD Radeon R9 280, which has the same GPU specs but a faster 384-bit memory bus. AMD's Delta Color Compression allowed the R9 285's narrower bus to perform like a wider 384-bit one.
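The core idea can be shown with a toy sketch. This is a hypothetical Python illustration of lossless delta encoding, not AMD's or Nvidia's actual (undisclosed) hardware scheme: neighboring pixels in a smooth region differ only slightly, so storing small deltas instead of full values needs fewer bits, and decoding recovers the original exactly.

```python
# Toy sketch of lossless delta encoding (illustrative only; real GPU
# color compression is a proprietary fixed-function hardware design).

def delta_encode(tile):
    """Encode a run of pixel values as a base value plus small deltas."""
    base = tile[0]
    deltas = [cur - prev for prev, cur in zip(tile, tile[1:])]
    return base, deltas

def delta_decode(base, deltas):
    """Reconstruct the original values exactly (lossless)."""
    tile = [base]
    for d in deltas:
        tile.append(tile[-1] + d)
    return tile

tile = [118, 119, 119, 121, 120, 122, 122, 123]  # smooth gradient
base, deltas = delta_encode(tile)
assert delta_decode(base, deltas) == tile        # lossless round trip
print(max(abs(d) for d in deltas))               # → 2 (fits in 3 bits, not 8)
```

A hardware scheme works on fixed-size tiles and falls back to uncompressed storage when the deltas don't fit, which is why the compression stays lossless and the bandwidth savings depend on scene content.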
Nvidia is using a lot of color compression with their new Maxwell. AMD's Tonga was already using a similar technology.
I think AMD and Nvidia are about to reach the end of 28nm. We don't need compression and such hacks. We don't need efficiency either. We, as enthusiasts, need PERFORMANCE.
Thank you for explaining thoroughly, Raghu78! :thumbsup:
except, of course, the official products
nVidia showed us there are still performance and power savings to be milked out of 28nm. i would rather not upgrade to another 28nm GPU from my GTX 680, so it looks like i'm going to be stuck with it for quite some time.
Nvidia is using a lot of color compression with their new Maxwell. AMD's Tonga was already using a similar technology.
I think AMD and Nvidia are about to reach the end of 28nm. We don't need compression and such hacks. We don't need efficiency either. We, as enthusiasts, need PERFORMANCE.
SOURCE
But performance is GAINED, or at least the performance lost is lessened, through such "hacks" as you call them. Why on earth wouldn't you want them? Unless you were being sarcastic (I can't tell), your post makes little sense. So let me know if my sarcasm meter is busted.
Hi keysplayr, I never said I don't want them; I said I want more, and in my opinion we need another node. I pointed out an article to explain how they were able to achieve that better efficiency.
Have a nice day.
Karlitos,
While it is true nvidia implemented delta compression with maxwell, it's actually not following tonga. Maxwell's was their third version; it was not something brand new.
I really am not sure we would have heard much about it had amd not marketed delta compression as a selling point for tonga.
Delta compression allows a smaller bus and may reduce power consumption, but it's not really making the graphics chip more powerful, just the bus more capable. I guess that can result in more performance if your design is limited by bandwidth.
Not sure how important delta color compression will be to amd after they move to stacked ram. As far as I know, any compression has overhead. Most compression has trade-offs; delta is supposed to be lossless, but there has to be some extra work/energy involved, even if the results are well worth it.
If you have the bandwidth, i just don't see the need for delta compression. So it will be interesting going forward.
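The bandwidth point above can be put in rough numbers. In the sketch below, the bus widths and data rates are the public figures for the R9 285 and R9 280; the 1.4x average compression ratio is a made-up illustrative value, not a measured one.

```python
# Back-of-envelope: how compression lets a narrow bus act like a wide one.
# Bus/clock figures are public specs; the 1.4x ratio is hypothetical.

def raw_bandwidth_gbs(bus_width_bits, data_rate_gbps):
    """Peak memory bandwidth in GB/s for a given bus width and data rate."""
    return bus_width_bits / 8 * data_rate_gbps

r9_285 = raw_bandwidth_gbs(256, 5.5)   # 176.0 GB/s (256-bit bus)
r9_280 = raw_bandwidth_gbs(384, 5.0)   # 240.0 GB/s (384-bit bus)

# If framebuffer traffic compresses ~1.4x on average, the narrower bus
# moves roughly as much useful data per second as the wider one:
effective_285 = r9_285 * 1.4
print(round(effective_285, 1), "vs", r9_280)
```

Which also shows the flip side of the argument: once raw bandwidth is abundant (e.g. with stacked RAM), the effective multiplier buys much less.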
There really isn't any reason to be so hung up on process technology. When it all comes down to it, do you really care what nm your GPU is? I don't. I am actually thrilled that a process shrink was not relied upon not only to improve performance but also to dramatically reduce power consumption. That takes skill. That takes hard work. That is appreciated by me more than the thought of having a 20nm or 16nm GPU in my rig. Without even a second thought. Process does not matter. What you do with it really does.
Nobody cares what process tech a GPU uses.
i do, or rather - my bank account does.
i would hate to invest top dollar in a card if i know that right around the corner a refresh would introduce a card at the same price but with 30-40% more performance, or the same performance at half the price.
Eh, I expect GP100 (big Pascal) to be available by 2016. I know that research paper that leaked a few months back said GM200 by Q4 2014 and it hasn't come to pass, but since GK210 came a quarter later than the research paper predicted, I bet we'll see even big Pascal by 2H 2016 at the latest, and a GP104 earlier (probably mid to late 1H 2016). Also consider that Volta will be in a supercomputer in 2017 (though it probably won't be available to most until later, in 2018), so I doubt Nvidia will wait until 2017 for big Pascal, especially if high-resolution monitors come down in price and proper current/"next"-gen games arrive and GM204-type chips aren't enough at the likes of 2560x1440 at higher settings.

Ya, this major jump on a lower node happened with the April 2009 55nm 4890, followed by the Sept 2009 40nm 5850/5870. 6 months later and a ton more performance. I think this time it's highly unlikely that within 6 months of the 390X/GM200 launch NV/AMD will have a single-GPU card 30-40% faster. Having said that, if 14/16nm Pascal is scheduled for late 2016, then any $650-700 card released summer 2015 will definitely have a rather short "shelf life" as a flagship. I think NV really likes the bifurcating GPU-gen strategy, so I expect GP204 by late 2016 but big daddy Pascal not out until 2017. Still, even if 390X/GM200 is short-lived vs. Pascal in 2016, it won't be as bad as the 980, which by summer 2015 should look completely out of place as a $550-600 flagship.
We don't need compression and such hacks. We don't need efficiency either.
So as much as everybody appreciates an efficient architecture, nothing can replace the raw performance provided by a node shrink.
There really isn't any reason to be so hung up on process technology. When it all comes down to it, do you really care what nm your GPU is? I don't. I am actually thrilled that a process shrink was not relied upon not only to improve performance but also to dramatically reduce power consumption. That takes skill. That takes hard work. That is appreciated by me more than the thought of having a 20nm or 16nm GPU in my rig. Without even a second thought.
Process does not matter. What you do with it really does.
The 970/980 don't provide enough of a performance jump at my desired price point to make me finally upgrade my almost-3-year-old GPU, just as the 290/290X did not entice me to do so. Whereas the next full-node-jump GPUs will probably provide that performance jump; if that somehow is not the case, then the path to ever more realistic games is going to get bumpy.
i would love R9-290 performance by adding a small amount of money after selling my GTX680, but the noise and heat deter me from getting a used one (reference 'cause i'm cheap).
they say history repeats itself: the hot and power-hungry R9 290X reminds me of the 2900 XT, which means the R9 3XX probably won't be that great (HD 3870), but the R9 4XX is going to be oh-my-gods good.