Xbitlabs always does a nice job of laying out the hardware specifications.
That is a nice layout, but would they really have us believe a 6970 uses more juice than a 580? Or is that just the difference in how Nvidia and AMD measure TDP?

AMD's own slides showed that figure, when using +20% in PowerTune.

Those are the official TDP numbers; Nvidia rates their TDP in a different way than AMD.
Uh... 228 W and 250 W respectively, as in peak power consumption, aka Furmark values. The GTX 580's peak TDP is ~300 W...
Isn't it (in situations where 1.5 gigs of RAM is more than enough) just a straight-up fillrate issue? AMD cards have a higher fillrate than Nvidia cards, right?
Come on guys, the AMD cards do better at higher res because they have more TMUs. That has been mentioned in some reviews, I thought.
That last picture about how hot they get is from Furmark.
The 580 gets up to 91 degrees Celsius too, if you turn off the limiter.
And yes... the difference in actual power use between a 580 and a 6970 isn't that big when playing games. It doesn't show whether the 580 has the limiter on in those power figures, or whether the 6970 was at the -20%, 0%, or +20% setting. I'm guessing that's at the +20% power setting.
You're not giving the pictures context, or you're taking them out of it on purpose.
Anyway, here's a mix of average power-use tests, done by TechPowerUp.
Here the 6970 is at its normal state, which is +0%.
http://www.techpowerup.com/reviews/HIS/Radeon_HD_6970/27.html
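For reference on those -20%/0%/+20% settings: a minimal sketch of how the slider could map to an actual power cap, assuming PowerTune simply scales the 6970's 250 W containment limit by the chosen percentage (my assumption, not an official formula):

```python
# Hypothetical sketch: map the HD 6970 PowerTune slider (-20%..+20%)
# to an effective power cap, assuming it linearly scales the card's
# 250 W containment limit.

POWERTUNE_CAP_W = 250  # assumed baseline limit for the HD 6970

def powertune_limit(setting_percent: float) -> float:
    """Effective power cap in watts for a given slider setting."""
    return POWERTUNE_CAP_W * (1 + setting_percent / 100)

for setting in (-20, 0, 20):
    print(f"PowerTune {setting:+d}%: {powertune_limit(setting):.0f} W cap")
# -20% -> 200 W cap, 0% -> 250 W cap, +20% -> 300 W cap
```

If that assumption holds, the +20% setting would explain a 6970 reading near 300 W in worst-case tests.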
Yes, that's right, AMD's cards have massive texture fillrates; I completely forgot about that. :thumbsup:
Comparing the GPUs in the 6990 (a full Cayman @ 830 MHz) and the GTX 590 (a full GTX 580 @ 612 MHz), the GTX 590 should have about 10.6% higher pixel fillrate but only about 50% of the texture fillrate. That doesn't look good for high-resolution gaming, despite all the memory on board. It seems like the GTX 590 is very unbalanced.
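For what it's worth, here is that arithmetic as a quick sketch (per-GPU numbers, using the clocks quoted above and the unit counts I believe Cayman and GF110 carry: 32 ROPs/96 TMUs and 48 ROPs/64 TMUs respectively; the dual-GPU doubling cancels out in the ratios):

```python
# Sketch: compare pixel (ROP) and texture (TMU) fillrates per GPU.
def pixel_fillrate(rops: int, clock_mhz: int) -> int:
    return rops * clock_mhz          # MPixels/s

def texture_fillrate(tmus: int, clock_mhz: int) -> int:
    return tmus * clock_mhz          # MTexels/s

cayman = {"rops": 32, "tmus": 96, "clock": 830}   # one GPU of the HD 6990
gf110  = {"rops": 48, "tmus": 64, "clock": 612}   # one GPU of the GTX 590

px = pixel_fillrate(gf110["rops"], gf110["clock"]) / \
     pixel_fillrate(cayman["rops"], cayman["clock"])
tex = texture_fillrate(gf110["tmus"], gf110["clock"]) / \
      texture_fillrate(cayman["tmus"], cayman["clock"])

print(f"GTX 590 pixel fillrate:   {px:.1%} of the 6990's")   # ~110.6%
print(f"GTX 590 texture fillrate: {tex:.1%} of the 6990's")  # ~49.2%
```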
Lol, no, I did not take anything out of context. AMD has their own Furmark limiting implementation.

But you're assuming that AMD's limitation, especially its extent, is the same as NVIDIA's. Without stating those specifics, yes, you are taking things out of context.
I'm sorry, but showing a random picture of intentionally disabling a feature and then running Furmark does NOT represent normal operation or temperatures.

Like you did.

But that's what you did too, no? No specifics, nothing. I think this post best summarizes the power usage, given the breadth of testing:

Here the 6970 is at its normal state, which is +0%.
http://www.techpowerup.com/reviews/HIS/Radeon_HD_6970/27.html

Looks like the GTX 580 consumes about 25.5% more power on average.
Then GF114 (GTX550 Ti) (64 texture units) should be faster at higher res than the GTX570 (60 texture units) because it has more texture fillrate?? Don't think so.
High res has to do more with memory (bandwidth, and fillrate (ROPs), but not buffer size) than with texture.
Higher-resolution textures need more memory (2GB vs 1GB) and more texture units.

Texture fillrate = (# of TMUs) x (core clock)
GTX 550 Ti: 64 TMUs x 900 MHz core clock = 57,600 MTexels/s
GTX 570: 60 TMUs x 732 MHz core clock = 43,920 MTexels/s

Raster fillrate (ROPs) = (# of ROPs) x (core clock)
GTX 550 Ti: 32 ROPs x 900 MHz core clock = 28,800 MPixels/s
GTX 570: 40 ROPs x 732 MHz core clock = 29,280 MPixels/s

You need to recheck your information. The GTX 550 Ti has 32 texture units, not 64. In total texture fillrate, it has about 65.6% of the fillrate of the GTX 570 (the ratio is slightly offset by the 550 Ti's higher clocks). Texture fillrate plays a big part in high-resolution performance, as do the ROPs, memory capacity, etc. They're all part of the same engine, and if one part is disproportionately slow, it will bottleneck the rest.
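Since the two posts above disagree on the TMU count, here's the same texture-fillrate arithmetic as a small sketch, run with both the 64-TMU figure claimed and the 32-TMU figure from the correction:

```python
# Texture fillrate (MTexels/s) = TMUs x core clock (MHz).
def texture_fillrate(tmus: int, clock_mhz: int) -> int:
    return tmus * clock_mhz

gtx570 = texture_fillrate(60, 732)   # 43920 MTexels/s

for tmus in (64, 32):                # claimed vs corrected TMU count
    rate = texture_fillrate(tmus, 900)
    print(f"GTX 550 Ti with {tmus} TMUs: {rate} MTexels/s "
          f"({rate / gtx570:.1%} of the GTX 570)")
# 64 TMUs -> 57600 (131.1% of the GTX 570)
# 32 TMUs -> 28800 (65.6% of the GTX 570), matching the 65.6% cited above
```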
If you think about it, that has more or less been the name of the game since 90nm, be it for CPUs or GPUs.
I'm disappointed that AMD isn't migrating their GPUs to the same high-performance HK/MG SOI process tech that their CPUs benefit from.
An architecture advantage combined with a process tech advantage could have resulted in some rather intriguing products being brought to market.

Very interesting. Forgive my ignorance on the entire subject, but is this process tech something GlobalFoundries will employ, and could we look forward to it with 28nm? Also, what specific tweaks/benefits does the HK/MG SOI process afford? Lower power consumption? Higher clocks?
I'm sure the GF116/GF106 chips have 32 TMUs.
Edit: and 24 ROPs.
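Plugging those figures into the formulas above (a quick sketch; the 32-TMU/24-ROP counts are as claimed in the post just above, not independently verified here):

```python
# Redo both fillrate calculations with the corrected GTX 550 Ti unit
# counts (32 TMUs, 24 ROPs) against the GTX 570 (60 TMUs, 40 ROPs).
cards = {
    "GTX 550 Ti": {"tmus": 32, "rops": 24, "clock_mhz": 900},
    "GTX 570":    {"tmus": 60, "rops": 40, "clock_mhz": 732},
}

for name, c in cards.items():
    tex = c["tmus"] * c["clock_mhz"]   # MTexels/s
    px = c["rops"] * c["clock_mhz"]    # MPixels/s
    print(f"{name}: {tex} MTexels/s, {px} MPixels/s")
# GTX 550 Ti: 28800 MTexels/s, 21600 MPixels/s
# GTX 570:    43920 MTexels/s, 29280 MPixels/s
```

Either way, the 550 Ti trails the 570 on both fillrates, which is consistent with the correction.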
From the same link: Blu-ray power, another factor affecting overall power usage.
While AMD has been leading in power consumption for quite a while, that seems to have shifted slightly with the use of the new faster GDDR5 memory chips. The added power draw is quite significant, especially in Blu-ray playback we see an enormous 73 W power draw for the graphics card alone, hopefully AMD can address this shortcoming.
Yeah, Nvidia must have done something to get power use down during Blu-ray playback... you can clearly see that it wasn't implemented in the 480, but the cards released after it have it.
And there's no arguing with facts: the newer Nvidia cards use less power when playing Blu-ray movies.
At least with the software/testing scenario that TechPowerUp uses.