AMD goes Fermi


Chiropteran

Diamond Member
Nov 14, 2003
9,811
110
106
1. Fermi doesn't have an incredibly bad reputation. It was an undisputed champion of GPGPU. And the other issues you brought up were an issue of the GTX 400 series, and not present in the GTX 500 series, which is ALSO Fermi.
2. You assume what "everyone" else assumes. Stop assuming things.
3. The architectural structure is not "some minor detail nobody cares about". Everyone who reads AnandTech cares about it; otherwise they would be reading some other site, like HardOCP, which doesn't go into as much depth on architectures.

"the other issues you brough up were an issue of the GTX400 series, and not present in the GTX500 series which is ALSO fermi."

500 series is still very power hungry. Ultimately, it is what the 400 series should have been, but it arrived very, very late.

I'm not assuming anything, I've just seen the first 6 pages of replies to your thread - almost everybody thinks you are talking about the obvious, well-known flaws of Fermi.

The number of people who really care about the underlying architecture is a minority compared to the number of people who just care about bottom-line performance. I read AnandTech because I am curious about such details, not because those details are more important than basic performance - because they aren't. If the 7970 performed worse than the 580, nobody would be interested in it because of the architecture, except perhaps out of curiosity as to why it failed so badly. It's only after basic performance needs are met that the underlying architecture becomes worth looking at, which is why I consider it a minor detail. And again, I refer you to the first 5 pages of responses to this thread - most posters had no idea what you were talking about.
 

GaiaHunter

Diamond Member
Jul 13, 2008
3,634
181
106
We already knew sooner or later AMD would go the GPGPU way.

Some even say AMD is already late to the party.

Is GPGPU of any relevance for most of us here who buy GPUs to play games?
Nope. There is some software, but I dare say most of us don't use any of it.

Many complain that the 7970 isn't a big enough boost or that considering the time frame the performance boost is small.

The problem is that sometimes the manufacturing capability just isn't there - for all we know, AMD and NVIDIA tapped all the performance of the 40 nm process. Considering that AMD, without investing much in GPGPU and with a chip ~70% the size, got around ~80% of the NVIDIA chip's performance, I guess it might have been possible to get a bit more performance if AMD had gone for a larger die, but that is speculation.

We can blame the slowdown of GPU performance increases on GPGPU or consoles, but for all we know AMD (at least) was ready to go to 32nm last year (and there is debate over whether this 28nm isn't in fact 32nm "rebadged"), so in the end it seems manufacturing is slowing things down more.
 

SirPauly

Diamond Member
Apr 28, 2009
5,187
1
0
2. You assume what "everyone" else assumes. Stop assuming things.

Imho,

Oh, do I agree with this. To assume that everyone thinks of Fermi as a detriment or a negative is very odd. Fermi, over-all, has been a very robust architecture that brings in strong revenue from three families: GeForce, Quadro and Tesla. There is a reason why AMD is going Fermi, and it's called revenue potential.

Forum posts are one-sided and extreme at times, opinions become facts, and some assume everyone thinks like they do -- part of the lore and fascination, one may imagine!

Also, GPU processing is important to me for the future of gaming, from improved dynamics to eventually ray tracing; it has to begin to evolve and mature.

Of course GPU Processing was/is important to ATI:

From 2006

http://techreport.com/articles.x/10956/1
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Aren't most of those games favored by NVIDIA GPUs?

How is this relevant? I am comparing HD6990 to HD7970. So any advantages or disadvantages would apply to both of them equally. I showed that HD6990 is ~ 30% faster in those games. Secondly, people who play videogames go and buy a game at the store or online and start playing it with their hardware. It actually makes the most sense to buy graphics cards that perform better in the games "you" play (if those games run faster on an NV card or the AMD card, it makes the choice even easier).

A gamer doesn't particularly care if the game is NV or AMD sponsored when he/she starts playing it. They just want good performance and image quality in that game. As such if a gamer is going to play those games, HD6990 would give them 30-35% more performance over the 7970, while HD7970 would be barely faster than a GTX580. You can argue that all those games should be excluded from testing because they "favor NV GPUs", or don't put HD7970 in the best light; but don't we buy graphics cards to play games? I want to know how much more performance I can get by upgrading in the games I play, whether they are AMD or NV sponsored.

For example, I have no idea if Dragon Age 2 is AMD or NV sponsored, and I couldn't care less. I just look at the benchmarks and see that the HD7970 isn't a great upgrade for this game. If you don't play DA2, this benchmark is probably irrelevant to you. But if you do play it, you can see whether or not it's worth upgrading for you. If the upgrade isn't worth it for the games you play, you just wait for the next card, etc.

And again, I refer you to the first 5 pages of responses to this thread - most posters had no idea what you were talking about.

Ya, the title of the thread is misleading. "AMD going Fermi" can mean anything from going to a GPGPU style architecture with emphasis on compute (true), to having industry leading Tessellation performance (true), to increasing die size to Fermi levels (not true), to having a card that runs hot & loud (not really true), etc. The title is not specific enough because some aspects of "AMD goes Fermi" are true, while others are not. In this case, it appears the main title should have been "AMD follows Nvidia with its all new GPGPU architecture, as the future points to a unification of graphics and compute, tasks which are handled better by the more modern/advanced GCN architectural design."
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
"AMD follows Nvidia with its all new GPGPU architecture, as the future points to a unification of graphics and compute, tasks which are handled better by the more modern/advanced GCN architectural design."
That is a very very long title. More accurate, but I don't think it will fit.

I meant it as "going to a GPGPU style architecture with emphasis on compute (true)" and, more specifically, "designing a GPGPU architecture which appears to be very similar to Fermi in design, in some way aping it."
Also, I would have thought we would see both companies come up with complementary rather than replacement designs.
That is, a GPGPU-focused chip and a gaming-focused chip, designed concurrently, rather than a "jack of all trades, master of none" approach from both companies.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
I have to say that was not at all what I expected. I thought we would see NVIDIA backpedal on Fermi rather than AMD embrace it.

Backpedal indeed, as in backpedal on the "one size fits all" design and switch to a purely GPGPU and a purely gaming design.
To quote myself
taltamir said:
Also, I would have thought we would see both companies come up with complementary rather than replacement designs.
That is, a GPGPU-focused chip and a gaming-focused chip, designed concurrently, rather than a "jack of all trades, master of none" approach from both companies.

I thought AMD would do a node shrink of the existing design and simultaneously develop a much more aggressive GPGPU design for compute; instead they went with a Fermi-like "one size fits all" design.
I expected NVIDIA to backpedal on the "one size fits all" design of Fermi and split it into a pure GPGPU and a pure gaming design.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
500 series is still very power hungry.

The 500 series is manufactured on the 40nm process, not on the 28nm process. You can't compare the power consumption of the GTX580 to the HD7970 and expect anything less than for the HD7970 to blow the GTX580 completely out of the water when it comes to performance/watt or absolute power consumption. Not only that, but the Fermi architecture is almost 2 years old while GCN just launched, so the expected performance boost is also a given.

You should compare HD6970 to the GTX580 because they are both built on 40nm and are from the same generation:


Total system:

HD6970 (40nm) = 370W (base)
GTX580 (40nm) = 425W (+15%)

The GTX580 is 15-20% faster than the HD6970. Your argument doesn't compute. The GTX500 series is actually pretty good given its performance on the 40nm node (and the massive lead in compute it had over the 6xxx series, which obviously used up more transistors and contributed to a larger die).
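A rough back-of-the-envelope sketch of that comparison in Python may help. The 370W/425W system totals and the 15-20% performance lead are the figures quoted above; the ~190W card-only draw for the HD6970 and attributing the whole ~55W delta to the GPUs are assumptions purely for illustration.

Code:
# Back-of-the-envelope perf/watt comparison from the total-system numbers above.
# ASSUMPTIONS: HD 6970 card-only draw of ~190 W, and the full 55 W
# system-power delta attributed to the GPU alone.
hd6970_power_w = 190.0                           # assumed card-only draw
gtx580_power_w = hd6970_power_w + (425 - 370)    # +55 W system delta
gtx580_speedup = 1.175                           # midpoint of the quoted 15-20% lead

perf_per_watt_6970 = 1.0 / hd6970_power_w
perf_per_watt_580 = gtx580_speedup / gtx580_power_w

print(f"GTX 580 perf/W relative to HD 6970: "
      f"{perf_per_watt_580 / perf_per_watt_6970:.2f}x")
# ~0.91x: slightly worse performance per watt, nowhere near the gap
# a full node shrink would normally produce.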

Also, no real gaming enthusiast who was considering a $350-500 GPU would ever turn down the GTX580 only because of the extra 50-55W of power.

The argument for how important GPU power consumption is above $200 price level is blown way out of proportion. I am still amazed it gets brought up constantly. It's as if people buying $500 GPUs care about 50W of power differences when they are likely running heavily overclocked Phenom II and Intel i5/i7 systems.

See data below.



There is only an 85W difference from the lowest to the highest power consuming GPUs in the range of HD6950/HD6970/HD7970/GTX470/GTX480/GTX560Ti/GTX570/GTX580. If we discount HD6950, then it falls to just ~50-55W.

It is really difficult to be up in arms over 50-60W power consumption differences among mid- and high-end graphics cards when the greatest power consumption differences are among CPUs, not GPUs.





At the end of the day, we are still talking about 190-240W graphics cards here, not 20-30W ones that you put into an HTPC.

Trying to care about 50-60W of power consumption on rigs that actually have $500+ graphics cards is like caring about weight gain when you order an extra large cheese crust pizza with sausage, bacon and pepperoni, with a bottle of diet coke. :biggrin:
 

hectorsm

Senior member
Jan 6, 2005
211
0
76
Now I wonder what AMD's intentions are for switching to a more GPGPU-oriented architecture. Medical imaging, or maybe game physics?
 

blastingcap

Diamond Member
Sep 16, 2010
6,654
5
76
500 series is made on 40nm process, not on 28nm process. Can't compare power consumption of GTX580 to HD7970 on fair terms.

Edit: Never mind, I didn't realize you guys were in some sort of academic debate over architecture. Nevertheless, my comment stands. You compare products at any given point in time. That is what has relevance in the marketplace to non-academics who buy cards.

Hypothetical question: say for example that Kepler gets delayed all the way to January 2013. Would comparing 7970 to GTX580 make any sense then?

I say yes. Why? Because the time period overlap is what matters, not the process.

To make the point even clearer, consider how Intel has a process advantage over AMD. Typically it's Intel's smaller node vs. AMD's larger node, since they overlap time-wise. You wouldn't give special consideration to AMD for lagging on process, so why give it to NVDA?

Imho, it was, is, and will remain fair to compare chips built on different nodes against each other, if they are positioned against each other in the marketplace (similar price and purpose), at any given moment in time.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Edit: Never mind, I didn't realize you guys were in some sort of academic debate over architecture. Nevertheless, my comment stands. You compare products at any given point in time. That is what has relevance in the marketplace to non-academics who buy cards.

There is no debate on that; I completely agree with your entire post #159. The discussion started when it was insinuated that you can compare the efficiency of architectures and completely ignore the advantages that a node shrink brings. How do you conclude that the GCN architecture is more efficient than the Fermi architecture when the two are made on completely different nodes? If you figure it out, here is a cookie (oops, we don't have such an emoticon) ^_^

Imho, it was, is, and will remain fair to compare chips built on different nodes against each other, if they are positioned against each other in the marketplace (similar price and purpose), at any given moment in time.

I agree with you that GTX580 is the current competitor to the HD7970 given their prices.

You should read the last 2 pages of this thread. That's not what's being discussed. It was insinuated that GCN is more efficient than Fermi. We can only hypothesize this to be true, but we cannot confirm it to be true.

You can ONLY conclude with certainty that:

1) GCN on the 28nm node is more efficient than Fermi on the 40nm node, and
2) the HD7970 is more efficient than the GTX580 (in performance/watt and performance/mm^2 of die space)

You can't conclude anything about the efficiency of Fermi vs. GCN architectures because the node shrink difference brings 3 major advantages. For more explanation, please see Post #144.

For example, GTX580 = 3B transistors, HD7970 = 4.3B transistors. With 43% more transistors, the HD7970 is only 25% faster on average. From here, one might be tempted to say that Fermi might actually be more efficient per transistor, but even that could be misleading since NV and AMD count transistors differently (and would you even trust such counts after the Bulldozer transistor adjustment?). The transistor figures themselves are just a best guesstimate, not a physically verifiable characteristic. But even then, I still wouldn't conclude anything about the efficiency of Fermi vs. GCN architectures.
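As a quick sanity check of that per-transistor point, here is a small Python sketch; the 3B/4.3B transistor counts and the ~25% average lead are the figures quoted above, and collapsing "performance per transistor" into a single scalar is of course a simplification.

Code:
# Per-transistor check using the figures quoted above
# (GTX 580 ~3.0B transistors, HD 7970 ~4.3B, HD 7970 ~25% faster on average).
gtx580_transistors = 3.0e9
hd7970_transistors = 4.3e9
hd7970_speedup = 1.25

perf_per_transistor_ratio = hd7970_speedup / (hd7970_transistors / gtx580_transistors)
print(f"HD 7970 performance per transistor vs GTX 580: {perf_per_transistor_ratio:.2f}x")
# ~0.87x: 43% more transistors for ~25% more performance, which is why the
# post cautions that transistor counts alone can't settle the
# architecture-vs-architecture question.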

You can compare SKUs, chips, products, whatever you want across different nodes because the performance differences between them are both a function of the architectures and the nodes. But because the node itself impacts the efficiency (because it impacts power consumption, transistor switching speeds and transistor density), you now introduce 2 variables, which in turn blurs the causation of which of them brought the greater performance increase - the node or the architecture, or a combination of the two. You need to be able to isolate the variable you are comparing (in this case the architecture), while keeping everything else constant, if your goal is only to compare the efficiency of the architectures.

---> The Scientific Method.

If you had GCN on 40nm and Fermi on 40nm or if you had GCN on 28nm and Fermi on 28nm, you would be able to say with certainty which architecture was more efficient since you would completely isolate for the changing variable, the 28nm node, which allows for 60% more transistors at 50% lower power consumption.....

And if you really wanted to be scientific about it, you'd have to compare the same 28nm process (so you would need to make both chips at either TSMC factory or at GF, etc.). 28nm process at TSMC isn't the same as it is at GF.
 

MrX8503

Diamond Member
Oct 23, 2005
4,529
0
0
How is this relevant? I am comparing HD6990 to HD7970. So any advantages or disadvantages would apply to both of them equally.

Oh really?

BF3 - Still unplayable at 2560x1600. Almost no performance increase over the 580.

Crysis Warhead - Still unplayable at 2560x1600. Not enough performance increase over the 580 to make any difference.

Crysis 2 - Still unplayable at 2560x1600. Almost no performance increase over the 580.

Metro 2033 - Still unplayable at 2560x1600. Can't even break 40 fps.

You may have mentioned the 6990, but you seem to bring up the NVIDIA GTX 580 every chance you get, with NVIDIA-catered games.

You said that it's not a fair comparison of power consumption because NVIDIA is on 40nm and AMD is on 28nm. So... that's unfair, but your comparison of NVIDIA-catered games is completely fair?

People buying cards based on the games they play is irrelevant to the overall performance of the card.
 

blastingcap

Diamond Member
Sep 16, 2010
6,654
5
76
The discussion started when it was insinuated that you can compare the efficiency of architectures and completely ignore the advantages that a node shrink brings.

Thanks, I made the mistake of interpreting it the other way, hence my edit.

Architectures and process are not completely divorced from each other--I can imagine an architecture working well only if the materials can, say, withstand high frequencies or heat or whatever. But those are impacted by the node--and TSMC makes chips for both, so it's not even two different processes at the same nanometer--so I agree that architectures should be compared at the same node, and I can't understand why anyone would think differently.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
The 500 series is manufactured on the 40nm process, not on the 28nm process. You can't compare the power consumption of the GTX580 to the HD7970 and expect anything less than for the HD7970 to blow the GTX580 completely out of the water when it comes to performance/watt or absolute power consumption. Not only that, but the Fermi architecture is almost 2 years old while GCN just launched, so the expected performance boost is also a given.

You should compare HD6970 to the GTX580 because they are both built on 40nm and are from the same generation:

Total system:

HD6970 (40nm) = 370W (base)
GTX580 (40nm) = 425W (+15%)


The GTX580 is 15-20% faster than the HD6970. Your argument doesn't compute. The GTX500 series is actually pretty good given its performance on the 40nm node (and the massive lead in compute it had over the 6xxx series, which obviously used up more transistors and contributed to a larger die).

15% extra system power is not the same as 15% extra GPU power usage. That's 55W more power used by the GPU alone, as it's the only difference. The power used by the rest of the system is homogenizing the percentages.
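A small Python sketch of that dilution effect: the 370W/425W totals are the figures quoted above, while the ~180W rest-of-system draw is an assumed number purely for illustration.

Code:
# Why a ~15% gap at the wall understates the GPU-only gap.
# ASSUMPTION: the rest of the test system (CPU, board, drives) draws ~180 W under load.
rest_of_system_w = 180.0
hd6970_gpu_w = 370 - rest_of_system_w          # ~190 W
gtx580_gpu_w = 425 - rest_of_system_w          # ~245 W

system_delta = (425 - 370) / 370               # measured at the wall
gpu_delta = (gtx580_gpu_w - hd6970_gpu_w) / hd6970_gpu_w

print(f"system-level delta: {system_delta:.0%}")   # ~15%
print(f"GPU-only delta:     {gpu_delta:.0%}")      # ~29%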

Also, no real gaming enthusiast who was considering a $350-500 GPU would ever turn down the GTX580 only because of the extra 50-55W of power.

The argument for how important GPU power consumption is above $200 price level is blown way out of proportion. I am still amazed it gets brought up constantly. It's as if people buying $500 GPUs care about 50W of power differences when they are likely running heavily overclocked Phenom II and Intel i5/i7 systems.

See data below.


There is only an 85W difference from the lowest to the highest power consuming GPUs in the range of HD6950/HD6970/HD7970/GTX470/GTX480/GTX560Ti/GTX570/GTX580. If we discount HD6950, then it falls to just ~50-55W.

It is really difficult to be up in arms over 50-60W power consumption differences among mid- and high-end graphics cards when the greatest power consumption differences are among CPUs, not GPUs.


At the end of the day, we are still talking about 190-240W graphics cards here, not 20-30W ones that you put into an HTPC.

Trying to care about 50-60W of power consumption on rigs that actually have $500+ graphics cards is like caring about weight gain when you order an extra large cheese crust pizza with sausage, bacon and pepperoni, with a bottle of diet coke. :biggrin:

+50-60W might not be a lot to you, but it's a lot in the HPC world.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
Edit: Never mind, I didn't realize you guys were in some sort of academic debate over architecture. Nevertheless, my comment stands. You compare products at any given point in time. That is what has relevance in the marketplace to non-academics who buy cards.

Hypothetical question: say for example that Kepler gets delayed all the way to January 2013. Would comparing 7970 to GTX580 make any sense then?

I say yes. Why? Because the time period overlap is what matters, not the process.

To make the point even clearer, consider how Intel has a process advantage over AMD. Typically it's Intel's smaller node vs. AMD's larger node, since they overlap time-wise. You wouldn't give special consideration to AMD for lagging on process, so why give it to NVDA?

Imho, it was, is, and will remain fair to compare chips built on different nodes against each other, if they are positioned against each other in the marketplace (similar price and purpose), at any given moment in time.

True, true! I never saw anyone say, "It's not fair to compare Intel and AMD CPUs because they're on different nodes." Everyone just buys the product that performs best for them within their budget.
 

Arkadrel

Diamond Member
Oct 19, 2010
3,681
2
0
15% extra system power is not the same as 15% extra GPU power usage. That's 55W more power used by the GPU alone, as it's the only difference. The power used by the rest of the system is homogenizing the percentages.

+50-60W might not be a lot to you, but it's a lot in the HPC world.


Here is a chart that doesn't show system power draw, just the cards at peak.

The 7970 overclocked to ~1125MHz core (the CCC cap) uses ~11 watts less power than a 580 does.
At that level of overclocking, it's around 40% faster than the 580.


 

wahdangun

Golden Member
Feb 3, 2011
1,007
148
106
The 500 series is manufactured on the 40nm process, not on the 28nm process. You can't compare the power consumption of the GTX580 to the HD7970 and expect anything less than for the HD7970 to blow the GTX580 completely out of the water when it comes to performance/watt or absolute power consumption. Not only that, but the Fermi architecture is almost 2 years old while GCN just launched, so the expected performance boost is also a given.


I think you forget one thing: it's the GTX 480 that brought the negative stigma over Fermi. I mean, even W1zzard (the TPU owner) burned his own finger touching the GTX 480 heatsink. It was so power hungry, and on top of that it was a crippled Fermi, so besides its performance (which was not far ahead of the HD 5870), nothing else positive came from it.

It was even six months late.
 

ArchAngel777

Diamond Member
Dec 24, 2000
5,223
61
91
Here is a chart that doesn't show system power draw, just the cards at peak.

The 7970 overclocked to ~1125MHz core (the CCC cap) uses ~11 watts less power than a 580 does.
At that level of overclocking, it's around 40% faster than the 580.



Which coincides with the die shrink. I think both camps are arguing two different things.

You have one side saying that it is not fair to compare 28nm to 40nm, which is why ATI is able to fit more performance into the same power envelope. Just wait for 28nm NVIDIA!

Then you have the other side saying that it doesn't matter if we are comparing 28nm to 40nm, because this is the here and now. It could be months before NVIDIA comes out with a 28nm chip. You compare product to product, as opposed to what process the product uses.

Personally, I think both camps are right, but we live in the here and now. Therefore, the latter is more important to us. But I have no doubt that when NVIDIA does release its 28nm parts, they will be on par with, or perhaps even better than, AMD's current offering.
 

SirPauly

Diamond Member
Apr 28, 2009
5,187
1
0
I think you forget one thing: it's the GTX 480 that brought the negative stigma over Fermi. I mean, even W1zzard (the TPU owner) burned his own finger touching the GTX 480 heatsink. It was so power hungry, and on top of that it was a crippled Fermi, so besides its performance (which was not far ahead of the HD 5870), nothing else positive came from it.

It was even six months late.

Imho,

Actually, most of the negative stigma may have been more from extremists and sensationalism. The GTX 480 and GTX 470 still sold very well and were good, not great, chips.

I remember when some posters enjoyed showing the massive lead AMD enjoyed with DirectX 11 hardware based on Steam, and these "negative stigma", crippled products still hold percentage advantages over the 6970 and 6950.

http://store.steampowered.com/hwsurvey/videocard/
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
True, true! I never saw anyone say, "It's not fair to compare Intel and AMD CPUs because they're on different nodes." Everyone just buys the product that performs best for them within their budget.

You should probably read the thread more carefully next time. The comment regarding nodes was in relation to architectures, not GPUs.

15% extra system power is not the same as 15% extra GPU power usage. That's 55W more power used by the GPU alone, as it's the only difference. The power used by the rest of the system is homogenizing the percentages.

You should probably read the thread more carefully next time. I said the GTX580 consumes 50-55W more power. I then proceeded to show the overall power consumption and how CPU and GPU differences affect the total power consumption of a system. The conclusion was that people running these videocards will more likely than not be doing so on overclocked CPUs. Once that is introduced, the extra 50-60W of power consumption difference between the GPUs is meaningless.

Also, we aren't talking about the HPC world. We are discussing gaming rigs. I have no idea how you bring this point into the discussion since no one ever said anything about HPC.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
You said that it's not a fair comparison of power consumption because NVIDIA is on 40nm and AMD is on 28nm. So... that's unfair, but your comparison of NVIDIA-catered games is completely fair?

I responded to the comment that GTX500 series cards were power hungry. Compared to their direct competitors (HD6900 series cards on 40nm), relative to their performance, they weren't hot. Comparing the HD7970's performance vs. power consumption against the GTX580/HD6970 is obvious - the HD7970 will blow them both away. His comment should apply to both the 6970 and the GTX580; relative to their performance, they were both power hungry.

People buying cards based on the games they play is irrelevant to the overall performance of the card.

You insinuated that we shouldn't put too much weight on those 4 games I linked since they are NV sponsored (otherwise why would you even bring up the NV-sponsored part?). I responded that they should count 100% since gamers play them. If the HD7970 isn't faster than the GTX580 in them by more than 20%, that's just the reality, regardless of whether the game is NV sponsored. It should count towards the overall performance delta of the videocard vs. its competitor. I have an AMD card and I don't just ignore the performance advantages NV has in Crysis 2, HAWX2, Lost Planet 2 and Civ5 because those games run better on NV.
 

MrK6

Diamond Member
Aug 9, 2004
4,458
4
81
You should probably read the thread more carefully next time. I said the GTX580 consumes 50-55W more power. I then proceeded to show the overall power consumption and how CPU and GPU differences affect the total power consumption of a system. The conclusion was that people running these videocards will more likely than not be doing so on overclocked CPUs. Once that is introduced, the extra 50-60W of power consumption difference between the GPUs is meaningless.
I've been following this argument and I can't say I agree, especially when you look at it from several perspectives. First off, if you look at the charts from the last dozen posts, the difference between a GTX 580 and HD6970 isn't even 50-60W; it's more like 29-33W from those charts. While that may seem like it makes this whole point moot, it's still more than significant. For example, if you add enough voltage and clock the HD6970 high enough to consume the same amount of power as a GTX 580, you know what you'll get? GTX 580 performance, give or take depending on the game. And therein is the answer to why power consumption and performance/watt are so important - they basically designate the quality of the chip. Now the above scenario can also be attributed to the 40nm process itself, but does any of that matter to consumers when the 6970 is $380 and the GTX 580 is $500? What does the extra $120 get you? Possibly +/- another 20% performance, plus more heat and power consumption? What about the PSU needed to put up with that? All these costs are multiplicative, simply because the chip was pushed too far from the start. If you throw in the fact that a <$250 6950 2GB can be unlocked to a 6970, and overclocked largely the same, now the GTX 580 is essentially double the cost for only the chance at ~20% higher performance, assuming it clocks well. And it has less vRAM to boot, but I digress. The take-home point - the GTX 580 was never a good buy, and that just highlights what a joke the GTX 480 was.

This is why the 7970 is that much more impressive. An average consumer might look at it and say "oh, it's only 20-25% faster than the GTX 580, that's not much." But then, once you start analyzing different metrics and become a better consumer, it really shines. Any consumer, regardless of technical knowledge, can do a quick price/performance comparison and say, "wait, that's 20-25% extra performance for only $50 at the enthusiast level - that's unheard of, and it has double the vRAM!" Then, if that consumer is an enthusiast, they can look at the technical merits of the chip itself, such as the lower power consumption and overclockability, and the picture just gets better and better. As I stated above, if you crank the 7970 to a GTX 580's power consumption level, it will slaughter it. Look at the power consumption chart Arkadrel posted - even while consuming less power than the GTX 580, the overclocked 7970 is in the range of 40-50% faster. Yes, it's a process advantage, but I'd imagine NVIDIA will follow the same suit on 28nm - it'll "overclock" its chips until they have some performance lead over AMD's at the cost of heat and noise. Consumers will get wise to this (if not inherently, then look at the Fermi release and what it did to NVIDIA's marketshare), and I hope posts like these help spread that message.
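For what it's worth, here is a quick price/performance sketch of the numbers in the post above. The $380 and $500 prices and the ~20%/20-25% performance steps are taken from the post; reading "$50 more" as a ~$550 HD 7970 price and folding everything into a single perf-per-dollar figure are simplifying assumptions.

Code:
# Quick price/performance sketch using the figures in the post above.
# ASSUMPTIONS: HD 7970 at ~$550 (GTX 580 price + the quoted "$50"),
# GTX 580 ~20% faster than HD 6970, HD 7970 ~22% faster than GTX 580.
cards = {
    # name: (price in USD, performance relative to HD 6970 = 1.00)
    "HD 6970": (380, 1.00),
    "GTX 580": (500, 1.20),
    "HD 7970": (550, 1.20 * 1.22),
}

for name, (price, perf) in cards.items():
    print(f"{name}: {perf / price * 1000:.2f} relative perf per $1000")
# HD 6970 ~2.63, GTX 580 ~2.40, HD 7970 ~2.66 -- the new flagship roughly
# matches last generation's price/performance while raising absolute
# performance, which is unusual at the enthusiast tier.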
 

MrK6

Diamond Member
Aug 9, 2004
4,458
4
81
Imho,

Actually, most of the negative stigma may have been more from extremists and sensationalism. The GTX 480 and GTX 470 still sold very well and were good, not great, chips.

I remember when some posters enjoyed showing the massive lead AMD enjoyed with DirectX 11 hardware based on Steam, and these "negative stigma", crippled products still hold percentage advantages over the 6970 and 6950.

http://store.steampowered.com/hwsurvey/videocard/
I think it's been proven time and time again that the Steam survey is too small to adequately model the whole market, so it's pointless to attempt to make generalizations from it. If you look at the marketshare values in general, AMD has taken about 20% of the discrete card marketshare from NVIDIA driving it from a 40/60 split in NVIDIA's favor to a 60/40 split in AMD's favor: http://www.fudzilla.com/graphics/item/25027-nvidia-loses-more-market-share-to-amd . I think up until Fermi's release, NVIDIA's marketing carried them, but you can only put up smoke and mirrors for so long until your product has to speak for itself. I imagine a lot of GTX 480 owners got burnt, especially after a GTX 580 was released only 6 months later. Lesson learned, IMO.
 

MrK6

Diamond Member
Aug 9, 2004
4,458
4
81
I have an AMD card and I don't just ignore the performance advantages NV has in Crysis 2, HAWX2, Lost Planet 2 and Civ5 because those games run better on NV.
With the 7xxx series, I think you'll see all those advantages disappear. Crysis 2 and Lost Planet 2 ran better on NVIDIA hardware simply due to the tessellation performance advantage the 5xx series had. With the drastically improved tessellation engine on the 7xxx series, I don't think AMD will have trouble in those two games anymore. On a side note, I hope this removes the garbage "tessellate the ocean under the whole level" sucker punches NVIDIA pulls - I shouldn't be losing FPS across the board because of some squabble. I also think that shows how much NVIDIA really cares about its gamers if it's willing to hurt EVERYONE's performance just to make its cards look a little better. But I digress; cold hard facts: http://www.guru3d.com/article/amd-radeon-hd-7970-review/23 . I'm not sure what the performance advantage was in HAWX 2 - tessellation as well? If so, same point.

Civ5 was a unique situation as far as I'm aware, as there are code/algorithms in the game specifically designed to run on NVIDIA's hardware and speed it up. I forget where I read about it, but I also remember it being mentioned that it might be very easy to port the same code to AMD's GCN architecture since it can run the same C++ code. I might have screwed up that concept in translation, but it would be something if AMD could suddenly "port" all of NVIDIA's tweaks over to GCN (sucks for NVIDIA, good for consumers).
 

SirPauly

Diamond Member
Apr 28, 2009
5,187
1
0
I think it's been proven time and time again that the Steam survey is too small to adequately model the whole market, so it's pointless to attempt to make generalizations from it.

I agree it's not ideal, but what I like is over-all revenue, profits and margins -- and the market share numbers from Mercury Research and Jon Peddie.
 