nVidia wins this round - Charlie D.


AtenRa

Lifer
Feb 2, 2009
14,003
3,361
136
I don't expect GK104 to overcome the HD 7970 in performance, but I could see it being close to the HD 7950.

GK104 could have almost the same performance as the HD 7950 but at a much lower price, and that's what I believe Charlie is implying.
 

GaiaHunter

Diamond Member
Jul 13, 2008
3,634
181
106
When the GTX 285 hit, if it had been ~35% faster than the 2900XT and nothing had launched in between (the 8xxx/9xxx generation) for nV, how good a part do people think it would have been? The term 'catastrophic failure' comes to mind. That is a full process node drop. That is what the 7970 is compared to the GTX 580.

A comparison is only valid and useful when the circumstances are similar and any possible difference is taken into account.

The GTX 580 was the fastest single GPU on the market until the 7970 arrived.

The 2900XT wasn't the fastest single GPU card on the market in your hypothetical scenario; the 8800 Ultra was.

Interestingly, the GTX 280 was fabricated on the 65 nm node while the 8800GTX/Ultra were fabricated on the 90nm node. So a full node shrink (or is that 2?).

And look



Is that 37% over the 8800GTX? So 30% or less over the 8800Ultra?

Now that is a similar comparison.
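A quick sanity check on that arithmetic. The ~5% gap between the 8800 Ultra and the 8800GTX is my assumption for illustration; the 37% figure is the one quoted above:

```python
# Convert a lead measured against one card into a lead against a
# slightly faster card. Speedups are ratios, so they divide out.
def relative_lead(lead_over_base: float, faster_card_gap: float) -> float:
    """Both arguments are fractional speedups over the same baseline."""
    return (1 + lead_over_base) / (1 + faster_card_gap) - 1

# Assumed: GTX 280 is 37% over the 8800GTX; 8800 Ultra ~5% over the GTX.
print(f"{relative_lead(0.37, 0.05):.0%}")  # ~30% over the 8800 Ultra
```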
 

Keysplayr

Elite Member
Jan 16, 2003
21,209
50
91
A comparison is only valid and useful when the circumstances are similar and any possible difference is taken into account.

The GTX 580 was the fastest single GPU on the market until the 7970 arrived.

The 2900XT wasn't the fastest single GPU card on the market in your hypothetical scenario; the 8800 Ultra was.

Interestingly, the GTX 280 was fabricated on the 65 nm node while the 8800GTX/Ultra were fabricated on the 90nm node. So a full node shrink (or is that 2?).

And look



Is that 37% over the 8800GTX? So 30% or less over the 8800Ultra?

Now that is a similar comparison.

I think what Ben means (could be wrong here) is that it is a catastrophic failure when a competitor's top new-gen flagship is at most 35% faster than the other competitor's last-gen flagship. Also, he was comparing GTX 285 performance over the 2900XT (new-gen flagship over the competitor's last-gen flagship) to AMD 7970 performance over the GTX 580 (new-gen flagship over the competitor's last-gen flagship).

It doesn't matter that the 2900XT wasn't the fastest single GPU card on the market in this example. Not sure how you figured it does?

And 90nm to 65nm is a full single-node shrink, as opposed to a half step or two steps. Wasn't there some AMD/ATI GPU done on 80nm at some point? A half step? Unsure.
 

GaiaHunter

Diamond Member
Jul 13, 2008
3,634
181
106
I think what Ben means (could be wrong here) is that it is a catastrophic failure when a competitor's top new-gen flagship is at most 35% faster than the other competitor's last-gen flagship. Also, he was comparing GTX 285 performance over the 2900XT (new-gen flagship over the competitor's last-gen flagship) to AMD 7970 performance over the GTX 580 (new-gen flagship over the competitor's last-gen flagship).

It doesn't matter that the 2900XT wasn't the fastest single GPU card on the market in this example. Not sure how you figured it does?

And 90nm to 65nm is a full single-node shrink, as opposed to a half step or two steps. Wasn't there some AMD/ATI GPU done on 80nm at some point? A half step? Unsure.

About nodes:

90/80nm, 65/55nm, 45/40nm (45 was cancelled, and the 40nm is probably the 45nm renamed), 32/28nm (again, the 32 was cancelled and 28 is probably the 32 renamed).

The 2900 was a failure. Comparing anything to it is pointless.

If 35% over the competition's last flagship is a failure, what shall we call a flagship that is only 35% faster than your own last flagship?

A success? Give me a break.

If we want to compare node jumps, we should either compare the first cards on a node or the last cards on a node. Reaching conclusions about a new node by comparing a new card on a new process to a card on a mature process is silly. Imagine that, just as NVIDIA and AMD were able to extract around 20% more performance with their refreshes, AMD is able to extract 20% more performance from GCN. That doesn't seem so silly, considering 7970s that can overclock to 1100+ MHz on the core already get that.

The 2900 was an architectural problem; the 7970 seems to be a yield problem.

Have you noticed that TSMC keeps cancelling new nodes? What does that tell you?

Have you noticed that new games keep looking (IQ wise) the same for the last few years? What does that tell you?
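As a rough check of the "around 20% more" claim: 925 MHz is the HD 7970's reference core clock, and 1100 MHz is the overclock mentioned above. Assuming performance scales roughly with core clock (only approximately true in GPU-limited workloads):

```python
# Fractional core-clock gain from overclocking an HD 7970.
stock_mhz = 925   # HD 7970 reference core clock
oc_mhz = 1100     # overclock cited in the post
gain = oc_mhz / stock_mhz - 1
print(f"core clock gain: {gain:.0%}")  # ~19%
```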
 
Last edited:

Keysplayr

Elite Member
Jan 16, 2003
21,209
50
91
About nodes:

90/80nm, 65/55nm, 45/40nm (45 was cancelled, and the 40nm is probably the 45nm renamed), 32/28nm (again, the 32 was cancelled and 28 is probably the 32 renamed).

The 2900 was a failure. Comparing anything to it is pointless.

If 35% over the competition's last flagship is a failure, what shall we call a flagship that is only 35% faster than your own last flagship?

A success? Give me a break.

If we want to compare node jumps, we should either compare the first cards on a node or the last cards on a node. Reaching conclusions about a new node by comparing a new card on a new process to a card on a mature process is silly. Imagine that, just as NVIDIA and AMD were able to extract around 20% more performance with their refreshes, AMD is able to extract 20% more performance from GCN. That doesn't seem so silly, considering 7970s that can overclock to 1100+ MHz on the core already get that.

The 2900 was an architectural problem; the 7970 seems to be a yield problem.

Have you noticed that TSMC keeps cancelling new nodes? What does that tell you?

Have you noticed that new games keep looking (IQ wise) the same for the last few years? What does that tell you?

Gaia, forgive me but I still have little idea why you're going on this node tangent.
2900XT was a failure in that it was soooo hyped up and failed to deliver. It was just about as fast as the 8800GTS 640, Nvidia's second-fastest card at the time. It didn't do too badly, just nowhere near as well as hyped. Kind of like Barcelona.

So, unless you wish to get into a node discussion, that probably wasn't the focus of Ben's post which you quoted. Maybe he could chime in here.

I'll repost here what I believe the focus was:

I think what Ben means (could be wrong here) is that it is a catastrophic failure when a competitor's top new-gen flagship is at most 35% faster than the other competitor's last-gen flagship. Also, he was comparing GTX 285 performance over the 2900XT (new-gen flagship over the competitor's last-gen flagship) to AMD 7970 performance over the GTX 580 (new-gen flagship over the competitor's last-gen flagship).

"If 35% over last competition flagship is a failure what shall we call a flagship that is only 35% faster than your own last flagship?"

Is that what we're talking about? A single company's new-gen improvement over its own last gen? Or are we discussing competitors' products? I thought it was the latter. In fact I'm pretty sure of it. ::shrugs::
 
Last edited:

WMD

Senior member
Apr 13, 2011
476
0
0
I don't expect GK104 to overcome the HD 7970 in performance, but I could see it being close to the HD 7950.

GK104 could have almost the same performance as the HD 7950 but at a much lower price, and that's what I believe Charlie is implying.

My best bet is it will slot right in between AMD's top two cards in performance and price, avoiding direct competition. A win-win for both sides.
 

MrK6

Diamond Member
Aug 9, 2004
4,458
4
81
Gaia, forgive me but I still have little idea why you're going on this node tangent.
2900XT was a failure in that it was soooo hyped up and failed to deliver. It was just about as fast as the 8800GTS 640, Nvidia's second-fastest card at the time. It didn't do too badly, just nowhere near as well as hyped. Kind of like Barcelona.

So, unless you wish to get into a node discussion, that probably wasn't the focus of Ben's post which you quoted. Maybe he could chime in here.

I'll repost here what I believe the focus was:

I think what Ben means (could be wrong here) is that it is a catastrophic failure when a competitor's top new-gen flagship is at most 35% faster than the other competitor's last-gen flagship. Also, he was comparing GTX 285 performance over the 2900XT (new-gen flagship over the competitor's last-gen flagship) to AMD 7970 performance over the GTX 580 (new-gen flagship over the competitor's last-gen flagship).

"If 35% over last competition flagship is a failure what shall we call a flagship that is only 35% faster than your own last flagship?"

Is that what we're talking about? A single company's new-gen improvement over its own last gen? Or are we discussing competitors' products? I thought it was the latter. In fact I'm pretty sure of it. ::shrugs::
You misunderstand the comment. BenSkywalker's comment was about a full node jump, not cards, so Gaia is on target here and brings up several valid points. How can you judge a company's progress solely by comparing its performance to its competitor's, especially when parts are compared across a multitude of metrics? It'd be like saying the GTX 280 and GTX 260 were complete failures because they were brought out on a 65nm process when AMD's 4800 series came out a few days later on a 55nm process. Each company has its own roadmap, and you have to look at the progress from release to release and compare whatever is available on the market. Whenever Kepler arrives, we'll be able to see what NVIDIA can do on the 28nm process; until then, it's all speculation about what a "full process node drop" should entail performance-wise this generation.
 

GaiaHunter

Diamond Member
Jul 13, 2008
3,634
181
106
Gaia, forgive me but I still have little idea why you're going on this node tangent.
2900XT was a failure in that it was soooo hyped up and failed to deliver. It was just about as fast as the 8800GTS 640, Nvidia's second-fastest card at the time. It didn't do too badly, just nowhere near as well as hyped. Kind of like Barcelona.

So, unless you wish to get into a node discussion, that probably wasn't the focus of Ben's post which you quoted. Maybe he could chime in here.

I'll repost here what I believe the focus was:

I think what Ben means (could be wrong here) is that it is a catastrophic failure when a competitor's top new-gen flagship is at most 35% faster than the other competitor's last-gen flagship. Also, he was comparing GTX 285 performance over the 2900XT (new-gen flagship over the competitor's last-gen flagship) to AMD 7970 performance over the GTX 580 (new-gen flagship over the competitor's last-gen flagship).

"If 35% over last competition flagship is a failure what shall we call a flagship that is only 35% faster than your own last flagship?"

Is that what we're talking about? A single company's new-gen improvement over its own last gen? Or are we discussing competitors' products? I thought it was the latter. In fact I'm pretty sure of it. ::shrugs::

So if the NVIDIA flagship card is 30% faster than the 7970, is it a failure? Or is that new card's competition the 6970?

So the 7970 competes with the GTX 580, and the GTX 680 (assuming that is the name) competes with the 6970?

So by that logic, if the GTX 680 is 40% faster than the 6970, being only as fast as the 7970, it is a win?

And did the 5870 compete with the GTX285 but the GTX480 competed with the 4890?

In my opinion products compete with existing products.

At the moment the 7970 seems to be a solid product, and a better product and value (at least in my opinion) than the GTX 580, which seems to be its competitor in that price bracket.

When the GTX680 arrives we compare it against the 7970.
 
Last edited:

96Firebird

Diamond Member
Nov 8, 2010
5,712
316
126
IonusX said:
hey dude ill stop comparing it to fermi when kepler launches so how about you tell jason to whip his team harder so he can race around the track too.
i only compare the racers on the field man. if you expect anything more than its YOU that needs help. you want to argue different okay
get me a gtx 680 review right now on the double and hesitation will mean you cant.
and thats how we do things back on the spot..

i dont get why nvidia fans are screaming for kepler. tesla wont be for 2 years if not more giving amd plenty of time to surpass it and reign supreme. 2013 the year without nvidia..

you make justin beiber look good..

So which is it: you can compare future GPUs, or you can't? Make up your mind, kid...
 

railven

Diamond Member
Mar 25, 2010
6,604
561
126
You misunderstand the comment. BenSkywalker's comment was about a full node jump, not cards, so Gaia is on target here and brings up several valid points. How can you judge a company's progress solely by comparing its performance to its competitor's, especially when parts are compared across a multitude of metrics? It'd be like saying the GTX 280 and GTX 260 were complete failures because they were brought out on a 65nm process when AMD's 4800 series came out a few days later on a 55nm process. Each company has its own roadmap, and you have to look at the progress from release to release and compare whatever is available on the market. Whenever Kepler arrives, we'll be able to see what NVIDIA can do on the 28nm process; until then, it's all speculation about what a "full process node drop" should entail performance-wise this generation.

I remember already stating this position (since I'm coming from an AMD card anyway) and being basically called a shill.

Looking at the previous AMD cards, the performance increase is consistent across generations. It's just amusing that people argue it isn't consistent when compared to nVidia (which it still is, when you factor in that nVidia always had the crown at the end of a generation).

But everyone spins it so they're right. Change the conditions until you win, right?
 

MrK6

Diamond Member
Aug 9, 2004
4,458
4
81
So if the NVIDIA flagship card is 30% faster than the 7970, is it a failure? Or is that new card's competition the 6970?

So the 7970 competes with the GTX 580, and the GTX 680 (assuming that is the name) competes with the 6970?

So by that logic, if the GTX 680 is 40% faster than the 6970, being only as fast as the 7970, it is a win?

And did the 5870 compete with the GTX285 but the GTX480 competed with the 4890?

In my opinion products compete with existing products.

At the moment the 7970 seems to be a solid product, and a better product and value (at least in my opinion) than the GTX 580, which seems to be its competitor in that price bracket.

When the GTX680 arrives we compare it against the 7970.
Exactly. :thumbsup:
So what is it, you can compare future GPUs, you can't compare future GPUs? Make up your mind kid...
There's no comparison made to current cards in that second quotation. If you're trying to argue an inconsistency, you didn't find one.
I remember already stating this position (since I'm coming from an AMD card anyway) and being basically called a shill.

Looking at the previous AMD cards, the performance increase is consistent across generations. It's just amusing that people argue it isn't consistent when compared to nVidia (which it still is, when you factor in that nVidia always had the crown at the end of a generation).

But everyone spins it so they're right. Change the conditions until you win, right?
I agree, and suggest you argue based on logic so as to keep people honest :thumbsup:.
 

Meaker10

Senior member
Apr 2, 2002
370
0
0
I like comparing overclocked to overclocked for these cards (because, hey, a lot of enthusiasts buy these), and you are looking at a 50%-80% lead for the 7970 over the current 580.
 

Bobisuruncle54

Senior member
Oct 19, 2011
333
0
0
I remember already stating this position (since I'm coming from an AMD card anyway) and being basically called a shill.

Looking at the previous AMD cards, the performance increase is consistent across generations. It's just amusing that people argue it isn't consistent when compared to nVidia (which it still is, when you factor in that nVidia always had the crown at the end of a generation).

But everyone spins it so they're right. Change the conditions until you win, right?

I think the difference this time is the price. It just appears to be much more expensive for the performance gained.
 

railven

Diamond Member
Mar 25, 2010
6,604
561
126
I think the difference this time is the price. It just appears to be much more expensive for the performance gained.

Oh, definitely, it costs more. But so does my gas. So do my groceries and other living expenses. Like I said in another thread, this is the top-tier halo card launching with no competition. Historically, whichever manufacturer launched first raised the price.

Again, this card is launching with inflated prices in place. I had no expectations of AMD thinking about me when they set their price. I'm sure the world would be a happier place if AMD launched the card in the $450 range, and what would that get them? I'd predict we'd start reading "something must be wrong for them to sell it so cheap."

In the end, the product will drop in price WHEN proper competition comes out. Until then, either suck it up (the price) or just ignore it. If you think AMD/nVidia care about you, well, I don't know what to tell you.

EDIT: oh yeah, Go Premiums!
 

BallaTheFeared

Diamond Member
Nov 15, 2010
8,115
0
71
I don't see Nvidia coming anywhere close to the 7970, not in our area anyway. 7970s can gain 75% or more performance over stock through overclocking.

If they're already 20% faster than 580s at stock, the additional OC ceiling puts them at almost double the performance of a 580.

It's possible Nvidia could double the 580's performance, but not with the card they're talking about in the OP; they'd need another massive card to do that, and this is not a massive card.
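The "almost double" figure above follows from compounding the two ratios; both the 20% stock lead and the 75% OC gain are the post's own estimates, not measured numbers:

```python
# Speedups compound multiplicatively rather than adding.
stock_lead = 1.20   # 7970 over GTX 580 at stock (estimate from the post)
oc_gain = 1.75      # overclocked 7970 over stock 7970 (estimate)
total = stock_lead * oc_gain
print(f"OC'd 7970 vs stock GTX 580: {total:.2f}x")  # 2.10x
```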
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
The GTX 580 was the fastest single GPU on the market until the 7970 arrived.

Do you see nVidia spending a lot to compete with themselves? How about AMD? The comparison to the competition is the one that is relevant in any market sense. If a company is only pushing against itself, it isn't likely to last long (not that it doesn't happen on occasion, but it isn't good business).

Interestingly, the GTX 280 was fabricated on the 65 nm node while the 8800GTX/Ultra were fabricated on the 90nm node. So a full node shrink (or is that 2?).

http://en.wikipedia.org/wiki/Die_shrink#Half-shrink

That is a full node shrink, and an extremely interesting chart you linked to-

http://www.anandtech.com/show/2549/11

~50%-100% in almost every GPU-strained bench (QW:ET being a notable exception). The page I linked isn't the biggest gap; I just try to be reasonable and link Crysis as an example of GPU limitations.

If we want to compare node jumps we either compare at first card in a node or at last card on a node.

The 285 *was* nV's first high-end 55nm part; I was careful to make sure I took that into consideration.

I think what Ben means (could be wrong here) is that it is a catastrophic failure when a competitors top new gen flagship is only at maximum 35% faster than the competitors last gen flagship.

Thanks for having my back, Keys, but they have it right this time.

In this industry the limiting factor is overwhelmingly process availability. This generation, for the first time I can recall, we missed a half-node move and the only option was a full node drop. Both AMD and nVidia had an opportunity to make enormous progress with such an opening. It looks like AMD didn't redesign when the half node got canned and just went with their original plans. That is smart in terms of being first to market, for obvious reasons, and the fact that the cards are showing enormous OC potential backs this up. The problem is going to come when nV launches their parts, which, barring a major stumble by them, are going to handily best AMD's parts in the same price segment. I think AMD will likely have a refresh ready to roll by then, or perhaps the part they were planning to release on this process in the first place. Either way, I was shocked when I saw how small the performance difference was given the huge die advantage they had.

People can try and act like what I am saying is absurd; go look at what these parts are hitting when they are OC'd and then come back and talk.
 

GaiaHunter

Diamond Member
Jul 13, 2008
3,634
181
106
I think the difference this time is the price. It just appears to be much more expensive for the performance gained.

Interestingly, talking in relative terms, the 7970 isn't more expensive (it's actually less expensive) than the 5870 was when compared to the GTX 285 and the 4870/4890 (let's not forget the 4870 was $160 and the 4890 was $180, while the 5870 was twice as much at $380), although in absolute terms it is, by far.

On the other hand, the 4800 series undercut prices by quite a good margin, for market-share and mind-share reasons. I just don't know if it is valid to hold a company to the 4000-series standard, although it would be nice if NVIDIA fanned the flames this time.
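The "twice as much" comparison above is a simple price ratio; using only the numbers quoted in the post:

```python
# Prices as quoted in the post above (USD).
prices = {"HD 4870": 160, "HD 4890": 180, "HD 5870": 380}

# "Twice as much" is the ratio of the 5870's price to its predecessors'.
for prev in ("HD 4870", "HD 4890"):
    ratio = prices["HD 5870"] / prices[prev]
    print(f"HD 5870 vs {prev}: {ratio:.2f}x")
```

Both ratios come out at a bit over 2x, which is where the "twice as much" characterization comes from.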
 

SirPauly

Diamond Member
Apr 28, 2009
5,187
1
0
A comparison is only valid and useful when the circumstances are similar and any possible difference is taken into account.

The GTX 580 was the fastest single GPU on the market until the 7970 arrived.

The 2900XT wasn't the fastest single GPU card on the market in your hypothetical scenario; the 8800 Ultra was.

Interestingly, the GTX 280 was fabricated on the 65 nm node while the 8800GTX/Ultra were fabricated on the 90nm node. So a full node shrink (or is that 2?).

And look



Is that 37% over the 8800GTX? So 30% or less over the 8800Ultra?

Now that is a similar comparison.

Take into consideration that these are watered-down results, including 1024 x 768 and a lack of enhancements. Using these makes them all look much closer than they really are.
 

SirPauly

Diamond Member
Apr 28, 2009
5,187
1
0
On the other hand, the 4800 series undercut prices by quite a good margin, for market-share and mind-share reasons. I just don't know if it is valid to hold a company to the 4000-series standard, although it would be nice if NVIDIA fanned the flames this time.

Why would a consumer defend premium pricing strongly? I understand why an AIB would -- or an IHV -- or an AMD loyalist would at this time because that's the direction the company has taken now.
 

railven

Diamond Member
Mar 25, 2010
6,604
561
126
Why would a consumer defend premium pricing strongly? I understand why an AIB would -- or an IHV -- or an AMD loyalist would at this time because that's the direction the company has taken now.

Go Premiums!

Ignore the fact that this is the better buy in the price bracket for some buyers. Nope, we're AMD loyalists. Gotcha.

Shill on, brothers!

Go Premiums!
 

GaiaHunter

Diamond Member
Jul 13, 2008
3,634
181
106
Do you see nVidia spending a lot to compete with themselves? How about AMD? The comparison to the competition is the one that is relevant in any market sense. If a company is only pushing against itself, it isn't likely to last long (not that it doesn't happen on occasion, but it isn't good business).

That is exactly why NVIDIA didn't improve performance on the 80nm shrink, while it certainly did from the 65nm->55nm shrink.

http://www.anandtech.com/show/2549/11

~50%-100% in almost every GPU-strained bench (QW:ET being a notable exception). The page I linked isn't the biggest gap; I just try to be reasonable and link Crysis as an example of GPU limitations.

Unfortunately, techpowerup didn't have a breakdown by resolution then. Sure, if you overload the 8800GTX/9800GTX memory limits, the GTX 280 crushes them.

Likewise, if you use the super resolutions of today (3x screens), the GTX 580 loses by default (aside from custom cards).

The 285 *was* nV's first high-end 55nm part; I was careful to make sure I took that into consideration.

Sure, but you could also use the example of 65nm->40nm as well, after all it is more recent.
 

MrK6

Diamond Member
Aug 9, 2004
4,458
4
81
Do you see nVidia spending a lot to compete with themselves? How about AMD? The comparison to the competition is the one that is relevant in any market sense. If a company is only pushing against itself, it isn't likely to last long (not that it doesn't happen on occasion, but it isn't good business).
I think you might want to look up Intel...
http://en.wikipedia.org/wiki/Die_shrink#Half-shrink

That is a full node shrink, and an extremely interesting chart you linked to-

http://www.anandtech.com/show/2549/11

~50%-100% in almost every GPU-strained bench (QW:ET being a notable exception). The page I linked isn't the biggest gap; I just try to be reasonable and link Crysis as an example of GPU limitations.

The 285 *was* nV's first high-end 55nm part; I was careful to make sure I took that into consideration.



Thanks for having my back, Keys, but they have it right this time.

In this industry the limiting factor is overwhelmingly process availability. This generation, for the first time I can recall, we missed a half-node move and the only option was a full node drop. Both AMD and nVidia had an opportunity to make enormous progress with such an opening. It looks like AMD didn't redesign when the half node got canned and just went with their original plans. That is smart in terms of being first to market, for obvious reasons, and the fact that the cards are showing enormous OC potential backs this up. The problem is going to come when nV launches their parts, which, barring a major stumble by them, are going to handily best AMD's parts in the same price segment. I think AMD will likely have a refresh ready to roll by then, or perhaps the part they were planning to release on this process in the first place. Either way, I was shocked when I saw how small the performance difference was given the huge die advantage they had.

People can try and act like what I am saying is absurd; go look at what these parts are hitting when they are OC'd and then come back and talk.
This line of thinking is incorrect if you're assuming that a full node shrink automatically guarantees a comparable architecture improvement as well. You're lumping too many variables into "full node shrink," which alone didn't produce the performance increases we've seen from generation to generation. If you want to make the argument that AMD's architecture isn't as refined as possible on 28nm, OK, but that needs more proof than an assumption; NVIDIA's Kepler performance would be a good comparison to validate your claim. It could also be that we're starting to hit a wall in maximizing the performance efficiency of architectures: as GPU designs become more advanced, it's more difficult to improve them further. Again, whenever NVIDIA actually shows Kepler, we'll have a better idea.

Really? Try reading it again, it will come to you...
Neither Kepler nor Tesla has been released, and in that second quotation he compared unreleased GPUs against other unreleased GPUs. What's so difficult about that? Again, if you have a quotation that actually proves what you're accusing IonusX of, post it. Otherwise stop derailing the thread and wasting forum space. End of discussion.
 

SirPauly

Diamond Member
Apr 28, 2009
5,187
1
0
That is exactly why NVIDIA didn't improve performance on the 80nm shrink, while it certainly did from the 65nm->55nm shrink.



Unfortunately, techpowerup didn't have a breakdown by resolution then. Sure, if you overload the 8800GTX/9800GTX memory limits, the GTX 280 crushes them.

Likewise, if you use the super resolutions of today (3x screens), the GTX 580 loses by default (aside from custom cards).



Sure, but you could also use the example of 65nm->40nm as well, after all it is more recent.

Fortunately, ComputerBase.de does have resolution and enhancement breakdowns, and it uses GPU-limited examples. Give that site a try instead of using watered-down results.
 

GaiaHunter

Diamond Member
Jul 13, 2008
3,634
181
106
Take into consideration that these are watered-down results, including 1024 x 768 and a lack of enhancements. Using these makes them all look much closer than they really are.

I know; unfortunately TPU, which has been the site I've been linking to, didn't do the breakdown by resolution then.

Still, we need to consider that the GPU industry has been advancing really fast and is now slowing down. Some will say it is because of consoles and whatnot, but the reality is that it's impossible to keep doubling performance.

If you go further back, considering both the speed and IQ advances, even jumps like the 8800GTX -> GTX280 dwindle.

EDIT: But regardless, look at the jump between the 3870->4870 on the same node. The HD 2900 sucked. You can get 100% more performance with a node jump, but you can also get 100% more performance on the same node!!!

Why would a consumer defend premium pricing strongly? I understand why an AIB would -- or an IHV -- or an AMD loyalist would at this time because that's the direction the company has taken now.

There you go. I have never bought a 7970 or a GTX 580 or any GPU for over $350 in my whole life.

So, the people you need to convince are the ones who buy or are considering buying these cards.

No one is defending price premiums; people are just explaining reality. When there is competition, supply and demand rule the prices, unless there is price fixing, which is illegal (dumping is illegal as well).

If you have any legal document where AMD guaranteed you they would never price their cards over a certain value, I encourage you to take legal action.

Otherwise, is the price of the 7970 justified considering market conditions?
If it is, what is wrong?

Are you hoping people will refuse to buy the 7970 and buy the GTX 580 instead as a protest? The GTX 580 is even more overpriced.

And people already buy cheaper cards in higher volumes.

Your indignation would reach many more people if cards that cost between $100 and $250 didn't already allow people to play their games comfortably up to 1080p.

It is a bit like the gap between richer and poorer: only idiots care about the gap itself; what matters is how much the poorer have. If the poorer don't have food, housing, etc., then it is outrageous, but if they already have the basic necessities and can even afford small luxuries, complaining about the gap is ridiculous.

It is like "poor US citizens" complaining they are poor and complaining about inequality while wearing expensive brands and using expensive gadgets, while the really poor people of the world are starving to death.

To go back on topic: who cares if a 7970 costs $500 or $1000, when two 6870s that cost $320 give similar performance?

That is why your crusade doesn't fly.
 
Last edited: