nVidia: "We expected more from the 7970"


bryanW1995

Lifer
May 22, 2007
11,144
32
91
I honestly don't see what's so difficult a concept here. If the game is coded to run better in multi-threaded DX11, and NVIDIA's drivers support multi-threaded DX11 while AMD's don't, then the game is automatically going to run better on NVIDIA's cards. Therefore, it's no longer a comparison of the cards, but of the drivers. It'd be like running a single-card triple-monitor benchmark and then declaring AMD the winner. Of course AMD wins; NVIDIA doesn't support that at all.

Furthermore, you didn't read Ryan Smith's write-up, as he stated that it's optional. Once more programs take advantage of it, it will probably be in AMD's drivers as well.

Honestly, don't feed the troll. He comes into every thread about a 7970 and cries about his GTX 470's, makes up performance claims, and then never owns up to them, despite getting destroyed over and over: http://forums.anandtech.com/showthread.php?t=2221009&page=2 .

But wasn't civ 5 one of the best-selling games of 2010? How successful does a game need to be to get AMD's driver team to pay attention to it? And nvidia gpus didn't get the code for multi-threaded rendering until after civ5 came out. In fact, they only fully enabled it a week before Ryan's post that you and I linked, so probably somewhere around April 1, 2011. It's been nearly a year and AMD still hasn't done anything about it. Many, possibly even most, reputable review sites use civ5 as a benchmark game, so now the real question becomes "Is there something about AMD dx11 gpus that prevents them from enabling MTR, or is their driver team too weak/understaffed to do it?".
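For context on what "enabling MTR" actually involves on the application side: a D3D11 game can ask the driver at startup whether it natively supports the command lists that multi-threaded rendering is built on, and fall back to single-threaded submission if it doesn't. A minimal C++ sketch (the helper name is mine; this is the standard feature query, not Firaxis's actual code):

```cpp
// Hypothetical startup check a game engine might run to decide whether
// native multi-threaded rendering is worth enabling.
#include <d3d11.h>

bool DriverSupportsCommandLists(ID3D11Device* device)
{
    D3D11_FEATURE_DATA_THREADING threading = {};
    HRESULT hr = device->CheckFeatureSupport(
        D3D11_FEATURE_THREADING, &threading, sizeof(threading));
    // DriverCommandLists == TRUE: the driver natively supports command
    // lists recorded on deferred contexts. FALSE: the D3D11 runtime
    // emulates them, with far smaller gains.
    return SUCCEEDED(hr) && threading.DriverCommandLists != FALSE;
}
```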

Good to know that info about ballathefeared. Apparently, the only thing he "feared" is the truth.

Do you think that it's unfair to show benchmarks where the 6970 and 6950 maul the GTX580 and GTX570 due to VRAM limitations? Why are most reviewers doing their best to avoid such situations?

Those are also legitimate situations, and I'd like to see them more often in benchmarks. If nothing else, it could help people decide between the 2GB 6950 and the 3GB GTX 580, if that better suited their needs.
 

bryanW1995

Lifer
May 22, 2007
11,144
32
91
I wasn't aware Nvidia refined GF100 :hmm:

All my cards were bought after the 5 series came out, and all of them have been nothing short of fantastic, with none of the problems others talked about. Water didn't solve any problems; it just took some beast clockers and made them stupid beast clockers lol.

580 is really just a refinement of the original fermi process. I think someone earlier called it fermi 1.5, and that's a very apt description. NV got their yields up, their voltages down, and generally fixed all the crap that was crippling gtx 470 and 480. So if you bought an early-edition gtx 470/480, you got a hot/power-hungry/loud beast with little overclocking room. But if you got a more recent one, you got a cool/less power-hungry/relatively quiet card that might overclock a lot. This issue was kind of swept under the rug with gtx 460 and, later, gtx 570/580 taking over the limelight, but the process improvements definitely benefited those of us who were late adopters of fermi 1 as well.

5870 -> 6970 wasn't just an iterative improvement; it was a new architecture. I know the end result looks like iterative improvement, but under the hood it's a pretty significant change.

It was allegedly a new architecture, but really it was just a minor refinement of the old architecture, wasn't it? They milked it for all it was worth and talked about all the great improvements that would come of it, but it was really kind of a letdown in my eyes. I still remember the deafening silence from the AMD fanbois when the 6970 came out late and slow (hmmm, maybe there is some sort of correlation between late and disappointing in the gpu world...). Going from 5 to 4 wasn't really that much of a change anyway. Going from either of those to GCN is a much bigger change.
 

SirPauly

Diamond Member
Apr 28, 2009
5,187
1
0
580 is really just a refinement of the original fermi process. I think someone earlier called it fermi 1.5, and that's a very apt description. NV got their yields up, their voltages down, and generally fixed all the crap that was crippling gtx 470 and 480. So if you bought an early-edition gtx 470/480, you got a hot/power-hungry/loud beast with little overclocking room. But if you got a more recent one, you got a cool/less power-hungry/relatively quiet card that might overclock a lot. This issue was kind of swept under the rug with gtx 460 and, later, gtx 570/580 taking over the limelight, but the process improvements definitely benefited those of us who were late adopters of fermi 1 as well.



It was allegedly a new architecture, but really it was just a minor refinement of the old architecture, wasn't it? They milked it for all it was worth and talked about all the great improvements that would come of it, but it was really kind of a letdown in my eyes. I still remember the deafening silence from the AMD fanbois when the 6970 came out late and slow (hmmm, maybe there is some sort of correlation between late and disappointing in the gpu world...). Going from 5 to 4 wasn't really that much of a change anyway. Going from either of those to GCN is a much bigger change.

That's not necessarily true, because I purchased a GTX 470 Twin Frozr 2 and could over-clock it to over 800 stable, though I settled on 750, and it still runs cool and quiet.
 

BallaTheFeared

Diamond Member
Nov 15, 2010
8,115
0
71
I'm aware of GF100 -> GF110; I wasn't aware of a refinement within GF100 that improved the samples after the release ones.

Thanks for that info


Maybe there were some bad ones that barely met requirements, but they pushed them out anyway because they had such low yields?

I dunno, I've had five 470's and only one was a dud; that one didn't take kindly to added voltage, even at stock clocks.
 

Aikouka

Lifer
Nov 27, 2001
30,383
912
126
I'm not too sure why people are so adamant against a game being used because of a difference in driver maturity. When did we suddenly start ignoring drivers as a potential reason to purchase one piece of hardware over another? I wish someone had given me a little insider information on how bad the 8800GTX drivers were (at release) before I bought one!

However, I do believe that it is important to substantiate differences in performance between video cards when necessary. This belief would include a situation like Civilization V. A user could choose to ignore the performance benefits in the game if they never play it, and wish to consider that driver optimization as an "outlier" because very few games use that feature.
 

Lepton87

Platinum Member
Jul 28, 2009
2,544
9
81
It was allegedly a new architecture, but really it was just a minor refinement of the old architecture, wasn't it? They milked it for all it was worth and talked about all the great improvements that would come of it, but it was really kind of a letdown in my eyes. I still remember the deafening silence from the AMD fanbois when the 6970 came out late and slow (hmmm, maybe there is some sort of correlation between late and disappointing in the gpu world...). Going from 5 to 4 wasn't really that much of a change anyway. Going from either of those to GCN is a much bigger change.

I wouldn't describe going from VLIW5 to VLIW4 as a minor refinement; it's a pretty big change that yielded little improvement. And it wasn't really slow: it was as fast as nvidia's past flagship, the GTX480, which is probably the card they targeted as competition, and it was both much smaller and more energy efficient than the GTX480. Unfortunately for AMD, GF110 happened. You can't realistically expect a 3XX mm2 chip to be faster than a 5XX mm2 chip on the same process. Some AMD fans had overblown expectations that it would be faster than the GTX580. Remember that Cayman was originally designed to be fabricated on the canned 32nm process; that was the reason it was late and a bit underwhelming. There's only so much you can do on the same process without increasing die size too much. AMD's strategy is to have dual-gpu designs compete as possible kings of the hill, and it kind of worked. For long periods of time AMD had the fastest cards out there, although nvidia eventually responded with the same.
 

Skurge

Diamond Member
Aug 17, 2009
5,195
1
71
Most 7970's that I've seen on water hit 1300 core, which is a 41% overclock; I actually compared my cards to those results a few times. LP2 is against a 1300-core 7970, and Shogun 2 still shows the 470s being 23% faster (assumed, since that user didn't want to share all their settings; mine were maxed).

No question the 7970 is a nice card, but the setup you're talking about with a full cover block is going to run you at least $675 for the card and block alone.

You can't max out shogun2 on a card with less than 2GB of ram. http://www.computerbase.de/artikel/...olor-radeon-hd-6870-x2/26/#abschnitt_shogun_2
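Side note: if anyone wants to see what their own card actually reports for dedicated VRAM (the number these 1.5GB-vs-2GB arguments hinge on), DXGI will tell you. A small C++ sketch, assuming the Windows SDK headers:

```cpp
// Prints the dedicated VRAM each GPU adapter reports, via DXGI.
#include <dxgi.h>
#include <cstdio>
#pragma comment(lib, "dxgi.lib")

int main()
{
    IDXGIFactory1* factory = nullptr;
    if (FAILED(CreateDXGIFactory1(__uuidof(IDXGIFactory1),
                                  reinterpret_cast<void**>(&factory))))
        return 1;

    IDXGIAdapter1* adapter = nullptr;
    for (UINT i = 0;
         factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        std::printf("%ls: %llu MB dedicated VRAM\n", desc.Description,
                    static_cast<unsigned long long>(
                        desc.DedicatedVideoMemory / (1024 * 1024)));
        adapter->Release();
    }
    factory->Release();
    return 0;
}
```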
 

MrK6

Diamond Member
Aug 9, 2004
4,458
4
81
So your logic is: "if either company sucks at it, then it shouldn't be compared. AMD had a 6-month lead in DX11 hardware, yet has fallen behind Nvidia in DX11 support and because of this, no future games with multi-threaded rendering should be used to compare Nvidia and AMD GPU's until AMD codes in said industry-standard feature."

Way to set up a slippery slope argument, because I never said anything of the sort. The point is that reviewers cannot possibly benchmark every application out there, and therefore should set up a benchmark suite to observe performance of a part that can be generalized to as many applications as possible. That said, Civ V is kind of a niche situation and it's obvious that AMD is behind at any tier, so why waste the time? All it's saying is "AMD doesn't support multi-threaded DX11 yet." No one would recommend an AMD graphics card to someone who solely wants to play Civ V, so again, why waste the time?

And if AMD never codes in said industry standard feature, then???????????????

Then that would be a major problem, as no doubt more games will support it and they will fall further and further behind.

On the other hand, Nvidia does not support 3+ monitor setups with 1 card. Therefore no multi-monitor setups should ever be used when benchmarking both Nvidia and AMD hardware, because Nvidia can't do it so it can't possibly amount to useful information.

You misunderstand the analogy. No one would want to read a single card three-monitor review where AMD had the only cards scoring and NVIDIA had a big fat "0" in every bench because they simply don't support it. Put another way, what if reviewers started using MLAA to replace 4x MSAA in benchmarks when it was released because they found it more efficient or liked the IQ better, but still used 4x MSAA on NVIDIA? The NVIDIA fanboys would come running in foaming at the mouth. It's funny how NVIDIA fans used the same argument for tessellation until Fermi came out; I just love the hypocrisy.

What are you talking about? Ryan Smith actually posted an in-depth description of this issue almost a year ago here in the VC&G forums. What actually happened is that civ 5 implemented proper multi-threaded rendering. Nvidia gpus can take advantage of this, leading to better performance in civ5 vs AMD gpus which STILL TO THIS DAY EVEN WITH 7970 DON'T IMPLEMENT MULTI-THREADED RENDERING. Civ5 is one of my favorite games, I generally prefer AMD cards, but guess what I've been using for the past few years?

And this goes with what I said above: if you're into Civ V, it'd be foolish to buy an AMD card.

That's exactly what I linked to...

And Ryan mentioned during the 7970 review that AMD still hasn't implemented multi-threaded rendering. Hopefully if enough of us bitch/moan/complain about it, they'll eventually get around to it.

It'll be a combination of market forces, outcry, and resources, as always. As it stands, if they think it's worth it to take a hit in Civ 5 because they have more pressing things to work on, so be it. However, that's only delaying the inevitable. Until then, voting with your wallet is a good idea.

Wow, that's funny, you ninja'd your own criticism!

? That post isn't even edited...

Multi-threaded rendering isn't some esoteric BS 3dmark 2006 application with no basis in reality; it's used in one of the most popular games out there and offers a huge boost in performance for any game that properly implements it. It's like the anti-physx b/c it has a real-world performance boost and a fantastic positive impact on the gaming experience for those who use it. Why isn't that a reasonable comparison to make? Maybe if reviewers keep using civ5 then eventually AMD will pull their heads out of their asses and implement this dx11 feature in their cards.

Where did anyone claim otherwise? Like I said, it's one game. And PhysX was, is, and will be a load of garbage until they add effects to a game that aren't easily done through alternatives for a fraction of the performance cost. Or if it supports more than a game or two a year. But that's not what this thread is about.

And yet you had no reservations comparing the 6950 to the 7970 in Crysis 2 and based on that (and some other games where the 7970 does very well) saying that it is 100% faster. And somehow you completely ignored games like metro2033 where the performance difference is much smaller. We all know that Crysis 2 employs ridiculous levels of tessellation not to improve visuals but to improve nvidia performance relative to radeons.

It is staggeringly faster, it's been shown, what's so difficult about that? Is it 100% faster in every benchmark out there? No. Do I care? No. I'm seeing amazing 100%+ improvement in the games I play at the settings I play at. You also conveniently left out BF3, which shows the same performance improvement but uses no tessellation. Are you hypocritical or ill-equipped for the discussion?

But wasn't civ 5 one of the best-selling games of 2010? How successful does a game need to be to get AMD's driver team to pay attention to it? And nvidia gpus didn't get the code for multi-threaded rendering until after civ5 came out. In fact, they only fully enabled it a week before Ryan's post that you and I linked, so probably somewhere around April 1, 2011. It's been nearly a year and AMD still hasn't done anything about it. Many, possibly even most, reputable review sites use civ5 as a benchmark game, so now the real question becomes "Is there something about AMD dx11 gpus that prevents them from enabling MTR, or is their driver team too weak/understaffed to do it?".

How would I know? I'm not on AMD's driver team. My point still stands that it lacks as a scientific comparison. I'm not arguing that MTR shouldn't be in AMD's drivers; it should.

Good to know that info about ballathefeared. Apparently, the only thing he "feared" is the truth.

You don't even know half the story. He was claiming that his GTX 470's were "30-50%" faster than a 7970, and when finally put to the test, he tried to hide his benchmark settings and constantly change his data, argument, and parameters. When I finally benchmarked 3DMark11, Crysis 2, and STALKER, which showed his setup wasn't even faster than a single 7970, never mind 30-50% faster, he threw a fit. You'll notice now he's moved on to Shogun 2, and honestly, someone with a broken ego and a personal vendetta against anyone with more than them is not worth wasting time on.

Those are also legitimate situations, and I'd like to see them more often in benchmarks. If nothing else, it could help people decide between the 2GB 6950 and the 3GB GTX 580, if that better suited their needs.

Do you really think someone looking to spend $250 for a 6950 2GB is also considering a $550 GTX 580 3GB? I was agreeing with most of your points until this.

I'm not too sure why people are so adamant against a game being used because of a difference in driver maturity. When did we suddenly start ignoring drivers as a potential reason to purchase one piece of hardware over another? I wish someone had given me a little insider information on how bad the 8800GTX drivers were (at release) before I bought one!

However, I do believe that it is important to substantiate differences in performance between video cards when necessary. This belief would include a situation like Civilization V. A user could choose to ignore the performance benefits in the game if they never play it, and wish to consider that driver optimization as an "outlier" because very few games use that feature.

I agree. When you buy a video card, you're also buying the drivers that come with it, and they are part of your investment. My point isn't that people shouldn't be aware that AMD's drivers don't support multi-threaded rendering, or that it's OK they don't. My point is that testing Civ V over and over again without any change/updates in the drivers seems like a waste of resources since no new information is gleaned.
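(For anyone following along who isn't sure what "MTR" concretely means: in D3D11, worker threads record draw calls into deferred contexts, and the resulting command lists are replayed on the immediate context. A rough C++ sketch, with a single worker shown and the actual threading omitted:)

```cpp
// Sketch of D3D11 multi-threaded rendering via deferred contexts.
// Assumes device and immediate context were created elsewhere.
#include <d3d11.h>

void RecordAndSubmit(ID3D11Device* device, ID3D11DeviceContext* immediate)
{
    ID3D11DeviceContext* deferred = nullptr;
    if (FAILED(device->CreateDeferredContext(0, &deferred)))
        return;

    // ... on a worker thread: set state and issue draw calls on `deferred` ...

    ID3D11CommandList* commandList = nullptr;
    if (SUCCEEDED(deferred->FinishCommandList(FALSE, &commandList)))
    {
        // Back on the render thread: replay the recorded work.
        immediate->ExecuteCommandList(commandList, FALSE);
        commandList->Release();
    }
    deferred->Release();
}
```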
 

BallaTheFeared

Diamond Member
Nov 15, 2010
8,115
0
71
I was talking about the actual settings, to wit: DoF, tessellation, soft shadows. I was well aware they were using 4xAA.

My apologies, I should know better than to omit such information in a place filled with people who pounce on anything and everything.

At 1080p it runs fine with 1.5GB, and when I say fine I mean it runs awful: 46 fps with a 6990, do not want.
 

Grooveriding

Diamond Member
Dec 25, 2008
9,108
1,260
126

:\

Imho,

Valid? Valid in what context?

For forums, it is fun and offers a gauge of sorts for comparing one's choice and its popularity - valid in this context.

But it certainly isn't valid for total sales from the company; for that, one may look to Jon Peddie or Mercury Research.

So in that post you basically say what I have been telling you: you can't use overall sales and revenue to give you an indication of how one segment of products, in this case DX11 discrete cards, fared against another.

But Steam is a good gauge of the popularity of said cards.



So much



So little



As always more data is welcome and appreciated!

Premiums are significant!

Node Change.
 

Lepton87

Platinum Member
Jul 28, 2009
2,544
9
81
It is staggeringly faster, it's been shown, what's so difficult about that? Is it 100% faster in every benchmark out there? No. Do I care? No. I'm seeing amazing 100%+ improvement in the games I play at the settings I play at. You also conveniently left out BF3, which shows the same performance improvement but uses no tessellation. Are you hypocritical or ill-equipped for the discussion?

The cases where an OC 7970 is 100% faster than an OC 6950 are the exception rather than the rule. You could have just said that it is 100% faster in the games you play and not made a general statement that it is 100% faster. Using your logic, one could conclude that the GTX580 is 50% faster than the 6970; all it would take is picking a few games where the gtx580 does a lot better than the 6970, like Crysis 2, Lost Planet, and CIV5.
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
I agree. When you buy a video card, you're also buying the drivers that come with it, and they are part of your investment. My point isn't that people shouldn't be aware that AMD's drivers don't support multi-threaded rendering, or that it's OK they don't. My point is that testing Civ V over and over again without any change/updates in the drivers seems like a waste of resources since no new information is gleaned.

Huh. Silly me for thinking that it makes more sense for review sites like Anandtech to use a very popular and technically advanced game as a benchmark, not only to show the many, many people how said game performs, but to also serve as a reminder that drivers do matter and that AMD (or Nvidia, depending on the game and situation) needs to be aware of their lack of support.
 

bryanW1995

Lifer
May 22, 2007
11,144
32
91
I wouldn't describe going from VLIW5 to VLIW4 as a minor refinement; it's a pretty big change that yielded little improvement. And it wasn't really slow: it was as fast as nvidia's past flagship, the GTX480, which is probably the card they targeted as competition, and it was both much smaller and more energy efficient than the GTX480. Unfortunately for AMD, GF110 happened. You can't realistically expect a 3XX mm2 chip to be faster than a 5XX mm2 chip on the same process. Some AMD fans had overblown expectations that it would be faster than the GTX580. Remember that Cayman was originally designed to be fabricated on the canned 32nm process; that was the reason it was late and a bit underwhelming. There's only so much you can do on the same process without increasing die size too much. AMD's strategy is to have dual-gpu designs compete as possible kings of the hill, and it kind of worked. For long periods of time AMD had the fastest cards out there, although nvidia eventually responded with the same.

Huh? There was basically no reason for the switch to vliw4. They still increased the size of the gpu anyway. IMO the big disappointment of the 6970 was that it was a "new" process, they had 15 months to build it, and it was larger than the card it replaced, yet it was only 15-20% faster. Factor in the time it took to make it and the larger die size, plus the immediate switch to GCN, and vliw4 starts to look kind of like a turd, doesn't it? Unless, of course, it really is just a minor iterative improvement to vliw5. I was simply giving AMD the benefit of the doubt.
 

Lepton87

Platinum Member
Jul 28, 2009
2,544
9
81
Huh? There was basically no reason for the switch to vliw4. They still increased the size of the gpu anyway. IMO the big disappointment of the 6970 was that it was a "new" process, they had 15 months to build it, and it was larger than the card it replaced, yet it was only 15-20% faster. Factor in the time it took to make it and the larger die size, plus the immediate switch to GCN, and vliw4 starts to look kind of like a turd, doesn't it? Unless, of course, it really is just a minor iterative improvement to vliw5. I was simply giving AMD the benefit of the doubt.

5870 and 6970 are both fabricated on the same process, so big improvements were unlikely. Yes, they had a lot of time, but they started designing that chip for the 32nm process, which was later canned. IMHO Cayman doesn't look like a turd; it's still more efficient per mm2 and per watt than fermi. It just looks like a needless expense on AMD's part.
 

bryanW1995

Lifer
May 22, 2007
11,144
32
91
lots and lots of words

I'll keep my comments to two parts of your book.

1. My entire point was that civ5 is entirely relevant to use as a benchmark. It is an AAA game, it uses a lot of advanced dx11 features, and everything it uses is relevant to improving IQ, FPS, or both. You didn't say anything at all to refute that. You earlier stated that nvidia somehow convinced firaxis to implement a bunch of nvidia-specific improvements into civ5 for their own advantage, yet you and I both later linked to an article which refuted that claim. Are you now saying that nvidia did nothing sneaky here, but that their driver team was more on the ball than AMD's at implementing MTR, or do you still somehow maintain that nvidia is at fault for AMD's incompetence on this issue?

2. When I was referring to benchmarks with larger memory sizes, I only meant that somebody looking at a 6950 1gb vs 2gb should know which games/settings cause the 1gb to run out of memory, and ditto for the 1.25gb vs 2.5gb gtx 570, or 1.5gb vs 3gb gtx 580, etc. If there are enough examples where 1gb or even 1.25-1.5gb isn't enough, then maybe a semi-casual buyer will spend the extra $$ to get the larger memory size, right?
 

bryanW1995

Lifer
May 22, 2007
11,144
32
91
5870 and 6970 are both fabricated on the same process, so big improvements were unlikely. Yes, they had a lot of time, but they started designing that chip for the 32nm process, which was later canned. IMHO Cayman doesn't look like a turd; it's still more efficient per mm2 and per watt than fermi. It just looks like a needless expense on AMD's part.

4890 improved more vs 4870, and it was allegedly on the same architecture with some super-sneaky tweaks factored in. I'm not saying that going from vliw5 to vliw4 wasn't a change; I'm just saying that it apparently wasn't as earth-shattering as we were led to believe. In fact, I think that if they'd just focused on vliw5 straight through until GCN, they would have had a faster, more competitive 6970 to throw up against nvidia during the last round.
 

Lepton87

Platinum Member
Jul 28, 2009
2,544
9
81
4890 improved more vs 4870, and it was allegedly on the same architecture with some super-sneaky tweaks factored in. I'm not saying that going from vliw5 to vliw4 wasn't a change; I'm just saying that it apparently wasn't as earth-shattering as we were led to believe. In fact, I think that if they'd just focused on vliw5 straight through until GCN, they would have had a faster, more competitive 6970 to throw up against nvidia during the last round.
No, the 4890 didn't improve more upon the 4870; it was basically a 4870 tweaked to achieve higher clock speeds. It had a 13.3% higher clock speed (850 MHz vs. 750 MHz), which resulted in a 10% performance gain. Cayman is on average 15-20% faster, and it also somewhat addresses the biggest flaw of the 5870, namely tessellation performance, so in some cases it is 50% faster.
 

WMD

Senior member
Apr 13, 2011
476
0
0
No, the 4890 didn't improve more upon the 4870; it was basically a 4870 tweaked to achieve higher clock speeds. It had a 13.3% higher clock speed (850 MHz vs. 750 MHz), which resulted in a 10% performance gain. Cayman is on average 15-20% faster, and it also somewhat addresses the biggest flaw of the 5870, namely tessellation performance, so in some cases it is 50% faster.

Mainly tessellation improvements. Raw performance is closer to 10-15% faster than the 5870. Part of that is due to the higher clocks and improved ROPs, besides the slightly more efficient VLIW4 shaders.

AMD is repeating the same strategy this year with the 7970. Instead of going for higher absolute performance, they are allocating more transistor resources to improving tessellation performance in an attempt to overtake Nvidia in the titles where Nvidia is stronger. This explains the underwhelming results in titles like Battlefield 3 and Dirt 2/3, where tessellation was never a bottleneck.
 

nanaki333

Diamond Member
Sep 14, 2002
3,772
13
81
Way to set up a slippery slope argument, because I never said anything of the sort. The point is that reviewers cannot possibly benchmark every application out there, and therefore should set up a benchmark suite to observe performance of a part that can be generalized to as many applications as possible. That said, Civ V is kind of a niche situation and it's obvious that AMD is behind at any tier, so why waste the time? All it's saying is "AMD doesn't support multi-threaded DX11 yet." No one would recommend an AMD graphics card to someone who solely wants to play Civ V, so again, why waste the time?

Then that would be a major problem, as no doubt more games will support it and they will fall further and further behind.
You misunderstand the analogy. No one would want to read a single card three-monitor review where AMD had the only cards scoring and NVIDIA had a big fat "0" in every bench because they simply don't support it. Put another way, what if reviewers started using MLAA to replace 4x MSAA in benchmarks when it was released because they found it more efficient or liked the IQ better, but still used 4x MSAA on NVIDIA? The NVIDIA fanboys would come running in foaming at the mouth. It's funny how NVIDIA fans used the same argument for tessellation until Fermi came out; I just love the hypocrisy.
And this goes with what I said above: if you're into Civ V, it'd be foolish to buy an AMD card.
That's exactly what I linked to...
It'll be a combination of market forces, outcry, and resources, as always. As it stands, if they think it's worth it to take a hit in Civ 5 because they have more pressing things to work on, so be it. However, that's only delaying the inevitable. Until then, voting with your wallet is a good idea.

Because people don't want to be told to go out and buy a new graphics card if they want to play a game properly. I buy games on sale on steam all the time that I wouldn't normally buy. I'd be pissed if I found out my $800 worth of video cards can't run the game maxed out with no slowdown compared to even a $300 nvidia card. </sarcasm on prices>
 

Cookie Monster

Diamond Member
May 7, 2005
5,161
32
86
No, the 4890 didn't improve more upon the 4870; it was basically a 4870 tweaked to achieve higher clock speeds. It had a 13.3% higher clock speed (850 MHz vs. 750 MHz), which resulted in a 10% performance gain. Cayman is on average 15-20% faster, and it also somewhat addresses the biggest flaw of the 5870, namely tessellation performance, so in some cases it is 50% faster.

Wasn't the HD4890 based on a tweaked RV770 chip (RV790) that allowed for much higher clock frequencies? I would call that a huge improvement especially since it was the same architecture on the same process node.

Cayman on the other hand increased its transistor count by ~22% on the same process node, resulting in a 16% increase in die size while netting ~15% more performance, while also changing some of the core architecture for compute and some DX11 enhancements (tessellation especially). The performance it brought is less impressive since the changes didn't bring anything groundbreaking except fixing some scenarios where the tessellation unit was the bottleneck. Like Bryan pointed out, I wonder what it would have been like if they had gone with VLIW5 all the way instead of sacrificing performance/transistor for some compute capabilities.
 

bryanW1995

Lifer
May 22, 2007
11,144
32
91
No, the 4890 didn't improve more upon the 4870; it was basically a 4870 tweaked to achieve higher clock speeds. It had a 13.3% higher clock speed (850 MHz vs. 750 MHz), which resulted in a 10% performance gain. Cayman is on average 15-20% faster, and it also somewhat addresses the biggest flaw of the 5870, namely tessellation performance, so in some cases it is 50% faster.

I just looked up tpu's 4890 launch article to prove you wrong, and instead proved you right.

http://www.techpowerup.com/reviews/Powercolor/HD_4890_PCS/28.html

Of course, it would have been much much cheaper for them to just bump up the clocks and throw a bit more tess hardware onto the card...

edit: ah, cookiemonster said it much better.
 

Lepton87

Platinum Member
Jul 28, 2009
2,544
9
81
Cayman on the other hand increased its transistor count by ~22% on the same process node, resulting in a 16% increase in die size while netting ~15% more performance, while also changing some of the core architecture for compute and some DX11 enhancements (tessellation especially). The performance it brought is less impressive since the changes didn't bring anything groundbreaking except fixing some scenarios where the tessellation unit was the bottleneck. Like Bryan pointed out, I wonder what it would have been like if they had gone with VLIW5 all the way instead of sacrificing performance/transistor for some compute capabilities.

Performance per transistor doesn't matter, and performance per mm2 remained the same. Evergreen was already very efficient per mm2 and per watt, so expecting huge improvements on the same process was just wishful thinking. The justification for making VLIW4 at all may well have been APUs and GPUs for next-gen consoles; AFAIK both will use VLIW4-derived graphics.
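Plugging the thread's own rough figures into a quick calculation bears that out (a trivial C++ sketch; the percentages are this thread's estimates, not official numbers):

```cpp
// Sanity-checking the perf-per-mm2 claim with the rough Cypress -> Cayman
// figures quoted above: ~+22% transistors, ~+16% die area, ~+15% performance.
#include <cstdio>

int main()
{
    const double transistors = 1.22; // relative transistor count
    const double area        = 1.16; // relative die size
    const double perf        = 1.15; // relative average performance

    std::printf("perf per mm2:        %.2fx\n", perf / area);        // ~0.99x, unchanged
    std::printf("perf per transistor: %.2fx\n", perf / transistors); // ~0.94x, slightly worse
    return 0;
}
```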
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
If the 480 was a bad card because it was only x amount faster than the cheaper 5870, then the 7970 is bad for the same reason.

Where did you get this from? The 480 was a bad card for lots of reasons. The least of which was perf/$. Although, it wasn't good at that metric either.

The price of the 5870 matched the "perceived" competitive climate. AMD believed that the release of a card even better than the 580 was imminent. A 512-core card running at competitive clocks with a 225W TDP was supposed to be out before year's end. They priced the 5870 accordingly. When the 7970 was released, what was the rumored release date of its likely next-gen competitor? Q2 2012.

Pulling arbitrary numbers and trying to say they add up to anything meaningful just shows either a complete failure to understand business, or an attempt to baffle people with BS. Your reporting that the 7970 is only 6% faster than the 580 makes me believe it's the 2nd reason more than the 1st.

The 7970 is priced where it is because that's where AMD believes they will sell every one they make as fast as they can make them and still be able to justify its price against its competitor's offering.
 