AMD goes Fermi

Status
Not open for further replies.

taltamir

Lifer
Mar 21, 2004
13,576
6
76
http://www.anandtech.com/show/5261/amd-radeon-hd-7970-review

So, I am looking at the AnandTech review of AMD's new graphics architecture and all I can see is Fermi.
The review outright says that this is not as good for gaming, but better for compute, and that AMD is going that way out of necessity, providing an absolute minimal boost in gaming performance over the current gen because it dumps VLIW4 for SIMD.

I have to say that was not at all what I expected. I thought we would see NVIDIA backpedal on Fermi rather than AMD embrace it.




Closing this thread per the OP's request.

Administrator Idontcare
 
Last edited by a moderator:

Vesku

Diamond Member
Aug 25, 2005
3,743
28
86
Not sure why you thought NVIDIA would backpedal. GPGPU was their response to seeing Intel's and AMD's integrated on-die graphics getting better and better. Intel shut them out completely with Nehalem.

AMD has to go the same way for GPUs; heck, they are making products (Llano, and Trinity coming soon) that are putting pressure on discrete GPUs.
 

Idontcare

Elite Member
Oct 10, 1999
21,118
59
91
Why is your title "AMD goes Fermi" instead of something less baiting/inflammatory and more apropos, like "AMD focuses on GPGPU"?
 

Will Robinson

Golden Member
Dec 19, 2009
1,408
0
0
http://www.anandtech.com/show/5261/amd-radeon-hd-7970-review

So, I am looking at the AnandTech review of AMD's new graphics architecture and all I can see is Fermi.
The review outright says that this is not as good for gaming, but better for compute, and that AMD is going that way out of necessity, providing an absolute minimal boost in gaming performance over the current gen because it dumps VLIW4 for SIMD.

I have to say that was not at all what I expected. I thought we would see NVIDIA backpedal on Fermi rather than AMD embrace it.
That "absolute minimal boost" in performance was enough for it to smash NVDA's flagship GPU, yet it runs fairly cool, uses a far smaller die, and doesn't use a lot of power doing so... other than that, it's "just like Fermi"...
 

Flipped Gazelle

Diamond Member
Sep 5, 2004
6,666
3
81
I agree with Vesku. GPGPU fits the current AMD vision; of course they would promote it.

I would also say that the increase in gaming performance is greater than "absolute minimal".
 

notty22

Diamond Member
Jan 1, 2010
3,375
0
0
Designing for dual-purpose use will boost AMD's GPU workstation presence. It brings a slight hit in power usage compared to more gaming-oriented architectures, but it will be beneficial in the end.

From the review:

AMD hasn't provided us with an official typical board power, but we estimate it's around 220W, with an absolute 250W PowerTune limit. Meanwhile idle power usage is looking particularly good, as thanks to AMD's further work on power savings their typical power consumption under idle is only 15W. And with AMD's new ZeroCore Power technology (more on that in a bit), idle power usage drops to an absolutely minuscule 3W. Overall for those of you looking for a quick summary of performance, the 7970 is quite powerful, but it may not be as powerful as you were expecting. Depending on the game being tested it's anywhere between 5% and 35% faster than NVIDIA's GeForce GTX 580, averaging 15% to 25% depending on the specific resolution in use. Furthermore thanks to TSMC's 28nm process power usage is upwards of 50W lower than the GTX 580, but it's still higher than the 6970 it replaces. As far as performance jumps go from new fabrication processes, this isn't as big a leap as we've seen in the past.
 

Flipped Gazelle

Diamond Member
Sep 5, 2004
6,666
3
81
Designing for dual-purpose use will boost AMD's GPU workstation presence. It brings a slight hit in power usage compared to more gaming-oriented architectures, but it will be beneficial in the end.

AMD could use it; they have such a tiny piece of the workstation pie that even an anorexic supermodel could clean her plate.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Why is your title "AMD goes Fermi" instead of something less baiting/inflammatory and more apropos, like "AMD focuses on GPGPU"?

I didn't think it was at all inflammatory and flaming was not my intention.

Also, it is more accurate, because they didn't just focus on GPGPU; they did so in a way that looks, to me, like exactly what Fermi did. Unless I am misunderstanding.

Not sure why you thought NVIDIA would backpedal. GPGPU was their response to seeing Intel's and AMD's integrated on-die graphics getting better and better. Intel shut them out completely with Nehalem.

Backpedal as in, backpedal for gaming while keeping Fermi around for a separate compute line. There is more than enough money in each market to justify that.
And because the architecture makes significant sacrifices in the gaming arena, which has cost them.
 

GaiaHunter

Diamond Member
Jul 13, 2008
3,634
181
106
And because the architecture makes significant sacrifices in the gaming arena, which has cost them.

It seems to me AMD improved in many areas without sacrificing more than 10% performance over a very optimized architecture.

The last time AMD introduced a completely new GPU architecture, it was a disaster.




Give it slightly more mature drivers and it will be as fast as a 6990.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
It seems to me AMD improved in many areas.

Their improvement is due to the massive shift from 40nm to 28nm. Even an architecture that sacrifices performance in one field (gaming) will still come out ahead with such a huge process improvement.
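For rough context (my own back-of-envelope figure, not something from the review): the nominal area scaling from 40nm to 28nm is (40 / 28)^2 ≈ 2.04, i.e. ideally about twice the transistors in the same die area, though real-world density and clock gains usually come in below that ideal.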
 

Mistwalker

Senior member
Feb 9, 2007
343
0
71
Designing for dual-purpose use will boost AMD's GPU workstation presence. It brings a slight hit in power usage compared to more gaming-oriented architectures, but it will be beneficial in the end.
Agreed. AMD made the change to a much more well-rounded GPU so it could be competitive in arenas beyond gaming.

Doing that while leveraging 28nm means you still get solid gains in gaming, less power and heat, very solid clock scaling, and, finally, compute performance that isn't a complete write-off. From a pure gaming standpoint it may seem like a disappointment, but as a baseline starting part it's solid, with a lot of future potential.

That's what it boils down to: it will be beneficial in the long term.
 

GaiaHunter

Diamond Member
Jul 13, 2008
3,634
181
106
Their improvement is due to the massive shift from 40nm to 28nm. Even an architecture that sacrifices performance in one field (gaming) will still come out ahead with such a huge process improvement.

Which is exactly on par with the improvement AMD made with the previous node jump, when the 5870 came close to the 4870 X2. In this case they are beating the 5970 and even coming close to the 6990.

Or are you telling me you expected higher performance than a 6990?
 

fourdegrees11

Senior member
Mar 9, 2009
441
1
81
AMD is doing their best to become a profitable company. Basic desktop computing isn't where the growth/money is at. CPU development targeted the server market, and the GPU is targeting workstations, in addition to their APUs targeting mobile. It's a smart and obvious business move to go in these directions.
 

96Firebird

Diamond Member
Nov 8, 2010
5,712
316
126
They had to do it sometime, just like Nvidia had to do it sometime. GPGPU is the future, and a lot of money can be made on it. I'm just a little surprised both companies used a node shrink to bring out their new GPGPU architectures, considering there are already difficulties just getting the new process to cooperate. But I guess it is better than debuting on the same node and looking like you are going backwards in the gaming aspect of things.
 

VulgarDisplay

Diamond Member
Apr 3, 2009
6,193
2
76
Let me get this straight: the 7970, when overclocked (without voltage control, mind you), trades blows with a 6990, and it's not good for gaming?

Expect 10-20% more performance from drivers six months from now; on top of the overclocking headroom, this thing should easily beat current dual-GPU cards. I do not see how it's bad for gaming.

Obvious driver issues with DX9 also severely limit the perceived performance of this card in comparison to last gen. The 7970 gets 100 fps instead of 150 fps in DX9 games, therefore it must be bad for gaming...

Please people.
 

WhoBeDaPlaya

Diamond Member
Sep 15, 2000
7,414
401
126
The only thing bad about the 7970 is the price, and that's only because we've been spoiled by the 5xxx and 6xxx pricing structure.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Which is exactly on par with the improvement AMD made with the previous node jump, when the 5870 came close to the 4870 X2. In this case they are beating the 5970 and even coming close to the 6990.

Not even close.

You are missing a key difference ==> HD5870 destroyed HD4870 in the most demanding games, often matching HD4870 x2 and GTX295. HD5870 had no problem doubling HD4870's performance in demanding games at the time.

I did a summary of GTX590 vs. HD7970 vs. GTX580 based on Anandtech's review. GTX590 is on average 32% faster than HD7970. HD5870 was nearly as fast as an HD4870 X2 in at least 30-40% of the benchmarks.

Most importantly, in demanding games where performance was actually needed the most (2560x1600), the HD7970 is less than impressive (putting that mildly):

1) Battlefield 3
2) Crysis Warhead
3) Crysis 2
4) Metro 2033

HD7970 is especially unimpressive vs. the GTX580 in these demanding games. Barely any improvement. GTX590 and HD6990 mop the floor with the HD7970 by a good 25-30%.

BF3 - Still unplayable at 2560x1600. Almost no performance increase over the 580.

Crysis Warhead - Still unplayable at 2560x1600. Not enough performance increase over the 580 to make any difference.

Crysis 2 - Still unplayable at 2560x1600. Almost no performance increase over the 580.

Metro 2033 - Still unplayable at 2560x1600. Can't even break 40 fps.

And then it also manages to lose to the older generation of cards in popular games:

1) Starcraft 2
2) GTAIV
3) WoW

And it has a questionable performance increase over previous generations with Super Sampling enabled in DX9 game engines.

On top of all of this, they managed to raise the price $200 over HD6970, and the reference cooler is louder under load.

This card should have been shipped with 1.1GHz clock speeds from the factory.
 
Last edited:

VulgarDisplay

Diamond Member
Apr 3, 2009
6,193
2
76
Not even close.

HD5870 destroyed HD4870, often matching HD4870 X2 and GTX295. The cases where HD7970 is as fast as a GTX590 and HD6990 are pretty much non-existent. HD5870 had no problem doubling HD4870's performance in demanding games at the time.

I did a summary of GTX590 vs. HD7970 vs. GTX580 based on Anandtech's review. GTX590 is on average 32% faster than HD7970. HD5870 was nearly as fast as an HD4870 X2 in at least 30-40% of the benchmarks.

This is the least performance improvement from 1 generation to the next I ever recall. I'd need to revisit 7800 GTX 256mb vs. 6800 Ultra, but I think even that was more.

Most importantly, in situations where performance was needed the most (Crysis 2, Metro 2033, BF3, etc.), the HD7970 is especially horrible vs. the GTX580. Barely any improvement. GTX590 and HD6990 mop the floor with the HD7970 by a good 25-30%.

Now overclock and factor in driver maturity. I myself am seriously impressed with the performance of this card when considering die size and power draw.

All of those benchmarks you have posted are useless, because they are either showing the 7970 nipping at the heels of dual-GPU solutions in the most demanding games (demanding due to poor optimization, not impressive visuals), or using HUGE amounts of driver-enforced AA. The testing methodology used in those benchmarks is flawed because each company's driver-level AA settings do not operate in a similar manner. In-game settings are a far better way to achieve true benchmarks.



Seems very playable to me.
 
Last edited:

gorobei

Diamond Member
Jan 7, 2007
3,714
1,069
136
The move to GPGPU/SIMD is more or less a mandated course given deferred rendering in games and other compute functions. Neither AMD nor NVIDIA has a choice in the matter if they want to stay relevant.

Watch the DICE BF3 presentation and you can get an idea of what is going on. They are dividing the rendering pipeline into large sets of simpler functions calculated in the first pass (color, spec, surface normal, ambient occlusion) and then solving for multiple light sources later. The sheer number of conditionals and dependencies requires powerful GPGPU-style schedulers.

As long as deferred rendering is in vogue for future game engines, older-style VLIW plug-and-chug architectures are not an option to pursue. The added benefit of being able to do HPC compute and to synergize with CPU development is just gravy as far as AMD's motivation to go GPGPU is concerned.
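To make that second pass a bit more concrete, here is a minimal sketch of a deferred lighting resolve written as a CUDA-style compute kernel. This is my own illustration, not DICE's or AMD's actual code: the GBufferTexel and PointLight structs, the light model, and every name in it are hypothetical stand-ins. The point is just that every pixel walks a light list with data-dependent branching, which is the scheduler-heavy, compute-style work described above.

Code:
#include <cuda_runtime.h>

// Illustrative sketch only -- struct layout, light model, and all names are hypothetical.

// What the first (geometry) pass wrote out per pixel.
struct GBufferTexel {
    float3 albedo;      // diffuse color
    float3 normal;      // world-space surface normal
    float3 position;    // reconstructed world-space position
    float  specular;    // specular intensity
    float  occlusion;   // ambient occlusion term
};

struct PointLight {
    float3 position;
    float3 color;
    float  radius;
};

// Second (resolve) pass: light every pixel by iterating over the light list.
__global__ void deferredLightingResolve(const GBufferTexel* gbuffer,
                                        const PointLight* lights, int numLights,
                                        float3* output, int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    GBufferTexel g = gbuffer[y * width + x];

    // Crude ambient term scaled by the baked ambient occlusion.
    float3 result = make_float3(0.1f * g.occlusion * g.albedo.x,
                                0.1f * g.occlusion * g.albedo.y,
                                0.1f * g.occlusion * g.albedo.z);

    for (int i = 0; i < numLights; ++i) {
        PointLight pl = lights[i];
        float3 d = make_float3(pl.position.x - g.position.x,
                               pl.position.y - g.position.y,
                               pl.position.z - g.position.z);
        float dist = sqrtf(d.x * d.x + d.y * d.y + d.z * d.z);
        if (dist > pl.radius || dist < 1e-6f) continue;   // data-dependent branch: skip lights out of range

        float3 dir   = make_float3(d.x / dist, d.y / dist, d.z / dist);
        float  ndotl = fmaxf(0.0f, g.normal.x * dir.x + g.normal.y * dir.y + g.normal.z * dir.z);
        float  atten = 1.0f - dist / pl.radius;            // simple linear falloff

        // Very rough diffuse + specular accumulation, just to show the per-light work.
        result.x += atten * ndotl * pl.color.x * (g.albedo.x + g.specular);
        result.y += atten * ndotl * pl.color.y * (g.albedo.y + g.specular);
        result.z += atten * ndotl * pl.color.z * (g.albedo.z + g.specular);
    }

    output[y * width + x] = result;
}

Each pixel can take a different path through that loop depending on how many lights reach it, and that kind of divergence is exactly what a GPGPU-style scheduler handles better than a VLIW plug-and-chug design.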
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Give it slightly more mature drivers and it will be as fast as a 6990.

Demanding titles from Anandtech's Review = HD6990 vs. HD7970

Crysis 2 (1920x1200) = +32%
Crysis 2 (2560x1600) = +34%
Min. framerate = +44-48% (1920x1200/2560x1600)

Metro 2033 (1920x1200) = +31%
Metro 2033 (2560x1600) = +32%

Shogun 2 (1920x1200) = +25%
Shogun 2 (2560x1600) = +25%

Battlefield 3 (1920x1200) = +34%
Battlefield 3 (2560x1600) = +36%

You think HD7970 is going to gain 30-35% through driver improvements?
 

VulgarDisplay

Diamond Member
Apr 3, 2009
6,193
2
76
Demanding titles from Anandtech's Review = HD6990 vs. HD7970

Crysis 2 (1920x1200) = +32%
Crysis 2 (2560x1600) = +34%
Min. framerate = +44-48% (1920x1200/2560x1600)

Metro 2033 (1920x1200) = +31%
Metro 2033 (2560x1600) = +32%

Shogun 2 (1920x1200) = +25%
Shogun 2 (2560x1600) = +25%

Battlefield 3 (1920x1200) = +34%
Battlefield 3 (2560x1600) = +36%

You think HD7970 is going to gain 30-35% through driver improvements?

Some overclocked benchmarks showed almost a 20% improvement without raising voltage. Drivers should easily net them 10-20%.
 
Last edited:

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
All of those benchmarks you have posted are useless, because they are either showing the 7970 nipping at the heels of dual-GPU solutions in the most demanding games (demanding due to poor optimization, not impressive visuals),

They are not useless. People who game at 2560x1600 would find them very useful. Also, people buy modern cards to play demanding games, not to max out Portal 1 and Half-Life 2.

or using HUGE amounts of driver-enforced AA. The testing methodology used in those benchmarks is flawed because each company's driver-level AA settings do not operate in a similar manner. In-game settings are a far better way to achieve true benchmarks.

Who said anything about "overriding" in-game settings with AA through the control panel? That's only true for the Starcraft 2 testing (because that's the only way to force AA in that game). In the other tests, the AA/AF settings were chosen in-game.

Seems very playable to me.

35 fps min and 45 fps average in an FPS on the PC? I guess we have different standards, then.
 

VulgarDisplay

Diamond Member
Apr 3, 2009
6,193
2
76
I'm sorry, but if you think that GTA4 and Metro 2033 are well-coded PC games that truly show what a GPU is capable of, then I'm not going to bother debating any of this with you, because there is obviously no reasoning with you, lol.

Like everyone has been saying, a 20% overclock on the core with this card is showing, for the most part, a 20% increase in frames. Now your 35 fps and 45 fps have become 42 fps min and 54 fps average. That's without driver gains. Say drivers only add 10%; we're still well into playable territory at 46 fps min and 59 average.
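(Checking that arithmetic with the 35/45 fps figures quoted above: a 20% overclock plus a further 10% from drivers compound to 1.2 × 1.1 ≈ 1.32, so 35 fps × 1.32 ≈ 46 fps and 45 fps × 1.32 ≈ 59 fps, assuming both estimates hold.)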

That is extremely playable.

It is also quite obvious to me that these cards have serious driver problems with DX9 games. That too should be fixed with driver maturity.
 
Last edited:

Nintendesert

Diamond Member
Mar 28, 2010
7,761
5
0
The move to a "Fermi"-like architecture is easily summed up by looking at AMD's bottom line: compare the margins for gaming video cards with the margins for workstation video cards.

Overclocked, the 7970 looks pretty impressive. The price, however, isn't impressive to me. I was spoiled by a 5870, so I will instead be looking at the 7950 when it comes out, or probably just wait for the next Nvidia cards.

If the 7950 impresses, though, I might jump sooner.
 