What does AMD have to fight the 780?


parvadomus

Senior member
Dec 11, 2012
Cloudfire777 said:
Same applies for Nvidia. Why do you think they built a new chip, aka GK110?
Take GTX 680 vs 7970, for example. The GTX 680 offers -5% performance but 22% less TDP, 195W vs 250W. That means AMD has to push up the thermal envelope by 30% to match Kepler. It's that inefficient.

That continues with bigger chips as well. Say AMD built a 500mm² die too. The GTX 780 is 250W. For GCN to match that performance they need to raise the thermal envelope again by 30%. 250W * 1.3 = 325W.

When was the last time you guys saw 325W on a single die? It can't be done. Too much heat.

If you are a blind fanboy and can't read what I posted above, I can't do anything. Kepler IS NOT more efficient than GCN; CURRENT DIES are.
Please explain this to me:

Why is the GTX 660 much less efficient than the HD 7870, despite being clocked lower and being based on the "magical, very efficient" Kepler architecture? They target the same midrange market, and on top of that the HD 7870 has A LOT more compute power than the GTX 660.
Please try to understand the FACTS and stop basing your opinion on a single, far-from-optimal GCN die like Tahiti.
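The arithmetic in the quote above is easy to sanity-check. A minimal Python sketch, treating the quoted TDP figures as proxies for actual power draw (which is exactly the assumption disputed later in this thread):

```python
# Back-of-the-envelope check of the quoted TDP arithmetic.
# Assumes TDP is a proxy for power draw -- the thread itself disputes this.

gtx680_perf = 0.95   # quoted: GTX 680 = -5% vs HD 7970
gtx680_tdp = 195.0   # W, quoted
hd7970_perf = 1.00
hd7970_tdp = 250.0   # W, quoted

kepler_ppw = gtx680_perf / gtx680_tdp
gcn_ppw = hd7970_perf / hd7970_tdp

ratio = kepler_ppw / gcn_ppw
print(f"Kepler perf/W advantage: {ratio:.2f}x")   # ~1.22x
# TDP a GCN part would need to match a 250 W Kepler part at this gap
# (the quote rounds the gap up to a flat 30%):
print(f"Implied GCN TDP: {250.0 * ratio:.0f} W")  # ~305 W
```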
 

Cloudfire777

Golden Member
Mar 24, 2013
Seriously, this forum is full of uneducated people. So many AMD folks just have to come here and defend AMD although the facts are perfectly clear. This is the last time I'm wasting my energy on you guys.

@Erenhardt:
I was speaking about TDP. My calculations were about TDP. I specifically wrote TDP in pretty much every post. Yet you bring up power consumption. And now you come up with excuses because I called you out for not knowing the difference between TDP and power consumption.

And then you bring up the 6990 as an example of why AMD could make a 300W+ GPU. It's a dual GPU, for crying out loud. I have already stated several times that the only option AMD has is to make a dual GPU and spread the heat between two dies instead of one hot spot. Which is what they did: the 7990. Which is why the 7970 will be their greatest single-GPU GCN part. AMD said so themselves.

As for the chart you brought up, which I quoted for you: you either didn't understand what they wrote or you are just trolling because you know you are wrong.


@parvadomus: Stop taking words out of context. I wrote "the higher core count you get, the less efficiency".

So stop bringing in Pitcairn, because it is a different chip with a lower core count than the 7970.
 

Rezist

Senior member
Jun 20, 2009
Honestly, I think Nvidia would wait and wait, hoping AMD launches first. Unless they know for sure that AMD can't touch it, I think Nvidia would be afraid to go first. Perhaps this is a lesson learned back when AMD forced them to drop their GTX 280 hundreds of dollars overnight.

We all know the GK104 was going to be the 670 Ti; to me it seemed they had everything ready (even box designs) but they held off until AMD showed their hand. Since Nvidia had the performance crown with the 580, the ball was in their court. They could wait it out as long as it took with no pressure. I know a lot of people say Nvidia was late, but I think this is because they were not confident and were way more concerned with AMD than they ever let on.

After Tahiti launched we heard whispers from Nvidia saying, "we expected more from AMD". Some people took that as talking smack, but I honestly think that statement tells the whole story. Nvidia was in a bind because there was no way they could launch their big die in the foreseeable future, and this had them afraid and insecure. The GK104 was all they had to work with, so they reluctantly put together a GTX 670 Ti but were hesitant, too unsure of what AMD's lineup would be like or how badly it would do against it. Nvidia had no clue what AMD was coming out with, but obviously they were truly concerned. This was most likely because AMD had been executing flawlessly lately; since the 4000 series they had been on a roll. I believe Nvidia held out on purpose this time; they held out to let AMD launch first. Once they saw Tahiti, they were completely surprised, because they had feared the situation they were in. They were afraid that Tahiti was going to be much more powerful.

See, AMD's flagships the past few generations were great. Nvidia's big dies barely edged them out. So I think when Nvidia said they expected more, they meant "we can finally breathe again". Once they absorbed the 7970's performance, they set out to position their GK104 against it. They quickly discovered that they could not only keep up with the 7970 but, with the right clocks, surpass its performance. The 670 Ti was scrapped entirely and they worked the GK104 into becoming the GTX 680. Things really worked out well for Nvidia.

Was it luck? Tahiti was a transition to a much more Fermi-like GPU. AMD had little choice but to go this route if they were ever to keep up. Things had to change. Tahiti wasn't bad at all if you consider how badly the original Fermi went. Actually, on a hardware level Tahiti was a perfect transition; look how much their software engineers have been able to squeeze out of it since launch. At launch, though, it was a different story, one that could have turned out very, very differently.

This is why I believe that Nvidia will not launch first at all. Unless they have an architecture that they are extremely confident can't be touched by AMD, I don't see them doing it. As long as they have the fastest GPUs out, there is little pressure on Nvidia to do so.

My take.

Very good post. Too bad the thread keeps getting crapped on.
 

sushiwarrior

Senior member
Mar 17, 2010
First of all: TDP does NOT equal power consumption.

The 7970 with Turbo is NOT a 348W-TDP GPU.
The GTX 580 is NOT a 326W-TDP GPU. It is 244W.

There is no actual limit that says "325W is impossible". There's a giant practicality limit, as it would be stupid to make, but that doesn't mean anyone CAN'T.

Cloudfire777 said:
@parvadomus: Stop taking words out of context. I wrote "the higher core count you get, the less efficiency".

So stop bringing in Pitcairn, because it is a different chip with a lower core count than the 7970.
Nvidia has ditched the power-demanding FP64 cores to make it more efficient for gaming. AMD has not. That results in bigger power consumption and heat output. You see that from the 680 vs the 7970.

As the chart shows, the higher core count you get, the less efficiency. Add more voltage and clocks, and you get an even less efficient GPU.

Same applies for Nvidia. Why do you think they built a new chip, aka GK110?
Take GTX 680 vs 7970, for example. The GTX 680 offers -5% performance but 22% less TDP, 195W vs 250W. That means AMD has to push up the thermal envelope by 30% to match Kepler. It's that inefficient.

That continues with bigger chips as well. Say AMD built a 500mm² die too. The GTX 780 is 250W. For GCN to match that performance they need to raise the thermal envelope again by 30%. 250W * 1.3 = 325W.

Are you talking about higher core counts or about GCN? All I see you spouting is misinterpreted nonsense about GCN being inherently worse than Kepler because it has FP64 capability. "This continues onto bigger chips" is about as WRONG as you could get: "this" doesn't EXIST in small chips, but it does in large chips. Your point is right (efficiency usually goes down in larger chips, which anyone with a clue about how semiconductors work could tell you), but you are using all the wrong arguments.

GCN is not inherently worse than Kepler in any way, but a 7970-sized die pushed a first-gen design to inefficient levels. With a simple rework like the 7790's, the 7970 could easily beat GK110 and GK104, or at least consume no more power. Wait for Hawaii, it's not as far off as everyone thinks :whiste:
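The TDP-versus-consumption distinction can be made concrete with a small sketch. The GTX 580 numbers are the ones cited above (244W rated TDP, ~326W measured draw); the Card class and the relative-performance value are purely illustrative:

```python
# TDP is a cooler/board design budget; it is not measured power draw.
# Illustrative numbers from the post above: GTX 580 = 244 W rated TDP,
# ~326 W worst-case measured board power.

from dataclasses import dataclass

@dataclass
class Card:
    name: str
    rated_tdp: float       # W, vendor design target for cooling
    measured_power: float  # W, board draw measured under load
    rel_perf: float        # relative performance, arbitrary units

gtx580 = Card("GTX 580", rated_tdp=244.0, measured_power=326.0, rel_perf=1.0)

# The perf/W figure shifts by ~34% depending on which denominator you pick:
print(f"perf per rated-TDP watt: {gtx580.rel_perf / gtx580.rated_tdp:.5f}")
print(f"perf per measured watt:  {gtx580.rel_perf / gtx580.measured_power:.5f}")
```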
 

parvadomus

Senior member
Dec 11, 2012
Cloudfire777 said:
@parvadomus: Stop taking words out of context. I wrote "the higher core count you get, the less efficiency".

So stop bringing in Pitcairn, because it is a different chip with a lower core count than the 7970.

You still don't explain why Pitcairn is way better than the GK106 in perf/watt. I explained every point about Tahiti's efficiency, yet you don't read it. It's over. :whiste:

One more chart:


I don't see your point of "the higher core count you get, the less efficiency" for GTX 680 vs GTX 780. That comparison can be explained in only one way:
-> Both of these GPUs have the exact same shader/ROP ratio (48) and the same shader/TMU balance, plus the GTX 780 runs at a lower clock, so efficiency increases.

For AMD we have:
-> Cape Verde: 640/16 = 40 shaders per ROP
-> Pitcairn: 1280/32 = 40 shaders per ROP
-> Tahiti: 2048/32 = 64 shaders per ROP (this is obviously ROP-starved).

Make a perfectly balanced GPU, add the Bonaire efficiency improvements, and GCN walks all over the GTX 780 in efficiency.
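A tiny script makes the ratio argument easy to check. The GCN counts are from the list above; the two Kepler entries (GTX 680: 1536 shaders / 32 ROPs, GTX 780: 2304 / 48) are added for comparison:

```python
# Shader-per-ROP ratios for the chips discussed above.
chips = {
    "Cape Verde": (640, 16),
    "Pitcairn":   (1280, 32),
    "Tahiti":     (2048, 32),   # ROP-starved per the argument above
    "GTX 680":    (1536, 32),
    "GTX 780":    (2304, 48),
}

for name, (shaders, rops) in chips.items():
    print(f"{name:10s} {shaders:4d} SP / {rops:2d} ROP = {shaders / rops:.0f} per ROP")
```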
 

Erenhardt

Diamond Member
Dec 1, 2012
Cloudfire777 said:
@Erenhardt:
I was speaking about TDP. My calculations were about TDP. I specifically wrote TDP in pretty much every post. Yet you bring up power consumption. And now you come up with excuses because I called you out for not knowing the difference between TDP and power consumption.

Cloudfire777 said:
GTX 680 offers -5% performance but 22% less TDP
So how is a lower TDP better? If you have a card designed around a chip with a higher TDP, it uses better components. On top of that, TDP is guidance for board partners and has nothing to do with efficiency. You just said the GTX 680 is a cheaply made graphics card (compared to the 7970) - nothing more.

Cloudfire777 said:
When was the last time you guys saw 325W on a single die? It can't be done. Too much heat.
I showed you that the 7970 GHz takes more than 325W on a single die and at the same time doesn't go above 70°C, which is super low (aren't those GK110s clocking down because of their temperatures?).
If a card draws 350 watts and the temps stay in the normal operating range, that means the card was designed around 350W of thermal power.
Now you see what I did there? The HIS card has a 300W+ TDP.

Cloudfire777 said:
And then you bring up the 6990 as an example of why AMD could make a 300W+ GPU. It's a dual GPU, for crying out loud. I have already stated several times that the only option AMD has is to make a dual GPU and spread the heat between two dies instead of one hot spot. Which is what they did: the 7990. Which is why the 7970 will be their greatest single-GPU GCN part. AMD said so themselves.

Cloudfire777 said:
Second: That is from a heavily overclocked 7970 with a custom-built PCB and power phases. You won't see AMD building something like this and guaranteeing that it will function at that sort of clocks.

That is where the 6990 comes in. They designed that monstrosity and it works just fine. All those power phases and PCBs are doing fine, and 400 watts of power ain't a problem.

Cloudfire777 said:
As for the chart you brought up, which I quoted for you: you either didn't understand what they wrote or you are just trolling because you know you are wrong.
First, learn to post an image. Second, this chart is not showing TDP; it may be the ACP of the reference blower-design card. You showed power consumption in Crysis 2 and nothing more. For a card so heavily directed at compute, that is hardly close to what it was designed for. If they designed their cards around average power consumption, not a single card would last more than a day.

Cloudfire777 said:
Seriously, this forum is full of uneducated people.
Seems like you are one of many...
 

ams23

Senior member
Feb 18, 2013
parvadomus said:
You still don't explain why Pitcairn is way better than the GK106 in perf/watt.

This is not necessarily true. The GTX 650 Ti uses the GK106 GPU and has higher perf per watt at 19x12 resolution--the target resolution for midrange cards--than any other card tested at TechPowerUp (including the Pitcairn-based HD 7870 and HD 7850): http://tpucdn.com/reviews/EVGA/GTX_780_SC_ACX_Cooler/images/perfwatt_1920.gif . It is true that the GTX 650 Ti Boost and GTX 660 have lower perf per watt, though. On the bright side, GK106-based GPUs such as the GTX 650 Ti, GTX 650 Ti Boost, and GTX 660 have higher perf per dollar than Pitcairn-based GPUs: http://tpucdn.com/reviews/EVGA/GTX_780_SC_ACX_Cooler/images/perfdollar_1920.gif .
 

parvadomus

Senior member
Dec 11, 2012
ams23 said:
This is not necessarily true. The GTX 650 Ti uses the GK106 GPU and has higher perf per watt at 19x12 resolution than any other card tested at TechPowerUp. […]

We are talking about perf/watt here; price is not important for an architectural discussion.
And yes, it's obvious that as you decrease resolution, low-end GPUs show a better perf/watt score than higher-end ones, as the CPU becomes the obvious bottleneck and you can draw just as fast with fewer GPU units (shaders, TMUs, ROPs, etc.).
The game set tested also defines performance/watt. Look at this:

Back then TPU didn't use SC2 or WoW, games that are obviously NV-biased (they can be twice as fast on NV hardware), and it dropped Alan Wake and Max Payne 3. These kinds of things do impact the overall charts, but the HD 7870 is still near the top in efficiency in your own review at any resolution.
 

bystander36

Diamond Member
Apr 1, 2013
All our talk means nothing. We'll see what they deliver when they deliver their product at the end of the year or the beginning of next, provided there are no setbacks. Their biggest improvements will come with the new Crossfire drivers, which may give them the performance crown for a single card.
 

Fx1

Golden Member
Aug 22, 2012
Cloudfire777 said:
Seriously, this forum is full of uneducated people. So many AMD folks just have to come here and defend AMD although the facts are perfectly clear. […]

The reason the 7970 is their biggest GPU is that they don't have a bigger design. Nothing to do with TDP.

GK110 was designed for their Tesla products, not as a gaming GPU. This is obvious because making huge GPUs is costly, and without Tesla it would probably be unprofitable.

You can scale up any GPU design, but the bigger it gets, the worse the yields and the higher the cost.
 

boxleitnerb

Platinum Member
Nov 1, 2011
Why do you think that a bigger AMD GPU with more transistors would not hit the TDP wall?

As for GK110, it's the other way around: only the high consumer sales numbers of the large GPUs (compared to the professional market) have made it possible to sell these in the professional market as well. Quadro and Tesla sales alone cannot fund the R&D of such a large GPU. At least it has been like that until now.
 

3DVagabond

Lifer
Aug 10, 2009
I think the Tahiti design is about at its TDP limit, at least on 28nm. They could possibly do something bigger, maybe 2560 SPs based on Pitcairn or Bonaire. I'm not sure it would be worth it, though, if they are planning on releasing 20nm by the end of the year, or even the beginning of next.

Remember that GK110 isn't a new design that they are trying to get an ROI on in only a few months.
 

ams23

Senior member
Feb 18, 2013
parvadomus said:
Back then TPU didn't use SC2 or WoW, games that are obviously NV-biased (they can be twice as fast on NV hardware), and it dropped Alan Wake and Max Payne 3. […]

First of all, World of Warcraft is only slightly faster on NVIDIA hardware (i.e. the GTX 680 is only 6.8% faster than the HD 7970 GHz Ed. at 19x12 resolution): http://tpucdn.com/reviews/EVGA/GTX_780_SC_ACX_Cooler/images/wow_1920_1200.gif . Second of all, TPU is using several "AMD biased" games in their latest test suite, including Hitman: Absolution, Sleeping Dogs, and Tomb Raider. TPU also tested 18 games in their latest suite, which represents a very wide variety (with some games favoring AMD's architecture and some favoring NVIDIA's). And FWIW, systems with midrange GPUs tend to be GPU-limited at 19x12 resolution with high IQ settings, not CPU-limited, and these same midrange systems tend not to be very playable at 25x16 with high IQ settings. Anyway, the moral of the story is that, with the latest test suite, the GK106-based GTX 650 Ti has higher perf per watt at the target 19x12 resolution than the Pitcairn-based HD 7850/7870 and the Bonaire-based HD 7790: http://tpucdn.com/reviews/EVGA/GTX_780_SC_ACX_Cooler/images/perfwatt_1920.gif
 

Fx1

Golden Member
Aug 22, 2012
boxleitnerb said:
Why do you think that a bigger AMD GPU with more transistors would not hit the TDP wall? […]

Why is there a TDP wall?

They can scale down the clock speed and use a much better cooler, and you can keep pushing up the TDP.

Also, the consumer GPU market is just a place where Nvidia dumps its broken Tesla chips. If they didn't have the pro market, they wouldn't bother making huge-die GPUs at all. They would stick with chips like GK104, which are small, profitable, and likely far easier to make than GK110.
 

raghu78

Diamond Member
Aug 23, 2012
boxleitnerb said:
Why do you think that a bigger AMD GPU with more transistors would not hit the TDP wall? […]

HD 7970 Tahiti is the first 28nm chip, and it has the lowest efficiency of all the GCN chips. The performance scaling from Pitcairn to Tahiti is poor. AMD fit a 30% faster chip called Bonaire (HD 7790) into the same 85W TDP as the HD 7770 (Cape Verde) for a slight increase in die size. By increasing the SP count to 2560 and the ROPs to 48, and improving the front end to remove bottlenecks (4 ACEs, 4 tessellators, 4 raster engines), a 30% perf gain is easy. Using the newer power states found in Bonaire, plus voltage binning to run at 1.15-1.175V at 950-1000 MHz, will keep TDP within 250W. The HD 7970 GHz lost power efficiency by running at 1.25V and did not have the advanced power states of Bonaire.

There is another possibility. HD 9970 Volcanic Islands could be a 28nm chip with a 256-bit GDDR6 memory controller, 64 ROPs, and 3072 SPs in a 420-440 mm² die. The reduction in memory bus width will not affect bandwidth, as GDDR6 speeds are expected to be 9-10 Gbps. On a very mature TSMC 28nm process that's definitely achievable. TSMC 20nm looks to be ramping in Q2 2014, with 2% of total production volume at 20nm in that quarter; TSMC 28nm was 2% of total volume in Q4 2010. So it looks like 20nm cards will launch around July 2014. AMD definitely cannot wait that long, so a Volcanic Islands on 28nm in October is quite realistic.
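The bandwidth claim is straightforward arithmetic: GB/s = (bus width in bits / 8) × effective data rate in Gbps. A minimal sketch, using the 7970 GHz's 6 Gbps GDDR5 as the 384-bit baseline and the speculated GDDR6 rates above:

```python
# Memory bandwidth: (bus width / 8) bytes per transfer * data rate in Gbps.
def bandwidth_gbs(bus_bits: int, gbps: float) -> float:
    return bus_bits / 8 * gbps

print(bandwidth_gbs(384, 6.0))   # 384-bit GDDR5 @ 6 Gbps (7970 GHz) -> 288.0 GB/s
print(bandwidth_gbs(256, 9.0))   # speculated 256-bit GDDR6 @ 9 Gbps -> 288.0 GB/s
print(bandwidth_gbs(256, 10.0))  # speculated 256-bit GDDR6 @ 10 Gbps -> 320.0 GB/s
```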
 

boxleitnerb

Platinum Member
Nov 1, 2011
Fx1 said:
Why is there a TDP wall? […]

More than 250W isn't really feasible, imo. They could gain a little more efficiency, but I doubt it would be enough to beat the 780 while using the same amount of power or even less.

No, the consumer market is what Nvidia uses to cross-finance the big GPUs. You cannot finance a GPU with a couple hundred thousand units.

raghu78 said:
HD 7970 Tahiti is the first 28nm chip, and it has the lowest efficiency of all the GCN chips. […]

The 7790 is not really more efficient than, say, Pitcairn. I doubt you can apply this efficiency fix to Tahiti, not 1:1 at least.
GDDR6 will only be ready in 2014, IIRC. The 7970 GHz is 50% faster than the 7870; that's not too bad.
 

raghu78

Diamond Member
Aug 23, 2012
boxleitnerb said:
The 7790 is not really more efficient than, say, Pitcairn. I doubt you can apply this efficiency fix to Tahiti, not 1:1 at least.
GDDR6 will only be ready in 2014, IIRC. The 7970 GHz is 50% faster than the 7870; that's not too bad.

Tahiti stands to gain the most from power-efficiency optimizations and improvements to the front end. Tahiti is also ROP-limited. The HD 7970 GHz is not 50% faster than the HD 7870; clock for clock the improvement is around 40%.
http://www.techpowerup.com/reviews/HIS/HD_7950_X2_Boost/28.html

HD 7870 - 80
HD 7970 (925 MHz) - 105
HD 7970 GHz (1050 MHz) - 115 (80 x 1.44 = 115.2)

http://www.computerbase.de/artikel/grafikkarten/2013/nvidia-geforce-gtx-780-im-test/3/

A similar relation shows up here. So at the same 1 GHz clock you are looking at a 40% improvement for 60% more SPs; there is more performance to be extracted.

With a new, optimized architecture on the now very mature TSMC 28nm process, plus power-state optimizations, AMD can get another 30% perf at the same TDP. AMD can go for 950-1000 MHz clocks and voltage binning at 1.15-1.175V to fit inside a 250W TDP.
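A short sketch of the clock-for-clock normalization, using the index numbers cited above and assuming performance scales linearly with clock (a simplification):

```python
# Normalize the cited performance indices to a common 1 GHz clock.
# Assumes linear scaling with clock -- a simplification.
cards = {
    "HD 7870":     {"score": 80,  "clock": 1000, "sp": 1280},
    "HD 7970":     {"score": 105, "clock": 925,  "sp": 2048},
    "HD 7970 GHz": {"score": 115, "clock": 1050, "sp": 2048},
}

base = cards["HD 7870"]["score"]  # the 7870 already runs at 1 GHz
for name, c in cards.items():
    at_1ghz = c["score"] * 1000 / c["clock"]
    gain = at_1ghz / base * 100 - 100
    sp_gain = c["sp"] / cards["HD 7870"]["sp"] * 100 - 100
    print(f"{name:12s} ~{at_1ghz:5.1f} at 1 GHz (+{gain:.0f}% perf for +{sp_gain:.0f}% SPs)")
```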
 

sushiwarrior

Senior member
Mar 17, 2010
raghu78 said:
HD 7970 Tahiti is the first 28nm chip, and it has the lowest efficiency of all the GCN chips. […] So a Volcanic Islands on 28nm in October is quite realistic.

This seems pretty close to what I have seen :ninja:
 

Lepton87

Platinum Member
Jul 28, 2009
It's funny how some people are cheering for a company and have double standards here, like some soccer teams' fans. It's also silly that people are arguing about a less-than-10% difference in performance per watt. When the 480 launched, those same people didn't care about efficiency; the 5870 just destroyed the GTX 480 in that metric. Many of them said they cared only about outright performance, only to change their minds once AMD was no longer the undisputed leader in that metric. The fact is that GCN and Kepler are very close in efficiency. VLIW5/4 was way better than Fermi in that metric, but back then people didn't care, and they still bought Fermi cards even though, for 15% more performance, power consumption went up by over 70%. That was a massive disparity in efficiency, nothing comparable to what we have now, and to boot the 5870 was available 6 months earlier. And still NV's cards sold better than AMD's. Maybe they shouldn't have changed the name from ATI to AMD: because of the CPUs, uninformed people, who are the majority (we are just a drop in the bucket), think their GPUs must suck because their CPUs suck. You can't expect a consumer to use logic; that's the biggest mistake a marketer can make.
 

Final8ty

Golden Member
Jun 13, 2007
Lepton87 said:
It's funny how some people are cheering for a company and have double standards here, like some soccer teams' fans. […]

:thumbsup:
 

sontin

Diamond Member
Sep 12, 2011
raghu78 said:
Tahiti stands to gain the most from power-efficiency optimizations and improvements to the front end. […] AMD can get another 30% perf at the same TDP.

Sure, and nVidia can do it, too.
 

Lepton87

Platinum Member
Jul 28, 2009
Quote:
I would guess AMD will do what they always do, good enough performance at a lower cost.

It's not fair to say that about AMD's graphics division: they had the fastest card for well over a year, and for about four months they didn't even have competition. NV managed to best AMD only for a while, until AMD released the 7970 GHz Edition. They do that with CPUs; their CPUs are just terrible. Only the recently released Jaguar is a decent CPU, considering what it is.
 

raghu78

Diamond Member
Aug 23, 2012
Looks like we will have an AMD response to Nvidia's GTX 700 series in Q3.

http://www.chiphell.com/thread-755237-1-1.html

The only difference is that previously the HD 8870 was expected to be Hainan XT and the HD 8850 Hainan Pro. Also, it does not make sense to use the Hainan chip for the HD 8950; Curacao is a much larger chip (420 mm²) and needs a salvage part to improve yields.

http://videocardz.com/34981/amd-radeon-hd-8870-and-hd-8850-specifiation-leaked
http://videocardz.com/39041/meet-aruba-curacao-hainan-and-bonaire-the-codenames-radeon-hd-8000-series

These are my expectations:

HD 8970 - 2304 SP, 3 geometry engines, 8 ACE 2.0, 48 ROPs, 6 GB GDDR5, 384-bit memory (1100 / 7000) for USD 600

HD 8950 - 2048 SP, 3 geometry engines, 8 ACE 2.0, 48 ROPs, 3 GB GDDR5, 384-bit memory (1000 / 6000) for USD 400-450

HD 8870 - 1792 SP, 2 geometry engines, 8 ACE 2.0, 32 ROPs, 2 GB GDDR5, 256-bit memory (1200 / 7000) for USD 300

HD 8850 - 1536 SP, 2 geometry engines, 8 ACE 2.0, 32 ROPs, 2 GB GDDR5, 256-bit memory (1100 / 6000) for USD 230

HD 8830 - 1280 SP, 2 geometry engines, 8 ACE 2.0, 32 ROPs, 2 GB GDDR5, 256-bit memory (1000 / 6000) for USD 200

Also, the 8 ACE 2.0 setup is identical to what the PS4 uses. This lineup would take the fight to Nvidia; God knows we need competition. If AMD can get this lineup released by July/August it would be great. This, together with AMD's CF frame-pacing driver, would restore competition in all segments.
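For scale, the memory configs in the speculated list above imply the following bandwidth figures (speculative specs; the second number in each pair read as the effective memory clock, e.g. 7000 MHz = 7 Gbps):

```python
# Bandwidth implied by the speculated lineup: (bus bits / 8) * Gbps.
lineup = {
    "HD 8970": (384, 7000),
    "HD 8950": (384, 6000),
    "HD 8870": (256, 7000),
    "HD 8850": (256, 6000),
    "HD 8830": (256, 6000),
}

for name, (bus_bits, mem_mhz) in lineup.items():
    gbs = bus_bits / 8 * mem_mhz / 1000  # effective MHz -> Gbps
    print(f"{name}: {gbs:.0f} GB/s")
```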
 

Saylick

Diamond Member
Sep 10, 2012
raghu78 said:
Looks like we will have an AMD response to Nvidia's GTX 700 series in Q3. […] This lineup would take the fight to Nvidia.

NICE! If this is true, we should see a more balanced architecture. The shader/ROP ratio looks better in this case. More importantly, this would indeed bring competition back when we need it most.
 