[VC][TT] - [Rumor] Radeon Rx 300: Bermuda, Fiji, Grenada, Tonga and Trinidad

Page 9 - Seeking answers? Join the AnandTech community: where nearly half-a-million members share solutions and discuss the latest tech.

tential

Diamond Member
May 13, 2008
7,348
642
121
Ya, I guess I'll just continue to follow hardware like I always do. Like you said, I don't need to upgrade (I am not gaming enough to even justify it. I haven't fired up a game in over a month since I last played Twilight Princess on Dolphin Emu. Still have that, Skyward Sword, Galaxy 2 (Almost finished), and a couple more emulated games to go.), and if I can wait for 16 nm, probably for the best anyway since that'll just mean a "massive" leap hopefully in performance.
 
Feb 19, 2009
10,457
10
76
Didn't Asetek announce that they scored a major deal that was due to ship H1 2015? Is Asetek leasing the design to Cooler Master, who is selling the parts to AMD?

http://asetek.com/press-room/news/2014/asetek-announces-largest-ever-design-win.aspx

Actually, it looks like that might not have been for AMD after all, if this is what the design win was...

http://www.datacenterdynamics.com/focus/archive/2015/02/asetek-sells-35m-liquid-cooling-california

AIO cooling for super computers and HPC servers? But I hear its not reliable... leaky and all that water around is dangerous!
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Ya, I guess I'll just continue to follow hardware like I always do. Like you said, I don't need to upgrade (I am not gaming enough to even justify it.

:thumbsup:

The one advantage if you skip generations (for whatever reasons, maybe you were too busy to play games at that time due to personal reasons, etc.) is that you don't need to get the top cards to see a great benefit. Since we are talking about 390X, I'll illustrate this with this hypothetical example:



@ 1080P, let's say a gamer still has a $250 unlocked HD6950 @ HD6970 => 297 = 100% but hasn't upgraded yet. Let's assume R9 390X is 45% faster than R9 290X => 695x1.45 = 1008 = 339% or 3.39X faster than the HD6970, a massive jump.

Let's assume R9 390 ends up 85% of R9 390X's performance (or 390X is 18% faster) => 1008 * 0.85 = 850 = 2.86X faster, but still an amazing jump!

If you were to upgrade from that 6970, even 2.86X sounds like a huge improvement, while 3.39x sounds like a bonus in that case. The jump of 2.86x is already so large, that it would most likely already leave you feeling greatly satisfied with the upgrade even if the R9 390 non-X card cost $450.

vs.

R9 290X owner who paid $550 for his card on launch date. For him the situation would look:

R9 390 non-X = 850 or just 22% faster than a 290X at $450. Wow, what a horrible upgrade in this case in comparison to the 6970 user, unless the 290X can be sold in the used card market for somewhat decent value. But still, you can tell this is a far worse upgrade from a price/perf point of view.

For that reason I think only the most cutting edge PC enthusiasts upgrade every generation. Otherwise, it's just too costly and the gains are too small.

That's why I personally always look at the time that passed since my last card to gauge if the upgrade seems worth it. I look at R9 290X/780Ti and then I see that I skipped them because well I just didn't need that extra performance during the time those cards reigned supreme. Then 1 year from their launch, I see a 980 that's only 15% faster. I am just using this as a hypothetical example. It didn't have to be a 980, could have been some other NV/AMD flagship card. Then I ask myself OK I waited 2 years and in 3 years only 15% more performance got added - horrible trade-off for 12 months of extra waiting. Sounds like it's better to wait for a way bigger jump. This strategy works if you have a backlog of games or if you haven't hit your favourite genre/game that runs like a total dog on your existing hardware. Alternatively, one can skip this type of upgrading style and just resell old cards often to lower his/her cost of ownership. That strategy has also worked well for many. The one strategy that has consistently proved to be the worst is buying a $600 card and holding on to it for 4-5 years (unless you got it free or at some huge discount).

AIO cooling for super computers and HPC servers? But I hear its not reliable... leaky and all that water around is dangerous!

I would have liked to see more of AMD's AIBs switching to a 3-year warranty though. At least that would instill more confidence in gamers to accept the AIO CLC solution. Sapphire is one of the premium AMD AIBs but it still only offers a 2-year-warranty. There are quite a few PC gamers who prefer NV a lot more because they prefer EVGA's customer service and warranty over all other AIBs. AMD really needs its own EVGA and XFX hasn't quite lived up to that level of customer service despite offering lifetime warranty. I am actually surprised AMD hasn't worked more closely with AIBs to try to negotiate more favourable RMA rates for failed cards so that their AIBs would be more likely to offer a 3-year-warranty.

Also, I can't imagine what Sapphire and EVGA get from AMD/NV for them to be exclusive partners of those brands. I would bet both would make some decent profits if they were brand agnostic. This is going a bit off-topic but I think if AIO CLC have a shot of becoming more mainstream with future generations, a longer warranty would give gamers who fear AIO CLC a better peace of mind.
 
Last edited:

digitaldurandal

Golden Member
Dec 3, 2009
1,828
0
76
Also, I can't imagine what Sapphire and EVGA get from AMD/NV for them to be exclusive partners of those brands. I would bet both would make some decent profits if they were brand agnostic. This is going a bit off-topic but I think if AIO CLC have a shot of becoming more mainstream with future generations, a longer warranty would give gamers who fear AIO CLC a better peace of mind.

Didn't Nvidia punish XFX when they started offering ATI cards?
 

Fallen Kell

Diamond Member
Oct 9, 1999
6,095
458
126
nah, more like they are wining as a brand right now D: AMD needs to step up their PR game.

You need a product out there that helps with the PR game first. The only news that people keep seeing is delay this, delay that, can't build 22nm, etc., etc., etc. Yet in the meantime, Nvidia (who is suffering the same problem with 22nm), made lemonade from the lemons and released a 28nm part that cut power and improved performance. They saw the market and looked at the manufacturing problems and pulled a rabbit out of the hat to fill the void, getting good press releases, and several months of internet sites splashing reviews of the latest Nvidia cards, getting essentially free PR and advertising of the products and product line. Aside from the 970 memory issues (which don't show up in most real game benchmarks yet), it has been all good news, and yet when people go look for information on AMD, all they see are the delays....
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
I sure hope so, and about time AMD starts making some big dies again for the GPUs!

Seconded!

I agree with NV's strategy of having small die, middle die and large die to service their desktop and mobile GPU sectors. Since the failed 2900XT, for many years AMD has abandoned this strategy with 4870/5870/6970 and even 7970 which meant their strategy was to use use a mid-size die for the high-end sector and scale it down from there to lower segments, while the ultra-high-end would be serviced by a dual-chip flagship card. AMD actually called it the "small die strategy" but I think it's more reasonable to call it a mid-size die strategy since their goal was to use a mid-size ($300-350 chip) as the high-end and scale it up and down like so:



This 1 decision from an engineering perspective meant that AMD's engineers would have the nearly impossible task of somehow designing a mid-size die chip that could actually compete with NV's huge monolith die chips. You know where this goes....It jumps out right away then that NV's historically ginormous 450-560mm2 die flagship chips would go against AMD's small 325-365mm2 die and what was the chance AMD's engineers/cards could keep up under similar laws of physics and nodes? Basically it was a foregone conclusion since the "small die strategy" that NV would win, we just didn't know by how much each generation. GTX280/285/480/580 have all beaten 4870/4890/5870/6970 with a comfortable lead. It's only in the last 6 months that 780Ti has completely given up its original lead to the 290X. The reason I am not ready to write off AMD is despite NV leading in perf/watt, it's actually remarkable if we look back in time now given AMD's reluctance/outright financial inability to embrace large die flagship cards that a 438mm2 290X is basically tied today with that generation's NV's best chip - a gargantuan 561mm2 780Ti.

Since AMD doesn't have the ability to amortize costs across Quadro and Tesla and GeForce Grid like NV can, AMD was likely unable to make a financial business case for building 500-550mm2 die during the early and mid life-cycle stages of 28nm wafer pricing and yields. NV could, giving them a huge advantage right off the bat. Since 14nm is delayed, 28nm wafer pricing and yields have improved, AMD should now be able to finally have a shot at building that 500mm2 chip.

As we now know in hindsight, this mid-size die strategy for AMD backfired because it resulted in AMD pricing their flagship HD4870 card at $299 as ultimately it wasn't faster than the GTX280. AMD's brand image was tarnished by the failed 2900/3800 series and then strictly price/performance targeted 4850/4870 cards. This contributed to the budget and "also ran" image of AMD. Once 2900/3800 series set up the exodus of buyers moving to GeForce 8, it would be much harder for AMD to get those customers back when GTX260 216/275/280/285 were very good cards themselves. Also, after price cuts, GTX200 series of cards didn't cost that much more compared to AMD's, I mean the situation was nothing like it became with Kepler and Maxwell. Since then, I would imagine a lot of the users who left ATI to GeForce 8/GT200 simply stuck with NV.

ATI would have never allowed NV's top card to be 15-20% faster on average than ATI's best card (and this never happened under ATI's reign since Radeon 8500 days). The 3 main reasons Fermi products didn't absolutely crush the HD5000 series from day 1 were because: (1) Fermi roll-out was at least 6 months late top-to-bottom, (2) AMD's perf/watt allowed it to compete with a smaller die due to Fermi performance being power limited (but once Fermi overclocked, it was all over for the 5870) (3) it took a while for DX11 games to utilize tessellation which delayed 5800 series ultimate demise (thus it gave AMD enough time to scramble and fix their geometry performance just in time for 580 series).

However, this strategy finally caught up to AMD during HD7970 vs. GK104 days, which meant they were suddenly forced to go 438mm2 on 290X. I am not sure if it was RR or Lisa but someone at the top finally realized that you cannot really compete long-term with NV which has a perf/watt lead and simultaneously makes the largest chips, which means they have more functional units. NV even went a step further and efficiently disabled/minimized the impact of DP transistors on gaming power usage and beat AMD in terms of voltage granularity and response with Maxwell.

R9 390X should be the 1st step of AMD realizing they need large(r) die chips to compete with the best from NV. For next iterations starting 14nm, AMD can start focusing a lot more on drastic rework of the GCN architecture for perf/watt. This round they might bring 30-40% increase in perf/watt but it's not going to be 2X the perf/watt increase because it takes 3-4 years, if not 5, to design a brand new architecture from the ground up. GCN 1.3 won't be a built from the ground-up architecture like Maxwell. Because NV has more financial resources, they can do full architectural redesigns more often than AMD imo. This is why it's way harder for AMD to match NV in perf/watt and performance at the same time. Also explains why AMD needed to take a lot more risks with AIO CLC and HBM because they can't just match NV's 2x perf/watt with a 3.5 year old GCN architecture that basically has been enhanced but not fully redesigned from the ground-up.

--

Just wanted to add to the fears of those who are against AIOs but think AIB's won't make after-market coolers capable of sufficiently cooling a 300W card. Gigabyte R9 290X and 780Ti Ghz are both cooled by Gigabyte's excellent after-market heatsinks at good temps and noise levels.



We won't get a card as quiet as a GTX960 but it'll still be a quiet card for a 300W flagship.



"It is incredible to see how poised this heatsink design is. Even when coping with the slightly overclocked R9 290X core, the three fans remain nearly silent or essentially so quiet that you won’t be able to hear them over the sound of most case fans." ~ Source

It will be interesting to see how well 120mm AIO CLC and its VRMs cope when it comes to overclocking against the best air coolers for 390X.
 
Last edited:

n0x1ous

Platinum Member
Sep 9, 2010
2,572
248
106
It would be interesting to see how well 120mm AIO CLC and its VRMs compare when it comes to overclocking against the best air coolers for 390X.

Considering Vesuvious ran at less than 75C on a single 120mm rad, I would expect core temps on Fiji to be low 60s (they may go with thinner rad for cost savings) VRM's cooling will be interesting to see for sure....

Makes me wonder why can't they do full cover block AIO? any reason other than cost that it couldn't or shouldn't be done?
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
You need a product out there that helps with the PR game first. The only news that people keep seeing is delay this, delay that, can't build 22nm, etc., etc., etc.

In fairness neither GM200 nor R9 300 series is officially delayed since those products never had official launch dates. Chances are the average or mainstream PC gamer if you will doesn't follow the GPU market as closely as we do. They probably don't even know what GM200 or R9 390X are.

Yet in the meantime, Nvidia (who is suffering the same problem with 22nm), made lemonade from the lemons and released a 28nm part that cut power and improved performance. They saw the market and looked at the manufacturing problems and pulled a rabbit out of the hat to fill the void, ...

I don't for a second believe that when NV finished Fermi roll-out in 2012, that they only had 2 years to scrap all their plans and redesign the new Maxwell from scratch after they found out TSMC's 20nm was not going to work. NV designed Maxwell to be efficient from day 1, regardless if it was going to be 20nm or 28nm. They simply had to adopt the architecture later to 28nm because they had no other choice. However, I personally don't believe that NV just on a short notice from TSMC in a span of just 2 years managed to design Maxwell. While it's most likely true that Maxwell had to be adopted for 28nm, I think fundamentally NV changed the focus to perf/watt after Fermi and all future teams that were working on Kepler, Maxwell and Pascal already knew what the priority was, regardless of the node.

It sounds more like the focus on perf/watt was not the same at AMD and because of that AMD got completely caught off guard since their GCN didn't prioritize efficiency enough to handle yet another transition to 28nm. That's why it was AMD that was forced to have major delays. Having a lot less engineering, R&D and cash flow resources already put AMD behind NV to begin with. That's why I said for years now that it's shocking that NV has not beaten AMD's best chip by 30-40%. It's actually shocking to me that AMD kept up with a 438mm2 R9 290X against a 561mm2 780Ti and they both use a similar amount of power.

Although most gamers don't see it that way, AMD has actually closed the gap in performance compared to when they had 2900XT/3870/4870/5870/6970. 7970Ghz actually beat the 680 and 290X is right there with a 780Ti. The biggest problem in recent years for AMD is as you said constantly being late in terms of real world release dates. When you don't show up, well you automatically forfeit market share to more advanced products. Laptop OEMs don't wait for you as they need to refresh their products. That's why NV probably has something like 80%+ market share in the mobile dGPU space by now, maybe even more.

We should all hope that AMD is going to have good products in the laptop space too, not just R9 390/390X that cater to 3-5% of the PC market.

Makes me wonder why can't they do full cover block AIO? any reason other than cost that it couldn't or shouldn't be done?

That's a good point. I don't know a good answer to that other than cost as you have mentioned because I have very limited knowledge for custom WC loops. Maybe there is a also a water-flow/pump technical reason why that might not be optimal?
 
Last edited:

raghu78

Diamond Member
Aug 23, 2012
4,093
1,475
136
R9 390X should be the 1st step of AMD realizing they need large die chips to compete with the best from NV. For next iterations starting 14nm, AMD can start focusing a lot more on drastic rework of the GCN architecture for perf/watt. This round they might bring 30-40% increase in perf/watt but it's not going to be 2X the perf/watt increase because it takes 3-4 years, if not 5, to design a brand new architecture from the ground up. GCN 1.3 won't be a built from the ground-up architecture like Maxwell. Because NV has more financial resources, they can do full architectural redesigns like that but AMD can't. This is why it's way harder for them to match NV in perf/watt and performance at the same time. Also explains why AMD needed to take a lot more risks with AIO CLC and HBM because they can't just match NV's 2x perf/watt with a 3.5 year old GCN architecture that's basically has been enhanced but not fully redesigned from the ground-up.

As usual RS you are just hyping Maxwell and simply not telling the truth.

http://www.techpowerup.com/reviews/Gigabyte/GTX_960_G1_Gaming/29.html
1080p 1600p
GTX 770 - 87 87
GTX 980 - 140 143

the perf/watt of GTX 980 is 65% higher than GTX 770. The die size has gone up from 294 sq mm to 398 sq mm which is roughly 35% larger die size.

Perf/watt/sq mm or perf/watt normalized to die size has gone up 1.65/1.35 = 1.22 or just 22% higher. So this is why you should stop hyping Maxwell.

AMD R9 390X (550 sq mm) is likely to match or beat a GM200 (650 sq mm). AMD has a significant time to market advantage on HBM (18 - 24 months) and they should leverage that well by keeping up the pace of innovation and continue to make significant microarchitectural improvements to improve perf/watt and perf/sq mm in the range of 20 - 30% every 12 - 15 months. AMD needs to keep making significant improvements on a yearly cadence (12 - 15 months) and return to what made them successful during the HD 4870 - HD 7970 days. They need to stop having large gaps of 18-21 months like they did between HD 7970 and R9 290X and now R9 290X and R9 390X.

Nvidia has mastered the art of refreshing their flagships regularly.

GTX 680 - Mar 2012
Titan - Feb 2013
GTX 780 Ti - Nov 2013
GTX 980 - Sep 2014
GM200 - Q2 2015

AMD have to learn from this. Ironically AMD did this kind of refreshes better than Nvidia during the HD 4870 - HD 7970 days but forgot it completely after that. AMD should also focus on notebook market as thats where they have fallen behind badly in market share. AMD should focus on perf/watt as thats the key to winning in notebook market apart from robust software (Optimus is better than Enduro and AMD need to correct that). The current R9 3xx series presents a golden opportunity to AMD to gain back notebook market share and even leapfrog Nvidia over the next 12 - 18 months. Nvidia is not going to have HBM till late 2016 or early 2017. AMD should utilize this time to market advantage with HBM to gain market share in notebooks over the next 18 months. They should not get complacent and keep working on delivering a steady yearly improvement in perf/watt and perf/ sqmm.

But the biggest way in which AMD can and will gain market share in notebooks is APUs with HBM and competitive CPU performance (both single and multi thread). Nvidia will never have an answer to the level of integration thats possible with an APU which uses HBM as system memory. In 2016 it will be possible to ship a quad core Zen APU with 1280 - 1536 GCN 2.0 cores and 4/8 GB HBM running at 256 GB/s (HBM2). With Zen AMD needs to deliver a CPU core which erases the bad reputation that AMD earned with Bulldozer. If they do that AMD can finally deliver a product which both Nvidia and Intel cannot deliver. Here is hoping for some Jim Keller magic in Zen :thumbsup:
 
Last edited:

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
I found this interesting thread and imo it has some very spot on and thought out comments that now 2.5 years later actually show a foresight of brilliance, especially GodisanAtheist's opening post:

AMD could have charged a higher price for their GPUs (and hence higher margins), maintained ATI's "High end" image instead of looking like the bargain bin value brand, and wouldn't have to face the inevitable "spoiled consumer" backlash that they are facing now with a truly competitive high end GPU.

......

I can't help but look back and think of what a different position AMD might be in today if they charged on ahead with monolithic dies and let Nvidia take on water with their GPGPU strategy. Instead they gave Nvidia the room they needed to create then entrench themselves in the GPGPU market while being competitive in gaming performance.

Your thoughts?

Interestingly enough in a matter of months AMD will go both large die + water against nV.

small die strategy worked well when amd was fighting fermi...

now that nvidia catched up in perf/mm², the strategy just don't work anymore...
we will see the >500mm² again, from amd and nvidia...

So Nvidia played smart this round. They stripped the top of the line chip of its double precision capabilities (it's not completely neutered in the compute department though), which in turn probably saved a good amount of die space. That smaller die size translates into [theoretically] lower production costs, and overall a significant reduction in power consumption.

Even though a lot of people get offended when on our forum people bring up the point of $500 mid-range next gen products, unfortunately until AMD goes large die, it's unlikely for this to change:

It's not semantics when it's $500 out of my pocket for next gen "mid-ranged" performance.

Or $600 for the 7970 prior to that goofy 680.

What it is is mid range cards occupying high end price brackets, what is it is a generation worth skipping, what it isn't is sematics as calling a fiero a high end car just because you can't get a lambo doesn't make it worth 2 million or high end.

I agree with Balla. Even back then he understood the true significance of only having 1 GPU manufacturer aiming for large monolith die chips, while the other company's engineers would be prohibited from exploring the maximum performance potential since they were never allowed to make the best chip they could have made due to strictly imposed die size limits.

The sad part is AMD's small-size die strategy actually hurt almost all PC gamers long-term but it took 2.5 years for this to be cemented with a 980. The end result was $500 GK104, $550 HD7970 and now $550 GM204. Bitcoin mining is the only thing that made me close eyes on the 7970's price tag but now we are all in this together with $500+ mid-range next gen chips. If AMD goes large monolithic die with R9 390X, it should in theory benefit the entire market, assuming AMD can execute in terms of supply and the product runs cool and quiet, which would ensure OEM design wins.

It's also great to see historically knowledgeable GPU enthusiasts such as toyota who in hindsight were spot-on against all the defenders of $500 mid-range die "flagships":

http://forums.anandtech.com/showpost.php?p=33689636&postcount=67

I believe we would instead see longer times between process tech improvements. Oh, there will certainly be 1000$ [GPU] processors just after a factory starts production, but we will see longer concurrent production on older process tech for more years as moors law finally ends due to the limitations of physics.

Also, some excellent points:

Contrary to what the denizens of the microcosm on this board (and others, to be fair) believe, the rest of the world couldn't care less about benchmarks or any reviews. Most profits come from the OEM sectors and there it's what gets the job done for the least amount possible. AMD's small die strategy is probably the only thing that kept them afloat for this long. What's sinking them is their chronic mismanagement, lack of diversification, and poor support/PR/customer relations (which stems from chronic mismanagement I suppose).

^ Anyway, I found this thread very interesting in reflecting the current state of AMD's graphics and the rumoured evolution of AMD finally supposedly moving to a 500mm2+ die size.
 
Last edited:

TheRealSintel

Junior Member
Feb 1, 2014
11
0
66
That entire post is a gross misinterpretation of history, even though it seemed to be in good faith. Reread the RV770 and RV870 stories on Anandtech to refresh your perspective.

First of all, AMD's small die strategy was a resounding success, the RV770 performing between the GTX 2x0 flagships for 100$ less than the slower variant! They priced NVIDIA out of the market at the time, who had a panic knee-jerk reaction with the GTX260-216. Read the launch article of the 4870 again.

The strategy was NOT to beat NVIDIA's big die, but to focus on time-to-market with the very best high-margin, higher-volume midrange chip they could build. And for almost 3 generations it worked perfectly. AMD had the market cornered during the HD 4x00, 5x00 and part of the 6x00 generations. The RV770 and RV870 VLIW5 designs were perfectly matched for maximum graphics performance during the DX9-10 era, while NVIDIA was focusing on designs better matched for general compute, but less efficient in gaming. The HD5870 was also 8(?) months earlier than NVIDIA's flagship (GTX480) and trounced it on efficiency, size and price. NVIDIA had the performance crown yes, but you had to divvy up several hundred dollars for a mere 10-15% improvement and you got a big and hot chip.

In fact, the positioning today is more like a role inversal - the result of AMD failing to continue executing on this strategy and making the mistakes of NVIDIA of the past, and NVIDIA incorporating the strategy AMD pioneered.

* GCN is primarily a compute focused design, with superior compute and FP64 performance for the money (that's the reason they were so wanted during the mining craze) but losing out on efficiency when gaming. Die sizes in the meanwhile have been creeping up and iterations have been coming in slower and slower.
* Since GTX-6x0, NVIDIA started releasing their mid & small die *before* their big die (historically this was never the case!). They also severely scaled back the compute performance for better efficiency during gaming. They're iterating faster now, having several months lead on AMD.

The only lesson AMD had to learn, is that 2 smaller dies are not a replacement for one big die. Beyond that, I just hope they return to their former strategy.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
As usual RS you are just hyping Maxwell and simply not telling the truth.

Perf/watt/sq mm or perf/watt normalized to die size has gone up 1.65/1.35 = 1.22 or just 22% higher. So this is why you should stop hyping Maxwell.

Ya, I realize that 2x perf/watt NV provided for Maxwell stretching reality because after-market 980 cards use ~ 200W but if we compare reference vs. reference, NV came extremely close!

980 average = 156W, peak = 184W
680 average = 166W, peak = 175W (same graph)

Taking CPU limitations out of it with higher rez @ 1440P
980 = 122%
680 = 70%

Perf/watt based on average power usage:
980 = 122% / 156W = 0.78/Watt (+86%)
680 = 70% / 166W = 0.42/Watt

Perf/watt based on peak power usage:
980 = 122% / 184W = 0.66/Watt (+65%)
680 = 70% / 175W = 0.40/Watt

65-86% depending on how you look at it. I don't think R9 390X will match GM200 in perf/watt but I personally don't care about 50-100W of extra power when discussing 400W+ overclocked i5/i7 rigs. Some people on our forum seem to care though.

AMD R9 390X (550 sq mm) is likely to match or beat a GM200 (650 sq mm). AMD needs to keep making significant improvements on a yearly cadence (12 - 15 months) and return to what made them successful during the HD 4870 - HD 7970 days. They need to stop having large gaps of 18-21 months like they did between HD 7970 and R9 290X and now R9 290X and R9 390X.

I am not sure what will matter more in 2015-2017 games though -- the huge memory bandwidth of 390X or the 6GB of VRAM of the GM200. Even if they are 10% apart at launch, those 2 factors could have huge consequences later. Look at 680 2GB now, it's basically a write-off in modern games compared to an HD7970Ghz 3GB. In this case it won't be as clear cut since AMD would have a huge memory bandwidth advantage but less VRAM.

I agree with your 2nd point about AMD needing to add performance to the top cards more often, even if it means just 10-15% more just to maintain the momentum.

I am not sure how they are going to do it with 390X though. If they push 390X to near limits to try and beat GM200, I doubt it will be that easy to refresh it with 10-15% faster card in 2016 without relying on 14nm/16nm.

Ironically AMD did this kind of refreshes better than Nvidia during the HD 4870 - HD 7970 days but forgot it completely after that.

I am not sure if they forgot. I think some huge mismanagement happened where either a refresh of 290X failed in testing to show any significant improvements, or AMD simply took the risk of saving their limited financial resources/engineering resources to go all in with a 30-50% increase with HBM+WC. Perhaps they didn't anticipate HBM to be delayed for that long? That could be 1 factor why 390X is significantly behind 290X on the timeline.

AMD should also focus on notebook market as thats where they have fallen behind badly in market share. AMD should focus on perf/watt as thats the key to winning in notebook market apart from robust software (Optimus is better than Enduro and AMD need to correct that).

That's why I am having a really hard time believe that AMD will just do a mild-refresh of Hawaii as Grenada for 380/380X, and have flagship 390/390X but nothing much more power efficient. I am having a hard time believing AMD has just given up completely on the laptop discrete GPU market. If they haven't, we are ought to see something much better than R9 290M/295M.

The current R9 3xx series presents a golden opportunity to AMD to gain back notebook market share and even leapfrog Nvidia over the next 12 - 18 months. Nvidia is not going to have HBM till late 2016 or early 2017. AMD should utilize this time to market advantage with HBM to gain market share in notebooks over the next 18 months. They should not get complacent and keep working on delivering a steady yearly improvement in perf/watt and perf/ sqmm.

I don't think it's humanly possible. I bet the market share split is 80/20% in laptops, if not worse. If AMD managed to gain just 10% in the mobile dGPU sector in 18 months, that already would be remarkable. I also don't think HBM has sufficient volume to go into millions of low-end and mid-range mobile chips. If AMD actually manages to go HBM with laptops in 2015, I will be very impressed.

But the biggest way in which AMD can and will gain market share in notebooks is APUs with HBM and competitive CPU performance (both single and multi thread). .....If they do that AMD can finally deliver a product which both Nvidia and Intel cannot deliver. Here is hoping for some Jim Keller magic in Zen :thumbsup:

Isn't Zen slated for 2H of 2016? That's still a long way to go. Intel has an undisputed manufacturing node advantage for high-performance parts over anyone in the world today. Even if AMD could match Intel's IPC, I don't see them matching Intel in perf/watt or absolute performance. Also, as you remember even when Athlon XP+ and A64 whipped Intel's P3/P4s and P-D, OEMs and consumers still bought Intel CPUs primarily.
 

raghu78

Diamond Member
Aug 23, 2012
4,093
1,475
136
I am not sure what will matter more in 2015-2017 games though -- the huge memory bandwidth of 390X or the 6GB of VRAM of the GM200. Even if they are 10% apart at launch, those 2 factors could have huge consequences later. Look at 680 2GB now, it's basically a write-off in modern games compared to an HD7970Ghz 3GB. In this case it won't be as clear cut since AMD would have a huge memory bandwidth advantage but less VRAM.

You think AMD will fall behind in VRAM after leading for 3 generations ( GTX 580 1.5 GB vs HD 6970 2GB, GTX 680 2GB vs HD 7970 3 GB, GTX 780 Ti 3GB vs R9 290X 4GB). I don't think so. Either AMD will go for 6GB HBM or a hybrid memory system using HBM + GDDR5. I think AMD will surprise everyone with 6GB HBM. Even though the first gen HBM will be restricted to 1GB per memory stack in a 4 Hi config, there is no restriction on the number of memory stacks which can be connected to the GPU. The interposer size, yields and cost are the determining factors.

https://www.youtube.com/watch?v=9KjtVjis8Ps&feature=youtu.be&t=2h15m25s

Bryan Black of AMD talks of interposer sizes as large as 40 x 40 sq mm or 50 x 50 sq mm to fit as many components as needed.

GF shows a 2.5D logic memory solution with 6 memory stacks in their Advanced 2.5D packaging presentation (video at 9:00)

https://www.youtube.com/watch?v=po29B53bpic

So don't draw any premature conclusions that AMD will have lesser VRAM.

I am not sure how they are going to do it with 390X though. If they push 390X to near limits to try and beat GM200, I doubt it will be that easy to refresh it with 10-15% faster card in 2016 without relying on 14nm/16nm.
AMD will push for a 300 - 350 sq mm at Samsung 14LPP with atleast 25 - 35% higher performance than R9 390X and 8GB or 16 GB 2nd gen HBM. AMD need to follow up in a 2017 with a real tock. They need a significant microarchitectural improvement and look to hit 450 sq mm to be able to compete against Nvidia Pascal which is likely to ship the big die > 500 sq mm flagship GPU with 1 TB/s and 16 GB capacity.

I am not sure if they forgot. I think some huge mismanagement happened where either a refresh of 290X failed in testing to show any significant improvements, or AMD simply took the risk of saving their limited financial resources/engineering resources to go all in with a 30-50% increase with HBM+WC. Perhaps they didn't anticipate HBM to be delayed for that long? That could be 1 factor why 390X is significantly behind 290X on the timeline.
yeah I think GF could not deliver 2.5D packaging and Hynix was not ready with HBM any earlier than early Q1 2015. So now we will see products in Q2 2015.

That's why I am having a really hard time believe that AMD will just do a mild-refresh of Hawaii as Grenada for 380/380X, and have flagship 390/390X but nothing much more power efficient. I am having a hard time believing AMD has just given up completely on the laptop discrete GPU market. If they haven't, we are ought to see something much better than R9 290M/295M.
I am confident of a top to bottom R9 3xx GPU stack with completely new ASICs built at GF 28SHP. I expect R9 390X, R9 390, R9 380X and R9 380 to be HBM based. I expect a 4096 sp Bermuda XT flagship GPU with total VRAM > 4GB ( 6GB HBM or 4GB HBM and 4GB GDDR5) and a 3072 sp Fiji XT with 4GB HBM.

Bermuda - 4096sp (6 GB HBM or 4GB HBM and 4 GB GDDR5)
Fiji - 3072sp ( 4 GB HBM)
Treasure Island - 1536 sp (2GB or 3 GB GDDR5)

I don't think it's humanly possible. I bet the market share split is 80/20% in laptops, if not worse. If AMD managed to gain just 10% in the mobile dGPU sector in 18 months, that already would be remarkable. I also don't think HBM has sufficient volume to go into millions of low-end and mid-range mobile chips. If AMD actually manages to go HBM with laptops in 2015, I will be very impressed.
AMD has been hit the most in enthusiast notebook GPUs where they are now non existent. HBM will help AMD gain market share at the high end where they need it the most. Fiji with 3072 sp and 4 GB HBM will easily beat GTX 980 both in desktop and notebook form factors. You think AMD will go with HBM on desktop and not use that advantage in notebooks. Wow you sure don't have an idea. AMD will leverage HBM in enthusiast notebook GPUs where they stand to gain the most.

Isn't Zen slated for 2H of 2016? That's still a long way to go. Intel has an undisputed manufacturing node advantage for high-performance parts over anyone in the world today. Even if AMD could match Intel's IPC, I don't see them matching Intel in perf/watt or absolute performance. Also, as you remember even when Athlon XP+ and A64 whipped Intel's P3/P4s and P-D, OEMs and consumers still bought Intel CPUs primarily.
AMD APUs are a completely different ball game once HBM is incorporated. With DX12 and Mantle reducing the CPU bottleneck and AMD aiming to build a competitive CPU core with Zen its not required for AMD to match Intel IPC as long as they are close (perf gap of <= 10%). On the other hand GCN with HBM will murder Intel HD graphics
 
Last edited:

destrekor

Lifer
Nov 18, 2005
28,799
359
126
Bermuda - 4096sp (6 GB HBM or 4GB HBM and 4 GB GDDR5)
Fiji - 3072sp ( 4 GB HBM)
Treasure Island - 1536 sp (2GB or 3 GB GDDR5)

I just don't see them going this route, not at all. Splitting up your VRAM between two entirely different memory pools that have entirely different transfer rates and potentially even access speed/timing?
 

crisium

Platinum Member
Aug 19, 2001
2,643
615
136
It will be 4GB. They will have to abandon the VRAM advantage over Nvidia's high end for the first time since the 4870 and 280. Although they will still be able to boast of a memory advantage over the 970 with its 512MB of slower memory.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
That entire post is a gross misinterpretation of history, even though it seemed to be in good faith. Reread the RV770 and RV870 stories on Anandtech to refresh your perspective.

You are only looking at a short-term perspective of the consumer winning on price/performance temporarily. If the firm cannot sustain this strategy financially or gain substantial market share using it, it's a failure long-term.

This strategy ONLY works until your competitor catches up or beats you in perf/watt at which point their 550-600mm2 die chips and 400mm2 chips pummel your 325-350mm2 chips into the ground. Secondly, selling small- to mid-die chips against superior large die competing products means lower average selling prices, lower margins, and worse brand equity.

First of all, AMD's small die strategy was a resounding success, the RV770 performing between the GTX 2x0 flagships for 100$ less than the slower variant! They priced NVIDIA out of the market at the time, who had a panic knee-jerk reaction with the GTX260-216. Read the launch article of the 4870 again.

I am well aware of all the NV and AMD launches in the last 15 years. It was only a resounding success for the consumers but for the GPU market as a whole overall, it was a horrible outcome because it meant once NV caught up and exceeded AMD in perf/mm2 and perf/watt, they could charge flagship prices for mid-range chips = this all started with Kepler. Kepler proved AMD's small-die strategy was not sustainable long-term.

The strategy was NOT to beat NVIDIA's big die, but to focus on time-to-market with the very best high-margin, higher-volume midrange chip they could build.

There is no way selling HD4870 for $299, HD5870/6970 for $369-379 produced high margins for AMD. NV sold 256-bit small die chips for $500 with GTX680, and $550 for GTX980. Those are high margins mid-range chips. Also, the small die strategy has no contingency plan. If you are late with a mid-range die, you lose the ENTIRE market until next generation. With the large die strategy, even if your large die chip is late, you still have solid low-end and mid-range chips.

And you seem to not get this point at all - the main reason AMD hang on to market share in the HD4000-6000 generations was because NV had much more inefficient perf/watt and perf/mm2 architectures. Once NV fixed that, it's game over for that AMD strategy.

-> If your large die chip is 500-600mm2+, then you can build mid-range 350-400mm2 chips. If AMD's flagship chip is not even 400mm2, how do they expect to even compete in the highest segments? They won't.

HD4000 vs. GTX280 generation

HD4870 = 256mm2
HD4890 = 282mm2
vs.
GTX280 = 576mm2

HD4870 was only as fast as NV's 3rd top card - GTX260 216.

AMD's small die strategy didn't work in terms of profits or large market share gains. They started off with 38% market share vs. 61% for NV, but market share collapsed as low as 31% and went up as high as 43.8%. By the beginning of HD5800 generation, AMD was at about 40% to 60% for AMD, or barely a 2% increase after releasing a $199 4850 and $299 4870. Ouch. :hmm:

HD5000 vs. GTX400 generation


HD5870 = 334mm2
vs.
GTX480 = 526mm2

HD5870 did beat GTX470 initially but with time, lack of geometry performance meant a GTX470 OC would crush a 5870. AMD started off that generation at about 40% and by the time Barts (6800) and Cayman (HD6900) came out, AMD had 38.8% market share. OK so going from HD4870 with 38% market share all the way until the end of HD5870 generation, AMD was still at just 38.8% market share. Are you starting to see a trend now? And all this time AMD is not making a lot of $ on GPUs.

During this generation, HD5850 significantly undercut NV's 470 and 5870 the GTX480. It took 9 months for GTX460 to come out to match a 5850 but AMD still lost market share. :thumbsdown: AMD basically beat NV by 6 months with GTX470/480 and 9 months with GTX460 and below. This huge head-start, and amazing price/performance didn't even make a dent in the market share by the end of the Fermi vs. HD5000/6000 generation; and neither did this make AMD a lot of money!

HD6000 vs. GTX500 generation

HD6970 = 389mm2
vs.
GTX580 = 520mm2

This is where AMD is starting to realize you can't have your cake and eat it too. You can't magically perform next generation DX11 effects like geometry/tessellation and global illumination for "free". You need to grow your die size with more functional units of you'll get run over.

Shockingly, despite HD6950 selling for $299 and unlocking to an HD6970, basically undercutting GTX570 and having the option of effectively almost buying dual 6950s for the price of a single 580 1.5GB, AMD doesn't gain market share.

At the start of that generation AMD was at 38.8% and by HD7970 launch, AMD is at 37.8%, another 1% loss of market share. :thumbsdown:

All this time a gamer could use HD4000/5000/6000 for mining which made those upgrades all free.

HD7000 vs. GTX600 generation

HD7970/7970Ghz = 352mm2
vs.
GTX680 = 294mm2

Now, the turning point. Finally what many have said would become true did. Once the competitor catches up or beat AMD on perf/watt and perf/mm2, AMD's small die strategy would come crashing down. AMD even beat GTX680 to launch by 2.5 months, and even after GTX680 came out, AMD recaptured the performance crown as of June 2012 and held on to it until the Titan. HD7770/7850/7870/7950 all beat every single Fermi GTX500 chip in perf/watt and VRAM and performance but still failed. NV easily coasted until GTX660Ti and below launched 6+ months later.

AMD started this generation with a market share of 37.8%, and it went down as low at 33% (!). By the time R9 290/290X launched in Q4 2013, AMD's market share fell to just 35%, or another 2.8% fall in market share from the end of the HD6970 generation. :thumbsdown:

Recall the green market share number of 38% - that's where AMD started off with HD4870, and by the end of HD7970Ghz generation, they ended with just 35%. Therefore, the small die strategy absolutely failed what it set out today -- it failed to make $ and it failed to gain market share. :\

You pretty much know today the market share is close to 70%/30%, which means AMD lost even more since R9 290/290X.

Data:



And for almost 3 generations it worked perfectly. AMD had the market cornered during the HD 4x00, 5x00 and part of the 6x00 generations.

This is not reality. It only had the market cornered from a price/performance point of view but since most of the market didn't respond to price/performance, the strategy didn't work. I compiled market share numbers for you but I won't spend time compiling the financial because you know AMD's GPU division performed horribly for the last 4 years.

NVIDIA had the performance crown yes, but you had to divvy up several hundred dollars for a mere 10-15% improvement and you got a big and hot chip.

A lot of people must have paid extra for NV because AMD's price/performance & small die strategy did not result in market share conversion long-term. Those are the facts. NV's large monolithic die strategy prevailed, despite NV being late with both Fermi and Kepler.

In fact, the positioning today is more like a role inversal - the result of AMD failing to continue executing on this strategy and making the mistakes of NVIDIA of the past, and NVIDIA incorporating the strategy AMD pioneered.

NV's strategy is nothing like AMD's. AMD's focus is on GPGPU compute, OpenCL and small to mid-size die chips, using dual-chips for uber flagships. HD5870, 6970 and 7970 all had a massive amount of double precision compute, not just gaming performance. NV has tried all it can to make efficient gaming cards and has continued to neuter compute performance to maximize gaming efficiency. Maxwell just barely caught up to GCN in some areas of compute and that's only because it's 1 full generation ahead of GCN, but NV's DP ratio is now usually 16:1, 24:1 or even 32:1 vs. SP. Even their Kepler products had a fixed function compute unit which used way less power.

NV only continues to grow the die size of their flagship chips, and it's now able to use a mid-range chip as "flagship" for 2 consecutive generations because they have surpassed AMD in perf/mm2 and perf/watt. GK104 and GM204 are priced at $500 and $550 because gamers keep paying those prices for next-gen mid-range chips and because NV is now so far ahead of AMD in perf/watt and perf/mm2.

* Since GTX-6x0, NVIDIA started releasing their mid & small die *before* their big die (historically this was never the case!).

This was not the original AMD small die strategy. NV simply releases the smaller die chips first as flagships because of how much more ahead they have gotten in terms of efficiency. However, they have their high-end GK110/GM200 series of cards on top of the mid-range. AMD's mid-range strategy never had any large die chip sitting above; instead they used a dual-chip card to fill-in that niche.

The only lesson AMD had to learn, is that 2 smaller dies are not a replacement for one big die. Beyond that, I just hope they return to their former strategy.

These 2 sentences are almost contradictory. The whole point AMD should go back to large monolith dies (if financially possible) is exactly because you can't use 2 smaller dies to try and fight the uber-high-end market segment. Not only will AMD improve its brand equity, it would be able to raise prices, increase profit margins, and provide competition so that we are not being sold $550 mid-range next gen chips. That's why a 500mm2+ die is the best thing to happen to the GPU industry if true. It would mean having the old ATI-style mentality of wanting the single chip performance crown back.

Also, another consequence of having large die chips other than strong pricing competition, is we as gamers get a lot more performance. Imaging taking a perf/watt efficient Maxwell, Volta, Pascal architecture and extrapolating that performance to a 550-600mm2 chip. If both AMD and NV are onboard for superior perf/watt and perf/mm2, we will get huge performance increases in the next 5 years and if AMD executes on time, hopefully next gen mid-range chips won't be sold at $500-550. Although I think it's very likely that both AMD and NV will bifurcate a generation from now on since yields on newer nodes are going to get worse and well financially it's more beneficial for both of those firms to have 2 flagships every 2 years since it makes them more $.

Maybe it's going to become this new trend:

1st revision on a new node = 350-400mm2
2nd revision on that node = 500mm2
3rd revision of that node = 550-600mm2

NV is more likely to be gunning for a 500mm2 earlier on since they have a proven track record of great execution with large monolith die chips and AMD is only now supposedly making their first such chip!

You think AMD will fall behind in VRAM after leading for 3 generations ( GTX 580 1.5 GB vs HD 6970 2GB, GTX 680 2GB vs HD 7970 3 GB, GTX 780 Ti 3GB vs R9 290X 4GB). I don't think so.

Based on all the information I've read so far, I am inclined to say yes. I've never read any rumour of R9 390X card being anything other than a 4GB HBM product.

You think AMD will go with HBM on desktop and not use that advantage in notebooks. Wow you sure don't have an idea. AMD will leverage HBM in enthusiast notebook GPUs where they stand to gain the most.

Right now we don't have any info to say if AMD will use HBM for laptops or not this round. I am going to say I just don't know than guess. My point was that I didn't believe AMD would having nothing better than R9 290M/295M Tonga but I don't know if they will continue with GDDR5 or HBM for 2015 high-end laptop GPUs. I don't have any info to support either case.
 
Last edited:

exar333

Diamond Member
Feb 7, 2004
8,518
8
91
You are only looking at a short-term perspective of the consumer winning on price/performance temporarily. If the firm cannot sustain this strategy financially or gain substantial market share using it, it's a failure long-term.

This strategy ONLY works until your competitor catches up or beats you in perf/watt at which point their 550-600mm2 die chips and 400mm2 chips pummel your 325-350mm2 chips into the ground. Secondly, selling small- to mid-die chips against superior large die competing products means lower average selling prices, lower margins, and worse brand equity.



I am well aware of all the NV and AMD launches in the last 15 years. It was only a resounding success for the consumers but for the GPU market as a whole overall, it was a horrible outcome because it meant once NV caught up and exceeded AMD in perf/mm2 and perf/watt, they could charge flagship prices for mid-range chips = this all started with Kepler. Kepler proved AMD's small-die strategy was not sustainable long-term.



There is no way selling HD4870 for $299, HD5870/6970 for $369-379 produced high margins for AMD. NV sold 256-bit small die chips for $500 with GTX680, and $550 for GTX980. Those are high margins mid-range chips. Also, the small die strategy has no contingency plan. If you are late with a mid-range die, you lose the ENTIRE market until next generation. With the large die strategy, even if your large die chip is late, you still have solid low-end and mid-range chips.

And you seem to not get this point at all - the main reason AMD hang on to market share in the HD4000-6000 generations was because NV had much more inefficient perf/watt and perf/mm2 architectures. Once NV fixed that, it's game over for that AMD strategy.

-> If your large die chip is 500-600mm2+, then you can build mid-range 350-400mm2 chips. If AMD's flagship chip is not even 400mm2, how do they expect to even compete in the highest segments? They won't.

HD4000 vs. GTX280 generation

HD4870 = 256mm2
HD4890 = 282mm2
vs.
GTX280 = 576mm2

HD4870 was only as fast as NV's 3rd top card - GTX260 216.

AMD's small die strategy didn't work in terms of profits or large market share gains. They started off with 38% market share vs. 61% for NV, but market share collapsed as low as 31% and went up as high as 43.8%. By the beginning of HD5800 generation, AMD was at about 40% to 60% for AMD, or barely a 2% increase after releasing a $199 4850 and $299 4870. Ouch. :hmm:

HD5000 vs. GTX400 generation


HD5870 = 334mm2
vs.
GTX480 = 526mm2

HD5870 did beat GTX470 initially but with time, lack of geometry performance meant a GTX470 OC would crush a 5870. AMD started off that generation at about 40% and by the time Barts (6800) and Cayman (HD6900) came out, AMD had 38.8% market share. OK so going from HD4870 with 38% market share all the way until the end of HD5870 generation, AMD was still at just 38.8% market share. Are you starting to see a trend now? And all this time AMD is not making a lot of $ on GPUs.

During this generation, HD5850 significantly undercut NV's 470 and 5870 the GTX480. It took 9 months for GTX460 to come out to match a 5850 but AMD still lost market share. :thumbsdown: AMD basically beat NV by 6 months with GTX470/480 and 9 months with GTX460 and below. This huge head-start, and amazing price/performance didn't even make a dent in the market share by the end of the Fermi vs. HD5000/6000 generation; and neither did this make AMD a lot of money!

HD6000 vs. GTX500 generation

HD6970 = 389mm2
vs.
GTX580 = 520mm2

This is where AMD is starting to realize you can't have your cake and eat it too. You can't magically perform next generation DX11 effects like geometry/tessellation and global illumination for "free". You need to grow your die size with more functional units of you'll get run over.

Shockingly, despite HD6950 selling for $299 and unlocking to an HD6970, basically undercutting GTX570 and having the option of effectively almost buying dual 6950s for the price of a single 580 1.5GB, AMD doesn't gain market share.

At the start of that generation AMD was at 38.8% and by HD7970 launch, AMD is at 37.8%, another 1% loss of market share. :thumbsdown:

All this time a gamer could use HD4000/5000/6000 for mining which made those upgrades all free.

HD7000 vs. GTX600 generation

HD7970/7970Ghz = 352mm2
vs.
GTX680 = 294mm2

Now, the turning point. Finally what many have said would become true did. Once the competitor catches up or beat AMD on perf/watt and perf/mm2, AMD's small die strategy would come crashing down. AMD even beat GTX680 to launch by 2.5 months, and even after GTX680 came out, AMD recaptured the performance crown as of June 2012 and held on to it until the Titan. HD7770/7850/7870/7950 all beat every single Fermi GTX500 chip in perf/watt and VRAM and performance but still failed. NV easily coasted until GTX660Ti and below launched 6+ months later.

AMD started this generation with a market share of 37.8%, and it went down as low at 33% (!). By the time R9 290/290X launched in Q4 2013, AMD's market share fell to just 35%, or another 2.8% fall in market share from the end of the HD6970 generation. :thumbsdown:

Recall the green market share number of 38% - that's where AMD started off with HD4870, and by the end of HD7970Ghz generation, they ended with just 35%. Therefore, the small die strategy absolutely failed what it set out today -- it failed to make $ and it failed to gain market share. :\

You pretty much know today the market share is close to 70%/30%, which means AMD lost even more since R9 290/290X.

Data:





This is not reality. It only had the market cornered from a price/performance point of view but since most of the market didn't respond to price/performance, the strategy didn't work. I compiled market share numbers for you but I won't spend time compiling the financial because you know AMD's GPU division performed horribly for the last 4 years.



A lot of people must have paid extra for NV because AMD's price/performance & small die strategy did not result in market share conversion long-term. Those are the facts. NV's large monolithic die strategy prevailed, despite NV being late with both Fermi and Kepler.



NV's strategy is nothing like AMD's. AMD's focus is on GPGPU compute, OpenCL and small to mid-size die chips, using dual-chips for uber flagships. HD5870, 6970 and 7970 all had a massive amount of double precision compute, not just gaming performance. NV has tried all it can to make efficient gaming cards and has continued to neuter compute performance to maximize gaming efficiency. Maxwell just barely caught up to GCN in some areas of compute and that's only because it's 1 full generation ahead of GCN, but NV's DP ratio is now usually 16:1, 24:1 or even 32:1 vs. SP. Even their Kepler products had a fixed function compute unit which used way less power.

NV, meanwhile, only continues to grow the die size of its flagship chips, and it's now able to use a mid-range chip as the "flagship" for 2 consecutive generations because it has surpassed AMD in perf/mm2 and perf/watt. GK104 and GM204 are priced at $500 and $550 because gamers keep paying those prices for next-gen mid-range chips and because NV's efficiency lead lets it get away with it.



This was not the original AMD small die strategy. NV simply releases the smaller-die chips first as flagships because of how far ahead it has gotten in efficiency. However, NV still has its high-end GK110/GM200 series of cards sitting on top of the mid-range. AMD's mid-range strategy never had any large-die chip sitting above; instead they used a dual-chip card to fill in that niche.



These 2 sentences are almost contradictory. The whole reason AMD should go back to large monolithic dies (if financially possible) is exactly that you can't use 2 smaller dies to fight in the uber-high-end market segment. Not only would AMD improve its brand equity, it would be able to raise prices, increase profit margins, and provide competition so that we are not being sold $550 mid-range next-gen chips. That's why a 500mm2+ die, if true, is the best thing to happen to the GPU industry. It would mean a return to the old ATI-style mentality of wanting the single-chip performance crown.

Also, another consequence of having large-die chips, besides strong pricing competition, is that we as gamers get a lot more performance. Imagine taking a perf/watt-efficient Maxwell, Volta or Pascal architecture and extrapolating that performance to a 550-600mm2 chip. If both AMD and NV are on board for superior perf/watt and perf/mm2, we will get huge performance increases in the next 5 years, and if AMD executes on time, hopefully next-gen mid-range chips won't be sold at $500-550. Although I think it's very likely that both AMD and NV will bifurcate each generation from now on, since yields on newer nodes are going to get worse and, financially, it's more beneficial for both firms to have 2 flagships every 2 years.
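Here's a back-of-envelope sketch of that extrapolation, using the GK104-class 294mm2 figure quoted above as the baseline; the 0.85 "scaling efficiency" factor is purely my assumption, since big dies give up some clock/power headroom and spend area on uncore:

```python
def scaled_perf(base_mm2: float, big_mm2: float, efficiency: float = 0.85) -> float:
    """Relative performance of a bigger die on the same architecture/node,
    assuming perf scales with area times an efficiency discount (my assumption)."""
    return (big_mm2 / base_mm2) * efficiency

base = 294  # GK104-class mid-range die (mm2, as quoted above)
for big in (550, 600):
    print(f"{big}mm2 part: ~{scaled_perf(base, big):.2f}x a {base}mm2 part")
# -> roughly 1.6-1.7x a mid-range part, which is why a 500mm2+ halo chip
#    matters so much for the high end.
```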

Maybe it's going to become this new trend:

1st revision on a new node = 350-400mm2
2nd revision on that node = 500mm2
3rd revision on that node = 550-600mm2

NV is more likely to be gunning for a 500mm2 die earlier on, since it has a proven track record of great execution with large monolithic dies, while AMD is only now supposedly making its first such chip!



Based on all the information I've read so far, I am inclined to say yes. I've never read any rumour of the R9 390X being anything other than a 4GB HBM product.



Right now we don't have any info to say whether AMD will use HBM for laptops this round. I'd rather say I just don't know than guess. My point was that I didn't believe AMD would have nothing better than the R9 290M/295M Tonga, but I don't know whether they will continue with GDDR5 or move to HBM for 2015 high-end laptop GPUs. I don't have any info to support either case.

Fantastic post RS.

This is one of the best-written, well thought-out posts I have seen here on AT. I agree with your analysis and I think you are spot-on.

I don't care what color (red or green) but I have a spot waiting in my build for a big-die performer. Maybe 2 spots once we hit 14nm, since 20nm is a bust.

Edit: Leaning 'Red' because 'Green' has been pissing me off lately... 3.5GB debacle, 9xxM series OC lock-down, to name a few.
 

Erenhardt

Diamond Member
Dec 1, 2012
3,251
105
101
Fiji is coming with a Cooler Master heatsink.
Does anyone know of any good Cooler Master heatsinks for GPUs?
Does heatsink mean air (non-water) cooling?
 

Headfoot

Diamond Member
Feb 28, 2008
4,444
641
126
I'm surprised at just how little market share changed when Bitcoin mining was big and AMD was ruling it. A few percent at most; from the mindshare they had, I would've thought they went up a lot more than that...
 

96Firebird

Diamond Member
Nov 8, 2010
5,714
316
126
Fiji is coming with a Cooler Master heatsink.
Does anyone know of any good Cooler Master heatsinks for GPUs?
Does heatsink mean air (non-water) cooling?

I don't think they've made their way into the GPU sector yet, but they make some decent CPU coolers. The term heatsink is typically used for air cooling, but a radiator is basically a heatsink that cools water.

Could they be using their newly designed heat sink/fan combo?
 

Erenhardt

Diamond Member
Dec 1, 2012
3,251
105
101
Nahh... I don't think so.

They may want to go with Cooler Master for the brand name, for cheesy marketing purposes.

Maybe they listened. Maybe there will be two flavors: a water cooler from Asetek and an air cooler from Cooler Master.

Or maybe it's one hybrid cooler? CM does the board heatsink, and Asetek does the water cooling for the core?
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
65-86% depending on how you look at it. I don't think R9 390X will match GM200 in perf/watt but I personally don't care about 50-100W of extra power when discussing 400W+ overclocked i5/i7 rigs. Some people on our forum seem to care though.

It should matter to enthusiasts to some extent, if for nothing else other than HEADROOM. Hawaii OCs like crap and GK110 overclocks like a champ. If Fiji comes along with water cooling standard and uses 265 watts under load, neato, you've got a quiet 60 C card taking up 4 spots that can't fit into many SFX cases, and I can almost guarantee it won't have much headroom left. AMD will already be squeezing out as much as they feasibly can with regards to yields.

Also, with regards to heat output, I remember back when Fermi hit the market and EVERYONE laughed off Fermi's temps and power draw, calling it a space heater. Being inexperienced at the time with really high-power cards, I dismissed those arguments as well. But I was clearly wrong. My Twin Frozr GTX 465, despite running very cool and quiet, put off quite a bit of heat after an hour or two of gaming in a normal-sized bedroom. Power draw matters, no matter how much you say it doesn't.
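For a rough sense of scale, here's a quick sketch of where that heat goes; the room size is my assumption, and the model ignores walls/furniture/ventilation, so it's an upper bound, not what your thermostat would actually read:

```python
# Upper-bound estimate of how much a GPU warms a room's air.
AIR_DENSITY = 1.2   # kg/m^3, approximate at room temperature
AIR_CP = 1005.0     # J/(kg*K), specific heat of air

def air_temp_rise(power_w: float, hours: float, room_m3: float = 30.0) -> float:
    """Temperature rise if ALL the GPU's heat stayed in the room's air
    (assumed ~12 m^2 bedroom with 2.5 m ceilings -> ~30 m^3)."""
    energy_j = power_w * hours * 3600
    air_mass_kg = room_m3 * AIR_DENSITY
    return energy_j / (air_mass_kg * AIR_CP)

# A 265W card (the figure discussed above) after a 2-hour session:
print(f"~{air_temp_rise(265, 2):.0f} K upper bound on air temperature rise")
# -> ~53 K in a perfectly sealed room. Real rooms shed most of that heat,
#    but it shows why 250W+ cards noticeably warm a bedroom.
```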
 

DooKey

Golden Member
Nov 9, 2005
1,811
458
136
It should matter to enthusiasts to some extent, if for nothing else other than HEADROOM. Hawaii OCs like crap and GK110 overclocks like a champ. If Fiji comes along with water cooling standard and uses 265 watts under load, neato, you've got a quiet 60 C card taking up 4 spots that can't fit into many SFX cases, and I can almost guarantee it won't have much headroom left. AMD will already be squeezing out as much as they feasibly can with regards to yields.

It does matter to enthusiasts who run their cards at 100% load all the time. I do GPU crunching with BOINC, and when I was running 290s the heat from those cards would make the room hot; I don't even want to discuss the heat from bitcoin/scrypt mining. With my 980s the room barely warms up. The difference is significant in this use case.
 