[WCCF] AMD Radeon R9 390X Pictured


raghu78

Diamond Member
Aug 23, 2012
4,093
1,475
136
Also, I do believe that 14nm/16nm + new architectures + HBM2 will provide a bigger boost in performance than this generation will. Since HD7970/R9 200 series have depreciated so much at this point, it's not as if selling those cards now is strategic anymore -- that point is long gone.

I will take a bet that perf increase from R9 290X to R9 390X (> 50%) will be greater than from R9 390X to R9 490X (< 40%). Same for GP104 vs GM200 Titan-X compared to Titan-X vs GTX 780 Ti. People are going overboard here with 16/14nm FINFET predictions. This is the most difficult node ever, both for Intel and the foundries. Heck, even the mighty Intel had serious challenges ramping 14nm FINFET.

However, if you buy a $600-800 28nm card today such as GM200, that card will likely lose $200-300 in value in 24 months. Upgrade to two of those and you are looking at losing $400-600 in resale value, because 14nm/16nm GPUs will level this gen's flagships as far as price/performance goes - you can count on it (e.g., recall that the $330 970 matched the ~$700 780 Ti just 10 months after launch. OUCH). Therefore, if money is a factor and someone missed the 'perfect' time to resell his/her old card, it isn't such a bad idea to skip a generation or two if that gamer is OK with turning down some settings and doesn't need to run everything on Ultra in every game.
I am going to dispute this. AMD and Nvidia have been steadily raising prices for the last few generations.

HD 5870 - USD 380
HD 6970 - USD 370
HD 7970 - USD 500
R9 290X - USD 550
R9 390X - ? My guess is at least USD 650-700, as I believe the R9 390X will legitimately compete with GM200 for the GPU crown and will be as fast or faster.

GTX 480 - USD 500
GTX 580 - USD 500
GTX 680 - USD 500
GTX Titan - USD 1000
GTX 980 - USD 550
GTX Titan-X - USD 1000

The 16/14nm FINFET processes are much costlier than 28nm, with much lower yields. HBM2 will only be ramping in Q2 2016, and that should be costly as well.

http://www.kitguru.net/wp-content/uploads/2015/03/sk_hynix_tsv_roadmap_hbm.png

It will take further time to see HBM2 in end products (lead time from Hynix to AMD/Nvidia) so the earliest I would say is Q3 2016. Remember HBM was in high volume manufacturing in early Q1 2015 and we are going to see the GPU products roughly 5 months later. 2.5D GPU manufacturing lead times are longer than traditional GPU manufacturing lead times.

Furthermore, Nvidia has zero experience with 2.5D GPU manufacturing, so expect its fair share of challenges there too. And you still believe that 16/14nm FINFET GPUs will level this gen's flagships as far as price/performance goes? Well, I would say you are too optimistic. You can expect a perf improvement something like the GTX 580 -> GTX 680 transition. Anything more is unrealistic.

http://www.hardwarecanucks.com/foru...616-nvidia-geforce-gtx-680-2gb-review-29.html

As for prices of the first FINFET flagship GPUs, do not be surprised if they move upward to USD 650-700. So I would say something similar to the GTX 580 to GTX 680 improvement in price/perf (roughly 35% better). I am going to go out on a limb here and say that we should prepare ourselves for a 250-300 sq mm mid-range FINFET GPU with 8 or 16 GB of HBM2 at USD 650-700. By the way, I do not expect the first FINFET flagship GPUs to be more than 40% faster than the R9 390X and Titan-X. I do not expect a > 500 sq mm 16/14nm FINFET GPU before Q3 2017. We will see true 2x leaps over the R9 390X and Titan-X sometime in 2017, more likely mid-Q3 2017.
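
To put rough numbers on that, here is a quick back-of-envelope sketch in Python. The ~35% figure for the 580 -> 680 transition is from the review linked above; the FINFET price and performance numbers are only my guesses from this post, not confirmed specs.

Code:
# Rough price/performance math behind the 580 -> 680 comparison and the FINFET guess.
# All FINFET numbers here are guesses from this post, not confirmed prices or specs.

def perf_per_dollar(relative_perf, price_usd):
    return relative_perf / price_usd

# GTX 580 -> GTX 680: ~35% more performance at the same $499 MSRP
gain_680 = perf_per_dollar(1.35, 499) / perf_per_dollar(1.00, 499) - 1
print(f"580 -> 680: {gain_680:.0%} better perf per dollar")            # ~35%

# Guessed first FINFET flagship: ~40% faster than a $650 28nm flagship, priced at $700
gain_finfet = perf_per_dollar(1.40, 700) / perf_per_dollar(1.00, 650) - 1
print(f"28nm flagship -> FINFET flagship: {gain_finfet:.0%} better perf per dollar")  # ~30%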

I think people are forgetting that Intel has been selling tiny 82 sq mm 14nm FINFET Core M dies for the last 6 months and is only now beginning to ship > 130 sq mm Broadwell quad-core dies in volume. The Intel Xeon chips with die sizes > 300 sq mm are not likely to ship before Q1 2016. If that's how difficult it is for Intel, which has 3+ years of experience with high-volume manufacturing on 22nm FINFET and is on its 2nd-gen 14nm FINFET process, what is the hope for TSMC and Samsung/GF, who are only now beginning to ramp their first FINFET products ever - and those are mobile SOCs at die sizes < 100 sq mm and < 5W TDP?

I repeat: the majority of GPU volume for the full year 2016 will be on 28nm, and I expect the ratio of 28nm to 16/14nm GPUs from both Nvidia and AMD to be roughly 2:1. I do not expect to see a 16/14nm FINFET GPU before Q3 2016, so for the first half of the year the 16/14nm GPU volume will be zero. The ramp of 16/14nm GPUs will start in Q3 and pick up speed in Q4 2016 and H1 2017.

I also predict AMD will have a smoother transition to 16/14nm FINFET GPUs with HBM2 due to their experience with 2.5D manufacturing on the R9 390 series and their role in co-inventing HBM with Hynix. I also expect AMD's 2nd-gen HBM memory controller to be better than Nvidia's first-gen HBM memory controller. Remember, the HD 5870 could run GDDR5 at 1200 MHz while the GTX 480 could run it at only 900 MHz. AMD had experience with the HD 4870, which ran GDDR5 at 900 MHz, and was thus able to improve on that and run at faster speeds than Nvidia.

In summary, do not have unrealistic expectations for FINFET GPUs in 2016. There will be huge leaps in power efficiency, but given the yield difficulties I expect AMD/Nvidia to be conservative with die sizes (< 300 sq mm). I also foresee a bump in prices to USD 650-700, especially if there are yield challenges with FINFET, HBM2 and 2.5D manufacturing. Furthermore, AMD/Nvidia are waiting in line behind Apple and Qualcomm for FINFET wafer allocation, so the 16/14nm GPU volume in 2016 is going to be much lower than the 28nm GPU volume.
 

Qwertilot

Golden Member
Nov 28, 2013
1,604
257
126
We'll see.

I definitely don't expect NV to rush to get their big chips out, although they might sneak some small things out quite early.

Slightly less sure about AMD, actually. If they aren't doing a big refresh in a few weeks - which they still might, of course! - they'll be fairly desperate to get to 14nm for as much as possible, as soon as it's even remotely plausible.

Even if it's moderately expensive for them to do it, it'll be better than continuing to give up on those markets. It would just come down to when it's technically possible, really.
 

Majcric

Golden Member
May 3, 2011
1,377
40
91
Do you guys seriously think next-gen GPUs, on a brand-new, unproven node, are going to sail smoothly to a timely launch with good volume and prices?

It could be a very long wait.

^^This. And why I will be looking to buy GM200 this summer. Or AMD's flagship, if it has at least 8GB of VRAM.
 

tolis626

Senior member
Aug 25, 2013
399
0
76
Even though I will probably be buying the 4GB version of the 390X (or even the 390, depending on price), I really want AMD to bring an 8GB version to the market. I think it's not necessary, but I'd love to see what the people bashing them now will have to say. I suppose Nvidia guys will start praising good ol' GDDR5 and power efficiency, even if we're talking about < 50W differences.
 

imaheadcase

Diamond Member
May 9, 2005
3,850
7
76
I guess people know by now that it is just a render, not a photograph, but just saying it in case you did not know.
 

boozzer

Golden Member
Jan 12, 2012
1,549
18
81
Do you guys seriously think next-gen GPUs, on a brand-new, unproven node, are going to sail smoothly to a timely launch with good volume and prices?

It could be a very long wait.
Then wait for whatever fits your wants/needs. It just means any 4GB GPU is not right for you. It doesn't mean a GPU that doesn't fit your wants = fail ^_^
 

SlowSpyder

Lifer
Jan 12, 2005
17,305
1,001
126
12 pages of replies about some fake photos??!!! Really??!!!
Much wow, lulz lives.

It is very partisan here, Nvidia / AMD and Intel / AMD in the CPU section. Many AMD threads go for hundreds of replies for some reason. This is par for the course when AMD is launching new hardware.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
I will take a bet that perf increase from R9 290X to R9 390X (> 50%) will be greater than from R9 390X to R9 490X (< 40%). Same for GP104 vs GM200 Titan-X compared to Titan-X vs GTX 780 Ti.

You would be right if the R9 490X is a small 14nm/16nm chip, i.e., designed in a similar way to GK104/GM204 (680/980). My point was more that a 250-275W R9 490X would gain more over the R9 390X than the R9 390X will over the 290X. Of course, if AMD focuses on power usage, then there is no way a 180-190W R9 490X will be > 50% faster than the R9 390X.

As far as your comparison of GP204 vs. GM200 goes, that's next-gen mid-range vs. last-gen flagship, but I was referring to the entire generation, as in GP200/210 vs. GM200.

I am going to dispute this. AMD and Nvidia have been steadily raising prices for the last few generations.

HD 5870 - USD 380
HD 6970 - USD 370
HD 7970 - USD 500
R9 290X - USD 550
R9 390X - ? My guess is at least USD 650-700, as I believe the R9 390X will legitimately compete with GM200 for the GPU crown and will be as fast or faster.

GTX 480 - USD 500
GTX 580 - USD 500
GTX 680 - USD 500
GTX Titan - USD 1000
GTX 980 - USD 550
GTX Titan-X - USD 1000

The 16/14nm FINFET processes are much costlier than 28nm, with much lower yields. HBM2 will only be ramping in Q2 2016, and that should be costly as well.

But the new architectures in Pascal + HBM will ensure that even if the R9 390X / GM200 6GB are $650-750 cards, a $450 Pascal will beat them. Remember, the $330 970 matched the $700 780 Ti, while the $400 R9 290 was at least as fast as a $650 780. Newer cards, and especially a new generation on a new node, bring massive improvements in price/performance relative to older cards.

That's kind of my point: why overspend $650-700+ on an R9 390X 8GB version when one could just grab a $500 R9 390 4GB, get 90% of the performance, resell it in 18-24 months, and use the $150-200 saved to get a far better next-gen card? Think about HD5850 vs. 5870, 6950 vs. 6970, 7950 vs. 7970, or 290 vs. 290X as reference points here. The highest-end AMD cards are always worse value than the 2nd-tier AMD card. R9 390X 8GB vs. R9 390 4GB should be no exception in this case. For those gamers on 1080P-1440P monitors only buying one of these cards, if the R9 390 OC is within 7-10% of the R9 390X OC, the 8GB isn't a good selling point imo.

Historically speaking, 2nd tier AMD cards when overclocked are 90-95% as fast as the flagship. In the case of an unlocked 6950 vs. 6970, the performance was 100% identical.
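
To put rough numbers on it, here is a quick sketch using the rumoured figures in this thread - a hypothetical $500 R9 390 at ~90% of a $650-700 R9 390X. These are assumed figures, not confirmed pricing.

Code:
# Perf-per-dollar comparison between the rumoured flagship and the second-tier card.
# The prices and the 90% performance ratio are this thread's guesses, not confirmed.

def perf_per_dollar(relative_perf, price_usd):
    return relative_perf / price_usd

r9_390x = perf_per_dollar(1.00, 675)   # midpoint of the guessed $650-700
r9_390  = perf_per_dollar(0.90, 500)   # assumed ~90% of the flagship's performance

print(f"R9 390 delivers {r9_390 / r9_390x - 1:.1%} more performance per dollar")  # ~21.5%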

It will take further time to see HBM2 in end products (lead time from Hynix to AMD/Nvidia) so the earliest I would say is Q3 2016.

I don't disagree, but in the context of spending a rumoured $700 (or $1,400 for a pair) on the R9 390X, Q3 2016 isn't that far away. With lower prices of VRAM and HBM2, 6-8GB cards should become far more affordable. I wouldn't be surprised if Pascal GP204 has options with 6-8GB of HBM2.

And you still believe that 16/14nm FINFET GPUs will level this gen's flagships as far as price/performance goes? Well, I would say you are too optimistic. You can expect a perf improvement something like the GTX 580 -> GTX 680 transition.

When comparing MSRP vs. MSRP, yes. The GTX 680 was 35%+ faster than the GTX 580 for $499. So even though NV did raise the GTX 560 Ti's price point to $500 with the 680, the 680 was still a better bang for the buck than the $380-500 580 1.5-3GB cards. Not to mention HD 7970 OC destroyed 580 OC by 40-80%.

Think about it: the 970/980 are barely faster than the R9 290/290X, but compared to the GTX 670/680 it's a 60-70% increase in performance. Their Pascal successors, priced at $350-400 and $500-550, should be 60-70% faster than the GTX 970/980. That puts them well above Titan X performance. I wouldn't be surprised if the GTX 970's GP204 successor were 95% as fast as the Titan X.

Sweclockers has Titan X 59% faster than a 970 at 1440P.

TPU has it 49% faster at 1440P.

Both AMD and NV should deliver cards priced at $399-449 that are as fast as the Titan X/R9 390X by Q4 2016.
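
As a rough sanity check on that claim, here's a quick sketch using the 1440P numbers above; the 60-70% uplift for the successors is my assumption based on past generational gains, not anything announced.

Code:
# Would a hypothetical 970/980 successor that is 60-70% faster clear Titan X performance?
# Titan X numbers are the 1440P figures cited above; the uplift range is an assumption.

gtx970 = 1.0
titan_x_vs_970 = [1.49, 1.59]                         # TPU and Sweclockers, 1440P, vs. a 970
successor_vs_970 = [gtx970 * 1.60, gtx970 * 1.70]     # assumed 60-70% uplift over the 970

print(all(s > t for s in successor_vs_970 for t in titan_x_vs_970))   # True: even the low end clears it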

I am going to go out on a limb here and say that we should prepare ourselves for a 250-300 sq mm mid-range FINFET GPU with 8 or 16 GB of HBM2 at USD 650-700.

I think you are way too conservative. Are you suggesting NV/AMD will raise next-gen mid-range prices to $650-700? In other words, you are saying the successor to the 980 will be $650+? Even if that's true, the successor to the 970 should be priced well below $500 and provide Titan X/R9 390X-tying or -beating performance, with far better power usage and newer features. That's how GPU generations have always worked.
 

JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
I think many of us are going to be in for a disappointment when we expect an immature 16nm/14nm FinFET process to beat a mature 28nm process by a massive margin.

It's important to remember that 28nm didn't come into its own all at once. Nvidia could never have created a massive chip like the 601 sq. mm. GM200 back in 2012 when the process debuted. Back then, their biggest chip was the 294 sq. mm. GK104, with 3.5 billion transistors. Likewise, AMD's biggest chip at the time (and the first chip fabbed on 28nm) was Tahiti, with 352 sq. mm. and 4.3 billion transistors.

High-density libraries have helped somewhat. Both Tonga and Maxwell have higher densities than their ancestors, despite being on the same process node; Tonga is nearly the same size as Tahiti, but has 700 million more transistors. Let's assume that these techniques will continue to work on 16nm/14nm FinFET (I don't know if this is true or not). TSMC claims that 16nm FF SHP will have double the transistor density of 28nm SHP. So if we assume AMD's first 14nm FinFET GPU will be about the same size as Tahiti and Tonga are, we can assume it will probably have about 10 billion transistors. In comparison, the GM200 (Titan X GPU) has 8 billion transistors. On the other hand, AMD doesn't seem to use their transistors as efficiently as Nvidia. Hawaii with its 6.2 billion transistors gets beaten solidly by GM204 with its 5.2 billion transistors in almost anything except Double Precision. Then there's the question of how HBM will affect this.
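
Here is that back-of-envelope estimate written out in Python; the 2x density figure is TSMC's claim, and the ~360 sq mm die size is just the assumption that the first FinFET GPUs stay Tahiti/Tonga-sized.

Code:
# Back-of-envelope transistor estimate for a first-gen FinFET GPU, assuming TSMC's
# claimed ~2x density over 28nm and a Tahiti/Tonga-class die. The baseline figures
# (Tonga: ~5.0B transistors on a roughly Tahiti-sized ~360 mm^2 die) are rough public numbers.

def transistor_estimate(die_mm2, base_transistors_b, base_die_mm2, density_scale):
    """Scale a known chip's transistor density to a new node and die size (result in billions)."""
    base_density = base_transistors_b / base_die_mm2
    return die_mm2 * base_density * density_scale

estimate = transistor_estimate(die_mm2=360, base_transistors_b=5.0,
                               base_die_mm2=360, density_scale=2.0)
print(f"~{estimate:.0f}B transistors")   # ~10B, versus 8B for GM200 (Titan X)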

16nm/14nm FinFET will doubtless be an improvement but it won't suddenly let us get better-than-Titan-X performance at 100W and $250 or anything crazy like that.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Hawaii with its 6.2 billion transistors gets beaten solidly by GM204 with its 5.2 billion transistors in almost anything except Double Precision.

That comparison is strange because you aren't comparing like-for-like GPU architectures from similar generations. Hawaii's direct competitor is GK110, and Hawaii handily beats it when you consider perf/mm2 and perf/transistor, including DP performance. A GM204 comparison would only be valid against AMD's R9 300 series, since only then would you be comparing like-for-like GPU generations.

Then there's the question of how HBM will affect this.

16nm/14nm FinFET will doubtless be an improvement but it won't suddenly let us get better-than-Titan-X performance at 100W and $250 or anything crazy like that.

It won't be that dramatic, but as I already said, next-gen mid-range cards made on a new node tend to be as fast as or faster than last gen's high end.

From the NV side:

GeForce 3 Ti 500 < GeForce 4 Ti 4200
GeForce 4600/4800 < GeForce 5700U
GeForce 5900U/5950U < 6600GT
GeForce 6800 UE < 7800GT/7900GT
GeForce 7900GTX < 8800GT (actually with time even 8600GTS beat the 7900GTX)
GeForce 8800GTX Ultra < 9800GTX+/GTS250 (or perhaps you might consider GTX260 as the mid-range)
GeForce GTX280/285 < GTX460 1GB
GeForce 480/580 < GTX680 (in fact 670 or even 660Ti)
GeForce Titan / GTX780Ti < GTX980 (and even 970 is beating it in modern titles)

GeForce Titan X / GM200 6GB < GTX980 successor at $500-550 (and we should see a 970 successor come extremely close too).

Therefore, while we won't get a $250 card with 30% more performance vs. the Titan X, we should have 14nm HBM2 $400-450 cards that beat Titan X and R9 390X easily.

Also, based on AMD's trends, next gen mid-range card should tie the last gen flagship ($500 HD7970Ghz --> $299 R9 280X). That means we should have a card priced at $299-399 on 14nm/16nm with HBM2 that will match the $550-700 R9 390X.

I think you guys are also not accounting for the possibility that AMD could finally move on to a post-GCN architecture in late 2016/early 2017:

AMD may roll-out GCN successor in 2016

"Graphics architectures from ATI Technologies and Advanced Micro Devices live active life for about four years and sustain three or four iterations that improve performance and bring in new features. AMD&#8217;s graphics core next (GCN) architecture was introduced three years ago and next year it will be time for AMD to announce its successor. Apparently, it looks like this is exactly what the company wants to do. Next year AMD may reveal details about its new architecture and in 2016 it is expected to ramp it up.

At present no details about post-GCN graphics architecture are known. However, expect it to support all the features that DirectX 12 application programming interface brings and some additional capabilities. Development of the post-GCN architecture started around 2010 &#8211; 2011, when Eric Demers was the chief technology officer of AMD&#8217;s graphics products group. After Mr. Demers left AMD in early 2012, AMD hired John Gustafson, a renowned expert in parallel and high-performance computing, who probably had an influence on the development of the architecture. Mr. Gustafson left AMD in 2013, so Raja Kodouri (who returned to AMD from Apple) took the development from where his predecessor left. Therefore, the new architecture will be inspired by three great graphics engineers. AMD&#8217;s 2016 graphics processing units are expected to be made using 14nm or 16nm FinFET process technologies at GlobalFoundries and/or Taiwan Semiconductor Manufacturing Co."


Also, NV already announced that Pascal's HBM2 will go to 800GB/sec-1TB/sec, while performance/watt should double. Pascal and AMD's post-GCN architecture, coupled with HBM2 and 14nm/16nm nodes, could be the biggest leap in GPU graphics in one generation since the olden days, when GPUs improved 80-100% every gen. The last time NV introduced a new architecture + a new node, we got a 2X performance increase with 580 --> 780 Ti.
 

raghu78

Diamond Member
Aug 23, 2012
4,093
1,475
136
You would be right if the R9 490X is a small 14nm/16nm chip, i.e., designed in a similar way to GK104/GM204 (680/980). My point was more that a 250-275W R9 490X would gain more over the R9 390X than the R9 390X will over the 290X. Of course, if AMD focuses on power usage, then there is no way a 180-190W R9 490X will be > 50% faster than the R9 390X.

But the point of the matter is that the FINFET nodes are a huge challenge for everyone, so do not expect to see a > 500 sq mm FINFET GPU in 2016. The earliest you can see such a chip, provided TSMC and Samsung/GF solve the yield problems, is sometime in Q3 2017. So it's going to be 2+ years before you see a true big-die FINFET GPU with 2x the perf of the R9 390X or Titan-X.

Let's look at it this way. A dumb shrink of a 550 sq mm R9 390X GPU would give you a 300 sq mm GPU at roughly half the power. Trying to crank clock speeds to gain perf on a very immature FINFET process node is a recipe for disaster. So you need significant architectural improvements, with better perf/SP, perf/watt and perf/sq mm, to get a perf increase of 30-35% while staying around 300 sq mm.

By the way, there is always an efficient frequency/voltage point on the curve for a certain process node, and that too changes with the maturity of the process. That's the reason Maxwell clocked better than Kepler; TSMC's 28nm process maturity had a role to play in that. So architectural efficiency and die size are the key drivers of performance, along with the process node (as it allows more transistors to be crammed in for less power) and its maturity. Expecting a 300 sq mm FINFET R9 490X GPU to beat the R9 390X by > 50% is completely unrealistic.
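
To make the "dumb shrink" point concrete, here is the rough math. The 2x density and roughly-half-power figures are the usual foundry claims for 16/14nm FINFET, not measured numbers; the 550 sq mm die size is from this post and the ~300 W board power is just an assumed round number for a 28nm flagship.

Code:
# "Dumb shrink" estimate: the same design ported to 16/14nm FINFET with the commonly
# claimed ~2x density and ~half the power at the same clocks. Claimed, not measured.

DENSITY_SCALE = 2.0    # claimed transistor-density gain over 28nm
POWER_SCALE = 0.5      # claimed power at iso-performance

def dumb_shrink(die_mm2, board_power_w):
    """Same chip on the new node: smaller and cooler, but no faster."""
    return die_mm2 / DENSITY_SCALE, board_power_w * POWER_SCALE

die, power = dumb_shrink(550, 300)   # rumoured ~550 sq mm flagship, assumed ~300 W
print(f"~{die:.0f} sq mm at ~{power:.0f} W, with zero performance gain")
# Getting ~30-35% more performance while staying near 300 sq mm therefore has to come
# from architecture (perf/SP, perf/W, perf/sq mm), not from cranking clocks.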

But the new architectures in Pascal + HBM will ensure that even if the R9 390X / GM200 6GB are $650-750 cards, a $450 Pascal will beat them. Remember, the $330 970 matched the $700 780 Ti, while the $400 R9 290 was at least as fast as a $650 780. Newer cards, and especially a new generation on a new node, bring massive improvements in price/performance relative to older cards.
Yeah, it would, but what's so special about that? The GTX 670 beat the GTX 580, and that's the same thing we will see with a cut-down GP204 or PK204 beating the GTX Titan-X. The point is these cards are more than 12-15 months away. I expect AMD to be out with their FINFET GPUs before Nvidia because of their experience with 2.5D manufacturing and HBM. If Nvidia launches at the same time, it's a huge achievement, as they have zero experience with HBM and 2.5D GPU manufacturing.

That's kind of my point: why overspend $650-700+ on an R9 390X 8GB version when one could just grab a $500 R9 390 4GB, get 90% of the performance, resell it in 18-24 months, and use the $150-200 saved to get a far better next-gen card? Think about HD5850 vs. 5870, 6950 vs. 6970, 7950 vs. 7970, or 290 vs. 290X as reference points here. The highest-end AMD cards are always worse value than the 2nd-tier AMD card.
That's a well-understood fact. The price/perf of the salvage part is always better. If you want to upgrade often and save money, do not buy the flagship.

R9 390X 8GB vs. R9 390 4GB should be no exception in this case. For those gamers on 1080P-1440P monitors only buying one of these cards, if the R9 390 OC is within 7-10% of the R9 390X OC, the 8GB isn't a good selling point imo.
I would wait until the official launch. I don't see anything lower than 8GB on the R9 390 and R9 390X. I definitely expect to see an R9 380X (even if it's a salvage part rather than a new chip) with 4GB.

I don't disagree, but in the context of spending a rumoured $700 (or $1,400 for a pair) on the R9 390X, Q3 2016 isn't that far away. With lower prices of VRAM and HBM2, 6-8GB cards should become far more affordable. I wouldn't be surprised if Pascal GP204 has options with 6-8GB of HBM2.
If somebody spends USD 700 on an R9 390X, they can comfortably wait for the big-die successors in H2 2017, which will be true successors with 2x the perf. It's completely a personal choice. It's very simple: if you do not upgrade your GPU frequently and would like to use it for 3+ years, I would say the current gen is not ideal and waiting for FINFET is fine. For anybody who upgrades on a 1-2 year cycle, the R9 390X is fine, as he can then wait for the big-die successors on a mature FINFET node in H2 2017.

When comparing MSRP vs. MSRP, yes. The GTX 680 was 35%+ faster than the GTX 580 for $499. So even though NV did raise the GTX 560 Ti's price point to $500 with the 680, the 680 was still a better bang for the buck than the $380-500 580 1.5-3GB cards. Not to mention HD 7970 OC destroyed 580 OC by 40-80%.

Think about it: the 970/980 are barely faster than the R9 290/290X, but compared to the GTX 670/680 it's a 60-70% increase in performance. Their Pascal successors, priced at $350-400 and $500-550, should be 60-70% faster than the GTX 970/980. That puts them well above Titan X performance. I wouldn't be surprised if the GTX 970's GP204 successor were 95% as fast as the Titan X.

Sweclockers has Titan X 59% faster than a 970 at 1440P.

TPU has it 49% faster at 1440P.

Both AMD and NV should deliver cards priced at $399-449 that are as fast as the Titan X/R9 390X by Q4 2016.
It's well known that the 2016 next-gen SKU below the flagship will beat the current flagship. As for how much a person wants to spend on a GPU, that's a personal choice. I think there is a sizeable crowd that does not want to pay USD 1000 and will buy something in the USD 500-700 price range. That crowd can time their purchases with the GM200 6GB or R9 390X and then wait two years for the true successors in H2 2017.

I think you are way too conservative. Are you suggesting NV/AMD will raise next-gen mid-range prices to $650-700? In other words, you are saying the successor to the 980 will be $650+? Even if that's true, the successor to the 970 should be priced well below $500 and provide Titan X/R9 390X-tying or -beating performance, with far better power usage and newer features. That's how GPU generations have always worked.
Yeah, I am. I can see multiple factors likely to cause it: an immature 16/14nm FINFET node, HBM2 yields in the initial ramp stage, and 2.5D manufacturing with 16/14nm GPUs. All are bleeding edge and likely to have significant challenges at least for the first 12 months. I do agree that the SKU below the flagship will be faster than the current R9 390X and Titan-X at lower power and a lower price. But that's expected. As I said, it's the frequency of a user's GPU purchases and how the user chooses to time them that will decide whether he wants to wait till Q3 2016 for the first FINFET GPUs.
 
Feb 19, 2009
10,457
10
76
Then wait for whatever fits your wants/needs. It just means any 4GB GPU is not right for you. It doesn't mean a GPU that doesn't fit your wants = fail ^_^

If all AMD has got is a 4GB GPU, I can guarantee you it's going to be a fail, because the competition (with 6GB GM200 SKUs) is going to use VRAM as the new metric and hammer home a glorious victory.
 

Enigmoid

Platinum Member
Sep 27, 2012
2,907
31
91
If all AMD has got is a 4GB GPU, I can guarantee you it's going to be a fail, because the competition (with 6GB GM200 SKUs) is going to use VRAM as the new metric and hammer home a glorious victory.

Are you trying to be deliberately inflammatory?

4GB of VRAM on a Titan X-class GPU isn't sufficient. Nobody is trying to spin anything. It's the simple truth. And if you check post history, you will see that one of the strongest '4GB is enough for the 390X' supporters went and blasted the 980 because it only had 4GB. Now, all of a sudden, 4GB is plenty on a card with 30-50% more performance than a 980.

If the 390X releases with 4GB, it will probably be fine for a year or so. But two years down the road it is going to run into VRAM problems, just as the 680 did.
 
Feb 19, 2009
10,457
10
76
@Enigmoid
I'm the one being flamed for declaring that Titan X+ class performance with 4GB isn't enough. I don't think it's enough now, definitely not one year later, let alone two.

I typically hold onto my GPU for 2-3 years before upgrading so when I buy something, I like it to still have some grunt for that long.

I mean, in Watch Dogs a 7970 can run with ultra textures at 1080p with good performance (without MSAA), but a 680 2GB cannot, period. History repeats itself: we will see games that a Titan X+ class card could run with ultra textures IF it had more than 4GB of VRAM. That's almost assured given how games progress.
 

boozzer

Golden Member
Jan 12, 2012
1,549
18
81
If all AMD has got is a 4GB GPU, I can guarantee you it's going to be a fail, because the competition (with 6GB GM200 SKUs) is going to use VRAM as the new metric and hammer home a glorious victory.
So in your opinion the 970 and 980 are trash also? As long as the answer is yes, I will respect your opinion; nothing wrong with not wanting something that doesn't fit your needs :thumbsup:
 
Feb 19, 2009
10,457
10
76
So in your opinion the 970 and 980 are trash also? As long as the answer is yes, I will respect your opinion; nothing wrong with not wanting something that doesn't fit your needs :thumbsup:

No, I clearly said Titan X+ class performance.

The 970/980 are already old; they aren't going to last the distance (2-3 years).
 

SimianR

Senior member
Mar 10, 2011
609
16
81
I think his point is that when AMD was more power efficient but not ahead in performance, performance was the most important metric. When AMD was ahead in performance, then efficiency became the most important metric. And he is assuming that if they launch an R9 390X that trades blows with the Titan-X but only has 4GB of memory, everyone will say it's crap because now 4GB of VRAM won't be "enough". I do sort of agree with him, though; launching this late, you sort of hope they have most of the bases covered or it will be a hard sell.
 
Feb 19, 2009
10,457
10
76
If it's limited to 4GB, it's a bit of both, Vagabond. Rushing out while it's premature offers some advantages but also disadvantages. Hopefully the 4GB rumors are crap, because I'm looking forward to a nice upgrade and a nice 4K monitor.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Are you trying to be deliberately inflammatory?

4GB of VRAM on a Titan X-class GPU isn't sufficient. Nobody is trying to spin anything. It's the simple truth. And if you check post history, you will see that one of the strongest '4GB is enough for the 390X' supporters went and blasted the 980 because it only had 4GB. Now, all of a sudden, 4GB is plenty on a card with 30-50% more performance than a 980.
I think his point is that when AMD was more power efficient but not ahead in performance, performance was the most important metric. When AMD was ahead in performance, then efficiency became the most important metric. And he is assuming that if they launch an R9 390X that trades blows with the Titan-X but only has 4GB of memory, everyone will say it's crap because now 4GB of VRAM won't be "enough". I do sort of agree with him, though; launching this late, you sort of hope they have most of the bases covered or it will be a hard sell.


You made this reference before and I already responded to your assertions but you keep repeating the same thing. Ok, once again:

#1. The reason I wanted the 980 to have 4GB and 8GB versions is that, given its price at the time and the fact that it was a mid-range chip, I didn't think a $550 price was warranted. First of all, the card was barely faster than a 970 but cost nearly $200 more. It was only logical that NV at least threw a bone and offered an 8GB version to help justify the price. Otherwise, it had little to do with a single 980 requiring 8GB for its performance, as it was obvious the card wasn't fast enough to benefit much from 8GB. I didn't like the idea of NV raising prices yet again from the 680's $499 level and excluding an 8GB version altogether, which meant that for 980 SLI owners keeping their setups > 3 years, this wasn't great. After all, I am sure some 980 SLI users wouldn't have minded paying $50-100 more per card for 8GB of VRAM considering they were already spending $1100.

I repeat this point for you one more time: never was it stated that the 980 was a 'worthless' or 'pointless' product just because it lacked 8GB of VRAM. That is the exact position being reiterated today with the R9 390/390X for 1080P-1440P gamers interested in buying just one of those cards. In that sense, the position is exactly the same as it was at the 980's launch => offer 4GB and 8GB versions, rather than '4GB is now preferred to 8GB' as you are trying to spin it....

#2. At that time, I made the incorrect prediction that PS4/XB1 games would start using very high amounts of VRAM at 1080P or even 1440P very soon, and that those users going 980 SLI should probably want 8GB if they intended to keep their cards for > 3 years. A lot of game developers kept stating at the start of the PS4/XB1 generation that the more VRAM, the better. That made me nervous and I assumed that 4GB might not be enough in the very near future. However, looking at what has happened since the PS4/XB1 and the 980's launch, this isn't really happening. The VRAM increases are very slow and we have hardly reached a point where even 3.5GB of VRAM is a problem, as 970 SLI is fine in 99.9% of games at 1440P and below. Most games have increased VRAM requirements from 2GB to 3-4GB, but not beyond that. PC games have basically leveled off at the 4GB mark. 4GB isn't a bottleneck for even 980 SLI at 1440P. As far as I've read, no professional review site shows 4GB to be an actual bottleneck at playable frame rates, and 980 SLI, when it scales, outperforms the Titan X in every benchmark at 1080P or 1440P. If you have data to the contrary, please provide it.

And finally, in no way am I 'supporting' a 4GB > 8GB scenario. You are just putting words into people's mouths. You clearly cannot grasp the concept being discussed here:

> Buy a much cheaper R9 390, save $200-300 versus the expensive flagships, resell it next year and get a next-gen card with 6-8GB of VRAM that will be faster and have more advanced features. Alternatively, for those users who do plan to keep their cards for 3+ years OR are gaming 100% on 4K-5K monitors (or similar), then yes, they should get 6-8GB cards. So no, there is absolutely no promotion here of 4GB > 8GB.

What's being debated here is that an R9 390 non-X priced at $450-500 with only 4GB of VRAM would be an awesome sweet-spot should it have 85-88% of the Titan X performance for gamers on 1080P-1440P monitors. So next time, please pay attention to what's actually being debated.

Also, your implication that because the R9 390X is 30-40% faster than a 980 it MUST have 8GB of VRAM or it's a fail is 100% unsubstantiated, unless you can provide real-world data where 980 SLI 4GB actually runs into major VRAM bottlenecks at 1440P and below. Secondly, as already discussed, it completely misses the entire discussion -- R9 390 4GB for $450-500 and R9 390X 8GB for $600+. To imply that all R9 390 series cards must have 8GB of VRAM is ludicrous considering millions of PC gamers purchased 970 SLI, 980 SLI or a single 980 for 1440P gaming over the last 8 months. If you assert that 4GB of VRAM makes the 390/390X irrelevant for PC gaming, then I guess all those NV users should throw their 970 SLI and 980 SLI setups into the trash? Games determine the VRAM requirement, not just the GPU's speed increase over the 980.

Considering not even the mighty Titan X (by itself) today shows tangible benefits from 12GB of VRAM over 980 SLI, the assertion that > 4GB of VRAM is required for gaming has no merit unless data is provided to back it up.

If it's limited to 4GB, it's a bit of both, Vagabond. Rushing out while it's premature offers some advantages but also disadvantages. Hopefully the 4GB rumors are crap, because I'm looking forward to a nice upgrade and a nice 4K monitor.

Silver, I think the 4GB vs. 8GB debate has gone to your head and you are just following the hype.

Answer these basic questions:

1) If a gamer wants to buy just 1 card (no SLI or CF), would an AMD card 15-20% faster than a 980 priced at $449-499 be a fail?

2) Since 980 SLI beats the Titan X in all scenarios where SLI scales, and 4GB of VRAM is not a problem for 980 SLI in those cases, what makes you think 4GB makes cards like the 390/390X obsolete? You keep saying how more than 4GB of VRAM is really beneficial for VSR/DSR at 1440P, but have you actually tried playing games with super-sampling at 1440P? From HardOCP's review, they couldn't even max out GTA V on a Titan X at 1440P, so what super-sampling are you talking about? You think a card even 30-40% faster than a 980 can suddenly handle insane amounts of super-sampling? It might if you play OLD games at 1080P, but those titles are not VRAM heavy.

3) If 4GB of VRAM is a fail on paper, are you suggesting 98% of all PC gamers should stop playing PC games? If you assert that 4GB of VRAM on a card slower than 980 SLI (which the R9 390X certainly will be) is a fail, then what should most PC gamers do: buy a $700-1000 GM200 6GB or Titan X, or otherwise buy a console or stop playing PC games?

I think his point is that when AMD was more power efficient but not ahead in performance, performance was the most important metric. When AMD was ahead in performance, then efficiency became the most important metric. And he is assuming that if they launch an R9 390X that trades blows with the Titan-X but only has 4GB of memory, everyone will say it's crap because now 4GB of VRAM won't be "enough". I do sort of agree with him, though; launching this late, you sort of hope they have most of the bases covered or it will be a hard sell.

Yup, that's the expected response and I know that's what NV PR will try to spin.

However, let's all go on record and admit that the same posters bashing the 4GB VRAM scenario not once brought up VRAM bottlenecks in these situations:

GTX470/570 1.2GB vs. Unlocked HD6950 2GB
GTX480/580 1.5GB vs. HD6970 2GB

The entire forum basically bashed the 7970 for delivering only a "30%" stock performance increase over a 580 1.5GB, but almost no one talked about the 7970 doubling the VRAM of a stock 580.

670 2GB vs. 7950 3GB
680 2GB vs. 7970/7970Ghz 3GB
770 2GB vs. 7970Ghz 3GB/280X 3GB
670/680 2GB SLI vs. HD7950/7970/7970Ghz 3GB CF
780 3GB vs. 290 4GB
780Ti 3GB vs. 290X 4GB
780/780Ti SLI 3GB vs. R9 295X2 4GB

Normally, it was gamers who acknowledged that AMD cards had superior price/performance and they said: "Oh, and btw, you get extra VRAM as a bonus."

Do you honestly remember people recommending against buying a 570 1.28GB vs. unlocked 6950 2GB or a GTX680 2GB vs. 7970Ghz just because of the VRAM differences? That never happened on AT.

But now, the minute the discussion entered the possibility of R9 390 non-X 4GB having 87-88% of the Titan X's performance priced at $450-500, it's a fail.

so 970/980 are trash, glad we got that cleared up

12 pages and I also learned that 970 SLI, 980 SLI, R9 295X2, R9 290X CF, 780TI SLI are all trash now for 1080P-1440P gaming. Who would have thought. Poor gamers with those setups, how in the world are they playing games? Poor lads. Apparently everyone in the world now has a 4K monitor too and if you aren't using super-sampling at 1440P, you are gaming wrong.

Also, if MSI Afterburner shows VRAM usage, that's apparently VRAM that's required, not dynamically allocated. If an NV card uses 5-6GB of VRAM, then surely the game requires it or it will run at 15 fps or crash.



But you know what's the most amazing part? 970 SLI, which is a whopping 75% faster than an R9 290X at 1440P, and 980 SLI, which is 96% faster, both with either 3.5GB or 4GB of VRAM, were perfectly fine options purchased by tens of thousands of PC gamers since September; but apparently a 390/390X duo that will be slower than 970 SLI and way slower than 980 SLI is a fail without 8GB. The bulletproof logic is Nobel Prize-worthy on this one. But hey, because GM200 will have 6GB and the Titan X has 12GB, all cards without 6-12GB are a fail. Darn it, time to donate my 7970s to a retirement home so that they can play 1999 PC games.

 
Feb 19, 2009
10,457
10
76
@RS
1) It will be fine. I just don't think AMD's flagship is that slow. I'm expecting Titan X+!

2) There needs to be a focus on frame times rather than just min/avg when we're talking about games that are shown to consume more than 4GB of VRAM; that will show whether they actually need it (for dynamic caching and smoother gameplay) or just allocate it because they can.

3) It will be a fail going forward; ~2 years of lifespan at the top won't be secure with 4GB of VRAM.
 