[Sweclockers] Radeon 380X coming late spring, almost 50% improvement over 290X


biostud

Lifer
Feb 27, 2003
18,409
4,968
136
One of the good things about PCs is that you have a lot of different hardware you can choose from and modify as you like. The fact that there's no one solution that fits all is why PCs are more interesting than consoles and Macs on the hardware side. If I could get a card with a good AIO cooler without paying too much of a premium, I would definitely choose it over air. One of the reasons I chose my motherboard is that it has double spacing between the two PCIe x16/x8 slots, so it should be able to accommodate AIO coolers on the next video cards I purchase.
 

Headfoot

Diamond Member
Feb 28, 2008
4,444
641
126
Wat? When has any of this been said? I've simply argued for a simple heatsink cooler. I've questioned what would happen if AMD shipped the default cooler as a closed water loop. I have never said it'd be "too good" - quite the contrary, I think closed loop systems are a terrible solution.

Yet more conclusory remarks with no evidence or logical argument to support them. "Closed loops are a terrible solution" WHY? No one cares what your opinion is if you don't provide reasoning - debate 101. If you came here to just deposit your opinion and not to discuss the topic, then post it on Twitter or something instead...
 

RampantAndroid

Diamond Member
Jun 27, 2004
6,591
3
81
Yet more conclusory remarks with no evidence or logical argument to support them. "Closed loops are a terrible solution" WHY? No one cares what your opinion is if you don't provide reasoning - debate 101. If you came here to just deposit your opinion and not to discuss the topic, then post it on Twitter or something instead...

I guess you didn't read previous comments, either.

Cards that ship with an AIO as default (regardless of reason) are, in my eyes, a worse value than a proper heatsink solution. Moreover, I think if AMD's default solution were an AIO with a radiator you need to mount on the back of the case, they would see serious issues. In the past, cards like the 5970 were too long to fit some cases. With an AIO, some cases won't have a mounting spot within reach, or some users might already have an H100 or similar taking up that space. They limit themselves by adding "will it fit?" questions to buying their card. It's one thing for a 5970 or a 295X2 to have this problem, given it is such an expensive card. It's another thing if a more mainstream card, such as a 380, has that problem.

Somewhere else I mentioned MTBF. But hey, if I don't (re)post my reasons every 2 pages, clearly I'm not offering any reasons.
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
So according to you, Crysis 3 is AMD biased? :/

No, according to you anything you deem biased is biased. Watch Dogs runs equally badly on all hardware, and your idea of dual sponsorship is laughably stupid. Nvidia is not going to sponsor a game which says "AMD GRAPHICS" when it starts up and doesn't mention Nvidia anywhere. AMD has Tomb Raider and Nvidia has Far Cry 4; everything else in that lineup is fair IMO despite having more GE titles.

Use your energy to complain about something legit.
 

_UP_

Member
Feb 17, 2013
144
11
81
You took a sensible post and distorted it, just FYI.

Well, let me explain then, and then tell me if I distorted it.

It's a superior solution, but again, the point I have argued is that it's not the best, due to the possible issues, let alone potential case-fit issues. People may have room for one radiator, not two. People have removed drive cages and turned front intake into exhaust, which creates two immediate problems for many people (they need the cage, and front exhaust = inefficient airflow, which may compromise other cooling).

And a good reason to worry about an AIO/CLC as reference: everyone argues that both CLC and air versions will be available, but this is simply guessing, because all other reference (air) coolers have been supplanted by custom solutions. Nobody changed the CLC on the 295X2, did they? And others have added water as a solution, due to greater efficiency. Custom cards, as RS alluded to, will also be delayed. Now, how motivated will AIBs be to release a custom cooling solution that is inferior to the reference cooler? I highly, highly doubt any custom air design will be cooler and quieter and offer more OC headroom than the CLC. So, my bet is that the only custom air designs will be there simply to appease those who can't fit the CLC cooler, and those cards won't likely receive nearly as many beneficial aftermarket tweaks as the main designs will, which will either have the reference-brand cooler or a custom design that is still a CLC.

I would love to be wrong about this, really, I would. And it is useless to argue now, because AMD will be making up their mind however they see fit, and we'll just have to wait and see how the market and AIBs handle that situation.

If you read the post, the main idea of the middle paragraph, especially the part in bold, is that the CLC is basically so good - i.e. cooler and quieter than any other solution - that AIBs will have no justification for offering alternative air coolers. Call me crazy, but to me, that makes no sense.
CLC coolers for CPUs have been around for several years now, and are pretty successful. Water is the future. It is quieter, cooler, and, if done right, more elegant. There's no reason to fear advances and innovation; we, especially as enthusiasts, should embrace them. Sure, some solutions will fail, but we should give them a go. Especially before it even arrives, we should wait and see. I think it will be great, but even if I didn't, I'd be happy to give it a shot. The market will decide eventually, not you, not me, and not AT VC&G.

Last point. As was mentioned quite a few times, the 295X2 example is not perfect as a "they didn't make air coolers for those" case, since it has ~1.66x the TDP. Dissipating that much heat on air is possible (as the PowerColor example shows), but it's not easy: the cooler ends up huge, heavy and loud, and still doesn't cool it enough. So probably not the best solution. Luckily this one is only 300W. Can't wait for the 395X2 or 385X2 or whatever it'll be called. Who knows, maybe we'll get a 240mm rad for that one.
 

tential

Diamond Member
May 13, 2008
7,355
642
121
I would never have considered water cooling before this GPU release, so I'll be very happy if the R9 380X is at a price point I can justify for the performance it gives.
 
Feb 19, 2009
10,457
10
76
No, according to you anything you deemed bias is biased. Watch Dogs runs equally bad on all hardware and your idea of dual sponsored is laughably stupid. Nvidia is not going to sponsor a game which says "AMD GRAPHICS" when it starts up and doesn't say Nvidia anywhere. AMD has Tomb Raider and Nvidia has Far Cry 4, everything else in that lineup is fair IMO despite having more GE titles.

Use your energy to complain about something legit.

What short term memory you have.

The Frostbite engine is optimized for both AMD & NV; look at the DICE presentation to see logos from both. Crytek again is both (a far cry from Crysis 2's tessellated flat surfaces & invisible ocean).

Watch Dogs is GameWorks. It just so happens to run like a dog on most hardware, which is the other part of my point.

[H] is an enthusiast site yet they have so few games. Of their games, 2 are Ubisoft/GameWorks and broken after many months of patches. They may as well add ACU & Dying Light to their list eh? This is why I don't take much from their reviews anymore, until they update their list of games for more recent, non-broken titles.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Some food for thought - not accounting for OCing, the 290X is 47% faster than the original 925MHz HD7970 at 1440p/4K:





Now let's assume ~45% increase for flagship 300 over 290X (4096 SPs/2816 SPs, 256 TMUs/176 TMUs, 512GB/sec/320GB/sec memory bandwidth) and another ~45% for 2Q 2017 14nm Arctic Islands or w/e it's called (I accounted for a slight delay from 2016 for the flagship 14nm card):

4K Gaming standings:
HD7970 = 112%
290X = 165%
-> Fiji = 165% x 1.45 = 239%
-> 14nm 2Q 2017 = 239% x 1.45 = 347%

So essentially in the next 2 years we should have an NV/AMD card at least 3X faster than the original 7970 and 2X faster than the 980. I also think my estimates are conservative because I feel 14nm Pascal/AMD's 14nm flagship will be more than 45% faster than GM200/flagship 300 card.

Exciting times ahead for 4K gaming! I think because of these projections, 2017 is when 4K gaming will truly take off. It gives GPUs time to at least double in performance over the 980 and for 4K monitor prices to drop even further. Also, GSync vs. FreeSync will have 2 years to duke it out to see which one emerges as the industry standard, and we should start seeing HDMI 2.0 and DP 1.3 across all modern monitors by then.
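For anyone who wants to play with these numbers, here's a quick back-of-the-envelope script. The baseline index and the ~45-47% steps are just the assumptions above, nothing measured:

Code:
# Rough sketch of the compounding estimate above; all inputs are the post's assumptions.
hd7970 = 112.0                 # original 925MHz HD7970, relative 4K index
r290x = hd7970 * 1.47          # ~47% faster per the 1440p/4K averages
fiji = r290x * 1.45            # assumed ~45% gain for the flagship 300 series
next_14nm = fiji * 1.45        # assumed ~45% gain for the 14nm flagship (Q2 2017)

for name, score in [("HD 7970", hd7970), ("290X", r290x),
                    ("Fiji (est.)", fiji), ("14nm flagship (est.)", next_14nm)]:
    print(f"{name:22s} {score:6.0f}%  ({score / hd7970:.2f}x the 7970)")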
 
Last edited:

Mondozei

Golden Member
Jul 7, 2013
1,043
41
86
Flagship Pascal is still 2.5 years away, if not more: all of this year, all of 2016, and then into H2 2017.

I'm guessing we're getting 4K@60 already with GM200/R9-300 in most games save for the cutting edge.

Then again, the top-of-the-line game at 4K in 2017 won't be the same as it is today. Think of a game like Star Citizen. At any rate, Carmack has said as much: GPU progress has to increase if VR is to take off.

Samsung, Sharp and AUO are all making 4K displays for mobile phones this year, which means that 4K for VR is just around the corner (maybe even for Oculus C1). 4K for VR is the bare minimum, and there's no way that VR will be a mobile experience for the first few years except for the most rudimentary tasks.

And for VR you'd need a high refresh rate as a minimum, too, so 60 fps isn't good enough; you need to hit around 100 fps consistently. All in all, good pressures to have.
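For context on how much tighter that is than 60 Hz, a quick frame-time sketch (plain arithmetic, no headset-specific claims):

Code:
# Frame-time budget at various refresh rates.
for hz in (60, 90, 100, 120):
    print(f"{hz:>3} Hz -> {1000 / hz:5.2f} ms per frame")
# 60 Hz -> ~16.7 ms, 100 Hz -> 10 ms: the GPU has to finish each frame roughly 40% faster.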


P.S. Crazy that the 290X is around 9.5% slower on average at 1440p than the GTX 980.
 

raghu78

Diamond Member
Aug 23, 2012
4,093
1,475
136
Some food for thought - Not accounting for OCing, 290X is 47% faster than the original 925mhz HD7970 at 1440p/4K:





Now let's assume ~45% increase for flagship 300 over 290X (4096 SPs/2816 SPs, 256 TMUs/176 TMUs, 512GB/sec/320GB/sec memory bandwidth) and another ~45% for 2Q 2017 14nm Arctic Islands or w/e it's called (I accounted for a slight delay from 2016 for the flagship 14nm card):

4K Gaming standings:
HD7970 = 112%
290X = 165%
-> Fiji = 165% x 1.45 = 239%
-> 14nm 2Q 2017 = 239% x 1.45 = 347%

So essentially in the next 2 years we should have an NV/AMD card at least 3X faster than the original 7970 and 2X faster than the 980. I also think my estimates are conservative because I feel 14nm Pascal/AMD's 14nm flagship will be more than 45% faster than GM200/flagship 300 card.

Exciting times ahead for 4K gaming! I think because of these projections, 2017 is when 4K gaming will truly take off. It gives GPUs time to at least double in performance over the 980 and for 4K monitor prices to drop even further. Also, GSync vs. FreeSync will have 2 years to duke it out to see which one emerges as the industry standard, and we should start seeing HDMI 2.0 and DP 1.3 across all modern monitors by then.

A few points to ponder

1. Both AMD and Nvidia are unlikely to push die sizes beyond 300 - 350 sq mm for their first gen 14nm/16nm FINFET chips. 14/16nm FINFET is a more difficult node than 28nm for the entire industry when it comes to yields. Even Intel is having difficulty with 14nm yields so I don't see TSMC / Samsung having it any easier.

2. Given that AMD and Nvidia will be pushing the limits of 28nm production in terms of die size, expecting the first gen 14/16nm FINFET chips to be 45% faster than the 28nm flagship GPUs, which will measure ~550 sq mm (with HBM) for AMD and ~650 sq mm (with GDDR5) for Nvidia, might not be so easy.

3. If Nvidia gets super aggressive and pushes for 4096 CC, 8 GPCs and a 384-bit memory bus at 8 GHz (because I doubt Nvidia will go for 512-bit for cost, yield and power efficiency reasons), maybe there is a chance that they get close to 2x the GM204 aka GTX 980 perf (see the quick bandwidth math after the summary below). But I doubt that will happen, as the die size will be above 350 sq mm even at 14/16nm FINFET.

4. Both AMD and Nvidia might use the massive power efficiency gains from 14/16nm FINFET to push TDP down to 200W for first gen FINFET flagships.

5. I see the 2017 designs from Nvidia and AMD being very ambitious, as the process will be mature, yields will be much better, and their understanding of the process node much improved.

6. I suspect the following sequence of events:
1) A GM304 in Q2 2016, followed by a GP404 in Q4 2016 or Q1 2017.
2) A GP400 (big Pascal) with 1 TB/s bandwidth and 16 GB HBM by late Q3 or Q4 2017.
3) An R9 4xx flagship GPU in Q2 2016 with 8 GB / 16 GB HBM.
4) An R9 5xx big-die flagship with 1 TB/s bandwidth and 16 GB HBM in H2 2017, with a next generation architecture. This next gen architecture could be the basis for the PS5 and the next gen Xbox.

So in summary, a doubling of GTX 980 perf in 2016 is possible but not a certainty. But I agree that 2017 would be a major leap in 4K graphics performance :thumbsup:
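As a quick sanity check on the bandwidth side of point 3 (plain arithmetic; the 384-bit @ 8 GHz configuration is the hypothetical from that point, not a confirmed spec):

Code:
# Effective GDDR5 bandwidth = bus width (bits) / 8 * effective data rate (Gbps)
def gddr5_bandwidth(bus_bits, gbps):
    return bus_bits / 8 * gbps   # GB/s

print(gddr5_bandwidth(384, 8))   # 384.0 GB/s - the hypothetical 384-bit @ 8 GHz setup
print(gddr5_bandwidth(512, 8))   # 512.0 GB/s - a 512-bit bus at the same speed
print(gddr5_bandwidth(256, 7))   # 224.0 GB/s - the GTX 980's actual 256-bit @ 7 GHz, for reference

384 GB/s is only ~1.7x the 980's bandwidth, so getting close to 2x the performance would also lean on architectural efficiency gains.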
 
Last edited:

DooKey

Golden Member
Nov 9, 2005
1,811
458
136
A few points to ponder

1. Both AMD and Nvidia are unlikely to push die sizes beyond 300 - 350 sq mm for their first gen 14nm/16nm FINFET chips. 14/16nm FINFET is a more difficult node than 28nm for the entire industry when it comes to yields. Even Intel is having difficulty with 14nm yields so I don't see TSMC / Samsung having it any easier.

2. Given that AMD and Nvidia will be pushing the limits of 28nm production in terms of die size, expecting the first gen 14/16nm FINFET chips to be 45% faster than the 28nm flagships GPUs which will measure 550 sq mm (with HBM) for AMD and 650 sq mm (with GDDR5) for Nvidia might not be so easy.

3. If Nvidia gets super aggressive and pushes for a 4096 cc,8 GPC, 384 bit memory bus at 8 Ghz ( because I doubt Nvidia will go for 512 bit for cost, yield and power efficiency reasons) maybe there is a chance that they get close to 2x the GM204 aka GTX 980 perf. But I doubt that will happen as the die size will be above 350 sq mm even at 14/16 nm FINFET.

4. Both AMD and Nvidia might use the massive power efficiency gains from 14/16nm FINFET to push TDP down to 200w for first gen FINFET flagships.

5. I see the 2017 designs from Nvidia and AMD being very ambitious as the process would be mature and yield would be much better and their understanding of the process node also much better.

6. I suspect the following sequence of events -
1) A GM304 in Q2 2016 followed by a GP404 in Q4 2016 or Q1 2017.
2.) A GM400 (big Pascal) with 1 TB/s bandwidth and 16 GB HBM by late Q3 or Q4 2017.
3.) A R9 4xx flagship GPU in Q2 2016 with 8 GB / 16 GB HBM.
4.) A R9 5xx big die flagship with 1 TB/s bandwidth and 16 GB HBM in H2 2017 with a next generation architecture. This next gen architecture could be the basis for the next gen PS5 and next gen Xbox.

So in summary a doubling of GTX 980 perf in 2016 is possible but not a surety. But I agree that 2017 would be a major leap in 4K graphics performance :thumbsup:

Interesting thoughts. I believe doubling 980 performance in 2016 is possible because 28nm is so mature now. As you said, it looks like 2017 for a true 4K beast in a single-GPU card.

WRT point number 4, why do you think AMD is going to get the next gen console? That's pretty bold considering the current/future R&D pinch caused by poor company profits.
 

raghu78

Diamond Member
Aug 23, 2012
4,093
1,475
136
Interesting thoughts. In 2016 the doubling of 980 performance I believe is possible because 28nm is so mature now. As you said it looks like 2017 for a true 4K beast in a single gpu card.

WRT point number 4, why do you think AMD is going to get the next gen console? That's pretty bold considering the current/future R&D pinch caused by poor company profits.

The reason I see AMD getting the next gen console contracts is x86 + world class GPU IP combined on a single chip. Intel's GPU IP is not as strong as AMD's, and Nvidia does not have an x86 license. I do not think Sony and Microsoft are going to change ISA after going to x86, primarily at the request of developers, to simplify console game development.

AMD's next gen Zen architecture, which is being designed by Jim Keller and his team, holds promise, and Zen would be ready for next gen consoles. At 14/16nm FINFET, AMD can fit double the transistors it could at 28nm into a similar power budget of 100W, so it could easily fit 8 Zen cores clocked at 2 GHz into the same power as 8 Jaguar cores at 1.6 GHz on 28nm. AMD could also maintain architectural compatibility with the current gen GCN console GPUs while still bringing significant architectural and efficiency improvements. This matters a lot for Sony and Microsoft, because backwards compatibility is a major advantage: the existing library of PS4 and Xbox One games would run on the next gen consoles from launch.

I also foresee HBM as the system memory for next gen consoles, most likely 16 GB with 512 GB/s - 1 TB/s of bandwidth. If you add up the pieces of what AMD can deliver for the next gen consoles - x86 and backwards compatibility combined with a single-chip SOC with next gen HBM - it's really compelling. I don't think Intel or Nvidia can do the same.
 

DooKey

Golden Member
Nov 9, 2005
1,811
458
136
The reason I see AMD getting the next gen console contracts is x86 + world class GPU IP combined on a single chip. Intel's GPU IP is not as strong as AMD's and Nvidia does not have a x86 license. I do not think Sony and Microsoft are going to change ISA after going to x86 primarily at the request of developers to simplify console game development. AMD's next gen Zen architecture which is being designed by Jim Keller and his team holds promise. Zen would be ready for next gen consoles. At 14/16nm FINFET AMD can fit double the transistors as they did at 28nm into a similar power budget of 100W. AMD can easily fit 8 Zen cores clocked at 2 Ghz into similar power as 8 Jaguar cores at 1.6 Ghz at 28nm. AMD could easily maintain architectural compatibility with the current gen GCN console GPUs but still bring significant architectural and efficiency improvements. This matters a lot for Sony and Microsoft as backwards compatibility with current PS4 and Xbox One games is a major advantage as the existing library of PS4 and Xbox One games will run on next gen consoles from launch. I also foresee HBM as the system memory for next gen consoles and most likely 16 GB with 512 GB - 1 TB/s bandwidth. If you add the pieces as to what AMD can deliver for the next gen consoles with x86 and backwards compatibility combined with a single chip SOC with next gen HBM its really compelling. I don't think Intel or Nvidia can do the same.

I can see HBM in the next console. If that happens then the consoles might actually approach PC levels of performance. Hopefully the next gen of PC ports pays more attention to the needs of the PC gamer.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
I never said doubling the performance of a 980 in 2016 -- as per my post, roughly 2 years + 2 months from now gets us to Q2 2017.

Also, your point #3 isn't likely as NV already stated that they will use HBM for Pascal.
 
Last edited:

raghu78

Diamond Member
Aug 23, 2012
4,093
1,475
136
I never said doubling the performance of a 980 in 2016 -- Q2 2017 as per my post, or roughly 2 years + 2 months gets us to Q2 2017.

Yeah, I misread that part. For 2017 I can see some seriously powerful GPUs from Nvidia and AMD. The flagship big-die Pascal (> 500 sq mm) and AMD's next gen flagships should most likely launch in H2 2017 and will be awesome 4K performers. 16 GB of HBM, 1 TB/s of bandwidth and some mouth-watering performance to look forward to.

Also, your point #3 isn't likely as NV already stated that they will use HBM for Pascal.
The first gen 14/16nm FINFET chips from Nvidia in 2016 are Maxwell shrinks. If you are thinking that Nvidia will go to a ground-up new DX12 architecture (Pascal) with a new high-bandwidth, low-latency CPU-GPU/GPU-GPU interconnect called NVLink and a state-of-the-art high bandwidth memory system on a bleeding-edge, immature 14/16nm process which is likely to have yield struggles, then you are woefully wrong. The last time Nvidia did that was Fermi GF100, and we all know how it turned out (new process, new architecture, GDDR5 memory system). Nvidia will not repeat such a mistake. The earliest debut for Pascal is likely to be Q4 2016 in the form of a GP404. Big die Pascal will follow 9-12 months later, in Q3-Q4 2017.
 
Last edited:

parvadomus

Senior member
Dec 11, 2012
685
14
81
4K Gaming standings:
HD7970 = 112%
290X = 165%
-> Fiji = 165% x 1.45 = 239%
-> 14nm 2Q 2017 = 239% x 1.45 = 347%

Fiji will be more than 45% faster than Hawaii. I bet it's more like ~60%; it will simply destroy everything if it really has 4096 shaders + double the bandwidth, and not even GM200 will be able to catch it at 4K.
You can't simply scale by the shader count when the bandwidth doubles and the architecture changes.
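A toy illustration of that point (the spec numbers are the rumors discussed in this thread, and the 70/30 weighting is purely invented to show the shape of the argument):

Code:
# Naive shader-ratio scaling vs. a crude blend that also credits the bandwidth increase.
hawaii = {"shaders": 2816, "bandwidth": 320}   # GB/s
fiji = {"shaders": 4096, "bandwidth": 640}     # rumored: 4096 SPs, roughly double the bandwidth

shader_ratio = fiji["shaders"] / hawaii["shaders"]         # ~1.45x
bandwidth_ratio = fiji["bandwidth"] / hawaii["bandwidth"]  # ~2.0x

print(f"shader count alone:      {shader_ratio:.2f}x")
alpha = 0.7  # invented split: 70% compute-bound, 30% bandwidth-bound workload
print(f"70/30 compute/bandwidth: {alpha * shader_ratio + (1 - alpha) * bandwidth_ratio:.2f}x")
# ~1.62x, i.e. in the ballpark of the ~60% guess above.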
 

Paul98

Diamond Member
Jan 31, 2010
3,732
199
106
What I am wondering about is what sort of price range should be expected for the 390.
 
Feb 19, 2009
10,457
10
76
My guess is the X variant will have a reference hybrid water cooler and will go for a higher premium this time, more so than their prior R9 290/290X separation.

Either way, Q2 is too long, they need it out yesterday.
 

Mondozei

Golden Member
Jul 7, 2013
1,043
41
86
Possible specs leaked:

395X Fiji x2
390(X) is Fiji
380(X) is 280(X) re-branded


http://www.3dcenter.org/news/amd-ra...-als-380x-fiji-als-390x-und-bermuda-als-395x2


No, the 380X would be a 290X, not the 280X.
It would also have lower TDP and higher clocks.

The 360(X) are slated to come out earlier than the flagships, around March. The Computex timeframe is still the estimate for the 390/390X. 395X2 is for Christmas.

I personally don't see why they'd wait that long for the 395X2. Maybe they want to save it to attack GM200, in case their 390X gets beaten by the flagship GM200.
 
Feb 19, 2009
10,457
10
76
I personally don't see why wait that long for 395X2. Maybe they want to save it to attack GM200 in case their 390X gets beaten by flagship GM200.

Could Asetek's maxed-out production capacity be a limiting factor, due to the reliance on their AIO (which most AIO brands use!)?
 

biostud

Lifer
Feb 27, 2003
18,409
4,968
136
No, the 380X would be a 290X, not the 280X.
It would also have lower TDP and higher clocks.

The 360(X) are slated to come out earlier than the flagships, around March. The Computex timeframe is still the estimate for the 390/390X. 395X2 is for Christmas.

I personally don't see why wait that long for 395X2. Maybe they want to save it to attack GM200 in case their 390X gets beaten by flagship GM200.

ah, yeah typo

they will probably wait because of cost and better chip binning.
 

Mondozei

Golden Member
Jul 7, 2013
1,043
41
86
Interesting story by Fudzilla:

AMD is using what it calls a 2.5D-IC silicon interposer, which means that there will be two separate chips on the same silicon interposer and package substrate. Fiji in 28nm will be one of these chips, and the second batch of chips will be the High Bandwidth Memory (HBM) memory designs. However, there is a catch with AMD's approach.

From what we've learned, Fiji is limited to 4GB memory. With the current memory technology the GPU would simply be too big to put on an interposer and package. The interposer should be viewed as a stack of conductors that lets the GPU and HBM memory communicate at much higher speeds than ever before. The interposer then gets into the package that goes on PCB. You could say the interposer is the middle-man that makes things faster.

Read the whole thing
 