RampantAndroid
Well, I quoted it. Post 470.
You took a sensible post and distorted it, just FYI.
Well, I quoted it. Post 470.
Wat? When has any of this been said? I've simply argued for a simple heatsink cooler. I've questioned what would happen if AMD shipped the default cooler as a closed water loop. I have never said it'd be "too good" - quite the contrary, I think closed loop systems are a terrible solution.
Yet more conclusory remarks with no evidence or logical argument to support them. "Closed loops are a terrible solution" WHY? No one cares what your opinion is if you don't provide reasoning - debate 101. If you came here to just deposit your opinion and not to discuss the topic, then post it on Twitter or something instead...
Cards that ship with an AIO as default (regardless of reason) are, in my eyes, a worse value than a proper heatsink solution. Moreover, I think if AMD's default solution is an AIO with a radiator you need to mount on the back of the case, they will see serious issues. In the past, cards like the 5970 were too long to fit some cases. With an AIO, some cases won't have a mounting area within reach, and some users might already have an H100 or similar taking up that space. AMD limits itself by adding "will it fit?" questions to buying their card. It's one thing for a 5970 or a 295X2 to have this problem, given how expensive those cards are. It's another thing if a more mainstream card, such as a 380, has it.
So according to you, Crysis 3 is AMD biased? :/
You took a sensible post and distorted it, just FYI.
It's a superior solution, but again, the point I have argued is that it's not the best, due to the possible issues, let alone potential case-fit issues. People may have room for one radiator, but not two. People have removed drive cages and turned front intake into exhaust, which creates two problems immediately for many users: they need the cage, and front exhaust makes for inefficient airflow, which may compromise other cooling.
And a good reason to worry about an AIO/CLC as reference: everyone argues that both CLC and air versions will be available, but this is simply guessing, because every other reference (air) cooler has been supplanted by custom solutions. Nobody changed the CLC on the 295X2, did they? And others have added water as a solution, due to its greater efficiency. Custom cards, as RS alluded to, will also be delayed. Now, how motivated will AIBs be to release a custom cooling solution that is inferior to the reference cooler? I highly, highly doubt any custom air design will be cooler, quieter, and offer more OC headroom than the CLC. So my bet is that the only custom air designs will exist simply to appease those who can't fit the CLC cooler, and those cards likely won't receive nearly as many beneficial aftermarket tweaks as the main designs will, which will either carry the reference cooler or a custom design that is still a CLC.
I would love to be wrong about this, really, I would. And it is useless to argue now, because AMD will be making up their mind however they see fit, and we'll just have to wait and see how the market and AIBs handle that situation.
No, according to you anything you deem biased is biased. Watch Dogs runs equally badly on all hardware, and your idea of "dual sponsored" is laughably stupid. Nvidia is not going to sponsor a game which says "AMD GRAPHICS" when it starts up and doesn't say Nvidia anywhere. AMD has Tomb Raider and Nvidia has Far Cry 4; everything else in that lineup is fair IMO despite having more GE titles.
Use your energy to complain about something legit.
Some food for thought - Not accounting for OCing, the 290X is 47% faster than the original 925MHz HD7970 at 1440p/4K:
Now let's assume ~45% increase for flagship 300 over 290X (4096 SPs/2816 SPs, 256 TMUs/176 TMUs, 512GB/sec/320GB/sec memory bandwidth) and another ~45% for 2Q 2017 14nm Arctic Islands or w/e it's called (I accounted for a slight delay from 2016 for the flagship 14nm card):
4K Gaming standings:
HD7970 = 112%
290X = 165%
-> Fiji = 165% x 1.45 = 239%
-> 14nm 2Q 2017 = 239% x 1.45 = 347%
So essentially in the next 2 years we should have an NV/AMD card at least 3X faster than the original 7970 and 2X faster than the 980. I also think my estimates are conservative because I feel 14nm Pascal/AMD's 14nm flagship will be more than 45% faster than GM200/flagship 300 card.
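The compounding above is easy to sanity-check. A quick Python sketch of the same arithmetic, using the thread's own index figures and its assumed ~45% generational uplifts:

```python
# Sanity check of the compounding performance projection above.
# Baseline: original 925 MHz HD 7970 = 112% on the 4K index used in this thread.
base_7970 = 112.0
r9_290x = 165.0                  # ~47% faster than the 7970 (165 / 112 ~= 1.47)
fiji = r9_290x * 1.45            # assumed ~45% uplift for the 300-series flagship
next_gen_14nm = fiji * 1.45      # assumed ~45% uplift again for the 14nm part

print(round(fiji))                            # -> 239
print(round(next_gen_14nm))                   # -> 347
print(round(next_gen_14nm / base_7970, 2))    # -> 3.1 (x the original 7970)
```

Two back-to-back 45% jumps compound to about 2.1x, which is where the "2X faster than the 980" figure comes from if the 980 sits near the 290X on this index.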
Exciting times ahead for 4K gaming! I think because of these projections, 2017 is when 4K gaming will truly take off. It gives GPUs time to at least double in performance over the 980 and for 4K monitor prices to drop even further. Also, GSync vs. FreeSync will have 2 years to duke it out to see which one emerges as the industry standard, and we should start seeing HDMI 2.0 and DP 1.3 across all modern monitors by then.
A few points to ponder
1. Both AMD and Nvidia are unlikely to push die sizes beyond 300 - 350 sq mm for their first gen 14nm/16nm FINFET chips. 14/16nm FINFET is a more difficult node than 28nm for the entire industry when it comes to yields. Even Intel is having difficulty with 14nm yields so I don't see TSMC / Samsung having it any easier.
2. Given that AMD and Nvidia will be pushing the limits of 28nm production in terms of die size, expecting the first gen 14/16nm FINFET chips to be 45% faster than the 28nm flagship GPUs, which will measure 550 sq mm (with HBM) for AMD and 650 sq mm (with GDDR5) for Nvidia, might not be so easy.
3. If Nvidia gets super aggressive and pushes for a 4096 CC, 8 GPC, 384-bit memory bus at 8 GHz (because I doubt Nvidia will go for 512-bit for cost, yield and power efficiency reasons) maybe there is a chance that they get close to 2x the GM204 aka GTX 980 perf. But I doubt that will happen as the die size will be above 350 sq mm even at 14/16nm FINFET.
4. Both AMD and Nvidia might use the massive power efficiency gains from 14/16nm FINFET to push TDP down to 200w for first gen FINFET flagships.
5. I see the 2017 designs from Nvidia and AMD being very ambitious as the process would be mature and yield would be much better and their understanding of the process node also much better.
6. I suspect the following sequence of events -
1) A GM304 in Q2 2016 followed by a GP404 in Q4 2016 or Q1 2017.
2) A GM400 (big Pascal) with 1 TB/s bandwidth and 16 GB HBM by late Q3 or Q4 2017.
3) A R9 4xx flagship GPU in Q2 2016 with 8 GB / 16 GB HBM.
4) A R9 5xx big die flagship with 1 TB/s bandwidth and 16 GB HBM in H2 2017 with a next generation architecture. This next gen architecture could be the basis for the next gen PS5 and next gen Xbox.
So in summary a doubling of GTX 980 perf in 2016 is possible but not a surety. But I agree that 2017 would be a major leap in 4K graphics performance :thumbsup:
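The memory figure in point 3 falls out of simple bus-width-times-data-rate arithmetic. A minimal sketch, where the 384-bit @ 8 Gbps configuration is the hypothetical from point 3 and the 256-bit @ 7 Gbps numbers match the GTX 980:

```python
def peak_bandwidth_gbs(bus_width_bits, data_rate_gbps):
    """Peak GDDR5 bandwidth in GB/s: bus width in bytes times per-pin data rate."""
    return bus_width_bits / 8 * data_rate_gbps

print(peak_bandwidth_gbs(256, 7))   # GTX 980 today: 224.0 GB/s
print(peak_bandwidth_gbs(384, 8))   # hypothetical 384-bit @ 8 Gbps: 384.0 GB/s
```

That would be roughly a 70% bandwidth bump over the 980 without going to a 512-bit bus, which is why the 384-bit option keeps coming up in these speculations.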
Interesting thoughts. I believe doubling 980 performance in 2016 is possible because 28nm is so mature now. As you said, it looks like 2017 for a true 4K beast in a single-GPU card.
WRT point number 4, why do you think AMD is going to get the next gen console? That's pretty bold considering the current/future R&D pinch caused by poor company profits.
The reason I see AMD getting the next gen console contracts is x86 + world class GPU IP combined on a single chip. Intel's GPU IP is not as strong as AMD's, and Nvidia does not have an x86 license. I do not think Sony and Microsoft are going to change ISA after going to x86 primarily at the request of developers to simplify console game development.
AMD's next gen Zen architecture, which is being designed by Jim Keller and his team, holds promise, and Zen would be ready for next gen consoles. At 14/16nm FINFET, AMD can fit double the transistors they did at 28nm into a similar power budget of 100W, so AMD can easily fit 8 Zen cores clocked at 2 GHz into similar power as 8 Jaguar cores at 1.6 GHz at 28nm. AMD could also maintain architectural compatibility with the current gen GCN console GPUs while still bringing significant architectural and efficiency improvements. This matters a lot for Sony and Microsoft, because backwards compatibility with current PS4 and Xbox One games means the existing library will run on next gen consoles from launch.
I also foresee HBM as the system memory for next gen consoles, most likely 16 GB with 512 GB/s - 1 TB/s of bandwidth. If you add up what AMD can deliver for next gen consoles (x86 and backwards compatibility combined with a single-chip SOC with next gen HBM), it's really compelling. I don't think Intel or Nvidia can do the same.
I never said doubling the performance of a 980 in 2016 -- Q2 2017 as per my post, or roughly 2 years + 2 months gets us to Q2 2017.
The first gen 14/16nm FINFET chips in 2016 from Nvidia are Maxwell shrinks. If you are thinking that Nvidia will go to a ground-up new DX12 architecture (Pascal) with a new high-bandwidth, low-latency CPU-GPU/GPU-GPU interconnect called NVLink and a state-of-the-art high-bandwidth memory system on a bleeding edge, immature 14/16nm process which is likely to have yield struggles, then you are woefully wrong. The last time Nvidia did that was Fermi GF100 (new process, new architecture, GDDR5 memory system), and we all know how that turned out. Nvidia will not repeat such a mistake. The earliest debut for Pascal is likely to be Q4 2016 in the form of a GP404. Big die Pascal will follow 9-12 months later in Q3-Q4 2017. Also, your point #3 isn't likely, as NV already stated that they will use HBM for Pascal.
4K Gaming standings:
HD7970 = 112%
290X = 165%
-> Fiji = 165% x 1.45 = 239%
-> 14nm 2Q 2017 = 239% x 1.45 = 347%
Possible specs leaked:
395X Fiji x2
390(X) is Fiji
380(X) is 280(X) re-branded
http://www.3dcenter.org/news/amd-ra...-als-380x-fiji-als-390x-und-bermuda-als-395x2
I personally don't see why wait that long for 395X2. Maybe they want to save it to attack GM200 in case their 390X gets beaten by flagship GM200.
No, the 380X would be a 290X, not the 280X.
It would also have lower TDP and higher clocks.
The 360(X) are slated to come out earlier than the flagships, around March. The Computex timeframe is still the estimate for the 390/390X. 395X2 is for Christmas.
I personally don't see why wait that long for 395X2. Maybe they want to save it to attack GM200 in case their 390X gets beaten by flagship GM200.
AMD is using what it calls a 2.5D-IC silicon interposer, which means that there will be two separate kinds of chips on the same silicon interposer and package substrate. Fiji in 28nm will be one of these chips, and the second batch of chips will be the High Bandwidth Memory (HBM) designs. However, there is a catch with AMD's approach.
From what we've learned, Fiji is limited to 4GB of memory. With the current memory technology, the GPU would simply be too big to put on an interposer and package. The interposer should be viewed as a stack of conductors that lets the GPU and HBM memory communicate at much higher speeds than ever before. The interposer then goes into the package, which goes on the PCB. You could say the interposer is the middle-man that makes things faster.
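The 4GB figure follows directly from first-generation HBM stack arithmetic. A small sketch using the publicly quoted HBM1 specs (1024-bit interface and 1 GB capacity per stack at ~1 Gbps per pin; these numbers come from HBM's public spec, not from the article itself):

```python
# First-generation HBM: each stack has a 1024-bit interface at ~1 Gbps per pin
# and holds 1 GB. Four stacks around the GPU on the interposer is why Fiji
# tops out at 4 GB and ~512 GB/s.
stacks = 4
bw_per_stack_gbs = 1024 / 8 * 1.0   # bits / 8 * Gbps per pin = 128.0 GB/s
capacity_per_stack_gb = 1

print(stacks * bw_per_stack_gbs)     # -> 512.0 GB/s total bandwidth
print(stacks * capacity_per_stack_gb)  # -> 4 GB total capacity
```

So the capacity cap isn't an interposer quirk per se: adding more memory means adding more stacks, and there is only so much interposer area around the GPU die.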