Cookie Monster
- May 7, 2005
I really wish they'd improve their blower reference cooler. Aftermarket cards are all great, but I'm one of those people who needs a blower-type cooler.
-- Roadmap fixed, codenames removed (since they always change) --
28nm TSMC, Q4 2013: High-end GPU, VI 1.0
28nm GlobalFoundries, Q4 2014: Midrange and low-end GPUs, VI 2.0
28nm GlobalFoundries, Q2 2015: High-end GPU, VI 2.0
20nm GlobalFoundries, Q4 2015: Midrange and low-end GPUs, PI 1.0
20nm GlobalFoundries, Q2 2016: High-end GPU, PI 1.0
14nm GlobalFoundries, Q4 2016: Midrange and low-end GPUs, PI 2.0
14nm GlobalFoundries, Q2 2017: High-end GPU, PI 2.0
Nope, the new card will feature an enhanced GCN architecture; it won't be just a clock speed and voltage increase... in my opinion. 295X sounds about right, or a 290X GHz+ would be fine too.
Chances are it will be nothing more than an overclocked 290X with an AIO water cooler slapped on.
20nm will be short-lived if they're launching it late next year. I mean, what's the point? People will know it's going to be made obsolete by the 14/16nm parts coming not long after, since 20nm for dGPUs is just so delayed. The gap will probably be less than a year. I wouldn't pay top dollar for 20nm stuff.
2. 2015 will indeed be the year of high end 20 nm GPUs. In other words: 20 nm Maxwell is coming.
20 nm GPUs died a quiet death in 2014.
20nm in Q4 2015? That's about half a year before Intel introduces 10nm with the next-gen successor of Tri-Gate (a new material will replace silicon, probably germanium). With the advantage of two important technologies and two nodes, Intel's IGPs might eat into the performance territory of dGPUs all the way up to the high end, if they launch Gen10 with GT5 Cannonlake. And closer in time, 14nm Broadwell could disrupt the midrange market if it competes for a year against 28nm.
Is it 2015 and no one told me??
We are still in 2014, Maxwell isn't coming (at least not on 20nm), and 28nm made a death rattle with the 295X2 but will draw its final breath with a dual GK110, if that ever materializes.
2014 is going to be a very boring year in hardware, GPU- and CPU-wise.
So, maybe a test run for HBM memory? Slap it on Hawaii, see how it goes, and tweak it for 20nm. There was also that talk of Hawaii having 3072 SPs, so maybe a fully enabled chip?
If it's affordable, maybe. Iris is too pricey.
Keep your uninformed (Intel) PR spin to yourself. 14nm for desktop/laptop isn't out yet, and you'll be lucky to see 10nm PC parts anytime before 1H 2017. Funny how you continually push Intel's (Gen X) graphics as the next big thing when in fact they'll always be fourth-rate, behind Nvidia/AMD/ARM. D:
Maybe a binned, high-clock 290? R9 290XTX would be my guess.
You keep forgetting other important parts of GPU performance. It would be incredibly complex and pricey to add the memory for a GPU like GK100 to an APU. I think we're much further into the future before that will happen.

Iris for LGA doesn't exist yet, so you can't know that. And when they shrink to 14 and 10nm, Intel can put 2x and 4x as many EUs in their SKUs without a die-area penalty (so the same 1xx mm² as today), and they could offer a more expensive GTn+1 with a 2xx mm² die. At 10nm, Intel will be almost two nodes ahead of TSMC's 16nm in terms of density, so they could fit around three times as many EUs in the same die area, meaning an Intel Core CPU + (16nm equivalent of a) GTX Titan (550mm²) APU could fit within just ~230mm², with better power and performance too.
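The ~230mm² claim above is easy to sanity-check as back-of-envelope arithmetic. A minimal sketch, assuming "almost two nodes ahead" means roughly 3x transistor density, and assuming ~50mm² for the CPU portion of the APU (my guess, not a figure from the post):

```python
# Back-of-envelope version of the density argument.
# Assumptions: ~3x density gain from the node lead, and
# ~50 mm^2 for the CPU portion (both illustrative guesses).

def shrunk_area(area_mm2: float, density_gain: float) -> float:
    """Die area needed for the same design at higher transistor density."""
    return area_mm2 / density_gain

titan_class_gpu = shrunk_area(550.0, 3.0)  # 550 mm^2 GPU at ~3x density
apu_total = titan_class_gpu + 50.0         # plus the assumed CPU portion
print(round(titan_class_gpu))  # -> 183 (mm^2)
print(round(apu_total))        # -> 233 (mm^2), in the ~230 mm^2 ballpark
```

Under those assumed inputs, the arithmetic lands close to the post's ~230mm² figure; the result is only as good as the 3x density and 50mm² guesses.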
The reason for the potential delay of the high-end GPU is that peak full volume for HBM isn't expected till 1H 2015, so the smaller designs, which use fewer HBM stacks, are given priority.

First, you have no high-end AMD GPU from now until Q2 2015! How is AMD going to compete with a supposed 28nm GM204 that's slated for the end of this year? Or are they going to drop prices on the R9 290X to stay competitive until Q2 2015?
Even on water, how much gain would that net? Beat an OC'd 780 Ti on air?
AMD would need a Hawaii respin like Nvidia did with GF110: make some tweaks, reduce leakage, bump clocks, add 7GHz memory, and call it the 295X.
Crappy Elpida RAM on reference cards is a different issue.
Has anyone overclocked only the memory to see if the 290X continues to scale with increased memory bandwidth? Save me the attempt.
Only tessellation performance improves, typically. If you're not tessellation-bound in the game you're playing you'll see very small (~5%) gains. This is on a stock clock (947MHz) 290, at 1080p and 1440p.
Correct me if I'm wrong, but it would only scale if it was bandwidth-starved in the first place. If it isn't, you aren't going to see much improvement.
While we're at it, it would also help if somebody could provide some data on how Nvidia cards scale with memory clock in games. I don't have one, and when I did, I only tested Valley scaling, which loves any memory OC anyway.
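Whether a memory-only OC helps depends on being bandwidth-bound, as noted above, but the raw numbers are easy to compute. A quick sketch using the 290X's stock specs (5 Gbps effective GDDR5 on a 512-bit bus); the 5.6 Gbps OC is just an assumed example, not a measured result:

```python
# Theoretical GDDR5 bandwidth: per-pin data rate x bus width,
# divided by 8 bits per byte. The 5.6 Gbps overclock below is
# an illustrative assumption.

def gddr5_bandwidth_gbs(effective_gbps: float, bus_bits: int) -> float:
    """Theoretical memory bandwidth in GB/s."""
    return effective_gbps * bus_bits / 8

stock = gddr5_bandwidth_gbs(5.0, 512)
oc = gddr5_bandwidth_gbs(5.6, 512)
print(stock)                    # -> 320.0 GB/s at stock
print(round(oc, 1))             # -> 358.4 GB/s with the example OC
print(f"{oc / stock - 1:.0%}")  # -> 12% more raw bandwidth
```

That extra 12% of theoretical bandwidth only turns into frame rate if the workload was bandwidth-limited to begin with, which is exactly what the memory-only OC test would reveal.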
HBM is a graphics and networking spec. Carrizo seems like a good candidate for this technology.