[Benchlife] R9 480 (Polaris 10 >100w), R9


RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
To add to my last post, the exact same mistakes were made during the HD7970/GTX680 launch, with people focusing on 1080p and thus automatically penalizing the 7970/680 in non-GPU-intensive/CPU-limited games at their launch dates. The data is all there.

Rating - 1680x1050 4xAA / 16xAF
580 = 100%
7970 = 119%

Rating - 1920x1080 4xAA / 16xAF
580 = 100%
7970 = 122%

Rating - 2560x1600 4xAA / 16xAF
580 = 100%
7970 = 133%

It took another 2-3 years for modern PC games to come out that didn't bottleneck the 7970. By early 2015, the R9 285 was 44% faster than the 580, and the R9 280X (7970GHz) was 71% faster than the 580 at 1080p.

http://www.computerbase.de/2015-05/...h/2/#diagramm-rating-1920-1080-hohe-qualitaet

The point is that if the test suite is littered with old games, and we are using 1080p 60Hz resolutions, we are double-penalizing future GPUs and flagship GPUs -- we are CPU-bottlenecking them at 1080p and we are using old games/engines that don't benefit from newer architectures.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Most engines are moving toward some form of GI, which is many times more expensive than traditional lighting, and then we multiply that with some volumetric lighting and voila, 4K isn't even moving the Richter scale anymore and 1440p is playable where it was smooth.

That's a very short-term outlook. GPUs will continue improving in performance by 50-70% every 2-2.5 years. Last year we got the 980Ti/Fury X. For simplicity's sake, let's assume Big Pascal beats the 980Ti by 70% at 4K in June 2017, and Big Volta then beats Big Pascal by 70% at 4K in June 2019. That means by September 2020, a $350 next-gen card should be 1.7 x 1.7 = ~2.9X faster than a 980Ti.
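For anyone who wants to sanity-check the compounding, here's a quick back-of-envelope sketch; the 70% per-generation gain and the release dates are this post's assumptions, not measurements:

```python
# Back-of-envelope compounding of GPU performance, using the assumptions above.
per_gen_gain = 1.70   # assumed ~70% gain per flagship generation at 4K
generations = 2       # 980Ti -> Big Pascal (2017) -> Big Volta (2019)

flagship_vs_980ti = per_gen_gain ** generations
print(f"Big Volta vs 980Ti: ~{flagship_vs_980ti:.2f}x")  # ~2.89x

# If a ~$350 card in late 2020 roughly matches the 2019 flagship,
# it lands near the same ~2.9x multiple over a 980Ti.
```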

Even though the overall GPU volume unit sales are declining or at best stagnant, the $350+ GPU market segment is growing.



These gamers who are buying 2015-2016 $350+ GPUs are likely going to be looking to get a 1440p-4K monitor, and later FreeSync/G-Sync, 120-144Hz 1440p, HDR, etc. Why? Because the types of gamers who are still interested in desktop PC gaming are spending $$$. They want cutting-edge tech. It's why Intel is posting record i7 and record K-series CPU sales quarter after quarter. Think about it -- Intel has sold more i7s per quarter in the last 2-3 quarters than at any point in its history, even during the i7 920-930, i7 860, i7 2600K, and i7 3770K generations, and we know how insanely popular those CPUs were.

Even on this forum, many people have ditched 1920x1080 in favour of 2560x1440, 2560x1600, 3440x1440, 2560x1080, 4K, etc.

Think about this, by February 2015, Steam already had 125M users.
https://www.vg247.com/2015/02/24/steam-has-over-125-million-active-users-8-9m-concurrent-peak/

It wouldn't be unrealistic to extrapolate that by April 2016, there are already 140M Steam users, if not more.

If 7.5M gamers had 970/290X or above by February 2016, let's assume that base grew to 8M by now. Let's assume 90% of these gamers use Steam => 90% * 8M = 7.2M.

7.2M / 140M Steam userbase = 5.1%

Now let's look at Steam gamers running above 1920x1200: 2.171%.

http://store.steampowered.com/hwsurvey

That means potentially a large number of PC gamers with an R9 290X/970 or above are already gaming above 1920x1080. Of course, not all of them are using monitors above 1920x1200, but the point I am trying to make is that the average Steam PC with a 1920x1200-or-below monitor should not be compared to the average gaming system with an R9 290X/970. Therefore, when we start discussing Fury X/980Ti levels of GPU power alongside 1080p gaming benchmarks, that's not the proper context for how I imagine a lot of R9 290X/970-and-above gamers are actually gaming.
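A quick sketch of the estimate above; every input (the 8M high-end GPU base, the 90% Steam share, the 140M user count) is an assumption from this post, not hard data:

```python
# Rough estimate of how many Steam users own a 970/290X-class GPU or better.
high_end_gpu_owners = 8_000_000      # assumed growth from AMD's 7.5M figure (Feb 2016)
steam_usage_rate = 0.90              # assumed share of those owners who use Steam
steam_users = 140_000_000            # extrapolated from 125M active users in Feb 2015

high_end_on_steam = high_end_gpu_owners * steam_usage_rate   # 7.2M
share_of_steam = high_end_on_steam / steam_users             # ~5.1%
above_1920x1200 = 0.02171                                    # Steam hardware survey figure

print(f"970/290X+ share of Steam: {share_of_steam:.1%}")      # 5.1%
print(f"Steam users above 1920x1200: {above_1920x1200:.1%}")  # 2.2%
```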

Getting revved up for 4K was a waste; 1440p not so much, but it's not as if games are going to stop evolving, especially with the next gen just released. Though Vega and Volta may be able to clean up 1440p for some time, developers may start to use cleaner real-time lighting methods and then you're back in the hole again.

So you are saying that from June 2015 to June 2016 the Fury X and 980Ti were great 1440p cards, but the minute Polaris 10 comes out they suddenly become 1080p cards, that very day?

I think you're going to find 1080p being the goalpost resolution for the foreseeable future - it's best to just write 4K off for the time being.

If the 980Ti becomes only good for 1080p in 2016, that will mostly be the result of horrendously optimized console-to-PC ports like Hitman/Gears of War, or of a wave of true next-generation titles that blow us away graphically (a la the next Crysis 1/3 or Metro 2033). Alternatively, a barrage of Unreal Engine 4 games that run like crap on almost everything (ARK: Survival Evolved).

As of February 2016, AMD estimated that there were only 7.5M PC gamers with GTX970/290X and above.



I would be surprised if it has even reached 10M by now. If Fury X/980Ti levels of GPU horsepower become the norm for 1080p gaming in 2016, then it spells doom for the PC gaming industry, unless you think the remaining 120M+ Steam users are all going to be upgrading?

What's going to happen with new-gen cards is that Fury X/390X/980/980Ti levels of performance will become more affordable. That doesn't mean that the minute that happens in June 2016, the Fury X/980Ti are suddenly 1080p-level cards.
 
Last edited:

tajoh111

Senior member
Mar 28, 2005
305
322
136
If Polaris 10 is a 120W ACP card, having 2.5 times the performance/watt means that it is faster than even the Fury X.

The problem with this is most companies inflate their performance-per-watt estimates. E.g., AMD said the Nano doubled performance per watt over the 290X. It didn't.

From AnandTech's and TechPowerUp's benchmarks, it's more in the 66% range. Not bad by any means, but that's missing 2x by a decent margin.

I think we are more likely to see 2x Tonga performance times 0.66 (180W Tonga vs. 120W Polaris 10), which puts it more in line with 390-like performance.
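A minimal sketch of that estimate, assuming ~2x Tonga's perf/watt (rather than AMD's 2.5x claim) and the wattages quoted above:

```python
# If Polaris 10 delivers ~2x Tonga's perf/watt at a 120W budget,
# raw performance scales by the power ratio as well.
tonga_power = 180.0          # W, Tonga board power used in the post
polaris10_power = 120.0      # W, rumoured Polaris 10 board power
perf_per_watt_gain = 2.0     # assumed realistic gain (vs. the 2.5x marketing claim)

perf_vs_tonga = perf_per_watt_gain * (polaris10_power / tonga_power)
print(f"Estimated Polaris 10 performance: ~{perf_vs_tonga:.2f}x Tonga")  # ~1.33x, roughly R9 390 territory
```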
 

Qwertilot

Golden Member
Nov 28, 2013
1,604
257
126
Yup. The current top-end cards actually seem to hit something like '960 @ 1080p'-equivalent performance at 4K, but the sort of people who buy cards that expensive/high-TDP want more than that, of course!

Big Pascal/Vega - with the bandwidth from HBM2 and the die shrink? I really can't see how they won't chew up gaming at 4K very easily. That - and maybe VR - is definitely how they should be judged.

Hopefully GP104's bandwidth will hold up at 4K. I can't see Polaris doing it, although small(er!) Vega definitely has a chance to get ahead at 4K when it arrives with HBM2.
 

JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
The R7 470 is what I'm looking forward to the most. Sub-75W and performance on par with a GTX 960? Probably a pipe dream, but sub-100W is quite possible.

Amazing. It's been so long since a node shrink that a lot of people have forgotten what that even means.

You're setting your standards way too low. AMD previewed a Polaris 11 chip with performance matching a GTX 950 at 80W total system power. That means the GPU was probably using about 40-45W. Ramp up the clocks until you reach a TDP of 75W (the maximum a desktop card can draw without needing a PCIe connector) and they should easily be able to beat the GTX 960. Most likely we'll see performance at or slightly below the level of an R9 380X. That would be a roughly 2.5x improvement in perf/watt over Tonga, as promised.
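A small sketch of that reasoning; the ~190W board power for the R9 380X and the 40-45W GPU estimate are assumptions layered on top of the demo numbers:

```python
# Polaris 11 demo: GTX 950-class performance at ~80W total system power,
# implying roughly 40-45W for the GPU itself.
polaris11_demo_gpu_power = 42.5   # W, midpoint of the 40-45W estimate
pcie_slot_limit = 75.0            # W, max for a card without a PCIe power connector
r9_380x_power = 190.0             # W, typical Tonga (R9 380X) board power (assumption)

power_headroom = pcie_slot_limit / polaris11_demo_gpu_power
print(f"Clock/power headroom vs the demo: ~{power_headroom:.2f}x")  # ~1.76x

# If a clocked-up 75W Polaris part roughly matches a ~190W 380X:
perf_per_watt_gain = r9_380x_power / pcie_slot_limit
print(f"Implied perf/watt gain over Tonga: ~{perf_per_watt_gain:.1f}x")  # ~2.5x, as promised
```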
 

el etro

Golden Member
Jul 21, 2013
1,581
14
81
The problem with this is most companies inflate their performance-per-watt estimates. E.g., AMD said the Nano doubled performance per watt over the 290X. It didn't.

From AnandTech's and TechPowerUp's benchmarks, it's more in the 66% range. Not bad by any means, but that's missing 2x by a decent margin.

I think we are more likely to see 2x Tonga performance times 0.66 (180W Tonga vs. 120W Polaris 10), which puts it more in line with 390-like performance.

I believe not, very much so. 14nm is a node and a half ahead of 28nm, plus FinFETs; the process alone is a huge jump. And GCN4 is a genuinely new architecture with tons of new optimizations, so the suggested perf/watt jump really is huge. It's not hard at all to deliver 2.5x perf/W over Fury X. I expect Pascal to deliver the claimed 2x perf/W jump over Maxwell, and I expect it to double performance at every TDP point too.


The demonstrations of Polaris 11 (same FPS at 60% of the total system power) and Polaris 10 (holding 60 FPS in Hitman at 1440p, which is hard even for the Fury X to maintain) show that the performance improvements of FinFET GPUs will be great.
 

omek

Member
Nov 18, 2007
137
0
0
I would be surprised if it has even reached 10M by now. If Fury X/980Ti levels of GPU horsepower become the norm for 1080p gaming in 2016, then it spells doom for the PC gaming industry, unless you think the remaining 120M+ Steam users are all going to be upgrading?

It's happened before on a smaller scale and generally every time there is a new wave of consoles, developers utilize the increased power and it translates into PC gaming.

It's reminiscent of NVIDIA's GPGPU initiative with Tesla/GeForce 8, which saw a very large performance increase and a complete architectural rework; the unified shader architecture was a full-on reformat. It was one of NVIDIA's most renowned architectures and a design with longevity, released right after the PS3 and 360, mind you. It was very different from the GeForce 7 series. People were going GeForce 8 (8800 GT/GTX) and R700 in droves...
I'm also willing to bet that more than half of those Steam users are going to be more tolerant of sub-60fps, even sub-30fps, and end up living with it.

What's going to happen with new-gen cards is that Fury X/390X/980/980Ti levels of performance will become more affordable. That doesn't mean that the minute that happens in June 2016, the Fury X/980Ti are suddenly 1080p-level cards.

Definitely not implying that, but I'm reminded that games do, eventually, evolve. It's been a long (emphasis on long) time since any game has really punished our hardware, and now that it's happening via lighting in newer triple-A titles, our wires are crossing and people are shorting out -- perpetuated by 4K's light, which was getting brighter and looking viable. Because it's been such a long time, I think people's expectations fell dormant; they became accustomed to the increase in GPU power without the increase in demand.

I'm rambling, but graphics technology generally does advance quicker than the demand for it. Still, this particular form of lighting (compounding everything else being rendered on top of it) could be executed at the umpteenth power if needed, which would absolutely cripple anything current, including the products in the immediate pipeline. GI is essentially ray tracing at its roots, and these early forms are severely cut-down implementations of it with low sampling. Everything with a large budget is doing some type of global illumination atm.
http://www.thepolygoners.com/tutorials/GIIntro/GIIntro.htm
My main point is that the PS4 & XBO's increase in power prompted developers to utilize that power through a better lighting model, and this movement is slowly filtering beyond the consoles and throughout the industry. It's irritating, but the consoles have most of the control over the industry's direction as far as gaming is concerned. There are other factors which compound the "decrease" in performance, like heavier use of compute for physics, lighting, etc.

I do find it funny that a lot of people assume the freight train which carries no load will perpetually continue to speed up, achieving warp 4K/8K (whatever). A load eventually gets put on, the train slows down and people are up in arms: "Who the hell put a load on my train which was designed to haul crap?"
 
Last edited:

MrTeal

Diamond Member
Dec 7, 2003
3,586
1,746
136
Nice find :thumbsup:

28nm with HBM2 is interesting. Trying to think where this could be used in the upcoming products. Do you think Fiji with HBM2 would be released to hold over before Vega in 2017? Or maybe Bristol Ridge with HBM2? Could consoles use this additional bandwidth to maybe reach 4K resolutions? Northwest Logic said they have 7 customer designs in process but did not indicate what size.

I'm not so sure that is dealing with the GPU itself. It sounds like the PHY, which is at the bottom of the HBM2 stack. In that case, there's no reason a 14nm GPU couldn't use a stack with a 28nm or 14nm PHY.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
"AMD Radeon 400 Series Polaris GPUs Land Major Apple Design Wins – Perf Per Watt & Perf Per Dollar Strides Pay Off

"From what we’ve been hearing Polaris is no exception. In fact our sources have confirmed that the major OEM design win that we had reported on last year is indeed for Apple.

The Sunnyvale, California based chip maker secured wins for both of its upcoming Radeon 400 series 14nm FinFET graphics chips, Polaris 10 and Polaris 11. Previously known as “Ellesmere” and “Baffin”, both of which are Arctic Islands. The chips have since been renamed to Polaris 10 and 11 respectively, in line with AMD’s newly adopted Astronomy based architectural code naming scheme which Koduri had instated after the Radeon Technologies Group was established last year.

The Polaris 10 and 11 chips will go into new desktops and notebooks from Apple, which the company plans to bring to market later this year. And although these Apple design wins may not be significant volume contributors they are very profitable."


http://wccftech.com/amd-polaris-radeon-400-series-gpus-sweep-apple-design-wins/#ixzz46F8NScJv

"Earlier today Siri confirmed WWDC 2016, with the event set to kick off on June 13 through 17.

http://wccftech.com/apple-officially-confirms-wwdc-2016-launches-ticket-lottery/#ixzz46FAxNyvU
 
Feb 19, 2009
10,457
10
76
^ Pretty reliable rumor really.

Apple ships a ton of notebooks and their Mac Pro lineup is very popular. That's my concern for Polaris 11 and 10 on PC: that AMD is dedicating all the fully good chips to Apple. That's the worst situation for PC gamers.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
^ Pretty reliable rumor really.

Apple ships a ton of notebooks and their Mac Pro lineup is very popular. That's my concern for Polaris 11 and 10 on PC: that AMD is dedicating all the fully good chips to Apple. That's the worst situation for PC gamers.

The entire rMBP, iMac and Mac Pro tower are all outdated and are all going to see major updates in 2016. That means there is a chance AMD will lock all 3 of them in. Haswell has been in Apple computers for a long time now, and based on anecdotal evidence, it seems there are a decent number of holdouts waiting for Skylake. The Mac Pro hasn't been updated in almost 3 years, so that one should have crazy high demand. I haven't figured out what they would use for the Mac Pro though, since Polaris 10 isn't a FirePro card yet and the Radeon Pro Duo uses too much power for that tower -- not to mention it is limited to 4GB of VRAM. Hmm... the Mac Pro is actually a prime candidate for Big Pascal or Vega 10 with 16GB of HBM2, but those options are way out on the horizon. Maybe they won't update the Mac Pro until December 2016?

This part is interesting:

"In a statement released earlier this year, AMD claimed that the new 14 nm Polaris GPUs will offer over double the performance per watt of their 28 nm predecessors. This news also confirmed AMD's use of Global Foundries' 14 nm FinFET process, rather than TSMC's 16 nm process, which Nvidia will use. While AMD confirmed the use of TSMC for its higher power product offerings, any products developed from that process node would be destined for the Mac Pro only, as Apple has traditionally used mobile GPUs for its notebook and iMac product lines."
http://www.macrumors.com/2016/04/07/new-graphics-amd-nvidia/

I am trying to recall when AMD said they would manufacture Polaris at GloFo and Vega at TSMC, because that's what that paragraph is insinuating. I thought both were being made at GloFo?

BTW, on Apple forums, the top-voted posts are already all crapping on AMD's 14nm GPUs... suggesting they'll be delayed to 2020, or that they'll barely match the performance of "PC graphics" made in 2014.

http://www.macrumors.com/2016/04/07/new-graphics-amd-nvidia/
 
Last edited:
Feb 19, 2009
10,457
10
76
AMD has never said they will dual-source Polaris or Vega; these are forum rumors.

Polaris is 14nm FF. AMD has not said anything about Vega besides HBM2 and higher performance and perf/W, with the roadmap placing it at the end of 2016.

AMD's win with Apple is all but assured. Apple loves OpenCL and hates proprietary tech (unless it's their own!!).
 

tajoh111

Senior member
Mar 28, 2005
305
322
136
I believe not, very much so. 14nm is a node and a half ahead of 28nm, plus FinFETs; the process alone is a huge jump. And GCN4 is a genuinely new architecture with tons of new optimizations, so the suggested perf/watt jump really is huge. It's not hard at all to deliver 2.5x perf/W over Fury X. I expect Pascal to deliver the claimed 2x perf/W jump over Maxwell, and I expect it to double performance at every TDP point too.


The demonstrations of Polaris 11 (same FPS at 60% of the total system power) and Polaris 10 (holding 60 FPS in Hitman at 1440p, which is hard even for the Fury X to maintain) show that the performance improvements of FinFET GPUs will be great.

The settings were never mentioned for Hitman, just the resolution and FPS. AMD has said on the record that they didn't mention settings on purpose, to hide the performance of Polaris 10.

And I wouldn't be too sure about 14nm being better than 16nm.

When it was discovered that there were two different chips in the iPhone 6s, people ran tests to see which was the better chip.

The TSMC chip performed better.

https://www.reddit.com/r/iphone/com...esults_iphone_6s_samsung_14nm_vs_tsmc/cvpns61

http://m.mydrivers.com/newsview/449771.html?ref=

http://arstechnica.com/apple/2015/10/samsung-vs-tsmc-comparing-the-battery-life-of-two-apple-a9s/

In both cases they used first-generation FinFET. They didn't use 16nm FinFET Plus or 14LPP, but it means 16nm could be better than 14nm. Samsung hasn't been making big, high-performance chips, while TSMC has been making them for a while. Being smaller and having higher transistor density is not always better; if that were the case, Hawaii would have been more efficient.
 
Feb 19, 2009
10,457
10
76

Backwards compatibility bro!

That is what everyone should have seen coming as soon as consoles went x86.

That's a custom Polaris, though 4 CUs short: 36 vs. 40 for the normal Polaris 10. Being semi-custom, they can put in as many CUs as they need.

Zen isn't ready for prime time yet, and AMD is going to need it for server markets first. It makes sense to use a shrink of Jaguar, clocked higher; that's enough CPU power for a low-level console API.

I expect this APU, with a console API & HSA techniques, to be able to handle 4K games at 30 fps, just as the current PS4 targets 1080p at 30/60 fps.

We're looking at a quick launch window here since they don't need to wait for Zen at all.

Now I am even more worried about AMD's volume on 14nm FF for PC gamers with Sony & Nintendo having insane volumes on their consoles.
 
Last edited:

showb1z

Senior member
Dec 30, 2010
462
53
91
A bigger bump on the CPU side would have been nice. It's already clocked at 1.75GHz in the Xbox; this is only a 20% increase on top of that (~2.1GHz).
Some games are already CPU limited so doubling the GPU won't help there at all.
 

BlitzWulf

Member
Mar 3, 2016
165
73
101
"Backwards compatibility bro!

That is what everyone should have seen coming as soon as consoles went x86."



"Now I am even more worried about AMD's volume on 14nm FF for PC gamers with Sony & Nintendo having insane volumes on their consoles. "

I feel bad for all those PS4/XBONE owners who aren't used to seeing their $400 purchase become obsolete in 2 years.

Indeed it is troubling; we may see another launch like Fiji if AMD isn't careful.
 

dzoni2k2

Member
Sep 30, 2009
153
198
116
I feel bad for all those PS4/XBONE owners who aren't used to seeing their $400 purchase become obsolete in 2 years.

Indeed it is troubling; we may see another launch like Fiji if AMD isn't careful.

If AMD isn't careful? What does AMD have to do with Sony deciding to refresh a PS?
 

Qwertilot

Golden Member
Nov 28, 2013
1,604
257
126
Potentially having their volume eaten up producing console chips, meaning horrible supply to the consumer market.

Not that AMD will mind too much, as they'll be very happy to have some reasonably secure revenue!

They can't bump the CPU because of that backwards-compatibility thing. They won't be rendering the current consoles obsolete for a while either, I imagine. Sell them together for a while, then sunset the current ones in a few years' time.

These upgraded ones probably have to last until the next die shrink.
 

R0H1T

Platinum Member
Jan 12, 2013
2,582
162
106
The settings were never mentioned for Hitman, just the resolution and FPS. AMD has said on the record that they didn't mention settings on purpose, to hide the performance of Polaris 10.

And I wouldn't be too sure about 14nm being better than 16nm.

When it was discovered that there were two different chips in the iPhone 6s, people ran tests to see which was the better chip.

The TSMC chip performed better.

https://www.reddit.com/r/iphone/com...esults_iphone_6s_samsung_14nm_vs_tsmc/cvpns61

http://m.mydrivers.com/newsview/449771.html?ref=

http://arstechnica.com/apple/2015/10/samsung-vs-tsmc-comparing-the-battery-life-of-two-apple-a9s/

In both cases they used first-generation FinFET. They didn't use 16nm FinFET Plus or 14LPP, but it means 16nm could be better than 14nm. Samsung hasn't been making big, high-performance chips, while TSMC has been making them for a while. Being smaller and having higher transistor density is not always better; if that were the case, Hawaii would have been more efficient.
That can go both ways, depending on the review site of your choice ~ http://www.tomshardware.com/news/iphone-6s-a9-samsung-vs-tsmc,30306.html

As for Samsung's 14nm, it was developed alongside the GF & IBM teams, probably to manufacture POWER CPUs as well, so I'd be less skeptical about its process characteristics, especially when comparing it to 16nm FF+ :thumbsup:
 

Glo.

Diamond Member
Apr 25, 2015
5,763
4,667
136
The settings were never mentioned for Hitman, just the resolution and FPS. AMD has said on the record that they didn't mention settings on purpose, to hide the performance of Polaris 10.

And I wouldn't be too sure about 14nm being better than 16nm.

When it was discovered that there were two different chips in the iPhone 6s, people ran tests to see which was the better chip.

The TSMC chip performed better.

https://www.reddit.com/r/iphone/com...esults_iphone_6s_samsung_14nm_vs_tsmc/cvpns61

http://m.mydrivers.com/newsview/449771.html?ref=

http://arstechnica.com/apple/2015/10/samsung-vs-tsmc-comparing-the-battery-life-of-two-apple-a9s/

In both cases they used first-generation FinFET. They didn't use 16nm FinFET Plus or 14LPP, but it means 16nm could be better than 14nm. Samsung hasn't been making big, high-performance chips, while TSMC has been making them for a while. Being smaller and having higher transistor density is not always better; if that were the case, Hawaii would have been more efficient.
iPhone 6S and SE chips are made on Samsung's LPE process, not LPP. Apple had to buy volume from TSMC; that's why they went dual-sourcing with the A9, because the A9X alone was not enough. That was the only reason. If Apple had been able to handle it all without TSMC, they would only have bought chips from Samsung.
 

Mahigan

Senior member
Aug 22, 2015
573
0
0
Potentially having their volume eaten up producing console chips, meaning horrible supply to the consumer market.

Not that AMD will mind too much, as they'll be very happy to have some reasonably secure revenue!

They can't bump the CPU because of that backwards-compatibility thing. They won't be rendering the current consoles obsolete for a while either, I imagine. Sell them together for a while, then sunset the current ones in a few years' time.

These upgraded ones probably have to last until the next die shrink.

The nice thing for AMD is that they now have their own private foundry (GlobalFoundries) licensing 14LPP technology from Samsung. AMD doesn't have to compete with Apple etc. over at TSMC for wafers like NVIDIA does. AMD is also using Samsung, so they have two manufacturing foundries at their disposal.

XBox One and PS4 chips are still being made over at TSMC.

So this gives AMD the ability to manufacture 14LPP chips on a massive scale. The new consoles won't be eating up their capacity much due to this added flexibility.

I'm actually more worried about NVIDIA being able to produce enough for both the consumer and HPC markets, considering the large HPC deals they've signed. AMD's recent HPC deals are also at TSMC because they're based on 28nm planar chips.

NVIDIA is more constrained than AMD.
 

R0H1T

Platinum Member
Jan 12, 2013
2,582
162
106
Backwards compatibility bro!

That is what everyone should have seen coming as soon as consoles went x86.

That's a custom Polaris, though 4 CUs short: 36 vs. 40 for the normal Polaris 10. Being semi-custom, they can put in as many CUs as they need.

Zen isn't ready for prime time yet, and AMD is going to need it for server markets first. It makes sense to use a shrink of Jaguar, clocked higher; that's enough CPU power for a low-level console API.

I expect this APU, with a console API & HSA techniques, to be able to handle 4K games at 30 fps, just as the current PS4 targets 1080p at 30/60 fps.

We're looking at a quick launch window here since they don't need to wait for Zen at all.

Now I am even more worried about AMD's volume on 14nm FF for PC gamers with Sony & Nintendo having insane volumes on their consoles.
I'm hoping it's Puma+ cores & not the first-gen Jaguar. The IPC of the second-gen Jaguar was higher, IIRC something like 5~10%, so it should be the logical choice for the PS4.5 or PS4K, as the case may be.

I think Samsung has that covered, they'll be the backup to GF's 14nm & will likely work with the IBM foundry team (now a part of GF) for the next few node shrinks as well.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
Last edited:
Feb 19, 2009
10,457
10
76
Potentially having their volume eaten up producing console chips, meaning horrible supply to the consumer market.

Not that AMD will mind too much, as they'll be very happy to have some reasonably secure revenue!

They can't bump the CPU because of that backwards-compatibility thing. They won't be rendering the current consoles obsolete for a while either, I imagine. Sell them together for a while, then sunset the current ones in a few years' time.

These upgraded ones probably have to last until the next die shrink.

It's not an issue of not being able to refresh the CPU due to backwards compatibility; once you're on x86, you're compatible. That's why Skylake x86 CPUs are still capable of running DOS-era x86 games.

The instruction set is already backwards compatible, and if the OS itself (Win 10 with DX12 for the Xbone) is the same, games will be just fine.

Zen isn't ready. If they want to launch the NX soon this year, and the PS4K later this year, Zen just isn't in that time frame, because production of the APU would have to be happening RIGHT NOW to generate enough volume for a launch 3-6 months later.

MS needs to refresh if Sony is doing it and if the NX smashes the Xbone out of the park based on Polaris vs. the 7770, haha. So they may be late to this refresh; in 2017 they can have a Zen + Polaris 10 APU for their Xbone Next.
 