Ashes of the Singularity User Benchmarks Thread

Page 8
Feb 19, 2009
10,457
10
76
Looking over the documentation, I can't find anything that says AMD's 8 queues per ACE are submitted in parallel. The ACE units themselves operate in parallel and each can manage 8 queues, but nothing appears to say that those 8 queues are executed in parallel.

Furthermore, AMD's recent blog post states the following:

http://developer.amd.com/community/blog/2015/06/05/concurrency-in-modern-3d-graphics-apis/



  • Copy queues support all kinds of copy operations, including format conversions, multi-sample anti-aliasing (MSAA) resolves, and swizzling
  • Compute queues are a superset of copy queues, and also support dispatching compute tasks
  • Graphics queues are a superset of compute queues and also support rendering operations
AMD drivers currently support one queue of each type.
This seems to indicate that while the hardware is there, more than one queue of each type is not yet enabled by the drivers.
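
For reference, here's roughly what those three queue types look like from the application side in D3D12 - a minimal sketch assuming a valid ID3D12Device* is already in hand. Creating the queues is the easy part; whether the driver actually runs more than one of each type concurrently on the hardware is exactly the open question above.

Code:
// Hedged sketch: one queue of each D3D12 command list type.
// Concurrency between them is up to the hardware/driver.
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

HRESULT CreateQueues(ID3D12Device* device,
                     ComPtr<ID3D12CommandQueue>& gfx,
                     ComPtr<ID3D12CommandQueue>& compute,
                     ComPtr<ID3D12CommandQueue>& copy)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Priority = D3D12_COMMAND_QUEUE_PRIORITY_NORMAL;

    desc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;   // graphics: superset of compute
    HRESULT hr = device->CreateCommandQueue(&desc, IID_PPV_ARGS(&gfx));
    if (FAILED(hr)) return hr;

    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;  // compute: superset of copy
    hr = device->CreateCommandQueue(&desc, IID_PPV_ARGS(&compute));
    if (FAILED(hr)) return hr;

    desc.Type = D3D12_COMMAND_LIST_TYPE_COPY;     // copy only
    return device->CreateCommandQueue(&desc, IID_PPV_ARGS(&copy));
}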

If anyone can find anything on this, it would be appreciated.

Each ACE has up to 8 queues, but my understanding is that a queue cannot handle different task types (copy, compute, or graphics) at the same time; no more than one type at once per queue. There's more info in the recent 2015 SIGGRAPH material.

Here's the thing though: GCN has a separate CP (hence the 1 + 8 engines, or 1 + 64 queues) that handles rendering, so the ACEs can focus on compute tasks.

And it is true that GCN is under-utilizing its shaders (front-end bottleneck, because the ACEs are idling in DX11). Computerbase.de's review of the Fury X has a summary of their interview with AMD on the uarch, and they mention this specifically; it's why it suffers at 1080p/1440p on DX11 and gets better at 4K (& DX12).

Also, game devs posting on B3D had this to say regarding queues: basically, don't be fooled by the total queue count of an engine; using more than one in parallel is very difficult and can potentially cause issues. I posted the direct quote a while ago, can't be bothered finding it again.
 
Last edited:

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
The GMU was designed with OpenCL and CUDA programming in mind. It was not designed with DirectX 12 in mind. It was patched, with Maxwell 2, in order to add a degree of parallelism which Maxwell and Kepler both lacked.

Sorry but there's absolutely no way for you to know this unless you're an NVidia engineer. The entire development cycle for DX12 was over 4 years according to NVidia, from preliminary discussions and debate all the way to actual design.

It's very plausible that through collaboration with Microsoft, NVidia knew what features DX12 would possess and therefore took that consideration into mind when designing Maxwell.

GPU architectures and CPU architectures aren't designed in a vacuum you know. There is a lot of collaboration between the IHVs and Microsoft when it comes to this sort of thing..

Otherwise what would be the point of having standards?

nVIDIA's engineers weren't incompetent. They're rather conservative; they take their time to adopt new features. AMD's engineers, on the other hand, take enormous risks. The risk they took with the ACEs, which have sat there doing nothing since the 290X was released, is now paying off. nVIDIA is banking on asynchronous shading not being a huge factor until Pascal releases. Once Pascal releases, you'll notice all of the ACE-like units it will have, and you can think back to this thread.

You might be right but we won't know for sure until next year..

Under DX11, a serial API, your statements are correct. Under DX12, a parallel API, your statements are erroneous.

Again, we won't know for sure until we see actual DX12 release titles. Generally speaking, GPU efficiency and utilization increase when using DX12. This goes for both Maxwell and GCN, and since Maxwell already has such a large efficiency advantage under DX11, the gap will continue to increase even more under DX12.

Devs have already gotten a 20% increase in GPU performance with Maxwell just by switching to DX12, and this was without any CPU bottleneck on DX11..

A 290X is slightly slower than a 390X (core clock and memory bandwidth being the only differentiating characteristics), core clock being the characteristic which matters most for compute. A 390X is a repackaged 290X with some updated clocks and 8GB of GDDR5 vs 4GB of GDDR5 on the 290X.

A 290x is faster than a 390 in compute tasks. A 390 is a repackaged 290.

Well like I said, the GTX 980 was facing an R9 390x in the PCper review, and the GTX 980 Ti was facing an R9 Fury X in the computerbase.de and Extremetech review, all of which are faster than the 290x.

Take a 290x, it can handle several million draw calls yet in Star Swarm, with only 100,000 draw calls, it is bottlenecked. Why?

Because it can't process that many polygons. OK, I concede your point..
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
Basically this. Fury (and also AMD's other cards) are really front end limited and cannot extract full hardware parallelism. This can easily be seen in the poor scaling exhibited by the Fury X over Hawaii. This is also why GCN does so well with asynchronous compute - the GPU simply cannot keep the whole chip active because of bottlenecks on the front end.

People are looking at this the wrong way. Asynchronous compute shows gains because the GPU cannot be fully utilized with one task in serial. This is bad. Asynchronous compute is a method to mitigate this by running other tasks on the non-utilized hardware (i.e. running physics calculations while the geometry units are busy with tessellation).

It appears for Nvidia that asynchronous compute is not as much of a benefit. This is likely because the front end isn't holding the GPU back as much.
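
To make the "run other tasks on the non-utilized hardware" idea concrete, here is a minimal D3D12-style sketch of the submission pattern (the queues, command lists, and fence are assumed to be created and recorded elsewhere - this is an illustration of the technique, not anyone's actual engine code): independent compute work goes onto its own queue so it can overlap the graphics work, and a fence wait is only inserted where the result is actually consumed.

Code:
#include <d3d12.h>

// Hedged sketch: all objects passed in are assumed to exist and be valid.
void SubmitFrame(ID3D12CommandQueue* gfxQueue,
                 ID3D12CommandQueue* computeQueue,
                 ID3D12CommandList*  gfxCmdList,      // e.g. shadow maps, tessellated geometry
                 ID3D12CommandList*  physicsCmdList,  // e.g. particle/physics compute
                 ID3D12Fence*        fence,
                 UINT64              fenceValue)
{
    // Kick off compute work on its own queue. On GCN this is the kind of work
    // the ACEs can pick up while the graphics command processor is busy.
    computeQueue->ExecuteCommandLists(1, &physicsCmdList);
    computeQueue->Signal(fence, fenceValue);

    // Graphics work that does NOT depend on the compute result is free to
    // run concurrently with it.
    gfxQueue->ExecuteCommandLists(1, &gfxCmdList);

    // Only make the graphics queue wait at the point where it will consume
    // the compute output; until then the two queues can overlap.
    gfxQueue->Wait(fence, fenceValue);
    // ...ExecuteCommandLists() for the dependent pass would follow here...
}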

Solid points Enigmoid.. :thumbsup:
 
Feb 19, 2009
10,457
10
76
Solid points Enigmoid.. :thumbsup:

It's funny you agree with that, since it's the same point several of us, including me, have been making for a long time already. AMD needs Mantle/DX12 for GCN to shine because it's operating crippled in DX11. That's just the way they designed it, to be "forward-looking"**. This is why it's fully realistic to expect GCN to see significant performance gains in DX12.

There are a few possible reasons why NV doesn't see a perf jump: maybe they already saturate their uarch in DX11 (operating at peak efficiency already), maybe they suffer a performance hit with async shaders, maybe Oxide decided to gimp NV because AMD paid them (right, it's a possibility since none of us know for sure)... But as I've said, until Oxide has proven themselves to be unethical, I would lean towards the other reasons as being more plausible.

**
"The original AMD GCN architecture allowed for one source of graphics commands, and two sources of compute commands. For PS4, we’ve worked with AMD to increase the limit to 64 sources of compute commands -- the idea is if you have some asynchronous compute you want to perform, you put commands in one of these 64 queues, and then there are multiple levels of arbitration in the hardware to determine what runs, how it runs, and when it runs, alongside the graphics that's in the system."

"The time frame when we were designing these features was 2009, 2010. And the timeframe in which people will use these features fully is 2015? 2017?" said Cerny.

http://www.gamasutra.com/view/feature/191007/inside_the_playstation_4_with_mark_.php?print=1
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
It's funny you agree with that, since it's the same point several of us, including me, have been making for a long time already. AMD needs Mantle/DX12 for GCN to shine because it's operating crippled in DX11. That's just the way they designed it, to be "forward-looking"**. This is why it's fully realistic to expect GCN to see significant performance gains in DX12.

Well, a lot of people (including myself) who follow GPU stuff are aware that AMD's GCN architecture suffers from under-utilization..

I just didn't know how to frame it in the context of this discussion on asynchronous compute..

There are a few possible reasons why NV doesn't see a perf jump: maybe they already saturate their uarch in DX11 (operating at peak efficiency already), maybe they suffer a performance hit with async shaders, maybe Oxide decided to gimp NV because AMD paid them (right, it's a possibility since none of us know for sure)... But as I've said, until Oxide has proven themselves to be unethical, I would lean towards the other reasons as being more plausible.

Did you see the Fable Legends video I posted showing the 20% gain from DX12 on a GTX 980? And that was in a non CPU bottlenecked scenario.

DX12 generally speaking lowers CPU utilization by increasing efficiency through much better parallelism, and simultaneously increases GPU utilization.. So just running on DX12 should increase performance, unless you're completely GPU bound, which most games are not.
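
As an illustration of where that CPU-side parallelism comes from, here's a rough and entirely hypothetical sketch of the D3D12 pattern (RecordChunk and the worker setup are stand-ins, not any real engine's code): each worker thread records its own command list against its own allocator, and everything is submitted in one call, instead of every draw call being funneled through a single driver thread as in DX11.

Code:
#include <d3d12.h>
#include <functional>
#include <thread>
#include <vector>

struct WorkerContext {
    ID3D12CommandAllocator*    allocator;  // one per thread, created elsewhere
    ID3D12GraphicsCommandList* cmdList;    // one per thread, created elsewhere
};

// Hypothetical per-thread recording function: a real engine would record the
// draw calls for objects [first, last) into ctx.cmdList here and then Close() it.
void RecordChunk(WorkerContext& ctx, int first, int last)
{
    (void)ctx; (void)first; (void)last;  // recording omitted for brevity
}

void BuildAndSubmitFrame(ID3D12CommandQueue* queue,
                         std::vector<WorkerContext>& workers,
                         int objectCount)
{
    std::vector<std::thread> threads;
    const int perThread = objectCount / static_cast<int>(workers.size());

    // Record in parallel: each thread only touches its own allocator/command list,
    // so no locking is needed during recording.
    for (size_t i = 0; i < workers.size(); ++i) {
        const int first = static_cast<int>(i) * perThread;
        const int last  = (i + 1 == workers.size()) ? objectCount : first + perThread;
        threads.emplace_back(RecordChunk, std::ref(workers[i]), first, last);
    }
    for (auto& t : threads) t.join();

    // One submission of all recorded command lists to the queue.
    std::vector<ID3D12CommandList*> lists;
    for (auto& w : workers) lists.push_back(w.cmdList);
    queue->ExecuteCommandLists(static_cast<UINT>(lists.size()), lists.data());
}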
 
Last edited:

Digidi

Junior Member
Aug 22, 2015
9
0
0
AMD can do a lot to feed their shaders via the driver. Also, I don't think the problem is in the front end. The problem AMD has is the horrible DX11 driver, which can't push enough draw calls to the GPU. There are rumors that AMD's driver uses only one CPU thread. This would also look like a front-end bottleneck. But now the FPS at low resolution in DX12 is the same as NVIDIA's, so I think there is no front-end bottleneck!

NVIDIA has the problem that they can't fix their command processor issue, because it's a 100% hardware limit.
 

Enigmoid

Platinum Member
Sep 27, 2012
2,907
31
91
Each ACE has up to 8 queues, but my understanding is that a queue cannot handle different task types (copy, compute, or graphics) at the same time; no more than one type at once per queue. There's more info in the recent 2015 SIGGRAPH material.

Here's the thing though: GCN has a separate CP (hence the 1 + 8 engines, or 1 + 64 queues) that handles rendering, so the ACEs can focus on compute tasks.

And it is true that GCN is under-utilizing its shaders (front-end bottleneck, because the ACEs are idling in DX11). Computerbase.de's review of the Fury X has a summary of their interview with AMD on the uarch, and they mention this specifically; it's why it suffers at 1080p/1440p on DX11 and gets better at 4K (& DX12).

Also, game devs posting on B3D had this to say regarding queues: basically, don't be fooled by the total queue count of an engine; using more than one in parallel is very difficult and can potentially cause issues. I posted the direct quote a while ago, can't be bothered finding it again.

Thanks. You bring a valuable point to the discussion, though, when you say that it is very difficult to utilize more than a couple of queues. For the most part, 31 vs. 64 compute queues is irrelevant, as nothing will come close to using that many queues (perhaps some more esoteric ST-bound compute code where each thread is somewhat ST-limited but you need to run the code hundreds of times - i.e. 3DPM).

It's funny you agree with that, since it's the same point several of us, including me, have been making for a long time already. AMD needs Mantle/DX12 for GCN to shine because it's operating crippled in DX11. That's just the way they designed it, to be "forward-looking"**. This is why it's fully realistic to expect GCN to see significant performance gains in DX12.

There are a few possible reasons why NV doesn't see a perf jump: maybe they already saturate their uarch in DX11 (operating at peak efficiency already), maybe they suffer a performance hit with async shaders, maybe Oxide decided to gimp NV because AMD paid them (right, it's a possibility since none of us know for sure)... But as I've said, until Oxide has proven themselves to be unethical, I would lean towards the other reasons as being more plausible.

**
http://www.gamasutra.com/view/feature/191007/inside_the_playstation_4_with_mark_.php?print=1

I agree that AMD needs DX12, but I disagree with the wording you are using. AMD is held back, but assigning all the blame to DX11 isn't right; AMD is at least somewhat at fault for designing a GPU with these kinds of bottlenecks. GCN is forward-thinking and that's good, but it doesn't do well in DX11 and that is bad.

It's also not exclusive to DX; this can be seen in compute.



Fury has serious bottleneck problems. This is not a DX problem. This is an architectural problem - DX12 alleviates it, but Fury is crippled by its own design, not by DX11, which simply exposes Fiji's problems.

NVIDIA has the problem that they can't fix their command processor issue, because it's a 100% hardware limit.

There is little to no chance this will be an actual bottleneck in actual games, unless they do nothing but spam unbatched draw calls.
 

Digidi

Junior Member
Aug 22, 2015
9
0
0
I think it's only a driver problem from AMD. If software is optimized for AMD, it runs really well under DX11.
 
Feb 19, 2009
10,457
10
76
I agree that AMD needs DX12, but I disagree with the wording you are using. AMD is held back, but assigning all the blame to DX11 isn't right; AMD is at least somewhat at fault for designing a GPU with these kinds of bottlenecks. GCN is forward-thinking and that's good, but it doesn't do well in DX11 and that is bad.

Fury has serious bottleneck problems. This is not a DX problem. This is an architectural problem - DX12 alleviates it, but Fury is crippled by its own design, not by DX11, which simply exposes Fiji's problems.

It's whether you're a glass-half-full or glass-half-empty guy. I see GCN as forward-thinking: doing alright in the current API (compare the performance versus NV's lineup at every segment), but excelling with a future API.

As we can see, work on GCN began a very long time ago, 2009 in fact; they planned for these features to be utilized in 2015+.

So while GCN IS crippled running DX11, the SKUs themselves are very competitive: 7870 vs 660, 7970 vs 680, R290X vs Titan/780 Ti, and now, even rebadged, 390/X vs 970/980 and Fury X vs 980 Ti (but only at 4K due to its bottlenecks). And it's got room to grow in performance with DX12. I see that as a good thing: on the design side, one uarch that's competitive now and excels in the future; and from a gamer's PoV, someone who bought an R290X at release can still get a big perf leap years later (DX12), which makes it a great investment.

Also, for the context of Ashes, be very careful how you interpret the different testing modes. NV does show massive gains in the CPU-focused settings: draw-call heavy, no async rendering (look at the images in those scenarios: no global lighting, no dynamic lights generated by units firing); WCCFTech got major gains on the 770 with that test. NV GPUs only show a regression in the GPU-heavy setting with async compute/shading.

Why that happens is open for debate. But a blanket statement that Oxide's DX12 is poor or not functional is incorrect.
 

jj109

Senior member
Dec 17, 2013
391
59
91
Is it possible that NV performs worse in DX12 because their hardware is gimped for async compute/shaders? Yes/No?

Unless you know for a fact, then yes, it's possible.

Time will tell all.

Well there are (at least) two inconsistencies with that theory:

The 280X can submit only 3 work units per cycle (I checked the GCN 1.0 whitepaper), but why is it gaining a lot in DX12 across all loads?

GTX 980Ti and GTX 980 allegedly have the same front-end. If there's a command processor bottleneck, why does the GTX 980Ti show gains when using 4 cores but not GTX 980?
 

Enigmoid

Platinum Member
Sep 27, 2012
2,907
31
91
It's whether you're a glass-half-full or glass-half-empty guy. I see GCN as forward-thinking: doing alright in the current API (compare the performance versus NV's lineup at every segment), but excelling with a future API.

As we can see, work on GCN began a very long time ago, 2009 in fact; they planned for these features to be utilized in 2015+.

So while GCN IS crippled running DX11, the SKUs themselves are very competitive: 7870 vs 660, 7970 vs 680, R290X vs Titan/780 Ti, and now, even rebadged, 390/X vs 970/980 and Fury X vs 980 Ti (but only at 4K due to its bottlenecks). And it's got room to grow in performance with DX12. I see that as a good thing: on the design side, one uarch that's competitive now and excels in the future; and from a gamer's PoV, someone who bought an R290X at release can still get a big perf leap years later (DX12), which makes it a great investment.

Fair enough. Though I will say (and I think you will agree with me) that had AMD maintained Pitcairn-like scaling throughout their lineup, they would have done significantly better.
 
Feb 19, 2009
10,457
10
76
Well there are (at least) two inconsistencies with that theory:

The 280X can submit only 3 work units per cycle (I checked the GCN 1.0 whitepaper), but why is it gaining a lot in DX12 across all loads?

Good that you bring that up, look at the comparison chart here (Full System Test, not draw call only):

http://www.computerbase.de/2015-08/...-nvidia/2/#abschnitt_benchmarks_in_1920__1080

280X (GCN 1, 1CP + 2 ACE units) gains 36% from DX11 > DX12.

390 gains 64%.

It would have been nice if they had included the 285 in that test, because it has a similar performance profile to the 280X but 8 ACEs instead of 2. That would have been conclusive.

If you look at the High Draw Call test, NV improves in DX12 just fine, so that feature of DX12 is functional for them. They only regress when reviewers test GPU-heavy scenarios with async shaders for global dynamic lighting.

Edit: NV's PR statement that Ashes isn't representative of DX12 games may be correct, because it has thousands of dynamic lights and really is pushing async shading heavily. It's hard to imagine most games needing that many dynamic light sources, so it may in fact be "not representative".
 
Last edited:
Feb 19, 2009
10,457
10
76
Fair enough. Though I will say (and I think you will agree with me) that had AMD maintained Pitcairn-like scaling throughout their lineup, they would have done significantly better.

Yup, but they needed more DP compute throughput at the higher end of the lineup; it helped them gain a nice chunk of the HPC market share (which was a shocker, since I didn't see that happening that quickly).
 

jj109

Senior member
Dec 17, 2013
391
59
91
Good that you bring that up, look at the comparison chart here (Full System Test, not draw call only):

http://www.computerbase.de/2015-08/...-nvidia/2/#abschnitt_benchmarks_in_1920__1080

280X (GCN 1, 1CP + 2 ACE units) gains 36% from DX11 > DX12.

390 gains 64%.

It would have been nice if they had included the 285 in that test, because it has a similar performance profile to the 280X but 8 ACEs instead of 2. That would have been conclusive.

Yes, 285 would've been nice.

The 280X's lower gains can also be attributed to a GPU bottleneck, which imo is more likely. The 390 is underutilized much more than the 280X, given that it is only about 10% faster in DX11 despite being a considerably bigger GPU.
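
To put rough numbers on that (using the figures quoted above - a ~10% DX11 lead for the 390, and gains of 64% vs 36%): 1.10 × 1.64 / 1.36 ≈ 1.33, so in DX12 the 390 ends up roughly a third faster than the 280X, which is about what you'd expect if the bigger card was the one being held back under DX11.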
 

Digidi

Junior Member
Aug 22, 2015
9
0
0
Well there are (at least) two inconsistencies with that theory:

The 280X can submit only 3 work units per cycle (I checked the GCN 1.0 whitepaper), but why is it gaining a lot in DX12 across all loads?

GTX 980Ti and GTX 980 allegedly have the same front-end. If there's a command processor bottleneck, why does the GTX 980Ti show gains when using 4 cores but not GTX 980?

The 980 and 980 Ti don't have the same front end: the 980 has 4 rasterizers, the Ti has 6.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
Sebbi over at Beyond3d forums raises some interesting points..

He says we don't even know for certain whether the game uses asynchronous compute, which is true. This is just an assumption, although a fairly good one I think.

And secondly, it might be possible that NVidia may not even have asynchronous compute function enabled in their drivers yet..

As for AMD, it's much more likely that they have it enabled in their drivers, as they've made demos and slides about the performance increase you can get from using it..

AMD are understandably gung ho about using it as it works really well for their architecture..
 

Azix

Golden Member
Apr 18, 2014
1,438
67
91
This touches on the problem of creating drivers for an API like DX11. It looks like GPU drivers are finally going to work like most other hardware drivers. I never understood why it was normal to update drivers so often when the hardware isn't changing.

https://www.youtube.com/watch?v=qKbtrVEhaw8&feature=youtu.be&t=4790

Async shaders are too big a deal not to show up. They are in Fable as well. Carfax83: the video you posted was talking about UAVs and their effect on FPS in DX11 vs DX12.


http://www.redgamingtech.com/asynch...eir-role-on-ps4-xbox-one-pc-according-to-amd/
https://www.youtube.com/watch?v=NFEhA_L_VlU

That game actually has better particles than PC games, I think. Looking forward to similar things coming over.

With a little bit of guesswork, it's likely down to a few reasons. The first being the pure shader power: we're left with 1.84 TFLOPS vs 1.32 TFLOPS. The second, the memory bandwidth equation, and the third, the more robust compute structure of the Playstation 4. The additional ACEs buried inside the PS4's GPU do help out a lot with task scheduling, and generally speed up the performance of the PS4's compute / graphics commands which are issued to the shaders. Mostly, the reason behind the ACE improvement is that Sony (so the story goes) knew the future of this generation of consoles was compute, and requested some changes to the GPU; thus there are many similarities between the PS4's GPU and Volcanic Islands from AMD.

http://www.redgamingtech.com/ubisoft-gdc-presentation-of-ps4-x1-gpu-cpu-performance/
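
For context on where those two TFLOPS figures come from (assuming the commonly cited shader counts and clocks - 1152 shaders at 800 MHz for the PS4, 768 shaders at 853 MHz for the Xbox One, with 2 FLOPs per shader per clock via FMA): 1152 × 2 × 0.8 GHz ≈ 1.84 TFLOPS versus 768 × 2 × 0.853 GHz ≈ 1.31 TFLOPS.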
 
Last edited:

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
There is a saying about those that live in glass houses...

You might want to check up on that seeing as how you started an entire thread about ARK Survival Evolved that was essentially one huge ad hominem attack on Nvidia and Gameworks titles.

http://forums.anandtech.com/showthread.php?t=2439041&highlight=


In the real world software development houses align themselves with the big chip makers to get support they otherwise couldn't get.

This is absolutely normal and happens between different companies within many different industries. But trying to look at one of these companies' products as an indicator of general performance compared to non-aligned products is at best misleading.


For example, Oxide / Stardock / AMD are clearly closely aligned, as this is right on their front page.



Do you have any direct evidence of the bias you are claiming? The reason I don't just accept the partnership as evidence is that prior GE titles have shown no such favoritism. Battlefield 3, the poster child for Mantle, still had DX11 multi-threaded rendering support even though it's only supported by nVidia. AMD releases all of its source code for its effects, rather than hiding it in a pre-compiled DLL with licensing so strict that the source can't be shared with AMD.

There is no evidence to support the claims you are making, except that there is an AMD logo on the site.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
No, it is not. It is optimized for AMD hardware.
Otherwise nVidia hardware would be faster with DX12 like they are in King of Wushu or Fable Legends.


They said that they've done nVidia optimizations. They replaced their own shader code with code supplied by nVidia. Do you just post FUD hoping people won't dispute it? Because I know it's already been stated prior to this that nVidia optimizations have been done.
 

Paratus

Lifer
Jun 4, 2004
17,092
14,398
146
@Mahigan, welcome to AnandTech! That PDF document concerned Kepler, not Maxwell. HyperQ was certainly improved for Maxwell over Kepler, as Maxwell can do everything in parallel. Ryan explains it in his article:



Source



Only one benchmark had results where the GTX 980 Ti performed so poorly, and that was Ars Technica's. Computerbase.de and ExtremeTech showed much better results for the GTX 980 Ti, on par more or less with the R9 Fury X:



Looks like the venerable 7970 GHz (280X) is much faster than the 770 and 960 and would probably slot in between the 780 & 780 Ti.

It gets a significant bump even with the differences between GCN1.0 and 1.1/1.2.
What do you mean years of subpar DX11 performance?

My R290s stomped all over 780s (they were $100 cheaper each at the time!). The gap is even bigger now. The R290X on recent drivers is faster than a 970. If that's subpar, it's pretty decent.

Also, by buying an R290X, the gamer is rewarded years later with DX12 boosting its performance by a massive leap. That's bad? Your world is a strange one.

I know. My 7970 GHz was faster than the 680 when it came out, then was faster than the 770, and is quite possibly now faster than the 780 in DX12. Quite frankly, it's surprisingly close to the 970 as well.

Ah well it's all good. Everybody gets better performance. AMD users get better performance from new drivers and DX. NV users get better performance from buying new architectures.
 

tential

Diamond Member
May 13, 2008
7,348
642
121
While that's great for you as a 7970 owner (and as a 7950 owner I'm happy), it doesn't really matter when a lot of people are going to move past the 7970 level of performance. Personally, I'm only waiting because I know the next gen is a massive jump. Otherwise, I'd NEVER hold onto a card this long. And if that's the case, I'd rather get better performance by buying a new architecture from Nvidia than wait years for DX12 to come out as a 7950 owner.

It depends on what your upgrade cycle looks like, and you hold onto your hardware for quite a bit. Others with more modern rigs don't want to wait for DX12; they'll just buy the best DX12 hardware out when DX12 games are actually released, rather than buying hardware that's good at DX12 games before DX12 is out.
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,361
136
While that's great for you as a 7970 owner (and as a 7950 owner I'm happy), it doesn't really matter when a lot of people are going to move past the 7970 level of performance. Personally, I'm only waiting because I know the next gen is a massive jump. Otherwise, I'd NEVER hold onto a card this long. And if that's the case, I'd rather get better performance by buying a new architecture from Nvidia than wait years for DX12 to come out as a 7950 owner.

It depends on what your upgrade cycle looks like, and you hold onto your hardware for quite a bit. Others with more modern rigs don't want to wait for DX12; they'll just buy the best DX12 hardware out when DX12 games are actually released, rather than buying hardware that's good at DX12 games before DX12 is out.

You know, the R9 290/X were selling for less than the GTX 970 a few months ago. It seems now that the 290/X was the better card to get. And today, it also seems that the 390 is even better than the GTX 970, so if you are in the market for a $300 card, the R9 390 is the better choice. Same goes for the R9 380 vs the GTX 960.
 
Feb 19, 2009
10,457
10
76
Sebbi over at Beyond3d forums raises some interesting points..

He says we don't even know for certain whether the game uses asynchronous compute, which is true. This is just an assumption, although a fairly good one I think.

And secondly, it might be possible that NVidia may not even have asynchronous compute function enabled in their drivers yet..

As for AMD, it's much more likely that they have it enabled in their drivers, as they've made demos and slides about the performance increase you can get from using it..

AMD are understandably gung ho about using it as it works really well for their architecture..

I take what Sebbi says seriously, but it's been said in public that Ashes already uses async compute for lighting. I have zero reason to doubt Oxide.

Why would NV not have a core DX12 feature ready in their drivers? They've had ample time to prepare for Win 10's launch. They even released a driver optimized for this one game while it's in alpha.

It may turn out that the simplest reason is the right one: NV gains very little from this feature, which is a core feature from the early GCN design & Mantle days, one that Sony specifically requested be buffed on their GCN APU, moving from 2 ACEs to 8, just so they could smack the Xbone even harder with it.

Also, Fable definitely uses async compute & shading; Raymond at Lionhead Studios was on stage talking about how GCN's uarch made many of these features "free performance".
 