Kepler, Maxwell, Pascal - Performance comparison from new drivers (353 -> 376)

Page 3

realibrad

Lifer
Oct 18, 2013
12,337
898
126
This really doesn't make much sense at all to me. DX12 is a low level API, it's supposed to expose more of the GPU, not less. Anything available in DX11 should be available in DX12, and then some. Plus, there are games where NVidia definitely gains in DX12, like Ashes of the Singularity, a very highly optimized DX12 title. So to me it seems like it's much more a matter of optimization than anything else. It took MONTHS before Oxide Studios could patch the game and bring NVidia's DX12 performance up to par, as the game was heavily AMD-biased from its inception (plus NVidia definitely tweaked their DX12 drivers), but now NVidia is very competitive in that game.



You know there are times when I really want to believe that you are an unbiased source, but when you make comments like these, it's hard for me to drink the Kool-Aid. The insinuation being, of course, that NVidia's architecture is inherently DX12-unfriendly. No matter how many times we see NVidia outperform AMD in DX12 titles, this myth still persists, and it seems to be driven by the same people.

Could it not be that their arch is designed to work with DX11, but not optimized to work with DX12 when compared to DX11? I would imagine that DX11 has a structure that is inherently different in terms of how things are executed. Thus, it would seem reasonable that on the hardware side, their arch was not set up to take advantage of DX12 like it was with DX11. That is why drivers are needed, which make up for the hardware not being designed around DX12 as it was with DX11.
 
Reactions: Headfoot

antihelten

Golden Member
Feb 2, 2012
1,764
274
126
Did it ever cross people's minds that there's just not that much more performance to wring out of ol' Kepler?
That's certainly possible and as such complaining about Nvidia not providing enough improvements through drivers would be silly.

But my point is that there's a difference between claiming that nvidia "gimps" Kepler by not providing enough improvements and claiming that nvidia "gimps" Kepler by reducing performance through drivers.

The latter would seem to me to be a straw man invented purely to make it easier to dismiss people's complaints about Kepler and thereby to ignore the fact that regardless of the exact reason, Kepler has aged anything but gracefully.
 

nurturedhate

Golden Member
Aug 27, 2011
1,761
757
136
That's certainly possible and as such complaining about Nvidia not providing enough improvements through drivers would be silly.

But my point is that there's a difference between claiming that nvidia "gimps" Kepler by not providing enough improvements and claiming that nvidia "gimps" Kepler by reducing performance through drivers.

The latter would seem to me to be a straw man invented purely to make it easier to dismiss people's complaints about Kepler and thereby to ignore the fact that regardless of the exact reason, Kepler has aged anything but gracefully.
And that's exactly what it is, a strawman. There have been plenty of threads on here and Beyond3D discussing the architectural differences between Kepler and Maxwell/Pascal and how Kepler required a higher level of driver-level optimization compared to the latter due to its SM setup. This is why under certain situations the performance gap between Kepler and Maxwell/Pascal/GCN is larger than it was historically. This isn't anything new.
 
Reactions: Bacon1 and Headfoot

Bacon1

Diamond Member
Feb 14, 2016
3,430
1,018
91
Yet the article that sparked this thread shows that 780 Ti gained more performance than 980 with the Win10 drivers, even after the 980 was released. Abandoning not found.

A move from 3 fps to 4 fps is a 33% gain, but I doubt anyone would call that a big improvement. It's much easier to show a large percentage gain when you're starting from low numbers to begin with.

So what is it, guys: are game engines based around specific hardware setups, or does AMD have very bad launch drivers? Because apparently when AMD does badly it's bad drivers, and when NVidia does badly it's because the engine doesn't support them and the drivers are flawless. The 780 Ti cost way more than the 290 at launch, yet the 290 is now faster. How can you defend that?

28 -> 29 fps, wow look at that 4% performance increase!

The biggest jump I saw was Doom OpenGL, which went from 46 to 59 fps (51 for Vulkan). Do we want to compare that to what a 290 can get while costing half as much when both launched?
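For reference, the arithmetic behind those percentages, sketched with the before/after fps figures quoted in this post:

Code:
// Quick sketch of the relative-gain arithmetic: the same kind of absolute gain
// looks far more impressive as a percentage when the starting framerate is low.
#include <cstdio>

int main()
{
    // before/after fps pairs taken from the post above
    double pairs[][2] = { {3, 4}, {28, 29}, {46, 59} };
    for (auto& p : pairs)
        printf("%.0f -> %.0f fps = %+.1f%%\n", p[0], p[1], (p[1] - p[0]) / p[0] * 100.0);
    return 0;
}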
 

96Firebird

Diamond Member
Nov 8, 2010
5,712
316
126
does AMD have very bad launch drivers?

DING DING DING! Sound the bells and whistles, drop the confetti, everyone deploy their noisemakers!

For some reason people don't give AMD enough credit for improving their drivers over time, as I have in the past.

But your whole post is off-topic, this thread compares a variety of Nvidia cards, nothing about comparing them to AMD. I know you can't help yourself, but try to stay on topic.
 
Reactions: Carfax83 and Phynaz

ConsoleLover

Member
Aug 28, 2016
137
43
56
AMD's reference designs are crap: they are loud, run too hot, and throttle due to bad thermals. Their drivers are not 100% optimized at launch, and they try to save, for example, 10-15W to fit into some crap 150W narrative, when for those 10-15W the cards could gain 10% more performance or more.

So essentially it's a mix of AIB partners providing good cooling solutions, mature drivers, higher OC headroom due to less loose pin requirements, and of course AMD's cards packing more raw hardware power, so with time AMD can unlock it with drivers and/or games can take more advantage of it.

Compare the latest, for example the 1060 and the 480: a 192-bit bus and 3/6GB VRAM on the 1060 vs a 256-bit bus and 4/8GB VRAM on the 480, more textures, etc. On pure raw performance AMD is better, though it does take time to get game developers and engine designers to take advantage of AMD hardware.
 

Bacon1

Diamond Member
Feb 14, 2016
3,430
1,018
91
But your whole post is off-topic, this thread compares a variety of Nvidia cards, nothing about comparing them to AMD. I know you can't help yourself, but try to stay on topic.

My whole post is off topic? I talked about the 780 Ti the whole time. I mentioned the 290 because they were direct competitors. Is this the Nvidia subforum or the Video Cards General one? If we can't talk about how it compares at all to AMD then it should be moved out of the GENERAL discussion area.

My post was completely on topic except for the tiny portion you quoted out of context.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
Why do people keep saying that AOTS is AMD biased? The developers have stated they worked closer with Nvidia than AMD. It was Nvidia's drivers not being up to date that was the issue. Just look at the OP. 7-12% improvement from DX12 Drivers. Same reason that Vulkan in Doom was slower at first. Nvidia's drivers weren't good. They fixed that for a 12% gain once again.

Because the engine was designed around, and promoted for, GCN hardware due to its use of Mantle. It's also featured on AMD's marketing page as being "Tuned for Radeon Graphics."

Source

When the game first launched, it was horribly balanced towards GCN hardware, and only after numerous patches did NVidia start to gain performance. Also, they HAD to work closer with NVidia than AMD to address the deficit!

The difference is once the initial DX12 / Vulkan drivers get worked out, there won't be the need for game specific drivers anymore to the extent there was for DX11, which is basically every game rendering improperly w/o drivers.

I will be the first to admit that NVidia's DX12 (and Vulkan) drivers were behind AMD's for the first 6 months or so after DX12 launched, but drivers alone cannot explain the large gap that was present with Ashes of the Singularity with the early builds.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
Zlatan is a proper dev you eejits. Might want to listen to the guy that works with this stuff on a day to day basis, rather than going with what feels good in light of your hardware purchases.

Oh really. Why don't you tell us what developer he works for then, since you know so much?
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
Of the roughly dozen-odd DX12 games, AMD wins at least 75% if not more. The RX 480 is on average at least 5% (if not more) faster in DX12 vs the GTX 1060.

https://en.wikipedia.org/wiki/List_of_games_with_DirectX_12_support

What's even more revealing is that Nvidia's DX11 performance is on par or superior in every DX12 title. You could say better DX11 drivers, but that's exactly why Nvidia does better, as they have a lot more driver software engineers. Raja Koduri stated in the recent PC World interview that the DX11 drivers are massive and have actual graphics shader code optimized for various game engines / games. In contrast, the DX12 driver is lightweight and gives a lot more control to the developer.

The state of DX12 game performance today is not indicative of anything in regards to which architecture is more suited to DX12. The fact that both consoles use GCN hardware means that many games are by default heavily optimized for the Radeon architecture. With DX11, NVidia could tune their drivers around this handicap for the most part, but with DX12, they have to rely on developers, who have varying degrees of competence.

Most DX12 games run slower in DX12 mode than in DX11 for NVidia, and this should simply not be the case with COMPETENT development. That said, I expect this trend to be reversed as time goes by, as developers become more proficient in the use of DX12.
 

Bacon1

Diamond Member
Feb 14, 2016
3,430
1,018
91
When the game first launched, it was horribly balanced towards GCN hardware, and only after numerous patches did NVidia start to gain performance. Also, they HAD to work closer with NVidia than AMD to address the deficit!

Funny because the developers have said the opposite multiple times

http://www.overclock.net/t/1569897/...ingularity-dx12-benchmarks/1200#post_24356995

He has a bunch of other posts talking about how they worked more closely with Nvidia than AMD if you go through his profile, not going to bother posting them all here when this has been debunked a dozen times already.

https://www.extremetech.com/gaming/...ashes-of-the-singularity-directx-12-benchmark

They worked more with Nvidia pre-launch than AMD

Not to mention Nvidia having ID on stage to show off Doom Vulkan on a 1080 launch party, yet Vulkan was then AMD biased on release right?
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
Could it not be that their arch is designed to work with DX11, but not optimized to work with DX12 when compared to DX11? I would imagine that DX11 has a structure that is inherently different in terms of how things are executed. Thus, it would seem reasonable that on the hardware side, their arch was not set up to take advantage of DX12 like it was with DX11. That is why drivers are needed, which make up for the hardware not being designed around DX12 as it was with DX11.

No, no and no. DX11 and DX12 are APIs that allow programmers to expose capabilities that are built into the GPUs. Both NVidia and AMD GPUs have been DX12 capable for years now, but only with Mantle, DX12 and Vulkan have these features become accessible. In fact, NVidia GPUs support more advanced DX12 features, like conservative rasterization and raster ordered views, that the Radeon GPUs don't. Also, the biggest performance benefit of DX12 by far is the reduction in CPU overhead and the increase in rendering parallelism. So the main performance benefits of low level APIs come mostly from the CPU side, and not the GPU.

So it has nothing to do with architecture support. It has to do with developer competence when it comes to optimization, and the fact that GCN hardware is in both the PS4 and Xbox One.
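To illustrate the point about exposed capabilities, here's a minimal sketch (not from any particular engine; it assumes an already-created ID3D12Device and skips error handling) of how an application queries those optional hardware features:

Code:
// Minimal sketch: querying the optional DX12 hardware features mentioned above
// (conservative rasterization, rasterizer-ordered views) on an existing device.
// Error handling trimmed; "device" is assumed to be a valid ID3D12Device*.
#include <d3d12.h>
#include <cstdio>

void PrintOptionalFeatures(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS options = {};
    if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                              &options, sizeof(options))))
    {
        // Conservative rasterization is reported as a tier (0 = not supported).
        printf("Conservative rasterization tier: %d\n",
               (int)options.ConservativeRasterizationTier);
        // Rasterizer-ordered views are a simple yes/no capability bit.
        printf("Rasterizer ordered views: %s\n",
               options.ROVsSupported ? "yes" : "no");
    }
}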
 

Bacon1

Diamond Member
Feb 14, 2016
3,430
1,018
91
In fact, NVidia GPUs support more advanced DX12 features like conservative rasterization and raster ordered views that the Radeon GPUs don't..

These are DX 11.3 features as well, hence their DX11-only use in ROTTR, The Division, and others. AFAIK CR still doesn't work in ROTTR's DX12 mode for Nvidia.

The only DX12-only things are the lower API overhead and engine features, one of which is Async Compute, which AFAIK Nvidia has still not enabled for Maxwell despite stating multiple times that it was available.
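For clarity on what "Async Compute" means at the API level, here's a minimal sketch (names and setup are my assumptions, not any game's actual code): the developer creates a second, compute-only queue and submits work to it, and whether the GPU actually overlaps that work with graphics is up to the hardware and driver.

Code:
// Minimal sketch of async compute at the D3D12 API level: a COMPUTE queue created
// alongside the usual DIRECT (graphics) queue. "device" is an existing ID3D12Device*.
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void CreateQueues(ID3D12Device* device,
                  ComPtr<ID3D12CommandQueue>& graphicsQueue,
                  ComPtr<ID3D12CommandQueue>& computeQueue)
{
    D3D12_COMMAND_QUEUE_DESC gfx = {};
    gfx.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;    // graphics + compute + copy work
    device->CreateCommandQueue(&gfx, IID_PPV_ARGS(&graphicsQueue));

    D3D12_COMMAND_QUEUE_DESC comp = {};
    comp.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;  // compute + copy work only
    device->CreateCommandQueue(&comp, IID_PPV_ARGS(&computeQueue));
    // The API call succeeds on any DX12 GPU; whether the two queues actually run
    // concurrently is a hardware/driver scheduling decision, not an API guarantee.
}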
 

raghu78

Diamond Member
Aug 23, 2012
4,093
1,475
136
The state of DX12 game performance today is not indicative of anything in regards to which architecture is more suited to DX12. The fact that both consoles use GCN hardware means that many games are by default heavily optimized for the Radeon architecture. With DX11, NVidia could tune their drivers around this handicap for the most part, but with DX12, they have to rely on developers, who have varying degrees of competence.

Most DX12 games run slower in DX12 mode than in DX11 for NVidia, and this should simply not be the case with COMPETENT development. That said, I expect this trend to be reversed as time goes by, as developers become more proficient in the use of DX12.

It's easy to blame developers completely rather than accept that the architecture plays a factor. But the fact is most DX12 games run faster on DX11 for Nvidia GPUs; for AMD it's the opposite. As I already said, Nvidia's larger driver software team is one of the main reasons. DX11 drivers nowadays are pretty much game-specific code optimizations, and the company that has more engineers to throw at the problem does better. So no surprise Nvidia does better. DX12, on the other hand, gives the developer more low-level control but is also more challenging. GPU architectural capability and game developer expertise with low-level APIs / technical knowledge are both factors in the performance a developer can extract from a certain GPU. Arguing that the architecture is not a factor while the developer alone is, is basically being argumentative for the sake of it.
 
Reactions: Headfoot

Headfoot

Diamond Member
Feb 28, 2008
4,444
641
126
I like how incredibly difficult it is for some posters here to accept that nVidia optimized fast paths in hardware for specific DX11 workloads. How is that at all difficult to accept? They designed the GPU to be fast in the API at the time. Not at all unusual. And if any of you have ever done any sort of optimization, you will know that many times optimization will require trading off corner cases for the main use case. 80/20 rule. The only reason we have any visibility into these tradeoffs is because for once AMD has picked a different strategy (generalized architecture banking on a new API(s) coming out).

I guarantee you that DX9 GPUs were optimized for that architecture. And DX8, 7, etc. all the way back.

The only unusual thing here is that AMD banked on a future API, providing us a comparison point we don't normally have. The only other time I can recall that happening was with G80, which was legendary even in DX9 though it was designed for DX10.
 
Reactions: raghu78

realibrad

Lifer
Oct 18, 2013
12,337
898
126
No, no and no. DX11 and DX12 are APIs that allow programmers to expose capabilities that are built into the GPUs. Both NVidia and AMD GPUs have been DX12 capable for years now, but only with Mantle, DX12 and Vulkan have these features become accessible. In fact, NVidia GPUs support more advanced DX12 features, like conservative rasterization and raster ordered views, that the Radeon GPUs don't. Also, the biggest performance benefit of DX12 by far is the reduction in CPU overhead and the increase in rendering parallelism. So the main performance benefits of low level APIs come mostly from the CPU side, and not the GPU.

So it has nothing to do with architecture support. It has to do with developer competence when it comes to optimization, and the fact that GCN hardware is in both the PS4 and Xbox One.

Wait, so Nvidia can have better conservative rasterization? Question, did Nvidia accomplish this with hardware, or drivers?

If they did it with hardware, then it's logical to think that GPUs can be designed to process different APIs better or worse.
 

Bacon1

Diamond Member
Feb 14, 2016
3,430
1,018
91
And back on topic. The main reason people say the drivers are crippled is the lack of major performance gains:



https://www.computerbase.de/thema/grafikkarte/rangliste/#diagramm-performancerating-1920-1080

780 Ti released Nov 7th, 2013 for $700

http://www.anandtech.com/show/7492/the-geforce-gtx-780-ti-review

290 Released Nov 5th, 2013 for $400 (2 days prior, 780 ti was 75% more expensive)

http://www.anandtech.com/show/7481/the-amd-radeon-r9-290-review

970 Released Sept 26th, 2014 for $330 (under a year later, 780 ti was over twice the price)

http://www.anandtech.com/show/8568/the-geforce-gtx-970-review-feat-evga

The 290 was way cheaper and is now 6% faster. The 970 came under a year later, at under half the price, and is 10% faster.
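Just to show where the price figures above come from (only the listed launch MSRPs are used):

Code:
// Sketch of where the "75% more expensive" and "over twice the price" figures come
// from, using the launch prices listed above.
#include <cstdio>

int main()
{
    double gtx780Ti = 700.0, r9_290 = 400.0, gtx970 = 330.0;
    printf("780 Ti vs 290: %.0f%% more expensive\n", (gtx780Ti / r9_290 - 1.0) * 100.0); // 75%
    printf("780 Ti vs 970: %.2fx the price\n", gtx780Ti / gtx970);                        // ~2.12x
    return 0;
}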
 
Reactions: AtenRa

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
Funny because the developers have said the opposite multiple times

http://www.overclock.net/t/1569897/...ingularity-dx12-benchmarks/1200#post_24356995

He has a bunch of other posts talking about how they worked more closely with Nvidia than AMD if you go through his profile, not going to bother posting them all here when this has been debunked a dozen times already.

https://www.extremetech.com/gaming/...ashes-of-the-singularity-directx-12-benchmark

They worked more with Nvidia pre-launch than AMD

Not to mention Nvidia having ID on stage to show off Doom Vulkan on a 1080 launch party, yet Vulkan was then AMD biased on release right?

You can post whatever links you want, it won't change my mind because the surrounding circumstances simply don't support it. The Nitrous Engine was designed to work with Mantle since its inception, and it was also heavily marketed by AMD. The fact that NVidia is now matching or exceeding AMD in Ashes of the Singularity, which I think started last year in the summer, shows that Oxide Studios needed to do a massive amount of optimization to get NVidia's performance up to where it should be. This is from September last year, six months after the game was officially released.

This game, along with Gears of War 4, represents what well-optimized DX12 titles should be like, where both Radeon and NVidia GPUs increase performance from DX12 rather than suffering.



 

Bacon1

Diamond Member
Feb 14, 2016
3,430
1,018
91
You can post whatever links you want, it won't change my mind because the surrounding circumstances simply don't support it. The Nitrous Engine was designed to work with Mantle since its inception, and it was also heavily marketed by AMD. The fact that NVidia is now matching or exceeding AMD in Ashes of the Singularity, which I think started last year in the summer, shows that Oxide Studios needed to do a massive amount of optimization to get NVidia's performance up to where it should be. This is from September last year, six months after the game was officially released.

Ah right, ignore facts and go with your opinion. Can't argue with that.

Also please, please keep posting these benchmarks from hardwarecanucks, which they themselves say should not be cross-referenced.

Ashes of the Singularity

Ashes of the Singularity is a real time strategy game on a grand scale, very much in the vein of Supreme Commander. While this game is most known for its asynchronous workloads through the DX12 API, it also happens to be pretty fun to play. While Ashes has a built-in performance counter alongside its built-in benchmark utility, we found it to be highly unreliable, often posting substantial run-to-run variation. With that in mind we still used the onboard benchmark since it eliminates the randomness that arises when actually playing the game, but utilized the PresentMon utility to log performance.

DX12 Benchmarking

For DX12 many of these same metrics can be utilized through a simple program called PresentMon. Not only does this program have the capability to log frame times at various stages throughout the rendering pipeline but it also grants a slightly more detailed look into how certain API and external elements can slow down rendering times.

Since PresentMon throws out massive amounts of frametime data, we have decided to distill the information down into slightly more easy-to-understand graphs. Within them, we have taken several thousand datapoints (in some cases tens of thousands), converted the frametime milliseconds over the course of each benchmark run to frames per second and then graphed the results. This gives us a straightforward framerate-over-time graph. Meanwhile the typical bar graph averages out every data point as it's presented.

One thing to note is that our DX12 PresentMon results cannot and should not be directly compared to the FCAT-based DX11 results. They should be taken as a separate entity and discussed as such.
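(Their frametime-to-FPS conversion is nothing exotic; here's a rough sketch with made-up frame times rather than PresentMon's actual log format:)

Code:
// Rough sketch of converting per-frame times in milliseconds into an fps-over-time
// series plus a properly weighted average. The sample values are invented; a real
// run would read them from a PresentMon log instead.
#include <cstdio>
#include <numeric>
#include <vector>

int main()
{
    std::vector<double> frametimesMs = {16.7, 18.2, 15.9, 33.4, 16.1};
    for (double ms : frametimesMs)
        printf("instantaneous fps: %.1f\n", 1000.0 / ms);

    // The average fps must come from total time; averaging the per-frame fps values
    // would under-weight the slow frames.
    double totalMs = std::accumulate(frametimesMs.begin(), frametimesMs.end(), 0.0);
    printf("average fps: %.1f\n", frametimesMs.size() * 1000.0 / totalMs);
    return 0;
}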

Guess there was a reason you never post the link to the review.... Because they straight up say "DO NOT COMPARE DX11 and DX12"

http://www.hardwarecanucks.com/foru...asus-gtx-1080-gtx-1070-strix-oc-review-3.html

I pointed this fact out to you 6 days ago:

http://www.portvapes.co.uk/?id=Latest-exam-1Z0-876-Dumps&exid=thread...titles-hardwareunboxed.2496866/#post-38679128

But not surprised to see you still spreading FUD.
 

Krteq

Senior member
May 22, 2015
993
672
136
This game, along with Gears of War 4, represents what well-optimized DX12 titles should be like, where both Radeon and NVidia GPUs increase performance from DX12 rather than suffering.
And again, UE4 doesn't use DX 12 resource management, AotS does.
 
Reactions: Bacon1

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
It's easy to blame developers completely rather than accept that the architecture plays a factor. But the fact is most DX12 games run faster on DX11 for Nvidia GPUs; for AMD it's the opposite. As I already said, Nvidia's larger driver software team is one of the main reasons. DX11 drivers nowadays are pretty much game-specific code optimizations, and the company that has more engineers to throw at the problem does better. So no surprise Nvidia does better. DX12, on the other hand, gives the developer more low-level control but is also more challenging. GPU architectural capability and game developer expertise with low-level APIs / technical knowledge are both factors in the performance a developer can extract from a certain GPU. Arguing that the architecture is not a factor while the developer alone is, is basically being argumentative for the sake of it.

So when NVidia's DX11 performance is beating AMD's DX12 performance in Total War Warhammer by a large margin, what am I to think? To me, it's self evident that the developers are the ones that screwed it up. APIs only expose what is already there. If a GPU is slower with an API that supposedly offers more hardware exposure to developers, then the problem is going to be one of three things:

1) Lack of optimization on the developer side.

2) Inadequate driver support on the IHV side.

3) A combination of the two.

I fully admit that NVidia's DX12 and Vulkan drivers weren't up to snuff for several months, but these were eventually fixed and now the drivers perform much better.
 
Reactions: Arachnotronic

raghu78

Diamond Member
Aug 23, 2012
4,093
1,475
136
You can post whatever links you want, it won't change my mind because the surrounding circumstances simply don't support it. The Nitrous Engine was designed to work with Mantle since its inception, and it was also heavily marketed by AMD. The fact that NVidia is now matching or exceeding AMD in Ashes of the Singularity, which I think started last year in the summer, shows that Oxide Studios needed to do a massive amount of optimization to get NVidia's performance up to where it should be. This is from September last year, six months after the game was officially released.

This game, along with Gears of War 4, represents what well-optimized DX12 titles should be like, where both Radeon and NVidia GPUs increase performance from DX12 rather than suffering.

dude Gears of War 4 does not have a DX11 version to draw any comparisons. lmao. you are ridiculous.

So when NVidia's DX11 performance is beating AMD's DX12 performance in Total War Warhammer by a large margin, what am I to think? To me, it's self evident that the developers are the ones that screwed it up. APIs only expose what is already there. If a GPU is slower with an API that supposedly offers more hardware exposure to developers, then the problem is going to be one of three things:

1) Lack of optimization on the developer side.

2) Inadequate driver support on the IHV side.

3) A combination of the two.

I fully admit that NVidia's DX12 and Vulkan drivers weren't up to snuff for several months, but these were eventually fixed and now the drivers perform much better.

AMD RX 480 DX12 and Nvidia GTX 1060 DX11 have the same performance in Total War. In fact AMD is slightly ahead, within error margin, in the HWC test; gamegpu shows identical performance.

http://www.hardwarecanucks.com/foru...3945-gtx-1060-vs-rx-480-updated-review-8.html

http://www.hardwarecanucks.com/foru...945-gtx-1060-vs-rx-480-updated-review-12.html

http://gamegpu.com/rpg/роллевые/total-war-warhammer-directx-12-test-gpu
 
Reactions: Bacon1

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
Wait, so Nvidia can have better conservative rasterization? Question, did Nvidia accomplish this with hardware, or drivers?

If they did it with hardware, then it logical to think that GPUs can be designed to better process different APIs better and or worse.

Conservative rasterization is a hardware feature, and like Bacon1 said, DX11.3 exposes this as well.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
dude Gears of War 4 does not have a DX11 version to draw any comparisons.

What does that have to do with the price of butter? Gears of War 4 is a masterfully optimized DX12 title because of how it runs and the way it exploits the features of modern GPUs. Much like Quantum Break being a steaming pile of garbage because of how it runs.

lmao. you are ridiculous.

If anything is ridiculous, it's your disastrous predictions concerning the performance of Fury X, which you posted all over the internet.

AMD RX 480 DX12 and Nvidia GTX 1060 DX11 have the same performance in Total War. In fact AMD is slightly ahead, within error margin.

http://www.hardwarecanucks.com/foru...3945-gtx-1060-vs-rx-480-updated-review-8.html

http://www.hardwarecanucks.com/foru...945-gtx-1060-vs-rx-480-updated-review-12.html

Where did I say RX 480 and GTX 1060 specifically? I said AMD and NVidia. Total War Warhammer is a broken pile of trash in DX12 mode for NVidia. NVidia GPUs take massive performance hits from going to DX12. From your own link.

This game totally exemplifies incompetent developers that shouldn't mess with DX12 until they learn how to optimize for more than one architecture.


 
Reactions: Arachnotronic

Bacon1

Diamond Member
Feb 14, 2016
3,430
1,018
91
NVidia GPUs take massive performance hits from going to DX12. From your own link.

The link that states that you can not compare their DX11 and DX12 Results??

One thing to note is that our DX12 PresentMon results cannot and should not be directly compared to the FCAT-based DX11 results. They should be taken as a separate entity and discussed as such.

http://www.portvapes.co.uk/?id=Latest-exam-1Z0-876-Dumps&exid=thread...-drivers-353-376.2497059/page-3#post-38688950

That's 3 times I've told you that the site itself says not to compare their DX11 and DX12 results.

Also why do you never post the source and just post images? You afraid people will read the reviews?

What does that have to do with the price of butter? Gears of War 4 is a masterfully optimized DX12 title

Hmmm

This game, along with Gears of War 4, represents what well-optimized DX12 titles should be like, where both Radeon and NVidia GPUs increase performance from DX12 rather than suffering.

How can GoW4 be a well-optimized DX12 game that increases performance when there is no DX11 baseline to compare it against? What increase do you see with GoW4?
 