[Guru3d] Hitman (2016) DirectX 12 updated benchmarks review

Page 8

tential

Diamond Member
May 13, 2008
7,355
642
121
I don't even understand what you are trying to say...
Some CPUs don't need multithreading, not for today's GPUs at least. Let's wait for Polaris and Pascal and look again.
Other CPUs absolutely need it to drive today's GPUs, not to even mention the new gen of GPUs.

You guys only look at benchmarks with the top Intel CPUs and draw conclusions on scaling or hardware support, but you are forgetting that Mantle/DX12 was made from the ground up for a combination of "way too weak CPU cores" + "way more powerful GPU cores".
If 2 or even 4 cores max out current GPUs, then you can't see any scaling beyond those 2 or 4 cores. How is this so hard to understand?
So we should purposely allow games to only scale for current GPUs and not account for faster GPUs?

Thank God you aren't running game development or I'd be screwed, since I always play games on a GPU that wasn't out when the game was released. When I play the DX12 games that are out now, I won't be using Polaris or Pascal but the unreleased Vega...
 

Kenmitch

Diamond Member
Oct 10, 1999
8,505
2,249
136
So we should purposely allow games to only scale for current GPUs and not account for faster GPUs?

Thank God you aren't running game development or I'd be screwed, since I always play games on a GPU that wasn't out when the game was released. When I play the DX12 games that are out now, I won't be using Polaris or Pascal but the unreleased Vega...

It's more like "you can't fix what isn't broken"; that's the way I took his statement, as far as the CPU side goes.

Newer GPU tech should increase performance in older and new games.
 
Last edited:

Beer4Me

Senior member
Mar 16, 2011
564
20
76
LOL, so in order for DX12 to succeed, we all need to buy 6+ core CPUs and $500+ GPUs. Hurrah! /sarcasm
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,361
136
There is a difference between CPU core/thread scaling and game engine (API) multi-thread/multi-core performance.

Current GPUs may not take full advantage of faster CPUs like the 10-core Core i7s, but AoTS can scale up to 16 threads.
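
To make the engine-side distinction concrete: under DX11 almost all submission funnels through one immediate context, while DX12 lets every worker thread record its own command list and only the final submit is serialized. A minimal C++/D3D12 sketch of that pattern (not AoTS's actual code; error handling and the real draw calls are omitted, and kWorkerCount is a made-up knob):

Code:
#include <d3d12.h>
#include <wrl/client.h>
#include <thread>
#include <vector>
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main() {
    // Default adapter, minimum feature level for D3D12.
    ComPtr<ID3D12Device> device;
    D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));

    D3D12_COMMAND_QUEUE_DESC qdesc = {};
    qdesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ComPtr<ID3D12CommandQueue> queue;
    device->CreateCommandQueue(&qdesc, IID_PPV_ARGS(&queue));

    const int kWorkerCount = 4; // e.g. one recording thread per core
    std::vector<ComPtr<ID3D12CommandAllocator>> allocs(kWorkerCount);
    std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(kWorkerCount);
    for (int i = 0; i < kWorkerCount; ++i) {
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&allocs[i]));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocs[i].Get(), nullptr,
                                  IID_PPV_ARGS(&lists[i]));
    }

    // Each thread records its slice of the frame independently -- this is
    // the part DX11 cannot spread across cores.
    std::vector<std::thread> workers;
    for (int i = 0; i < kWorkerCount; ++i) {
        workers.emplace_back([&lists, i] {
            // ... SetPipelineState / DrawInstanced calls would go here ...
            lists[i]->Close(); // finish recording on this thread
        });
    }
    for (auto& t : workers) t.join();

    // Submission itself is one cheap, serialized call.
    ID3D12CommandList* raw[kWorkerCount];
    for (int i = 0; i < kWorkerCount; ++i) raw[i] = lists[i].Get();
    queue->ExecuteCommandLists(kWorkerCount, raw);
    return 0;
}

Whether all that parallel recording shows up as a higher frame rate still depends on the GPU having headroom, which is exactly the core-scaling point above.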
 

tential

Diamond Member
May 13, 2008
7,355
642
121
Funny how people have so many negative things to say about DX12, but can't find a single dev championing DX11 over DX12....
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Funny how people have so many negative things to say about DX12, but can't find a single dev championing DX11 over DX12....

Ironically, some of those people spent weeks trying to show how GCN lacks certain DX12 features (12.1, etc.) and how Maxwell was the way to go for DX12 gaming. Now that the 7970/280X demolishes the 680/770/780 and the 290-390X demolish the 970/980 in DX12 games, DX12 is downplayed as supposedly a waste of time for developers. Yet Deus Ex: Mankind Divided is a DX12 game, and one of the most highly anticipated titles of 2016.

https://www.youtube.com/watch?v=ZGGaVCCMgfw

I guess OG Titan and $550 980 owners aren't happy that a $299 280X/7970 GHz and a $330 R9 390 are giving their cards a run for the $ in modern games, especially DX12 titles. You can almost predict that once these same people upgrade to Pascal's GP104/100/102, and Pascal does much better in DX12 than Maxwell, the horrible performance of Kepler since November 2014 and the 2016 DX12 downfall of every Maxwell card besides the 980 Ti will be forgotten quickly. Oh, and don't forget: as soon as we see UE4+DX12 with Pascal dominating Vega, DX12 will be hyped like the next coming of.....

I could understand such hatred of DX12 coming from total PC noobs, but from veteran PC gamers? These people saw 1st and 2nd generation DX9 cards from the DX8.1 era struggle with next-gen DX9 games, and the then-modern DX10 GPUs struggling to run DX10 games faster than DX9.

All new APIs of the past took time to be perfected and optimized for, and for next-gen games to be made from the ground up to take advantage of them. You have to start somewhere. The early DX12 benchmarks clearly show the most severe CPU bottlenecks being lifted even on mid-range cards like the R9 390, where 60-80% performance improvements are seen on older CPU architectures such as the FX-8370, etc.

We should all be grateful that PC gaming is slowly shifting towards more multi-threaded/parallel workloads where we could actually benefit from 6-10 core i7s. But I guess because NV's Fermi, Kepler and Maxwell cards are performing so poorly in DX12, DX12 is being downplayed. This reminds me of the massive butthurt that was around when the FX 5000 series bombed in next-gen titles and GeForce 7 got absolutely destroyed by AMD's cards in next-gen shader-intensive games. The exact same butthurt followed those NV GPU architectures -- pure defense....

Recall how the nominally DX9-capable FX 5900 series became hopeless trash for actual DX9 games.

NV actually has a remarkable track record of delivering GPU architectures that last about 2 years and then fall apart in modern titles thereafter. Looking back, GeForce 7 went through something very similar in the modern titles of its time, just as Kepler and Maxwell are now showing signs of struggle under DX12.

No worries, that's part of NV's planned obsolescence marketing plan, just like the good old days of GeForce 5 and 7, long since forgotten. If Pascal shows massive gains over Maxwell under DX12 in 2016-2017, well, there is our answer.
 
Last edited:

Zodiark1593

Platinum Member
Oct 21, 2012
2,230
4
81
Didn't the Radeon 9800 also rule the roost for several gens afterward, at least until the GeForce 8800 (the outlier in planned obsolescence)?

That said, I also strongly detest the lack of AMD's driver optimizations during the early days of GCN. To think that you buy the card now, then have to wait several years for drivers to finally make full use of it, strikes me as just as asinine as Nvidia's planned obsolescence.

It was either buy a card that's good now but performs poorly a couple of years later, or buy a "meh" card now and wait for it to pick up steam (get it?) later. There wasn't much in the way of "buy a kickass card now that is still taking names years later."

Here's hoping we don't repeat the beginning of 28 nm.
 
Last edited:

tential

Diamond Member
May 13, 2008
7,355
642
121
Didn't the Radeon 9800 also rule the roost for several gens afterward, at least until the GeForce 8800 (the outlier in planned obsolescence)?

That said, I also strongly detest the lack of AMD's driver optimizations during the early days of GCN. To think that you buy the card now, then have to wait several years for drivers to finally make full use of it, strikes me as just as asinine as Nvidia's planned obsolescence.

It was either buy a card that's good now but performs poorly a couple of years later, or buy a "meh" card now and wait for it to pick up steam (get it?) later. There wasn't much in the way of "buy a kickass card now that is still taking names years later."

Here's hoping we don't repeat the beginning of 28 nm.

AMD has vastly improved their driver game since then. We aren't waiting months for AMD to patch in game support anymore. AMD used to release driver updates rarely; this year they've come even more often than anyone expected.
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,361
136
Didn't the Radeon 9800 also rule the roost for several gens afterward, at least until the GeForce 8800 (the outlier in planned obsolescence)?

That said, I also strongly detest the lack of AMD's driver optimizations during the early days of GCN. To think that you buy the card now, then have to wait several years for drivers to finally make full use of it, strikes me as just as asinine as Nvidia's planned obsolescence.

It was either buy a card that's good now but performs poorly a couple of years later, or buy a "meh" card now and wait for it to pick up steam (get it?) later. There wasn't much in the way of "buy a kickass card now that is still taking names years later."

Here's hoping we don't repeat the beginning of 28 nm.

Sorry, but Tahiti (HD 7970 GHz) on release was on par with or faster than Kepler (GTX 680).

Same with Hawaii: the R9 290X was faster than the GTX 780 on day one.

Both GCN cards pulled way ahead of Kepler in the following years. So you had top-notch performance on day one AND even better performance 2-3 years later.
 

Zodiark1593

Platinum Member
Oct 21, 2012
2,230
4
81
Sorry, but Tahiti (HD 7970 GHz) on release was on par with or faster than Kepler (GTX 680).

Same with Hawaii: the R9 290X was faster than the GTX 780 on day one.

Both GCN cards pulled way ahead of Kepler in the following years. So you had top-notch performance on day one AND even better performance 2-3 years later.

I do remember the 7970 GHz Edition being on par with or faster than the GTX 680, though I don't think the original did nearly as well. If I remember correctly, the 680 was lauded on AnandTech for landing the "technical trifecta" against the original (stock) 7970 while also being cheaper. Later, the 970 would get the same praise from AnandTech vs the 290.

As for the 290X, I probably only looked at reference benchies (in fact, I tend to look only at reference benchmarks when comparing cards). Quite odd that the reference part would be the one to exhibit throttling, but ehh.
 
Last edited:

AtenRa

Lifer
Feb 2, 2009
14,003
3,361
136
I do remember the 7970 GHz Edition being on par with or faster than the GTX 680, though I don't think the original did nearly as well. If I remember correctly, the 680 was lauded on AnandTech for landing the "technical trifecta" against the original (stock) 7970 while also being cheaper. Later, the 970 would get the same praise from AnandTech vs the 290.

As for the 290X, I probably only looked at reference benchies (in fact, I tend to look only at reference benchmarks when comparing cards). Quite odd that the reference part would be the one to exhibit throttling, but ehh.


GTX 680 release

https://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_680/27.html


HD7970 GHz release

https://www.techpowerup.com/reviews/AMD/HD_7970_GHz_Edition/28.html


R9 290X on release

https://www.techpowerup.com/reviews/AMD/R9_290X/27.html
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Didn't the Radeon 9800 also rule the roost for several gens afterward, at least until the GeForce 8800 (the outlier in planned obsolescence)?

That said, I also strongly detest the lack of AMD's driver optimizations during the early days of GCN. To think that you buy the card now, then have to wait several years for drivers to finally make full use of it, strikes me as just as asinine as Nvidia's planned obsolescence.

This is a fallacy created by Kepler owners to justify buying the inferior NV architecture of the time. The HD 7970 GHz was the fastest card as of June 2012. That means it only took AMD 5 months to catch up, and it went on to beat the 680 over the next 4 years.


https://www.techpowerup.com/reviews/AMD/HD_7970_GHz_Edition/28.html

Even the after-market 680s couldn't beat the 7970 GHz as of June 2012.

To this day, Kepler owners deny these facts. Instead, they perpetuate the myth that it took AMD "years" to get GCN to perform well. The reality is that GCN already performed well in 2012, and it only got better over time against Kepler. It's so bad today that a 7950 OC easily outperforms a 680/770 in modern games.

Where AMD struggled was CF frame pacing.

Sorry but Tahiti (HD7970 GHz) on release vs Kepler (GTX680) was on par or faster.

Same with Hawaii, R9 290X was faster than GTX 780 on release day one.

Both of GCN cards got way better than Kepler in the coming years. So you had top notch performance at day one AND even better performance 2-3 years later.

Bingo. Day 1, $399 R9 290 beat the 780.

AnandTech Day 1 launch review of R9 290 November 2013:

"The 290 is so fast and so cheap that on a pure price/performance basis you won’t find anything quite like it. At $400 AMD is delivering 106% of the $500 GeForce GTX 780’s performance, or 97% of the $550 Radeon R9 290X’s performance."
http://www.anandtech.com/show/7481/the-amd-radeon-r9-290-review/17

Same thing with the R9 295X2 and R9 290X CF dropping to within $50-100 of the GTX 980 within weeks of that card's launch.

What sells most NV cards is mind-share and marketing, as well as locking in their userbase with proprietary tech like PhysX, GSync and TXAA, and myths of better driver support (oh, the irony of Fermi, Kepler and sub-980Ti Maxwell cards all underperforming AMD's competition given their prices at the time).

Notice how the 7970/R9 280X went from competing with the 680, then the 770, then the 780/OG Titan. Now the 290/290X (aka 390/390X) have gone from competing with the 780/780 Ti to competing with the 970/980, when these cards were never designed to compete with those products. It's amazing, given how well GCN has aged, that people continue to pay premiums and/or buy NV. If I owned a 660-780 Ti level card, there would be 0 chance I'd buy NV the next round given how poorly my card would have aged.
 

Zodiark1593

Platinum Member
Oct 21, 2012
2,230
4
81
Ah, OK, I think I remember now. Hawaii was the response to GK110. Makes sense it would beat out the 780. That last benchmark shows the 680 pulling ahead of the 7970 GHz/280X though (the one AtenRa posted).

What is interesting is that even though Titan has more raw compute, the 280X is really giving it a run for its money now.

I certainly wouldn't argue driver support on Nvidia's part, especially if the GPU-killing drivers are to be believed.
 
Last edited:

AtenRa

Lifer
Feb 2, 2009
14,003
3,361
136
This is a fallacy created by Kepler owners to justify buying the inferior NV architecture of the time. The HD 7970 GHz was the fastest card as of June 2012. That means it only took AMD 5 months to catch up, and it went on to beat the 680 over the next 4 years.

Actually, it only took 3 months:

HD7970 January 2012 = Fastest card

GTX 680 March 2012 = Fastest card

HD7970 GHz June 2012 = Fastest card
 

book_ed

Member
Apr 8, 2016
29
0
6
I don't even understand what you are trying to say...
Some CPUs don't need multithreading, not for today's GPUs at least. Let's wait for Polaris and Pascal and look again.
Other CPUs absolutely need it to drive today's GPUs, not to even mention the new gen of GPUs.

You guys only look at benchmarks with the top Intel CPUs and draw conclusions on scaling or hardware support, but you are forgetting that Mantle/DX12 was made from the ground up for a combination of "way too weak CPU cores" + "way more powerful GPU cores".
If 2 or even 4 cores max out current GPUs, then you can't see any scaling beyond those 2 or 4 cores. How is this so hard to understand?

For my test I used the latest build of Windows 10 on a PC with an eight-core Core i7-5960X, 32GB of DDR4/2133 RAM, and an AMD Radeon Fury X GPU. To see the effect of losing cores, I manually switched off cores and Hyper-Threading while running the test.
Note that as you scale back the core count on the chip, Turbo Boost reacts by giving you a little more clock speed. Rather than turn off Turbo Boost, I’ll just note that up to two cores with Hyper-Threading, the chip runs at 3.5GHz. Beyond that it ran at 3.3GHz. In an ideal world, I'd use different CPUs, as each specific chip reacts a little differently, but this is a pretty reasonable approximation.



http://www.pcworld.com/article/3039...es-you-really-need-for-directx-12-gaming.html
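
If you want to approximate that kind of test yourself without toggling cores in firmware, here is a rough C++ sketch using the Windows affinity API (kLogicalCores is a made-up name; note this pins threads rather than powering cores down, so the Turbo Boost behaviour described above will differ):

Code:
#include <windows.h>
#include <cstdio>

int main() {
    const DWORD kLogicalCores = 4; // simulate a quad-core with no extra threads
    DWORD_PTR mask = ((DWORD_PTR)1 << kLogicalCores) - 1; // bits 0..3 set

    // Restrict this process (and any children it spawns) to those cores.
    if (!SetProcessAffinityMask(GetCurrentProcess(), mask)) {
        std::printf("SetProcessAffinityMask failed: %lu\n", GetLastError());
        return 1;
    }
    std::printf("Pinned to %lu logical cores; launch the benchmark from here.\n",
                kLogicalCores);
    // ... CreateProcess() the game here so it inherits the affinity mask ...
    return 0;
}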

From nVIDIA itself - https://www.youtube.com/watch?v=i-grTutoJNE&feature=youtu.be&t=3m13s

"6 to 12x more than we could do with DX11"... also, textures are 8K by 8K, significantly more than we could do before.
"But that's 4x Titan X!" True, but unfortunately this clip doesn't include the part where the guy says it was rendered at 4K and downsampled to 1080p.

Also, nVIDIA had the Forza demo running on a Titan at 60fps, which by that demo's own statement was impossible under DX11.
 
Last edited:

Azix

Golden Member
Apr 18, 2014
1,438
67
91
Ah, OK, I think I remember now. Hawaii was the response to GK110. Makes sense it would beat out the 780. That last benchmark shows the 680 pulling ahead of the 7970 GHz/280X though (the one AtenRa posted).

What is interesting is that even though Titan has more raw compute, the 280X is really giving it a run for its money now.

I certainly wouldn't argue driver support on Nvidia's part, especially if the GPU-killing drivers are to be believed.

Saying a card was a "response" doesn't really mean that much. The chips take years to develop. One coming out a few months earlier does not mean much, since they were probably done or nearly done with the chip when Kepler came out. You can't change the fundamental architecture at that point.

Coming out later does give you the advantage of trying to undercut the competition on pricing and clock speed. And Hawaii was significantly smaller than big Kepler, so it could be cheaper. This could actually repeat with Pascal and Polaris.

Not sure why this discussion started here though, lol. Just chiming in on what's being talked about.
 
Feb 19, 2009
10,457
10
76
Ironically, some of those people spent weeks trying to show how GCN lacks certain DX12 features (12.1, etc.) and how Maxwell was the way to go for DX12 gaming. Now that the 7970/280X demolishes the 680/770/780 and the 290-390X demolish the 970/980 in DX12 games, DX12 is downplayed as supposedly a waste of time for developers. Yet Deus Ex: Mankind Divided is a DX12 game, and one of the most highly anticipated titles of 2016.

NV actually has a remarkable track record of delivering GPU architectures that last about 2 years and then fall apart in modern titles thereafter. Looking back, GeForce 7 went through something very similar in the modern titles of its time, just as Kepler and Maxwell are now showing signs of struggle under DX12.

No worries, that's part of NV's planned obsolescence marketing plan, just like the good old days of GeForce 5 and 7, long since forgotten. If Pascal shows massive gains over Maxwell under DX12 in 2016-2017, well, there is our answer.

Great post man, as usual, backed with solid evidence.

I have no doubt that Pascal will do much better than Maxwell in modern games, DX11 or DX12.

Pascal has functional fine-grained preemption, something Maxwell claims but actually cannot do (a lie), and Kepler lacks entirely. Not only do the older GPUs lack this feature, so that compute can't interrupt a graphics task already in the pipe; even basic graphics & compute rendering requires a slow context switch. This means that as games use more compute, these GPUs will tank in performance.

Pascal will not tank in compute-heavy games, because its engine has become GCN-like in handling graphics & compute. Even if it lacks multi-engine (ACEs) and cannot do async compute, it won't be gimped by it.

Credit goes to @zlatan, as he said this about Pascal a long time ago, before anyone had any idea. NV's detailed info release about P100 confirms the above features.

Given this large capability gap that Pascal brings, what is the best way to optimize for it so that it performs great in modern games? More compute usage for effects & physics. Expect to see some Pascal-optimized, compute-heavy GameWorks features.
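
A toy frame-time model (all numbers invented for illustration) of why that slow context switch matters more and more as games lean on compute:

Code:
#include <algorithm>
#include <cstdio>

int main() {
    const double graphics_ms = 12.0; // per-frame graphics work
    const double compute_ms  = 5.0;  // per-frame compute (effects, physics)
    const double switch_ms   = 1.5;  // assumed cost of one context switch

    // No preemption/overlap: graphics, switch, compute, switch back.
    double serialized = graphics_ms + compute_ms + 2.0 * switch_ms; // 20.0 ms

    // Overlapped/cheaply interleaved (what GCN's async engines do, and what
    // fine-grained preemption approximates): compute hides under graphics,
    // so the frame costs roughly the longer of the two.
    double overlapped = std::max(graphics_ms, compute_ms); // 12.0 ms

    std::printf("serialized: %.1f ms (%.0f fps)\n", serialized, 1000.0 / serialized);
    std::printf("overlapped: %.1f ms (%.0f fps)\n", overlapped, 1000.0 / overlapped);
    return 0;
}

Push compute_ms up and the serialized path falls further behind, which is exactly the "more compute usage for effects & physics" scenario.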
 

maddie

Diamond Member
Jul 18, 2010
4,787
4,771
136
Great post man, as usual, backed with solid evidence.

I have no doubt that Pascal will do much better than Maxwell in modern games, DX11 or DX12.

Pascal has functional fine-grained preemption, something Maxwell claims but actually cannot do (a lie), and Kepler lacks entirely. Not only do the older GPUs lack this feature, so that compute can't interrupt a graphics task already in the pipe; even basic graphics & compute rendering requires a slow context switch. This means that as games use more compute, these GPUs will tank in performance.

Pascal will not tank in compute-heavy games, because its engine has become GCN-like in handling graphics & compute. Even if it lacks multi-engine (ACEs) and cannot do async compute, it won't be gimped by it.

Credit goes to @zlatan, as he said this about Pascal a long time ago, before anyone had any idea. NV's detailed info release about P100 confirms the above features.

Given this large capability gap that Pascal brings, what is the best way to optimize for it so that it performs great in modern games? More compute usage for effects & physics. Expect to see some Pascal-optimized, compute-heavy GameWorks features.
Won't that play to AMD's strengths? It will gimp previous Nvidia generations however.

That's a dilemma.
 

linkgoron

Platinum Member
Mar 9, 2005
2,335
857
136
Won't that play to AMD's strengths? It will gimp previous Nvidia generations however.

That's a dilemma.
Not sure if you're being sarcastic or not, but I don't see the dilemma. I expect Nvidia domination at least until Vega, and until then Nvidia needs to hit hard and do everything it can to widen the gap between Maxwell and Pascal, to entice upgrades. We might even see it with Battlefield.

Once Vega arrives - who cares? They'll already have upgraded to Pascal.
 
Last edited:
Feb 19, 2009
10,457
10
76
Won't that play to AMD's strengths? It will gimp previous Nvidia generations however.

That's a dilemma.

In the longer term...

Here's a thought: since it's DX12, non-optimization has a huge impact. If game devs only test on NV hardware, the release will look like GOW:U.

DX12 is more powerful, but it also puts more responsibility on developers to get it right. You can see how throwing $$ at some publishers could make DX12 a powerful weapon.
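
One concrete example of that added responsibility, sketched in C++ (a fragment, with cmdList and texture assumed to come from earlier setup): D3D11 drivers track resource hazards for you, while D3D12 makes the developer insert transition barriers by hand, and a missed or wrong barrier is exactly the kind of bug that only surfaces on the hardware nobody tested:

Code:
#include <d3d12.h>

// Transition a texture we just rendered into so a later pass can sample it.
// In D3D11 the driver inferred this; in D3D12 forgetting it is undefined
// behaviour that may still "work" on the one GPU the developer tested.
void TransitionForSampling(ID3D12GraphicsCommandList* cmdList,
                           ID3D12Resource* texture) {
    D3D12_RESOURCE_BARRIER barrier = {};
    barrier.Type = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION;
    barrier.Transition.pResource   = texture;
    barrier.Transition.Subresource = D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES;
    barrier.Transition.StateBefore = D3D12_RESOURCE_STATE_RENDER_TARGET;
    barrier.Transition.StateAfter  = D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE;
    cmdList->ResourceBarrier(1, &barrier);
}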
 

Zodiark1593

Platinum Member
Oct 21, 2012
2,230
4
81
Won't that play to AMD's strengths? It will gimp previous Nvidia generations however.

That's a dilemma.
I doubt Maxwell will be gimped so much as compute-limited. Even in AMD-biased games, Maxwell seems to fall in line with GCN according to raw compute (the 280X closing in on the 970 due to having similar compute capabilities).

I believe games right now are utilizing Maxwell fairly well. Right now, getting more brute force in there is probably the main focus behind Pascal. We'll see how that works out. Without ACEs it will probably be difficult to maintain peak efficiency in all situations, but if they can effectively utilize the chip without them, they could very well simply out-brute Polaris now, then work on ACEs and other such additions later.

One other thing is the bad PR Nvidia has been getting as of late. Enthusiasts are already fully aware of Nvidia GPUs losing steam against their competitors, not to mention the memory issue in the 970, and now the poor drivers; it would be dumb to recommend an Nvidia card at all unless CUDA were a necessity. I'm not sure Nvidia can pull this off again and come away unscathed.
 

Cookie Monster

Diamond Member
May 7, 2005
5,161
32
86
Didn't the Radeon 9800 also rule the roost for several gens afterward, at least until the GeForce 8800 (the outlier in planned obsolescence)?

That said, I also strongly detest the lack of AMD's driver optimizations during the early days of GCN. To think that you buy the card now, then have to wait several years for drivers to finally make full use of it, strikes me as just as asinine as Nvidia's planned obsolescence.

It was either buy a card that's good now but performs poorly a couple of years later, or buy a "meh" card now and wait for it to pick up steam (get it?) later. There wasn't much in the way of "buy a kickass card now that is still taking names years later."

Here's hoping we don't repeat the beginning of 28 nm.

Nope. It was quite the card against the mediocre FX series, but once the 6600GT came out (my favourite card of that period, alongside the 6800GT!), it was pretty much obsolete.

Not until they released the X1900 series did they have a winner, but truth be told, they got hammered in the mid-performance range by the 6600GT/7600GT/7800/7900GT/GS based cards that were sold for cheap while delivering good performance (and ATi didn't have a good mid-range card back in those days for several gens...).

edit - actually, thinking about it, it was the mid-range x600 cards (especially from the 6000/7000 series) that really gave nVIDIA a boost in brand perception, because of the performance they provided while being affordable for the average gamer. These cards let gamers play, say, Doom 3/HL2 at respectable settings, compared to the lacklustre competition.
 
Last edited:

tential

Diamond Member
May 13, 2008
7,355
642
121
I doubt Maxwell will be gimped so much as compute-limited. Even in AMD-biased games, Maxwell seems to fall in line with GCN according to raw compute (the 280X closing in on the 970 due to having similar compute capabilities).

I believe games right now are utilizing Maxwell fairly well. Right now, getting more brute force in there is probably the main focus behind Pascal. We'll see how that works out. Without ACEs it will probably be difficult to maintain peak efficiency in all situations, but if they can effectively utilize the chip without them, they could very well simply out-brute Polaris now, then work on ACEs and other such additions later.

One other thing is the bad PR Nvidia has been getting as of late. Enthusiasts are already fully aware of Nvidia GPUs losing steam against their competitors, not to mention the memory issue in the 970, and now the poor drivers; it would be dumb to recommend an Nvidia card at all unless CUDA were a necessity. I'm not sure Nvidia can pull this off again and come away unscathed.
Well yes, the 290/290X would have been monsters if they had released with decent coolers and optimized drivers.

But that's a dream world. In reality, the 290/290X were competitive at release, and AMD kept them competitive through driver optimizations.

If Nvidia had done the same thing, the Kepler GPUs would be in line with the 290X and we would be talking about 290X vs 780 instead of 290X vs 980. The 290X stayed a top-of-the-line choice over 2 generations.... That's ridiculous. It takes serious brand bias to deny the performance and staying power Hawaii achieved.

Nvidia is capable of optimizing Kepler and Maxwell further.... I mean, why is it that the 290X has overtaken 2 generations of Nvidia GPUs?

Nvidia starts strong, makes its sales on day 1 with drivers optimized for the launch benchmark suites, and then later it all falls apart. This is 2 generations in a row.

I'm not going to be surprised when the Fury X is consistently surpassing the 980 Ti in DirectX 12 games, and when the 980 Ti is discontinued (we already have reports of this) the Fury X will have to compete against the 1070. I think the 1080 will be top dog for sure.
 

xthetenth

Golden Member
Oct 14, 2014
1,800
529
106
The reason the 290X did so well for longevity is that it's a flat-out better chip than the 780, and it's built for a future that AMD made.

NV's architectures have been consistently short-sighted because they can't read the future as well as AMD, because AMD is the one making that future (its chips power the consoles that games are built around).
 
Feb 19, 2009
10,457
10
76
The reason the 290X did so well for longevity is that it's a flat-out better chip than the 780, and it's built for a future that AMD made.

NV's architectures have been consistently short-sighted because they can't read the future as well as AMD, because AMD is the one making that future (its chips power the consoles that games are built around).

It's not like that at all.

NV's architectures were made to excel now, because the now matters more to people deciding which GPU to buy: they look at current benchmarks and see that NV is on top. They don't think about how the GPUs will stack up a year or two down the road.

Thus, NV's architectures give them the dominant position, and it's the winning design. It also promotes more frequent upgrades, which generates more revenue for NV. It's a good strategy.
 
Last edited: