AVX2 and FMA3 in games


Techhog

Platinum Member
Sep 11, 2013
2,834
2
26
No, thank goodness. Having to turn off my overclock and requiring aftermarket cooling just to game at stock clocks on Haswell and later would suck. Intel really needs to sort out the overvolting/overheating issue before anyone even thinks of using those.
 

Techhog

Platinum Member
Sep 11, 2013
2,834
2
26
I'm no programmer, but I was under the impression that extensions such as AVX2 were backward compatible with older extensions. For example, a new CPU like Haswell or Skylake would run the fastest codepath with AVX2, while a CPU like Sandy Bridge would use the same codepath but with less throughput/performance due to lacking AVX2.

I really have to wonder though at some of the massive performance gains on the CPU side seen in recent games, such as Dying Light for instance. They went from this at the game's launch:



To this 11 months later, a more than doubling of performance for many CPUs on that list.



And the game was definitely CPU limited when it first shipped, but now it performs very well. So I wonder: did they get these gains by exploiting more vectorization, or was it all due to better multithreading?

It seems more the latter, as CPUs with more threads/cores gained more performance.

You might want to take a look at the GPUs being used for those tests.
 

superstition

Platinum Member
Feb 2, 2008
2,219
221
101
It's not an issue to have AVX2 support and still run the application on CPUs without AVX2. This is essentially what all the "Intel compiler cheats" talk was about in the old days.
That is not what I'm getting from this:
Agner said:
Unfortunately, software compiled with the Intel compiler or the Intel function libraries has inferior performance on AMD and VIA processors. The reason is that the compiler or library can make multiple versions of a piece of code, each optimized for a certain processor and instruction set, for example SSE2, SSE3, etc. The system includes a function that detects which type of CPU it is running on and chooses the optimal code path for that CPU. This is called a CPU dispatcher. However, the Intel CPU dispatcher does not only check which instruction set is supported by the CPU, it also checks the vendor ID string. If the vendor string says "GenuineIntel" then it uses the optimal code path. If the CPU is not from Intel then, in most cases, it will run the slowest possible version of the code, even if the CPU is fully compatible with a better version.

I have complained about this behavior for years, and so have many others, but Intel have refused to change their CPU dispatcher. If Intel had advertised their compiler as compatible with Intel processors only, then there would probably be no complaints. The problem is that they are trying to hide what they are doing. Many software developers think that the compiler is compatible with AMD processors, and in fact it is, but unbeknownst to the programmer it puts in a biased CPU dispatcher that chooses an inferior code path whenever it is running on a non-Intel processor. If programmers knew this fact they would probably use another compiler. Various people have raised suspicion that the biased CPU dispatching has made its way into common benchmark programs.

When I started testing Intel's compiler several years ago, I soon found out that it had a biased CPU dispatcher. Back in January 2007 I complained to Intel about the unfair CPU dispatcher. I had a long correspondence with Intel engineers about the issue, where they kept denying the problem and I kept providing more evidence. Sounds nice, but the truth is that the CPU dispatcher didn't support SSE or SSE2 or any higher SSE in AMD processors and still doesn't today (Intel compiler version 11.1.054). I have later found out that others have made similar complaints to Intel and got similarly useless answers.

After Intel had flatly denied to change their CPU dispatcher, I decided that the most efficient way to make them change their minds was to create publicity about the problem. I contacted several IT magazines, but nobody wanted to write about it. Sad, but not very surprising, considering that they all depend on advertising money from Intel.

http://www.agner.org/optimize/blog/read.php?i=49

What that says is that the issue was refusing to run supported instructions on AMD CPUs in order to weaken their performance.
 

Nothingness

Diamond Member
Jul 3, 2013
3,054
2,021
136
You only back up what I say. There is nothing to prevent having both AVX2 and non-AVX2 support in the same application.
Indeed, but I guess he was reacting to this (bold is mine):
It's not an issue to have AVX2 support and still run the application on CPUs without AVX2. This is essentially what all the "Intel compiler cheats" talk was about in the old days.
What Intel did back then was to never take the fast path if the CPU wasn't an Intel one. That's what people called cheating. I wouldn't, for a simple reason: why should Intel pay for validation on a competitor's CPUs?

What I'd call cheating in the Intel compiler is benchmark detection for targeted optimization. But that's another discussion.
 

DrMrLordX

Lifer
Apr 27, 2000
22,020
11,594
136
SSE4.2
The consoles might help for AVX, but AVX2 is still not a really useful option.

If you are referring to the PS4 and XBone, do recall that their APUs are based on Jaguar. Unless there's a bugfix, they run AVX code half as fast as they should. There's still a performance increase, but it isn't all that great.

I personally think that HSA runtime is the best way to get AVX2 and even AVX512 support for the applications.

Why? As of now, HSA is only publicly supported under Linux, and even then only tenuously. And HSA has nothing to do with SIMD instruction sets. It would need to be re-engineered from the ground up to optimize code blocks for SIMD, which is something no developer should rationally need or want from their development toolset and/or the end-user's software stack.

ICC and GCC can do autovectorization, and even MS compilers can do it, to an extent. SIMD has been around in dev circles long enough that if they really want to use it, they ought to be able to figure it out already.

The main problem is that, how often does a video game provide you with situations where a game engine can take advantage of SIMD outside of the graphics pipeline? SIMD thrives in circumstances where you have lots of similar computational tasks involving the same data types (int or fp) of the same length (32-bit, 64-bit) without dependencies or branching interrupting the task flow. The most obvious application is in rendering tasks, but those are primarily offloaded to the GPU so that's out. Physics is another area, but again, the push is to move that to GPUs wherever possible. There are other areas where SIMD could be used. It just requires some outside-of-the-box thinking.
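To make that concrete, here's a rough sketch (purely illustrative, not from any shipping engine; the function and the blend operation are made up) of the kind of loop SIMD thrives on: one operation, no branches, no dependencies between iterations. With GCC or Clang you'd build it with -mavx2 -mfma.

```cpp
#include <immintrin.h>   // AVX2/FMA intrinsics
#include <cstddef>

// Hypothetical hot loop: a[i] = a[i] * scale + b[i] over a large float array.
void blend(float* a, const float* b, float scale, std::size_t n)
{
#if defined(__AVX2__) && defined(__FMA__)
    const __m256 vscale = _mm256_set1_ps(scale);
    std::size_t i = 0;
    for (; i + 8 <= n; i += 8) {                                   // 8 floats per 256-bit register
        __m256 va = _mm256_loadu_ps(a + i);
        __m256 vb = _mm256_loadu_ps(b + i);
        _mm256_storeu_ps(a + i, _mm256_fmadd_ps(va, vscale, vb));  // fused multiply-add
    }
    for (; i < n; ++i) a[i] = a[i] * scale + b[i];                 // scalar tail
#else
    for (std::size_t i = 0; i < n; ++i) a[i] = a[i] * scale + b[i];
#endif
}
```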

That platform is cheap and we can target a lot of extensions/accelerators with the same codebase. I really think that SYCL 2.1 will also be a revolutionary step for programmers.

At least from what I've seen of SYCL 1.2, it doesn't look like the Khronos Group is targeting HSA at all, or any kind of SIMD autovectorization. Again, that would be somewhat redundant...
Game developers aren't going to want to lose sales to people who don't have AVX2 CPUs. This is one of the reasons why game engines aren't being made to fully take advantage of eight threads. They need the games to run well on an i3.

Skylake i3s support AVX/AVX2, at least until you overclock them (though I'm not sure what the problem is there; hopefully the board OEMs can fix it).


I'm no programmer, but I was under the impression that extensions such as AVX2 were backward compatible with older extensions. For example, a new CPU like Haswell or Skylake would run the fastest codepath with AVX2, while a CPU like Sandy Bridge would use the same codepath but with less throughput/performance due to lacking AVX2.

That depends on the binary. If it has separate code paths, it'll use them. If it doesn't, the program could bomb out and refuse operation. Of course, if you use a JVM language, it can optimize for available SIMD extensions on the fly.
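To illustrate the "bomb out" case, here's a rough sketch (GCC/Clang builtins; the path functions are hypothetical) of a binary probing for AVX2 once at startup so it can fall back instead of faulting on the first AVX2 instruction:

```cpp
#include <cstdio>

// Hypothetical entry point: check the CPU once, then pick a code path.
int main()
{
    __builtin_cpu_init();                      // initialize GCC/Clang CPU feature detection
    if (__builtin_cpu_supports("avx2")) {
        std::puts("AVX2 present: use the fast code path");
        // run_avx2_path();                    // hypothetical
    } else {
        std::puts("No AVX2: fall back to the SSE2/scalar path");
        // run_fallback_path();                // hypothetical
    }
}
```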

And the game was definitely CPU limited when it first shipped, but now it performs very well. So I wonder: did they get these gains by exploiting more vectorization, or was it all due to better multithreading?

Shouldn't be too hard to tell, as you articulated here:

It seems more the latter, as CPUs with more threads/cores gained more performance.

though the best indicator would be to run the two different versions side-by-side and see how CPU behavior changed between versions. If core utilization went up among cores that were already active, then it was probably due to code optimization (including, maybe, SIMD extension use). If under-utilized or completely unutilized cores became active, then it was probably due to multithreading.

why should Intel pay for validation on a competitor's CPUs?

They shouldn't, but why be concerned at all about a competitor's CPU then? Validation implies that Intel would be/should be concerned over whether or not AMD's CPUs can conform to an ISA extension's rules by handling the instructions properly. Let AMD worry about that. If AMD's implementation of SSEwhatever or AVX is flawed, causing an ICC-compiled application to crash, then it's on AMD's head. Intel shouldn't worry about it one way or the other.

Let AMD's CPUs indicate compatibility for whatever ISA extension they think they can support, and let the program use it if it finds it. Intel never needed to check for the Intel label before allowing ICC-compiled apps to use SIMD. That was lame as hell.
 
Last edited:

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
Very informative posts, thanks guys :thumbsup:

You might want to take a look at the GPUs being used for those tests.

Good catch. But even the boost in GPU power cannot account for such a massive gain, practically a doubling of performance in many cases.

So obviously they made some major refinements to the engine during the 11-month gap between the game's release and the time that gamegpu redid the tests.

Physics is another area, but again, the push is to move that to GPUs wherever possible. There are other areas where SIMD could be used. It just requires some outside-of-the-box thinking.

As I mentioned in my OP, PhysX 3.xx uses AVX for cloth solving, so it appears that SIMD is seeing a lot more use in the various physics engines than in the main game engine...

This comes directly from one of the programmers of PhysX, who was in fact responding to one of my own posts on AnandTech concerning CPU and GPU PhysX.

If anything, CPU Physics is on the up and up, as NVidia has started to throw a lot of resources into developing their PhysX 3.xx SDK, and it runs primarily on the CPU and is extremely fast.

Witcher 3 has some of the best cloth physics in a game, and it runs completely on the CPU. A lot of the effects seen in current PhysX 3.xx titles are even better than what was seen in GPU accelerated PhysX titles 5 years ago.
 

TheRyuu

Diamond Member
Dec 3, 2005
5,479
14
81
What I'd call cheating in the Intel compiler is benchmark detection for targeted optimization. But that's another discussion.

It still does this if you use the /Qx or /Qax options. It does not do this (for your generated code) if you use the /arch options, although it may still do it with certain library functions. You can override the dispatcher logic, though, so that it treats Intel and AMD CPUs as equals[1] irrespective of the compiler options.

I'm not sure if anybody has mentioned this in this thread, but you can write code which makes use of the various new SIMD instruction sets (AVX/AVX2/FMA3/etc.) and still have it run on older processors (or, put another way, that is how it should be done for most things). This would typically be done in hot code that is called often, where it may be beneficial to do so (since you would typically have to write multiple versions)[2].

It may also automatically be occurring in certain library calls (such as the C/C++ runtime library) which will make use of these new instruction sets for certain things (e.g. memcpy) if your processor supports them. I find it hard to believe that games don't already do things this way for their really hot code that would benefit from it.

I'm no programmer, but I was under the impression that extensions such as AVX2 were backward compatible with older extensions. For example, a new CPU like Haswell or Skylake would run the fastest codepath with AVX2, while a CPU like Sandy Bridge would use the same codepath but with less throughput/performance due to lacking AVX2..

Once a CPU that doesn't support AVX2 comes across an AVX2 instruction, the program will crash. The program would have to make use of CPU dispatching based on what the processor supports in order to allow this. As I said above, that's probably how things are being done. You can read more about various CPU dispatching strategies in Agner's Optimizing software in C++[2].

[1] https://git.videolan.org/?p=x264.git;a=commitdiff;h=40bb56814e56ed342040bdbf30258aab39ee9e89
[2] http://www.agner.org/optimize/optimizing_cpp.pdf (CPU dispatch strategies: Section 13.1, page 122)
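For what it's worth, one low-effort way to get the kind of dispatching [2] describes is GCC's function multi-versioning. A sketch, assuming a reasonably recent GCC on a glibc system (target_clones relies on ifunc support); the function itself is made up:

```cpp
#include <cstddef>

// GCC builds two versions of this function (an AVX2 clone and a default one)
// plus a resolver that selects the right version at program load based on CPUID.
__attribute__((target_clones("avx2", "default")))
void scale_add(float* dst, const float* src, float k, std::size_t n)
{
    // Plain loop on purpose: the AVX2 clone gets auto-vectorized with 256-bit
    // registers, while the default clone still runs on pre-AVX2 CPUs.
    for (std::size_t i = 0; i < n; ++i)
        dst[i] += k * src[i];
}
```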
 
Last edited:

TheELF

Diamond Member
Dec 22, 2012
4,026
753
126
They shouldn't, but why be concerned at all about a competitor's CPU then? Validation implies that Intel would be/should be concerned over whether or not AMD's CPUs can conform to an ISA extension's rules by handling the instructions properly. Let AMD worry about that. If AMD's implementation of SSEwhatever or AVX is flawed, causing an ICC-compiled application to crash, then it's on AMD's head. Intel shouldn't worry about it one way or the other.

Let AMD's CPUs indicate compatibility for whatever ISA extension they think they can support, and let the program use it if it finds it. Intel never needed to check for the Intel label before allowing ICC-compiled apps to use SIMD. That was lame as hell.

The cheapest software suite that includes ICC is $700. It's professional software that is used in industry and can cause huge losses; it would be a suicide move for Intel to provide code that even might function improperly and then go, "oh, not our fault if you use AMD/VIA (or anything else)".
 

DrMrLordX

Lifer
Apr 27, 2000
22,020
11,594
136
If anything, CPU Physics is on the up and up, as NVidia has started to throw a lot of resources into developing their PhysX 3.xx SDK, and it runs primarily on the CPU and is extremely fast.

I didn't read the link in the OP, but now that you mention it, I do find it very interesting that Nvidia is targeting AVX so aggressively via PhysX.

The cheapest software suite that includes ICC is $700. It's professional software that is used in industry and can cause huge losses; it would be a suicide move for Intel to provide code that even might function improperly and then go, "oh, not our fault if you use AMD/VIA (or anything else)".

AMD's market share is small, and the number of developers that actually use the more-recent ISA extensions is also small, leading to a tiny number of intersection points where a developer coding an AVX2 app (for example) might see their software run on an AMD CPU that actually supports AVX2, which for now is just Carrizo.

Out of that tiny number of use cases, Intel would have to "worry about" said Carrizo CPUs mishandling the instructions and crashing. Thus far, Carrizo's support for AVX2 seems solid, if slow. XOP is faster on that chip. In any case, removing the dispatcher logic that checks the CPU brand would be a non-issue.

As mentioned above, you can already override the dispatcher logic, so any developer who (wisely) chooses to do so is already putting Intel in the very position that, as you stated, Intel must avoid.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
I didn't read the link in the OP, but now that you mention it, I do find it very interesting that Nvidia is targeting AVX so aggressively via PhysX.

The link wasn't in the OP. It was in the last post I made above yours.

Very interesting reading, because it dispels a lot of the myths and controversy regarding the development of PhysX.
 

superstition

Platinum Member
Feb 2, 2008
2,219
221
101
What Intel did back then was to never take the fast path if the CPU wasn't an Intel one. That's what people called cheating. I wouldn't, for a simple reason: why should Intel pay for validation on a competitor's CPUs?
Why would that require Intel to pay for anything? If there's a bug in an AMD chip, no one is going to blame anyone other than AMD (unless Intel changed the spec AMD tried to support after telling AMD a different spec; in that case it becomes more debatable, but the blame would rest on Intel because AMD would have been given an erroneous spec).

That, as far as I know, is one of the issues surrounding Bulldozer and AVX. People talk about AMD CPUs having low AVX performance without remembering SSE5 and all the moving targets.
 
Reactions: Drazick

superstition

Platinum Member
Feb 2, 2008
2,219
221
101
Anyway, I'm bringing this topic back because it has now been a year, and having AVX2 code seems to make more sense for high-end games, provided it can be useful enough to warrant inclusion.

What I'm wondering, though, is: how difficult is it to optimize for Piledriver in addition? Intel enthusiasts have Haswell at least, but Zen is a ways off. So anyone wanting to develop an engine that doesn't cut Piledriver out of the picture would need to use AMD's instructions and avoid AVX (since it's apparently slower on PD). This means a code path for AMD with Piledriver support. In order to keep Piledriver support, a developer of a high-end title would want to do what they can to extract performance out of it, especially in terms of single-thread performance. Would FMA4 and XOP help much for physics and, if so, would they help much over FMA3 or SSE2 alone?

As for CPU PhysX... the biggest problem with PhysX is Nvidia locking it to their cards — even going to the extreme of taking away support after the fact (for people who once had an AMD card and an Nvidia card in their system).
 
Last edited:
Reactions: Drazick

Nothingness

Diamond Member
Jul 3, 2013
3,054
2,021
136
Why would that require Intel to pay for anything? If there's a bug in an AMD chip, no one is going to blame anyone other than AMD (unless Intel changed the spec AMD tried to support after telling AMD a different spec; in that case it becomes more debatable, but the blame would rest on Intel because AMD would have been given an erroneous spec).
They'd have to validate that the path increases performance on non-Intel CPUs and that the code works, no matter whether there are bugs in the CPU or not. That's how software development works.
 

superstition

Platinum Member
Feb 2, 2008
2,219
221
101
They'd have to validate that the path increases performance on non-Intel CPUs and that the code works, no matter whether there are bugs in the CPU or not.
You're kidding, right? You're saying that refusing to let a CPU perform well, via something like the GenuineIntel check, is protecting that CPU, but that simply letting it use the instructions its designer built in becomes too burdensome for the maker of the compiler.

I don't see any legitimate reason to demand that they validate anything other than the fact that their code is putting those instructions into action. Seems like a total case of false dilemma. Simply noting that the instructions are being put into action is not burdensome, certainly not enough to justify what clearly looks like a scam and which, regardless, leaves a lot of performance on the table.

If a compiler developer can't manage to do something that simple then they should either not develop an x86 compiler or they should explicitly drop support for competitors' CPUs. That doesn't mean just blithely ignoring those CPUs. It means actively blocking code generated by the compiler from running on them so it's clear that the CPUs aren't supported.

You either support a product or you don't.
 

Nothingness

Diamond Member
Jul 3, 2013
3,054
2,021
136
You're kidding, right? You're saying that refusing to let a CPU perform well, via something like the GenuineIntel check, is protecting that CPU, but that simply letting it use the instructions its designer built in becomes too burdensome for the maker of the compiler.
I'm saying that if a new code path can run on AMD, then this path has to be validated on AMD CPUs. That's a simple software development requirement.

And I'd add that they also have to validate that the new path increases performance, or paranoid people will start claiming Intel is killing performance on purpose on AMD CPUs.
 
Reactions: Drazick

NTMBK

Lifer
Nov 14, 2011
10,320
5,347
136
Why would that require Intel to pay for anything? If there's a bug in an AMD chip, no one is going to blame anyone other than AMD (unless Intel changed the spec AMD tried to support after telling AMD a different spec; in that case it becomes more debatable, but the blame would rest on Intel because AMD would have been given an erroneous spec).

That, as far as I know, is one of the issues surrounding Bulldozer and AVX. People talk about AMD CPUs having low AVX performance without remembering SSE5 and all the moving targets.

You know this thread is a year old, right?
 

KompuKare

Golden Member
Jul 28, 2009
1,176
1,437
136
I am truly and utterly amazed that Intel tries to segment its processors by disabling ISA features. This is the product of a marketing department that has no idea of the ramifications of its actions. They are literally hindering the already naturally slow pace of the adoption of new ISA features that could make their processors run a hell of a lot faster.
Of course, just disabling it is easy for them if they simply fuse it off.
If they like market segmentation so much, the far harder route would have been to think of this at the design stage and make it so that they could blow a fuse to force the feature to run slower (like consumer vs. pro GPUs do with double precision).
Although "slower" in this case might be tricky: say they want to penalise AVX2 performance but still allow that code path to be run; if the penalised path runs slower than AVX1 or SSE2, then programmers might still look at the CPUID and run the old path, or, more likely, not bother to write for AVX2 in the first place.
 

knutinh

Member
Jan 13, 2006
61
3
66
No, thank goodness. Having to turn off my overclock and requiring aftermarket cooling just to game at stock clocks on Haswell and later would suck. Intel really needs to sort out the overvolting/overheating issue before anyone even thinks of using those.
Having twice the vector width (AVX vs. SSE, or AVX-512 vs. AVX) should more than offset the slight clock reduction that Intel now applies more or less automatically for AVX code, provided the actual hardware resources behind the scenes scale accordingly and the problem being solved maps well to wide SIMD. For example, even a 10% AVX clock offset still leaves roughly 1.8x the throughput on code that fills the wider registers.

If you are overclocking your CPU in such a way that it will overheat or crash on certain instructions, I would argue that your problems are not something the majority of users should be hampered by.

-k
 

knutinh

Member
Jan 13, 2006
61
3
66
This is not true at all. If you put an AVX2 instruction into a program and run it on an older CPU, the program crashes with an illegal instruction exception.
As ShintaiDK said, it is possible to distribute binaries that follow different code paths depending on hardware. I think that makes a lot of sense. Now, should there be 2 or 10 code paths; what is the "sweet spot"? Would users accept that their game binary download is 2 GB instead of 512 MB only in order to increase performance by 5% on half of the world's computers? I don't know.

It is unfortunate that Intel will not/cannot offer instruction set parity in all segments of a given CPU generation. Not performance parity, mind you, but merely having the nice AVX2 instructions in the low end implemented at SSE speed would probably accelerate the introduction of code that would benefit owners of high-end CPUs.

I do wonder how much low-level fiddling and profiling is common among developers of e.g. games and consumer applications. Or is it rather high-level C++ nonsense and massaging the template calls into what appears like "good" performance? I would think that e.g. Adobe _could_ do a lot with 1-2 people implementing low-level image processing as close to the metal as possible. But do they?

-k
 
May 11, 2008
20,260
1,150
126
As ShintaiDK said, it is possible to distribute binaries that follow different code paths depending on hardware. I think that makes a lot of sense. Now, should there be 2 or 10 code paths; what is the "sweet spot"? Would users accept that their game binary download is 2 GB instead of 512 MB only in order to increase performance by 5% on half of the world's computers? I don't know.

It is unfortunate that Intel will not/cannot offer instruction set parity in all segments of a given CPU generation. Not performance parity, mind you, but merely having the nice AVX2 instructions in the low end implemented at SSE speed would probably accelerate the introduction of code that would benefit owners of high-end CPUs.

I do wonder how much low-level fiddling and profiling is common among developers of e.g. games and consumer applications. Or is it rather high-level C++ nonsense and massaging the template calls into what appears like "good" performance? I would think that e.g. Adobe _could_ do a lot with 1-2 people implementing low-level image processing as close to the metal as possible. But do they?

-k

As mentioned above by DrMrLordX, such SIMD instructions are not used often. They are useful for graphics, sound, and other calculations that have no interdependencies and can run in parallel, when it makes sense not to do the work on the GPU. And it is only small functions, not megabytes or even gigabytes of code; the size of games comes mostly from the sound files and the textures.
Almost all games query the CPU flags to see what the CPU is capable of. This has been happening ever since 3DNow! and SSE were introduced.
The CPU is queried and, for example, a function pointer is set so that it points to the function with the right code path.
Another solution is that, while the game engine is loading, the loader queries the CPU and loads and links the appropriate DLL.
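A bare-bones sketch of that function-pointer scheme (the routine names are invented, and it assumes GCC/Clang for the __builtin_cpu_supports probe):

```cpp
#include <cstddef>
#include <cstdio>

// Three builds of one hot routine. In a real engine each would live in its own
// translation unit compiled with different flags (scalar, -msse2, -mavx2) or
// use intrinsics; the bodies are identical here just to keep the sketch runnable.
static void mix_scalar(float* out, const float* in, std::size_t n) { for (std::size_t i = 0; i < n; ++i) out[i] += in[i]; }
static void mix_sse2  (float* out, const float* in, std::size_t n) { for (std::size_t i = 0; i < n; ++i) out[i] += in[i]; }
static void mix_avx2  (float* out, const float* in, std::size_t n) { for (std::size_t i = 0; i < n; ++i) out[i] += in[i]; }

// The engine always calls through this pointer; it is set once during init.
static void (*mix_audio)(float*, const float*, std::size_t) = mix_scalar;

static void init_dispatch()
{
    __builtin_cpu_init();                                   // GCC/Clang CPU feature probe
    if (__builtin_cpu_supports("avx2"))      mix_audio = mix_avx2;
    else if (__builtin_cpu_supports("sse2")) mix_audio = mix_sse2;
}

int main()
{
    init_dispatch();
    float out[4] = {0.0f}, in[4] = {1.0f, 2.0f, 3.0f, 4.0f};
    mix_audio(out, in, 4);
    std::printf("%g %g %g %g\n", out[0], out[1], out[2], out[3]);
}
```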
 

knutinh

Member
Jan 13, 2006
61
3
66
As mentioned above by DrMrlordx, such simd instruction are not used often. Useful for graphics and sound and other calculations that do not have interdependencies and can be run in parallel and when it makes sense to not do it on the gpu.
I think that Photoshop/image processing should be an excellent candidate for applications that matter for a reasonable number of users (i.e. quite a lot of people own it, many would like it to be faster). x264. Encryption. Machine learning.

I think it is more interesting to list the applications that can be "cpu-heavy" but are not likely candidates for SIMD. Office applications. Databases (?).

Many of those are candidates for the GPU, but whenever the algorithm is sufficiently "quirky", relies on integers, or the user cannot be expected to have a suitable GPU, SIMD is an option to consider. Of course, when the developer is writing code and compiling it for x86, the compiler ideally should use these resources whenever it makes sense.

-k
 
May 11, 2008
20,260
1,150
126
I think that Photoshop/image processing should be an excellent candidate for applications that matter for a reasonable number of users (i.e. quite a lot of people own it, many would like it to be faster). x264. Encryption. Machine learning.

I think it is more interesting to list the applications that can be "cpu-heavy" but are not likely candidates for SIMD. Office applications. Databases (?).

Many of those are candidates for the GPU, but whenever the algorithm is sufficiently "quirky", relies on integers, or the user cannot be expected to have a suitable GPU, SIMD is an option to consider. Of course, when the developer is writing code and compiling it for x86, the compiler ideally should use these resources whenever it makes sense.

-k

Oh, I agree. Image and video processing, sound processing, simulation programs: there are enough applications besides games.
But in these situations too, a program must query the CPU to find out what it is capable of. The issue with different hardware configurations is that you cannot assume a given feature is present.
So you can be sure all these kinds of applications query the hardware and provide an optimized-for-speed path and a slower default path.
Three options, for example, is enough: one path that uses only the general CPU integer unit and FPU, one path that uses the now-standard SSE, and one path that makes use of the latest and greatest extensions, if of course the software benefits from it.

Also, supporting the latest features that hardware provides is a way to sell new versions or upgrades of existing software. If the software becomes noticeably faster, productivity increases.
 