AMD X399 !!!!!

Page 19 - AnandTech community forum thread.

tamz_msc

Diamond Member
Jan 5, 2017
3,865
3,729
136
It has nothing to do with GPUs being "faster". It makes no sense to buy extra hardware for a workload that you only run a fraction of the time. That's why you don't see datacenters being loaded up with GPUs, except in a few use cases. In addition, there are vectorized workloads that can't be run on GPUs, either because of control logic (video encoders) or memory requirements.
OEMs not offering much choice in terms of GPUs in their datacenter solutions means that datacenters are less inclined towards HPC than say a compute cluster in a university. Naples is going to target the former, not the latter.
I can't think of any workload that needs 300 GB/s of memory bandwidth yet isn't vectorized. You can't even access memory that quickly without using SIMD, other than through bulk memory copy.
It sounds like you're talking about sequential access; there are workloads with random memory access as well. For example, CFD is almost completely memory bound, in both latency and bandwidth.
If you really think Naples is 60% more energy efficient (32 * 3 / 180 vs 28 * 2.5 / 205) than Skylake, despite Zen drawing as much power as Broadwell-E, there's nothing to discuss.
What workloads are you talking about? There's nothing to discuss regarding perf/watt unless you specify the workload.
I expect to see Naples fail to reach wide adoption due to extreme NUMA overhead (2S * 4 die * 2 CCX, or 16 nodes to reach parity with Intel 2S platform).
AMD knows better how to implement NUMA than Intel. If anybody can make it work, it's them. But we'll have to see what happens in the future.
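The 60% figure above is a back-of-the-envelope cores × clock / TDP ratio. A quick sketch, using the poster's assumed core counts, clocks, and TDPs (not confirmed specs):

```python
# Perf/watt comparison implied by the post above.
# Figures (cores, assumed all-core GHz, TDP watts) are the poster's
# assumptions, not confirmed specifications.
naples = 32 * 3.0 / 180       # cores * GHz per watt
skylake_sp = 28 * 2.5 / 205

ratio = naples / skylake_sp
print(f"Naples advantage: {ratio:.2f}x")   # ~1.56x, close to the claimed ~60%
```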
 
Reactions: Drazick

Atari2600

Golden Member
Nov 22, 2016
1,409
1,655
136
Drat. I was really hoping for intermediate Zen updates for some IPC gains before they transitioned to 7nm, which is probably due in 1H 2019.

Hopefully at least with 14nm+, Zen can hit 4.2 GHz or higher for most people.

Well, if we assume that the current releases are on "14nm", then it's fair to expect a quick revision on "14nm+".
 
Reactions: Drazick

Atari2600

Golden Member
Nov 22, 2016
1,409
1,655
136
If you run a "FP" workload 20% of the time, why would you buy 4-8 GPUs that would sit idle most of the time?

You serious?

With good code, you might be looking at a 10% overall improvement in run time (i.e. 2x faster for the FP bit).


8 channel memory is useless without compute* performance, which Naples lacks, given that it doesn't have AVX, let alone AVX-512.

Completely incorrect. It has AVX1 and AVX2. That is, 128bit and 256bit.
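The "10% overall improvement" arithmetic earlier in the post is just Amdahl's law; a minimal check:

```python
# Amdahl's-law arithmetic behind the "10% overall" claim:
# if 20% of runtime is FP and the FP portion runs 2x faster,
# overall runtime drops to 90% of the original.
def amdahl_runtime(fraction_accelerated, speedup):
    """Remaining runtime as a fraction of the original."""
    return (1 - fraction_accelerated) + fraction_accelerated / speedup

remaining = amdahl_runtime(0.20, 2.0)
print(f"runtime: {remaining:.0%} of original")   # 90%
```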
 
Reactions: Drazick and prtskg

krumme

Diamond Member
Oct 9, 2009
5,956
1,595
136
Can anyone take The Stilt's R15 perf numbers from e.g. 35/40 W on a 1800X and compare them to SKL perf? Then, assuming the SPECint scaling Pavermaster described is linear, what would a top-of-the-line 32C 160 W Epyc score in SPECint?
 

nad-

Junior Member
May 4, 2017
12
1
11
Did anyone notice the compilation time scaling? The dual-socket Naples finished in 15.7 seconds while the single socket took 33.7 seconds - that's a 115% improvement with 100% more hardware. There's no way multi-socket scaling is 100%, much less 115%; if Infinity Fabric were that magical, AMD could dump their entire CPU/GPU business and just sell fabrics.

Are we looking at a more TDP friendly 32 core with lower clocks and lower price for 1S?
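The scaling arithmetic being questioned can be checked directly, using the compile times quoted in the post:

```python
# Compile-time scaling from the post: 1S took 33.7 s, 2S took 15.7 s.
t_1s, t_2s = 33.7, 15.7
speedup = t_1s / t_2s
print(f"speedup: {speedup:.2f}x "
      f"({(speedup - 1):.0%} faster with 100% more hardware)")
```

A >2x speedup from 2x sockets is indeed suspicious for a single repeated workload, which is the poster's point.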
 

raghu78

Diamond Member
Aug 23, 2012
4,093
1,475
136
Nothing is stopping you from using E5s or SPs in 1S configurations. 8 channel memory is useless without compute* performance, which Naples lacks, given that it doesn't have AVX, let alone AVX-512. In fact, the reason there are 8 memory channels is likely because each die on the package is essentially its own independent processor, making Naples essentially a 4S configuration per package. We all know how well 4S/8S Intel systems perform (poorly). For power efficiency and latency, Naples should really only have 4 memory channels, but the lack of a unified uncore makes it impossible.

* As in throughput-computing (e.g. HPC).

There has never been any restriction from Intel on using a 2S-capable CPU in 1S configuration.

You have to first understand what I said. Intel sells 2S CPUs on a separate socket/motherboard than the 1S CPUs. You cannot buy a single-socket server motherboard from Intel with the highest 28C/56T Skylake Xeon. AMD will sell EPYC single-socket servers with 32C/64T. In fact, for dual-processor servers Intel sells the Xeon Gold tier with a maximum of 22C/44T. The highest-core-count Xeons are reserved for 4/8-socket servers.

https://en.wikipedia.org/wiki/List_of_Intel_Xeon_microprocessors#Xeon_E3-12xx_v5_.28uniprocessor.29
https://en.wikipedia.org/wiki/List_...kylake-SP.22_.2814_nm.29_Scalable_Performance
 

blue11

Member
May 11, 2017
151
77
51
You have to first understand what I said. Intel sells 2S CPUs on a separate socket/motherboard than the 1S CPUs. You cannot buy a single-socket server motherboard from Intel with the highest 28C/56T Skylake Xeon. AMD will sell EPYC single-socket servers with 32C/64T. In fact, for dual-processor servers Intel sells the Xeon Gold tier with a maximum of 22C/44T. The highest-core-count Xeons are reserved for 4/8-socket servers.

https://en.wikipedia.org/wiki/List_of_Intel_Xeon_microprocessors#Xeon_E3-12xx_v5_.28uniprocessor.29
https://en.wikipedia.org/wiki/List_...kylake-SP.22_.2814_nm.29_Scalable_Performance
Intel sells neither motherboards nor servers. Given that the memory for a typical server costs $14000 per socket and the CPU ~$4000, I'm sure those datacenter operators are in a real panic over a few bucks for motherboards (which are custom designed and could easily be 1S if desired). Xeon Gold 6000-series will be scalable to 4S, and Platinum to 8S. Yet again, nothing stops you from making 1S LGA 3647 boards, and some customers will be using them. In fact, the Open Compute design for Skylake-SP will be using the Platinum 8176/8180 in 2S configuration and not 8S.

By the way, the majority of Xeon sales are for the top E5 bin, since it has the best TCO. We will have to see how the Gold/Platinum rebranding changes it, but likely the higher end SKUs will remain the best value.
 
Last edited:

ryzenmaster

Member
Mar 19, 2017
40
89
61
Why are people so obsessed with AVX? It always comes up when talking about Zen here. It's not even a typical server workload to begin with. A more typical workload would be running a bunch of HTTP services and virtual machines. Very few use cases place heavy emphasis on SIMD, and even then you might be better off running the job on a GPU.
 

Topweasel

Diamond Member
Oct 19, 2000
5,436
1,655
136
You serious?

With good code, you might be looking at a 10% overall improvement in run time (i.e. 2x faster for the FP bit).


Completely incorrect. It has AVX1 and AVX2. That is, 128bit and 256bit.

That, and while I was looking at the implementation of AVX-512 just to get a feel for when it might be important, I found that desktop Skylake doesn't even have it.

https://en.wikipedia.org/wiki/AVX-512

Looking AVX-512 over, it's also a mess that may never see more than limited adoption in any kind of retail or pro-retail space, since only certain subsets are available on certain products and, as a whole, only two subsets are actually shared across CPUs.

Eventually it will be a good idea for AMD to support AVX-512 and get per-core parity on AVX2. But Ryzen's AVX capabilities, and the focus on its lack of AVX-512 support, are ultimately beside the point.
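The per-core AVX throughput gap can be put in rough numbers with a peak-FLOPs sketch. The FMA unit counts and widths below are the commonly cited figures for these microarchitectures; treat them as assumptions:

```python
# Peak double-precision FLOPs/cycle per core = FMA units * lanes * 2 (mul+add).
# Unit counts and vector widths are commonly cited figures, assumed here
# for illustration only.
def peak_flops_per_cycle(fma_units, simd_bits):
    lanes = simd_bits // 64          # 64-bit doubles per vector
    return fma_units * lanes * 2     # an FMA counts as two FLOPs

zen1 = peak_flops_per_cycle(2, 128)        # 2x 128-bit FMA pipes -> 8
broadwell = peak_flops_per_cycle(2, 256)   # 2x 256-bit FMA pipes -> 16
skylake_sp = peak_flops_per_cycle(2, 512)  # 2x 512-bit (top SKUs) -> 32
print(zen1, broadwell, skylake_sp)         # 8 16 32
```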
 

tamz_msc

Diamond Member
Jan 5, 2017
3,865
3,729
136
Why are people so obsessed with AVX? It always comes up when talking about Zen here. It's not even a typical server workload to begin with. A more typical workload would be running a bunch of HTTP services and virtual machines. Very few use cases place heavy emphasis on SIMD, and even then you might be better off running the job on a GPU.
Funny thing is that the real utilization of AVX is in specific instances of scientific computing. Not only does it require compilers generating AVX-optimized code, but one also needs to identify the parts of the code that are amenable to vectorization, and the programming model is quite a departure from the traditional way of handling things like arrays and pointers. Examples of libraries that use AVX heavily include CERN's ROOT data analysis package for particle physics and various libraries for lattice QCD, among others. This is serious stuff - I doubt that those who harp on AVX support, or the lack thereof, really work with these things enough to claim its benefits in the upcoming Intel CPUs.
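The "amenable to vectorization" point can be illustrated with a toy sketch. NumPy stands in here for AVX-capable vectorized code; the array contents are arbitrary:

```python
import numpy as np

a = np.arange(10_000, dtype=np.float64)
b = np.arange(10_000, dtype=np.float64)

# Trivially vectorizable: element-wise, no dependency between iterations,
# so a vectorizing compiler (or NumPy's backend) can map it onto SIMD lanes.
c = a * 2.0 + b

# Not directly vectorizable as written: each iteration depends on the
# previous one (a loop-carried dependency), so it must be restructured
# (e.g. as a parallel prefix sum) before SIMD helps.
acc = np.empty_like(a)
running = 0.0
for i in range(a.size):
    running += a[i]
    acc[i] = running
```

The second loop is the kind of code that needs the restructuring the post describes before any AVX benefit appears.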
 

blue11

Member
May 11, 2017
151
77
51
That, and while I was looking at the implementation of AVX-512 just to get a feel for when it might be important, I found that desktop Skylake doesn't even have it.

https://en.wikipedia.org/wiki/AVX-512

Looking AVX-512 over, it's also a mess that may never see more than limited adoption in any kind of retail or pro-retail space, since only certain subsets are available on certain products and, as a whole, only two subsets are actually shared across CPUs.

Eventually it will be a good idea for AMD to support AVX-512 and get per-core parity on AVX2. But Ryzen's AVX capabilities, and the focus on its lack of AVX-512 support, are ultimately beside the point.
The very article you linked shows that most AVX-512 instruction sets are supported by Skylake-SP, with only ERI, IFMA, VBMI, and the neural network instructions unsupported. AVX-512 may never be released on client SKUs, but it doesn't matter, because consumers never do anything useful with their computers.

Funny thing is that the real utilization of AVX is in specific instances of scientific computing. Not only does it require compilers generating AVX-optimized code, but one also needs to identify the parts of the code that are amenable to vectorization, and the programming model is quite a departure from the traditional way of handling things like arrays and pointers. Examples of libraries that use AVX heavily include CERN's ROOT data analysis package for particle physics and various libraries for lattice QCD, among others. This is serious stuff - I doubt that those who harp on AVX support, or the lack thereof, really work with these things enough to claim its benefits in the upcoming Intel CPUs.
If as you claim, nobody ever uses SIMD instructions, then there is even less reason for Naples to have 8 memory channels. Which is why, as I noted before, the 8 channels are a mere artifact of the way Naples is constructed from 4 dies. I fully expected each die in the package to be accessing its own memory bank at dual-channel speed as would be expected from a 4-way NUMA setup.

However, you are wrong, because customers of server CPUs also write their own code, and can therefore use whatever instruction sets are available on the hardware.
 
Last edited:

tamz_msc

Diamond Member
Jan 5, 2017
3,865
3,729
136
If as you claim, nobody ever uses SIMD instructions, then there is even less reason for Naples to have 8 memory channels. Which is why, as I noted before, the 8 channels are a mere artifact of the way Naples is constructed from 4 dies. I fully expected each die in the package to be accessing its own memory bank at dual-channel speed as would be expected from a 4-way NUMA setup.

However, you are wrong, because customers of server CPUs also write their own code, and can therefore use whatever instruction sets are available on the hardware.
Memory is an important factor in a lot of applications, and in certain instances, it is even the limiting factor. More memory channels will help in things like VDI, among others. Just because customers write their own code doesn't mean they'll use every supported instruction set. It will depend on the workload.

Read papers on arXiv to really see what kinds of applications are accelerated by AVX. Premiere Pro CC and H.265 transcoding don't even scratch the surface of what AVX is capable of.
 
Reactions: Drazick and CatMerc

Topweasel

Diamond Member
Oct 19, 2000
5,436
1,655
136
The very article you linked shows that most AVX-512 instruction sets are supported by Skylake-SP, with only ERI, IFMA, VBMI, and the neural network instructions unsupported. AVX-512 may never be released on client SKUs, but it doesn't matter, because consumers never do anything useful with their computers.
No, it furthers the point. It's basically a severely limited SIMD instruction set created by Intel to work with certain large-scale customer solutions that can use it. While it may really help the companies it was designed around, I don't see it gaining the traction needed to have any real effect on AMD's attempt to sell server CPUs, and, like Intel, AMD can always go back and work it into a future major refresh (not the minor ones) as their market grows. Ryzen's AVX2 support (one instruction per cycle instead of two) is inconsequential as long as they are offering far more cores at Intel's price brackets, especially as we're talking about server CPUs, where clock speeds will even out between the two.
 

Atari2600

Golden Member
Nov 22, 2016
1,409
1,655
136
However, you are wrong, because customers of server CPUs also write their own code, and can therefore use whatever instruction sets are available on the hardware.

You write that as if it is so easy...


Now... if any company had (1) the requirements that would make it worthwhile to fund the production of, and (2) the depth of knowledge to produce and compile, effective AVX-512 code, tell me why they wouldn't just start down the road of GPU- or FPGA-accelerated computation?

The whole thing doesn't add up. AVX512 is likely to see very little traction in the marketplace as at that level of specialisation, other more attractive alternatives exist. Of course there will be very niche areas that AVX512 is the standout option, but I don't foresee many of these.
 
Reactions: Drazick

blue11

Member
May 11, 2017
151
77
51
No it furthers a point. It's basically a severely limited SIMD instruction set created by Intel to work with certain large scale customer solutions that can use it.
In what way is it "limited"? It is a direct extension of AVX to 512-bits while adding additional masking capabilities.

While it may really help the companies they designed it for
AVX-512 was not "designed for" any specific company. It originated on Xeon Phi.

support is inconsequential as long as they are offering much greater amounts of cores at Intels price brackets.
You can't win in the server market by being cheaper, because the CPU is only 10-20% of the hardware cost (let alone TCO), despite being a primary determiner of performance. You have to be better.

Now... if any company had (1) the requirements that would make it worthwhile to fund the production of, and (2) the depth of knowledge to produce and compile, effective AVX-512 code, tell me why they wouldn't just start down the road of GPU- or FPGA-accelerated computation?
What is the value of general purpose computing? If you only spend part of your time doing a task, why would you build specialized hardware that will spend most of its time idling (and wasting valuable datacenter space)? This is of course ignoring the reality that many applications can not be run on GPU, either because of memory, latency (e.g. recursion), or control logic (e.g. video encoding).
 

Ajay

Lifer
Jan 8, 2001
16,094
8,106
136
Yeah, that's what the roadmap looked like to me as well. I also hope they can get to 4.5 GHz, but a 25% clock speed improvement has got to be a tricky proposition for 14 -> 14+. I also find it odd that 14 -> 14+ has no Zen bump, but the move from 7nm to 7nm+ is a Zen2-to-Zen3 move. Maybe Zen2 is a bunch of architecture changes, while Zen3 ups the core counts?

I doubt that the base clocks will jump that high. Maybe 3.6 -> 4.0 GHz at the outside. I would think AMD will squeeze out a bit more IPC with improvements to areas that are bottlenecks on the current stepping (just no major overhaul). I would think Zen2 will have more cores per CCX and a new uarch to support that (plus support for any new features). I don't know what AMD will do with Zen3, but they expect to have EUV by then; I don't know what EUV will give the design team to work with (in terms of electrostatic advantages). Interesting stuff.
 
Reactions: Drazick

DrMrLordX

Lifer
Apr 27, 2000
21,805
11,159
136
You can't win in the server market by being cheaper, because the CPU is only 10-20% of the hardware cost (let alone TCO), despite being a primary determiner of performance. You have to be better.

I would like to point out that there are exceptions to this rule on the extreme ends of the cost scale. If you'll recall, Intel cut their teeth in the server room by selling lots of relatively cheap commodity hardware that people could use in clusters as an alternative to more-expensive "Big Iron" solutions. It would be interesting to see some of the TCO calculations from the late 90s comparing Intel hardware to products from major supercomputer vendors of the day.

Intel's server product lineup today more closely resembles that of their old competitors than it does the Intel of 20 years ago; theoretically, someone sufficiently nimble and hungry could do to them what Intel did to the power players of the late 90s, just by offering something cheap and efficient. I don't necessarily think AMD's position makes them the Intel of 1997 in the server room, so don't think I'm implying they're the company to make it all happen. Lots of people are still waiting for the ARMy to disrupt the server room (keep waiting, folks).
 

Atari2600

Golden Member
Nov 22, 2016
1,409
1,655
136
What is the value of general purpose computing?

AVX-512 has very little to do with general purpose computing.


If you only spend part of your time doing a task, why would you build specialized hardware that will spend most of its time idling (and wasting valuable datacenter space)? This is of course ignoring the reality that many applications can not be run on GPU, either because of memory, latency (e.g. recursion), or control logic (e.g. video encoding).

If you only spend part of your time doing a task, why would you bother to design & compile specialised code that will spend most of its time frustrating the coders and wasting valuable engineering time? This is of course ignoring the reality that many applications cannot benefit from AVX-512, either because of dataset size, dataset shape or the other functions required to work the data.
 
Reactions: Drazick

krumme

Diamond Member
Oct 9, 2009
5,956
1,595
136
You can't win in the server market by being cheaper, because the CPU is only 10-20% of the hardware cost (let alone TCO), despite being a primary determiner of performance. You have to be better.

Nonsense. You also win by being cheaper. That's how Intel kicked IBM out.
There is a reason an Intel Xeon isn't twice as expensive. Yes, price obviously matters, big time. There is sizeable price elasticity here; these are not small amounts.

The idea that CPU cost is so small it doesn't matter, because it e.g. isn't shown in a TCO analysis from Gartner, is pathetic. Every cost matters. All over. All the time. Every penny. You cost-optimize your company everywhere, or somebody takes your chair tomorrow.
 

Topweasel

Diamond Member
Oct 19, 2000
5,436
1,655
136
In what way is it "limited"? It is a direct extension of AVX to 512-bits while adding additional masking capabilities.

Limited because adoption of it is extremely platform-dependent. Experience you have on Knights Landing or Xeon Phi might not carry over to any other platform. That's on top of the limited use case for 512-bit instructions.

AVX-512 was not "designed for" any specific company. It originated on Xeon Phi.

Which doesn't exist for any reason other than one of the big purchasers asking nicely for it. Intel didn't come out with a new chip and say "build a market around this"; they already got burned doing that once with Itanium and are still deathly afraid of it happening again. They certainly wouldn't carve an instruction set into pieces like that just for the heck of it. They saw 3-4 separate sets of customer requirements and developed a subset for each, decreasing the chance that it gets used much outside those markets, and even within those markets by customers who didn't specifically ask for those subsets.

You can't win in the server market by being cheaper, because the CPU is only 10-20% of the hardware cost (let alone TCO), despite being a primary determiner of performance. You have to be better.
In what world do CPUs only account for 10-20% of the hardware cost? Sure, in certain situations, but small, medium, and even large companies buy servers for roles where the CPU can be up to half the cost of the system. You can get systems - SANs are an example - where the CPUs are a small portion of the cost, but even then, companies aren't in business to throw away money. On top of that, more cores per CPU means fewer servers are needed to hit core targets, and as has been covered, many server software licenses are still per-socket rather than per-thread. EPYC can hit 64 cores in a 2S system and can therefore easily work in a 2U setup. For Intel to match that would require three ~$9k CPUs and probably a 4U setup or two 2U 2S setups (I suppose you could have an extremely tight 4S 2U). So the cost isn't limited to the absolute sale price of the CPU.

Can we get out of the world where everything is analyzed as if money were no object? Even big corporations have big purchasing departments that analyze the costs of different hardware implementations. $10k over 1000 servers in a global update is still a lot of money.
 

ryzenmaster

Member
Mar 19, 2017
40
89
61
If as you claim, nobody ever uses SIMD instructions, then there is even less reason for Naples to have 8 memory channels. Which is why, as I noted before, the 8 channels are a mere artifact of the way Naples is constructed from 4 dies. I fully expected each die in the package to be accessing its own memory bank at dual-channel speed as would be expected from a 4-way NUMA setup.

However, you are wrong, because customers of server CPUs also write their own code, and can therefore use whatever instruction sets are available on the hardware.

I don't think anyone is suggesting SIMD is never used. It's just that explicitly opting into it is more of a corner case than a generally applied optimization. You'd be surprised how many developers are not even aware of SIMD, let alone use it.

While it has its time and place, rarely would AVX or any other SIMD extension be the decisive factor when choosing a server CPU.
 

blue11

Member
May 11, 2017
151
77
51
In what world do CPUs only account for 10-20% of the hardware cost?

Cost of a standard 2P Intel E5-based server (retail pricing):

2x E5-2699v4: $9000
24x 128 GB DDR4 LRDIMM: $28000
Motherboard and enclosure: $1000

On this bare-bones compute-only server, the CPU represents only 20% of the overall cost. This is without including things like SSDs, interconnect controllers, GPUs (!) and other accelerators, etc. that would add even more cost to the system. When you add in power, land, and labor, the fraction of the cost associated with the CPU becomes incredibly small. You might object to retail pricing, but when accounting for actual customer pricing, the CPU cost will be even lower, while memory costs are about the same.

Think about it this way. Just like gamers spend lots of money to maximize the value of their GPU (the most expensive component), datacenter operators want to maximize the utilization of their DRAM.
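Spelled out, the cost split above works out as follows (a sketch using the retail figures quoted in the post; actual customer pricing will differ):

```python
# Hardware cost split for the hypothetical 2P server in the post
# (retail prices as quoted there; real customer pricing will differ).
cpu = 9_000         # 2x E5-2699v4
dram = 28_000       # 24x 128 GB DDR4 LRDIMM
board = 1_000       # motherboard + enclosure

total = cpu + dram + board
print(f"CPU share: {cpu / total:.0%}")   # ~24% of this bare-bones box
```

Adding SSDs, NICs, accelerators, power, and labor pushes the CPU's share lower still, which is the poster's argument.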

Can we get out of the world where everything is analyzed as if money were no object? Even big corporations have big purchasing departments that analyze the costs of different hardware implementations. $10k over 1000 servers in a global update is still a lot of money.

Money is everything, which is why you buy the highest capacity CPU you can fit in a 2P configuration. What keeps 4P configurations (Naples is 8P per socket!) from taking off is poor rack density and memory performance, not cost of CPUs.

I don't think anyone is suggesting SIMD is never used. It's just that explicitly opting into it is more of a corner case than a generally applied optimization. You'd be surprised how many developers are not even aware of SIMD, let alone use it.

While it has its time and place, rarely would AVX or any other SIMD extension be the decisive factor when choosing a server CPU.
Why argue against strawmen? All I said was that 8ch memory controllers are inefficient and unnecessary without strong compute performance (i.e. SIMD). Someone brought up random access, but in fact memory latency (which is already worse on Zen than Skylake) will hinder Naples even further, given the penalty with extreme NUMA levels and way too many memory channels.

Nonsense. You also win by being cheaper. That's how Intel kicked IBM out.
There is a reason an Intel Xeon isn't twice as expensive. Yes, price obviously matters, big time. There is sizeable price elasticity here; these are not small amounts.

The idea that CPU cost is so small it doesn't matter, because it e.g. isn't shown in a TCO analysis from Gartner, is pathetic. Every cost matters. All over. All the time. Every penny. You cost-optimize your company everywhere, or somebody takes your chair tomorrow.
The cost difference between IBM and Intel is orders of magnitude. Nobody sane is expecting Naples to be an order of magnitude cheaper than Skylake-SP. Besides that, what really killed POWER was expensive DRAM.
 
Last edited:

Topweasel

Diamond Member
Oct 19, 2000
5,436
1,655
136
Cost of a standard 2P Intel E5-based server (retail pricing):

2x E5-2699v4: $9000
24x 128 GB DDR4 LRDIMM: $28000
Motherboard and enclosure: $1000

On this bare-bones compute-only server, the CPU represents only 20% of the overall cost. This is without including things like SSDs, interconnect controllers, GPUs (!) and other accelerators, etc. that would add even more cost to the system. When you add in power, land, and labor, the fraction of the cost associated with the CPU becomes incredibly small. You might object to retail pricing, but when accounting for actual customer pricing, the CPU cost will be even lower, while memory costs are about the same.

Think about it this way. Just like gamers spend lots of money to maximize the value of their GPU (the most expensive component), datacenter operators want to maximize the utilization of their DRAM.

Look, this is edging on pointless. That is a single server configuration for one use case. There are a lot more configurations - single-service systems, virtualization, and so on - that don't require nearly 3 TB of memory. I didn't say you can't come up with a server where the CPU cost is 10% (and that figure isn't close, but I get your point about further options). Very few solutions actually require 64 GB+ per core. But that's not the point. That setup can be an important part of the server market, but it isn't the only one. In fact, I think you will find there is a sufficiently large portion of the market that buys servers at $20k and under, and maybe even businesses that primarily run off servers of $10k or less. None of this, even your compute load-out, changes the fact that if I told a company they could spend $10 million less and get more CPU resources, no purchaser would say "well, that's only 4% of the project's cost, so let's just ignore it."

Money is everything, which is why you buy the highest capacity CPU you can fit in a 2P configuration. What keeps 4P configurations (Naples is 8P per socket!) from taking off is poor rack density and memory performance, not cost of CPUs.

Now you're making no sense. First, a CCX isn't anything close to a separate CPU. The CCX performance hit should really be given a rest: it was an easy target when talking about Ryzen's gaming performance, it wasn't the cause, and we don't know what the actual penalty is, if there is any, especially in server and professional apps. Second, Naples isn't 4P or 8P either, at least not in the sense relevant to 2P configurations. Naples specifically won't have density issues; wasn't that my point earlier? Naples at 32 cores is roughly 40% more cores per chip than Intel's highest. Even without knowing the price of the 32C version, it would already have an advantage in density. And cost does play into it: not needing more servers to meet specific resource requirements, and the size of the datacenter itself, are all tied into costs. Costs matter. They always matter. As for the guys it "doesn't matter" to: they will change their tune as business slumps and they don't have the resources to continue, because they wasted millions where they didn't need to.
 