AMD Ryzen (Summit Ridge) Benchmarks Thread (use new thread)


Dresdenboy

Golden Member
Jul 28, 2003
1,730
554
136
citavia.blog.de
So is the bottom line here that running a unified scheduler at high frequencies is tougher to implement than separate Int/FP schedulers at high frequency?
There are two sides: a scheduler having more things to consider needs more time, but also more power. Zen's FP scheduler is of medium complexity given its 4 issue ports.

Somewhat simpler schedulers (like a group of single INT schedulers) may occasionally lose a cycle, which weighs more, the more pressure there is on the schedulers.
 

Abwx

Lifer
Apr 2, 2011
11,514
4,301
136
There are two sides: a scheduler having more things to consider needs more time, but also more power. Zen's FP scheduler is of medium complexity given its 4 issue ports.

Somewhat simpler schedulers (like a group of single INT schedulers) may occasionally lose a cycle, which weighs more, the more pressure there is on the schedulers.

Actually Intel's scheme is not as heavy as AMD's, due to the former's exe ports being used for both FP and INT.

Consequently, Intel uses a single scheduler with 97 entries (for SKL), but it serves both FP and INT, while AMD uses an INT scheduler with 84 entries and a separate FP scheduler with 96 entries, i.e. 180 entries per core.

Also, the two schedulers can work simultaneously, so each cycle 6 uops can be dispatched to the INT execution units and 4 uops to the FP units, for a total of 10 uops/cycle. On this subject, AMD mentioned that in previous uarchs the FP scheduling could sometimes stall the INT pipeline, which is no longer the case in Zen.

http://www.hardware.fr/news/14758/amd-detaille-architecture-zen.html
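
To recap those numbers in one place (nothing here beyond what is written above):

Code:
# Scheduler entries, as given above (SKL = Skylake).
skl_unified = 97                       # one scheduler shared by INT and FP
zen_int, zen_fp = 84, 96               # Zen: separate INT and FP schedulers
print("SKL unified scheduler entries:", skl_unified)          # 97
print("Zen scheduler entries per core:", zen_int + zen_fp)    # 180

# Dispatch widths if both sides really are fed in the same cycle:
print("INT + FP dispatch per cycle:", 6 + 4)                  # 10 uops/cycle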
 

cdimauro

Member
Sep 14, 2016
163
14
61
The possible combinations, and the required logic, grow with the factorial (n!), so it's useful to split into domains. Power 8 and 9 have more domains: int, branch, memory and FP (and maybe I forgot something).
The number of combinations is far lower than O(n!). And to be more specific, the problem of dispatching one uop to one (and only one) of the free ports (from the list of allowed ones for this specific uop) is fairly simple and fast to solve, and it can even be optimized by taking some design decisions.

Here it doesn't matter whether a port is used for INT, FP, or another kind of operation: that's totally irrelevant for this specific problem. You have a set of ports that can execute this specific uop, and you need to pick one of them if it's free, possibly taking into account some priority about which port should be used if more than one of them is free.

After that step, once the port is known, you have to dispatch the uop to it. That's done using a 3-input MUX or some more exotic solution.

Again, here I'm talking about the last step/stage of Haswell's Unified Reservation Station. And this is just my personal idea on the subject.
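
To give an idea of what I mean, here is a toy sketch of that selection step (the port numbers, the allowed-port set and the priority order are invented for illustration; real hardware does this with select logic, not software):

Code:
# Toy sketch of the "pick one free port" step: choose the highest-priority
# port that is both allowed for this uop and currently free.
def pick_port(allowed_ports, free_ports, priority):
    for port in priority:                 # fixed priority order
        if port in allowed_ports and port in free_ports:
            return port
    return None                           # cannot dispatch this cycle

# Hypothetical uop that may execute on port 0, 1 or 5, with port 0 busy.
print(pick_port(allowed_ports={0, 1, 5}, free_ports={1, 2, 5}, priority=[0, 1, 5]))  # -> 1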
But yes, this comes at a price: longer latencies on instructions that move data between domains... I don't think that they are more than a few percent of the executed instructions...
Disassembling some code you can see that there are some, but the problem is determining how many of them are executed at runtime. Here, you need a (good) profiler.
This is simple scheduling. Actually it can also be a single bit in the uop, set by the decoders...
That's only an implementation detail, but anyway Zen's ALUs are not symmetrical (they are specialized for some specific operations), so you need to know which of the (free) schedulers/ALUs needs to be selected.

But as I stated previously, the problem of dispatching one uop to one of those schedulers/ALUs is fairly simple (and fast).

@Abwx: the strange thing is that, according to the slides, Zen's uop cache can dispatch a maximum of 6 uops to schedulers...
 

Abwx

Lifer
Apr 2, 2011
11,514
4,301
136
@Abwx: the strange thing is that, according to the slides, Zen's uop cache can dispatch a maximum of 6 uops to schedulers...

6 uops/cycle can be dispatched to the INT schedulers unit and 4 uops/cycle to the FPU scheduler, simultaneously.

Finally, the dispatch can apply six instructions per cycle, at a maximum rate of 6/cycle to the INT scheduler or 4/cycle to the FP scheduler. We confirmed with AMD that the dispatch unit can simultaneously dispatch to both INT and FP inside the same cycle

http://www.anandtech.com/show/10591...t-2-extracting-instructionlevel-parallelism/3



This is also pointed out by the hardware.fr journalist:

In practice, 10 micro-ops can be sent (6 toward the "Integer" part of the chip, 4 toward the "Floating Point" part)
http://www.hardware.fr/news/14758/amd-detaille-architecture-zen.html
 

Tuna-Fish

Golden Member
Mar 4, 2011
1,474
1,962
136
6 uops/cycle can be dispatched to the INT schedulers unit and 4 uops/cycle to the FPU scheduler, simultaneously.

This is not what the statement says. It says it can dispatch up to 6 to INT, up to 4 to FP, and to both on the same cycle, but there is no statement whatsoever on it being able to dispatch the full throughput to both on the same cycle. I would be extremely surprised if it could -- this would be a costly feature to support, and would not really help much in real-world code.
 

cdimauro

Member
Sep 14, 2016
163
14
61
@Tuna-Fish: in fact it's not. Let me explain.
6 uops/cycle can be dispatched to the INT schedulers unit and 4 uops/cycle to the FPU scheduler, simultaneously.

http://www.anandtech.com/show/10591/amd-zen-microarchiture-part-2-extracting-instructionlevel-parallelism/3
From this link:

"Finally, the dispatch can apply six instructions per cycle, at a maximum rate of 6/cycle to the INT scheduler or 4/cycle to the FP scheduler."
From this link:

"6 ops dispatched" (from the Micro-op Queue to the decoders)
So, the decoders can get a maximum of 6 uops per cycle from the Micro-op Queue, and those 6 can be split between the INT and/or FP schedulers, with up to 6 and up to 4 respectively.
This doesn't mean that the maximum of dispatched uops is 6 + 4 = 10: it's still 6, which is the limit imposed by the Micro-op Queue.
 

KTE

Senior member
May 26, 2016
478
130
76
^Yeah, that's my understanding of it too. 6 max, shared. But the max for FP alone is 4.

Per cycle.

Sent from HTC 10
(Opinions are own)
 

superstition

Platinum Member
Feb 2, 2008
2,219
221
101
You said that unless Intel puts eDRAM on their CPUs, you will wait for Zen. I am telling you that Zen CPUs/APUs do not have eDRAM, something that is pretty common knowledge at this point. It would seem to me that you are applying what is called a "double standard" -- in other words, Intel needs to have eDRAM to be interesting to you, but AMD can get a free pass.

That, to me, suggests a potential bias towards AMD products.
Selective application of facts.

You forgot about a few things:

1) Intel already shipped Broadwell C to consumers with EDRAM
2) Intel shipped Skylake with an EDRAM controller

So, where's the EDRAM? Talk to Peter Bright if you're going to throw around accusations of bias due to disappointment over the lack of EDRAM with Skylake cores.

You also forgot a few other things:

3) Zen is likely to be priced lower, something I specifically mentioned.
4) Zen is likely to be soldered, not sent out with a thermal bottleneck from the use of mediocre TIM.

It's a better idea to read what I write instead of what you want me to have written. Your troll about me being biased toward AMD is cute, in particular, since I have owned more Intel systems than AMD systems and am typing on one right now. But, please continue to demonstrate your desire to browbeat people who push back at your salesmanship for Skylake in topics like this one where it is highly inappropriate.

Plus, it may not be important to you, but it is important for some to see AMD remain in competition, particularly enough of that to push Intel toward providing better products. It's not an irrational bias to give money to AMD if Zen offers an equivalent value to an Intel part, particularly given how Intel has abused its position with anti-competitive practices, practices that have been, in part, responsible for the slowing pace of technological advancement for enthusiasts and artificially-inflated pricing.
6700K for $340 is a pretty good deal for the kind of performance you can get -- meaningfully faster than prior generation Intel CPUs and leagues ahead of anything AMD currently offers. Why be disappointed?
Read Peter Bright's article and post a rebuttal if you can think of one.
A mainstream socketed desktop CPU with eDRAM and a big iGPU just isn't an attractive value proposition for most people, especially since it adds cost and could potentially negatively impact overclocking headroom.
EDRAM is separately clocked. The added cost and complexity of Intel devoting half of the CPU chip to iGPU is a lot more questionable.
Even when AMD fielded competitive products, pricing on high-end GPUs has been "high." Fortunately, NVIDIA offers products at just about every relevant price point, so buy to your budget.
Are you suggesting that AMD cannot offer a price-competitive part but Intel and Nvidia will?
What drawbacks are there in Skylake?
1) Ships with an EDRAM controller but has no EDRAM, causing it to lose to the older Broadwell C, or be matched by it — despite a clock deficit.

Read Peter Bright's article, specifically the section I quoted here that you apparently neglected to read about Moore's Law. Look at Anandtech's results showing the 5775C and even the 5675C beating Skylake in some games. Look at reviews that showed the Broadwell parts equaling Skylake performance in things like latency and frame timings despite a clock deficit.

2) Ships with mediocre TIM, inducing people to delid.

3) Ships with a thinned substrate.

4) Pricey.

5) Requires me to replace the 32 GB of fast DDR3 I've already invested in, unlike a Broadwell C part with the iGPU disabled and a high TDP that Intel could have offered instead of pushing out Skylake before it really offered anything to make a motherboard and RAM replacement worthwhile.

While it's true that Gigabyte offers a board that supports both DDR3 and DDR4, that's just one board, hardly much when it comes to choices. And that doesn't invalidate the strong point that Broadwell C, with very minor tweaks, could have offered consumers enough on an existing motherboard and RAM platform to be competitive with Skylake. Even with Skylake's release, Intel had the opportunity to offer more value and more choice by offering one Broadwell C with a high TDP, possibly with the iGPU disabled for yield improvement. Ideally, as well, it would have been soldered.

I was seriously considering buying a Broadwell C or a Skylake and decided that I wasn't happy enough with Intel's decisions to part with my cash. I am hoping Zen will offer something that will make me change my mind. Perhaps it will prompt Intel to finally offer an L4 cache in an updated platform with up-to-date cores to compete with it, a part that doesn't have so many drawbacks. If the 5775C had been reduced in price to something like the price of a 5675C, to compensate for buying into a dead-end platform, maybe I would have gotten one despite its drawbacks.
 

jpiniero

Lifer
Oct 1, 2010
15,091
5,655
136
Does the Skylake 4+2 die really have the EDRAM controller? It's pretty big, you would think it would be removed since they aren't using it.
 

superstition

Platinum Member
Feb 2, 2008
2,219
221
101
Does the Skylake 4+2 die really have the EDRAM controller? It's pretty big, you would think it would be removed since they aren't using it.

Anandtech 9-1-2015 said:
For Skylake, eDRAM enabled processors will be configured differently to previous versions in order to make the effect of the eDRAM more seamless for software.

In Skylake, the eDRAM arrangement changes:

Rather than acting as a pseudo-L4 cache, the eDRAM becomes a DRAM buffer and automatically transparent to any software (CPU or IGP) that requires DRAM access. As a result, other hardware that communicates through the system agent (such as PCIe devices or data from the chipset) and requires information in DRAM does not need to navigate through the L3 cache on the processor. Technically graphics workloads still need to circle around the system agent, perhaps drawing a little more power, but GPU drivers need not worry about the size of the eDRAM when it becomes buffer-esque and is accessed before the memory controller is adjusted into a higher power read request. The underlying message is that the eDRAM is now observed by all DRAM accesses, allowing it to be fully coherent and no need for it to be flushed to maintain that coherence. Also, for display engine tasks, it can bypass the L3 when required in a standard DRAM access scenario. While the purpose of the eDRAM is to be as seamless as possible, Intel is allowing some level on control at the driver level allowing textures larger than the L3 to reside only in eDRAM in order to prevent overwriting the data contained in the L3 and having to recache it for other workloads.

A number of media have already been requesting an announcement regarding a discrete processor with an eDRAM implementation, similar to Broadwell. I even enjoyed conversations at IDF where it was suggested that Intel could produce an i7 at 4.0 GHz with 128MB eDRAM, either with or without overclocking, and charge a nice $30-$50 premium for it. ... For users who actively want an LGA1151 4+4e configuration, make sure your Intel representative knows it, because customer requests travel up the chain.

WCCFtech said:
Intel Skylake Gen9 Graphics Features:

Gen9 Memory Hierarchy Refinements:

In Gen9 EDRAM now acts as a memory-side cache between LLC and DRAM. Also, the EDRAM memory controller has moved into the system agent, adjacent to the display controller, to support power efficient and low latency display refresh.

More surprisingly, the 6700K die also houses an optional eDRAM controller that can feature 64 MB to 128 MB of eDRAM (L4) cache with frequencies of up to 1.6 GHz to increase bandwidth, reduce latency and improve performance on faster iGPUs.
Everything I read about it said it shipped with the controller but not with EDRAM installed.

And then there's this:

Anandtech 9-15-2016 said:
Some could argue that going for a Crystal Well based processor, with added embedded DRAM (eDRAM), might be more helpful in certain memory limited situations. This is true ... like WinZip compression, this shows in the results. Something like the ... i7-5775C have 128MB of eDRAM that acts like a victim cache and can optimise most gains (up to 128MB) as long as the software is recalling frequently used memory due to the victim cache nature. The latest iteration of eDRAM processors such as the i7-6770HQ, where the eDRAM acts as a true DRAM buffer...
 

Abwx

Lifer
Apr 2, 2011
11,514
4,301
136
This is not what the statement says. It says it can dispatch up to 6 to INT, up to 4 to FP, and to both on the same cycle, but there is no statement whatsoever on it being able to dispatch the full throughput to both on the same cycle. I would be extremely surprised if it could -- this would be a costly feature to support, and would not really help much in real-world code.

That's not what the hardware.fr journalist is saying:

En pratique 10 micro ops peuvent être envoyées (6 vers la partie "Integer" de la puce, 4 vers la partie "Floating Point")

Literally translated:

In practice 10 micro-ops may be sent (6 toward the "Integer" part of the chip, 4 toward the "Floating Point" part), two more than Haswell ...

In the slide I posted there are two output paths from the dispatcher, one going to the Integer schedulers and the other to the FP scheduler. AMD's Mike Clark explicitly stated that the two parts of the CPU work as two independent processors, according to the Hardware.fr article.

https://translate.googleusercontent...n.html&usg=ALkJrhhdx7hNYCyhmV97bOgsUF_cXRWo_w
 

lolfail9001

Golden Member
Sep 9, 2016
1,056
353
96
From what I know, Skylake-H does ship without eDRAM, but it certainly has an eDRAM controller.

There is not a single suggestion Skylake-S does.

Anyway, in hindsight eDRAM is overrated. I mean, Skylake in initial reviews was losing to the 4790K in some benches, so now what?
 

cdimauro

Member
Sep 14, 2016
163
14
61
That's not what the hardware.fr journalist is saying:



Literally translated:

In practice 10 micro-ops may be sent (6 toward the "Integer" part of the chip, 4 toward the "Floating Point" part), two more than Haswell ...

In the slide I posted there are two output paths from the dispatcher, one going to the Integer schedulers and the other to the FP scheduler. AMD's Mike Clark explicitly stated that the two parts of the CPU work as two independent processors, according to the Hardware.fr article.

https://translate.googleusercontent...n.html&usg=ALkJrhhdx7hNYCyhmV97bOgsUF_cXRWo_w
Yes, but if only 6 uops enter such a dispatcher, how can it dispatch 10?
 

superstition

Platinum Member
Feb 2, 2008
2,219
221
101
eDRAM is overrated.
Remember that it was only implemented as a victim cache in Broadwell C. Skylake, by contrast, was developed so it can be used as a DRAM buffer.

And, Peter Bright disagrees.

I mean, Skylake in initial reviews was losing to 4790k in some benches, so now what?
The 4790K isn't clocked at the level of Broadwell C, nor does it have the same low TDP.

Also, if Haswell is that competitive with Skylake it also supports my argument that enthusiasts would have been better-served if Intel had sold a higher-TDP Broadwell C part (possibly with iGPU disabled for yields) and/or a Skylake with the EDRAM (particularly in the apparently improved condition of being a DRAM buffer not just a victim cache). Why ask everyone to buy new RAM and a new motherboard for a minor improvement?

IBM's results with the latest Power stuff also suggest that EDRAM is far from being overrated. There is EDRAM all over the place.
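
To make the distinction concrete, here is a very rough toy model of the two arrangements. The cache sizes, line granularity and access stream are invented, and real replacement and write-back policies are ignored; it only illustrates the different fill policies, not actual performance:

Code:
# Toy model of the two eDRAM arrangements discussed above:
# - Broadwell-C style: eDRAM as a victim cache, filled by L3 evictions.
# - Skylake style: eDRAM as a memory-side cache, filled on any DRAM access.
from collections import OrderedDict

class SimpleCache:
    """Fully associative LRU cache holding a fixed number of lines."""
    def __init__(self, lines):
        self.lines = lines
        self.data = OrderedDict()

    def access(self, addr):
        hit = addr in self.data
        if hit:
            self.data.move_to_end(addr)
        return hit

    def insert(self, addr):
        self.data[addr] = True
        self.data.move_to_end(addr)
        if len(self.data) > self.lines:
            return self.data.popitem(last=False)[0]    # evicted (victim) line
        return None

def run(stream, edram_is_victim):
    l3, edram = SimpleCache(8), SimpleCache(32)
    edram_hits = 0
    for addr in stream:
        if l3.access(addr):
            continue                                   # served by L3
        if edram.access(addr):
            edram_hits += 1
        elif not edram_is_victim:
            edram.insert(addr)                         # memory-side cache: fill on DRAM access
        victim = l3.insert(addr)
        if edram_is_victim and victim is not None:
            edram.insert(victim)                       # victim cache: fill on L3 eviction
    return edram_hits

stream = list(range(24)) * 4      # working set larger than L3, smaller than eDRAM
print("victim-cache eDRAM hits:", run(stream, edram_is_victim=True))
print("memory-side eDRAM hits: ", run(stream, edram_is_victim=False))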
 

Abwx

Lifer
Apr 2, 2011
11,514
4,301
136
Yes, but if only 6 uops enter such a dispatcher, how can it dispatch 10?


From this link:

"Finally, the dispatch can apply six instructions per cycle, at a maximum rate of 6/cycle to the INT scheduler or 4/cycle to the FP scheduler."

From this link:

"6 ops dispatched" (from the Micro-op Queue to the decoders)
So, the decoders can get a maximum of 6 uops per cycle from the Micro-op Queue, and those 6 can be split between the INT and/or FP schedulers, with up to 6 and up to 4 respectively.
This doesn't mean that the maximum of dispatched uops is 6 + 4 = 10: it's still 6, which is the limit imposed by the Micro-op Queue.

The slide I posted is lower level, and it says 6 to the INT schedulers and 4 to the FP scheduler...

And from the quote that you didn't quote completely, one has to wonder why:

Finally, the dispatch can apply six instructions per cycle, at a maximum rate of 6/cycle to the INT scheduler or 4/cycle to the FP scheduler. We confirmed with AMD that the dispatch unit can simultaneously dispatch to both INT and FP inside the same cycle, which can maximize throughput (the alternative would be to alternate each cycle, which reduces efficiency).

So if it can dispatch to the two schedulers in the same cycle, it is clear that it can dispatch as many as 10 uops in the same cycle. Indeed, you didn't quote the whole text since it contradicts your own point, or rather the lack of one; I guess it's a textbook case of cutting a text so that it means the contrary of what it was supposed to mean in the first place...
 

Saylick

Diamond Member
Sep 10, 2012
3,504
7,764
136
So if it can dispatch to the two schedulers in the same cycle, it is clear that it can dispatch as many as 10 uops in the same cycle. Indeed, you didn't quote the whole text since it contradicts your own point, or rather the lack of one; I guess it's a textbook case of cutting a text so that it means the contrary of what it was supposed to mean in the first place...

I am not a computer engineer, but I believe the correct translation of that statement is that the micro-op cache can dispatch up to 6 ops total per cycle, where the INT side can accept up to 6 ops and the FP side can accept up to 4 ops. Therefore, any combination of ops to each side, up to 6 total, can be made, e.g. 6 ops to INT and 0 to FP, or 2 to INT and 4 to FP, or 3 to INT and 3 to FP, or 0 to INT and 4 to FP, etc.
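
Put another way, just enumerating that reading (at most 6 total per cycle, with per-side caps of 6 and 4):

Code:
# All (INT, FP) splits allowed under this reading: at most 6 uops total per
# cycle, with at most 6 of them going to INT and at most 4 going to FP.
splits = [(i, f) for i in range(7) for f in range(5) if i + f <= 6]
print(splits)              # contains (6, 0), (2, 4), (3, 3), (0, 4), ...
print((6, 4) in splits)    # False: 6 INT + 4 FP = 10 in one cycle is excluded here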
 

bjt2

Senior member
Sep 11, 2016
784
180
86
I am not a computer engineer, but I believe the correct translation of that statement is that the micro-op cache can dispatch up to 6 ops total per cycle, where the INT side can accept up to 6 ops and the FP side can accept up to 4 ops. Therefore, any combination of ops to each side, up to 6 total, can be made, e.g. 6 ops to INT and 0 to FP, or 2 to INT and 4 to FP, or 3 to INT and 3 to FP, or 0 to INT and 4 to FP, etc.

I agree... If we imagine the arrow as a pipe with a 6-uop input, it can't output 10 uops...
 

Abwx

Lifer
Apr 2, 2011
11,514
4,301
136
Therefore, any combination of ops to each side, up to 6 total, can be made, e.g. 6 ops to INT and 0 to FP, or 2 to INT and 4 to FP, or 3 to INT and 3 to FP, or 0 to INT and 4 to FP, etc.

If more than 6 uops are available in the uop queue, say 6 INT uops and 4 FP uops, it wouldn't make sense that only 6 could be dispatched: to do so, only a few more registers would be needed, the paths are clearly separated, and the uops are already tagged, so it would cost nothing complexity-wise to dispatch 10 uops with that split.

..
 

Saylick

Diamond Member
Sep 10, 2012
3,504
7,764
136
If more than 6 uops are available in the uop queue, say 6 INT uops and 4 FP uops, it wouldn't make sense that only 6 could be dispatched: to do so, only a few more registers would be needed, the paths are clearly separated, and the uops are already tagged, so it would cost nothing complexity-wise to dispatch 10 uops with that split.

..
I'm sure it would be great to have that micro-op queue be able to dispatch to each and every port (all 6 INT and all 4 FP) but I bet the likelihood that there will be an op for each execution unit in a given cycle is rather slim, which explains why the total dispatch rate is reduced to 6 ops/cycle. 6 is probably the optimal number per AMD's internal testing; any less and you start starving the execution units for the design workload (factoring in SMT to help feed the execution units), and more than 6 is probably an inefficient use of die space or power given that most programs probably don't require processing over 4 ALUs, 2 AGUs, 2 FP MULs, and 2 FP ADDs in a given cycle.
 

Dresdenboy

Golden Member
Jul 28, 2003
1,730
554
136
citavia.blog.de
The 6 vs. 10 uop dispatch likely causes a performance delta less than typical error margins. You might win less processing time on a Zen in a month than you spend here reading, researching, and writing answers related to this topic.

Also worth considering:
Being able to simultaneously dispatch to INT and FP doesn't necessarily mean up to 10 uops, because a typical FP x86 instruction with a memory operand would also issue to both FP and INT. The latter means the AGUs and maybe one of the ALUs responsible for SSE/AVX MOVs, according to the patch.
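
As a made-up concrete example of that case (the cracking and the field names below are my guess for illustration, not AMD's actual uop format), an FP add with a memory source would occupy a slot on each side in the same cycle:

Code:
# Hypothetical cracking of one x86 FP instruction with a memory operand:
# the load micro-op goes to the INT side (an AGU), the FP add to the FP side.
instruction = "addps xmm0, [rbx]"

uops = [
    {"op": "load",  "side": "INT", "unit": "AGU",    "dest": "tmp0", "src": "[rbx]"},
    {"op": "addps", "side": "FP",  "unit": "FP ADD", "dest": "xmm0", "src": "xmm0, tmp0"},
]

per_side = {"INT": 0, "FP": 0}
for u in uops:
    per_side[u["side"]] += 1

# One instruction, two dispatch slots, one on each side, in the same cycle.
print(instruction, "->", per_side)   # {'INT': 1, 'FP': 1}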
 

cdimauro

Member
Sep 14, 2016
163
14
61
Nevertheless, if the Micro-op Cache can deliver a maximum of 6 uops/cycle, this is clearly the upper bound for both INT and FP decoders.
 

bjt2

Senior member
Sep 11, 2016
784
180
86
The 6 vs. 10 uop dispatch likely causes a performance delta less than typical error margins. You might win less processing time on a Zen in a month than you spend here reading, researching, and writing answers related to this topic.

Also worth considering:
Being able to simultaneously dispatch to INT and FP doesn't necessarily mean up to 10 uops, because a typical FP x86 instruction with a memory operand would also issue to both FP and INT. The latter means the AGUs and maybe one of the ALUs responsible for SSE/AVX MOVs, according to the patch.

So a memory FP instruction is decoded into at least 2 uops? Now I get the REAL reason to have the double-path decoder... There are a few instructions that need 2 uops, but they are very important...
I bet that a memory FP instruction is decoded as a single uop in Intel CPUs, due to the unified scheduler...
 

bjt2

Senior member
Sep 11, 2016
784
180
86
Nevertheless, if the Micro-op Cache can deliver a maximum of 6 uops/cycle, this is clearly the upper bound for both INT and FP decoders.

It is stated that the decoders are four... It would not make sense, power-wise, to add more decoders... Even if the hit rate in the uop cache is only 50%, the decoders can produce at least as many uops as the 2x4 decoders in Excavator, for 2 threads... But this time the two threads have 4 ALU, 2 AGU+L/S and 4 FP pipelines, instead of the 4 ALU, 4 AGU+L/S and 4 FP pipelines...
If the uop cache is big enough, the hit rate can be as high as 80%... And so maybe the 4 decoders are even too many...
I think that the first improvement they will make in Zen+ is to beef up the uop cache, without losing frequency... I bet that they are doing a lot of simulations on this topic...
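
A back-of-the-envelope version of that reasoning (assuming up to 6 uops/cycle from the uop cache on a hit, up to 4 uops/cycle from the 4 decoders on a miss, roughly one uop per instruction, and no other front-end limit; all simplifications, of course):

Code:
# Rough front-end throughput vs. uop-cache hit rate, purely illustrative.
def avg_uops_per_cycle(hit_rate, cache_width=6, decode_width=4):
    return hit_rate * cache_width + (1.0 - hit_rate) * decode_width

for hr in (0.5, 0.8, 0.9):
    print(f"uop-cache hit rate {hr:.0%}: ~{avg_uops_per_cycle(hr):.1f} uops/cycle")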
 

itsmydamnation

Platinum Member
Feb 6, 2011
2,907
3,517
136
Remember AMD decodes into MOPs (assuming they haven't changed the way they have done things since, like, K7). 1 x86 instruction = 1 to 2 MOPs. Traditionally AMD's decoders can each decode 1-2 MOPs. Each MOP can contain arithmetic and memory uops.
 

cdimauro

Member
Sep 14, 2016
163
14
61
It is stated that the decoders are four... It would not make sense, power-wise, to add more decoders... Even if the hit rate in the uop cache is only 50%, the decoders can produce at least as many uops as the 2x4 decoders in Excavator, for 2 threads... But this time the two threads have 4 ALU, 2 AGU+L/S and 4 FP pipelines, instead of the 4 ALU, 4 AGU+L/S and 4 FP pipelines...
If the uop cache is big enough, the hit rate can be as high as 80%... And so maybe the 4 decoders are even too many...
I think that the first improvement they will make in Zen+ is to beef up the uop cache, without losing frequency... I bet that they are doing a lot of simulations on this topic...
It was a lapsus: decoders -> dispatchers.
 