"primitive" GPU's


silverpig

Lifer
Jul 29, 2001
27,703
11
81
Originally posted by: FreemanHL2
Hey, I have a question! Why does Maya use your CPU for rendering when the GPU is more refined at doing these types of calculations? And don't say that Maya needs more raw power, because you have been arguing against the MHz stereotype throughout this thread.

Because Maya uses a lot of custom-programmed rendering techniques, raytracing, and a lot of other things. Think of a CPU as being a bunch of metal workers, mechanics, and machinists. Now think of a GPU as being a factory of robots pre-programmed to manufacture a Ford Focus.

Obviously the robots are going to be able to spew out Focuses much faster than the people are, but this is because they're specifically designed to do certain operations that are pre-programmed in.

Now have them custom build you an Aston Martin to your specs. The people can do it, but it'll take some time. The robots will sit there and do nothing because they can't read blueprints.

GPUs have special hardware that allows them to do specific tasks well, but they aren't very versatile. CPUs, on the other hand, try to do everything fairly well, but very little exceptionally well.
 

imgod2u

Senior member
Sep 16, 2000
993
0
0
It doesn't take 2 years to design a ~2.6 GHz processor; it takes 6 years, and at a scale of ~180 million transistors, even more time. Prescott, at 125 million transistors (not forgetting that more than half of that is cache), took much more than 6 years to design. If we consider the original Willamette as taking 6 years, then 4 more years to get to Prescott, that's 10 years for an MPU core of roughly ~60 million logic transistors to get to 3.8 GHz.
A modern GPU is ~180 million transistors *without massive amounts of cache*. Most of it is logic. To custom-design an MPU of that magnitude and get it to run in the GHz range (if you even could design something like that by hand) would probably take 20 or 30 years (design complexity and debug problems grow exponentially, so that's an underestimate). In 20 or 30 years, you could have put those resources towards designing many more robust architectures that would have provided features that greatly outperform your multi-GHz MPU.
The reason CPU manufacturers can't do this (or rather, don't do this) is that it is much more difficult to design a general-purpose processor that performs well in all areas than a graphics processor. In the latter case, you can simply add more pipelines in parallel, add more FPUs, add more memory bandwidth, and so on, and easily increase performance. For a general-purpose processor, that is not true, so we need higher frequencies.
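
To make the "just add more pipelines" point concrete, here is a rough C++ sketch (the shade() function and the row-splitting scheme are invented purely for illustration, not how any real GPU or driver works). Every pixel is independent of every other, so each extra worker adds throughput almost for free:

#include <algorithm>
#include <cstdint>
#include <thread>
#include <vector>

// Hypothetical per-pixel function -- a stand-in for the work one GPU pipeline does.
static std::uint32_t shade(std::uint32_t x, std::uint32_t y) {
    return (x * 31u) ^ (y * 17u);   // placeholder math, not a real shading model
}

int main() {
    const std::uint32_t width = 640, height = 480;
    std::vector<std::uint32_t> framebuffer(width * height);

    // "Adding more pipelines" amounts to adding more workers: no pixel depends on
    // another, so throughput scales with the number of parallel units.
    const unsigned pipes = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> workers;
    for (unsigned p = 0; p < pipes; ++p) {
        workers.emplace_back([&, p] {
            for (std::uint32_t y = p; y < height; y += pipes)   // each worker takes every Nth row
                for (std::uint32_t x = 0; x < width; ++x)
                    framebuffer[y * width + x] = shade(x, y);
        });
    }
    for (auto& w : workers) w.join();
    return 0;
}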
 

itachi

Senior member
Aug 17, 2004
390
0
0
Originally posted by: Megamixman
GPUs are specifically designed to be able to calculate data in certain ways. Put it this way: the shader units are specific sequences of math units that perform the shading algorithm. This constrains them to certain types of calculations, mostly calculations that need to be done on the fly, such as in games or for previews in CAD or 3D programs. The final scene from a 3D program takes on a complexity that is not designed for on-the-fly calculation. It is meant for detail. Secondly, video cards are not designed to calculate data. Of course, if you work in the 3D industry you are working with render clusters, so that isn't a big problem.
gpus aren't designed to calculate data.. then what exactly are they designed for? answer that without using the words "calculate/compute/add/subtract/multiply/divide". gpus are designed to do vector math (linear algebra).. computer graphics are based on sets of vectors (matrices).
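
For anyone who wants to see the shape of that math, the bread-and-butter operation is a 4x4 matrix times a 4-component vertex. A minimal C++ sketch (plain scalar code, nothing GPU-specific, just what the hardware parallelizes across many vertices at once):

#include <array>
#include <cstdio>

using Vec4 = std::array<float, 4>;
using Mat4 = std::array<Vec4, 4>;   // row-major 4x4 matrix

// One vertex transform: 16 multiplies and 12 adds, all independent enough
// that a GPU can do them (and many vertices) in parallel.
Vec4 transform(const Mat4& m, const Vec4& v) {
    Vec4 out{};
    for (int row = 0; row < 4; ++row)
        for (int col = 0; col < 4; ++col)
            out[row] += m[row][col] * v[col];
    return out;
}

int main() {
    Mat4 translate = {{ {1,0,0,5}, {0,1,0,0}, {0,0,1,0}, {0,0,0,1} }};  // move +5 on x
    Vec4 vertex    = {1, 2, 3, 1};
    Vec4 result    = transform(translate, vertex);
    std::printf("%.1f %.1f %.1f %.1f\n", result[0], result[1], result[2], result[3]);
    return 0;
}
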
Branch prediction, micro-ops, etc. were ways for Intel to keep the computer going down the path they wanted. Until now, CPUs have been advancing through pure MHz. It's called business; there is more to a technological advance than just scientific genius, especially in a major company in a capitalist economy.
micro-ops were a way to fix the problem of x86 instructions not being capable of executing in parallel with the given set of GPRs. without a branch prediction unit, the pipeline would be useless. all this stuff is a way for companies to make cheap "superscalar" processors..
Shading is also not just integer math. Where did you get that idea? If it were still integer math, Halo wouldn't even look half as good as it did. After all, our eyes see colors, not distinct geometric objects with a shade applied to them. That is processed in the brain.
never said that it was.. what i was saying was that a gpu would process integer math extremely slowly compared to a cpu. the multiple pipelines used by gpus are utilized when performing ops on matrices and vectors.
edit: and when i say integer math, i mean a single integer, not matrix or vector math with integers.
GPUs are in no way primitive. Converting 3D data into 2D is also not primitive. It is very complex. Computers can't do all the calculations required for that in reality. What we have are estimates, and even those estimates are computationally intensive. The advantage a GPU designer has is that he doesn't need a branch prediction unit, because there are only so many different operations that need to be done. Secondly, he has to design within technological limits. I don't know anyone at Intel who would want to put 16 Prescott cores on one chip. Seriously, this is almost like asking which is faster, a Mac or a PC.
alright, let me rephrase myself.. gpus are complex, but aren't as complex as cpus. when looking at them from a global pov, then yea.. they're extremely complex. but relative to cpus these days, they're simple. much of the time spent on gpus is spent optimizing path delays and simple logic. "..doesn't need a Branch Prediction Unit.." exactly. gpus don't benefit from the stuff i mentioned.. which are the more complex parts that comprise cpu synthesis.

I will take a shot at this. For one, scheduling is done by the operating system, not by the CPU. Second, branching, or branch prediction, is a fancy way of saying "I will assume the answer is X, and work as though it is X, instead of wasting clock cycles waiting for an answer. If the answer is not X, then no harm, no foul." The misprediction penalty is non-existent. If the wrong branch is taken, it is as if no branch were taken at all. All work is thrown out and the correct path is loaded.
instruction scheduling is a way to keep bubbles from forming in the pipeline.. from what i've learned, this is implemented by hardware and software (compilers, not the OS)
An example: If I told you to add two numbers together, and the numbers to add were either W+X or Y+Z, but I would have to get back to you on which two, you could start working on W+X. If I came back and said it is Y+Z, you can immediately quit work on W+X and start Y+Z. No time is wasted.
that's not how a pipeline works.. say i told you to add w to x and compare x with y; if they're equal, go to A, otherwise go to B. say x is equal to y, so A is executed.. let's say A modifies the value of x.. then after A executes, another comparison of x and y is made; this time, if it's true it goes to C, otherwise D. assume that the branch prediction table wasn't updated during or after A; then when the cpu sees the branch, its prediction is 'taken' and it'll start executing set C. when it comes to the realization that x is not equal to y, you have a misprediction and the pipeline gets flushed.
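
For reference, that scenario written out as code (w, x, y and the A-D blocks are placeholder names); the second if is where a stale "taken" prediction would send the front end down the wrong path before the flush:

#include <cstdio>

// Hypothetical stand-ins for the A/B/C/D blocks in the example above.
void blockA(int& x) { x += 3; }     // A modifies x, which is what invalidates the old prediction
void blockB(int&)   { }
void blockC()       { std::puts("C"); }
void blockD()       { std::puts("D"); }

int main() {
    int w = 2, x = 5, y = 7;

    x += w;                 // "add w to x"
    if (x == y)             // first compare: taken (7 == 7), so a predictor learns "taken"
        blockA(x);          // A changes x...
    else
        blockB(x);

    if (x == y)             // second compare: the predictor still guesses "taken" and the
        blockC();           // front end starts fetching C, but x != y now, so the
    else                    // pipeline is flushed and D is executed instead
        blockD();
    return 0;
}
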
GPUs are not primitive. The 6800 has 222 million transistors and can perform tasks that a CPU cannot. I would agree that CPUs can perform more tasks and take more planning, but "primitive" is a relative term.
well, that's how i meant it.. the gpu is primitive relative to the cpu. i'm not arguing that a gpu is equivalent to a calculator.
there's nothing that a gpu can do that a cpu can't do.. it may take 100 cycles longer, but it'll still be able to do it.
 

Spencer278

Diamond Member
Oct 11, 2002
3,637
0
0
I would guess GPU makers don't ramp clock speed like CPU makers because it takes fewer transistors to add parallel pipelines than to increase the depth of the pipeline units. Also, I think a low clock speed is easier to design for.
 

complacent

Banned
Dec 22, 2004
191
0
0
Originally posted by: itachi

instruction scheduling is a way to keep bubbles from forming in the pipeline.. from what i've learned, this is implemented by hardware and software (compilers, not the OS)

that's not how a pipeline works.. say i told you to add w to x and compare x with y; if they're equal, go to A, otherwise go to B. say x is equal to y, so A is executed.. let's say A modifies the value of x.. then after A executes, another comparison of x and y is made; this time, if it's true it goes to C, otherwise D. assume that the branch prediction table wasn't updated during or after A; then when the cpu sees the branch, its prediction is 'taken' and it'll start executing set C. when it comes to the realization that x is not equal to y, you have a misprediction and the pipeline gets flushed.

I misunderstood what you meant by scheduling, and yes, instruction scheduling would be done by the compiler.

As far as pipelines and prediction, what I gave was a valid instance of it, as well as yours. Branch prediction is not that complicated compared to some of the other technology. The most common case would be any loop. When in a loop, most processors will assume that the loop's branch is taken, because ~80% of the time it will be. So the CPU chugs along assuming the taken path, and if it guessed wrong, again, there is no lost time; the CPU would have sat idle for that time if it didn't predict. I think the only thing we disagree on is the definition of primitive. But you cannot say a GPU is primitive because it isn't that versatile. Is a boat primitive because it can only travel on water? Is a telephone primitive because its microcontroller cannot perform a floating-point operation?
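
To put the loop case in concrete terms: the conditional branch that closes a loop like the one below is taken on every iteration except the last, so a plain "assume taken" guess is right 99 times out of 100 here (the loop count is just an example):

#include <cstdio>

int main() {
    int sum = 0;
    // The branch at the bottom of this loop is taken 99 times and falls through once,
    // so a predictor that assumes "the loop continues" mispredicts only on the final iteration.
    for (int i = 0; i < 100; ++i)
        sum += i;
    std::printf("%d\n", sum);   // 4950
    return 0;
}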

 

Matthias99

Diamond Member
Oct 7, 2003
8,808
0
0
Originally posted by: complacent
Originally posted by: itachi

instruction scheduling is a way to keep bubbles from forming in the pipeline.. from what i've learned, this is implemented by hardware and software (compilers, not the OS)

that's not how a pipeline works.. say i told you to add w to x and compare x with y; if they're equal, go to A, otherwise go to B. say x is equal to y, so A is executed.. let's say A modifies the value of x.. then after A executes, another comparison of x and y is made; this time, if it's true it goes to C, otherwise D. assume that the branch prediction table wasn't updated during or after A; then when the cpu sees the branch, its prediction is 'taken' and it'll start executing set C. when it comes to the realization that x is not equal to y, you have a misprediction and the pipeline gets flushed.

I misunderstood what you meant by scheduling, and yes, instruction scheduling would be done by the compiler.

As far as pipelines and prediction, what I gave was a valid instance of it, as well as yours. Branch prediction is not that complicated compared to some of the other technology. The most common case would be any loop. When in a loop, most processors will assume that the loop's branch is taken, because ~80% of the time it will be. So the CPU chugs along assuming the taken path, and if it guessed wrong, again, there is no lost time; the CPU would have sat idle for that time if it didn't predict. I think the only thing we disagree on is the definition of primitive. But you cannot say a GPU is primitive because it isn't that versatile. Is a boat primitive because it can only travel on water? Is a telephone primitive because its microcontroller cannot perform a floating-point operation?

I think part of the issue here is that the word "primitive" is both loaded and imprecise.

You can argue that the path that data takes through a GPU (at least a non-programmable GPU) is relatively *simple* compared to how data is processed in a modern CPU. The pipeline has fewer components (even though each one probably does more computation than a pipeline step in a CPU, since most GPU operations are vector-based), and it does not deal with branch prediction, pipeline stalls, etc. as discussed above. Once you start getting into programmable shaders, things get a little trickier, but are still not as complex. Essentially, GPUs are starting to evolve some of the more sophisticated mechanisms that CPUs use, but still are not nearly as "complex", in terms of how many steps data goes through from input->output.

However, in terms of the computational work performed, a modern GPU does a *lot* more ops/sec. (as measured against a common reference) than a modern CPU. It's just that, because of the specific tasks it's built to do, it makes more sense to create a highly parallel, super-high-IPC, vector-based architecture than a single-pipe, (relatively) low-IPC, superscalar one. Something like a P4 would need several instructions (doing at least several integer adds and multiplies, and one or more memory reads) to match the work done in each clock cycle of just one pipeline of a GPU like the NV40. Calling such an architecture "primitive" just because it is more straightforward is rather misleading.
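
As a rough illustration of that gap (the numbers are invented, and a real GPU pipe does considerably more than this per stage): a 4-wide multiply-add, which a vector unit treats as one operation, unrolls into at least eight scalar arithmetic instructions plus loads and stores on a conventional core. A C++ sketch of the operation:

#include <array>
#include <cstdio>

using Vec4 = std::array<float, 4>;

// What one GPU pipeline stage conceptually does per clock: a 4-wide multiply-add.
Vec4 madd4(const Vec4& a, const Vec4& b, const Vec4& c) {
    Vec4 r{};
    for (int i = 0; i < 4; ++i)
        r[i] = a[i] * b[i] + c[i];   // 4 multiplies + 4 adds, done side by side in hardware
    return r;
}

int main() {
    Vec4 a{1, 2, 3, 4}, b{5, 6, 7, 8}, c{0.5f, 0.5f, 0.5f, 0.5f};
    // A scalar core executes this as (at least) 8 separate arithmetic instructions
    // plus loads and stores; a vector unit issues it as a single operation.
    Vec4 r = madd4(a, b, c);
    std::printf("%.1f %.1f %.1f %.1f\n", r[0], r[1], r[2], r[3]);
    return 0;
}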

GPUs are a one-trick pony, which is why they're so blazingly fast; you can do a lot more in a lot less time if you know 100% of what you have to do beforehand. A CPU has no clue which instruction is coming next, and has to guess about branches; the fixed-function portions of a GPU pipeline are just that -- fixed! They have a lot of things hard-wired that CPUs have to compute in real time.
 

Megamixman

Member
Oct 30, 2004
150
0
0
Originally posted by: itachi
gpus aren't designed to calculate data.. then what exactly are they designed for? answer that without using the words "calculate/compute/add/subtract/multiply/divide". gpus are designed to do vector math (linear algebra).. computer graphics are based on sets of vectors (matrices).
Well, even by what you are saying, it is calculating data. It does, after all, have to take an input of matrices to do the vector math. I was trying to generalize, because the actual logical units are not designed specifically to shade; each shader is made up of arithmetic units that have to be seen as the logical units. They perform arithmetic operations, but in computers everything is put in the form of math operations, so you can generalize anything a CPU or GPU is doing by saying that it is computing data. This is especially true since I once saw an article about a sound-effects program that ran on the GPU. It needed lots of parallel processing power, so the author created a program that converted the MIDI inputs to vector data and sent it to the GPU for certain operations.

micro-ops were a way to fix the problem of x86 instructions not being capable of executing in parallel with the given set of GPRs. without a branch prediction unit, the pipeline would be useless. all this stuff is a way for companies to make cheap "superscalar" processors..

Exactly; the end point is a cheap superscalar processor. In other words, you can't just say that a TI-89 is primitive. It is simple because anything more wouldn't be economically sane. There is no high school or even college student who needs the power of a 1 GHz CPU in a graphing calculator. There is no need to spend all the money hand-optimizing a GPU when you can make it more parallel for cheaper.

never said that it was.. what i was saying was that a gpu would process integer math extremely slowly compared to a cpu. the multiple pipelines used by gpus are utilized when performing ops on matrices and vectors.
alright, let me rephrase myself.. gpus are complex, but aren't as complex as cpus. when looking at them from a global pov, then yea.. they're extremely complex. but relative to cpus these days, they're simple. much of the time spent on gpus is spent optimizing path delays and simple logic. "..doesn't need a Branch Prediction Unit.." exactly. gpus don't benefit from the stuff i mentioned.. which are the more complex parts that comprise cpu synthesis.


Ok, this I can agree with. ;p I think you answered your own question about why GPUs aren't as complex.
 

imgod2u

Senior member
Sep 16, 2000
993
0
0
Originally posted by: Spencer278
I would guess GPU makers don't ramp clock speed like CPU makers because it takes fewer transistors to add parallel pipelines than to increase the depth of the pipeline units. Also, I think a low clock speed is easier to design for.

A 6800 GPU is ~200 million transistors of logic, with little cache. Prescott is 125 million transistors, of which ~60% is cache. Fewer transistors to add parallel units, you say? No; less design time, yes. In reality, GPUs have extremely long pipelines and lots of them in parallel. That kind of thing can be designed quite rapidly in an HDL. Actually timing those individual pipeline stages and optimizing circuit paths at the transistor or even wire level takes a lot of time, and that's how you get to the multi-GHz range.
 

itachi

Senior member
Aug 17, 2004
390
0
0
Originally posted by: Megamixman
Well, even by what you are saying, it is calculating data. It does, after all, have to take an input of matrices to do the vector math. I was trying to generalize, because the actual logical units are not designed specifically to shade; each shader is made up of arithmetic units that have to be seen as the logical units. They perform arithmetic operations, but in computers everything is put in the form of math operations, so you can generalize anything a CPU or GPU is doing by saying that it is computing data. This is especially true since I once saw an article about a sound-effects program that ran on the GPU. It needed lots of parallel processing power, so the author created a program that converted the MIDI inputs to vector data and sent it to the GPU for certain operations.
eh, i had a feeling that this might cause some sort of confusion.. "Secondly, video cards are not designed to calculate data." that's what you said, and that's what my argument was aimed at.
Exactly; the end point is a cheap superscalar processor. In other words, you can't just say that a TI-89 is primitive. It is simple because anything more wouldn't be economically sane. There is no high school or even college student who needs the power of a 1 GHz CPU in a graphing calculator. There is no need to spend all the money hand-optimizing a GPU when you can make it more parallel for cheaper.
we're arguing 2 different things.. it seems like you're arguing about why gpus aren't more complex than cpus, which isn't the point i was trying to make. my arguments are based on the inherent nature of gpus in their current state relative to cpus. i'm not trying to justify or question the justification of the design.
looking at it now, i guess the use of the word primitive would imply that i thought gpus were on the opposite side of the spectrum compared to cpus.. which isn't the case. so, i'll say that gpus are "simpler" than cpus..
Ok, this I can agree with. ;p I think you answered your own question about why GPUs aren't as complex.
wasn't trying to imply that a gpu would benefit from the complex logic that goes into a cpu.
As far as pipelines and prediction, what I gave was a valid instance of it, as well as yours. Branch prediction is not that complicated compared to some of the other technology. The most common case would be any loop. When in a loop, most processors will assume that the loop's branch is taken, because ~80% of the time it will be. So the CPU chugs along assuming the taken path, and if it guessed wrong, again, there is no lost time; the CPU would have sat idle for that time if it didn't predict. I think the only thing we disagree on is the definition of primitive. But you cannot say a GPU is primitive because it isn't that versatile. Is a boat primitive because it can only travel on water? Is a telephone primitive because its microcontroller cannot perform a floating-point operation?
just out of curiosity.. in your example, there are 2 paths that the program could take.. if it starts executing both branches, how would it know which one not to execute? i know the write-back would return the result and all for the comparison.. but the execution is out-of-order, so it won't be a simple matter of ignoring sequential cycles. it seems like the cpu would have to "follow" the branch and know which write-backs not to perform and which outputs to ignore; i can't imagine that this would be simple. or maybe the write-back/output is muxed? mm yea.. just curious.
 

Matthias99

Diamond Member
Oct 7, 2003
8,808
0
0
just out of curiosity.. in your example, there are 2 paths that the program could take.. if it starts executing both branches, how would it know which one not to execute? i know the write-back would return the result and all for the comparison.. but the execution is out-of-order, so it won't be a simple matter of ignoring sequential cycles. it seems like the cpu would have to "follow" the branch and know which write-backs not to perform and which outputs to ignore; i can't imagine that this would be simple. or maybe the write-back/output is muxed? mm yea.. just curious.

I don't claim to be an expert on CPU design (let alone the P4 in particular), but I believe they have some sort of mechanism to block the output until they have determined which side of the branch to take. Obviously, a pipeline stall can occur if it takes more than just a few cycles to figure out which side is actually being taken, and in that case it would be unable to really predict the branch effectively.
 

imgod2u

Senior member
Sep 16, 2000
993
0
0
Originally posted by: Matthias99
just out of curiosity.. in your example, there are 2 paths that the program could take.. if it starts executing both branches, how would it know which one not to execute? i know the write-back would return the result and all for the comparison.. but the execution is out-of-order, so it won't be a simple matter of ignoring sequential cycles. it seems like the cpu would have to "follow" the branch and know which write-backs not to perform and which outputs to ignore; i can't imagine that this would be simple. or maybe the write-back/output is muxed? mm yea.. just curious.

I don't claim to be an expert on CPU design (let alone the P4 in particular), but I believe they have some sort of mechanism to block the output until they have determined which side of the branch to take. Obviously, a pipeline stall can occur if it takes more than just a few cycles to figure out which side is actually being taken, and in that case it would be unable to really predict the branch effectively.

Not sure how it's done on NetBurst, but it's definitely possible to do branch resolution in the re-order buffer. I'm guessing that's actually how it's done on NetBurst and other OoOE chips, as the branch mispredict penalty is the entire length of the pipeline (from decode to retire). So mispredicted branches are only known when instructions are put back into order and retired (or, in the case of a mispredict, discarded).
 

CTho9305

Elite Member
Jul 26, 2000
9,214
1
81
just out of curiosity.. in your example, there are 2 paths that the program could take.. if it starts executing both branches, how would it know which one not to execute? i know the write-back would return the result and all for the comparison.. but the execution is out-of-order, so it won't be a simple matter of ignoring sequential cycles. it seems like the cpu would have to "follow" the branch and know which write-backs not to perform and which outputs to ignore; i can't imagine that this would be simple. or maybe the write-back/output is muxed? mm yea.. just curious.

Basically, instructions are fetched and decoded in order, but executed out of order. One way to handle exceptions and branch mispredictions and maintain correct results is to use a reorder buffer - as you finish decoding instructions / dispatch instructions to reservation stations (this is still in order), you also add them to a queue at the end of the pipeline: the reorder buffer. Instructions are tagged with a reorder buffer ID as they flow through the pipelines, and when they finish, they don't actually write to the "architectural state" until they're the oldest entry in the reorder buffer (they write to a speculative register file in the meantime, or hold the results in the reorder buffer itself). Branches are also given entries in the reorder buffer, and if you discover that you mispredicted a branch, you just discard all the younger reorder buffer entries.
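
A heavily simplified C++ model of that idea (every name and field here is invented for the sketch, and the "execution" step is faked): instructions retire from the head of the buffer in program order, and a mispredicted branch simply discards every younger entry behind it.

#include <cstdio>
#include <deque>
#include <string>

// Toy reorder-buffer entry: real ones track destination registers, exceptions, etc.
struct RobEntry {
    std::string op;
    bool done         = false;   // has the out-of-order core finished executing it?
    bool isBranch     = false;
    bool mispredicted = false;
};

int main() {
    std::deque<RobEntry> rob;

    // Dispatch in program order (this is the in-order front end).
    rob.push_back({"add  r1, r2"});
    rob.push_back({"beq  r1, r3", false, true, true});   // will turn out mispredicted
    rob.push_back({"mul  r4, r5"});                      // fetched down the wrong path
    rob.push_back({"sub  r6, r7"});                      // also wrong-path

    // Pretend execution finished, out of order.
    for (auto& e : rob) e.done = true;

    // Retire from the head, in order. On a mispredicted branch, squash everything younger.
    while (!rob.empty() && rob.front().done) {
        RobEntry e = rob.front();
        rob.pop_front();
        std::printf("retire: %s\n", e.op.c_str());
        if (e.isBranch && e.mispredicted) {
            std::printf("mispredict: squashing %zu younger entries\n", rob.size());
            rob.clear();            // wrong-path work never touches architectural state
        }
    }
    return 0;
}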

In the MIPS R10000, they use a slightly different method to squash instructions that follow branches - if I remember correctly, it is limited to 4 unresolved branches at any given time, and every instruction is given a 4-bit tag: one bit per branch... if the instruction depends on the branch, the bit is set. When you resolve a branch, you either kill any instructions with the corresponding bit set (if the branch went the wrong way) or clear the bit (if the branch went the right way).
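
And a sketch of that R10000-style mask trick as described above (one branch, one resolve step, everything else simplified away):

#include <cstdint>
#include <cstdio>
#include <vector>

// Each in-flight instruction carries a mask with one bit per unresolved branch
// it depends on (the R10000 allowed up to 4, hence a 4-bit mask).
struct InFlight {
    const char*  op;
    std::uint8_t branchMask;   // bit i set => speculative past unresolved branch i
    bool         killed = false;
};

int main() {
    std::vector<InFlight> window = {
        {"add", 0b0000},   // before any branch
        {"beq", 0b0000},   // this is unresolved branch 0
        {"mul", 0b0001},   // fetched past branch 0, so bit 0 is set
        {"sub", 0b0001},
    };

    // Branch 0 resolves. If it was mispredicted, kill everything tagged with its bit;
    // if it was predicted correctly, just clear the bit instead.
    const std::uint8_t branchBit = 0b0001;
    const bool mispredicted = true;

    for (auto& ins : window) {
        if (ins.branchMask & branchBit) {
            if (mispredicted) ins.killed = true;
            else              ins.branchMask &= ~branchBit;
        }
    }

    for (const auto& ins : window)
        std::printf("%s mask=%u%s\n", ins.op, (unsigned)ins.branchMask,
                    ins.killed ? " (killed)" : "");
    return 0;
}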

I'd be happy to go into more detail on various out-of-order implementations.
 

itachi

Senior member
Aug 17, 2004
390
0
0
thanks guys.. read your explanations but don't really have time to respond right now, and i probably won't have internet for another couple weeks. but when i get back, i'll be sure to start a thread on ooo implementations.. seems pretty interesting.
 

Rock Hydra

Diamond Member
Dec 13, 2004
6,466
1
0
Hmm...I guess maybe an analogy would be good.
Here's what I can come up with.
There is a Chef (CPU) and a General Contractor (GPU). The Chef works at a restaurant making elegant dishes, which is what he does best, but does a bit of woodworking at home. Even though it's possible for him to build a house (render a 3D scene) by receiving special instruction (software), he can't build one nearly as efficiently as the General Contractor can using his own skill (logic architecture).

Say the Contractor can look at a blueprint of the house only every hour and understand what everything represents. The Chef has to look every 15 minutes, doesn't understand everything, and wastes time and resources trying to read the blueprint. Mind you, this Chef has to pack his equipment up (send it across the bus to main memory) and go to work every day (running the OS and application calculations), whereas the Contractor can leave his stuff right on the site. And when the Chef finishes the house, the owners are not satisfied, because they realize that the Chef didn't know how to put windows in the house (he's unable to perform certain GPU-exclusive calculations, resulting in despicable quality), although it's a house nonetheless. The Contractor is finished with his house, windows and all, with a happy owner, and he is already working on his second.

Hmm....If that makes sense to anybody.
 

dguy6789

Diamond Member
Dec 9, 2002
8,558
3
76
Give me 10 bucks, and I will explain in detail why gpus are FAR more advanced than cpus are. Just so you know, cpus are in the stone age compared to the technology these gpus push.
 

Gannon

Senior member
Jul 29, 2004
527
0
0
Originally posted by: dguy6789
Give me 10 bucks, and I will explain in detail why gpus are FAR more advanced than cpus are. Just so you know, cpus are in the stone age compared to the technology these gpus push.

Of course, but GPUs don't have to deal with backwards compatibility for enormous libraries of software from a variety of compilers and languages built up over decades. Huge difference. Intel wanted to get away from that with the Itanic but ultimately failed. Also, at some point the industry might be forced to change to more advanced architectures; I've read arguments that the CPU industry could be pushed onto a totally new architecture.

The problem is that the design of compilers and computer hardware was not, and still isn't, very "advanced", in the sense that software compatibility and the dreaded "recompiling" and "porting" of software are a mess. If languages were designed correctly, anything you write as a program should be able to run on any hardware architecture, but the abstractions weren't in place. I believe Java tries to do this, although I am not a programmer: you have the Java runtime environment, which is a layer between the program and the architecture the program is running on. Ultimately this is where programming languages and compilers will end up going for general-purpose computing.

If you truly think about it, the architecture a program runs on should be invisible to the program itself, with the program converted on the fly to the instructions of whatever architecture it's running on. No one could have predicted that such a crappy architecture would become the basis of modern computing. You can't sell computers without backwards compatibility; just ask Microsoft when they wanted to switch from DOS to Windows 95, or from 98 to XP. Backwards compatibility sucked up huge amounts of their resources.
 

PrinceXizor

Platinum Member
Oct 4, 2002
2,188
99
91
The correct answer is that NEITHER is technologically inferior to the other. They are BOTH highly evolved pieces of hardware that are not as alike as everyone likes to think they are. They are both designed and specialized to perform in specific areas.

Both are quite advanced.

Cost of such products is a poor indicator of "advancement". Today's calculators are much more advanced than the primitive calculators of yesteryear, yet very old calculators cost as much as a computer does today. Why? Supply and demand, manufacturability, how much the market will bear (i.e. how much someone is willing to pay), etc.

I also saw someone mention that branch prediction penalties are non-existent. Since when? What happens when you have a misprediction is that the entire pipeline must be flushed before the correct branch can be taken. This can be "hidden" by cache, but it is painfully evident on cache-thin processors such as the Celeron line.

Basically there are two faulty assumptions in the initial post (as has been expertly pointed out by many of course).

To summarize:

Higher clock speed (2 GHz vs. 500 MHz) is better. Reality: FALSE
Higher cost should be analogous to better technology. Reality: FALSE

This was an interesting thread, though. Thanks to all who posted in it and will post in it.

P-X
 