"primitive" GPU's

FreemanHL2

Member
Dec 20, 2004
33
0
0
Hey ppl, I'm new to AnandTech, so... hi!

I was thinking yesterday about the large difference in clock speeds between CPUs and GPUs, apparently due to the amount of time it would take a graphics manufacturer to create a 2.6GHz GPU. However, given the time, there is nothing stopping nVidia from having a 2.6GHz card on the market in two years.

A 2.6GHz chip is already available in the form of a CPU, and could easily be modified by nVidia/ATI into a GPU (if they asked Intel really nicely and offered them a couple million).

Besides, the technology is already available; nVidia would only have to draw up blueprints for a 2.6GHz core and update their manufacturing process to create the chip.

Look, I'm actually very annoyed. A graphics card is simply a smaller mobo and CPU dedicated to processing graphics. Why doesn't nVidia just make a 2.6GHz, 1GB card??? I pay $900 for a card; I can buy 1GB of DDR2 RAM, an ASUS mobo, and an Intel 3.2GHz for that money! There is nothing stopping us from developing a card like that!!!

And if you think there is no reason for that kind of power, you'd be right. But while I'm still buying a card with a 500MHz GPU, I don't wanna be paying $900!!!
 

CycloWizard

Lifer
Sep 10, 2001
12,348
1
81
I'm not a computer engineer, but I lived with five of them, so I can only regurgitate what I recall them telling me.

GPUs are on a short product cycle. As such, they are designed using hardware description languages rather than by hand (CPUs are laid out by hand). Basically, this means CPUs are much more optimized for what they do: crunch lots of data very quickly. GPUs have many more bells and whistles to enable them to process a specific type of data, rather than any type of data they are fed, and they are geared to be very efficient at processing that particular type.

They also said that when CPUs are fast enough, GPUs will no longer be necessary. CPUs can run graphics now (remember software mode?), but they're not optimized for the job, so they're rather inefficient. GPUs are optimized, so they're better at it per clock cycle by a long way. Hopefully that makes a little sense.
 

Heisenberg

Lifer
Dec 21, 2001
10,621
1
0
Contrary to popular belief, clock speed isn't the be-all and end-all of CPU performance. GPUs are built to do the mathematical tasks specific to 3D rendering (lots of linear algebra, IIRC). CPUs, on the other hand, must be able to do a variety of tasks depending on what the software requires. As a result, the architectures of the two are very different, and simply comparing clock speed and memory isn't very useful.
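To make the linear algebra point concrete, here's a rough sketch in plain Python (purely illustrative, not real GPU code) of the kind of operation a GPU repeats millions of times per frame: transforming a vertex by a 4x4 matrix.

def transform_vertex(m, v):
    # Multiply a 4x4 matrix by a 4-component vertex (x, y, z, w).
    return [sum(m[row][k] * v[k] for k in range(4)) for row in range(4)]

# A translation matrix that shifts a vertex by 5 units along x.
translate_x = [[1, 0, 0, 5],
               [0, 1, 0, 0],
               [0, 0, 1, 0],
               [0, 0, 0, 1]]
print(transform_vertex(translate_x, [1.0, 2.0, 3.0, 1.0]))  # [6.0, 2.0, 3.0, 1.0]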
 

silverpig

Lifer
Jul 29, 2001
27,709
11
81
Yeah, Freightliner could build a 600 hp @ 9000 rpm truck that could hit 200 mph, but it'd suck at hauling steel up a mountain.
 

Sunner

Elite Member
Oct 9, 1999
11,641
0
76
Also, GPUs are massively parallel compared to CPUs.
The current top-of-the-line GPUs have 16 pipelines, and given sufficient bandwidth, the scaling is pretty damn near linear.

Also, considering an NV40 or R420 probably draws around 75W at 350-500 MHz, what would it draw at 2+ GHz?
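For a rough feel for those numbers, here's a back-of-the-envelope sketch (Python, with made-up voltage figures, not measurements): dynamic power scales roughly linearly with clock and with the square of voltage, and higher clocks usually demand more voltage on top.

def scaled_power(base_watts, base_clock_hz, new_clock_hz, voltage_scale=1.0):
    # Dynamic power ~ C * V^2 * f: linear in clock, quadratic in voltage.
    return base_watts * (new_clock_hz / base_clock_hz) * voltage_scale ** 2

print(scaled_power(75, 400e6, 2.6e9))       # ~487 W at the same voltage
print(scaled_power(75, 400e6, 2.6e9, 1.2))  # ~702 W with a 20% voltage bump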
 

FreemanHL2

Member
Dec 20, 2004
33
0
0
Ahhh yes, but you're missing the whole point. A 2.6GHz chip is mostly inefficient for the graphics market at our current stage (as demonstrated by you all). BUT cards are not worth the money we pay for them; they are indeed primitive (even if for a reason) in comparison to our CPUs, so we should pay less. I don't care what you say, they are overpriced no matter what.

I also realise how much it costs for nVidia to invest in their next-gen card, but the consumer should not have to pay for reckless spending on behalf of graphics manufacturers!
 

Sunner

Elite Member
Oct 9, 1999
11,641
0
76
Originally posted by: FreemanHL2
Ahhh yes, but you're missing the whole point. A 2.6GHz chip is mostly inefficient for the graphics market at our current stage (as demonstrated by you all). BUT cards are not worth the money we pay for them; they are indeed primitive (even if for a reason) in comparison to our CPUs, so we should pay less. I don't care what you say, they are overpriced no matter what.

I also realise how much it costs for nVidia to invest in their next-gen card, but the consumer should not have to pay for reckless spending on behalf of graphics manufacturers!

Not really.
Yes, a GeForce 6 Ultra Extreme is overpriced, but that's because it's a low-volume halo product, the same way a 3.6 GHz P4 is extremely overpriced compared to a 3.2 GHz P4.

As for primitive, how would you rate its primitiveness?
A Prescott P4 has 125 million transistors; I don't remember offhand, but IIRC an NV40 is ~180 million transistors.
Also, the RAM on that GF6 isn't exactly your average PC3200, and the same goes for the memory bus/controller, PCB, etc.
 

Gamingphreek

Lifer
Mar 31, 2003
11,679
0
81
Originally posted by: FreemanHL2
Ahhh yes, but you're missing the whole point. A 2.6GHz chip is mostly inefficient for the graphics market at our current stage (as demonstrated by you all). BUT cards are not worth the money we pay for them; they are indeed primitive (even if for a reason) in comparison to our CPUs, so we should pay less. I don't care what you say, they are overpriced no matter what.

I also realise how much it costs for nVidia to invest in their next-gen card, but the consumer should not have to pay for reckless spending on behalf of graphics manufacturers!

To understand why you are wrong, you have to listen to what everyone is saying instead of constantly contradicting us.

nVidia and ATI are having problems with their GPUs (130nm) at ~500MHz, whereas Intel and AMD are having problems around 4GHz and 3GHz respectively. Doesn't that signify to you that something else is limiting them, and that there is a reason why we are not cruising around at 2GHz?

-Kevin
 

itachi

Senior member
Aug 17, 2004
390
0
0
Originally posted by: Sunner
Not really.
Yes, a GeForce 6 Ultra Extreme is overpriced, but that's because it's a low-volume halo product, the same way a 3.6 GHz P4 is extremely overpriced compared to a 3.2 GHz P4.

As for primitive, how would you rate its primitiveness?
A Prescott P4 has 125 million transistors; I don't remember offhand, but IIRC an NV40 is ~180 million transistors.
Also, the RAM on that GF6 isn't exactly your average PC3200, and the same goes for the memory bus/controller, PCB, etc.
how would i rate their primitiveness? by the fact that gpus don't need isa translation (legacy support), cache flushing, a non-blocking architecture (prevention of hazards during memory and io calls), a branch prediction unit, or speculative prefetch/execution.. and they don't perform out-of-order execution or register renaming. just to name a few things.

as for the whole "p4 has 125 mil and nv40 has ~180 mil".. graphics cores consist of a lot of duplicated logic. unlike cpus, where the pipeline count refers to the number of stages in a single pipeline, gpus are composed of parallel pipelines. that means that whatever logic goes into a pipeline (instruction prefetch, decoding, etc.) is implemented 16 times in the design. then you have parallel matrix operations and blah blah blah.
 

Steg55

Member
May 13, 2004
96
0
0
GPUs are totally different from CPUs - and technologically many miles ahead in some respects. The Athlon 64's idea of putting the memory controller on-die came from... the earliest 3D accelerators. Every GPU ever made has had a high-speed, low-latency onboard memory controller.

I read somewhere that the average P4 3.2GHz CPU puts out ~3 gigaflops, if I remember rightly, while the FX5900 GPU put out over 10 gigaflops. So despite running ~2.7GHz slower, it is still over three times as powerful. As mentioned earlier, the parallelism of GPUs helps: CPUs have a single pipeline while the latest ATI/nVidia cards have 16. So, for example, 3.2 x 1 = 3.2, while 0.450 x 16 = 7.2. Not very advanced maths, but it demonstrates the point.
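That back-of-the-envelope maths as a quick sketch (Python, same illustrative figures as above, not measured values):

# Effective throughput ~ clock (GHz) x number of parallel pipelines,
# i.e. billions of results per second, all else being equal.
cpu_throughput = 3.2 * 1     # 3.2 GHz, one pipeline  -> 3.2
gpu_throughput = 0.450 * 16  # 450 MHz, 16 pipelines  -> 7.2
print(cpu_throughput, gpu_throughput)  # the "slower" chip does more work per second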

CPUs are trying to catch up with GPUs because GPUs are the most powerful processing units you will find in a home PC. The trouble with CPUs is that they need to generalize to handle any task/calculation thrown at them, whereas GPUs, at any single stage of their pipeline, can repeat a certain calculation over and over to perform their function; no diversity is needed.

I hope that makes some sense.

Steg
 

Sunner

Elite Member
Oct 9, 1999
11,641
0
76
Originally posted by: itachi
Originally posted by: Sunner
Not really.
Yes, a GeForce 6 Ultra Extreme is overpriced, but that's because it's a low-volume halo product, the same way a 3.6 GHz P4 is extremely overpriced compared to a 3.2 GHz P4.

As for primitive, how would you rate its primitiveness?
A Prescott P4 has 125 million transistors; I don't remember offhand, but IIRC an NV40 is ~180 million transistors.
Also, the RAM on that GF6 isn't exactly your average PC3200, and the same goes for the memory bus/controller, PCB, etc.
how would i rate their primitiveness? by the fact that gpus don't need isa translation (legacy support), cache flushing, a non-blocking architecture (prevention of hazards during memory and io calls), a branch prediction unit, or speculative prefetch/execution.. and they don't perform out-of-order execution or register renaming. just to name a few things.

as for the whole "p4 has 125 mil and nv40 has ~180 mil".. graphics cores consist of a lot of duplicated logic. unlike cpus, where the pipeline count refers to the number of stages in a single pipeline, gpus are composed of parallel pipelines. that means that whatever logic goes into a pipeline (instruction prefetch, decoding, etc.) is implemented 16 times in the design. then you have parallel matrix operations and blah blah blah.

Yes, but that's because that's the best way to do it, especially with 6-12 month product cycles.
Of course there's a lot more engineering going into a P4 than an NV40, but the basic P4 architecture will have to live for years, while the NV40 will be refreshed once, maybe twice, and then just 18 months later the NV50 (or whatever) will replace it.

And besides, I think you're putting "primitive" where "simple" should be. The 16 pipelines in an NV40 may each be a lot simpler than a P4 pipeline, but that doesn't make them primitive; it just means that for nVidia it's a lot more efficient to put 16 pipelines in parallel than to try to come up with some single- or dual-pipe 2 GHz design.
 

itachi

Senior member
Aug 17, 2004
390
0
0
Originally posted by: Sunner
Yes, but that's because that's the best way to do it, especially with 6-12 month product cycles.
Of course there's a lot more engineering going into a P4 than an NV40, but the basic P4 architecture will have to live for years, while the NV40 will be refreshed once, maybe twice, and then just 18 months later the NV50 (or whatever) will replace it.
i'm not sure what you're arguing.. what you said here is the reason why gpus are designed using hdl rather than by hand.
And besides, I think you're putting "primitive" where "simple" should be. The 16 pipelines in an NV40 may each be a lot simpler than a P4 pipeline, but that doesn't make them primitive; it just means that for nVidia it's a lot more efficient to put 16 pipelines in parallel than to try to come up with some single- or dual-pipe 2 GHz design.
define primitive to mean simple, then yea.. oh wait, it already is. graphics companies use multiple pipelines because their application benefits from such a model. cpus, on the other hand, don't benefit from such parallelization under general usage. but if you want to compare gpus to cpus in this respect, parallel instruction execution has been implemented for years by supercomputers.
 

complacent

Banned
Dec 22, 2004
191
0
0
By many standards the x86 processor would be considered primitive, because the only real performance improvement Intel has introduced lately is higher clock speed. Obviously you have no idea what you are talking about, because you seem to subscribe to the MHz myth: the belief that a faster clock = higher performance. You need to realize that performance is a combination of clock speed and IPC, or instructions per clock. This is how AMD can outperform Intel even with processors that are clocked almost 1 GHz slower. Look at the advanced Sun UltraSPARC processor, which operates at only 1.2 GHz but will blow away a 3.0 GHz Xeon in a server. To state that a graphics card is a smaller mobo and CPU for graphics processing is like saying a {sound card, network card, modem, etc.} is a smaller mobo and CPU for {audio, TCP/IP} processing.
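As a minimal sketch of that clock-times-IPC relationship (Python, with made-up IPC figures chosen only to illustrate the idea, not measured values):

def instructions_per_second(clock_ghz, ipc):
    # Performance = clock speed x instructions retired per clock (IPC).
    return clock_ghz * ipc  # billions of instructions per second

print(instructions_per_second(3.4, 1.0))  # 3.4  -- high clock, modest IPC
print(instructions_per_second(2.4, 1.6))  # 3.84 -- lower clock, higher IPC, faster overall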

GPUs are highly advanced and perform very different tasks than your x86 processor. You could not simply rig your graphics card to run Windows, nor could a dedicated 3 GHz CPU even touch a 500 MHz nVidia 6800. The GPU does rendering that your processor wouldn't even be able to do (such as HDR rendering with the nVidia 6xxx line). As also mentioned, GPUs have short, parallel pipelines, so more instructions per clock can happen.

So, in closing, completely detach yourself from the MHz myth. An Intel Itanium 2 at only 1.4 GHz costs about $1200, while a 3.4 GHz Pentium 4 costs about $275. The difference is that the Itanium doesn't need a higher clock speed to perform its tasks. Educate yourself and take a few computer science classes, and I think you will understand a little better.
 

Sunner

Elite Member
Oct 9, 1999
11,641
0
76
Originally posted by: itachi
Originally posted by: Sunner
Yes, but that's because that's the best way to do it, especially with 6-12 month product cycles.
Of course there's a lot more engineering going into a P4 than an NV40, but the basic P4 architecture will have to live for years, while the NV40 will be refreshed once, maybe twice, and then just 18 months later the NV50 (or whatever) will replace it.
i'm not sure what you're arguing.. what you said here is the reason why gpus are designed using hdl rather than by hand.
And besides, I think you're putting "primitive" where "simple" should be. The 16 pipelines in an NV40 may each be a lot simpler than a P4 pipeline, but that doesn't make them primitive; it just means that for nVidia it's a lot more efficient to put 16 pipelines in parallel than to try to come up with some single- or dual-pipe 2 GHz design.
define primitive to mean simple, then yea.. oh wait, it already is. graphics companies use multiple pipelines because their application benefits from such a model. cpus, on the other hand, don't benefit from such parallelization under general usage. but if you want to compare gpus to cpus in this respect, parallel instruction execution has been implemented for years by supercomputers.

What I'm saying is that "simple" != primitive.
As you (and CycloWizard, for that matter) mentioned, they're made using HDL, they have a lot of parallel duplicated circuitry, and so forth, but that doesn't mean they're primitive.
 

complacent

Banned
Dec 22, 2004
191
0
0
define primitive to mean simple, then yea.. oh wait, it already is. graphics companies use multiple pipelines because their application benefits from such a model. cpus, on the other hand, don't benefit from such parallelization under general usage. but if you want to compare gpus to cpus in this respect, parallel instruction execution has been implemented for years by supercomputers.
You have parallel pipelines and parallel processors mixed up. There is no parallel execution in any of the graphics cards we have today (actually, someone just came out with a 6600GT with two GPUs on one card; that may be parallel). That is serial, since there is only one GPU on board, and it only has one core. Parallel pipelines allow SIMD execution (single instruction, multiple data). To have parallel execution you need at least two processors (or at least two cores), and the pipelines mean squat in this sense. Supercomputers, in contrast, have hundreds, or even thousands, of processors executing in parallel, and they generally have one long pipeline each.
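SIMD in miniature, as a conceptual sketch (plain Python standing in for hardware lanes; real GPUs are not programmed this way): one instruction, applied to a whole batch of data at once.

def simd_add(lanes_a, lanes_b):
    # A single "instruction" (add) executed across multiple data elements.
    return [a + b for a, b in zip(lanes_a, lanes_b)]

red_values = [10, 20, 30, 40]   # e.g. the red channel of four pixels
brighten   = [5, 5, 5, 5]
print(simd_add(red_values, brighten))  # [15, 25, 35, 45] -- one op, four results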
 

MetalStorm

Member
Dec 22, 2004
148
0
0
This is my first post here, as this thread caught my interest, so here's my 2 cents!

It's important to know how CPUs and GPUs work. By the looks of things, FreemanHL2 already realises that the difference in architecture between the P4 and the Athlons means they can do the same work despite different clock frequencies; the same is also applicable to GPUs, but for different reasons.
Basically, each pipeline in the GPU renders one pixel, so the latest and greatest cards out at the moment, with 16 pipes, can render a maximum of 16 pixels per clock cycle. The reason GPUs are able to do so many things at once is that rendering is inherently parallel: the output of one pixel isn't needed to find the next one. There are still a lot of calculations required to find the exact colour of each pixel, and a lot of work is done in each stage of the pipeline. Each pipeline is actually similar to those in CPUs; the only reasons GPUs are so much "slower" in terms of clock speed are that their pipelines are short and that a high-frequency core uses more power. As you can see with the Prescott, ramping up clock speeds like that might not be the best way to get increased performance.
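Here's a tiny sketch of that independence (Python, with a stand-in shading function; purely illustrative): since no pixel ever reads another pixel's output, every iteration below could run simultaneously on a separate pipeline.

def shade(x, y):
    # Stand-in for a real per-pixel colour calculation.
    return (x * 31 + y * 17) % 256

WIDTH, HEIGHT = 4, 2
frame = [[shade(x, y) for x in range(WIDTH)] for y in range(HEIGHT)]
print(frame)  # each entry depends only on (x, y), never on another pixel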

If you think GPUs are primitive, or even simple, you are mistaken. They might have a lot of repeated logic, but so do CPUs, so that's somewhat of a moot point. They are usually less hand-crafted due to the rapid refresh of their product cycle, but that's only a result of the speed at which games have been progressing. CPUs tend to have more time spent tweaking their architecture to get the last bit of performance out of them, but that's not to say that GPUs aren't good at what they do; far from it, they are certainly very good at doing what they are meant to do, as are CPUs.

Mentioning paying excessive amounts for a top-end product is also pretty pointless; we already knew there is a huge mark-up on the best of the best. That's just the way it is.
 

FreemanHL2

Member
Dec 20, 2004
33
0
0
OK guys, this topic has been really interesting; I've learnt a fair bit from it. I don't think I have anything else to say, except that I'd like my cards to be running much faster! I also knew that CPUs were bad at rendering graphics, as demonstrated by the CPU tests in 3DMark... tragic, to say the least. I'd also like to say that the humble beginnings of the graphics card were indeed very similar to a mobo-and-CPU design. Lastly, BOTH the 3.6GHz CPU and the 6800 Ultra GPU are VERY overpriced PC components...
 

FreemanHL2

Member
Dec 20, 2004
33
0
0
Hey, I have a question! Why does Maya use your CPU for rendering when the GPU is more refined at doing these types of calculations? And don't say that Maya needs more raw power, because you have been arguing against the MHz stereotype throughout this thread.
 

complacent

Banned
Dec 22, 2004
191
0
0
It is two different types of rendering. The GPU does (normally) OpenGL and Direct3D rendering. I know virtually nothing about Maya, except that it does rendering of scenes for movies, etc. That type of rendering does not need to be calculated on the fly, and thus is not implemented at a hardware level. The GPU is more refined at doing a specific set of calculations, and very quickly. If you are rendering for a movie, it can take your computer days to render a scene and no one is the wiser, and the rendering is of a much higher quality than you could ever expect out of a GPU.

Basically, Maya's rendering is done in software. It takes a long time to render a scene and does not use the same rendering that is used in games.



 

MetalStorm

Member
Dec 22, 2004
148
0
0
The rendering done in Maya and other CAD programs is more complex; for example, you can have hundreds of light sources and millions upon millions of polygons, while the number of lights and polygons in games is usually fairly limited. Basically, the reason CPUs are used to render is that they are more flexible: GPUs are only able to render a certain number of lights, and the way they texture is designed to look good in real time, but there's definitely room for improvement.

However, graphics cards are used to accelerate the rendering of the scene before the final rendering takes place, so while you're building the scene the graphics card does come into play; it just doesn't have any part in the final render except displaying it on your screen!
 

itachi

Senior member
Aug 17, 2004
390
0
0
Originally posted by: complacent
You have parallel pipelines and parallel processors mixed up. There is no parallel execution in any of the graphics cards we have today (actually, someone just came out with a 6600GT with two GPUs on one card; that may be parallel). That is serial, since there is only one GPU on board, and it only has one core. Parallel pipelines allow SIMD execution (single instruction, multiple data). To have parallel execution you need at least two processors (or at least two cores), and the pipelines mean squat in this sense. Supercomputers, in contrast, have hundreds, or even thousands, of processors executing in parallel, and they generally have one long pipeline each.
ok, i guess i should've been a bit more clear on that.. i was referring to the rasterization stage, which is parallel. textures can easily be represented using matrices, and can be processed a lot quicker that way. if it were simple integer arithmetic that one was doing, the parallel pipelines would be useless.

seriously tho.. where is this "mhz myth" coming from? my arguments are based on the type of logic that goes into gpus. and yea, i'll agree that x86 is primitive. but that instruction set also brought about a lot of changes. instruction dependency with the complex addressing modes and limited registers got intel to re-engineer the pipeline.. introducing micro-ops, out-of-order execution, and register renaming. essentially turning modern cpus into risc-like processors.

sunner.. then explain to me why they're not primitive. i won't deny that it does take a lot of time to design them, but they don't have to deal with scheduling, branching, misprediction penalties, etc.. as far as i know, the most complex part of the gpu is converting a 3d image into a 2d image.
 

complacent

Banned
Dec 22, 2004
191
0
0
Originally posted by: itachi

seriously tho.. where is this "mhz myth" coming from?

The MHz myth didn't come from you. FreemanHL2's main argument was that if we can buy a 3.0 GHz processor for $200, why should a 500 MHz processor cost $600? The myth is that a higher clock speed = faster all around. That is a myth. Give me a 1.2 GHz UltraSPARC IV over a 3.0 GHz Xeon any day.

sunner.. then explain to me why they're not primitive. i won't deny that it does take a lot of time to design them, but they don't have to deal with scheduling, branching, misprediction penalties, etc.. as far as i know, the most complex part of the gpu is converting a 3d image into a 2d image.

I will take a shot at this. For one, scheduling is done by the operating system, not by the CPU. Second, branching, or branch prediction, is a fancy way of saying "I will assume the answer is X and work as though it is X, instead of wasting clock cycles waiting for the answer. If the answer is not X, then no harm, no foul." The misprediction penalty is non-existent: if the wrong branch is taken, it is as if no branch were taken at all. All the speculative work is thrown out and the correct path is loaded.

An example: if I told you to add two numbers together, and the numbers to add were either W+X or Y+Z, but I would have to get back to you on which pair, you could start working on W+X. If I came back and said it is Y+Z, you could immediately quit work on W+X and start on Y+Z. No time is wasted.
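That example, written out as a sketch (Python, purely conceptual; real branch prediction happens in hardware, not in code like this):

def speculative_add(predicted_pair, actual_pair):
    speculative_result = sum(predicted_pair)  # start work before the answer arrives
    if actual_pair == predicted_pair:
        return speculative_result             # guessed right: the early work paid off
    return sum(actual_pair)                   # guessed wrong: discard it, redo correctly

print(speculative_add((3, 4), (3, 4)))  # 7  -- prediction correct
print(speculative_add((3, 4), (5, 6)))  # 11 -- speculative work thrown away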

GPUs are not primitive. The 6800 has 222 million transistors and can perform tasks that a CPU cannot. I would agree that CPUs can perform more kinds of tasks and take more planning, but "primitive" is a relative term.

 

Sunner

Elite Member
Oct 9, 1999
11,641
0
76
Originally posted by: itachi
Originally posted by: complacent
You have parallel pipelines and parallel processors mixed up. There is no parallel execution in any of the graphics cards we have today (actually, someone just came out with a 6600GT with two GPUs on one card; that may be parallel). That is serial, since there is only one GPU on board, and it only has one core. Parallel pipelines allow SIMD execution (single instruction, multiple data). To have parallel execution you need at least two processors (or at least two cores), and the pipelines mean squat in this sense. Supercomputers, in contrast, have hundreds, or even thousands, of processors executing in parallel, and they generally have one long pipeline each.
ok, i guess i should've been a bit more clear on that.. i was referring to the rasterization stage, which is parallel. textures can easily be represented using matrices, and can be processed a lot quicker that way. if it were simple integer arithmetic that one was doing, the parallel pipelines would be useless.

seriously tho.. where is this "mhz myth" coming from? my arguments are based on the type of logic that goes into gpus. and yea, i'll agree that x86 is primitive. but that instruction set also brought about a lot of changes. instruction dependency with the complex addressing modes and limited registers got intel to re-engineer the pipeline.. introducing micro-ops, out-of-order execution, and register renaming. essentially turning modern cpus into risc-like processors.

sunner.. then explain to me why they're not primitive. i won't deny that it does take a lot of time to design them, but they don't have to deal with scheduling, branching, misprediction penalties, etc.. as far as i know, the most complex part of the gpu is converting a 3d image into a 2d image.

I think we're just arguing about the definition of the word "primitive" right now.
They don't have to deal with all that because they don't need it; they do their job just fine without it.

Primitive to me would be a 386 by today's standards, while an NV40 is by no means primitive, just simple (relatively speaking, of course).
 

MetalStorm

Member
Dec 22, 2004
148
0
0
Originally posted by: Sunner
Primitive to me would be a 386 by today's standards, while an NV40 is by no means primitive, just simple (relatively speaking, of course).

Indeed. Try telling the dev team of the NV40 that their GPU is primitive; I'm sure they'll like you for that!
 

Megamixman

Member
Oct 30, 2004
150
0
0
GPUs are specifically designed to calculate data in certain ways. Put it this way: the shader units are specific sequences of math units that perform the shading algorithms. This constrains them to certain types of calculations, mostly calculations that need to be done on the fly, such as in games or for previews in CAD or 3D programs. The final scene from a 3D program takes on a complexity that is not designed for on-the-fly calculation; it is meant for detail, and video cards simply are not designed for that kind of work. Of course, if you work in the 3D industry you are working with render clusters, so that isn't a big problem.

Branch prediction, micro-ops, etc. were ways for Intel to keep the computer going down the path they wanted. Until now, CPUs have been advancing through pure MHz. It's called business; there is more to a technological advance than just scientific genius, especially in a major company in a capitalist economy.

Shading is also not just integer math. Where did you get that idea? If it were still integer math, Halo wouldn't even look half as good as it did. After all, our eyes see colours, not distinct geometric objects with a shade applied to them. That is processed in the brain.

GPUs are in no way primitive. Converting 3D data into 2D is also not primitive; it is very complex. Computers can't do all the calculations required for that in reality; what we have are estimates, and even those estimates are computationally intensive. The advantage a GPU designer has is that he doesn't need a branch prediction unit, because there are only so many different operations that need to be done. Secondly, he has to design within technological limits; I don't know anyone at Intel who would want to put 16 Prescott cores on one chip. Seriously, this is almost like asking which is faster, a Mac or a PC.


 