Linus Torvalds: Too many cores = too much BS


flexy

Diamond Member
Sep 28, 2001
8,464
155
106
I think he is right. 99% of people won't ever need 4 cores.

On a related side-note, talk about BS - look at how so-called "high-speed" memory is pushed. People pay top dollar for DDR-XXXXX whatever memory for a MARGINAL gain that only shows up in AIDA memory/cache benchmarks, with no real performance benefit whatsoever.

WORSE - so-called review sites I checked in recent days are "recommending" such memory... despite showing in the same review that it has no benefit whatsoever in real-world apps and games.
 

tolis626

Senior member
Aug 25, 2013
399
0
76
flexy said:
On a related side-note, talk about BS - look at how so-called "high-speed" memory is pushed. People pay top dollar for DDR-XXXXX whatever memory for a marginal gain that only shows up in AIDA benchmarks. [...]

Although we are somewhat derailing here... high-speed DDR3 (>1866MHz) has been only marginally more expensive than cheap 1600MHz modules, at least for speeds up to 2400MHz. When I bought my system I could find the Corsair Vengeance Pro I wanted for 155-160€ for the 1600MHz CL9 model, 160-165€ for the 2133MHz CL11, and 165-170€ for the 2400MHz CL11 (the numbers are approximations because I don't remember the exact pricing, but they're close to reality), and other brands' memory was similarly priced. I went for the 2133MHz model instead of the 2400MHz one simply because I would have it in my hands in 1 day instead of over a week. Performance improvements are at best marginal, but they are there and cost pennies. So why not?

Sorry for OT.
 

SunnyD

Belgian Waffler
Jan 2, 2001
32,674
145
106
www.neftastic.com
flexy said:
I think he is right. 99% of people won't ever need 4 cores. [...]

Here's the thing with that kind of mentality - if you stick with that paradigm, nobody will bother investigating breakthroughs that will benefit mainstream computing. This is why Torvalds, in this case, is both correct (at this time) and full of shit (for the future).

It's a road that needs to be investigated by people willing to think outside of the box. He obviously isn't willing to do so. He has a habit of running his mouth off; he's a very intelligent guy, but he can't predict the future.
 

Fox5

Diamond Member
Jan 31, 2005
5,957
7
81
SunnyD said:
Here's the thing with that kind of mentality - if you stick with that paradigm, nobody will bother investigating breakthroughs that will benefit mainstream computing. This is why Torvalds, in this case, is both correct (at this time) and full of shit (for the future). [...]

I mean, stupidly parallel computation is a route that only goes so far.

But we're seeing a lot of hardware advancements that take out the difficulty of parallel computation. Synchronization costs are what kill a lot of parallelization, and our hardware is getting better at communicating.
The shared caches and atomic operations of current Intel CPUs are fantastic, and your synchronization cost is very low. Even a naive locking algorithm can probably scale to ~64 processors without a problem.
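
To see what I mean, here's a minimal C# sketch (illustrative only, all names made up) comparing a naive lock against a hardware atomic increment; on x86 the Interlocked call maps to a single locked add on the shared cache line:
Code:
using System;
using System.Threading;
using System.Threading.Tasks;

class SyncDemo
{
    static long lockedCounter = 0;
    static long atomicCounter = 0;
    static readonly object gate = new object();

    static void Main()
    {
        // Naive locking: correct, but every increment serializes on one mutex.
        Parallel.For(0, 1000000, i =>
        {
            lock (gate) { lockedCounter++; }
        });

        // Atomic increment: a single interlocked instruction, no mutex.
        // This is the cheap synchronization modern shared caches give us.
        Parallel.For(0, 1000000, i =>
        {
            Interlocked.Increment(ref atomicCounter);
        });

        Console.WriteLine(lockedCounter + " " + atomicCounter); // 1000000 1000000
    }
}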
AMD keeps making strides with HSA, reducing the distance between your highly parallel resources and your CPUs. This is important, since it will allow for lambda-style functions where you just hand off a computation to the GPU mid-program.
Intel's TSX instructions will make things easier too.

Now that said, there's still some pretty hard limits on what parallel computation can do for most problems. Most parallel implementations of algorithms are eventually going to fall into a series of reductions that serialize the problem. Each stage, your dataset will collapse by half, so as the number of processors approaches the size of your dataset, your speed improvement collapses pretty quickly. Of course, you can always just scale up the size of the problem.
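
To make that concrete, here's a rough C# sketch of a tree reduction (a parallel sum, with names made up and the length assumed to be a power of two): each stage halves the active dataset, so n elements give only about log2(n) parallel stages, and the last stages leave most cores idle:
Code:
using System;
using System.Threading.Tasks;

class ReductionDemo
{
    static long ParallelSum(long[] data)
    {
        int active = data.Length; // assumed to be a power of two
        while (active > 1)
        {
            int half = active / 2;
            // Each stage is perfectly parallel...
            Parallel.For(0, half, i => data[i] += data[i + half]);
            // ...but the work halves every stage, so once the processor
            // count approaches 'active', the speedup collapses.
            active = half;
        }
        return data[0];
    }

    static void Main()
    {
        long[] data = new long[1 << 20];
        for (int i = 0; i < data.Length; i++) data[i] = 1;
        Console.WriteLine(ParallelSum(data)); // 1048576
    }
}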

Something like the Linux kernel may actually be better off being tuned for a single processor.
 

futurefields

Diamond Member
Jun 2, 2012
6,471
32
91
Once hardware is tapped out they'll be able to focus on software optimizations.

Good for the end user. Bad for the chip makers?

They'll always find a way to artificially limit performance in order to keep the economy running, though.
 

Spungo

Diamond Member
Jul 22, 2012
3,217
2
81
Dedicated GPUs for the normal consumer right now are still a thing...but let's look 10 years into the future.

I think we're going more in the direction of splitting things up. A decade ago, this is what a video card looked like:
http://www.ixbt.com/video/itogi-video/ti4600-albatron.jpg
That's a GeForce4 Ti 4600. That was absolutely top of the line. My friend had one and it was amazing.

A modern day video card in that price range looks like this:
http://cdn.videocardz.com/1/2014/09/NVIDIA-GeForce-GTX-980-angle-1.jpg
It's more than twice as big. It consumes more than twice as much power.

The video card is larger and more important than it was in the past. I don't see any evidence of this trend reversing. If anything, it's speeding up in the direction of more video cards hooked in parallel:
http://www.gigabyte.com/FileUpload/Features/2/Photo/1671.jpg


It's not difficult to make programs more parallel. It just takes a different way of looking at problems, the syntax might be a bit weird at first, and it needs to be planned in advance.
I'm still learning programming, so I apologize if this example is slightly wrong. Suppose you have a list of things that need to be done. Doing them in parallel in C# is actually quite easy. Regular code would be like this:
Code:
// Plain sequential loop: each item is handled one at a time, on one thread.
foreach (string myString in myStringList)
{
    DoThings(myString); // placeholder for the real per-item work
}

Doing it in parallel looks like this:
Code:
// Parallel version (needs 'using System.Threading.Tasks;').
// The runtime partitions myStringList across worker threads.
Parallel.ForEach<string>(myStringList, (myString, myLoopState) =>
{
    DoThings(myString); // same placeholder work, now run concurrently
});


Or suppose it's something that relies on states. Variables and references don't need to be passed back and forth:
Code:
// Fire off one task per object; each task mutates only its own object,
// so no locking is needed.
private void MyFunction(object myObject)
{
    // placeholder: change the state of this object
}

foreach (object myObject in myObjectList)
{
    object captured = myObject; // copy for the lambda capture
    Task.Factory.StartNew(() => MyFunction(captured));
}

It's not super difficult, but it's a bit weird. Instead of having a single thread for AI, it should be possible to have multiple threads for AI. If you really wanted, you could put each computer-controlled thing on a separate thread. A tiger has its own thread. An NPC shooting at the tiger has its own thread.
 
Last edited:

Vesku

Diamond Member
Aug 25, 2005
3,743
28
86

Yes, I interpreted his main point to be that you won't be able to find significant gains past taking advantage of ~4 cores, which is basically the main acknowledged issue with trying to design programs for many threads. Tasks that benefit greatly from many threads don't require an obsessive level of parallelism design work; usually it's just a matter of running many instances of the task. For context, he was responding to someone proposing to increase design complexity for a very modest gain in parallelism. If you have a task that can be split up in an obvious manner, great. Completely redesigning the way things work to extract a very modest gain in performance on many-core systems, not so great.

Not sure if 4 cores is the sweet spot, but I do think Linus has a point that instead of an 8+ core mobile chip (looking at you, 8 x A53 SoCs), it would probably be better to have a few fewer cores and more fast cache.
 
Last edited:

Shehriazad

Senior member
Nov 3, 2014
555
2
46
Spungo said:
I think we're going more in the direction of splitting things up. [...] The video card is larger and more important than it was in the past. I don't see any evidence of this trend reversing. If anything, it's speeding up in the direction of more video cards hooked in parallel. [...]

Sure...super uber enthusiast stuff is going to look huge...but do those cards NEED to be that huge? Nah.

http://media.bestofmicro.com/S/O/459816/gallery/Picture3_w_600.png

That's a GTX 970...

High end cards are only this huge (currently) because Nvidia, AMD and also third-party companies WANT them this huge. (And sales, always sales.)

Enthusiast versions of high end cards will obviously still need their 2-3 fans and extra length... but look at how much integrated graphics have advanced on Nvidia's and AMD's side in the past few years.

Tegra K1 and AMD Kaveri are quite capable.

Also 10 years is a lot of time... if chips really make it below 10nm in that time (and they will), then there is an absurd amount of space for a graphics chip even after you've added your 4-12 CPU cores (depending on the state of CPU parallelism at that time).
HSA is also something that tends to be ignored right now... but yeah. Once DDR is replaced by 3D HBM or something like that, onboard memory will no longer hold back iGPUs, even if they don't come with "soldered on" memory.

SoCs are the future, and mark my words (feel free to quote me in 10 years)... classic GPUs will phase out.
 
Last edited:

Fjodor2001

Diamond Member
Feb 6, 2010
3,929
405
126
Well, Intel is currently providing a ~8% performance increase per 18 months if core count stays the same. So while adding more cores does not scale performance linearly, it sure is a way to leap ahead in performance in many cases. Let's say Amdahl's law on average yields only a 50% performance increase when doubling the number of cores. I'd say that is still justified: it corresponds to 50%/8% x 18 months = ~112 months = ~9.5 years of performance increase compared to not adding cores.
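
For reference, the Amdahl's law formula behind that kind of estimate, as a small C# sketch (the parallel fraction p = 2/3 here is just an assumed value that happens to reproduce the 50%-per-doubling example):
Code:
using System;

class AmdahlDemo
{
    // Amdahl's law: speedup = 1 / ((1 - p) + p / n),
    // where p is the parallel fraction and n the core count.
    static double Speedup(double p, int n)
    {
        return 1.0 / ((1.0 - p) + p / n);
    }

    static void Main()
    {
        Console.WriteLine(Speedup(2.0 / 3.0, 1)); // 1.0
        Console.WriteLine(Speedup(2.0 / 3.0, 2)); // 1.5 -> ~50% gain from doubling cores
        Console.WriteLine(Speedup(2.0 / 3.0, 4)); // 2.0 -> diminishing returns already
    }
}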

Sure, in an ideal world we would be getting 30-50% yearly single-threaded performance increases like we did during the "golden years". That would be better than increasing parallelism. But it seems like that is no longer happening, so we have to turn to the options available.
 
Last edited:

TheELF

Diamond Member
Dec 22, 2012
3,993
744
126
Spungo said:
It's not super difficult, but it's a bit weird. Instead of having a single thread for AI, it should be possible to have multiple threads for AI. If you really wanted, you could put each computer-controlled thing on a separate thread. A tiger has its own thread. An NPC shooting at the tiger has its own thread.

Yes, but think about it: something has to tell the NPC thread that it's time to shoot, and something has to tell the tiger thread that it got hit or killed, and that's the job of the main game loop. The maximum speed of a game will be the speed that this thread runs at, and that's the problem with "too many cores": the cores will be weaker, so the main loop will run slower, though with no additional slowdown due to "not enough cores". On fast CPUs like the i3 the main loop will run faster, but it will have some slowdowns due to not enough "cores" (IPC throughput).

The best compromise at the moment (without factoring in cost) between fast cores and enough cores is the i5 CPUs, and that's why everyone thinks games are optimized for 4 cores, but they are not.
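
Roughly this pattern, as a C# sketch (all names made up): the main loop can fan the per-entity work out to all cores, but it still has to wait for that work every frame before applying the game rules, so the speed of one core caps the whole game:
Code:
using System;
using System.Threading.Tasks;

class GameLoopDemo
{
    static void Main()
    {
        int[] npcs = new int[64]; // placeholder entities

        for (int frame = 0; frame < 3; frame++) // the main game loop
        {
            // Fan out: per-NPC AI can run on any core...
            Parallel.For(0, npcs.Length, i => npcs[i]++);

            // ...but Parallel.For blocks until all of it is done, and the
            // serial part (hit resolution, game rules, events) runs here,
            // on one core, once per frame.
            Console.WriteLine("frame " + frame + " done");
        }
    }
}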
 

Spungo

Diamond Member
Jul 22, 2012
3,217
2
81
I'll state right away that I have no idea what I'm talking about. Anyway...

TheELF said:
Yes, but think about it: something has to tell the NPC thread that it's time to shoot, and something has to tell the tiger thread that it got hit or killed
I'm thinking of something analogous to the way humans would interact in a multiplayer game if cheating were impossible. Humans cheat, but computer programs don't, so let's assume no cheating. The server acts like the main thread of a game. As a player, I'm like a worker thread. Other players are like worker threads too.

How do I know when it's time to shoot another player? Does the main thread (server) tell me to shoot? No, it only tells me what the state of the world is. An enemy of type X is Y distance away, and I have some kind of internal logic telling me how to act. Since there's no cheating, the server doesn't need to know what gun I'm using. I know what gun I'm using, I know how many bullets I have, I know which direction I'm facing, and I know if my shots should hit or miss the target based on the world data I was given. Instead of sending my shots to the main thread, I could send them directly to the object I'm acting on. The object would then register this data with the server to say "yes, I have been shot, this is how much damage it did."

This is very similar to the way lag compensation works in games. Does it ever seem like someone shot through the wall even though that's not possible in the game? The other player registered the shot with the server, the server agreed that it should have been a hit based on that data, so you get hit even though you're behind the wall half a second later.

Anyway, my point is that you shouldn't always need to go through the main thread for everything. The threads can be mostly autonomous. I remember GTA4 being one of the first games that could really use a quad core. That game's graphics are nothing special, but the game does involve crazy amounts of AI. The game's developers managed to effectively use 4 cores for that job. Saints Row 2 was like that as well. The graphics and sound were crap, but it's an AI filled sandbox game just like GTA4.
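
As a rough C# sketch of that idea (all names made up): each actor runs on its own task and posts results straight to the target's inbox, instead of routing everything through one main thread:
Code:
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class Player
{
    public int Health = 100;
    // Each player owns an inbox; other threads post damage straight to it.
    public readonly ConcurrentQueue<int> Hits = new ConcurrentQueue<int>();

    public void Shoot(Player target, int damage)
    {
        target.Hits.Enqueue(damage); // no round-trip through a main thread
    }

    public void ProcessHits()
    {
        int damage;
        while (Hits.TryDequeue(out damage))
            Health -= damage; // the target registers hits on its own schedule
    }
}

class Demo
{
    static void Main()
    {
        var shooter = new Player();
        var target = new Player();

        // Two autonomous shots fired concurrently from worker tasks.
        Task.WaitAll(
            Task.Run(() => shooter.Shoot(target, 25)),
            Task.Run(() => shooter.Shoot(target, 25)));

        target.ProcessHits();
        Console.WriteLine(target.Health); // 50
    }
}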
 

shady28

Platinum Member
Apr 11, 2004
2,520
397
126
Spungo said:
It's not difficult to make programs more parallel. It just takes a different way of looking at problems, the syntax might be a bit weird at first, and it needs to be planned in advance. [...] If you really wanted, you could put each computer-controlled thing on a separate thread. A tiger has its own thread. An NPC shooting at the tiger has its own thread.

You're greatly underestimating the complexity of multi-threading. In simple examples it looks easy, but in practice with more complex applications you can get some really bizarre results.

In multi-threading, where the threads generally need to talk to each other in some way, they ultimately wind up trying to update or use the same resources.

Let's say you have 3 threads, two of them updating the same data/resource which the 3rd reads and uses for something else. You'll need some kind of lock to prevent two threads updating at the same time, and to prevent reading while an update is being performed.

In Windows this is usually done via a "critical section" in the code that prevents other threads from using that resource until the current thread is done with it.

So you can imagine, if each of these threads spends 1/10th of its time updating this resource and 90% of its time doing something else, they won't have too many concurrency issues.

But what if we have 9 threads updating this resource and 3 threads reading from it?

Now this resource is in demand 120% of the time.

I think the technical term for this is a "lock convoy".

And then there is the more common deadlock, where thread "A" is waiting on a resource which thread "B" holds, thread "B" is waiting on a resource which thread "C" holds, and thread "C" is waiting on a resource which thread "A" holds - so none of them are doing anything.
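
A toy C# repro of that cycle, with just two locks (names made up): thread 1 takes A then wants B, thread 2 takes B then wants A, and both wait forever. The standard fix is a global lock ordering, i.e. always take A before B:
Code:
using System;
using System.Threading;

class DeadlockDemo
{
    static readonly object lockA = new object();
    static readonly object lockB = new object();

    static void Main()
    {
        var t1 = new Thread(() =>
        {
            lock (lockA)            // t1 holds A...
            {
                Thread.Sleep(100);
                lock (lockB) { }    // ...and waits for B
            }
        });
        var t2 = new Thread(() =>
        {
            lock (lockB)            // t2 holds B...
            {
                Thread.Sleep(100);
                lock (lockA) { }    // ...and waits for A: deadlock
            }
        });
        t1.Start(); t2.Start();
        t1.Join(); t2.Join();       // never returns
    }
}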

There are ways to handle this but then you wind up running into "livelocks".

So in simple examples where you have 3 or 4 or even 8 threads running, this can be tracked down and handled.

Now think about how complex 100 threads would be in the above examples.
 
Last edited:
Dec 30, 2004
12,554
2
76
TheELF said:
Something has to tell the NPC thread that it's time to shoot, and something has to tell the tiger thread that it got hit or killed, and that's the job of the main game loop. The maximum speed of a game will be the speed that this thread runs at. [...]

both of those things you mentioned in the main game loop are parallelizable. Basically, studios are lazy and want to crank out more stuff instead of investing in better software architecture and optimizing their algorithms. Our AI is still just as predictable as it was 10 years ago. That's not a CPU problem.
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
both of those things you mentioned in the main game loop are parallelizeable. Basically, studios are lazy and want to crank out more stuff instead of investing in a better software architecture and optimizing their algorithms. Our AI is still just as predictable as it was 10 years ago. That's not a CPU problem.
AI is going to be slower if parallelized, though, and it is going to be difficult. As soon as two entities have an interaction, they are serialized, period. There's no TLP to be had at that point. It may be worth trying for sim or strategy games, which might reach single-core performance limits with thousands, or even hundreds of thousands, of interacting entities, but it would be a ton of work for no gain otherwise. Just walking over the structures to identify what can't be put in another thread is going to be a huge overhead cost, in both CPU time and development time (and checking as you go, yielding when it happens, isn't going to be much better, and may even end up much slower in the end).

Not only that, but only large-scale sim games and strategy games will ever even run into it. The last console gen's CPUs might have had problems with AI from being turds, but we had better AIs in PC games years before that; and I think that's a very good example of the consoles holding PC gaming back. We went from enemies acting after seeing you from hundreds of yards away, working in groups, and using cover based on the timing of your actions, to being blind past 100 feet, acting like idiots, needing infinite ammo, and not noticing that you just killed 9 of their buddies standing beside them. Still, if Pentium IIIs could handle real-time AIs that were actually frustrating, there's no need to multithread it on anything modern, even Jaguars and Silvermonts. Physics needs it, but compared to AI, its unpredictable serializing events are much rarer, and it needs more CPU time.
 

TheELF

Diamond Member
Dec 22, 2012
3,993
744
126
Spungo said:
Does it ever seem like someone shot through the wall even though that's not possible in the game? The other player registered the shot with the server, the server agreed that it should have been a hit based on that data, so you get hit even though you're behind the wall half a second later.

Anyway, my point is that you shouldn't always need to go through the main thread for everything.

The first thing you say is my point: if you don't go through the main thread for synchronization, then things will happen before or after they are supposed to happen.
If threads worked on their own, they would have no way of knowing what's going on and when to do what they should do.
In general, that's something you don't want.
 

TheELF

Diamond Member
Dec 22, 2012
3,993
744
126
both of those things you mentioned in the main game loop are parallelizable.

And that's why they are (or would be) separate threads, but that does not mean they can run whenever or however fast they like; they have to "keep time" with a main thread for the game to make sense to us humans.
 
Dec 30, 2004
12,554
2
76
Cerb said:
AI is going to be slower if parallelized, though, and it is going to be difficult. As soon as two entities have an interaction, they are serialized, period. [...]

what I'm saying is that individual NPC AI is completely parallelizable, interactions are trivially serialized, and NOT a significant performance bottleneck

physics calculations are generally marketing, rarely useful, and need simplification. studios are lazy
 
Dec 30, 2004
12,554
2
76
TheELF said:
And that's why they are (or would be) separate threads, but that does not mean they can run whenever or however fast they like; they have to "keep time" with a main thread for the game to make sense to us humans.

That keeping-time you're talking about happens in parallel. You just report a state, and the state gets touched every frame.
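
For instance (a rough C# sketch, names made up): workers write into a "next" snapshot while the main loop reads the current one, and the only synchronized moment is the per-frame buffer swap:
Code:
using System;
using System.Threading.Tasks;

class SnapshotDemo
{
    static float[] current = new float[4]; // state the frame reads
    static float[] next = new float[4];    // state the workers write

    static void Main()
    {
        for (int frame = 0; frame < 3; frame++)
        {
            // Workers update entities in parallel, writing only to 'next'.
            Parallel.For(0, next.Length, i => next[i] = current[i] + 1f);

            // The per-frame "keep time" point: a cheap reference swap.
            float[] tmp = current; current = next; next = tmp;

            Console.WriteLine("frame " + frame + ": entity0 = " + current[0]);
        }
    }
}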

How many of you guys are professional software engineers?

Look to UE4 for effective multithreading. UE3 was definitely forward thinking and had no trouble making use of quad cores. As a result it runs wonderfully and was licensed heavily.
 

greatnoob

Senior member
Jan 6, 2014
968
395
136
physics calculations are generally marketing, rarely useful, and need simplification. studios are lazy

I'm not sure what you mean by that, but the physics engine is at least 70% of what makes up any game (including mine). It is definitely not a marketing ploy and it is one of the toughest areas I've faced in game development - maybe because I suck at math in general.
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
what I'm saying is that individual NPC AI is completely parallelizable, interactions are trivially serialized, and NOT a significant performance bottleneck
You're talking about possibly waiting thousands to hundreds of thousands of cycles for another thread to catch up, even without considering any other performance issues, compared to being able to run through hundreds or thousands more iterations in that time if left completely serial. The only trivial way to serialize is to yield and wait, if running them all in parallel, which is going to take more time than just doing it all in one thread, probably up to many thousands of entities at once (more than most games will fit in memory, anyway). Any other way is not at all trivial, and would involve some kind of partitioning beforehand based on the possibility/probability of interaction, then cleanup and merging afterwards.

physics calculations are generally marketing, rarely useful, and need simplification. studios are lazy
Physics calculations are necessary, and cannot be simplified. Games with physics cannot be played without them, or with more simplified physics; they are integrated real-time game behavior rules. We need to do just the opposite: soft-body and fluid simulation need to go from specialized demos to being integral game mechanics. Studios being lazy is partly true (in that publishers want to spend $30M on marketing, but not on the game, and need it done yesterday).
 

Spungo

Diamond Member
Jul 22, 2012
3,217
2
81
You're greatly underestimating the complexity of multi-threading. In simple examples it looks easy, but in practice with more complex applications you can get some really bizarre results.
I can see how my simplistic model of AI threading would lead to..... interesting results. This would be the stuff you find in poorly made video games where the AI will walk into walls or spin in circles or do something else that doesn't make any sense. It would take some effort to work out the bugs.

And then there is the more common deadlock, where thread "A" is waiting on a resource which thread "B" holds, thread "B" is waiting on a resource which thread "C" holds, and thread "C" is waiting on a resource which thread "A" holds - so none of them are doing anything.
Spinning in a circle and walking into walls
 

Yuriman

Diamond Member
Jun 25, 2004
5,530
141
106
"Don't add more cores at the expense of per-core performance, we already have dedicated hardware for doing highly parallel tasks."

Seems pretty reasonable to me.
 

Shivansps

Diamond Member
Sep 11, 2013
3,873
1,527
136
This is the way that games are optimised for weaker cores.



The beauty of it is that you'll probably never be able to figure out what happened.
 
Last edited:

mindbomb

Senior member
May 30, 2013
363
0
0
I would say he is more referring to GPU compute: the 2048 cores on the GTX 980, marketing claims about CUDA and stuff like that (and AMD with GCN), saying they won't be used outside of graphics and servers. For x86, I would say hex-cores are just starting to go mainstream, and this is almost a decade after the Q6600 came out and got people onto quad cores, and that's just a 50% increase.
 
Last edited:

jhu

Lifer
Oct 10, 1999
11,918
9
81
"Don't add more cores at the expense of per-core performance, we already have dedicated hardware for doing highly parallel tasks."

Seems pretty reasonable to me.

OMG! Linus is an Intel shill!!!!
 