IVY-E uses solder TIM as expected; see the delidded (and killed) 4960X with solder TIM


SOFTengCOMPelec

Platinum Member
May 9, 2013
2,417
75
91
Even in my quote that you selected I stated that it hurts the foundries, not the SoC designers. You're quite correct that the development costs for the SoC designers aren't all that bad, even for Qualcomm and Apple with their custom cores. (It's actually similar for Intel as well.) But it costs the foundries just as much to develop new processes and build new fabs as it does Intel. If Intel starts taking substantial marketshare in the mobile realm it's going to hit the foundries that make those chips hard... especially because if they start slipping further behind than they already are it'll only increase Intel's marketshare even further.

I agree with you, that is a way Intel could fight back against the competition, in a big and very significant way.

As you pointed out, the fabs situation makes this a much more complicated situation to analyse.
I think the fab situation gets particularly complicated, because (my understanding is that) some products are best made on the latest and greatest (smallest, e.g. 14 nm) process, such as the fastest/most powerful CPUs. But that process is also, in its early years of availability, the most expensive.
But other things DON'T really have to be made in the most expensive, smallest, latest fabs, which then allows those chips to be made in huge quantities and sold for relatively small amounts of money.

(Going beyond my understanding of the economics of fab plants) I have read that, confusingly to me at least, the latest and smallest-feature-size fab plants can also produce the cheapest chips, because the tiny IC size allows many more of the same ICs to be produced per (same-sized) wafer.

Where I'm confused is that some articles claim the smallest-feature-size chip plants produce the cheapest chips. And yet the older (larger-feature-size) ones ALSO make the cheapest chips, because old technology makes the plant (fab) much cheaper to buy.

In other words, if old AND new chip plants BOTH produce the cheapest chips, then it cancels out, and they BOTH produce chips at the same price.

Maybe the complication is that new chip plants need to sell in huge quantities in order to make the cheapest chips and recoup the huge investment, whereas OLD chip plants DON'T necessarily need to sell in huge quantities to make a profit, as they cost much less in the first place.
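A rough sketch of the "more dies per wafer" arithmetic (all die sizes and wafer prices below are invented, just to show why both claims can look true):

```python
import math

# Illustrative, made-up numbers: compare per-die wafer cost on an old vs new node.
# A full node shrink roughly halves die area for the same design.
WAFER_DIAMETER_MM = 300

def dies_per_wafer(die_area_mm2: float) -> int:
    """Classic gross-die estimate with an edge-loss correction term."""
    d = WAFER_DIAMETER_MM
    wafer_area = math.pi * (d / 2) ** 2
    return int(wafer_area / die_area_mm2 - (math.pi * d) / math.sqrt(2 * die_area_mm2))

old_node = {"die_mm2": 160.0, "wafer_cost": 3000.0}   # mature node: cheap wafers
new_node = {"die_mm2": 80.0,  "wafer_cost": 6000.0}   # leading edge: pricey wafers

for name, n in [("old", old_node), ("new", new_node)]:
    dpw = dies_per_wafer(n["die_mm2"])
    print(f"{name}: {dpw} dies/wafer, ${n['wafer_cost'] / dpw:.2f} per die")
```

With these made-up numbers the per-die cost on the cheap old node and the expensive new node come out almost identical, which is exactly why articles can argue either side.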

DISCLAIMER: I think it's really hard to try to predict the future. If it was easy, we would all be billionaires, having each made our first $10,000,000,000 on the (predictable) stock market.


Son, this ain't Slashdot. No need for a car analogy

I will stop it at once, as I don't want to 'brake' forum rules, exceed the analogy speed limits, or get honked at and get my forum driving licence confiscated.

Anyway, sorry, I know that analogies do not always go down well.
 
Last edited:

Idontcare

Elite Member
Oct 10, 1999
21,118
59
91
I think the fab situation gets particularly complicated, because (my understanding is that) some products are best made on the latest and greatest (smallest, e.g. 14 nm) process, such as the fastest/most powerful CPUs. But that process is also, in its early years of availability, the most expensive.
But other things DON'T really have to be made in the most expensive, smallest, latest fabs, which then allows those chips to be made in huge quantities and sold for relatively small amounts of money.

(Going beyond my understanding of the economics of fab plants) I have read that, confusingly to me at least, the latest and smallest-feature-size fab plants can also produce the cheapest chips, because the tiny IC size allows many more of the same ICs to be produced per (same-sized) wafer.

Where I'm confused is that some articles claim the smallest-feature-size chip plants produce the cheapest chips. And yet the older (larger-feature-size) ones ALSO make the cheapest chips, because old technology makes the plant (fab) much cheaper to buy.

In other words, if old AND new chip plants BOTH produce the cheapest chips, then it cancels out, and they BOTH produce chips at the same price.

Maybe the complication is that new chip plants need to sell in huge quantities in order to make the cheapest chips and recoup the huge investment, whereas OLD chip plants DON'T necessarily need to sell in huge quantities to make a profit, as they cost much less in the first place.

There is a very good reason why you've read about both sides of that situation, and have simultaneously grappled with which side is correct...the reason being that of course one side or the other can be true depending on certain conditions.

As is true with everything in life, you have fixed costs and per-part costs. And when you amortize the fixed costs over the course of the volume produced (which adds to the per-part costs) you have a net cost-per-part that produces a curve based on volume of parts.

For an IC the fixed costs include amortization of the node's development expense as well as the IC's development expense. The variable cost is the production expense associated with producing the wafer itself (which will not include the amortized expense of developing the node if you are an IDM, but will include it if you are fabless and buying wafers from a foundry).

The IC development cost doesn't go away with time, but the node development cost will. And the price per wafer for production is always more expensive on newer nodes than older nodes. But contrary to prices for newer nodes, which tend to go down over time (for a while anyways), the prices for older nodes actually go up over time to account for inflation and price increases in labor, electricity, water, and other consumables.

This all conspires to create a nicely shaped price curve which depends entirely on the complexity of the IC (impacts design costs and ultimately the die-size), the intended performance capability of the IC (clockspeeds, power consumption, i.e. general marketability), and the volume of parts one is intending to sell (what is the TAM?).

And of course the curve is not static, nor is one going to produce and sell all the intended parts over the course of 30 days. So the price curve itself will have a third dimension (time) that captures expected shifts in both demand (which impacts the ASP as a function of time) and variable production costs (shifts in currency exchange rates, shipping expenses, production costs from inflation, etc).

The goal of maximizing the area under that curve, with respect to gross margins and total profits, requires some hedging in advance of designing the IC itself.
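A toy version of that fixed-plus-variable curve might look like this (both dollar figures are assumptions, not real industry numbers):

```python
# Toy model of net cost-per-part vs volume (all dollar figures are assumptions).
FIXED_COSTS = 50_000_000      # amortized IC + node development expense
VARIABLE_PER_PART = 12.0      # production cost per part (wafer, test, package)

def cost_per_part(volume: int) -> float:
    # Fixed costs amortize across every part sold; variable cost does not.
    return FIXED_COSTS / volume + VARIABLE_PER_PART

for volume in (1_000_000, 10_000_000, 100_000_000):
    print(f"{volume:>11,} parts -> ${cost_per_part(volume):.2f} each")
```

The curve flattens toward the variable cost as volume grows, which is why the size of the TAM dominates the decision.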
 

Charles Kozierok

Elite Member
May 14, 2012
6,762
1
0
In the "old" computer days we had Mainframe computers, and they were sort of killed off by Mini-computers. Then LARGE-Microprocessors killed off mini-computers.

This is a pretty large oversimplification, even to the extent that it applies to this situation at all. Mainframes aren't dead even today.

So I assume IDC means that (Intel) Microprocessors (Large, high power consuming, meant for desktops) are killed off in coming years by (Arm) SMALL-Microprocessors (small, low power consuming, meant for hand held devices).

When two products or technologies clash in the marketplace, one of two things happens: they coexist, or one drives the other out. Which occurs depends on whether one is nearly completely better than the other, or whether each is better than the other in some way.

Good current example is SSDs and HDDs. An SSD is smaller, lighter, faster and (arguably) more reliable -- but an HDD is significantly cheaper. And so they each have a market niche. If and when the price of SSDs drops to below that of HDDs, or even close, HDDs will likely be driven out of the market.

Same thing with CPUs right now -- ARM chips are not strictly better than x86 CPUs, they are only better in certain circumstances. I not only do not see it as obvious that ARM will ever be strictly better than x86, I see a good chance that the opposite may happen. x86 has been left for dead too many times in the past, and has proven remarkably difficult to kill off, something even Intel knows from repeated direct experience.

Arm processors come in at prices around a few dollars.
Higher end haswell processors come in at prices around a few hundred dollars.

ARM processors that cost a few dollars can't do what higher-end x86 processors can. If they could, Intel wouldn't be able to sell the latter for hundreds of dollars. This isn't complicated.

Usually when there are such big price differentials, history tells us that the thing which costs 100 times more disappears.
E.g. In the 1960's, desktop calculators were something like $8,000.
Hand held calculators in coming years were $1.
How many shops stock $8,000 "desktop" calculators now ?

Again, this is not a valid analogy, because you are talking about equivalent items where one is strictly better than the other. That's not the case with ARM versus x86.
 
Last edited:

SOFTengCOMPelec

Platinum Member
May 9, 2013
2,417
75
91
There is a very good reason why you've read about both sides of that situation, and have simultaneously grappled with which side is correct...the reason being that of course one side or the other can be true depending on certain conditions.

As is true with everything in life, you have fixed costs and per-part costs. And when you amortize the fixed costs over the course of the volume produced (which adds to the per-part costs) you have a net cost-per-part that produces a curve based on volume of parts.

For an IC the fixed costs include amortization of the node's development expense as well as the IC's development expense. The variable cost is the production expense associated with producing the wafer itself (which will not include the amortized expense of developing the node if you are an IDM, but will include it if you are fabless and buying wafers from a foundry).

The IC development cost doesn't go away with time, but the node development cost will. And the price per wafer for production is always more expensive on newer nodes than older nodes. But contrary to prices for newer nodes, which tend to go down over time (for a while anyways), the prices for older nodes actually go up over time to account for inflation and price increases in labor, electricity, water, and other consumables.

This all conspires to create a nicely shaped price curve which depends entirely on the complexity of the IC (impacts design costs and ultimately the die-size), the intended performance capability of the IC (clockspeeds, power consumption, i.e. general marketability), and the volume of parts one is intending to sell (what is the TAM?).

And of course the curve is not static, nor is one going to produce and sell all the intended parts over the course of 30 days. So the price curve itself will have a third dimension (time) that captures expected shifts in both demand (which impacts the ASP as a function of time) and variable production costs (shifts in currency exchange rates, shipping expenses, production costs from inflation, etc).

The goal of maximizing the area under that curve, with respect to gross margins and total profits, requires some hedging in advance of designing the IC itself.

That's a very good explanation, thanks!
I understand it now: both "old" OR "new" process technologies, such as 22 nm or 14 nm, could be the cheapest option for a chip/CPU manufacturer. It all depends on a ton of complicated economic factors, which go considerably beyond the sort of simplistic reasons we typically pick on in these forums, e.g. "Intel hates enthusiasts" (rather than the economic reality that enthusiast chips are not very profitable, etc. etc.).

Non-car analogy.
In other industries, a spreadsheet of all the production costs is sometimes created (e.g. for a new PC motherboard), and the parameters are juggled by accountants and managers until an optimum strategy is decided upon.
E.g. sell huge quantities of the motherboard at a low price, or sell tiny quantities of a very expensive motherboard; see whether adding an additional network controller chip will increase the theoretical profit of the motherboard or not.
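The kind of what-if that spreadsheet answers can be sketched in a few lines (every figure here is invented for illustration: hypothetical volumes, prices, and NRE):

```python
# Invented numbers: does adding a second network controller chip pay for itself?
def profit(units: int, price: float, unit_cost: float, fixed: float) -> float:
    # Contribution margin per unit times volume, minus fixed (NRE) costs.
    return units * (price - unit_cost) - fixed

base = profit(units=200_000, price=120.0, unit_cost=70.0, fixed=1_500_000)
# Extra chip: +$4 unit cost, +$9 price, slightly smaller market, extra NRE.
with_nic = profit(units=190_000, price=129.0, unit_cost=74.0, fixed=1_600_000)

print(f"base:     ${base:,.0f}")
print(f"with NIC: ${with_nic:,.0f}")
```

Under these assumptions the extra chip wins; nudge the volumes or prices a little and it loses, which is the whole point of the spreadsheet exercise.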

Your very well written explanation has illustrated to me that the IC (fab) business is a really, really tough version of the economics that other kinds of business face.

I've previously read long articles about IC manufacturing yields, and the complicated economic decisions behind selecting the right balance: lower transistor counts, hence better yields but lower sale values, versus higher transistor counts, worse yields, but much more lucrative sales. But you have opened my eyes to the massively more complicated economics of fab plants.
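That yield-versus-die-size balance is often approximated with a simple Poisson defect model, yield ≈ e^(−A·D0); here is a sketch with an assumed defect density and invented die prices:

```python
import math

# Assumed numbers: Poisson yield model on a 300 mm wafer; prices are invented.
D0 = 0.25                          # defect density, defects per cm^2 (assumption)
WAFER_AREA_CM2 = math.pi * 15 ** 2

def revenue_per_wafer(die_area_cm2: float, price_per_die: float) -> float:
    yield_fraction = math.exp(-die_area_cm2 * D0)   # Poisson defect model
    gross_dies = WAFER_AREA_CM2 / die_area_cm2      # ignore edge loss for brevity
    return gross_dies * yield_fraction * price_per_die

small = revenue_per_wafer(die_area_cm2=1.0, price_per_die=40.0)
large = revenue_per_wafer(die_area_cm2=4.0, price_per_die=300.0)
print(f"small die: ${small:,.0f}/wafer   large die: ${large:,.0f}/wafer")
```

With these assumptions the small die earns slightly more per wafer; drop D0 to 0.1 (a mature process) and the big, lucrative die pulls ahead instead, which is exactly the balance those articles describe.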
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
ARM prices also tend to go up fast with performance. The S4 for example is estimated to have a $30 CPU cost. And we are not talking about list price here.
 

blackened23

Diamond Member
Jul 26, 2011
8,548
2
0
ARM prices also tend to go up fast with performance. The S4 for example is estimated to have a $30 CPU cost. And we are not talking about list price here.

Yep, ARM SoC costs are commensurate with performance, and there is a wide variety of products available. Obviously you can buy extremely low-performance yet high-battery-life ARM SoCs which are dirt cheap, but there are products at the other end of the spectrum as well.
 

SOFTengCOMPelec

Platinum Member
May 9, 2013
2,417
75
91
This is a pretty large oversimplification, even to the extent that it applies to this situation at all. Mainframes aren't dead even today.

I agree with you, but it gets complicated.
The real top-end, massively sized supercomputers these days, used for massive scientific research, weather prediction, nuclear bomb simulations etc., are in effect the modern high/top-end mainframe computers (i.e. does one's definition of mainframe include a giant supercomputer, even if it is a huge cluster of tiny compute nodes, or MUST IT BE one (or a few) single very fast processors?).

If you want to jump in and say "aha, wait a minute, they are NOT strictly mainframe computers", because they are not a small number of ultra-powerful processors (like the old Cray-1 etc. was), but a giant cluster of "compute nodes", I would have to agree. And many tasks WON'T parallelise into code suitable for running on such computers.
So it all depends on one's definitions. Three possible ones come to mind:
  1. Many people think that mainframe computers have been killed off in modern times
  2. Many agree that mainframe computers are still important today
  3. Some/many/(me) think that the giant cluster "supercomputers" count as a sort of mainframe computer. But non-parallelisable work needs a different type of "mainframe" computer (which nearly all of the old generation were), or a very high-end PC (workstation).


When two products or technologies clash in the marketplace, one of two things happens: they coexist, or one drives the other out. Which occurs depends on whether one is nearly completely better than the other, or whether each is better than the other in some way.
Good current example is SSDs and HDDs. An SSD is smaller, lighter, faster and (arguably) more reliable -- but an HDD is significantly cheaper. And so they each have a market niche. If and when the price of SSDs drops to below that of HDDs, or even close, HDDs will likely be driven out of the market.

Same thing with CPUs right now -- ARM chips are not strictly better than x86 CPUs, they are only better in certain circumstances. I not only do not see it as obvious that ARM will ever be strictly better than x86, I see a good chance that the opposite may happen. x86 has been left for dead too many times in the past, and has proven remarkably difficult to kill off, something even Intel knows from repeated direct experience.

ARM processors that cost a few dollars can't do what higher-end x86 processors can. If they could, Intel wouldn't be able to sell the latter for hundreds of dollars. This isn't complicated.

You may well be right, because x86 has been very resilient over the years. Even Intel has sometimes tried to move on from x86 with other architectures, and failed, a number of times.

But as ARM steals more and more revenue from x86 in the future (apparent to me, but subjective), it may get to the point that further x86 improvements are too expensive relative to their (possibly) decreasing future sales.


Again, this is not a valid analogy, because you are talking about equivalent items where one is strictly better than the other. That's not the case with ARM versus x86.

If that customer wants/needs or is used to an x86 platform, e.g. Windows, then x86 is the obvious choice.

But, especially if the OS is unimportant to a particular customer, I think their equivalence depends on the usage/application:

For stuff which is effectively just displaying simple videos, web browsing, emailing, simple low-end server-like work, etc., either a high-end ARM (future 2014++ new-architecture ones; old ones have a reputation for being slow) or an x86 would be reasonably fine.
(In your HDD/SSD analogy, it's like saying that for capacities below 500 GB, SSD is the best choice.)

For stuff like high-end gaming, serious development/workstation use, serious servers, heavy web browsing (25 pages open at the same time plus high-end Flash games), heavy users, etc., x86 (at least at the moment) is definitely the preferred choice.
(In your HDD/SSD analogy, it's like saying that for capacities above 2 TB, HDD is the best choice.)


------------------------------------------------------------

In summary, I broadly agree with you, and was probably too hasty in predicting x86's demise.
 
Last edited:

SOFTengCOMPelec

Platinum Member
May 9, 2013
2,417
75
91
ARM prices also tend to go up fast with performance. The S4 for example is estimated to have a $30 CPU cost. And we are not talking about list price here.

Yep, ARM SoC costs are commensurate with performance, and there is a wide variety of products available. Obviously you can buy extremely low-performance yet high-battery-life ARM SoCs which are dirt cheap, but there are products at the other end of the spectrum as well.

That's exactly what has been bugging me, worrying about possible future CPUs.

Even if Intel are partially matched or even beaten (probably unlikely) by other CPU manufacturers, the competing CPUs could well cost broadly equivalent amounts of money. Meaning that their main advantage to consumers would be increased competition against Intel, forcing Intel to charge reasonable prices for its processors.
NOT mega-cheap, very powerful ARM processors (bah bah, double bah, I want my 1,000,000,000,000,000-core, low-cost personal cluster, and change from the $1).
 

AdamK47

Lifer
Oct 9, 1999
15,322
2,928
126
We've had the x86 demise predicted in the late 90's with RISC vs CISC. Still waiting.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
That's exactly what has been bugging me, worrying about possible future CPUs.

Even if Intel are partially matched or even beaten (probably unlikely) by other CPU manufacturers, the competing CPUs could well cost broadly equivalent amounts of money. Meaning that their main advantage to consumers would be increased competition against Intel, forcing Intel to charge reasonable prices for its processors.
NOT mega-cheap, very powerful ARM processors (bah bah, double bah, I want my 1,000,000,000,000,000-core, low-cost personal cluster, and change from the $1).

If an ARM CPU performed like a 4770. It would also cost something like a 4770. Its an illusion to think we suddenly get cheap CPUs. R&D and other expenses still needs to be payed.
 
Mar 10, 2006
11,715
2,012
126
If an ARM CPU performed like a 4770. It would also cost something like a 4770. Its an illusion to think we suddenly get cheap CPUs. R&D and other expenses still needs to be payed.

People seem to forget this. ARM's propaganda machine has done a good job...props to them
 

SOFTengCOMPelec

Platinum Member
May 9, 2013
2,417
75
91
If an ARM CPU performed like a 4770. It would also cost something like a 4770. Its an illusion to think we suddenly get cheap CPUs. R&D and other expenses still needs to be payed.

Sadly, I'm sure you are right. I am still secretly hoping that potentially bigger "Arm" sales, may lower the prices, as the fixed R&D would be divided into a larger number of sold units.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
Sadly, I'm sure you are right. I am still secretly hoping that potentially bigger "Arm" sales, may lower the prices, as the fixed R&D would be divided into a larger number of sold units.

Any custom cores cost extra, a lot extra.

And the default ARM core is pathetic. You get exactly what you pay for.
 

SOFTengCOMPelec

Platinum Member
May 9, 2013
2,417
75
91
Any custom cores cost extra, a lot extra.

And the default ARM core is pathetic. You get exactly what you pay for.

I guess an analogy, to what you (and others) are saying, is as follows:

Electronic calculators cost something like $8,000 in the 1960s.
For a long time (since the end of the massive calculator development "explosion"), many calculators have been around $1 or so.

BUT, it still costs $10..$100, or more, if you want a decent quality one e.g. Scientific or Programmable or heavy duty or a printing-calculator.

Or better still (analogy wise), Digital cameras. You can buy very cheap ones e.g. $20 (guestimate), but they are simple ones, without many features, and probably don't take the best pictures. It still costs many $100's to buy a decent semi-professional camera (in my opinion, perhaps many would disagree, I don't know).

So Arm chips would/should be the same, with only very weak, poorly performing ones, being very cheap. The rest, rapidly rising in cost, as they get more and more like Intel 4770's.

i.e. Professional Calculator = Professional Camera = Professional Arm cpu = Professional Intel 4770 = Lots and lots of $'s

I will have to go back to dreaming about getting 8 core Haswell E's, now!
 

Idontcare

Elite Member
Oct 10, 1999
21,118
59
91
If an ARM CPU performed like a 4770. It would also cost something like a 4770. Its an illusion to think we suddenly get cheap CPUs. R&D and other expenses still needs to be payed.

I agree with Shintai here, costs associated with the complexity of an IC do not scale linearly.

They don't scale linearly with the design team, nor the layout team, nor the validation team, nor the yield team, nor the packaging team...

To make little things become bigger things, you pay an outsized expense for that privilege.

Look at nature. Look at ants. Awesome performance/weight ratios. Scale that ant up to human-size on paper and you find yourself saying silly things like "an ant the size of a human could lift a school bus over its head and walk the equivalent of 30 kph with it!"...but nature shows you that you can't scale an ant to human size (its exoskeleton cannot scale like that), so the hypothetical scenario is completely absurd and irrelevant at the same time.

An ARM processor vs a big-core processor like the 4770 is like that IMO.
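The ant intuition is the square-cube law: strength tracks cross-sectional area (proportional to L²) while mass tracks volume (proportional to L³). A quick sketch with rough, made-up numbers:

```python
# Square-cube law: scale an ant up and watch strength-to-weight collapse.
# All numbers are illustrative assumptions, not real ant biology.
ant_length_m = 0.005          # ~5 mm ant
human_length_m = 1.8          # scale target
scale = human_length_m / ant_length_m   # 360x linear scale-up

strength_gain = scale ** 2    # muscle/exoskeleton strength ~ cross-section area
mass_gain = scale ** 3        # body mass ~ volume

# An ant lifts roughly 50x its own weight; the scaled-up "ant" manages far less.
ant_ratio = 50.0
scaled_ratio = ant_ratio * strength_gain / mass_gain
print(f"linear scale: {scale:.0f}x")
print(f"strength-to-weight: {ant_ratio:.0f}x -> {scaled_ratio:.2f}x its own weight")
```

So the 360x scaled-up ant could lift only about 0.14x its own weight under this naive model, which is why the lift-a-school-bus extrapolation falls apart.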

(also, not a grammatical nitpick but just an FYI - it is "paid" not "payed"...English sucks as a rules-based language, it happens )
 

SOFTengCOMPelec

Platinum Member
May 9, 2013
2,417
75
91
I agree with Shintai here, costs associated with the complexity of an IC do not scale linearly.

They don't scale linearly with the design team, nor the layout team, nor the validation team, nor the yield team, nor the packaging team...

To make little things become bigger things, you pay an outsized expense for that privilege.

Look at nature. Look at ants. Awesome performance/weight ratios. Scale that ant up to human-size on paper and you find yourself saying silly things like "an ant the size of a human could lift a school bus over its head and walk the equivalent of 30 kph with it!"...but nature shows you that you can't scale an ant to human size (its exoskeleton cannot scale like that), so the hypothetical scenario is completely absurd and irrelevant at the same time.

An ARM processor vs a big-core processor like the 4770 is like that IMO.

(also, not a grammatical nitpick but just an FYI - it is "paid" not "payed"...English sucks as a rules-based language, it happens )

That's a nice explanation.

There used to be (and still is) a classic little IC called the NE555, usually just called the 555 (timer IC). It's an amazing IC, able to perform a wide range of (analogue) timer functions, such as oscillating or one-shot timing, etc.
But one of the most amazing things about it (this is from memory, so my apologies if I get the story wrong) is that it was designed by just ONE electronics engineer, over the space of just one weekend, AT HOME. He was not even at an IC plant while he was designing it.
Also, if I remember right, the 555 is the most-produced integrated circuit ever.

Nowadays, they probably have giant teams of hundreds or even thousands of highly trained experts, working for years, using powerful computer-aided design software.

So, yes, I can easily see why things do not scale linearly. (Unless we can get that 555 engineer to make an ARM version of a 32-core 4770; he can have an entire week to design it in, joke.)

EDIT: Actually, I think the original (1st) microprocessor (Intel 4004), was designed by just 2 people, using paper (cut-able light box version, if you are fussy) over the space of one weekend as well, and had something like 2,000 transistors.

EDIT2: 1 weekend sounds too short. A quick internet search has NOT found out how long the 4004 took. But it was probably a lot shorter than today's development cycles.
 
Last edited:

aigomorla

CPU, Cases&Cooling Mod PC Gaming Mod Elite Member
Super Moderator
Sep 28, 2005
20,882
3,230
126
You know why it's dead?

He snapped off 8 resistors trying to get that rubber seal off the edge.



The guy clearly didn't know how to delid a processor, and he was new.
If he had any clue about how to delid a processor, he would have realized there was a high possibility the processor had resistors like that, given how the old Intel and AMD CPUs were.

So it's not the TIM that killed his processor.
It was his idiotic, inexperienced delidding technique.

This is totally not the work of Coolalar.
I assume he would know how to delid a CPU properly.
If it was Coolalar who did this, then he killed it on purpose to prove a point.


And to be honest I don't know why you guys are surprised.
Intel has been weeding us out into set and defined categories.
(overclockers aisle 1... non-overclockers aisle 2... enterprise aisle 1... desktop overclock-gimped aisle 2...)

And now they're telling us they really mean it with those categories.


I don't think I've ever missed AMD so much.
It's getting really ridiculous over on the Intel side without AMD.
 
Last edited:

SOFTengCOMPelec

Platinum Member
May 9, 2013
2,417
75
91
If I have to pay $999 for a Haswell-E (or IVY-E), socket 2011++ class processor from Intel, there is NO WAY (or at least it's very unlikely) I would risk delidding it. (Whoops, $999, bang.)
 
Last edited:

WhoBeDaPlaya

Diamond Member
Sep 15, 2000
7,414
401
126
I will stop it at once, as I don't want to 'brake' forum rules, exceed the analogy speed limits, or get honked at and get my forum driving licence confiscated.

Anyway, sorry, I know that analogies do not always go down well.
Heh, no worries. Just couldn't resist poking fun at the car analogy, since on Slashdot, the usual stuff is :

1) Car analogy
2) Imagine a Beowulf cluster of those
3) Can it run Quake
4) I for one welcome our *** overlords
5) This will be the year of the Linux desktop
 

SOFTengCOMPelec

Platinum Member
May 9, 2013
2,417
75
91
Heh, no worries. Just couldn't resist poking fun at the car analogy, since on Slashdot, the usual stuff is :

1) Car analogy
2) Imagine a Beowulf cluster of those
3) Can it run Quake
4) I for one welcome our *** overlords
5) This will be the year of the Linux desktop

6) How do you drive a car without arms? (Use Intel, rather than Arm)
7) All car accidents are caused by mechanical faults. The most common cause is the nut which is in the middle of the steering wheel and holds it on the rack spindle, i.e. the nut behind the wheel causes accidents.

No worries!
 
Last edited: