different approach to dual processing

jhu

Lifer
Oct 10, 1999
11,918
9
81
i don't know if this has been touched upon before, but i was just thinking about the abysmal performance of the itanium running x86 instructions. could intel have just tacked a pentium core onto the itanium somewhere? couldn't that have solved the x86 issue for itanium instead of having to run hardware translation on the itanium processor? since the pentium only had 3 million transistors, that's not a lot of die real-estate.

along the same lines, current 32-bit and 64-bit x86 cpus, for the most part, don't run 8-bit or 16-bit code anymore (except during startup, and that's about it). why not just tack on a 286 core? that thing only had 134k transistors versus the tens of millions that current processors have. couldn't that free up more die space for other things?
 

Matthew Daws

Member
Oct 19, 1999
31
0
0
I think the answers are "marketing" and the difference between 8/16-bit processors and 8/16-bit data sizes.

Intel certainly could tack a pentium (or whatever) core onto the Itanium. However, I believe that the whole point of having x86 compatibility at all is to give good performance with current software, and excellent performance with new software (so tempting people to move without forcing them to buy new software for everything). As such, an old pentium core can't hack it, and once you start adding a modern, fast x86-compatible core, you are talking about a serious transistor count. There is probably (I would guess) no call for a (by modern standards) very slow x86 ability. Does anyone know if Intel is planning to ditch the x86 mode on the Itanium soon?

Now, for 8/16-bit code. Modern software DOES still use 8 and 16-bit data sizes. For example, in a 32-bit display, each byte corresponds to a colour component (red, green, blue, alpha). Thus, if a graphics package wants to deal with each colour component independently (as they do), the processor has to be able to handle bytes of data (i.e. 8-bit data sizes). Consequently, a modern CPU must be able to deal with 8 (and 16) bit sizes of data. Once you have this, I imagine that adding in the ability to run in a purely 16-bit mode (i.e. 286 mode) is not that hard. There are also advantages to having one CPU do everything: e.g. one memory bus etc. I suppose that if we ever get a new BIOS system, we might get PCs which start up in 386 mode, and then we might start to see 286 mode disappearing.

Anyhow, that's how I would answer the questions. Anyone with other thoughts?

--Matt
 

jhu

Lifer
Oct 10, 1999
11,918
9
81
Originally posted by: Matthew Daws
I think the answers are "marketing" and the difference between 8/16-bit processors and 8/16-bit data sizes.

Intel certainly could tack a pentium (or whatever) core onto the Itanium. However, I believe that the whole point of having x86 compatibility at all is to give good performance with current software, and excellent performance with new software (so tempting people to move without forcing them to buy new software for everything). As such, an old pentium core can't hack it, and once you start adding a modern, fast x86-compatible core, you are talking about a serious transistor count. There is probably (I would guess) no call for a (by modern standards) very slow x86 ability. Does anyone know if Intel is planning to ditch the x86 mode on the Itanium soon?

given how poorly the itanium emulates x86 code, tacking on a pentium core could provide a vast improvement without cannibalising existing x86 processor sales. but that is the point: to have decent x86 performance, but have even better performance if the application was recompiled for the itanium. of course, i'm assuming that the pentium would be running at the same frequency as the itanium itself.

Now, for 8/16-bit code. Modern software DOES still use 8 and 16-bit data sizes. For example, in a 32-bit display, each byte corresponds to a colour component (red, green, blue, alpha). Thus, if a graphics package wants to deal with each colour component independently (as they do), the processor has to be able to handle bytes of data (i.e. 8-bit data sizes). Consequently, a modern CPU must be able to deal with 8 (and 16) bit sizes of data. Once you have this, I imagine that adding in the ability to run in a purely 16-bit mode (i.e. 286 mode) is not that hard. There are also advantages to having one CPU do everything: e.g. one memory bus etc. I suppose that if we ever get a new BIOS system, we might get PCs which start up in 386 mode, and then we might start to see 286 mode disappearing.

Anyhow, that's how I would answer the questions. Anyone with other thoughts?

--Matt

i'm talking about code that runs in 8/16-bit real mode or 16-bit protected mode. other than startup code, most computers don't even use these modes anymore.
 

Matthew Daws

Member
Oct 19, 1999
31
0
0
given how poorly the itanium emulates x86 code, tacking on a pentium core could provide a vast improvement without cannibalising existing x86 processor sales. but that is the point: to have decent x86 performance, but have even better performance if the application was recompiled for the itanium. of course, i'm assuming that the pentium would be running at the same frequency as the itanium itself.

I'm not sure it is *that* bad at emulation. A brief google search does not yield anything other than rather old articles. Still, I am willing to believe that a pentium running at the same frequency as the itanium itself would probably be better than the itanium in x86 mode. This assumes, though, that you could get a pentium core up to this speed. I believe (again, I'm not sure) that the reason processor design evolves is both to allow higher clock speeds and to add new features (e.g. a P3 core, even a modern implementation like the Pentium M, cannot get close to the frequency of a P4 core). As such, I wonder if Intel could easily get a pentium core up to this speed? If not, we're back to putting a P3/P4 core in there, with those sorts of transistor counts, which I think defeats the point of your idea.

You are maybe winning me round here, though: it would seem to have been about as much effort to add the emulation to the core as it would be to add a full, if old, x86 core. Maybe it would just have looked really bad if it had come to light that the Itanium had a 10-year-old processor in it?

i'm talking about code that runs in 8/16-bit real mode or 16-bit protected mode. other than startup code, most computers don't even use these modes anymore.

Yep, I know. But I'm guessing that the transistor count to implement this is rather small, given that the logic to deal with 8/16-bit data types has to be present anyway: having done some assembly coding, I know that the difference between 16-bit real mode and 32-bit real mode is tiny (even the opcode encoding is the same, just with the default size changed from 16 bits to 32 bits). I guess there might be more complicated issues with 16-bit protected mode (which is a real fudge: technically, there is no way to get out of 16-bit protected mode). IMHO it would be great if we killed real mode on x86. I'm just not convinced that removing support from the main core and then adding a 286 core (or whatever) would be better than what we have now...

Cheers, --Matt
 

Sohcan

Platinum Member
Oct 10, 1999
2,127
0
0
Originally posted by: jhu
i don't know if this has been touched upon before, but i was just thinking about the abysmal performance of the itanium running x86 instructions. could intel have just tacked a pentium core onto the itanium somewhere? couldn't that have solved the x86 issue for itanium instead of having to run hardware translation on the itanium processor? since the pentium only had 3 million transistors, that's not a lot of die real-estate.

along the same lines, current 32-bit and 64-bit x86 cpus, for the most part, don't run 8-bit or 16-bit code anymore (except during startup, and that's about it). why not just tack on a 286 core? that thing only had 134k transistors versus the tens of millions that current processors have. couldn't that free up more die space for other things?

What you're describing isn't so easy. The Pentium core's circuits likely wouldn't even function correctly if ported to the 860 (130 nm) or 1262 (90 nm) process, and there's absolutely no way it would meet the timing required for the end-of-life frequency targets for the 130 nm and 90 nm Itanium 2 processors, even with significant redesign...I'd WAG that the best you could hope for is around 1 - 1.2 GHz for the Pentium core at 90nm. And its performance would fall quite a bit short of the IA-32 software execution layer. The IA-32 EL provides 50%-60% of the native IA-64 performance, with tests showing a 1.5 GHz I2 to be equal to a 1.6 GHz P4 in SPEC CPU and a 2.0 GHz P4 in Sysmark...and it is hoped that further optimizations can improve the IA-32 EL's performance by another 10%-20%.

Integrating a more modern core would be far more difficult. A Northwood or Prescott core would have a poor ROI, considering at best it could only hope to match the performance of the IA-32 EL. A Pentium 3 or Banias/Dothan core would provide more performance, but integrating one isn't as simple as "tacking" it on. Differences in the supply voltage and power as well as minor differences in the process technology would require some redesign...I'd imagine that it would require at least 10-15 engineers to do the design and verification.

There's also the issue of die size...I can see a Pentium 3 core having difficulty meeting the target frequency, and its microarchitecture would be especially long-in-the-tooth at 90nm. That leaves Banias and Dothan, and at 87mm^2, Dothan would take away quite a bit of real estate. Its dynamic and static power draw, even when not in use, is also going to hurt the target clock frequency. And all this assumes that a team doing an x86 core on the same process technology is far enough ahead of the Itanium design that the Itanium team can leverage the core...what happens as Itanium comes more into alignment with x86 in using new process technologies?

Integrating a modern x86 core may give you better x86 performance, but at the expense of project schedule, due to increased design and verification, and because it would bring engineers off of the IA-64 side of things. It would also impact IA-64 performance, because it would potentially impact clock rate, and because it would take away silicon real estate that could otherwise be used for IA-64 features. And even if it was feasible and was deemed to provide a sufficient ROI, it's still baggage that continues to have all the cons described as Itanium designs are compacted to smaller processes. On the other hand, the IA-32 EL requires no engineers (a separate software team can work on it completely independently) and provides performance equivalent to a Pentium 4 clocked ~7% to 33% faster, with more improvement possible.
 

Peter

Elite Member
Oct 15, 1999
9,640
1
0
IA64 certainly needs no more than a subset of x86 - and that's what it has. Its x86 compatibility isn't anywhere near completely compatible, merely targeting execution of x86 programs in an IA64 environment, not made to make the entire machine look like an x86.

As long as you're running the same instruction set architecture anyway, like the AMD64 processors do, there's little to no point in taking "obsolete" stuff out - after all, it's the same set of instructions, and all those 8- and 16-bit register accesses and addressing commands are still valid and mostly useful, even if you're writing a 64-bit enabled program.
 

Mday

Lifer
Oct 14, 1999
18,647
1
81
this has been my argument as to "what the hell was intel thinking"

i have always said, since the performance sucked on x86, why not dedicate some things to x86 even if it's not a complete set of x86 instructions? the rest can be emulated. they could have simply tested a pure ia64 chip (or even simulated it) to see which x86 instructions should be included: the ones that emulation mode does not perform effectively, or the x86 calls that are most frequent. then all MS has to do is put in a wrapper, and a dev box could bang out x86 more effectively than it does now...
 

Smaug

Senior member
Jul 16, 2002
276
0
0
I believe, and there's nothing really confirming or denying it, that there is an actual x86 chip-within-a-chip on the itanium 2, but that it performs poorly.
 

Mday

Lifer
Oct 14, 1999
18,647
1
81
Originally posted by: Smaug
I believe, and there's nothing really confirming or denying it, that there is an actual x86 chip-within-a-chip on the itanium 2, but that it performs poorly.

then you just assumed it, cuz its not there.
 

sao123

Lifer
May 27, 2002
12,650
203
106
Originally posted by: Peter
IA64 certainly needs no more than a subset of x86 - and that's what it has. Its x86 compatibility isn't anywhere near completely compatible, merely targeting execution of x86 programs in an IA64 environment, not made to make the entire machine look like an x86.

As long as you're running the same instruction set architecture anyway, like the AMD64 processors do, there's little to no point in taking "obsolete" stuff out - after all, it's the same set of instructions, and all those 8- and 16-bit register accesses and addressing commands are still valid and mostly useful, even if you're writing a 64-bit enabled program.


I agree with what you're saying in your first paragraph.
I just wanted to let you know that AMD64 does disable a small amount of the legacy stuff when it actually enters 64-bit mode. Specifically, it disables the x87 instruction set... and thereby the original 3DNow! and MMX, which use the x87 registers, are abandoned also. It is noteworthy that all of these instruction sets are still used when the AMD64 cpu is in 32-bit mode. The AMD64 uses SSE, SSE2, & 3DNow! Ext. to accomplish the tasks once processed by x87, MMX, and the original 3DNow! when in full 64-bit mode.

You can read what I'm talking about in more depth Here at CPU-Z's website.
 

Sohcan

Platinum Member
Oct 10, 1999
2,127
0
0
Originally posted by: Mday
Originally posted by: Smaug
I believe, and there's nothing really confirming or denying it, that there is an actual x86 chip-within-a-chip on the itanium 2, but that it performs poorly.

then you just assumed it, cuz its not there.

Yes and no...there isn't a distinct x86 core on Itanium 2, but there is an x86 front-end which, in combination with the IA-64 back-end, provides full hardware x86 compatibility. I'm told by people who would know (I'm on the team that designed McKinley, the 1 GHz Itanium 2) that you could boot into DOS on an Itanium 2 system without any additional software assistance, if you so desired. For various reasons, the IA-32 software emulation layer provides better performance (50%-60% of native IA-64 performance), and will supersede the hardware compatibility in the future for the various reasons I described above.

Here's a McKinley die photo that highlights the IA-32 engine on the upper-left portion of the die...this is normally on HP's Systems and VLSI Technology Division website, but it's down for some reason.
 

Sahakiel

Golden Member
Oct 19, 2001
1,746
0
86
Originally posted by: Smaug
I knew there was a hardware component

?? Last I checked, that's what everyone's been saying for the past five years or so...
 

Matthew Daws

Member
Oct 19, 1999
31
0
0
I just wanted to let you know that AMD64 does disable a small amount of the legacy stuff when it actually enters 64-bit mode. Specifically, it disables the x87 instruction set... and thereby the original 3DNow! and MMX, which use the x87 registers, are abandoned also. It is noteworthy that all of these instruction sets are still used when the AMD64 cpu is in 32-bit mode. The AMD64 uses SSE, SSE2, & 3DNow! Ext. to accomplish the tasks once processed by x87, MMX, and the original 3DNow! when in full 64-bit mode.

You can read what I'm talking about in more depth Here at CPU-Z's website.

I am pretty sure that this is false. If you read the AMD tech docs, it's unclear, but I interpreted them to mean that x87 usage is discouraged but not actually turned off. You could interpret the CPU-Z article the same way as well. The reason I believe this is that SSE/SSE2 etc. are great for multiplication/addition stuff, but they do not have the ability to do, say, sin, cos, exp, or log. I simply cannot believe that AMD would not allow these sorts of math instructions in 64-bit mode. I think they just want programmers to stop using the x87 for basic addition etc. (which is 95% of FPU code, and more like 99.9% in most programs).

Actually, I've just re-read the AMD tech doc, and it is pretty clear that x87 does work in 64-bit mode (long mode). The same seems to be true of MMX instructions. The docs simply state that a recompile is necessary, but that it has no effect on program operation.

Cheers, --Matt
 

sao123

Lifer
May 27, 2002
12,650
203
106
http://www.amd.com/us-en/assets/content_type/white_papers_and_tech_docs/25112.PDF


I decided to go read the AMD documentation as well, to be sure about what I was posting.

What i discovered was i believe we were both part right and part wrong.
From the AMD documentation.

AMD Athlon? 64 and AMD Opteron? processors support multiple methods of performing
floating-point operations. They support the older x87 assembly instructions in addition to the more
recent SIMD instructions (SSE, SSE2, and 3DNow!? technologies). Many of the suggestions in this
chapter are also generally applicable to the AMD Athlon 64 and AMD Opteron processors, with the
exception of SSE2 optimizations and expanded register usage.
AMD Athlon 64 and AMD Opteron processors are 64-bit processors that are fully backwards
compatible with 32-bit code. In general, 64-bit operating systems support the x87 and 3DNow!
instructions in 32-bit threads; however, 64-bit operating systems may not support x87 and 3DNow!
instructions in 64-bit threads. To make it easier to later migrate from 32-bit to 64-bit code, you may
want to avoid x87 and 3DNow! instructions altogether and use only SSE and SSE2 instructions when
writing new 32-bit code.

x87 support wont be disabled by hardware means, but possibly dropped by operating system & api support. I am currently reading MSDN documentation to confirm whether x87 support will be present in 64bit threads running on 64bit Windows.


I'd also like a link to the documentation you read yourself.
 

Matthew Daws

Member
Oct 19, 1999
31
0
0
sao123: Weird! I'm reading the "AMD64 Architecture Programmer's Manual Vol 1". To quote some things:

Applications gain additional benefits using the 64-bit media and x87 instructions. The separate register sets supported by these instructions relieve pressure on the XMM registers available to the 128-bit media instructions. This provides application programs with three distinct sets of floating-point registers. In addition, certain high-end implementations of the AMD64 architecture may support 128-bit media, 64-bit media, and x87 instructions with separate execution units.

In a section labelled "advantages of x87", we have

Instructions that compute transcendental values, including 2^x - 1, cosine, partial arc tangent, partial tangent, sine, sine with cosine, y*log2(x), and y*log2(x+1). The cosine, partial arc tangent, sine, and sine with cosine instructions use angular values expressed in radians for operands and results.

Which was the point I was making. I'd be very surprised if x87 was ever turned off, as then a lot of scientific code couldn't be run. In a game, you probably have pre-computed tables for sine etc. but this isn't accurate enough for scientific stuff (and you'd probably pre-compute a table of sine values in your program, not have it stored in a data file).

You can get the PDF file I'm reading from Volume 1: Application Programming, PID 24592, v.3.08, 04/22/03

It is curious that in the above file, it is suggested that applications use CPUID to check that x87 exists. I was dismissing this as nit-picking by AMD, but maybe it is to allow for OSes to turn off x87. Still, there is no indication of what to do if x87 isn't present!

Hmm, well, in Volume 5: 64-Bit Media and x87 Floating-Point Instructions, PID 26569, v.3.03, 04/22/03 the point is again made that before using x87 in 64-bit mode, an application should check CPUID. Again, there is no explanation of what to do if x87 is turned off.

Ah ha! Actually, I'm not sure that the link in Vol 5 means this. The check it refers to is actually that "long mode" exists at all, and the use of any instructions in long mode seems to carry this warning. There is a rather more throwaway comment about checking that x87 exists, but that's all.

So, well, we both do seem to be correct. Not that I understand really...

Cheers, --Matt
 

Matthew Daws

Member
Oct 19, 1999
31
0
0
Following on from this (though everyone else seems to have lost interest), I'm now even more confused. A google search brings up the following information (it's summed up at the XBitLabs message board). It appears that the *hardware* at present does support x87 under all 64-bit modes. Under linux, you can use x87 under 64-bit modes (and even mix x87 and SSE2 code). However, it appears that under Windows 64 you cannot: i.e. the x87 instructions are disabled. Various websites confirm this, at least for kernel code (i.e. drivers etc.). This is confirmed in Porting to Windows 64

The above says that MMX, 3dNOW and x87 do not work in WIN64. 3dNOW functionality is essentially duplicated in SSE/SSE2, and MMX is also duplicated in SSE/SSE2 (you just change the registers: the instructions stay the same, with the implicit move from 64-bit to 128-bit registers). It says that developers should use SSE/SSE2 for x87 code. It appears, although I can't find anything concrete, that functions like sin, cos, exp, log and the like will be implemented in software (presumably in some standard Windows library, or inlined via a clever compiler). This is reasonable: I believe that x87 implements such functions by computing increasingly good approximations. Still, it seems a little odd, especially as the hardware is present in all current Athlon 64 chips (though it presumably might be removed at some point in the future). It might have been nice if AMD had extended SSE2 to allow sin, cos etc. to work with the SSE2 registers.

Does anyone know how sin etc. are implemented on Power architecture chips? Has anyone got a copy of Win64 and done some benchmarks of scientific code?

Cheers, --Matt
 

Machupo

Golden Member
Dec 15, 1999
1,535
0
76
www.overclockers-network.com
i know that i'm stepping extraordinarily out of my league here, but why would the x86 core have to be on the same die as the itanium?

i remember reading about a programmable gate array computer that was packaged with a quad itanium "secondary" computer.... 'course it didn't help that they were six figures a copy
 

sao123

Lifer
May 27, 2002
12,650
203
106
I don't think the documentation exists to prove it yet, because VS 2005 (containing the x86-64 compiler and libraries) is still in development.

However, in the past... sin, cos, etc. were all contained in math.h and the math STL. In all programs you would add #include <math.h> or equivalent to use those functions. Since that is pre-written library code, you could compile & disassemble the library files when they are available. I would bet that these have SSE/SSE2 optimizations present within them.
 

jhu

Lifer
Oct 10, 1999
11,918
9
81
following the discussions here, i just wanted to add 2 things:

1) i just got back from fry's grand opening in chicago and got a socket A motherboard and memory for $95 with an $80 rebate. this motherboard has onboard sound, video, usb, etc. the only thing it lacks is a cpu. now, i'm wondering why such highly integrated motherboards don't come with a built-in cpu and some memory? nothing too fancy of course, but an integrated 386 with maybe 8mb of memory would be pretty neat. this motherboard uses a via chipset, and via has some experience with cpu design. the purpose of having this would be to at least get a computer running if none of the other parts are currently available (i.e. my situation where i have to go get a cpu somewhere).

2) with regard to x86-64 and fpu operations, x87 instructions include sine, cosine, and others. is the sse2-optimized function better than its x87 instruction counterpart? sse2 sine function vs. x87 sine instruction?
 

CTho9305

Elite Member
Jul 26, 2000
9,214
1
81
Originally posted by: jhu
now, i'm wondering why such highly integrated motherboards don't come with a built-in cpu and some memory? nothing too fancy of course, but an integrated 386 with maybe 8mb of memory would be pretty neat. this motherboard uses a via chipset, and via has some experience with cpu design. the purpose of having this would be to at least get a computer running if none of the other parts are currently available (i.e. my situation where i have to go get a cpu somewhere).
That's something I'd really like to see. If the north bridge (or the southbridge, for Athlon64 boards) had an on-die cpu (it doesn't have to be x86, since the BIOS writer controls all the code it runs), we could get much better debug info even without a functioning CPU. 8MB of memory is a lot (especially to fab on-die); I think 128k or so would be more than enough.
 

Matthew Daws

Member
Oct 19, 1999
31
0
0
(2) with regard to x86-64 and fpu operations, x87 instructions include sine, cosine, and others. is the sse2-optimized function better than its x87 instruction counterpart? sse2 sine function vs. x87 sine instruction?

This is the problem: there isn't an SSE2 sine function! There is an x87 sine function, and modern compilers (e.g. Intel's or the next version of GCC) inline it, and so never call the math library (doing this trick manually with some macros for exp() and pow() massively increases the speed of a GCC-compiled version of Povray I've done).

One hint of bad things to come is in the CPU-Z article. Near the end, they do some benchmarking on WIN64 with a microsoft compiler (not, as someone pointed out, the actual release version). You see that it is replacing x87 code with (scalar) SSE2 code and getting a 5-10% speed-up (mainly thanks to the extra registers). They then say

However, all the functions we used for the test did not provide such a good level of performance, because of the compiler limitation we spoke above. As soon as a trigonometric function was used, the performance drastically dropped down. We just hope that the final release of the Microsoft compiler will solve all these problems.

The pre-release of the AMD64 compiler caused some problems. The generated code is completely safe, but the optimizer is not finished, and some optimizations are just not here. For example, the use of trigonometric functions (sinus, cosinus) drastically slowed the code. Indeed, the compiler uses standard C library sin and cos functions whereas the IA32 compiler uses the floating point dedicated instructions.

What I don't understand is that they don't point out what I believe to be the only possible explanation (unless I'm mistaken about something else), which is that the AMD64 compiler, under Windows64, has NO CHOICE but to call the math libraries for sine, cos etc.

Cheers, --Matt
 

sao123

Lifer
May 27, 2002
12,650
203
106
I am still as confused as you....I suppose the logic behind the change is this:

From http://arstechnica.com/cpu/01q4/p4andg4e2/p4andg4e2-5.html
Increasing floating-point performance with SSE2
I had mentioned earlier that MMX uses a flat register file, and the same is true of both SSE and SSE2. The 8, 128-bit XMM registers are arranged as a flat file, which means that if you're able to replace an x87 FP operation with an SSE or SSE2 operation then you can use clever resource scheduling to avoid the performance hit brought on by the P4's combination of a stack-based FPR and a non-free FXCH instruction. Evidence suggests that a highly advanced compiler that is able to convert large amounts of x87 code to SSE2 code can yield a remarkable performance boost.
 

jhu

Lifer
Oct 10, 1999
11,918
9
81
does the 'fxch' penalty apply to the a64 as well or does it only occur in the p4?
 