Discussion Intel Meteor, Arrow, Lunar & Panther Lakes Discussion Threads


Tigerick

Senior member
Apr 1, 2022
702
632
106






With Hot Chips 34 starting this week, Intel will unveil technical details of the upcoming Meteor Lake (MTL) and Arrow Lake (ARL), the new-generation platforms after Raptor Lake. Both MTL and ARL represent a new direction in which Intel moves to multiple chiplets combined into one SoC platform.

MTL also introduces a new compute tile built on the Intel 4 process, Intel's first to use EUV lithography. Intel expects to ship the MTL mobile SoC in 2023.

ARL will come after MTL, so Intel should be shipping it in 2024 according to Intel's roadmap. The ARL compute tile will be manufactured on the Intel 20A process, Intel's first to use GAA transistors, branded RibbonFET.



Comparison of Intel's upcoming U-series CPUs: Core Ultra 100U, Lunar Lake and Panther Lake

Model | Code-Name | Date | TDP | Node | Tiles | Main Tile | CPU | LP E-Core | LLC | GPU | Xe-cores
Core Ultra 100U | Meteor Lake | Q4 2023 | 15 - 57 W | Intel 4 + N5 + N6 | 4 | tCPU | 2P + 8E | 2 | 12 MB | Intel Graphics | 4
? | Lunar Lake | Q4 2024 | 17 - 30 W | N3B + N6 | 2 | CPU + GPU & IMC | 4P + 4E | 0 | 12 MB | Arc | 8
? | Panther Lake | Q1 2026 ? | ? | Intel 18A + N3E | 3 | CPU + MC | 4P + 8E | 4 | ? | Arc | 12



Comparison of the die size of each tile of Meteor Lake, Arrow Lake, Lunar Lake and Panther Lake

Tile | Meteor Lake | Arrow Lake (N3B) | Lunar Lake | Panther Lake
Platform | Mobile H/U only | Desktop & Mobile H/HX | Mobile U only | Mobile H
Process Node | Intel 4 | TSMC N3B | TSMC N3B | Intel 18A
Date | Q4 2023 | Desktop: Q4 2024, H/HX: Q1 2025 | Q4 2024 | Q1 2026 ?
Full Die | 6P + 8E | 8P + 16E | 4P + 4E | 4P + 8E
LLC | 24 MB | 36 MB ? | 12 MB | ?
tCPU (mm²) | 66.48 | | |
tGPU (mm²) | 44.45 | | |
SoC (mm²) | 96.77 | | |
IOE (mm²) | 44.45 | | |
Total (mm²) | 252.15 | | |



Intel Core Ultra 100 - Meteor Lake



As mentioned by Tom's Hardware, TSMC will manufacture the I/O, SoC, and GPU tiles, which means Intel will manufacture only the CPU tile and the Foveros base tile. (Notably, Intel calls the I/O tile an 'I/O Expander,' hence the IOE moniker.)



 

Attachments

  • PantherLake.png
    283.5 KB · Views: 24,014
  • LNL.png
    881.8 KB · Views: 25,501
Last edited:

AMDK11

Senior member
Jul 15, 2019
438
360
136
That also depends heavily on the application. Here's one for Nehalem.

It also assumes the skill of the teams is the same.

Just because it's not a catastrophic failure doesn't mean it isn't a failure.

Haswell was 10%, and Skylake was 10%. The design, the methodology, and the configuration of the team need to be flipped upside down. They were coasting back then!

What if Intel had decided Willamette was a 15-stage pipeline part clocking at 1.2GHz with 2 decoders and a 16KB L1D instead? What if Prescott had a 25-stage pipeline rather than 31?

We're in an era of low process gains now, and it takes more effort and greater cost to get them. Thus every step they take increasingly needs to be accounted for. A mini-Willamette is still a failure.
Pentium 4 never had more than a 1-way decoder, and even in its last incarnation it was not intended to have higher IPC but to achieve even higher clock speeds.

Compared to Golden Cove, Lion Cove, despite splitting the execution engine into separate FP and ALU schedulers and increasing the total number of execution ports from 5 to 10, gains in absolute terms only one FP unit, one ALU, and one AGU.

I would reserve final judgment until independent tests arrive.
 

DavidC1

Golden Member
Dec 29, 2023
1,211
1,932
96
Pentium 4 never had more than a 1-way decoder, and even in its last incarnation it was not intended to have higher IPC but to achieve even higher clock speeds.
And this was my point: if they had done less of "Netburst" by going with 2-wide decode on Willamette rather than 1-wide, and with only a 15-stage pipeline rather than 20, it still would have proved to be a subpar product. Less bad is still not good.

Back in the P4 days it was easy to scale down, and you got huge benefits from a new process. Yet Netburst still did not make sense. Bulldozer did not make sense. Nowadays the gains are few and much harder to get. So even doing a little bad, a little Netburst or a little Bulldozer, is bad.
 
Last edited:

OneEng2

Senior member
Sep 19, 2022
259
359
106
That also depends heavily on the application. Here's one for Nehalem.

It also assumes the skill of the teams is the same.

Just because it's not a catastrophic failure doesn't mean it isn't a failure.

Haswell was 10%, and Skylake was 10%. The design, the methodology, and the configuration of the team need to be flipped upside down. They were coasting back then!

What if Intel had decided Willamette was a 15-stage pipeline part clocking at 1.2GHz with 2 decoders and a 16KB L1D instead? What if Prescott had a 25-stage pipeline rather than 31?

We're in an era of low process gains now, and it takes more effort and greater cost to get them. Thus every step they take increasingly needs to be accounted for. A mini-Willamette is still a failure.
I believe, based on following this dance for the last 30 some odd years like the geek I am, that Intel has gotten into the habit of lazy design being backed up and overcome by outstanding process technology. Netburst was SO bad an idea that even with a crazy good lead in process technology over AMD it was STILL losing in performance (and couldn't even begin to do real dual core or 64 bit). By going back to an architecture more like PIII, Intel was able to pummel AMD with Core 2. This continued up until Ryzen (and even then some).

The problem with this strategy ends up being the exponential cost of moving to the next node.... that eventually bankrupts you if you are trying to stay 2 die shrinks ahead of everyone else (Like Intel once was able to do).

I suspect you get kinda lazy in design when you have literally 2 times the transistor and power budget of your competitors.

Now we are seeing an age where being able to create a competitive CPU while a process node behind forces your competitor into crazy high risk and terrible margins. I think you are going to see a re-energized Intel in the next 2 years. Let's give the boys in blue a chance, eh?

FYI, I have been an AMD fan for the past 40 years (since they started making x86 clones), so I am hardly blinded by Intel propaganda or anything like that.
 

Hulk

Diamond Member
Oct 9, 1999
4,701
2,863
136
Raptor Lake ST benchmark scores were/are basically fairy dust. Sure you can run "stock" 6-6.2GHz, but not for long. The voltage required for those favored cores will quickly cook the CPU.

As I've written many times here:

1. Intel's first problem is that they are competing in ST (and MT to some extent) against the previous generation's unrealistic frequencies and scores.
2. Their second problem is that there is additional latency with the move from monolithic to tiles.
3. Their third problem is the removal of HT, which seemingly makes it hard to reach the MT performance of the previous generation, especially when coupled with #2.

It's still pre-release so how is ARL doing?

1. The reduced clocks of ARL vs RPL, coupled with tile latency, are making it difficult for Intel to surpass or possibly even match the previous generation in ST. Not to make excuses, but ST benchmark performance driven by "favored cores" is CPU-cooking and basically useless for real work on real applications. More ST (IPC) out of Lion Cove would of course have been better, but I don't think ST will hurt many application benchmarks because very few applications use only 1 or 2 cores.

2. As noted above this appears to be more of an issue with moving the P cores ahead rather than the E cores.

3. Before somewhat reliable performance leaks and Intel's performance information came out, I thought it impossible for Intel to match, much less beat, the 14900K's MT performance with an 8+16 design without HT. I was wrong. Very wrong. Skymont is a resounding success and is responsible for ARL's amazing MT performance. Furthermore, the super area-efficient Skymont clusters are somewhat of an ace up Intel's sleeve. If they can move the needle on the P-cores a bit, they are looking to be very competitive moving forward.
 
Last edited:

Hulk

Diamond Member
Oct 9, 1999
4,701
2,863
136
I believe, based on following this dance for the last 30 some odd years like the geek I am, that Intel has gotten into the habit of lazy design being backed up and overcome by outstanding process technology. Netburst was SO bad an idea that even with a crazy good lead in process technology over AMD it was STILL losing in performance (and couldn't even begin to do real dual core or 64 bit). By going back to an architecture more like PIII, Intel was able to pummel AMD with Core 2. This continued up until Ryzen (and even then some).

The problem with this strategy ends up being the exponential cost of moving to the next node.... that eventually bankrupts you if you are trying to stay 2 die shrinks ahead of everyone else (Like Intel once was able to do).

I suspect you get kinda lazy in design when you have literally 2 times the transistor and power budget of your competitors.

Now we are seeing an age where being able to create a competitive CPU while a process node behind forces your competitor into crazy high risk and terrible margins. I think you are going to see a re-energized Intel in the next 2 years. Let's give the boys in blue a chance, eh?

FYI, I have been an AMD fan for the past 40 years (since they started making x86 clones), so I am hardly blinded by Intel propaganda or anything like that.
For me the astounding fact about Netburst is that so many super smart people can get lost in an area they know so well, and have lived in and studied most of their lives, yet they still end up taking a devastatingly wrong turn on a super highway with no exit for 50 miles, which made them late for an exceedingly important event.
 
Last edited:

OneEng2

Senior member
Sep 19, 2022
259
359
106
Pentium 4 never had more than a 1-way decoder, and even in its last incarnation it was not intended to have higher IPC but to achieve even higher clock speeds.

Compared to Golden Cove, Lion Cove, despite splitting the execution engine into separate FP and ALU schedulers and increasing the total number of execution ports from 5 to 10, gains in absolute terms only one FP unit, one ALU, and one AGU.

I would reserve final judgment until independent tests arrive.
Indeed. Netburst was some executive's wet dream where consumers would purchase a CPU because Intel had spent decades training people that higher clock = higher performance (linearly). It was the most stupid idea in the history of CPUs (although Itanic rates a close second in my book).

I guess it never occurred to Intel execs that AMD would simply abandon naming chips by clock speed.

While Lion Cove is an improvement over Golden Cove in the respects you point out, it is also on a better process than Golden Cove and likely has a higher transistor budget. After all that, it delivers only a marginal increase in IPC... and doesn't appear to clock as high... thus the problem.

You are spot on with the last statement, though. The final verdict will lie with the independent (and, I suspect, quite extensive) tests.
 

cannedlake240

Senior member
Jul 4, 2024
207
111
76
For Intel, I have always heard that SMT only provided 10-15%. AMD's Ryzen cores seem to get 20-30% from their SMT implementation
Blender happens to be a workload that benefits greatly from HT-enabled P-cores. Also, it seems Lion Cove hasn't improved much in this workload. Add to that a slight decrease in all-core frequency from the 14900KS.
would a die with only Skymont still face thermal limiting in a dense DC chip?
Server cores don't run at 4.6 GHz all-core.
 

OneEng2

Senior member
Sep 19, 2022
259
359
106
For me the astounding fact about Netburst is that so many super smart people can get lost in an area they know so well, and have lived in and studied most of their lives, yet they still end up taking a devastatingly wrong turn on a super highway with no exit for 50 miles, which made them late for an exceedingly important event.
LMFAO. Gave me a good laugh, Hulk.

I don't know anyone from that design team, but it is my belief (because I know how smart these kinds of engineers really are) that the "orders from on-high" were given, and likely more than one brave engineer provided ample evidence that the company was about to make a "devastatingly wrong turn on a super highway with no exit for 50 miles".

What I DON'T believe is that Lion Cove is like that at all.
Raptor Lake ST benchmark scores were/are basically fairy dust. Sure you can run "stock" 6-6.2GHz, but not for long. The voltage required for those favored cores will quickly cook the CPU.

As I've written many times here:

1. Intel's first problem is that they are competing in ST (and MT to some extent) against the previous generation's unrealistic frequencies and scores.
2. Their second problem is that there is additional latency with the move from monolithic to tiles.
3. Their third problem is the removal of HT, which seemingly makes it hard to reach the MT performance of the previous generation, especially when coupled with #2.

It's still pre-release so how is ARL doing?

1. The reduced clocks of ARL vs RPL, coupled with tile latency, are making it difficult for Intel to surpass or possibly even match the previous generation in ST. Not to make excuses, but ST benchmark performance driven by "favored cores" is CPU-cooking and basically useless for real work on real applications. More ST (IPC) out of Lion Cove would of course have been better, but I don't think ST will hurt many application benchmarks because very few applications use only 1 or 2 cores.

2. As noted above this appears to be more of an issue with moving the P cores ahead rather than the E cores.

3. Before somewhat reliable performance leaks and Intel's performance information came out, I thought it impossible for Intel to match, much less beat, the 14900K's MT performance with an 8+16 design without HT. I was wrong. Very wrong. Skymont is a resounding success and is responsible for ARL's amazing MT performance. Furthermore, the super area-efficient Skymont clusters are somewhat of an ace up Intel's sleeve. If they can move the needle on the P-cores a bit, they are looking to be very competitive moving forward.
I totally agree with your assessment.

Based on all that Intel had to make up for in moving to the Arrow Lake design (they were behind in many ways architecturally as you pointed out), Arrow Lake and Lunar Lake look pretty good.... and on the right track to me.

Still, they aren't all the way out of the woods yet.

It still looks like the 9950X is able to best Arrow Lake's best in single- and multi-thread... and it is doing it from a process node behind (N4P vs N3B). I am not sure, but I suspect that Zen 5c is more power efficient (more performance per watt) than Skymont, while Skymont has more performance per mm². Still trying to get my head around this one.

Only 1 more week until we find out for sure.
 
Reactions: Tlh97 and Hulk

OneEng2

Senior member
Sep 19, 2022
259
359
106
Blender happens to be a workload that benefits greatly from HT-enabled P-cores. Also, it seems Lion Cove hasn't improved much in this workload. Add to that a slight decrease in all-core frequency from the 14900KS.

Server cores don't run at 4.6 GHz all-core.
It will be interesting to see the comprehensive suite of MT benchmarks.

What I am wondering is what kind of performance/watt Skymont can get in a DC chip.

In the desktop and laptop markets, die size is very important because you can't ask a consumer to buy a computer chip for 5 figures and beyond.

In the DC market, such margins exist and the value of performance greatly overcomes silicon cost.

With these hallmarks in place, the real issue will come down to something that has nothing to do with either of these, but rather licensing cost per core.

The cost of most servers is mostly in the licensing costs, not the hardware (despite the crazy prices these big chips command). This cost is ALSO annual in most cases (just to add insult to injury).

Because of this, the value proposition in DC comes down to:

1) Performance per core (because of the licensing costs)
2) Performance per watt (because the cores become limited by overall power requirements and heat density, which affect their ability to achieve the first point)

SMT threads don't count toward "core count" for licensing, so it is pretty crazy not to include SMT in a DC chip.
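A purely illustrative back-of-envelope of why per-core licensing pushes the DC value equation toward performance per core (every number below is hypothetical, not a real chip or license price):

```python
# Purely illustrative sketch: hypothetical prices, not real hardware or license SKUs.
def five_year_cost(cores, chip_price, license_per_core_per_year, years=5):
    """Hardware is paid once; per-core software licenses are paid every year."""
    return chip_price + cores * license_per_core_per_year * years

# Two hypothetical servers with similar total throughput:
fewer_fast_cores = five_year_cost(cores=64,  chip_price=20_000, license_per_core_per_year=3_000)
more_slow_cores  = five_year_cost(cores=128, chip_price=10_000, license_per_core_per_year=3_000)

print(fewer_fast_cores)  # 980000  -> licensing is ~98% of the total
print(more_slow_cores)   # 1930000 -> half-price silicon, nearly double the bill
```

In other words, doubling per-core performance roughly halves the licensing bill for the same throughput, which dwarfs any savings on the silicon itself.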
 

DavidC1

Golden Member
Dec 29, 2023
1,211
1,932
96
Pentium 4 never had more than a 1-way decoder, and even in its last incarnation it was not intended to have higher IPC but to achieve even higher clock speeds.
There was a popular website during those days called Geeks.com or something, and there may have been a rumor that Prescott widened to 3-wide. Looks like my memory got stuck there.

It's interesting that they stuck to a 1-way decode when doubling it to 2-way would have increased performance dramatically in lots of applications. It's not until 3-way that diminishing returns really kick in.
For me the astounding fact about Netburst is that so many super smart people can get lost in an area they know so well, and have lived in and studied most of their lives, yet they still end up taking a devastatingly wrong turn on a super highway with no exit for 50 miles, which made them late for an exceedingly important event.
Oftentimes, specialization in a field results in a very tunnel-vision view.

While many may have disagreed, many also fell into the trap of believing clocks would somehow scale forever into ever-higher GHz, when it was clear even to a simple DIY builder of computers that, based on LN2 OC results, it could barely exceed 7GHz (back then), and doing that on conventional cooling is next to impossible. It's 2024 and Raptor Lake with all its tricks is degrading at a mere 6GHz.
 

cannedlake240

Senior member
Jul 4, 2024
207
111
76
It will be interesting to see the comprehensive suite of MT benchmarks.

What I am wondering is what kind of performance/watt Skymont can get in a DC chip.

In the desktop and laptop markets, die size is very important because you can't ask a consumer to buy a computer chip for 5 figures and beyond.

In the DC market, such margins exist and the value of performance greatly overcomes silicon cost.

With these hallmarks in place, the real issue will come down to something that has nothing to do with either of these, but rather licensing cost per core.

The cost of most servers is mostly in the licensing costs, not the hardware (despite the crazy prices these big chips command). This cost is ALSO annual in most cases (just to add insult to injury).

Because of this, the value proposition in DC comes down to:

1) Performance per core (because of the licensing costs)
2) Performance per watt (because the cores become limited by overall power requirements and heat density, which affect their ability to achieve the first point)

SMT threads don't count toward "core count" for licensing, so it is pretty crazy not to include SMT in a DC chip.
It's unclear what their plan is with HT on server. There are rumors that after Clearwater Forest there'll be no more E-core Xeons. Intel might have to bring HT back at some point, whether on Lion Cove or an Atom successor, whichever takes over as the primary IP.
 
Reactions: OneEng2

DavidC1

Golden Member
Dec 29, 2023
1,211
1,932
96
It's unclear what their plan is with HT on server. There are rumors that after Clearwater Forest there'll be no more E-core Xeons. Intel might have to bring HT back at some point, whether on Lion Cove or an Atom successor, whichever takes over as the primary IP.
Other, more well-known leakers said that's wrong and that the successor to Clearwater Forest, Rogue River Forest, is still on the table.

Moving to Skymont, and then having a cache base tile under 288 of those cores, will do very well against a 128-core/256-thread Granite Rapids.
 

cannedlake240

Senior member
Jul 4, 2024
207
111
76
Other, more well-known leakers said that's wrong and that the successor to Clearwater Forest, Rogue River Forest, is still on the table.

Moving to Skymont, and then having a cache base tile under 288 of those cores, will do very well against a 128-core/256-thread Granite Rapids.
Idk... Could be because of old info. It wouldn't be too surprising for Intel; they already said P-core Xeons will be the lion's share of Xeons for years. Most secondary products that serve smaller segments seem to be getting cancelled by Intel.
 

DavidC1

Golden Member
Dec 29, 2023
1,211
1,932
96
Intel might have to bring HT back at some point, whether on Lion Cove or an Atom successor, whichever takes over as the primary IP.
Or... when everything moves to being based on the current E-core architecture, SMT won't come back.

Read up on Eric Quinnell's talk. He goes into detail about both uop caches and SMT.
Op caches are large and complex, require branch prediction logic to be replicated, and use lots of power:
1. ~100-bit read/write array (vs the 16-32-bit original instruction)
2. Redundant branch prediction hardware
3. OP cache specific fetch-line buffering
Regarding SMT:
Claim: SMT in an out-of-order, virtualized, non-shared context will be beaten by single-threaded SMP
-2x85% IPC SMP cores at ½ the size beat ~120% IPC SMT
-Full-context SMT takes >> 10% area in reality. Find a die shot (not a paper or claim) where this is untrue.
-“Noisy neighbor effect” with cross-polluting caches
-Spectre-style security problems
-HW-multi thread scheduling unable to predict SW multi-thread intent, forecasting, prefetching
-All threads must sleep before powering down
-Out-of-order resources lost to hold variable SMT retire state
-Turn off hyperthreading and see for yourself. Did you lose anything?
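A rough unpacking of that first bullet (the 85%/120% figures are from the slide; the per-area framing below is just my own arithmetic, in arbitrary units):

```python
# Back-of-envelope for the slide's claim, in arbitrary area/throughput units.
smt_core = {"area": 1.0, "throughput": 1.20}   # big core, ~+20% aggregate throughput from SMT
smp_core = {"area": 0.5, "throughput": 0.85}   # small core: half the area, ~85% of the IPC

cores_per_smt_footprint = smt_core["area"] / smp_core["area"]   # 2 small cores fit in 1 big core's area
print(cores_per_smt_footprint * smp_core["throughput"])         # 1.7 aggregate throughput
print(smt_core["throughput"])                                   # 1.2 -> the two small cores win per mm^2
```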
SMT also increases the complexity of validation, which is far more important than a few mm², because pitfalls in development can result in longer product cycles, which has a knock-on effect on future lines. Then you must consider what you gain from the complexity saved in validation: easier development, or moving that complexity elsewhere, such as into the uarch.

Intel's own x86 Optimization manual says this:
This overall approach to x86 instruction decoding provides a clear path forward to very wide designs without needing to cache post-decoded instructions.
Uop cache days are limited for Intel, and SMT's may not be far behind.

That's a 3x core-size difference on the same node. It means a 288-core Skymont part spends less area on cores than a 128-core Granite Rapids, and that 3x is measured on the client side, where the area-hogging AMX doesn't even exist on Redwood Cove.
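To make that concrete (the ~3x core-size ratio is the claim above; the rest is just arithmetic in relative units, not real die measurements):

```python
# Relative core-area budget, assuming a Granite Rapids P-core is ~3x a Skymont E-core.
p_core_area = 3.0   # relative units (claimed ratio, not a measured figure)
e_core_area = 1.0

print(128 * p_core_area)  # 384.0 units of area on 128 P-cores
print(288 * e_core_area)  # 288.0 units of area on 288 E-cores -> still less total core area
```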

This is more than just technicals. It is a fundamental difference in ideology.
 

MoistOintment

Member
Jul 31, 2024
59
107
66

It is interesting that it is doing so well in multi-thread. Give credit to all those praising the Skymont cores, but still, there are concerns even in multi-core.

Still not a match for 9950X in multithread despite having 4 more cores. Also, it is thermally limited even on N3B.

Thoughts?
Is it, though? 101C is 4 degrees below TjMax AFAIK. And exceeding 5.5GHz is TVB.

And for all we know, this could have been tested with a cheaper cooler. We'll see when testing comes out, but if it can hit and hold 5.7GHz on a 240mm AIO, I would consider the thermals just fine.
 

alcoholbob

Diamond Member
May 24, 2005
6,338
404
126

It is interesting that it is doing so well in multi-thread. Give credit to all those praising the Skymont cores, but still, there are concerns even in multi-core.

Still not a match for 9950X in multithread despite having 4 more cores. Also, it is thermally limited even on N3B.

Thoughts?

Doesn't the 9950X usually score in the 17k range in CPU-Z?
 

MoistOintment

Member
Jul 31, 2024
59
107
66
For me the astounding fact about Netburst is that so many super smart people can get lost in an area they know so well, and have lived in and studied most of their lives, yet they still end up taking a devastatingly wrong turn on a super highway with no exit for 50 miles, which made them late for an exceedingly important event.
Correct me if I'm wrong, but isn't Netburst being a failure more a case of hindsight being 20/20? This was years before I started following hardware, but I heard that when the Netburst design first started, many believed clock speeds would just keep increasing, so focusing on clock speed at the expense of IPC sounded like a good idea at the time development started. Only later was it discovered to be a terrible idea. Going from 2GHz to 4GHz is 2x performance without any change to IPC. Intel was publicly predicting that they'd be hitting 10GHz within a few generations.
 

Hitman928

Diamond Member
Apr 15, 2012
6,391
11,392
136
Correct me if I'm wrong, but isn't Netburst being a failure more a case of hindsight being 20/20? This was years before I started following hardware, but I heard that when the Netburst design first started, many believed clock speeds would just keep increasing, so focusing on clock speed at the expense of IPC sounded like a good idea at the time development started. Only later was it discovered to be a terrible idea. Going from 2GHz to 4GHz is 2x performance without any change to IPC. Intel was publicly predicting that they'd be hitting 10GHz within a few generations.

Intel was right in that the FETs themselves continued to scale very well in frequency. The problem they didn't seem to anticipate is that the effect of the parasitics on achievable frequency grows faster than the achievable frequency of the FETs. Additionally, power density becomes a major issue as you continue to scale to smaller and smaller geometries.
 
Reactions: Elfear

gdansk

Diamond Member
Feb 8, 2011
3,276
5,186
136
Correct me if I'm wrong, but isn't Netburst being a failure more a case of hindsight being 20/20? This was years before I started following hardware, but I heard that when the Netburst design first started, many believed clock speeds would just keep increasing, so focusing on clock speed at the expense of IPC sounded like a good idea at the time development started. Only later was it discovered to be a terrible idea. Going from 2GHz to 4GHz is 2x performance without any change to IPC. Intel was publicly predicting that they'd be hitting 10GHz within a few generations.
Here's an editorial from the Microprocessor Report, Dec 1999, which I think shows the impression of speed demons at the time:
But personally, I think they should have realized some of the problems after Willamette, yet they doubled down with Prescott.
 