Discussion Intel Meteor, Arrow, Lunar & Panther Lakes Discussion Threads


Tigerick

Senior member
Apr 1, 2022
702
632
106






With Hot Chips 34 starting this week, Intel will unveil technical details of the upcoming Meteor Lake (MTL) and Arrow Lake (ARL), the next-generation platforms after Raptor Lake. Both MTL and ARL represent a new direction in which Intel moves to multiple chiplets combined into one SoC platform.

MTL also introduces a new compute tile built on the Intel 4 process, Intel's first node to use EUV lithography. Intel expects to ship MTL mobile SoCs in 2023.

ARL will come after MTL, so Intel should be shipping it in 2024 according to Intel's roadmap. The ARL compute tile will be manufactured on the Intel 20A process, Intel's first to use GAA transistors, branded RibbonFET.



Comparison of Intel's upcoming U-series CPUs: Core Ultra 100U, Lunar Lake and Panther Lake

Model | Code-Name | Date | TDP | Node | Tiles | Main Tile | CPU | LP E-Core | LLC | GPU | Xe-cores
Core Ultra 100U | Meteor Lake | Q4 2023 | 15 - 57 W | Intel 4 + N5 + N6 | 4 | tCPU | 2P + 8E | 2 | 12 MB | Intel Graphics | 4
? | Lunar Lake | Q4 2024 | 17 - 30 W | N3B + N6 | 2 | CPU + GPU & IMC | 4P + 4E | 0 | 12 MB | Arc | 8
? | Panther Lake | Q1 2026 ? | ? | Intel 18A + N3E | 3 | CPU + MC | 4P + 8E | 4 | ? | Arc | 12



Comparison of die size of Each Tile of Meteor Lake, Arrow Lake, Lunar Lake and Panther Lake

 | Meteor Lake | Arrow Lake (N3B) | Lunar Lake | Panther Lake
Platform | Mobile H/U Only | Desktop & Mobile H&HX | Mobile U Only | Mobile H
Process Node | Intel 4 | TSMC N3B | TSMC N3B | Intel 18A
Date | Q4 2023 | Desktop: Q4 2024, H&HX: Q1 2025 | Q4 2024 | Q1 2026 ?
Full Die | 6P + 8E | 8P + 16E | 4P + 4E | 4P + 8E
LLC | 24 MB | 36 MB ? | 12 MB | ?
tCPU (mm²) | 66.48 | ? | ? | ?
tGPU (mm²) | 44.45 | ? | ? | ?
SoC (mm²) | 96.77 | ? | ? | ?
IOE (mm²) | 44.45 | ? | ? | ?
Total (mm²) | 252.15 | ? | ? | ?



Intel Core Ultra 100 - Meteor Lake



As mentioned by Tom's Hardware, TSMC will manufacture the I/O, SoC, and GPU tiles. That means Intel will manufacture only the CPU tile and the Foveros base tile. (Notably, Intel calls the I/O tile an 'I/O Expander,' hence the IOE moniker.)



 

Attachments

  • PantherLake.png
  • LNL.png

Abwx

Lifer
Apr 2, 2011
11,612
4,469
136
It's best to leave out conspiracy theories, especially when the 9950X does better in CB2024 MT, scoring 2305 at 200 watts while Intel requires 279 watts to score 2355.

You also have to keep in mind that CB2024 was developed in collaboration with multiple vendors, especially since it's the first Cinebench with proper ARM support and the renderer has changed from Pro Render to Redshift.

What conspiracy theory are you talking about?
It's documented that Cinema 4D/Cinebench is foremost optimised for Intel, and for a reason: it uses Intel's whole optimisation suite, which targets Intel CPUs first, as acknowledged by Maxon themselves.
Could you define your use of the term “doctored?” What trick is happening?

They surely won't put their sponsor at a disadvantage. I guess all that software with its multiple licences isn't given away for free, and as pointed out by a hardware.fr reviewer, that software has flags where the CPU brand is checked before a given code path is taken.
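For readers unfamiliar with what such a brand-gated code path looks like, here is a minimal C sketch of a vendor-string dispatch of the kind described above. It is purely illustrative, not Maxon's or Intel's actual code, and the render_* function names are made up.

```c
/* Minimal sketch of a vendor-string dispatch, illustrative only.
   render_generic/render_optimized are hypothetical stand-ins. */
#include <cpuid.h>
#include <stdio.h>
#include <string.h>

static void render_generic(void)   { puts("baseline code path"); }
static void render_optimized(void) { puts("hand-tuned code path"); }

int main(void)
{
    unsigned int eax, ebx, ecx, edx;
    char vendor[13] = {0};

    if (!__get_cpuid(0, &eax, &ebx, &ecx, &edx))
        return 1;

    /* CPUID leaf 0 returns the vendor string in EBX, EDX, ECX. */
    memcpy(vendor + 0, &ebx, 4);
    memcpy(vendor + 4, &edx, 4);
    memcpy(vendor + 8, &ecx, 4);

    /* Gating the fast path on the brand string rather than on the
       actual feature bits is the practice being criticised above. */
    if (strcmp(vendor, "GenuineIntel") == 0)
        render_optimized();
    else
        render_generic();

    return 0;
}
```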


 

Thibsie

Senior member
Apr 25, 2017
913
1,019
136
What conspiracy theory are you talking about?
It's documented that Cinema 4D/Cinebench is foremost optimised for Intel, and for a reason: it uses Intel's whole optimisation suite, which targets Intel CPUs first, as acknowledged by Maxon themselves.


They surely won't put their sponsor at a disadvantage. I guess all that software with its multiple licences isn't given away for free, and as pointed out by a hardware.fr reviewer, that software has flags where the CPU brand is checked before a given code path is taken.


That's how ICC behaved. Is this still the case with the modern Intel compiler?
 

SiliconFly

Golden Member
Mar 10, 2023
1,651
996
96
Could you define your use of the term “doctored?” What trick is happening?
C'mon, if Intel wins in a bench, the bench sucks because it is doctored or optimised for Intel. But if Intel loses, Intel sucks! No two ways about it.

Feels like the return of MLID.

Dude, he says they use Intel's compiler & tools to optimise the performance of the benchmark suite, not to optimise the benchmark suite for the Intel platform. 🤦‍♂️

If Zen sucks in Cinebench ST, it's because Zen sucks at handling the Cinebench ST workload.

Like the Sysmark fiasco? XD
But when Zen loses in Cinebench ST, it's obvious Cinebench ST is doctored & optimised for Intel.
 
Reactions: reb0rn

Abwx

Lifer
Apr 2, 2011
11,612
4,469
136
That's how ICC behaved. Is this still the case with the modern Intel compiler?

The only change with modern ICC is that they now have to include a disclaimer stating that ICC is designed foremost for Intel CPUs and won't necessarily produce the best results on other brands' CPUs.

To get back to CB: it produces exactly the same AMD/Intel relative scores as POV-Ray, which enables AVX2 only for Intel, and as I mentioned, Blender, Corona, V-Ray, and undoubtedly Mental Ray all show the 9950X's ST IPC 12-17% higher than RPL's, while CB yields only 5% better ST IPC for Zen 5.

And last but not least, LNL, which trails Strix Point in CB R23, is miraculously ahead in CB 2024, exactly like ADL, which was on par with Zen 3 in CB R15 and was rescued by the 10% boost provided by the opportunistically released CB R20/R23.

In the end there's a pattern that is systematically the same and hence can't be the result of randomness.
 
Reactions: Thibsie

Nothingness

Diamond Member
Jul 3, 2013
3,137
2,153
136
Omg! I assumed you would get the full picture by looking at those instructions and what they mean. Those two are among the most common instructions and something a CISC can do in one instruction but a RISC can't. Also, when benchmarking general app performance like office, browsers, etc., these types of instructions are used more than int/fp.
A respected reviewer shows data on a very well known benchmark and you show two instructions used before a syscall? And you insist on having demonstrated that x86-64 needs fewer instructions in general? Really?

If you have any knowledge of programming, go play with https://godbolt.org/ and learn a thing or two about 64-bit Arm.
 
Reactions: KompuKare

SiliconFly

Golden Member
Mar 10, 2023
1,651
996
96
Is this still the case with the modern Intel compiler?
I typically prefer the MS C++ compiler (it generates excellent code). But if you really want to up the game and squeeze blood from a stone, it's got to be the Intel C++ compiler, which has more advanced optimisations.

AMD has its own C++ compiler too, I think. Sadly, hardly anyone uses it.
 

Nothingness

Diamond Member
Jul 3, 2013
3,137
2,153
136
Yep. I remember reading about this too. Once decoded into µops, the rest of the blocks typically function like a RISC CPU (for both Intel & AMD, I think). But how similar, I don't know. Maybe some forum member with better insight can shed some light.
I could explain but given how stubborn you are I will save my time. Go read Realworldtech.
 

Abwx

Lifer
Apr 2, 2011
11,612
4,469
136
If AMD's software support is bad vs Intel's, why is it Intel's fault? lmao. Intel MKL is one such example.
If AMD's support were that bad they wouldn't be ahead in the four most used renderers, as said ad nauseam: Corona, Blender, V-Ray and Mental Ray. Only CB and the AMD-crippling POV-Ray produce those mediocre results; if CB weren't biased it wouldn't produce the same results as a crippling application, it would be within the geomean of the four stated renderers.
 

SiliconFly

Golden Member
Mar 10, 2023
1,651
996
96
A respected reviewer shows data on a very well known benchmark and you show two instructions used before a syscall? And you insist on having demonstrated that x86-64 needs fewer instructions in general? Really?

If you have any knowledge of programming, go play with https://godbolt.org/ and learn a thing or two about 64-bit Arm.
How do you argue with someone who believes RISC & CISC code have the same number of instructions, when by definition CISC can encode many simple ops into one instruction? I also stated clearly that int and fp are not the only instructions used. I kindly recommend that we let go of this discussion.
 

MS_AT

Senior member
Jul 15, 2024
365
798
96
What conspiracy theory are you talking about?
It's documented that Cinema 4D/Cinebench is foremost optimised for Intel, and for a reason: it uses Intel's whole optimisation suite, which targets Intel CPUs first, as acknowledged by Maxon themselves.


They surely won't put their sponsor at a disadvantage. I guess all that software with its multiple licences isn't given away for free, and as pointed out by a hardware.fr reviewer, that software has flags where the CPU brand is checked before a given code path is taken.


It would make more sense to say R24 is skewed towards ARM, as it does math mostly on scalar values instead of vectors, so it plays more to ARM's strengths than to x64's. https://chipsandcheese.com/p/cinebench-2024-reviewing-the-benchmark?utm_source=publication-search But I guess it was done to ensure better portability with the least effort rather than out of some mischievous intent.
That's how ICC behaved. Is this still the case with the modern Intel compiler?
ICC is obsolete. Intel now uses ICX, which is based on Clang/LLVM, the same as AMD's AOCC. ICX is used by the Y-cruncher author to compile Y-cruncher, including the Zen 5-specific binary.
I typically prefer the MS C++ compiler (it generates excellent code). But if you really want to up the game and squeeze blood from a stone, it's got to be the Intel C++ compiler, which has more advanced optimisations.

AMD has its own C++ compiler too, I think. Sadly, hardly anyone uses it.
MSVC is objectively the worst of the big three C++ compilers (GCC, MSVC, Clang) when it comes to generating performant code... Why people use it when Clang is available directly in Visual Studio with two button clicks is still a puzzle to me.
A respected reviewer shows data on a very well known benchmark and you show two instructions used before a syscall? And you insist on having demonstrated that x86-64 needs fewer instructions in general? Really?

If you have any knowledge of programming, go play with https://godbolt.org/ and learn a thing or two about 64-bit Arm.
To be fair the full quote from C&C is this:
If I take the geometric mean of the difference in instructions executed, x86-64 and aarch64 are surprisingly close, with x86-64 executing ~1.17% more instructions. Loongson executes 10.6% more instructions than x86-64, a minor but still notable difference. In the floating point suite, Loongson requires 11.4% more instructions than x86-64. Interestingly, the geomean of difference in instructions executed for aarch64 and Loongarch64 is nearly the same, with Loongarch64 coming within a percent of aarch64.
In other words, in SPECint aarch64 has the advantage; in SPECfp, x64 has the advantage. Overall, ARM has the advantage. Still, I would approach these results with caution, as I doubt Chester looked at the generated assembly to also check the quality of the generated code. I would expect the 16 architectural GP registers to hurt x64 compared to aarch64 in SPECint, where it has to spill more often than ARM. In SPECfp, since this is usually more math-oriented code, wider SIMD registers could explain the x64 advantage. But once again, I am just guessing; I don't have access to the SPEC sources so I cannot compile them myself to check.
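To make that register-pressure guess concrete, here is a toy C sketch (my own example, not SPEC code) with more simultaneously live values than x86-64 has general-purpose registers. Comparing the -O2 output for x86-64 and AArch64 on https://godbolt.org/ will typically show stack spills only on the x86-64 side, though the exact outcome depends on the compiler version.

```c
/* Toy register-pressure example: ~23 values live across the loop
   (20 accumulators plus pointer, index and bound).  x86-64 has 16
   general-purpose registers, AArch64 has 31, so the x86-64 build
   typically has to spill some accumulators to the stack.  The XOR
   chain between accumulators is only there to keep compilers from
   vectorising the loop away. */
#include <stddef.h>

long many_live_values(const long *a, size_t n)
{
    long s0 = 0, s1 = 0, s2 = 0, s3 = 0, s4 = 0, s5 = 0, s6 = 0,
         s7 = 0, s8 = 0, s9 = 0, s10 = 0, s11 = 0, s12 = 0, s13 = 0,
         s14 = 0, s15 = 0, s16 = 0, s17 = 0, s18 = 0, s19 = 0;

    for (size_t i = 0; i + 20 <= n; i += 20) {
        s0  += a[i]      ^ s19;  s1  += a[i + 1]  ^ s0;
        s2  += a[i + 2]  ^ s1;   s3  += a[i + 3]  ^ s2;
        s4  += a[i + 4]  ^ s3;   s5  += a[i + 5]  ^ s4;
        s6  += a[i + 6]  ^ s5;   s7  += a[i + 7]  ^ s6;
        s8  += a[i + 8]  ^ s7;   s9  += a[i + 9]  ^ s8;
        s10 += a[i + 10] ^ s9;   s11 += a[i + 11] ^ s10;
        s12 += a[i + 12] ^ s11;  s13 += a[i + 13] ^ s12;
        s14 += a[i + 14] ^ s13;  s15 += a[i + 15] ^ s14;
        s16 += a[i + 16] ^ s15;  s17 += a[i + 17] ^ s16;
        s18 += a[i + 18] ^ s17;  s19 += a[i + 19] ^ s18;
    }

    return s0 ^ s1 ^ s2 ^ s3 ^ s4 ^ s5 ^ s6 ^ s7 ^ s8 ^ s9 ^
           s10 ^ s11 ^ s12 ^ s13 ^ s14 ^ s15 ^ s16 ^ s17 ^ s18 ^ s19;
}
```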
 

digitaldreamer

Junior Member
Mar 23, 2007
22
14
81
Snip -> Those two are among the most common instructions and something a CISC can do in one instruction but a RISC can't.
I don't believe that's a valid comparison, if performance is your goal.
Yes, some "CISC instructions" can do more (addressing modes, for example) where there may not be a single equivalent RISC instruction. But, the comparison isn't that clear cut.

A CISC instruction may take 2 clock cycles to completion, for example, whereas 2 RISC instructions that accomplish the same thing can take 1 clock cycle each, making the comparison moot. That's why the industry moved away from CISC principles and adopted the RISC architecture, as it simplified the hardware/logic design and the instructions could be pipelined without worrying about losing prefetch performance. It does put more emphasis on compiler design.
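A quick toy illustration of that memory-operand point (not taken from any benchmark): compiled at -O2, an x86-64 build can fold the load into the add, while an AArch64 build needs a separate load followed by the add. The assembly in the comments is typical output and will vary by compiler.

```c
/* CISC memory operand vs. RISC load+op, illustrative only.
   Typical -O2 output (compiler-dependent):
     x86-64:   mov  rax, rsi
               add  rax, QWORD PTR [rdi]   ; load folded into the add
               ret
     AArch64:  ldr  x0, [x0]               ; separate load...
               add  x0, x0, x1             ; ...then the add
               ret                                                  */
long add_from_memory(const long *p, long acc)
{
    return acc + *p;
}
```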

However, the two ISAs overlap so much today that the distinction is almost meaningless. So-called RISC designs, if you can call them that, are hardly simple anymore.
 

poke01

Platinum Member
Mar 8, 2022
2,581
3,409
106
It's documented that Cinema 4D/Cinebench is foremost optimised for Intel, and for a reason: it uses Intel's whole optimisation suite, which targets Intel CPUs first, as acknowledged by Maxon themselves.
Yes, maybe they used to, but not anymore; CB2024 is evidence of that. In any case it's one benchmark, but still a good one. For those who use Redshift or just want a plain performance outlook, it's a good benchmark.

CB2024 relies on scalar and FP work, and Chips and Cheese says it's a relatively good benchmark overall.

By itself, Cinebench 2024 is a moderate IPC benchmark with a sizeable instruction and data footprint. Code spills into L2, but the instruction stream is easier to follow than what we saw in games. Decoupled branch predictors can thus keep the frontend fed even in the face of L1i misses. On the data side, Cinebench 2024 spills out of L3 and requires a modest amount of DRAM bandwidth. High scheduler capacity across integer and FP operations help keep more memory operations in flight in the face of DRAM latency. In that sense, Cinebench 2024 can be seen as Cinebench R23 with more emphasis on DRAM performance.

In the end, Cinebench 2024 poses decent challenges to the frontend and backend. It has a more realistic instruction footprint than SPEC2017, which has no subtest with more than 12 L1i MPKI. Maxon has also addressed Cinebench R15 and R23’s small data-side footprint, which could be mostly contained by a 8 MB last level cache. High core count systems could be constrained by memory bandwidth, which happens across a lot of other well-threaded applications. These characteristics make Cinebench a decent benchmark. There’s area for improvement though. It could be a better stress test if it more heavily leverages vector execution. Hopefully the next version of Cinebench is better vectorized.
 

Abwx

Lifer
Apr 2, 2011
11,612
4,469
136
It would make more sense to say R24 is skewed towards ARM, as it does math mostly on scalar values instead of vectors, so it plays more to ARM's strengths than to x64's. https://chipsandcheese.com/p/cinebench-2024-reviewing-the-benchmark?utm_source=publication-search But I guess it was done to ensure better portability with the least effort rather than out of some mischievous intent.

ST comparisons :

CB R20 : HX370 23% faster than LNL 258V

CB R23 : HX370 7% faster than LNL 258V

CB 2024 : HX370 5% slower than LNL 258V

When the CPU is not good enough, just "update" the bench: R23 was released only one year after R20, using the same scene but with a vastly different AMD/Intel score ratio. This has all the characteristics of an emergency update.

Sorry, but those numbers speak for themselves, especially given that Zen 5 improved FP by 25% relative to Zen 4 while Intel's FP improvement from MTL to LNL was much lower, yet with CB 2024 it looks like the contrary.
 

jdubs03

Golden Member
Oct 1, 2013
1,079
746
136
ST comparisons :

CB R20 : HX370 23% faster than LNL 258V

CB R23 : HX370 7% faster than LNL 258V

CB 2024 : HX370 5% slower than LNL 258V

When the CPU is not good enough, just "update" the bench: R23 was released only one year after R20, using the same scene but with a vastly different AMD/Intel score ratio. This has all the characteristics of an emergency update.

Sorry, but those numbers speak for themselves, especially given that Zen 5 improved FP by 25% relative to Zen 4 while Intel's FP improvement from MTL to LNL was much lower, yet with CB 2024 it looks like the contrary.
Maybe CB20 just wasn’t that good of a benchmark.
 

MS_AT

Senior member
Jul 15, 2024
365
798
96
ST comparisons :

CB R20 : HX370 23% faster than LNL 258V

CB R23 : HX370 7% faster than LNL 258V

CB 2024 : HX370 5% slower than LNL 258V

When the CPU is not good enough, just "update" the bench: R23 was released only one year after R20, using the same scene but with a vastly different AMD/Intel score ratio.

Sorry, but those numbers speak for themselves, especially given that Zen 5 improved FP by 25% relative to Zen 4 while Intel's FP improvement from MTL to LNL was much lower, yet with CB 2024 it looks like the contrary.
I really invite you to read the C&C article I linked, which explains why the benchmark behaves differently, which instructions it uses, and how its dependence on the memory subsystem is now more visible.
 

SiliconFly

Golden Member
Mar 10, 2023
1,651
996
96
I don't believe that's a valid comparison, if performance is your goal.
Yes, some "CISC instructions" can do more (addressing modes, for example) where there may not be a single equivalent RISC instruction. But, the comparison isn't that clear cut.

A CISC instruction may take 2 clock cycles to completion, for example, whereas 2 RISC instructions that accomplish the same thing can take 1 clock cycle each, making the comparison moot. That's why the industry moved away from CISC principles and adopted the RISC architecture, as it simplified the hardware/logic design and the instructions could be pipelined without worrying about losing prefetch performance. It does put more emphasis on compiler design.

However, the two ISAs overlap so much today that the distinction is almost meaningless. So-called RISC designs, if you can call them that, are hardly simple anymore.
True. Even Jim Keller said the same. Comparing them is pretty much meaningless these days.
 
Reactions: digitaldreamer

Abwx

Lifer
Apr 2, 2011
11,612
4,469
136
I really invite you to read the C&C article I linked, which explains why the benchmark behaves differently, which instructions it uses, and how its dependence on the memory subsystem is now more visible.

I did read it; it explains nothing about the subject. For that they should have profiled the actually executed code on both Intel and AMD.

RPL had a 5-6% advantage over Zen 4 in perf/Hz for CB R23.

Zen 5 improved FP IPC by 25% while Intel's latest LNL improved FP by only 12% or so, yet at the end of the day the Zen 5/LNL difference in CB 2024 is the same as the RPL/Zen 4 difference in CB R23.

And this also doesn't explain why R23 improved Intel's score relative to AMD by 10% over R20 despite using the same scene; that's the biggest smoking gun so far.

If these were truly random variations they would go both ways, but obviously they always go in the same direction, and as already pointed out, CB is the only renderer out of the five that produces those results, POV-Ray excluded since we know it is biased ISA-wise.

Edit: CB 2024 works only if the CPU supports AVX2; that's mentioned on Maxon's site.
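For reference, a runtime gate like that AVX2 requirement can be a couple of lines; this is only a generic sketch using the GCC/Clang __builtin_cpu_supports builtin, not Maxon's actual check.

```c
/* Generic runtime AVX2 gate, illustrative only (GCC/Clang builtin). */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    if (!__builtin_cpu_supports("avx2")) {
        fputs("This program requires a CPU with AVX2 support.\n", stderr);
        return EXIT_FAILURE;
    }

    puts("AVX2 present, continuing.");
    return EXIT_SUCCESS;
}
```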
 

poke01

Platinum Member
Mar 8, 2022
2,581
3,409
106
Just because AMD had a 25% improvement in FP (which is an average of a bunch of subtests in SPEC) doesn't mean that you will get a 25% improvement in CB2024. The score in CB2024 is also affected by DRAM bandwidth.
 
Reactions: MoistOintment

Abwx

Lifer
Apr 2, 2011
11,612
4,469
136
Just because AMD had a 25% improvement in FP (which is an average of a bunch of subtests in SPEC) doesn't mean that you will get a 25% improvement in CB2024. The score in CB2024 is also affected by DRAM bandwidth.
Sure, but I wouldn't expect it to get no more improvement than a CPU that improved by only 12% in the same SPEC tests. Anyway, while waiting for more explanations,
and to get back on topic, here's WCCF's take on the overclocked-to-the-moon 285K; no doubt WCCF's writers would make very good marketers:
new Cinebench testing screenshots have revealed that the CPU has managed to clock in power draw levels at around 360W-370W under extreme testing, exceeding the 250W power profile........which shows that the Core Ultra 9 285K still has immense capabilities embedded that are yet to be tapped.

 
Reactions: lightmanek

Nothingness

Diamond Member
Jul 3, 2013
3,137
2,153
136
To be fair the full quote from C&C is this:

In other words, in SPECint aarch64 has the advantage; in SPECfp, x64 has the advantage. Overall, ARM has the advantage. Still, I would approach these results with caution, as I doubt Chester looked at the generated assembly to also check the quality of the generated code. I would expect the 16 architectural GP registers to hurt x64 compared to aarch64 in SPECint, where it has to spill more often than ARM. In SPECfp, since this is usually more math-oriented code, wider SIMD registers could explain the x64 advantage. But once again, I am just guessing; I don't have access to the SPEC sources so I cannot compile them myself to check.
I clearly specified SPECint in my original quote 😉 I was not trying to say that AArch64 was better or equal in all cases, contrary to the OP, who claimed that x86-64 requires fewer instructions. This is simply not always correct, and the C&C results clearly demonstrate it.

Even in SPECint you can see where x86-64 wins the instruction count comparison significantly: x264. It's due to the use of wider vectors, as that workload is partly autovectorized. The same applies to SPECfp as you correctly guessed.

Basically my point is that x86-64 and AArch64 use about the same number of instructions on integer workloads, contrary to what some uneducated guys claim. OTOH, as I already wrote in this very forum, the dynamic code size of x86-64 is ~10% lower (I measured that a dozen years ago on SPEC 2006 gcc; this might have changed a bit since, as compilers have gotten better, especially for AArch64).

Again, I went off topic. It might be interesting to create a technical thread comparing ISAs (including RISC-V). We could look at code generated with Godbolt (sorry, I can't show code generated for SPEC, though of course we could use open-source variants of some of the subtests). Thing is, if it's just to read that making a syscall requires one more instruction on AArch64 (2 vs 3), I don't really see the point; that's utterly meaningless.
 

Nothingness

Diamond Member
Jul 3, 2013
3,137
2,153
136
However, the two ISAs overlap so much today that the distinction is almost meaningless. So-called RISC designs, if you can call them that, are hardly simple anymore.
You made a clear exposition, thanks 😀

I would add that AArch64 is not that RISCy anymore. The architecture manual is more than 14k pages long and there are hundreds of instructions and dozens of extensions. Though it obviously has roots in RISC principles (a single instruction-encoding size, three operands, no memory+operation instructions), it has gone way beyond that (and one could even say a bit too far).

The main existing RISC ISA is RISC-V (though they went too far in the RISC approach IMHO and had to add some extensions to fill some of the blatant holes).
 

DavidC1

Golden Member
Dec 29, 2023
1,211
1,932
96
Skymont doesn't have a uop cache, but it does still decode instructions into micro ops like P6.
C&C doesn't have everything correct, and they also miss things (such as the existence of the OD-ILD; heck, they don't even mention it!). Since Silvermont had macro-op execution, it would have been quite a change to go back to micro-op execution in its successors. Agner Fog's x86 optimization manual implies even Tremont keeps the same scheme.
 

SiliconFly

Golden Member
Mar 10, 2023
1,651
996
96
I clearly specified SPECint in my original quote 😉 I was not trying to say that AArch64 was better or equal in all cases, contrary to the OP, who claimed that x86-64 requires fewer instructions. This is simply not always correct, and the C&C results clearly demonstrate it.

Even in SPECint you can see where x86-64 wins the instruction count comparison significantly: x264. It's due to the use of wider vectors, as that workload is partly autovectorized. The same applies to SPECfp as you correctly guessed.

Basically my point is that x86-64 and AArch64 use about the same number of instructions on integer workloads, contrary to what some uneducated guys claim. OTOH, as I already wrote in this very forum, the dynamic code size of x86-64 is ~10% lower (I measured that a dozen years ago on SPEC 2006 gcc; this might have changed a bit since, as compilers have gotten better, especially for AArch64).

Again, I went off topic. It might be interesting to create a technical thread comparing ISAs (including RISC-V). We could look at code generated with Godbolt (sorry, I can't show code generated for SPEC, though of course we could use open-source variants of some of the subtests). Thing is, if it's just to read that making a syscall requires one more instruction on AArch64 (2 vs 3), I don't really see the point; that's utterly meaningless.
"Uneducated guys claim"? Really? Throwing insults when you don't have the knowledge to refute an argument in a proper, methodical way sounds way too desperate.

I told you twice to let this argument go, because it was getting meaningless and starting to go in the wrong direction. Now this!!!
 