Recovering RAID 0 drives from an old laptop

mattpegher

Platinum Member
Jun 18, 2006
2,207
0
71
I have an old Sager that was sold with 2 HDDs in RAID 0. The laptop is toast. I backed up all my old family pics, but the backup drive (old tech) won't read now. I can pull both drives. Is it possible to put them in an external RAID tower and recover the files?
I was looking at the StarTech.com 4 Bay External Hard Drive Array RAID Tower, eSATA USB 3.0 Enclosure. https://jet.com/product/product/295...e62360d4b43e&gclid=CKr635fsl8gCFQcTHwoddIMNyg
But before I buy I wanted some opinions.

Thanks
 

aigomorla

CPU, Cases&Cooling Mod PC Gaming Mod Elite Member
Super Moderator
Sep 28, 2005
20,886
3,233
126
i don't think u fully understand how raid 0 works.

the raid setup is mostly on the controller.
without the same identical controller your laptop used, your data is nearly impossible to recover.

you're better off trying to recover the data off the backup drive, regardless of it being old tech.
 

CiPHER

Senior member
Mar 5, 2015
226
1
36
i don't think u fully understand how raid 0 works.

the raid setup is mostly on the controller.
Euuhhh, that is extremely rare, and actually i know of no controller that does this. I only know of some non-configurable hardware RAID that uses static RAID settings, like jumpers and a fixed stripesize and offset, and thus does not need to store configuration data.

Almost all RAID implementations - software and hardware - store the RAID configuration in a metadata sector at the end of each harddrive. Linux mdraid is one exception and stores it at the beginning of each harddrive.

without the same identical controller which your laptop used, your data is nearly impossible to recover.
If the configuration data is stored on the controller, then a similar controller will not help, will it?

Since the data is most likely stored on the disks themselves, it will actually work if you buy/use the same kind of RAID engine.

But i suspect that with the OP there is no hardware RAID, just FakeRAID with drivers. In many cases you can boot Linux/BSD, whose pseudo-RAID drivers detect foreign RAID formats and re-implement them with their own software RAID engine. Then you can simply mount the volume and copy the data off it.

Even if you do not have access to the RAID metadata, you can guess the disk order, offset and stripesize. This can be done with simple tools.

If you want, mattpegher, i am willing to spend my time to try to fix this for you. All you need are two spare/empty harddrives of at least the same capacity, or one harddrive with at least double the capacity. First you would use ddrescue to copy the entire drive contents to files on the single HDD, or to the raw device nodes of both HDDs, then use the software RAID engine of a proper UNIX operating system such as BSD.
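The imaging step can be sketched in Python as a simplified stand-in for ddrescue (no retry passes or map file, so use the real ddrescue on a failing drive; the device paths in the comment are placeholders):

```python
def image_drive(src_path, dst_path, block_size=1024 * 1024):
    """Copy a device (or file) sector-by-sector to an image file.

    A minimal stand-in for ddrescue: no retry passes and no map file,
    so on drives with bad sectors use the real tool instead.
    """
    copied = 0
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        while True:
            chunk = src.read(block_size)
            if not chunk:
                break
            dst.write(chunk)
            copied += len(chunk)
    return copied

# Example (paths are placeholders for the real device nodes):
# image_drive("/dev/ada1", "/backup/disk1.img")
```

Working from such 1:1 images, rather than the original disks, means a mistake during reassembly costs nothing.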

You will need to provide SSH access to the machine so i can log in remotely. It is not very difficult to set up, but you do need to know how to forward port 22 to the test computer, and you will need to create a USB stick or LiveCD to boot the test computer with both the RAID disks and the empty disk(s) connected to it. After copying, i probably only need a few minutes to get you access to your data again.

Cheers mate!
 

MongGrel

Lifer
Dec 3, 2013
38,751
3,068
121
Something sounds odd to me really.

RAID 0 normally presents as one large spanned drive to begin with; you wouldn't be backing up from one drive to the other.

I'm not saying recovery is not doable, just had always thought unlikely.

"RAID 0 consists of striping, without mirroring or parity. The capacity of a RAID 0 volume is the sum of the capacities of the disks in the set, the same as with a spanned volume. There is no added redundancy for handling disk failures, just as with a spanned volume. Thus, failure of one disk causes the loss of the entire RAID 0 volume, with reduced possibilities of data recovery when compared to a broken spanned volume. Striping distributes the contents of files roughly equally among all disks in the set, which makes concurrent read or write operations on the multiple disks almost inevitable and results in performance improvements. The concurrent operations make the throughput of most read and write operations equal to the throughput of one disk multiplied by the number of disks. Increased throughput is the big benefit of RAID 0 versus spanned volumes."

Have had a few on here for many years, using a small hardware RAID 0 with a couple of SSDs and a separate larger hardware array with 4 HDDs in RAID 10.

I'm a long way from being the biggest expert in the world, I'll be the first to admit.
 
Last edited:

CiPHER

Senior member
Mar 5, 2015
226
1
36
I don't understand what you mean? Why would recovery be unlikely? What do you mean with 'backing from one to the other'?

RAID 0 is not spanning - spanning is concatenation, also called JBOD, though that name can refer to individual disks without RAID as well, so the term JBOD is ambiguous: it can mean two different things.

So what is RAID 0? A normal disk reads like LBA0 = sector 0, LBA1 = sector 1, LBA2 = sector 2, and so on. With RAID 0, LBA0-7 can be on disk1, LBA8-15 on disk2, LBA16-23 on disk1 again, and so on - a 'round robin' or alternating pattern with just 2 disks. That is why striping - RAID 0 - is also called interleaving: it interleaves the disks with each other.
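That interleaving is a fixed arithmetic scheme, which a few lines of Python make concrete (a sketch using the example's 8-sector stripes and 2 disks; disks are 0-indexed here, where the text says disk1/disk2):

```python
def raid0_locate(lba, num_disks=2, sectors_per_stripe=8):
    """Map a logical LBA to (disk index, sector on that disk) for RAID 0."""
    stripe = lba // sectors_per_stripe   # which stripe the LBA falls in
    disk = stripe % num_disks            # stripes alternate across disks
    local = (stripe // num_disks) * sectors_per_stripe + lba % sectors_per_stripe
    return disk, local

# Matches the pattern above: LBA0-7 on disk index 0, LBA8-15 on disk
# index 1, LBA16-23 back on disk index 0:
print(raid0_locate(0))    # (0, 0)
print(raid0_locate(8))    # (1, 0)
print(raid0_locate(16))   # (0, 8)
```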

Now about the 'recovery' part. You can use any software RAID engine from Linux or BSD to implement ANY hardware or software RAID no matter what RAID engine it is. Problem is: how do you know the stripesize? How do you know the disk order (disk3, disk1, disk2)? How do you know the offset (this one is for advanced use)? And finally you need to know the RAID level, but usually you know that.

Because RAID is a fixed scheme of where data is distributed, all you need to know is this information:

  • RAID level
  • RAID stripesize (JBOD and mirroring are exempt)
  • Disk order
  • Offset

If you have all 4 pieces of information, you are good to go. Any proper RAID engine can implement any RAID array contained on the disks. So your Intel RAID or your Areca Hardware RAID can just be read by Linux software RAID if you manage to configure it properly with the 4 pieces of information mentioned above.

If you do it right, you use 1:1 backups or you perform read-only operations, so you can test whether it works. If you do not know the stripesize, then guess! If you do not know the disk order, then guess! There can only be so many combinations. If one finally yields a partition table and it looks good, then this must be the right combination. That is what i mean with recovery.... this is how i did it so many times on BSD UNIX.
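That guess-and-check loop can be sketched in Python. This is a hedged demo on synthetic data: the 'NTFS' magic position and the candidate stripesizes are assumptions for illustration, and a real recovery would read from 1:1 image files instead of in-memory byte strings.

```python
import itertools

BOOT_SIG = b"\x55\xaa"   # bytes 510-511 of a valid MBR sector

def split_raid0(volume, stripesize, ndisks=2):
    """Stripe a byte string across ndisks, RAID 0 style (for the demo)."""
    disks = [bytearray() for _ in range(ndisks)]
    for i in range(0, len(volume), stripesize):
        disks[(i // stripesize) % ndisks] += volume[i:i + stripesize]
    return [bytes(d) for d in disks]

def reassemble(disks, stripesize):
    """Interleave equal-size members back into one logical volume."""
    out = bytearray()
    for s in range(len(disks[0]) // stripesize):
        for d in disks:
            out += d[s * stripesize:(s + 1) * stripesize]
    return bytes(out)

def guess_layout(disks, candidates=(4096, 8192, 16384, 32768, 65536, 131072)):
    """Try every disk order and stripesize; accept the first combination
    that shows both the MBR signature and a filesystem magic deeper in
    (sector 0 alone sits in the first stripe for every candidate size,
    so by itself it cannot distinguish stripesizes)."""
    for order in itertools.permutations(range(len(disks))):
        for ss in candidates:
            vol = reassemble([disks[i] for i in order], ss)
            if vol[510:512] == BOOT_SIG and vol[200000:200004] == b"NTFS":
                return order, ss
    return None

# Synthetic demo: a 1 MiB "volume" with an MBR-like sector 0 and an
# NTFS-like magic at byte 200000, striped at 64K across two disks that
# are then handed over in the wrong order:
volume = bytearray(1024 * 1024)
volume[510:512] = BOOT_SIG
volume[200000:200004] = b"NTFS"
d1, d2 = split_raid0(bytes(volume), 65536)
print(guess_layout([d2, d1]))  # ((1, 0), 65536): order and stripesize found
```

With 2 disks and half a dozen plausible stripesizes there are only a dozen combinations, which is why guessing is practical.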

Finally: in the part you quoted, there are some inaccuracies:

  • Capacity of RAID 0 is not the sum of the capacities, but the number of disks times the capacity of the smallest disk; what you quoted is only true if the disks are the same size. If not, only the smallest disk counts: a RAID 0 of a 1TB and a 4TB disk means 2TB total capacity, not 5TB.
  • Failure of one disk in a RAID 0 or spanned/concat/JBOD array does not mean loss of the entire volume. It means your volume will have holes in it, making data recovery very hard - but not impossible! Especially with spanning/concat/JBOD you only miss the first, middle, or last portion, and the surviving parts can actually be recovered quite well.
  • The last part of your quote appears to imply that only sequential performance ('throughput') is increased, while a properly configured RAID 0 will also double random IOps and halve effective latency (not absolute latency) - assuming a RAID 0 of 2 disks. Technically the quote is not incorrect, but it may be read as something that is. Many people believe RAID 0 only improves MB/s and not IOps. This is untrue, except for improperly configured arrays. The one exception is blocking random reads, which RAID 0 cannot accelerate. This is the reason why SSDs stay at roughly 20MB/s random 4K performance, and this is the same for almost all SSDs - because, as you probably know, SSDs are internally a RAID 0 of 16 NAND devices (a RAID 0 of 16 'disks'), or use RAID 4 (Intel) or RAID 5 (Crucial, Samsung) for added protection against unreadable pages ('bad sectors') to improve uBER.
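The capacity rule from the first bullet, as a quick Python sketch (sizes in TB):

```python
def raid0_capacity(disk_sizes):
    """RAID 0 capacity = number of disks x smallest disk, not the sum."""
    return len(disk_sizes) * min(disk_sizes)

print(raid0_capacity([1, 4]))  # 2  (a 1TB + 4TB pair gives 2TB, not 5TB)
print(raid0_capacity([2, 2]))  # 4  (equal disks: same as the sum)
```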
 

MongGrel

Lifer
Dec 3, 2013
38,751
3,068
121
So your Intel RAID or your Areca Hardware RAID can just be read by Linux software RAID if you manage to configure it properly with the 4 pieces of information mentioned above.

I'm pretty curious as to how you know I have an Areca RAID card, more than anything.

I did have it in my sig at one time, but it seems a pretty minor thing to remember.

:biggrin:

At any rate, if you can rebuild his busted RAID 0 remotely, I'd be impressed.

It's not a typical thing that I'm aware of, especially if one of the drives will not read.
 
Last edited:

CiPHER

Senior member
Mar 5, 2015
226
1
36
RAID 0 is super simple, so i am pretty sure i can. Unless some other things are going on - such as corruption so severe that there is no partition table any longer, or lots of bad sectors - that would complicate the recovery. Just recovering RAID with the BSD UNIX software RAID engine is pretty simple, actually. And yes, i can do it remotely if the person provides some assistance, like port forwarding, connecting all the disks, and booting the system with a suitable operating system (BSD UNIX).

And Areca is just one popular hardware RAID vendor - i did not know you owned one. I own several too, but do not use them any longer. ZFS is far superior to any RAID.
 

RecoveryForce

Member
Feb 12, 2015
117
2
46
www.recoveryforce.com
If your files are important, you should get a full sector-by-sector clone of the drives, just to be sure that you don't make a mistake and mess up the originals.

But, you should be able to easily recover the data from a RAID 0 stripe with almost any data recovery program. I tend to recommend R-Studio, whose demo is free, but you would have to pay for the license to copy the files out.

If you are looking for free, you can try http://www.freeraidrecovery.com/

I, too, can provide remote assistance, if you'd like. The only hiccup might be if the RAID controller has a hidden offset on one or both of the drives. All in all, this should be a stupid simple recovery, assuming that the drives are healthy.
 

CiPHER

Senior member
Mar 5, 2015
226
1
36
Indeed, a 1:1 clone is what I would propose as well; either to empty disks of at least the same size, or to simple files that can act as virtual disks with 'mdconfig'.

But it is possible without one as well, using read-only RAID 0 initialisation. This would allow scanning for the partition table and trying to mount the (NTFS) filesystem read-only. Thus, nothing would be written to the drives, and it should be safe if executed properly.
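A read-only check like that can be sketched in Python over 1:1 image files (the filenames in the comment are examples): reassemble the stripes in memory and look for the 0x55AA partition-table signature in sector 0, without ever writing to the sources.

```python
def read_raid0(image_paths, stripesize, offset, length):
    """Read `length` bytes at logical byte `offset` from a RAID 0 set of
    member image files, opened strictly read-only."""
    out = bytearray()
    files = [open(p, "rb") for p in image_paths]
    try:
        pos = offset
        while len(out) < length:
            stripe = pos // stripesize
            f = files[stripe % len(files)]          # member holding this stripe
            local = (stripe // len(files)) * stripesize + pos % stripesize
            f.seek(local)
            take = min(stripesize - pos % stripesize, length - len(out))
            out += f.read(take)
            pos += take
    finally:
        for f in files:
            f.close()
    return bytes(out)

# Sector 0 of the logical volume should end in the MBR signature:
# sector0 = read_raid0(["disk1.img", "disk2.img"], 65536, 0, 512)
# assert sector0[510:512] == b"\x55\xaa"
```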
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
Being old, the drives shouldn't be too big. You can probably make images to any modern external HDD, before going further.

Then, is it an Intel RAID? If the laptop has an Intel chipset, probably. There's a good chance you could add the drives to spare Intel ports on a system with Intel RAID support and the Iastor driver installed, and it may "just work." Do clone each drive before trying anything else, though, just in case.
 

aigomorla

CPU, Cases&Cooling Mod PC Gaming Mod Elite Member
Super Moderator
Sep 28, 2005
20,886
3,233
126
Euuhhh that is extremely rare and actually i know of no controller that does this. I only know about some hardware RAID that is not configurable that uses static RAID settings like jumpers and fixed stripesize and offset, and thus do not need to store configuration data.

Almost all RAID implementations - software and hardware - store the RAID data in the metadata sector at the end of each harddrive. Linux mdraid is one exception and stores it on the beginning of each harddrive.

Cheers mate!

really?

Cuz every time i have had a raid controller die and used another raid controller, it always makes me reinitialize the disks and wipe the data.

There was no feature to rebuild the raid array unless i was using the exact same controller.

If you care to tell me and the OP how it's done step by step, i would also greatly appreciate it, as i have a few disks on a raid controller which i am not too happy with and would like to replace.

And even between Intel chipsets - ie. ICH9R vs ICH10R vs ICH11R - i have had issues moving disks over and re-initializing the raid array.

And let's not even go into the dedicated controllers... especially ones which use an Intel IOP series like the Arecas...
In fact, my biggest headache was moving an Areca over to an LSI.

At any rate, if you can rebuild his busted RAID 0 remotely, I'd be impressed.

It's not a typical thing that I'm aware of, especially if one of the drives will not read.

I would also love to see this.... because i am also an owner of an Areca card, and well, i KNOW YOUR PAIN!
 
Last edited:

AlienTech

Member
Apr 29, 2015
117
0
0
aigomorla, I have had the same problems, although it might have been because the software was not smart enough and we could not configure anything on it. I think what they are saying is that BSD now has tools that let you specify all the raid options to get the data off the drives. That's the important part: if you have such a program, it is not difficult. Not having a program to do it means the data on those drives is gone.
 

CiPHER

Senior member
Mar 5, 2015
226
1
36
really?
Cuz everytime i have had a raid controller die, and used another raid controller, it always makes me reinitialize the disk and wipe the data.
Well, different firmware could be an obstacle as well. Particularly for Intel RST RAID (regular Intel onboard RAID with Windows-only drivers) this is known to be the case: the metadata format has changed many times. For Areca i have no such knowledge; i have upgraded from firmware 1.37 to 1.43 and performed the same procedure for a customer without problems, though i have no detailed knowledge about newer firmware revisions.

If you care to tell me and the OP how its done step by step i would also greatly appreciate it, as i have a few disks on a raid controller which i am not too happy with and would like to replace.
Well first, there are two ways: automatic and manual.

Automatic RAID recovery
With automatic i mean that the new RAID software has knowledge of the metadata format that stores the RAID configuration data - usually in the last sector of each harddrive. Linux md-raid is one example; FreeBSD's geom_raid pseudo-RAID driver is another. It has knowledge of several foreign RAID formats. I copy-paste the manual page:

Code:
SUPPORTED METADATA FORMATS
     The GEOM RAID class follows a modular design, allowing different metadata
     formats to be used.  Support is currently implemented for the following
     formats:

     DDF    The format defined by the SNIA Common RAID Disk Data Format v2.0
            specification.  Used by some Adaptec RAID BIOSes and some hardware
            RAID controllers.  Because of high format flexibility different
            implementations support different set of features and have differ-
            ent on-disk metadata layouts.  To provide compatibility, the GEOM
            RAID class mimics capabilities of the first detected DDF array.
            Respecting that, it may support different number of disks per vol-
            ume, volumes per array, partitions per disk, etc.  The following
            configurations are supported: RAID0 (2+ disks), RAID1 (2+ disks),
            RAID1E (3+ disks), RAID3 (3+ disks), RAID4 (3+ disks), RAID5 (3+
            disks), RAID5E (4+ disks), RAID5EE (4+ disks), RAID5R (3+ disks),
            RAID6 (4+ disks), RAIDMDF (4+ disks), RAID10 (4+ disks), SINGLE (1
            disk), CONCAT (2+ disks).

            Format supports two options "BE" and "LE", that mean big-endian
            byte order defined by specification (default) and little-endian
            used by some Adaptec controllers.

     Intel  The format used by Intel RAID BIOS.  Supports up to two volumes
            per array.  Supports configurations: RAID0 (2+ disks), RAID1 (2
            disks), RAID5 (3+ disks), RAID10 (4 disks).  Configurations not
            supported by Intel RAID BIOS, but enforceable on your own risk:
            RAID1 (3+ disks), RAID1E (3+ disks), RAID10 (6+ disks).

     JMicron
            The format used by JMicron RAID BIOS.  Supports one volume per
            array.  Supports configurations: RAID0 (2+ disks), RAID1 (2
            disks), RAID10 (4 disks), CONCAT (2+ disks).  Configurations not
            supported by JMicron RAID BIOS, but enforceable on your own risk:
            RAID1 (3+ disks), RAID1E (3+ disks), RAID10 (6+ disks), RAID5 (3+
            disks).

     NVIDIA
            The format used by NVIDIA MediaShield RAID BIOS.  Supports one
            volume per array.  Supports configurations: RAID0 (2+ disks),
            RAID1 (2 disks), RAID5 (3+ disks), RAID10 (4+ disks), SINGLE (1
            disk), CONCAT (2+ disks).  Configurations not supported by NVIDIA
            MediaShield RAID BIOS, but enforceable on your own risk: RAID1 (3+
            disks).

     Promise
            The format used by Promise and AMD/ATI RAID BIOSes.  Supports mul-
            tiple volumes per array.  Each disk can be split to be used by up
            to two arbitrary volumes.  Supports configurations: RAID0 (2+
            disks), RAID1 (2 disks), RAID5 (3+ disks), RAID10 (4 disks), SIN-
            GLE (1 disk), CONCAT (2+ disks).  Configurations not supported by
            RAID BIOSes, but enforceable on your own risk: RAID1 (3+ disks),
            RAID10 (6+ disks).

     SiI    The format used by SiliconImage RAID BIOS.  Supports one volume
            per array.  Supports configurations: RAID0 (2+ disks), RAID1 (2
            disks), RAID5 (3+ disks), RAID10 (4 disks), SINGLE (1 disk), CON-
            CAT (2+ disks).  Configurations not supported by SiliconImage RAID
            BIOS, but enforceable on your own risk: RAID1 (3+ disks), RAID10
            (6+ disks).
This means that the geom_raid module has knowledge about the mentioned metadata formats and can extract the required RAID configuration data from an array created with such firmware or driver software.

Similar functionality exists in Linux, so the md-raid driver can use its own software RAID engine to mimic the RAID scheme used by other controllers/implementations. Often you can just boot Ubuntu Linux and mount your RAID volume by clicking on it in Nautilus. Easy does it!

This functionality is required if you use Linux and Windows in dual boot configuration on an Intel RAID (FakeRAID/PseudoRAID/DriverRAID) array.


Manual RAID recovery
Manual just means you 'guess' the configuration data. Assuming you know the RAID level and the offset is zero (LBA0), you only need to know the disk order (disk2, disk3, disk1, disk4) and the stripesize (128K, 64K, 32K, 16K, 8K). So try all combinations and see if one works.

But the trick is not to write to the disks, or to have a 1:1 clone backup so you can afford to make mistakes. On BSD UNIX, you can use gstripe with a read-only, manual configuration:

Code:
DESCRIPTION
     The gstripe utility is used for setting up a stripe on two or more disks.
     The striped device can be configured using two different methods:
     ``manual'' or ``automatic''.  When using the ``manual'' method, no meta-
     data are stored on the devices, so the striped device has to be config-
     ured by hand every time it is needed.  The ``automatic'' method uses on-
     disk metadata to detect devices.  Once devices are labeled, they will be
     automatically detected and configured.

     The first argument to gstripe indicates an action to be performed:

     create   Set up a striped device from the given devices with specified
              name.  This is the ``manual'' method and the stripe will not
              exist after a reboot (see DESCRIPTION above).  The kernel module
              geom_stripe.ko will be loaded if it is not loaded already.
In my words: manual means nothing will be written to the disks; automatic means 512-byte metadata sectors will be written to the end of each disk, overwriting existing data - which is not what you want when recovering existing data. The manual method is thus very safe, if you know what you are doing.

And even with difference in intel chipsets... ie.. ICH9R vs IC10R vs ICH11R i have had issues moving disks over and re initializing the raid array.
It is not the chipset that is the problem - it is the firmware. Thus you can also have problems between two ICH10R chipsets, because one has newer firmware which uses a newer metadata format. Again: the metadata is what stores the RAID configuration data - RAID level, stripesize, disk order, offset.

And lets not even go into the dedicated controllers... especially ones which use an Intel IOP series like the Areca's...
For Areca i generally use the manual method; works just great. Areca controllers do offer multiple 'RAID sets', which is kind of like partitioning the disks across several RAID schemes - the first portion a RAID 0 and the rest a RAID 5, for example. That would be a bit more tricky. But casual configurations such as RAID 0 are easy to recover just by guessing the stripesize and disk order - only so many combinations to try.

I would also love to see this.... because i am also an owner of an Areca card, and well, i KNOW YOUR PAIN!
Well, i am happy to provide this service for free. I would just ask that the data be worth my time and trouble, since my time is valuable - worth more than gold.
 

shady28

Platinum Member
Apr 11, 2004
2,520
397
126
i don't think u fully understand how raid 0 works.

the raid setup is mostly on the controller.
without the same identical controller your laptop used, your data is nearly impossible to recover.

you're better off trying to recover the data off the backup drive, regardless of it being old tech.

I'm not sure what some of the other posters are trying to say, but the above is in essence the only post here that's correct.

You have 2 choices :

1 - Try a "raid swap" to an *identical controller*. This may or may not work; things like your raid firmware level can affect this.

2 - Recover from your backup media


To do #1 with a reasonable chance of success, you'd likely need your old laptop back or an identical brand/model.

The idea is that you can take two identical machines, with identical RAID controllers, and move the RAID drives from one server into the same ports on the opposite server. I have done this successfully many, many times between servers that had the same type of hardware, but they were installed and tested with that in mind. Usually the swap is tolerant of different CPU core counts and RAM sizes, as long as the chipset and other components are the same.

Doing that on dissimilar hardware is likely to result in two drives that are ready to be formatted...

#2 is your best option.
 

CiPHER

Senior member
Mar 5, 2015
226
1
36
I'm not sure what some of the other posters are trying to say, but the above is in essence the only post here that's correct.
If you are not sure what the others are trying to say - how can you know they are incorrect?

You provided 2 options, i provided a 3rd, and it does not require an identical controller or a backup - which is not a recovery solution at all.
 

RecoveryForce

Member
Feb 12, 2015
117
2
46
www.recoveryforce.com
I, too, have recovered hundreds of RAIDs without the original RAID controller or system. It is so frustrating to see poor advice posted in threads like this and then to see people argue with experts when they try to correct their misconceptions.
 