Defragging SSD

Lipoly

Junior Member
Apr 16, 2012
7
0
0
Regarding this thread being closed by mod: http://forums.anandtech.com/showthread.php?t=2216605

Shmee said:
This thread continuing does more harm than good. The OP was probably just trolling anyways. If you have a specific question about SSD's or fragmentation, or whatever, go ahead and make a new thread. Closed.

I appreciate the need to close a thread filled w/ misleading information, but could you at least explain why it is harmful? Reading the OP's question, I see no indication of trolling other than a lack of follow-up. I am seeking info, not a fight.

Has it been beaten to death? Is there a proven conclusion? I apologize if so, but is it that hard to link to data that shows it (important when exercising the power to close a thread)? I don't think that is asking for too much.

My first post here on Anandtech and it got closed...did not expect that.
 

Pariah

Elite Member
Apr 16, 2000
7,357
20
81
With even the most basic of understandings of how SSD's work, you would know why there is no reason to defrag one. Once you understand that, it's completely irrelevant whether it is bad or not for the drive to defrag it, because you won't be doing it. Don't be lazy and go read up on SSD's. We shouldn't have to spoon feed you the most basic of concepts. If there are more specific questions after the basic research has been done, this is the place to ask.
 

razel

Platinum Member
May 14, 2002
2,337
90
101
You can find your answer when hardocp asked an Intel SSD engineer. If that's not enough, there's ZAPs sticky on the forum. If you still are not convinced, consider again what the Intel SSD engineer said along with other SSD experts such as Anand and Allyn.

If you're still not convinced, go ahead and defrag. No one is stopping you. At that point you'll need a lawyer, judge, or Bubba in the prison system to convince you.
 

Lipoly

Junior Member
Apr 16, 2012
7
0
0
Ok, thanks. I have looked up the basics of how an SSD works and what I found was simply:

SSDs have such a fast access time that moderate fragmentation is mostly inconsequential. (There is more to the story when they are accessed heavily while close to capacity.)

But what about massive fragmentation?

Either:

1. Trim/Garbage collection completely negates access time for fragmented files.
2. Trim/Garbage collection in some way defrags the SSD (seen this assertion many times).

If these are not universally true (I think they are not), why not defrag in CERTAIN circumstances if performance is of significant importance?

Not trying to start a flame, but please give some background other than "If you are not an idiot you would know that SSDs don't need a defrag."
 

Lipoly

Junior Member
Apr 16, 2012
7
0
0
You can find your answer when hardocp asked an Intel SSD engineer. If that's not enough, there's ZAPs sticky on the forum. If you still are not convinced, consider again what the Intel SSD engineer said along with other SSD experts such as Anand and Allyn.

If you're still not convinced, go ahead and defrag. No one is stopping you. At that point you'll need a lawyer, judge, or Bubba in the prison system to convince you.


Thank you razel...this is the information I was looking for. I don't understand why exactly, but based on info from industry experts, I am inclined to believe that you should not defrag an SSD...(ever?). THANKS!
 

readymix

Senior member
Jan 3, 2007
357
1
81
You can find your answer when hardocp asked an Intel SSD engineer. If that's not enough, there's ZAPs sticky on the forum. If you still are not convinced, consider again what the Intel SSD engineer said along with other SSD experts such as Anand and Allyn.

If you're still not convinced, go ahead and defrag. No one is stopping you. At that point you'll need a lawyer, judge, or Bubba in the prison system to convince you.


I read it too and won't suggest anyone do the unholy and defrag regularly. But the follow-up question at hardocp begs: why not occasionally?

Q. Do SSDs get slower as they get older?

A. This is a complicated issue. Multiple factors can contribute to performance degradation during the life span of a SSD. Most significant is the internal fragmentation of the data stored on the flash media.

Defragging is bad, but fragmentation is also bad.
 

groberts101

Golden Member
Mar 17, 2011
1,390
0
0
ok.. I'll jump in here. There are some truths, untruths, assumptions, and exaggerations in that(and now this) thread.

First off, everyone gets so caught up in all the old spiels about data being located randomly anyways.. that they do not realize the percentage of logical file system overhead and loss. There can be a benefit, and I have been purpose-building optimized and consolidated images for recovery protocols that I've learned are the most efficient and fastest.

I use Perfect Disk 11 and always build my images by installing the OS to a freshly secure-erased SSD (or usually a RAID array of SSDs, since it's actually even faster to build). I add drivers, updates, OS tweaks, programs, and anything else needed, with dedicated consolidations run between the various stages. The data location on the SSD is of no concern at this point. Neither is lifespan, since these things will be like glorified USB sticks in 3 years anyways.. so what the hell do I care. I'll buy newer, faster, bigger ones by then anyways.

To each his own here.. but good luck writing the necessary 200+ terabytes to reach a full read-only state before upgrading that current SSD. I personally have no doubt that we will eventually be seeing 20-year-old SSD posts of.. "my old 120GB SSD from back in the day". We already make fun of all the tweaking, moving of page files, and all the rest that everyone around here, and everywhere else, was spewing just 3 short years ago. It's only gonna get funnier when we look back 5 years or more from now.

As a result of messing around with Tony Trim, which basically consisted of secure erases, data consolidation, forcing manual TRIMs/GC, etc... I noticed a very subtle yet distinct pattern emerge through literally hundreds of images built, optimized, and restored for recovery and testing/comparison's sake (up to 10 per day at times). I found that image quality DOES matter, without a single shred of doubt.

After discussing and showing distinct and fully repeatable results to many "experts" (many of whom had the same overwhelming and adamant feelings as the folks posting in this thread).. it became quite obvious that there are logical overhead losses associated with a Windows-based OS. No big surprise there really, and the smarties should already be aware of this. BUT.. what they don't realize is that there is a cumulative and overall effect even on a "super fast SSD" too. This is mainly because there are logical losses associated with any file system, and hardware capability is affected by that to some degree or another. This is also partially because even an SSD has latency (which was actually very insightfully mentioned earlier in the other thread). Low latency.. is not the same as.. no latency,.. so you very well can affect the overall performance of an OS volume no matter where it resides.

So, here it is, all up in your grill. An SSD can show increased performance and efficiency from a purpose-built restoration image which has been consolidated and defragged.. more so than one that has just been installed with the typical protocol. IOW, a restored image that has been tweaked/consolidated is faster than a restored image that was installed as usual, without consolidation/defrag. Secure erase before both image restorations to make the playing field even.. and you'll more easily see how image quality affects the overall picture. Not massive.. but perceivable, fully repeatable, and measurable by most all benchmarks.

I challenge anyone to try this test to see what the results will be.

1. optional.. but always "best case" is to first secure erase the SSD.
2. install the OS to the SSD, drivers, tweaks, updates, apps, settings, whatever.
3. make a backup image of that install.
4. now test the speeds in benchmarks, do some file copy tests, whatever. Make notes/screens of the numbers.
5. consolidate the image with Perfect Disk 10, 11, or 12 (FYI.. Windows "consolidation" doesn't even come close to what Perfect Disk does at the logical level).
6. Take another backup image of the optimized and consolidated version. Test it too if you want to peek at results. Even at this stage I usually see tangible results, despite the tested/partially used or possibly benchmark-spent drive state.
7. Secure erase the SSD once again to baseline it back to a factory fresh state.
8. Restore the optimized image once again and test the same way you did with the non-optimized drive.
9. Verification phase can be added by secure erasing once again and restoring the non-optimized image for the sake of retesting.
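
For steps 4 and 8, a minimal timing sketch along these lines could be used (Python assumed; the file list and 1 MiB chunk size are placeholders, and the OS file cache would need to be cleared or bypassed between runs for the numbers to mean anything):

Code:
# Minimal read-throughput timing sketch for steps 4 and 8 (hypothetical file list).
# Reads each file start to finish and reports MB/s; run the same list against
# the default image and the consolidated image.
import os
import time

TEST_FILES = [r"C:\Windows\System32\shell32.dll"]  # placeholder paths
CHUNK = 1024 * 1024                                # 1 MiB reads

for path in TEST_FILES:
    size = os.path.getsize(path)
    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(CHUNK):
            pass
    elapsed = time.perf_counter() - start
    print(f"{path}: {size / elapsed / 1e6:.1f} MB/s in {elapsed:.3f} s")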

I have never.. not once.. not ever.. had a regular default installed image show more bandwidth and efficiency than that IDENTICAL install with the above mentioned imaging protocol being used. Image quality does matter and I'll gladly take that last 1 or 2 percent any day of the week and twice on Tuesday. All the little tweaks can and do add up for cumulative gains.

Furthermore.. the amount of fragmentation and degree of image granularity (how spread out the image is at the logical bitmap level) at this early stage of install/image use is usually about 3% or less. The results climb noticeably when you perform that test on a highly fragmented volume such as the OP of this new thread has. I know, because I purposely did nearly the exact same thing as he did to manually fragment the volume on my SSD (got mine to about 78% fragmentation, and the bitmap was spread out to partially span almost every sector of the OS volume).

And.. if he decides to put in the work to press on in his efforts to understand things that many will not?.. he will be rewarded. After all.. he has the perfect drive/volume to image and test right now. Of course.. I'll put cash on the consolidated image kicking the "doesn't really matter" image's ass.

Here's a tiny sample of my hundreds of screens and tests.




And those tests were on a very dirty/degraded volume which was NOT secure erased or idle-recovered between consolidations/benchmark runs. So, imagine what it can do for an image that is restored after the SSD is secure erased.

PS. it's not normal for the mods around here to close up a thread like that and I can only assume that it was primarily due to the original OP having been banned.
 

RU482

Lifer
Apr 9, 2000
12,689
3
81
How about this scenario...
I am creating a golden image on an SSD that will be deployed to 100's of other units.
Do I defrag before capturing the Golden Image?

Or should I ask, is there a downside to defragging prior to capturing the image? (assuming I don't care about the SSD I am developing the image on)
 

groberts101

Golden Member
Mar 17, 2011
1,390
0
0
How about this scenario...
I am creating a golden image on an SSD that will be deployed to 100's of other units.
Do I defrag before capturing the Golden Image?

Or should I ask, is there a downside to defragging prior to capturing the image? (assuming I don't care about the SSD I am developing the image on)

I certainly would. Hell.. I go so far as to do it for the 7 measly machines that I tend to. And no.. not at all (well, aside from a write cycle to a few blocks of NAND to get it all done), which was the main point that I tried to convey above. Image quality does make a difference.

In fact.. at first I was sold on the then-current thinking and "facts" at that time.. that SSD lifespan might be of larger concern, and I started most of my image quality testing with HDDs. But it was just sooo slow to install and reboot to an HDD to do it.. that I just started beating up my SSDs like they are disposable (which they are, IMHO).

Plus, you get the added benefit of immediate gratification, as you can test right after tweaks/consolidations are finished. That soon turned out to prove that if you can see slight gains on a drive/volume that's already been heavily used/tested to death?.. you can sure as hell see it on one that has been secure erased and given the strongest baseline of raw speed potential from a hardware capability standpoint. Trends do start to show rather quickly, and you inevitably get sucked in even deeper to see what else can be learned or improved upon. Welcome to what I like to call.. "the 1 percenter club". lol

The optimised and consolidated image always showed greater efficiency over regular "who cares.. SSD has no latency anyways" installs. Always.
 

velis

Senior member
Jul 28, 2005
600
14
81
Nice job explaining groberts
If I may add a bit more:
SSDs have low latency + data is fragmented anyway.
So the impact of file system fragmentation is mostly unobserved. However, some files have a tendency of getting progressively fragmented up to thousands of fragments.
At that point the OS will have performance issues providing file data to your application. This will be quite noticeable.

Defragmenting the SSD will not put the data together in single chunks on the drive itself, but it will put the data into single chunks filesystem-wise. This means that you will remove one of the two kinds of fragmentation. It will show up quite nicely in the performance of that particular (un)fragmented file.

I used to stop my Diskeeper service from defragmenting my SSDs, thinking I'd prolong SSD usable life. That turned out to be an illusion - I just can't really do anything to my drives that would exhaust their write cycles. Now I just leave Diskeeper on and defragment the disks as if they were standard spindle-type drives. It keeps me from experiencing any filesystem-fragmentation-related issues.

Sorry I can't attach an image right now, but my ATTO graph looks like a really badly damaged saw on my 4-year-old (never secure erased) Vertex 1.
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
Or should I ask, is there a downside to defragging prior to capturing the image? (assuming I don't care about the SSD I am developing the image on)
Wear on the SSD is basically it.

With luck, MS will come around and solve the issue in a near-future Windows (take the time to reduce fragmentation as part of creating and editing files, rather than as an extra service that will perform many times more writes to do it). Light fragmentation has no meaningful effect, but heavy fragmentation does, and without help, NTFS will get much worse over time. If they can get fragmentation over time to be similar to good *n*x FSes, you really could just ditch defragmentation (unless you fill your disk slap full, of course).
 

exdeath

Lifer
Jan 29, 2004
13,679
10
81
Just adding or summarizing some key points:

1) Flash memory is accessed in blocks/pages at a time via commands to the chip interface. It doesn't matter if two blocks/pages are consecutive or in random locations, it's still two separate flash memory block/page transactions.

2) When dealing with multiple chips, you would WANT your sequential data fragmented across chips because each chip can handle the block/page request simultaneously. Reading a 4 block file as 1 block from each of 4 NAND chips would be 4 times faster than issuing 4 commands to a single NAND chip to access packed sequential data.

3) Due to wear leveling, the linear OS view is false, and defrag just simply moves data from one random location to another, even if the OS cluster map appears to have been improved.
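
A toy model of point 2 (Python, with a made-up per-page latency and channel count) just to make the arithmetic concrete:

Code:
# Toy model of point 2: page reads spread across NAND channels overlap, while
# reads stacked on one chip serialize. The latency figure is illustrative only.
import math

PAGE_READ_US = 50   # assumed per-page read latency, microseconds
CHANNELS = 4        # assumed number of independent NAND channels

def read_time_us(pages, channels_used):
    # Pages on the same channel queue up; separate channels work in parallel.
    return math.ceil(pages / channels_used) * PAGE_READ_US

print(read_time_us(4, 1))         # all 4 pages on one chip   -> 200 us
print(read_time_us(4, CHANNELS))  # 1 page per chip (4 chips) -> 50 us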
 

GlacierFreeze

Golden Member
May 23, 2005
1,125
1
0

3) Due to wear leveling, the linear OS view is false, and defrag just simply moves data from one random location to another, even if the OS cluster map appears to have been improved.

This... is the reason I wouldn't defrag an SSD. But that's just me.

But then again, since some SSDs have proven to be able to take a beating (write wise), then *occasionally* defragging to have the files logically in order probably isn't really a big deal.

Also, 1 and 2 are interesting. I always wondered whether an SSD could act on multiple areas, compared to the single head of an HDD.
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
But then again, since some SSDs have proven to be able to take a beating (write wise), then *occasionally* defragging to have the files logically in order probably isn't really a big deal.
It will also depend on FS fragmentation overhead. A 200k file in 12 fragments ought to be taken care of. A 100MB file, like a config file or DB, with 50 fragments in a 1-2MB hotspot, should be defragmented. A 50MB file made of 4 evenly-sized fragments is only good for scaring users into paying for extra defrag software. Severe fragmentation creates software overhead that even an instant storage device won't remove. Light to moderate fragmentation, while measurable in terms of performance loss, is not a big deal.
 

bradley

Diamond Member
Jan 9, 2000
3,671
2
81
Nice job explaining groberts
If I may add a bit more:
SSDs have low latency + data is fragmented anyway.

Nah, you did a sufficient job on your own. HDDs add a long seek time on top of their access time. SSDs have ultra-low access times regardless of where the data is located. Therefore defragging modern SSDs is a waste of time and write cycles.
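
As a rough back-of-the-envelope comparison (illustrative latencies only, Python used just for the arithmetic):

Code:
# Back-of-the-envelope fragment overhead; latencies are illustrative only:
# roughly 10 ms per HDD seek vs roughly 0.1 ms per extra SSD random access.
HDD_SEEK_MS = 10.0
SSD_ACCESS_MS = 0.1

for fragments in (10, 100, 1000):
    print(f"{fragments:>5} fragments: HDD ~{fragments * HDD_SEEK_MS:7.0f} ms extra, "
          f"SSD ~{fragments * SSD_ACCESS_MS:6.1f} ms extra")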
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
Nah, you did a sufficient job on your own. HDDs add a long seek time on top of their access time. SSDs have ultra-low access times regardless of where the data is located. Therefore defragging modern SSDs is a waste of time and write cycles.
Usually. However fast the SSD is, though, you still spend time finding all the pieces to a file, and stitching it together, if it is highly fragmented. That can mean more reads, and will mean more CPU time and memory accesses for a given file. There is added overhead, especially when writing (you do have a journal state to worry about), even with a theoretically instant drive.

You can't defrag an SSD, realistically, but the thing is, you don't even care about the SSD's layout. You only care about the file<->LBA translation layer. Most filesystems today that aren't NTFS are implemented with the idea of keeping fragmentation so low you'll never care, whereas NTFS still needs defragging every once in a while, especially on near-full disks. MS should come up with a way to deal with the problem. A tuned defrag service, for instance, which only defrags files that have exceeded certain thresholds, rather than defragging everything while you're not at the computer, and which buffers as much as possible to minimize small writes, would be a start.
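
A sketch of the kind of per-file check such a tuned service might make (Python, with arbitrary thresholds; purely hypothetical, not anything Windows or Diskeeper actually does):

Code:
# Hypothetical per-file policy for a "tuned" defrag service: only flag files
# that are both heavily fragmented and made of small pieces. The thresholds
# are arbitrary illustrations, not anything Windows actually uses.
def should_defragment(file_size, fragment_count,
                      max_fragments=50, min_avg_fragment=4 * 1024 * 1024):
    if fragment_count <= 1:
        return False                        # already contiguous
    avg_fragment = file_size / fragment_count
    # A few large fragments are harmless; many tiny ones add real overhead.
    return fragment_count > max_fragments and avg_fragment < min_avg_fragment

print(should_defragment(50 * 2**20, 4))      # False: 4 even chunks, leave it alone
print(should_defragment(100 * 2**20, 5000))  # True: thousands of tiny pieces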
 

Lipoly

Junior Member
Apr 16, 2012
7
0
0
First off, thank you groberts101 for the exhaustive input, much appreciated.

1) Flash memory is accessed in blocks/pages at a time via commands to the chip interface. It doesn't matter if two blocks/pages are consecutive or in random locations, it's still two separate flash memory block/page transactions.

So, the disk incurs a seek time for every page requested, regardless of the location.

Can someone explain what is meant by groberts101's statement:

"..they do not realize the percentage of logical file systems overhead and loss"

How does logical file system overhead relate to file fragmentation?
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
First off, thank you groberts101 for the exhaustive input, much appreciated.



So, the disk incurs a seek time for every page requested, regardless of the location.

Can someone explain what is meant by groberts101's statement:

"..they do not realize the percentage of logical file systems overhead and loss"

How does logical file system overhead relate to file fragmentation?

There is access time involved in translating the page from the block the OS gave it, reading it, and returning the data, but I wouldn't call that seek time since there are no heads to seek.
 

Lipoly

Junior Member
Apr 16, 2012
7
0
0
There is access time involved in translating the page from the block the OS gave it, reading it, and returning the data, but I wouldn't call that seek time since there are no heads to seek.

Thanks, but how does this relate to fragmentation? How does a defragmented (or "data consolidated") drive impact this?
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
Thanks, but how does this relate to fragmentation? How does a defragmented (or "data consolidated") drive impact this?

Minimally, as that time should all be in the nanoseconds. I would imagine some drives' firmware will be smart enough to merge split requests for pages that are physically contiguous but logically fragmented; however, I doubt the difference would be appreciable right now.
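
The kind of merging being described could look something like this (Python, purely illustrative; page numbers are made up):

Code:
# Illustrative request coalescing: back-to-back page ranges collapse into one
# transaction, so a logical split over physically adjacent pages costs little.
def merge_requests(requests):
    # requests: list of (start_page, page_count), assumed sorted by start_page
    merged = []
    for start, count in requests:
        if merged and merged[-1][0] + merged[-1][1] == start:
            merged[-1] = (merged[-1][0], merged[-1][1] + count)  # extend previous run
        else:
            merged.append((start, count))
    return merged

print(merge_requests([(100, 8), (108, 8), (116, 16)]))  # adjacent -> [(100, 32)]
print(merge_requests([(100, 8), (500, 8), (900, 16)]))  # scattered -> 3 transactions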
 

exdeath

Lifer
Jan 29, 2004
13,679
10
81
So, the disk incurs a seek time for every page requested, regardless of the location.

NAND flash is accessed in transactions. The chip is able to do only so many transactions per second. Which block you are requesting is irrelevant; all that matters is how many transactions. The access time is the SSD controller performing the logical-to-physical lookup plus the time to open/close the NAND interface.

Asking a NAND chip for blocks 3, 9, and 37 incurs the same transaction and command overhead as asking for blocks 1, 2, 3. Block locality is irrelevant. Accessing block 9 then 37 has the same access time as accessing block 2 then 3. With the way NAND is physically wired, it doesn't matter.*

Where the random performance of an SSD suffers compared to sequential is not random NAND block access patterns, but the SSD controller performing the logical-to-physical lookups, controller cache locality for the lookup tables, decompression and error correction, etc., and dealing with millions of individual SATA commands on 4k files one at a time.
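
A minimal model of that logical-to-physical lookup (Python, with a made-up mapping table): the cost is one lookup and one page transaction per requested block, whether or not the logical block numbers are neighbours.

Code:
# Minimal flash-translation-layer model: every logical block resolves through a
# mapping table to some physical page. The lookup cost is per block and does
# not depend on whether the logical block numbers are consecutive.
import random

logical_to_physical = {lba: random.randrange(1_000_000) for lba in range(10_000)}

def read_blocks(lbas):
    # One lookup + one page transaction per requested block, wherever it lives.
    return [logical_to_physical[lba] for lba in lbas]

print(read_blocks([1, 2, 3]))   # "sequential" request
print(read_blocks([3, 9, 37]))  # "fragmented" request: same number of lookups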

Also, I forgot to mention in my prior post; I guess this could be 4)

Raw data in flash blocks looks NOTHING like the data you would see viewing your SSD with a "disk" hex editor in an OS. It's all compressed random garbage, error correction codes, redundant copies, etc. It's all handled internally by the SSD controller. To think the OS is able to do anything meaningful with a defrag tool is a joke. It might even make things worse: the SSD controller has worked to get everything arranged the way it likes, and then you come along and shuffle everything for no productive reason other than to give the SSD controller something to do during its next idle period (and use up P/E cycles).


* Not entirely sure on this. I know DRAM is more efficient if accessed sequentially, with the way rows and columns are addressed, using a simple internal address counter, etc., to do sequential bursts of consecutive pages instead of suffering row/column access delays when accessed randomly, but NAND at the transistor/cell level is a completely different animal than DRAM.
 

Pariah

Elite Member
Apr 16, 2000
7,357
20
81
ok.. I'll jump in here. There are some truths, untruths, assumptions, and exaggerations in that(and now this) thread.

First off...

As someone already mentioned, because of the way SSDs store and access data, it makes no difference if the file contains blocks that are right next to each other or on opposite "sides" of the drive. Each access will take the same amount of time. The way a traditional drive accesses and writes data and the way an SSD does are completely different and can't be compared, which is why traditional disk utilities are worthless for SSDs.

Any performance increase you have seen from that exercise you listed is not a result of the data being reorganized on the drive but of some other anomaly. In fact, Raxco, the company that makes Perfect Disk, says so in their own documentation:

"With SSD storage, since file level fragmentation results in negligble performance degradation, traditional file based defragmentation really doesn't provide any performance benefit and should be avoided as it has the potential to ultimately degrade SSD write performance. What can provide a performance benefit is free space consolidation to eliminate free space fragmentation and consolidate partially full blocks of data."

Here is a link to said PDF:

http://www.raxco.com/user_data/white_papers/perfectdisk_and_solid_state_drives_ssds_6_1_2011.pdf

So, to sum up. Even the developers of the software you are touting recommend against defragging SSD's because it won't increase performance and actually could end up decreasing write performance.
 

LokutusofBorg

Golden Member
Mar 20, 2001
1,065
0
76
It's been mentioned in this thread, but didn't really jump out or get expounded upon.

SSD controllers internally RAID the NAND chips. Since the NAND is read in blocks, the absolute best performance in retrieving a file would be single, parallel requests to every NAND channel. Is there any defrag tool that knows to evenly split the files across the NAND channels? Hardly.

This is why it has been said that if you know how SSDs work (and how defragging works) you wouldn't even have to ask the question.

Now will performance be less than ideal when you have a file split into more blocks than it physically requires on your SSD? Yes. Is defragging going to fix that? It could improve it, as long as it randomly hits that middle ground where the file is split evenly across the NAND channels. But it could also take things too far and put the entire file on a single NAND channel, which is probably worse.
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
How does logical file system overhead relate to file fragmentation?
To retrieve a portion of a file that is contiguous, the OS/app will need to request that the file be opened, and then the part it needs. Barring various optimizations, the OS will need to start looking for the file at the root, traversing down the directories, with at least some in-memory activity at each level. Then, once it gets to the file's location, it verifies permissions and all, then reads from some offset.

Now, if the file has many fragments, it must instead look through a list (or tree) of parts of the file to find the location of the offset the program is looking for. I'm not sure about NTFS for this particular case, but some FSes also allow fragmented directory listings, adding another level of looking through a list, or searching a tree.
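
A simplified sketch of that lookup (Python, made-up extent numbers): the filesystem walks the file's fragment list to turn a file offset into a disk location, so a file in thousands of pieces means a longer list to consult on every access.

Code:
# Simplified extent lookup: each fragment maps a run of the file to a run of
# disk addresses. More fragments means a longer list to consult per access.
from bisect import bisect_right

# (file_offset, length, disk_offset) runs, sorted by file_offset -- made-up numbers
extents = [(0, 4096, 10_000_000), (4096, 8192, 52_000_000), (12288, 4096, 7_700_000)]
starts = [e[0] for e in extents]

def file_offset_to_disk(file_offset):
    i = bisect_right(starts, file_offset) - 1   # find the fragment holding the offset
    off, length, disk = extents[i]
    assert off <= file_offset < off + length, "offset past end of file"
    return disk + (file_offset - off)

print(file_offset_to_disk(0))     # 10000000
print(file_offset_to_disk(5000))  # 52000000 + (5000 - 4096) = 52000904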

The process for writing is not merely a reverse, either. A new location for data needs to be found (if there's room), and the new data written, potentially fragmenting the file more. There is also a speed hit with keeping the journal up to date with many writes.

Until now, the many-ms seek times of HDDs represented a far greater overhead. With <0.3ms seeks, whether a fragment is at one end or the other of the disk's address range doesn't matter. Also, most users don't tend to get enough fragmentation to worry about it these days, even on HDDs. If MS were to add, in a near-future Windows, a method to prevent or handle only severely fragmented files (I've had game saves with thousands of fragments, FI -- WTF!), which they should have done already, instead of relying on a crutch (Diskeeper), NTFS would be just fine for many more years to come.

Now will performance be less than ideal when you have a file split into more blocks than it physically requires on your SSD? Yes. Is defragging going to fix that? It could improve it, as long as it randomly hits that middle ground where the file is split evenly across the NAND channels. But it could also take things too far and put the entire file on a single NAND channel, which is probably worse.
That's a problem for the firmware devs to handle, and they seem to be doing a good job at it, over-all. They see those physical blocks, and the OS guys don't. Outside of price-sensitive proprietary embedded systems, that is how it should be. Windows can't put it anywhere specific on the physical flash, and that's not a bad thing.

With NCQ, all overhead except for the software-level overhead will generally be quite small. When that isn't true, you probably don't need a single consumer SSD for your work.
 