Well, if it makes no difference where the data is or how broken up it is, why are small-file transfer rates so much lower than large-file transfer rates? I mean, if it makes no difference how the data is stored, then it should not matter how big the file is.
Brian
It does make a difference. The SSD itself will arrange data for best access; what that means may vary by make and model. Multiple IOs queued on the same channel, or the same die/plane, are slower, so the controller spreads data out where possible. Most SSDs these days have a RAID 0-like striping setup across channels and dies, with proprietary implementation details (plus added parity/ECC, which is often RAID 5-like, and so on).
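Very roughly, the striping idea looks like this toy Python sketch (nothing like any real vendor's firmware; the channel/die counts are just made-up numbers):

```python
# Toy model of channel/die striping; real controllers are proprietary and far
# more involved (wear leveling, parity, caching, etc.). Numbers are invented.
NUM_CHANNELS = 8
DIES_PER_CHANNEL = 4

def place_page(logical_page: int) -> tuple[int, int]:
    """Stripe consecutive pages across channels first, then dies, so
    back-to-back IOs rarely queue up behind the same busy flash die."""
    channel = logical_page % NUM_CHANNELS
    die = (logical_page // NUM_CHANNELS) % DIES_PER_CHANNEL
    return channel, die

# Eight consecutive pages land on eight different channels, so a large
# sequential transfer can keep every channel busy in parallel:
for page in range(8):
    print(page, place_page(page))
```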
But you don't control that, beyond the OS choosing LBAs. With plenty of free space and TRIM, there will be enough room for large files to be stored in large contiguous chunks. Small or often-edited files are going to be small, latency-bound IOs no matter what. General software/memory overhead can make them slower, SATA can make them slower, the controller can make them slower, and potentially having to wait on a single chip to return multiple sets of data from multiple blocks can make them slower.
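That fixed per-IO cost is basically the answer to the small-file question above. A crude back-of-the-envelope model (both constants are assumed ballpark figures, not measurements from any particular drive):

```python
# Crude model: every IO pays a roughly fixed latency cost (software stack,
# SATA/NVMe round trip, flash access) on top of the raw transfer time.
PER_IO_OVERHEAD_S = 100e-6   # ~100 us of fixed cost per IO (assumed)
SEQUENTIAL_BPS = 500e6       # ~500 MB/s sequential ceiling (assumed)

def effective_throughput(io_size_bytes: float) -> float:
    """MB/s achieved when the data is moved in IOs of the given size."""
    time_per_io = PER_IO_OVERHEAD_S + io_size_bytes / SEQUENTIAL_BPS
    return io_size_bytes / time_per_io / 1e6

# Tiny IOs are dominated by the fixed overhead; big IOs approach the ceiling.
for size in (4_096, 65_536, 1_048_576, 104_857_600):   # 4 KB .. 100 MB
    print(f"{size:>11,} B per IO -> {effective_throughput(size):6.1f} MB/s")
```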
You can defrag an SSD, and some SSDs might even get a bit of a speedup from it... but if so, your best bet would have been to use a larger SSD from day one, so that the OS wouldn't have had to split up moderate-sized files in the first place. OTOH, there's no technical reason why your OS couldn't take advantage of all your modern RAM and perform an SSD-optimized defrag, either (i.e., read and write 50-100MB chunks, or whole single files if smaller, rather than the traditional tiny piecemeal work). Defragging is considered bad for SSDs because traditional defragging tools performed many small steps, so that the process could be safely stopped at almost any time, rather than determining an optimal final state and performing a much smaller number of writes to get there. Some 3rd-party tools may well do this today.
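Something in that spirit is easy enough to sketch from user space: read one badly fragmented file in big chunks into a fresh copy, then swap it in. This is just an illustration (the 64MB chunk size and the example path are arbitrary), not how any actual defrag tool works:

```python
import os

CHUNK = 64 * 1024 * 1024  # big, SSD-friendly reads/writes (arbitrary size)

def rewrite_contiguously(path: str) -> None:
    """Copy a file into a freshly allocated file in large chunks, then
    atomically replace the original. With plenty of free space, the new copy
    usually lands in far fewer extents than a heavily edited original.
    (Timestamps/ACLs are not preserved here; a real tool would handle that.)"""
    tmp = path + ".defrag.tmp"
    with open(path, "rb") as src, open(tmp, "wb") as dst:
        while chunk := src.read(CHUNK):
            dst.write(chunk)
    os.replace(tmp, path)  # atomic swap on the same filesystem

# Example (hypothetical path):
# rewrite_contiguously(r"C:\Users\brian\Documents\tax2013\returns.db")
```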
However, NTFS itself is vulnerable to fragmentation. We really should be using another file system on SSDs. At the least, there should be a way to defrag the file system without physically shuffling data around in the SSD's flash memory (by some fakery at the controller/translation level: it could just report that it has moved a block of data, when in reality it only modified the pointer in the translation layer...).
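To sketch the kind of fakery I mean (a toy Python model of a translation table, nothing to do with any real controller's firmware):

```python
# Toy flash translation layer: the host sees logical block addresses (LBAs),
# the controller keeps a table mapping each LBA to a physical flash page.
mapping: dict[int, int] = {}   # LBA -> physical flash page
next_free_page = 0

def host_write(lba: int) -> None:
    """Flash can't overwrite in place: each write lands on a fresh page and the
    table is pointed at it; stale pages get garbage-collected later."""
    global next_free_page
    mapping[lba] = next_free_page
    next_free_page += 1

def fake_defrag_move(lba: int, new_page: int) -> None:
    """Report the block as 'moved' while only the pointer changes --
    no flash traffic at all."""
    mapping[lba] = new_page

host_write(500)
print(mapping[500])       # physical page 0
fake_defrag_move(500, 9)  # "moved" without actually rewriting any flash
print(mapping[500])       # physical page 9
```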
You really can't, though. To the extent that could be done, most of it is already being done. However, there's no technical reason why the files that get fragmented the worst couldn't be neatly rewritten contiguously, without making tens of times the file's size in writes, and the rest left alone.
The problem is that small writes tend to be, by nature, unpredictable, or to require servicing right now, and that can build up into dense fragmentation over time. If you're running a program that has many database files, it will generally edit them in small parts, fragmenting them a lot over time (desktop accounting and tax software can be really bad about this, IME, and for some users, even AutoCAD). Just copying the files is usually enough to take care of the problem, but I see no reason why the OS/FS couldn't do the same as it sees the excessive fragmentation, and automatically tackle those files only.
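A crude user-space take on that "tackle only the bad ones" idea, assuming a Linux box where filefrag (from e2fsprogs) is available to count extents; the threshold and example paths are arbitrary:

```python
import os
import shutil
import subprocess

EXTENT_THRESHOLD = 256   # arbitrary: only bother with files split into many pieces

def extent_count(path: str) -> int:
    """Ask filefrag (Linux, e2fsprogs) how many extents a file occupies.
    Its output looks like: 'path: 137 extents found'."""
    out = subprocess.run(["filefrag", path], capture_output=True, text=True, check=True)
    return int(out.stdout.rsplit(":", 1)[1].split()[0])

def rewrite_if_fragmented(path: str) -> None:
    """Only the badly fragmented files get copied out and swapped back in."""
    if extent_count(path) < EXTENT_THRESHOLD:
        return
    tmp = path + ".rewrite.tmp"
    shutil.copyfile(path, tmp)   # a fresh copy usually lands in few extents
    os.replace(tmp, path)

# Example over a hypothetical database directory:
# for name in os.listdir("/var/lib/app-db"):
#     rewrite_if_fragmented(os.path.join("/var/lib/app-db", name))
```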
All that said, you probably don't have to worry. Most of the time, for most users, fragmentation doesn't become a major problem. It's almost all corner cases these days, like lingering shadow copies (the last PC I had to clean and set up for someone new had 15 active snapshots--WTF, no wonder updates were taking so long!) and client-side databases from programs that don't optimize those files. MS started preventing fragmentation in the first place, by scattering file chunks across the drive, years ago. With plenty of free space, you'll usually be fine never defragging, these days.