That's the thing. On an SSD, it doesn't.
On an HDD, the *BLEARGH* becomes a bunch of *pfft*s because the drive head has to seek here, then here, then there, then there, to get all the pieces of the file. So that's why everybody has spent the last 30 years pissing themselves about defragging their stuff.
You're mistaking controller latency for fragmentation behavior.
On an SSD it does, because the read requests that arrive at the controller are still a scattered bunch of addresses, not one contiguous block of addresses.
Hence you lose a factor of 5 (at least in the last benchmarks I saw) in read performance, since you bombard the controller with individual requests instead of a single one.
Sure, on the flash side the speed is the same, but the SSD controller is simply slower in this "interactive" mode of many small requests, and that turns into real-world performance loss. A factor of five might not sound like much while you're still on an SSD, but if your log job runs for 150 minutes instead of 30 minutes, that factor of five suddenly has a massive real-world impact.
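If you want to poke at this from user space, something along these lines shows the effect (a rough sketch only, with a made-up file name; it doesn't look at real extents, the random offsets just stand in for a fragmented file, and the page cache will flatter the second run unless you drop caches between runs):

```python
# Rough micro-benchmark: one large contiguous request vs. the same amount of
# data fetched as thousands of scattered 4 KiB requests. A real fragmentation
# test would read the file's actual extent list (e.g. via FIEMAP); this is
# only meant to illustrate "many small requests vs. one big one".
import os
import random
import time

PATH = "testfile.bin"      # hypothetical: any large (>1 GiB) file on the SSD
CHUNK = 4096
COUNT = 25_000             # ~100 MiB in total

fd = os.open(PATH, os.O_RDONLY)
size = os.fstat(fd).st_size

# contiguous: a single large request (pread may return short; fine for a sketch)
t0 = time.perf_counter()
os.pread(fd, CHUNK * COUNT, 0)
t_seq = time.perf_counter() - t0

# scattered: the same volume issued as COUNT individual, non-adjacent 4 KiB requests
offsets = [random.randrange(0, size - CHUNK) // CHUNK * CHUNK for _ in range(COUNT)]
t0 = time.perf_counter()
for off in offsets:
    os.pread(fd, CHUNK, off)
t_rand = time.perf_counter() - t0

os.close(fd)
print(f"contiguous: {t_seq:.2f}s   scattered: {t_rand:.2f}s   "
      f"slowdown: {t_rand / t_seq:.1f}x")
```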
BD2003: there's very little correlation between the addresses the host sees and the actual flash cell addressing. Locality probably survives up to page size, if you're lucky, but once you start using the SSD and deleting/rewriting, wear levelling and GC will scramble it completely. The spare area, for example, never shows up as addressable blocks, but it's being cycled around the drive all the time.
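To make that concrete, here's a toy flash-translation-layer sketch (grossly simplified, nothing like real firmware) of why LBA locality evaporates once you rewrite things:

```python
# Toy FTL: logical block -> physical flash page. Every rewrite goes to the
# next free page (flash can't overwrite in place); the old page is just
# marked stale until GC erases it. Watch how quickly the mapping scatters.
class ToyFTL:
    def __init__(self, physical_pages=16):
        self.mapping = {}        # logical block -> physical page
        self.stale = set()       # pages holding dead data, waiting for GC
        self.next_free = 0
        self.total = physical_pages

    def write(self, logical_block):
        if logical_block in self.mapping:
            self.stale.add(self.mapping[logical_block])  # old copy is now garbage
        self.mapping[logical_block] = self.next_free
        self.next_free = (self.next_free + 1) % self.total

ftl = ToyFTL()
for lb in range(4):          # first pass: logical 0..3 land on physical 0..3
    ftl.write(lb)
for lb in (1, 3, 1):         # a few rewrites later, the neighbours are gone
    ftl.write(lb)
print(ftl.mapping)           # -> {0: 0, 1: 6, 2: 2, 3: 5}: logical neighbours, physically scattered
```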
As for the difference between sequential and non-sequential reads of a single file: I assume it comes from read-look-ahead, which is described here:
https://www.mail-archive.com/forum@t13.org/msg02556.html (for the case of mechanical drives, where it's even more of a crucial function)
This would make a seek (i.e. moving to a non-sequential LBA) flush the buffer and restart the read-look-ahead at the new position. The performance gain comes simply from the controller already having the required block in its buffer before the OS's read command actually arrives over SATA.
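A crude model of that behaviour (just a sketch of my understanding, not what any firmware actually does):

```python
# Crude read-look-ahead model: sequential LBAs are served out of a prefetch
# buffer; a seek (non-sequential LBA) throws the buffer away and prefetching
# has to restart at the new position.
import random

class LookAheadCache:
    def __init__(self, window=32):
        self.window = window
        self.buffered = set()      # LBAs currently sitting in the drive's buffer
        self.hits = self.misses = 0

    def read(self, lba):
        if lba in self.buffered:
            self.hits += 1
        else:
            self.misses += 1
            # a "seek": flush the buffer and restart look-ahead at the new LBA
            self.buffered = set(range(lba, lba + self.window))

seq = LookAheadCache()
for lba in range(1000):                        # sequential file: mostly buffer hits
    seq.read(lba)

scat = LookAheadCache()
for lba in random.sample(range(10**6), 1000):  # scattered LBAs: practically all misses
    scat.read(lba)

print("sequential hits/misses:", seq.hits, "/", seq.misses)
print("scattered  hits/misses:", scat.hits, "/", scat.misses)
```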
mikeymikec: I'm not aware of any OS that knows anything about the underlying hardware. Layering is enforced pretty strictly, and ZFS's approach of breaking the layers isn't exactly loved by kernel devs.
TRIM merely means the FS sends the SSD a command telling it that a certain block no longer holds valid data and can be erased, instead of lingering around as unlinked-but-still-present data (disabling TRIM means you can often undelete data; enabling TRIM means anything deleted is probably gone forever very soon).
No FS I know of does anything beyond that, and it can't, because it doesn't know what the SSD will do with the command; sometimes TRIM is queued for deferred execution, and all of that is up to the "hardware" end. The FS merely sees a block device with X LBAs and uses its inode trees to know on which LBAs it placed which bit of which file.
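From the device side, a toy picture of what that amounts to (pure illustration with made-up classes, not any real interface):

```python
# Toy device-side view of TRIM: the FS says "this LBA no longer holds valid
# data"; the device drops the mapping and queues the old page for erase
# whenever its GC gets around to it. The FS has no say in the "when".
class ToySSD:
    def __init__(self):
        self.mapping = {}          # LBA -> physical page
        self.pending_erase = []    # stale pages GC will clean up eventually
        self.next_free = 0

    def write(self, lba):
        if lba in self.mapping:
            self.pending_erase.append(self.mapping[lba])
        self.mapping[lba] = self.next_free
        self.next_free += 1

    def trim(self, lba):
        # deferred: just note that the page is garbage; the erase happens later
        if lba in self.mapping:
            self.pending_erase.append(self.mapping.pop(lba))

    def read(self, lba):
        # a trimmed LBA has no backing data any more -- typically reads as zeros
        return f"page {self.mapping[lba]}" if lba in self.mapping else "zeros"

ssd = ToySSD()
ssd.write(7)
ssd.trim(7)                             # FS deleted the file and sent TRIM
print(ssd.read(7), ssd.pending_erase)   # -> zeros [0]
```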
A good FS will try to avoid fragmentation by being deliberate about how it fills the free space in its LBA allotment (best fit, or largest free extent, instead of just grabbing the first free LBA and mercilessly fragmenting everything), but there's no magic beyond that, as far as I know.
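For what it's worth, "best fit" in that sense looks roughly like this over a toy free-extent list (an illustrative sketch, not how any real allocator is written):

```python
# Toy best-fit allocation over a free-extent list: pick the smallest free
# extent that still fits the file, instead of the first one that does
# (first-fit), which tends to shred the big extents and fragment everything.
def best_fit(free_extents, length):
    """free_extents: list of (start_lba, size); returns the chosen start LBA or None."""
    candidates = [(size, start) for start, size in free_extents if size >= length]
    if not candidates:
        return None                   # no single extent big enough -> the file gets fragmented
    size, start = min(candidates)     # smallest extent that still fits
    free_extents.remove((start, size))
    if size > length:
        free_extents.append((start + length, size - length))  # keep the leftover
    return start

free = [(0, 100), (200, 8), (300, 50)]
print(best_fit(free, 8), free)   # -> 200 [(0, 100), (300, 50)]: small hole used up, big runs left intact
```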