These days, I very rarely find users who see benefits from moving from one SSD to another. Pulling the old Vertex 4 out to compare with my latest NVMe was kind of a fun experiment before donating it on down the line.
I agree.
This current box, with an R5 3600, 32GB DDR4-3600, RAID-0 Intel 660p 1TB NVMe SSDs (and Win10), isn't all that much more impressive than my prior-gen box, with an R5 1600, 32GB DDR4-3000, a "lowly" 240GB Adata TLC SATA SSD, and a 4TB spinner. I mean, in everyday usage, there's no real noticeable difference. (Sure, the R5 3600 has twice the mining output of the R5 1600, due to the improved cache and AVX2 support. But everyday application usage isn't a lot different, least of all from the SSD difference.)
That said, there is ONE reason that I can see to replace an "older gen" SSD: surviving power loss without bricking.
I gave a friend a Vertex2-family SSD, a 240GB I think, large for that era, but free; it was a used refurb, kind of a beat-up drive. But it worked... for a while. What finally killed it was a power outage. Took out the drive. The friend had a UPS that I had sold him, but he never bothered to hook it up. (Or maybe I sold him the UPS after that happened, and hooked him up with another SSD.)
Anyways, that's an important point: if you have critical data on an older SSD, 1) back it up, possibly daily, and 2) consider replacing it with a newer device with better power-loss protection.
Supposedly, the Samsung EVO SATA drives use a log-structured scheme internally to handle the block mapping, such that they're able to roll back to a consistent state when power loss occurs.
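To see why log-structured mapping helps here, here's a toy sketch (my own illustration, obviously nothing like Samsung's actual firmware): every mapping update is appended to a log with a checksum, and after a power cut the drive replays the log and simply discards a torn final record instead of ending up with a corrupted table.

```python
import struct
import zlib

# Toy log-structured block-mapping table (hypothetical illustration; a
# real FTL is far more complex). Each record maps a logical block to a
# physical block and carries a CRC32, so a record torn by a power cut is
# detected on replay and dropped, rolling back to the last good state.

RECORD = struct.Struct("<III")  # logical block, physical block, crc32

def append_mapping(log: bytearray, logical: int, physical: int) -> None:
    """Append one mapping update to the log, with a checksum."""
    crc = zlib.crc32(struct.pack("<II", logical, physical))
    log += RECORD.pack(logical, physical, crc)

def replay(log: bytes) -> dict:
    """Rebuild the mapping table after (possible) power loss."""
    table = {}
    usable = len(log) - len(log) % RECORD.size  # ignore a partial record
    for off in range(0, usable, RECORD.size):
        logical, physical, crc = RECORD.unpack_from(log, off)
        if zlib.crc32(struct.pack("<II", logical, physical)) != crc:
            break  # torn/garbled record: stop here, keep what's valid
        table[logical] = physical  # later records win, log-FS style
    return table

log = bytearray()
append_mapping(log, 7, 100)
append_mapping(log, 7, 205)  # block 7 remapped elsewhere
append_mapping(log, 3, 50)
torn = bytes(log[:-5])       # simulate power cut mid-append
print(replay(bytes(log)))    # full log: both mappings survive
print(replay(torn))          # torn log: last update rolled back
```

The point is just that an append-only log with checksums has a well-defined "last good state" to fall back to, which a mapping table updated in place does not.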
Crucial has taken another tack. I've heard rumors that this is no longer true, but they at least used to have extra "power back-up capacitors" on the MX500 (and the earlier M550 and M500 SSDs as well), so that they could "survive" a power-loss event without corrupting their mapping tables.
Either way, modern SSDs are MUCH more resilient to "pulling the plug" on the PC.
Edit: Though, I get why you might want to run an older SSD. The older MLC-variety SSDs had a certain "snappiness" and, most importantly, performance consistency, especially with write speeds, that modern TLC (and QLC, UGH!) drives lack, even with DRAM and an SLC cache. (And the modern budget drives that lack those features are far worse still!)
So, for example, first-gen Adata SU800 drives, early TLC drives, have an SLC cache. But write 50GB or so of ISOs continuously to one of those drives, and sequential write speeds (according to the Win10 file-copy dialog's graph) start to bounce between 30-40MB/sec. That's like USB 2.0 external HDD speeds. (UGH!)
A decent HDD has better, more consistent sequential write speeds than that.
Thankfully, some modern NVMe drives, ones with very impressive read and write specifications, still perform better than an HDD when the SLC cache is exhausted. (I've heard that the 660p or 665p will drop to around 800MB/sec writes, down from 1800-2000MB/sec, once the SLC cache is exhausted. Still bearable, I guess. Way better than the SU800 SATA.)
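If you want to see that post-cache cliff yourself without staring at the file-copy graph, something like this rough sketch works: write incompressible data in chunks, fsync each one, and watch the per-chunk MB/s. The path and sizes are placeholders you'd adjust; note that writing tens of GB does add a little wear, so point it at a drive you don't mind exercising.

```python
import os
import time

# Rough sustained-write test sketch (my own, not any vendor tool).
# Writes incompressible data in chunks and prints per-chunk throughput,
# so an SLC-cache exhaustion cliff shows up as a sudden MB/s drop.

def sustained_write(path: str, total_mb: int = 50_000, chunk_mb: int = 256):
    chunk = os.urandom(chunk_mb * 1024 * 1024)  # random = incompressible
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
    try:
        for i in range(total_mb // chunk_mb):
            t0 = time.perf_counter()
            os.write(fd, chunk)
            os.fsync(fd)  # push past the OS write cache to the drive
            mbps = chunk_mb / (time.perf_counter() - t0)
            print(f"chunk {i:3d}: {mbps:8.1f} MB/s")  # watch for the cliff
    finally:
        os.close(fd)
        os.remove(path)  # clean up the test file
```

Run it with `total_mb` comfortably bigger than the drive's SLC cache (50GB easily blows through the SU800's); the first chunks show cache speed, the later ones show the drive's true sustained rate.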