Duck, . . . what you are experiencing is "technolust." Enjoy!
Well, judging from Biostud's reply, yours, and cheese-whiz's, there's no total revulsion to the idea of it.
I've been following "certain principles" since I became "hardware obsessed" early in the last decade, and I follow these principles because my episodic explorations and chump-change frugality drove me in certain directions -- learning from mistakes.
Principle #1: Clever solutions favor simplification over complication. [This is why -- once again -- I've abjured water-cooling, even with an AiO.]
Principle #2: Squeeze as much use out of old hardware as you can without limiting yourself. [I have SOOO many good HDD spinners sitting in storage -- some brand-new.]
There are other principles -- like "reduce your power bill" and "reduce the wear and tear on parts."
I've been using PrimoCache now for more than two years. Before that I'd used ISRT (Intel Smart Response Technology) for close to three years, and I acquired some Marvell controllers with the HyperDuo option, which I never tried. Primo is agnostic to storage mode, so you can cache both AHCI and RAID configurations under two different controllers to the same cache.
I just have high hopes for this, which is still a calculated experiment. If you use an SSD cache of greater than 100GB, you'll be inclined to fill all your RAM slots, because the SSD cache's index imposes RAM "overhead" on top of whatever you allocate to RAM-caching. But 100GB is just right for a rig with >= 16GB of RAM (and "=" seems to be just fine).
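For a rough sense of that index overhead, here's a back-of-the-envelope sketch in Python. The bytes-per-block metadata figure is my assumption for illustration only -- PrimoCache doesn't publish an exact number, and it varies by version and settings -- but the shape of the math holds: the overhead scales with L2 cache size and shrinks as block size grows.

```python
# Back-of-the-envelope estimate of the RAM consumed by an SSD (L2)
# cache index. ASSUMPTION: 32 bytes of metadata per cache block --
# an illustrative figure, not a documented PrimoCache constant.
BYTES_PER_BLOCK_METADATA = 32

def l2_index_overhead_mb(l2_size_gb: float, block_size_kb: int) -> float:
    """Estimated RAM (MB) needed just to index an L2 cache of the given size."""
    blocks = (l2_size_gb * 1024 * 1024) / block_size_kb  # cache size in KB / block size
    return blocks * BYTES_PER_BLOCK_METADATA / (1024 * 1024)

# A 100 GB L2 cache with 512 KB blocks:
print(l2_index_overhead_mb(100, 512))   # 6.25 MB under these assumptions
# Halving the block size doubles the index overhead:
print(l2_index_overhead_mb(100, 256))   # 12.5 MB under these assumptions
```

Under these (assumed) numbers the index itself is small; the bigger reason to fill your RAM slots is the separate RAM-cache allocation you'd run alongside the SSD cache.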
And it may be techno-lust, indeed.
EDIT: about the cache size and overhead. I've found that at DDR4 XMP speeds, RAM caching by itself needs about half as much RAM as it did at DDR3 speeds to get the same or better benchies.
So it now occurs to me I could probably reduce the size of an anticipated SSD-cache on the NVMe drive, and trim the RAM-caching allocation even further.
Maybe I'll bookmark this thread and report back when I feel like spending the money -- when the "price is right."