I don't know if I could ever use a 660p. Not because I'm an SSD snob or think it's inferior; it's just that, some days, I like to download swathes of Linux ISOs from the 'net and copy them back and forth to my NAS. My NAS can sustain 100MB/sec fairly easily over my gigabit LAN, but... I used to use an Intel 600p (TLC), a predecessor to the 660p (QLC), and it wasn't pretty when it ran out of SLC cache. Write speeds tanked. I mean, like 1/3 of a modern HDD type of speed. Really awful.
I've experienced that with an Adata SU800 as well; again, when the SLC cache is exhausted from writing a plethora of ISOs to it at once, or sometimes during an extended OS installation or backup restore operation, it drops down to 30MB/sec. From 400-500MB/sec. Painful at times. In fact, copying 12 Linux Mint ISOs to my Team Group C188 USB 3.0/3.1 flash drive, I was averaging 37-49MB/sec write speeds. Yeah, a flash drive was faster than the Adata 128GB SU800 with its SLC cache exhausted, doing the same thing I was doing.
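Just to put those speeds in perspective, here's a rough back-of-the-envelope sketch. The ~2GB ISO size is my assumption (roughly what a Mint ISO weighs); the MB/sec figures are the ones from my copies above:

```python
# Rough copy-time math for a batch of 12 ISOs at the speeds above.
# The ~2GB ISO size is an assumption; the MB/sec figures are from my own copies.

iso_count = 12
iso_size_gb = 2.0                       # assumed size of a Linux Mint ISO
total_mb = iso_count * iso_size_gb * 1000

for label, mb_s in [("SU800, SLC cache exhausted", 30),
                    ("C188 flash drive (midpoint of 37-49)", 43),
                    ("SU800, SLC cache fresh (midpoint of 400-500)", 450)]:
    minutes = total_mb / mb_s / 60
    print(f"{label}: ~{minutes:.1f} min")
```

Same batch of files, anywhere from under a minute to over 13 minutes, depending entirely on whether that SLC cache holds out.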
So, I'm more a fan of SSDs that can maintain speed, or at least, not drop down so much when their SLC cache is exhausted.
I thought I read in a review that the 660p drops down not to 30MB/sec, but to something like 100MB/sec, which would still likely be acceptable to me, as it would still keep up with transfers coming from my NAS over GbE to my client SSD. (I'll have to scour reviews to try to find that spec, but now I'm thinking of dropping some coin next month on a pair of 2TB 660p drives. Why not.)
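The math behind that "still acceptable" call, as a quick sketch. The ~100MB/sec threshold is just what my NAS sustains over GbE, per above, not any Intel spec:

```python
# Can a post-cache sustained write speed keep up with my NAS over gigabit Ethernet?
# My NAS sustains ~100MB/sec over GbE (line rate is 125MB/sec theoretical).

NAS_SUSTAINED_MB_S = 100   # what my NAS actually pushes over GbE (my measurement)

def keeps_up(sustained_write_mb_s: float) -> bool:
    """True if the SSD's worst-case write speed won't bottleneck the transfer."""
    return sustained_write_mb_s >= NAS_SUSTAINED_MB_S

# post-cache write speeds mentioned in this thread
for speed in (30, 54, 73, 100):
    print(f"{speed:>3} MB/sec post-cache: {'keeps up' if keeps_up(speed) else 'bottleneck'}")
```

So a 100MB/sec floor is right on the line, while the 30MB/sec my SU800 and 600p fell to is nowhere close.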
One of the SusWrite 10s-interval benchmark listings shows it dropping down to 54 and 73MB/sec. That's still likely better than my 600p, which was actually one of the first TLC NVMe SSDs, at least from Intel, and suffered from some firmware issues.
https://ssd.userbenchmark.com/SpeedTest/557263/INTEL-SSDPEKNW512G8
Edit: I think that I paid close to $100 for my 256GB 600p, and now you can get a 1TB 660p with better specs, for around the same price. Sounds like a win to me.
Endurance-wise, I don't know if QLC is an issue yet or not.
Right now, I'm using an Adata SP550 240GB in my main rig, and I'm at 10TB written (TBW) so far, after nearly 2 years of usage. (Recent restore attempts kept failing because of my mesh Wi-Fi; I had to wire up again just to get restores to complete successfully, which caused my TBW to jump from about 8.2TB to 10TB in as little as a week.)
And finally, an update to the Adata SSD Toolbox (or maybe my jump to 10TBW) is showing my SSD lifespan gauge actually dropping a few pixels from 100% full now, so maybe 96-97% life left. Still plenty.
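Extrapolating from that gauge reading, just for fun. All of these numbers are my own guesses from the Toolbox readout, not Adata's rated TBW spec:

```python
# Rough endurance extrapolation from my own usage, not a rated spec:
# ~10TB written appears to have cost maybe 3-4% of the lifespan gauge over ~2 years.

written_tb = 10.0
wear_used_pct = 3.5        # midpoint of the 3-4% the gauge suggests (my guess)
years_in_service = 2.0

implied_total_tbw = written_tb / (wear_used_pct / 100)
tb_per_year = written_tb / years_in_service
years_remaining = (implied_total_tbw - written_tb) / tb_per_year

print(f"Implied total endurance: ~{implied_total_tbw:.0f} TBW")
print(f"At ~{tb_per_year:.0f} TB/year, roughly {years_remaining:.0f} years of life left")
```

Even if those gauge pixels are off by 2x, wear-out is clearly not what's going to kill this drive.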
I honestly think it's worse for an SSD (TLC and QLC) to just sit there with data on it, unpowered and unused, than it is being used and "wearing". It seems to lag less if it's consistently powered and used. SSDs were meant to be used, people!
Edit: So, yeah, you probably won't be disappointed if you get a 1TB or even a 2TB 660p (I would suggest the 2TB, obviously), but you could get better benchmarks and better endurance with a higher-end drive. Then again, if you're shopping for performance, PCI-E 4.0 is about to drop, and the 970 EVO/PRO is getting a little "old"; surely Samsung is cooking up some PCI-E 4.0 NVMe goodies for us in their product labs. The cost of the 970 EVO/PRO might go down once the PCI-E 4.0 drives come out. The high end is not the market for the 660p, so I doubt we would see its price drop much, other than NAND production-cost-related drops (QLC yields are getting better).
But my point is, maybe it's better to get a 660p today, get the space for the cost with NVMe performance, and then, if you REALLY WANT a "performance" SSD, WAIT for PCI-E 4.0 to drop. (Zen 2 / Ryzen 3rd-gen CPUs will have it.)
You can always get a cheap adapter bracket board to drop into a PCI-E slot for your spare NVMe drive, so you could continue to use your 2TB 660p, even if you get a Samsung 980 EVO PCI-E 4.0 1TB drive for $200-250. (This last sentence was purely speculative.)