The issue of slow I/O isn't an issue.
If PCIe 6 is actually delivered in 2021 (spec, not h/w), and RAM follows similarly, then I agree: I/O is actually starting to pick up the pace again. It'll be interesting to see how PCIe 6 is received given the move from NRZ to PAM4 (not strictly a move *from*, since backwards compatibility means supporting NRZ *and* PAM4), but bandwidth is certainly getting its day in the sun. It's about time, too.
But...
Also, they need to go for clocks again.
...doesn't follow from that. So long as you have sufficient compute throughput, bandwidth can be put to use. There's a ton of bandwidth on your graphics card, and its clocks are likely less than half your CPU's. OTOH, if you want to ... say ... run an IDS/IPS software firewall on your brand spanking new 400Gb fiber drop, and you need packet-passing latency measured in nanoseconds, then yes, you might need clocks. Heck, running a software firewall at 10Gb is probably a stretch right now, never mind the network speeds that a faster bus might unlock. Anyway, clocks are about latency and/or single-threaded behavior. Amdahl has something to say here, but you want to eye the applications that require extremely low latency.
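To put a rough number on the Amdahl point: once any serial fraction remains, piling on cores hits a wall fast, which is exactly why latency-bound workloads still crave clocks. A minimal sketch (the 95%/64-core figures are made up for illustration):

```python
def amdahl_speedup(parallel_fraction, n_cores):
    """Amdahl's law: overall speedup when only part of the work parallelizes."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_cores)

# Even 95%-parallel work caps out well below the core count:
print(round(amdahl_speedup(0.95, 64), 1))   # ~15.4x, not 64x
print(round(amdahl_speedup(0.95, 1024), 1)) # more cores barely help
```

The serial 5% dominates as cores grow; only a faster clock (or removing the serial part) shrinks it.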
However, I suspect we're going to see much higher parallelism before higher clocks. If I can make my firewall rule resolution happen in parallel and build a simple hardware bit search to find which rule (if any) applies first, I can make my software solution run tens of times faster. Call me when you see a path to 50GHz CPUs. So long as we can extract parallelism, I expect core scaling to continue to be an easier problem and a better solution in most cases than higher clocks.
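The parallel-rules-plus-bit-search idea can be sketched in a few lines. This is a toy illustration, not any real firewall's design: the rules, fields, and packet format are all made up, and the loop stands in for what would be independent comparators in hardware; the find-first-set at the end plays the role of the priority encoder.

```python
# Toy rule table, highest priority first: (field, value, action).
RULES = [
    ("dst_port", 22,    "drop"),
    ("dst_port", 80,    "allow"),
    ("proto",    "udp", "drop"),
]

def first_matching_rule(packet):
    """Evaluate every rule independently, pack results into a bitmask,
    then let a single find-first-set pick the highest-priority match."""
    mask = 0
    for i, (field, value, _action) in enumerate(RULES):
        if packet.get(field) == value:
            mask |= 1 << i                    # one match bit per rule
    if mask == 0:
        return None                           # no rule applies
    idx = (mask & -mask).bit_length() - 1     # lowest set bit = first rule
    return RULES[idx][2]

print(first_matching_rule({"dst_port": 80, "proto": "tcp"}))  # allow
```

The point is that rule evaluation carries no data dependencies between rules, so it parallelizes trivially; only the tiny find-first-set step is serial, which is why this scales with cores (or comparators) rather than clocks.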
Speaking of which, whatever happened to Zen 2 embedded V3xxx? It'd be nice to at least move to PCIe 4, never mind the theoretical PCIe 6 we're talking about here.