SOI doesn't affect clock rates or how much logic you can put down. SOI only affects the end outcomes: yields and lifespan.
OK, just going to point out the caveats here that should go along with that kind of statement lest some readers of this thread walk away with a completely wrong impression or understanding...
What you write about SOI vs bulk-Si in your post is only true if you are comparing completely comparable nodes, i.e. nodes designed and engineered to deliver essentially identical drive currents, capacitance, and leakage, such that removing the SOI element from the process flow changes only the electrical parameters that dictate yield and reliability.
In reality, the development cost of creating such a bulk-Si node versus the development cost of creating the electrically equivalent node via SOI is drastically (emphasis on drastically) different.
If you are developing a node that will see limited production volumes (AMD vs. Intel back when AMD still owned its fabs), then the cost savings that came from using SOI paid off in the end despite the slightly higher production cost per wafer, because so few wafers were produced on the node.
But the numbers completely flip-flop when you start talking volume production the likes of an Intel or a TSMC or a Samsung.
With those production numbers it then makes sense to take the hit of a higher node development cost to make an electrically equivalent bulk-Si node, because the amortized development cost is more than compensated for by the lower production cost per wafer over the lifetime of the node.
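To make the amortization argument concrete, here's a toy break-even sketch. All the dollar figures are hypothetical placeholders I made up purely to illustrate the shape of the trade-off (real node development and wafer costs are not public and vary wildly); the only point is that a cheaper-to-develop node with pricier wafers wins at low volume and loses at high volume.

```python
def total_cost(dev_cost, per_wafer_cost, wafers):
    """Lifetime cost of a node: one-time development plus per-wafer production."""
    return dev_cost + per_wafer_cost * wafers

# Hypothetical numbers, for illustration only:
# SOI route:     lower development cost, higher cost per wafer (SOI substrates)
# bulk-Si route: higher development cost, lower cost per wafer
soi  = lambda w: total_cost(1_000_000_000, 6_000, w)
bulk = lambda w: total_cost(2_500_000_000, 5_000, w)

for wafers in (100_000, 1_000_000, 5_000_000):
    cheaper = "SOI" if soi(wafers) < bulk(wafers) else "bulk-Si"
    print(f"{wafers:>9,} wafers -> {cheaper} is cheaper overall")
```

With these made-up figures the crossover sits at 1.5 million wafers (the $1.5B development-cost gap divided by the $1,000 per-wafer gap), which is exactly the AMD-volume vs Intel/TSMC/Samsung-volume split described above.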
But you can't just say "SOI doesn't affect clockspeeds" without capturing the fact that in a limited R&D budget scenario (which all real-world scenarios are), going SOI most certainly does affect clockspeeds, because it leaves the R&D team more budget to develop higher-Idrive xtors than they would otherwise have been able to develop had they been tasked with creating a bulk-Si process flow.