Wondering if that results in having to make additional compromises in the process in order to make it more 'general'.
Intel might be unique (it would not be the first time), but the model adopted by the rest of the industry for IDMs that need dual-purpose (or multi-purpose, as in 3 or more) flavors of process technology on a given node is to delineate them by xtor targets and designs and divvy up the work into so-called sub-nodes.
So you have a low-power sub-node, a mid-power sub-node, and a high-performance sub-node.
They'll all share the same BEOL (in its usual Lego-connect fashion, skipping some metal levels if they are not needed, and so on), and they can mix and match xtors (adding mask sets) if need be.
But you'll have one FEOL that is entirely designed for the low-power sipper ICs and another FEOL that is entirely designed for the high-performance stuff.
The juggling and compromises that come into play are more in terms of priorities for development milestone timelines: who gets the R&D budget to procure more R&D wafers for faster/wider learning cycles on the pilot line in the fab, etc.
It's not the kind of compromise I think you have in mind, where the high-performance guys and the low-power guys drive a single development program that results in a universal xtor that can be a jack of all trades.
At least this is my experience from the rest of the industry (specifically TI, AMD, NatSemi, Moto/Freescale, Philips/NXP, Lucent/Agere, UMC, TSMC, IBM, Chartered, SMIC, Samsung), but I readily admit I have zero information or experience about how Intel is managing this side of their new dual-purpose process development model. They do so many other things their own special way; who knows, maybe this is something new for the industry to get used to as well!