But imagine if, 100 or more years from now, computers still had hardware to rename registers, deal with 64KB segments, real mode, etc. Truly crazy. And that doesn't just apply to CPUs but also to OSes, C-syntax programming languages, etc.
You're talking about apples and oranges, though.
Everyone has hardware to rename registers. It would be stupid not to. The idea that there should be enough registers to never need renaming was a dumb one from early RISCs, which wanted to expose the processor's workings in the ISA. That only helps in narrow, worthless cases (worthless because the same things can be done with fewer registers plus renaming on other ISAs).
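For anyone who hasn't seen why renaming beats "just add more architectural registers": the point is that only *true* data dependencies have to serialize execution. A toy sketch (purely illustrative, not modeled on any real microarchitecture; all names here are assumptions):

```python
# Register renaming removes false (WAR/WAW) dependencies: every write to
# an architectural register gets a fresh physical register, so only true
# read-after-write dependencies remain to constrain execution order.

def rename(instructions, num_phys=64):
    """instructions: list of (dest_arch_reg, [src_arch_regs])."""
    alias = {}                    # architectural -> current physical register
    free = iter(range(num_phys))  # free list of physical registers
    renamed = []
    for dst, srcs in instructions:
        # Sources read the *current* mapping, preserving true dependencies.
        phys_srcs = [alias.get(s, s) for s in srcs]
        # The destination gets a brand-new physical register, so a later
        # write to the same architectural register can't conflict with an
        # earlier, still-pending read (WAR) or write (WAW).
        p = f"p{next(free)}"
        alias[dst] = p
        renamed.append((p, phys_srcs))
    return renamed

# r1 = r2 + r3;  r4 = r1 * 2;  r1 = r5 - r6   (the second write to r1
# is a WAW/WAR hazard on an in-order machine without renaming)
prog = [("r1", ["r2", "r3"]), ("r4", ["r1"]), ("r1", ["r5", "r6"])]
for dst, srcs in rename(prog):
    print(dst, "<-", srcs)
```

The two writes to `r1` land in distinct physical registers, so the third instruction can issue before (or in parallel with) the first two; the programmer still sees a small architectural register file.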
Very few CPUs do segmentation, mostly x86 ones. Its common use is running ancient software, hence it being ditched for x86-64: software lasts longer than hardware. x86 users today don't even get to use such features (long-mode segmentation is not segmentation as we generally consider it). As 32-bit binaries die off, so will the last vestiges of it all, and good riddance.
Real mode, and its equivalents, are still quite common in microcontrollers, and they aren't going anywhere any time soon. Like segmentation, we don't get to use it, haven't for many years, and nobody has looked back (I stopped in either late 1996 or early '97... most everyone else got stuck until 2001, the poor sods!).
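For those who never had the pleasure, the 64KB-segment scheme being buried here works like this: a 16-bit segment is shifted left 4 bits and added to a 16-bit offset, giving a 20-bit physical address, so many segment:offset pairs alias the same byte. A minimal sketch (the video-memory address is the standard PC text-mode location; everything else is illustrative):

```python
# Classic x86 real-mode address translation: physical = (segment << 4) + offset,
# wrapped to the original 20-bit address bus.

def real_mode_addr(segment, offset):
    return ((segment << 4) + offset) & 0xFFFFF  # 20-bit wraparound

print(hex(real_mode_addr(0xB800, 0x0000)))  # CGA/VGA text buffer: 0xb8000
print(hex(real_mode_addr(0xB000, 0x8000)))  # the very same byte, different pair
print(hex(real_mode_addr(0xFFFF, 0x0010)))  # wraps to 0x0 (the infamous A20 quirk)
```

The aliasing and the 20-bit wrap are exactly the kind of baggage that had to be emulated long after anyone wanted it.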
However, 100 years from now, I would expect the very basic underpinnings of how computers are implemented to have been uprooted, leaving theoretical models of deterministic computation as the only survivors of the pre-WWII era (the Lambda Calculus, for instance). Everything you know today is predicated upon bandwidth and latency limitations, many of which are physical limits. Solve those, and our assumptions about how the low-level implementation of scalar computation should be done will go *poof*.