Intel's recent acquisition of the Alpha team got me thinking about where the future of high-performance CPUs lies.
The RISC/CISC distinction has blurred considerably over the past five years, with x86 CPUs adopting many RISC philosophies internally. While x86 CPUs still lag in FP performance on SPEC (though the difference is no longer that significant), integer performance is more-or-less identical, and they cost a fraction of what RISC CPUs do. Will the x86 instruction set eventually hit a performance wall (discounting clock-speed increases)?
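To see what "RISC philosophies inside a CISC chip" means in practice, here is a rough sketch in C. The micro-op breakdown in the comments is a hypothetical simplification, not any vendor's actual decode scheme.

```c
#include <stdio.h>

/* A read-modify-write that x86 compilers can emit as a single
   CISC instruction, e.g.  add [counter], eax  */
int counter = 0;

void bump(int amount)
{
    /* One x86 instruction, but inside a modern x86 core it is
       decoded into RISC-like micro-ops, roughly (hypothetical,
       simplified):
           uop1: load   tmp <- [counter]
           uop2: add    tmp <- tmp + amount
           uop3: store  [counter] <- tmp
       The micro-ops then flow through the same kind of pipeline
       a RISC design would use. */
    counter += amount;
}

int main(void)
{
    bump(5);
    printf("%d\n", counter);  /* prints 5 */
    return 0;
}
```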
On to RISC... many of the design philosophies that defined the RISC revolution of the '80s seem to have been abandoned:
1. Keep the instruction set small, so that hardwired control can be used instead of microcode. Current RISC instruction sets are much larger than the original MIPS R2000's, especially after the addition of SIMD extensions.
2. Only add a function in hardware if it shows a substantial increase in performance without a substantial increase in transistor count. The original MIPS design (the acronym stands for Microprocessor without Interlocked Pipeline Stages) attempted to use software to insert pipeline bubbles whenever there was a data or branch hazard; this proved infeasible, and later MIPS chips added hardware interlocks. Early RISC designs also tried to rely on compilers to schedule instructions in the pipeline, but compiler technology was not advanced enough and performance lagged (see the scheduling sketch after this list). Current high-end RISC designs implement a number of traditionally software-level functions in hardware, at huge expense in die area: out-of-order execution and retirement, large re-order buffers, register renaming, etc.
3. With most scheduling functions handled in software, die size and design time will be minimized. RISC dies are now huge, and most companies cannot keep up with the design and production costs. Alpha's EV68 and EV7 have slipped by years and the team has just been bought by Intel, while SGI, HP, and IBM have dropped or scaled back their RISC designs in favor of Itanium for their workstations and servers. IBM's Power4 is taking the only route left: the extreme high end, with ultra-scalability for mainframes and supercomputers. The only truly successful RISC platform has been Sun's UltraSPARC (despite its lackluster performance), because Sun can market the entire hardware/software package.
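To make point 2 concrete, here is a minimal sketch of the software scheduling early RISC relied on, written in C as a stand-in for the assembly a compiler would actually reorder. The MIPS mnemonics in the comments and the one-cycle load delay are illustrative assumptions, not a cycle-accurate model.

```c
#include <stdio.h>

/* On an early MIPS-style pipeline, a loaded value is not ready for
   the instruction immediately after the load (the "load delay slot").
   Filling that slot was the compiler's job. */

int unscheduled(const int *a, const int *b)
{
    int x = a[0];   /* lw   x, 0(a)                                  */
    int y = x + 1;  /* addi y, x, 1 -- uses x right after the load;
                       without hardware interlocks the compiler must
                       insert a NOP here, wasting a cycle             */
    int z = b[0];   /* lw   z, 0(b)                                  */
    return y + z;
}

int scheduled(const int *a, const int *b)
{
    int x = a[0];   /* lw   x, 0(a)                                  */
    int z = b[0];   /* lw   z, 0(b) -- independent load moved into
                       the delay slot, so no bubble is needed         */
    int y = x + 1;  /* addi y, x, 1 -- x is ready by now              */
    return y + z;
}

int main(void)
{
    int a[] = {10}, b[] = {20};
    /* Both orderings compute the same result (31); only the
       pipeline utilization differs. */
    printf("%d %d\n", unscheduled(a, b), scheduled(a, b));
    return 0;
}
```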
With these highlights in mind, will RISC's niche market continue to shrink, or does it have a future?
As for VLIW, the only CPUs with mass-market potential seem to be Itanium and its IA64 successors. Crusoe will likely remain a laptop CPU, and Sun's MAJC is aimed at high-end scalability. But a lot of Itanium's performance depends on its compilers. When (if ever) will IA64 compilers mature to the point where Itanium's integer performance makes it feasible on the desktop?
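As one concrete example of that compiler burden, here is a minimal sketch of the if-conversion an IA64 compiler is expected to perform. The predicate-register behavior described in the comments is IA64-style but simplified, and the bit-mask trick is just a portable C stand-in for hardware predication.

```c
#include <stdio.h>

/* IA64 pushes branch removal onto the compiler: short branches are
   "if-converted" into straight-line code guarded by predicates, so
   the hardware never has to predict them. */

int max_branchy(int a, int b)
{
    if (a > b)   /* compiled naively: a conditional branch that the
                    hardware must predict                             */
        return a;
    return b;
}

int max_predicated(int a, int b)
{
    /* Branch-free form. On IA64 the compare would set a pair of
       predicate registers guarding two moves that issue together;
       here a mask plays the same role. */
    int take_a = -(a > b);               /* all-ones mask if a > b */
    return (a & take_a) | (b & ~take_a);
}

int main(void)
{
    printf("%d %d\n", max_branchy(3, 7), max_predicated(3, 7));
    /* prints: 7 7 */
    return 0;
}
```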
Discuss.