Commenting on the link that Rand put up on CPU validation: the ZDNet article is interesting, but it mixes up two commonly confused aspects of CPU validation: verification of the design, and validation of the product. Verification checks whether or not the chip's design works correctly against the architectural model. It is basically checking for bugs: mismatches between the architectural model/specification and the actual chip operation. Validation is checking for manufacturing defects, i.e. Wingz falls asleep at his fab machine and creates a defect that kills just one transistor on the chip.
These are fundamentally different tasks, and the ZDNet article only really discusses the former while skipping over the latter. The former, verification, starts early in the design process. You are attempting to find "corner cases": conditions that 'normal' code is unlikely to exercise but that can still occur in rare combinations of inputs and machine state. You do this long before the chip actually hits manufacturing, by running simulation models (in software or on hardware emulators), and you keep doing it after the chip has shipped.
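If you want a feel for what that looks like, here's a toy sketch in C. Everything in it is made up for illustration (real verification runs against RTL simulators with enormous stimulus sets, not a C function), but the shape is the same: a reference model that encodes the spec, a "device under test" with a deliberate corner-case bug, and a mix of directed corner-case and random stimulus to flush the mismatch out.

/* Toy sketch of design verification: compare a hypothetical "device
 * under test" against a reference (architectural) model on directed
 * corner-case inputs and random inputs. All names here are made up. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Reference model: what the spec says an 8-bit add does (wraps around). */
static uint8_t ref_add(uint8_t a, uint8_t b) { return (uint8_t)(a + b); }

/* Hypothetical buggy implementation: mishandles the carry-out corner case. */
static uint8_t dut_add(uint8_t a, uint8_t b) {
    unsigned sum = (unsigned)a + b;
    return (sum > 0xFF) ? 0xFF : (uint8_t)sum;  /* bug: saturates instead of wrapping */
}

int main(void) {
    /* Directed corner cases first: the inputs "normal" code rarely hits. */
    const uint8_t corners[] = { 0x00, 0x01, 0x7F, 0x80, 0xFE, 0xFF };
    size_t n = sizeof corners / sizeof corners[0];
    for (size_t i = 0; i < n; i++)
        for (size_t j = 0; j < n; j++)
            if (dut_add(corners[i], corners[j]) != ref_add(corners[i], corners[j]))
                printf("MISMATCH: %02X + %02X -> dut %02X, ref %02X\n",
                       corners[i], corners[j],
                       dut_add(corners[i], corners[j]),
                       ref_add(corners[i], corners[j]));

    /* Then random stimulus to sweep the rest of the input space. */
    srand(1);
    for (int k = 0; k < 1000; k++) {
        uint8_t a = (uint8_t)rand(), b = (uint8_t)rand();
        if (dut_add(a, b) != ref_add(a, b))
            printf("MISMATCH: %02X + %02X\n", a, b);
    }
    return 0;
}

The directed corner cases catch the saturating-add bug immediately; pure random stimulus would also find this one eventually, but the whole craft of verification is picking the directed cases for bugs random testing would take forever to stumble into.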
The latter, validation, is a totally different affair. You are trying (usually) to switch as many transistors on and off as you can in as short a time as possible. Usually the common stuck-at fault model is used: you assume that a manufacturing defect leaves the faulty transistor stuck at a high or low value, so you try to toggle every transistor on the chip and observe the result. But in reality you cannot control and observe every single transistor from the chip's pins (the test space blows up exponentially, so exhaustive testing is provably infeasible), so you get the best fault coverage you can.
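Here's the flavor of the validation side, again as a toy C sketch with a made-up three-gate circuit: inject each possible stuck-at-0/stuck-at-1 fault at each node, and count how many of them some vector in a small test set actually detects at the output. Real test-generation tools do this same bookkeeping, just over netlists with millions of nodes.

/* Toy stuck-at fault simulation: for each node in a tiny made-up
 * circuit (y = (a & b) | !c), force the node to 0 or 1 and check
 * whether any test vector makes the output differ from the
 * fault-free circuit. */
#include <stdio.h>

enum { NODE_A, NODE_B, NODE_C, NODE_N1, NODE_N2, NODE_Y, NUM_NODES };

/* Evaluate the circuit, optionally forcing one node to a stuck value.
 * fault_node == -1 means fault-free evaluation. */
static int eval(int a, int b, int c, int fault_node, int stuck_val) {
    int v[NUM_NODES];
    v[NODE_A] = a; v[NODE_B] = b; v[NODE_C] = c;
    if (fault_node == NODE_A) v[NODE_A] = stuck_val;
    if (fault_node == NODE_B) v[NODE_B] = stuck_val;
    if (fault_node == NODE_C) v[NODE_C] = stuck_val;
    v[NODE_N1] = v[NODE_A] & v[NODE_B];
    if (fault_node == NODE_N1) v[NODE_N1] = stuck_val;
    v[NODE_N2] = !v[NODE_C];
    if (fault_node == NODE_N2) v[NODE_N2] = stuck_val;
    v[NODE_Y] = v[NODE_N1] | v[NODE_N2];
    if (fault_node == NODE_Y) v[NODE_Y] = stuck_val;
    return v[NODE_Y];
}

int main(void) {
    /* A small hand-picked test set; exhaustive would be all 8 vectors. */
    const int tests[][3] = { {1,1,1}, {0,1,1}, {1,0,1}, {0,0,0} };
    const int ntests = sizeof tests / sizeof tests[0];
    int detected = 0, total = 0;

    for (int node = 0; node < NUM_NODES; node++) {
        for (int stuck = 0; stuck <= 1; stuck++) {
            total++;
            for (int t = 0; t < ntests; t++) {
                int a = tests[t][0], b = tests[t][1], c = tests[t][2];
                if (eval(a, b, c, node, stuck) != eval(a, b, c, -1, 0)) {
                    detected++;   /* some vector exposes this stuck-at fault */
                    break;
                }
            }
        }
    }
    printf("fault coverage: %d/%d stuck-at faults detected\n", detected, total);
    return 0;
}

For this little circuit those four vectors happen to hit all 12 stuck-at faults; on a real chip the whole game is pushing that coverage number as close to 100% as you can with as few test cycles as possible.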
So there are actually two goals: first, that there aren't any bugs (errors that affect all chips), and second, that no defective chips get shipped (errors that affect only a few chips).
I'm only commenting on it because I noticed that the ZDNet article seems to have mixed these two up, and because I seem to spend all my days lately doing validation work.