It could've been the other way round. They had the plan for another server processor from the beginning. That's how business is expected to happen when it is full of MBAs: the R&D teams and engineers have to deliver. But with something as complex as a CPU, where you still have to find solutions that are already scheduled and tied to milestones, there can always be delays, buggy features, wrong assumptions (heuristics), imprecise simulations, etc. This is the human factor at work at every level.
You are correct here, and being unable to deal with these "human factors" also falls in the category of incompetence.
They left some projects unfinished, like Krishna, Wichita, Skybridge, and others, which was related either to GF processes, market situations, or changed plans.
Which begs the question: what prompted the poor market conditions or change of plans? It doesn't take a big analytical leap to realize that all the markets served by AMD's CMT chips reported a precipitous drop in AMD sales and market share, and that even the cat cores, a low-cost initiative, fared better against the competition.
When did they flush the entire pipeline? They left some projects unfinished, like Krishna, Wichita, Skybridge, and others, which was related either to GF processes, market situations, or changed plans. The last big µarch cancellation I know of is David Christie's (and even Jim Keller's) original K8 design. Nothing is really missing in the construction line. OK, the only thing I might miss from the original µarch roadmap is the "greater parallelism" of SR, but that is open to interpretation (in the end, +0.1% is "greater"). Other topics like ASF were developed but didn't get a go - possibly due to discontinuing the server line. Otherwise they'd have announced it, since ASF is nothing else than TSX.
I'm not really sure what to make of this post. You, being an informed engineer, surely know that a sizable part of the R&D budget and project schedule of a given chip goes into physical design, testing, and validation, so the fact that AMD finished the uarch but didn't develop a server chip doesn't mean the pipeline wasn't flushed, especially when the consumer chips don't use anything even close to the interconnect and the better cache architecture AMD would need to go beyond 8C on a die and stay competitive. Basically, AMD scrapped all they had and went back to the drawing board with Zen.
Chuck Moore proposed throughput (more threads) as the solution. And he (or someone else) decided to take the risk of not getting enough compiler support and of lower performance on existing code and on code compiled for other archs. That was a decision, and it didn't play out. But that's economic life in a perfection-seeking world. Nobody has a crystal ball. And only bigger companies have enough clout to ensure the software crowd follows.
And that's why sites like Anandtech were expecting Bulldozer to shine in the server market: they all hopped onto the AMD marketing bandwagon instead of doing the simple TCO analysis any junior analyst could manage. The real issue wasn't poor performance (performance was "ok", as stated by AMD marketing), but perf/watt, which was (and still is) atrocious.
On the other hand, I wouldn't think so little of AMD's engineers and management as to believe they would risk their 10-year project for the sake of a few hundred million dollars in compiler and software support. The real answer is that AMD got the numbers, got the TCO costs, and likely decided that no amount of money spent on software support could make up for the shortcomings of the chip. It was a problematic chip with a very problematic development; the first version was canned in 2009, so they had to know they had issues, even if their internal controls weren't working as advertised.
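To put rough numbers on that TCO point, here's a minimal back-of-envelope sketch in Python. All figures (prices, wattages, performance scores, electricity price, PUE) are made-up placeholders, not actual Opteron or Xeon data; the only point is that a chip with worse perf/watt can lose on total cost per unit of performance even when the box is cheaper up front.

```python
# Illustrative TCO comparison: every number below is a placeholder,
# not a real server or CPU figure.

def server_tco(purchase_price, avg_power_w, perf_score,
               years=4, kwh_price=0.10, pue=1.6):
    """Return (total cost of ownership, cost per unit of performance)."""
    kwh = avg_power_w / 1000 * 24 * 365 * years * pue   # lifetime energy incl. cooling overhead
    energy_cost = kwh * kwh_price
    tco = purchase_price + energy_cost
    return tco, tco / perf_score

# Hypothetical CMT-based box: cheaper up front, worse perf/watt.
cmt = server_tco(purchase_price=4000, avg_power_w=450, perf_score=100)
# Hypothetical competitor: pricier, but draws less power for more performance.
rival = server_tco(purchase_price=5500, avg_power_w=300, perf_score=120)

print(f"CMT box:   TCO ${cmt[0]:,.0f}, $/perf {cmt[1]:.1f}")
print(f"Rival box: TCO ${rival[0]:,.0f}, $/perf {rival[1]:.1f}")
```

With these made-up inputs the cheaper box ends up costing more per unit of performance over four years, which is the kind of result a junior analyst's spreadsheet would have surfaced quickly.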