Given that I don't think AMD management is stupid anymore, my bet is that they're either sandbagging on GPUs (a bigger GCD or multi-GCD product is coming, but has been well hidden), or they tried to do multi-GCD and failed.
Maybe they are sandbagging. Wouldn't be the first time.
If AMD took the crown by 30% with a ~500mm2 GCD (very possibly a conservative estimate, since Nvidia wouldn't bother pushing power consumption so hard if they expected to lose that badly), then reversing those mindshare gains in a single generation would require a very one-sided generation indeed. If AMD lost the next generation by only a small margin, they would still retain much of that mindshare (e.g. 9700/9800 Pro -> X800 XT/X). But what if AMD actually won the next generation too? Even a moderate win would probably cement Nvidia as the second-rate brand. You'd effectively see the G80->GT200 / R600->RV770 timeframe, but in reverse.
You have no evidence whatsoever that producing a ~500mm2 (or ~425mm2, or whatever) GCD would somehow cause AMD to lose in the server space. Pure speculation. You could also say that AMD shouldn't have designed Phoenix Point because they could have put that effort into servers, and that would be just as ridiculous. Sure, TNSTAAFL is a thing, but also realize that AMD designs an enormous number of different dies. Capturing the lion's share of consumer GPU mindshare (and increasing ASPs on every other GPU) is easily worth the effort.
I didn't say that making a 500mm2 GCD would make AMD lose in the server space. I said that making a 500mm2 GCD would mean fewer wafers for CPUs in the server space. Wafer allocations are all set far in advance, so AMD has to maximize profit on the budget spent on cutting-edge wafers. AMD was already supply constrained in the server space and couldn't grab market share as fast as they'd like; customers were waiting months just to get Milan. With a dominant server product that will sell like gangbusters, it makes sense to allocate most of the N5 wafers to Genoa and other high-margin products; consumer GPUs have traditionally not been as high margin as enterprise parts. As for capturing the lion's share of consumer GPU mindshare, targeting the performance tiers up to the RTX 4090 likely covers >80% of the market. It's probably even higher, i.e. I'd guess fewer than 10% of consumers buy the Titan or xx90 Ti tier. As for ASPs, RDNA 3 may not raise them much, if at all, but it should cost AMD less to make, so the profit margin is higher.
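To make the wafer-economics tradeoff concrete, here is a rough back-of-the-envelope sketch. The dies-per-wafer formula is the standard gross approximation (it ignores defect density and scribe lines), and every price and die-size figure below is a made-up placeholder for illustration, not actual AMD data:

```python
import math

def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """Standard gross dies-per-wafer approximation (ignores defects and scribe lines)."""
    r = wafer_diameter_mm / 2
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

# Hypothetical numbers, for illustration only:
WAFER_MM = 300
candidates = [
    # (label,            die area mm2, assumed profit per die in $)
    ("500mm2 GPU GCD",   500,          300),
    ("~72mm2 server CCD", 72,           80),
]

for label, area, profit in candidates:
    n = dies_per_wafer(WAFER_MM, area)
    print(f"{label}: {n} dies/wafer -> ${n * profit} assumed profit/wafer")
```

Under these placeholder numbers, a wafer of small server CCDs out-earns a wafer of big GPU dies several times over, which is exactly the allocation pressure described above. Real margins could shift the outcome either way, but the shape of the tradeoff (many small high-margin dies vs. few large lower-margin dies per fixed wafer budget) stays the same.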
Lastly, consider that AMD's problem in the server space isn't performance, it's mindshare. And mindshare is a force multiplier across markets: winning in one market increases your mindshare in the others. If Ryzen were performing like Bulldozer, it would be harder to market RDNA, and so on.
Agreed that mindshare in one market has a positive impact on other markets. Having an overclocked N31 be roughly equivalent to an RTX 4090 at 450W is pretty good already in my opinion, especially if N31 is a few hundred dollars cheaper.
Making a larger GCD is not "taking the chiplet strategy to the max."
See my response in my post above regarding 500mm2 dies.