Well, the 390/390X and Fury X were the first glimpse of AMD's downward spiral. Remember that the 390X was pretty much just a rebranded 290X with more VRAM and no reference card. In fact, AIB 290X cards perform the same as the 390X and use the same power. The 290/290X (Hawaii) was a pretty good card, as it was competitive in performance and price. Yes, it used a lot of power, but at least it could keep up and was cheap. The reference card and mining kind of destroyed it, but for patient people it offered the best performance/$ (those sub-$250 deals) that only Polaris managed to barely beat several years later. The 390X was already a huge fail IMHO, as it was a rebrand and the price was raised.
What is wrong with Vega? It is less efficient than Fiji despite going from 28nm planar to 14nm FinFET. I actually call it a huge achievement to regress in efficiency while making a 1.5-node jump. That's why Vega is a fail.
I don't know if it's the political climate, people's obsession with apocalyptic/zombie stories, or what, but holy carp, there are a lot of people that seem to be doomsday prophets.
Not sure how you can even make this argument, since wasn't most of the 2x0 lineup a rebrand? The 290/X was one of the few actually new chips. The 390 was hardly the "first glimpse" of AMD doing that. What you're ignoring is that even with the increased price, thanks to them being AIB-only and the performance progress of GCN, the performance was good enough that lots of sites recommended them, even up against the 980/970, because they were competitive (look at all the people talking about how the 780/Ti was better than the 290/X at launch, how that changed, and how the 290/X consistently outperforms the 970/980 these days).
The 290X was not cheap. It was $550 MSRP, and miners pushed that up to $800+. It only became cheap after miners dumped them (which we've repeatedly been told was the worst thing ever for AMD, the GPU market, but most of all for the people that "couldn't buy even when I had the money to", many of whom didn't lose out anyway, either buying a 970/980 or, like you said, buying a cheap 290/X when they got dumped at ridiculously low prices).
Fiji clearly was the focus of their resources. It was hardly the disaster people act like it is, either. Not saying it was stupendous, but most of the stuff people are saying about Vega was said about Fury (early on, it was often more in line with the 980 while consuming more power, ~33% more in Crysis 3 in the AT review; the Fury Nano offered similar perf/W and close to the performance of the 980, but was also more expensive). Fiji improved with time and has held up quite well.

Vega has bigger architectural differences than Fiji did. I'd guess much of Fiji's progress came from dealing with the 4GB memory constraint, which Vega won't have to deal with as much, but I'd guess the SRAM/cache changes and the change to the bus will require some work. There's something weird about HBM2 compared to HBM1: look at HBM2 chips, they're like twice as wide, so either each stack is wider and less tall, or they've got two stacks side by side inside the overall package. Not to mention that the change to memory addressing that HBC brings will require as much work, if not more.

And there are other changes that might not necessarily provide gaming benefits, but look at even Packed Math, where people are saying it's an enterprise feature while game developers are talking it up. Granted, that's part of AMD's marketing, but it absolutely means it has potential for gaming benefits; much of the compute stuff that AMD added with GCN that was allegedly for enterprise/compute has turned out to have gaming benefits as well.
AMD is betting big on Infinity Fabric, and I have a hunch that plenty of the extra transistors over Fiji went toward adding it to Vega. It might be something that only benefits certain Vega products (likely the server/compute cards), but it is absolutely a feature worth AMD adding, even if it hinders their low-margin consumer gaming cards.
Vega obviously isn't executed as well as Nvidia is doing with Pascal, but claiming it's a total failure is ridiculous. Your "evidence" that it's a total fail comes from comparing it to the only two (possibly three, if you count Intel, but look at how much they've failed at GPU-like complexity) companies even trying to make chips this complex. There are literally billions of opportunities for these chips to be screwed up. Basing your judgment on your specific wants is fine (so sure, it not offering similar outright performance, perf/W, and/or perf/$ in average games is a fair complaint), but some of you are saying that every single thing about this chip is a complete disaster that spells the end of AMD's GPUs.
So, once again, we'll see exactly where it ends up at launch. It could definitely be a dud (it seems to be for gaming), but people acting like this is the end of AMD GPUs, that they need to completely change everything, that it won't improve in performance, etc., are getting ahead of the evidence; all of that remains to be seen. We have fairly recent history that disputes a lot of the doom and gloom about GCN and AMD's architecture, which people just dismiss. Some have been doing this for years.
If AMD bothers you that much, then just buy Nvidia's very solid offerings and spare us the doomsday prophecies, or jilted-lover syndrome, or whatever is causing this rampant bitterness.