Even _if_ AMD can deliver a custom core on 14XM by 2016, Intel will be transitioning to 10nm by then. This really doesn't look good for AMD or any other ARM server company. Dang, I want to root for AMD, but Intel has made this a game where process node improvement is destiny.
At the top end of the market, such as Xeon chips, Intel can charge very high prices (with high margins), which helps pay the huge development and production costs of the latest/smallest node (until that node becomes more established and cheaper later on).
But Microservers are really intended to be relatively low-cost items, which may be best made on slightly older nodes, because those are often the cheapest until the latest (smallest) nodes have dropped in price.
I.e. not only do the ARM Microserver producers save a lot of money on chip development, but production on an older, more established node can be a lot cheaper than the latest "bleeding edge" smallest node.
Graphics chips used to be made on older, larger nodes, because they were significantly cheaper to make, and the graphics chips could cope with the lower frequencies and higher power consumption.
If the intended use of a Microserver needs lots of performance, then the buyer probably needs a proper, full-sized server. Microservers are a relatively new market, one that may prefer lowest cost over outright performance.
Analogy: people buying $175 pre-built computers are NOT bothered about exactly how fast they are; they just want to read email, browse the internet, etc.
Maybe Microservers will be the same?