They do have the R&D capital. But it won't make financial sense if their products don't have the volume to justify the higher expense of a "10" nm node compared to "14"/"16".
You're assuming the costs will be substantially higher, and will remain that way. My understanding was that the jump from planar to FinFET was the big hurdle for the foundries, and once that was crossed, the next die shrink shouldn't be as agonizing a wait as this one was. Of course, there will be another wall further down the line, but 16/14nm->10nm should be a fairly routine die shrink as these things go. Will TSMC and Samsung want to recoup their investment? Of course. Will smartphone SoCs be the first products, before GPUs come along? Sure. But does that mean we'll never see 10nm GPUs? No, it doesn't - assuming both AMD and Nvidia are still in the business, they will both have to go there, or risk the other one getting there first and leaving them stuck with an obsolete, uncompetitive lineup.
You only have to look at the 20 nm planar nodes, which virtually everyone except Apple ignored - customers couldn't justify the expense and had to wait until the foundries added FinFETs to get costs at least comparable to the 28 nm nodes.
The TSMC 20nm planar process was a mess - there were reasons other than cost that it was never used for GPUs and was skipped even by most smartphone SoC makers. For instance, it was reported that in addition to poor yields, the power characteristics made the process completely unsuited for GPUs.
This article is informative:
The essential difficulty of the 20 nm planar node appears to be a lack of power scaling to match the increased transistor density. TSMC and others have successfully packed more transistors into every square mm compared to 28 nm, but the electrical characteristics did not scale proportionally. Yes, there are improvements per transistor, but when designers pack all those transistors into a large design, TDP and voltage issues start to arise. A higher TDP means more power to drive the processor, which in turn means more heat to dissipate. The GPU guys probably looked at this and figured out that while they could achieve a higher transistor density and a wider design, they would have to downclock the entire GPU to hit reasonable TDP levels. Add in yield and binning concerns for the new process, and the advantages of going to 20 nm would be slim to none at the end of the day.
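To make that trade-off concrete, here's a rough back-of-envelope sketch. All the scaling factors below are illustrative assumptions on my part, not measured 20 nm figures; the point is only the shape of the constraint - if density roughly doubles but capacitance and voltage barely improve, clocks have to come down to stay inside the same TDP:

```python
# Rough dynamic-power model: P ~ N * C * V^2 * f
# (transistor count, switched capacitance per transistor, supply voltage, clock)
# All scaling factors are made-up illustrative numbers, not real process data.

density_gain  = 1.9   # assumed transistor-count gain vs 28nm at equal die area
cap_scale     = 0.75  # assumed capacitance reduction per transistor
voltage_scale = 0.95  # assumed (poor) supply-voltage scaling on 20nm planar

# Relative chip power at an unchanged clock frequency
power_ratio = density_gain * cap_scale * voltage_scale ** 2
print(f"Power at the same clock: {power_ratio:.2f}x the 28nm chip")

# Clock multiplier needed to fit back into the same TDP
clock_scale = 1 / power_ratio
print(f"Clock must drop to ~{clock_scale:.2f}x to hold TDP constant")
```

With those made-up numbers you end up with roughly 1.29x the power at the same clock, so the clock has to fall to about 0.78x - more transistors, but every one of them running slower, which is exactly the bargain a GPU designer would balk at.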
We know that AMD tested 20nm GPU designs (I think they even got as far as tapeout), but they never made it to market. If AMD had already paid the big up-front cost of design and tapeout, it would make no sense for them not to release the products if they were at all viable - higher wafer costs alone wouldn't have changed that. Don't you think they would rather have charged premium prices for new, smaller chips while Nvidia was still on 28nm? Don't you think Apple would have liked something better than Cape Verde (a 2012 GPU) to put in their 2015 MacBook Pro? Do you think the across-the-board 300-series rebrands for 2015 were Plan A?
20nm was a dud of a process. AMD and Nvidia wanted to use it for GPUs, but simply couldn't.