That's more or less what I'd guess at. I was thinking they might only do the 8 core (single module) Ryzen parts for now, then wait to launch the 12 and 16 core ones alongside the 500 series motherboards at CES (early boards are likely to be more premium, but 400 series boards should be able to handle 16 cores that stay within the Ryzen 2000 series power envelope, with probably only the highest clocked 16 core chips pushing beyond that and needing higher spec boards). Maybe even brand them as a version of Threadripper (name the TR4 ones Threadripper XL, or the Ryzen ones Ryzen TR). Or maybe just wait until Intel announces their 10 core parts and/or other CPUs to steal some thunder (which Intel might do at CES).
I'm curious how long ago this Vega II stuff was decided, as I got the strong impression it was settled quite some time ago (before Vega 20 ever entered production, basically not long after Vega 64 came out and turned out to be a bit of a dud in gaming, which led to them focusing Vega 20 on enterprise rather than tweaking the rest of the chip for graphics, and putting their graphics development focus into Navi). I thought they outright said quite some time ago that Vega 20 was enterprise and Navi would be their consumer chip. I think it started with them no longer listing 7nm Vega consumer chips on their roadmap (and I believe they also removed Navi 10x2, or whatever the dual Navi thing was called, as they seem to just be doing a small consumer Navi 10 and then a larger pro Navi 20, similar to how they changed Vega).

Hopefully we'll get three chips moving forward: a large-ish consumer chip (with the maximum rendering/rasterization capability) for the $400-600 market, a smaller one for the mainstream segment ($100-300), and then the enterprise one. And if they go with an I/O chip, that could give them more flexibility over memory for different products. They could also maybe do a specialized ray-tracing module (I'm not sure how much ray tracing relies on the raster pipeline, so I'm not sure how feasible it would be to split them, but Nvidia seems to be using a fairly specialized block on RTX that isn't integrated right into the traditional graphics pipeline, which makes me think it's possible). That might be how they segment the premium consumer stuff: the normal GPU paired with a ray tracing module and more memory/bandwidth. I wouldn't hate that if it meant the mainstream chips got a very strong graphics focused chip.

I personally wouldn't mind if they integrated display stuff into the motherboard, with the CPUs getting the video processing blocks so they're no longer on the GPU but part of, say, the I/O module (I guess they could put that in the I/O module of the graphics card, but I'd rather they move the display connectors off the card so it can vent as much heat as possible). They could include a card that just has connectors (and make it so you can pick and choose which connectors you want and where to place them, so maybe you put them in the bottom slot of the board). On motherboards they're probably going to head toward largely USB-C Thunderbolt, since that can take over for all the other connectors (obviously there'd be some transition period, or maybe they'd include adapters for HDMI/DP/older USB/etc).
Hopefully Navi brings some good improvements. I'll be interested to see how it turns out, and whether they pushed for it to be more graphics focused since they'll have a stronger CPU in the consoles. By that I mean it'd offer compute capability around the One X/PS4 Pro level but not a ton more, instead spending transistor count on maximizing rasterization capability to try to offer native 4K at 60 FPS (with some other ways of getting there, like lowered precision where they can get away with it). I also wonder if they'd develop a compute-specialized module, especially for something like AI or other specialized hardware. I know a decent amount of the modern graphics pipeline is suited for some compute and I don't expect that to change (if for no other reason than compatibility), but moving forward they may look more to specialized co-processors.