Good point on frame generation. You have to tick every box Nvidia has.
I'm not sure how important that is. They have a lot to fix from Alchemist. We know the gap between synthetics and games is mostly addressed. We know utilization is better, and engines that use ExecuteIndirect will perform better, among other small changes. We know day-0 compatibility is going to be better too.
What else? Power draw with multiple monitors and in 2D. ReBar is needed both for performance and compatibility. Fixing those will make Battlemage a much better GPU. I don't really care about the power aspect, but the ReBar requirement is a big deal: it automatically makes it suck for older systems. But Arc is supposed to be affordable! Nothing says affordability like sprucing up an older system with a newer GPU.
My question is this: for whichever RTX card Battlemage's raster is roughly equivalent to, will it be the first Arc GPU to beat that card at ray tracing?
I have doubts. Xe2 has 50% more RT units per shader, but it also performs about 50% higher overall. Since RT takes up a fixed fraction of frame time, that RT hardware needs to be 50% faster just to keep pace; it doesn't gain any relative ground.
Let's say a frame is 0.7 raster and 0.3 RT. A card with 1x raster and 1x RT performance renders that frame in 1 time unit.
If you speed up raster by 1.5x, the frame time becomes 0.7/1.5 + 0.3 ≈ 0.767 time units, which is only ~30% faster overall. To get the full 1.5x, the RT unit also has to be 1.5x as fast.
You could make raster 10x as fast, but the frame time only drops to 0.7/10 + 0.3 = 0.37, so overall performance is only 2.7x, and it can never exceed 1/0.3 ≈ 3.33x no matter how fast raster gets. RT performance is basically the serial fraction of Amdahl's Law for GPUs. Shall we call it "Amdahl's RT Law"? At least, that's my understanding.
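To make the arithmetic concrete, here's a minimal Python sketch of that frame-time model (the `frame_speedup` helper and the 0.7/0.3 split are just illustrative assumptions, not measured numbers):

```python
# Toy "Amdahl's RT Law" model: a frame is a raster fraction plus an RT
# fraction, and each part can be sped up independently.

def frame_speedup(raster_frac, rt_frac, raster_x, rt_x=1.0):
    """Overall speedup when the raster and RT portions scale separately."""
    new_time = raster_frac / raster_x + rt_frac / rt_x
    return 1.0 / new_time

print(frame_speedup(0.7, 0.3, 1.5))       # ~1.30x: 1.5x raster alone
print(frame_speedup(0.7, 0.3, 10.0))      # ~2.70x: even 10x raster stalls out
print(frame_speedup(0.7, 0.3, 1.5, 1.5))  # 1.50x: both units scaled together
```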
So how Battlemage compares to the competition depends on how much RT hardware they added relative to everything else.