It's called DirectML, and it's been around for a while as part of DX12.
It can't just be DirectML; that would treat the AIE like any other SIMD-capable device with mixed-precision and/or matrix-operation support.
The AIE graph compiler needs model/network information to lay out the layers, parameters, and activation functions across the different tiles of the AIE PIM blocks.
Therefore WinML is the better fit.
Learn about Windows AI solutions, such as DirectML and Windows Machine Learning (docs.microsoft.com)
If you are developing in Visual Studio, there are NuGet packages that let you import already-trained ONNX models directly for inferencing within an app. It's fairly easy, I would say.
MS is fairly active on WinML on GitHub, and devblogs articles on the topic appear regularly.
Intel Movidius already has WinML support:
Access the resources and tutorials you need for Windows Machine Learning* and ONNX* (www.intel.com)
I think it's up to AMD to make sure that their AI acceleration hardware (AIE or otherwise) is compliant with existing APIs/programming toolchains. If someone has an inference workload, they're going to target an API, not a specific logic block.
Indeed, they have already said it will be WinML, which makes sense for the reasons above.
Victor also mentioned that it is mainly for inferencing, will support only Ryzen processors with an AIE, and will likely fall back to the CPU if no AIE is present.
Of course WinML can use GPUs too, though at the cost of efficiency; devs would have to make that call. AMD, for its part, is limiting its WinML support to Ryzen CPUs (see the fine print).
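To make that fallback behavior concrete, here's a minimal sketch of the device-selection logic an app might use. This is illustrative only: the device names and helper function are hypothetical, not AMD's or WinML's actual API (in real WinML you pick a device via LearningModelDevice and the LearningModelDeviceKind enum, which today offers CPU and DirectX kinds rather than an AIE-specific one).

```python
# Hypothetical sketch of inference-device selection with CPU fallback.
# Device names and pick_inference_device() are illustrative, not a real API.

PREFERRED_ORDER = ["AIE", "GPU", "CPU"]  # most to least power-efficient here


def pick_inference_device(available):
    """Return the first preferred device that is actually present.

    Mirrors the behavior described in the thread: use the AIE when the
    Ryzen part has one, otherwise fall back (ultimately to the CPU,
    which is always assumed to be available).
    """
    for device in PREFERRED_ORDER:
        if device in available:
            return device
    return "CPU"


print(pick_inference_device({"AIE", "CPU"}))  # Ryzen with AIE -> "AIE"
print(pick_inference_device({"CPU"}))         # no AIE -> falls back to "CPU"
```

The design choice the thread hints at is exactly this ordering question: a GPU can run the model, but if efficiency is the goal, the runtime (or the dev) prefers the AIE and treats the CPU as the guaranteed last resort.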
The framework exists; how pervasive it gets in the Windows ecosystem remains to be seen.