If anything, AMD acquiring Xilinx means death for SYCL. Xilinx will likely drop all of their future plans to work with SYCL and will start doubling down on their proprietary ROCm stack.
That is what I fear, too. Well, I do not fear for the future of SYCL — I feel pretty confident it will be bright, eventually. But I fear Nvidia and AMD will leave it to Intel and others to promote it, while Nvidia tries to hold on to their head start with CUDA, and AMD focusses all their resources on providing a painless migration path from CUDA to ROCm with HIP and automated translation tools (HIPify).
That said, I have been encouraged lately by AMD's apparent participation in the SYCL Working Group, according to SYCL chair Michael Wong's latest slides:
SYCL Webinar - December 2021 - YouTube
Notably absent is Nvidia. However, AMD+Xilinx, Intel, Arm and Qualcomm have all joined. Also worth noting is that SYCL chair Michael Wong, VP of R&D at CodePlay, is a long-time C++ standard committee member, highly influential in setting the direction for ISO C++, especially regarding support for programming heterogeneous systems. The explicit aim is for SYCL to be subsumed by ISO C++ at some point, and the adopted development model for SYCL will track ISO C++.
People should not assume that SYCL will be in their plans because it absolutely isn't, as explicitly laid out in the [Xilinx + AMD Converged Runtime Technology Demonstration (@SuperComputing 2020)].
There is nothing in that presentation that says anything at all (and hence nothing "explicit") about SYCL, regrettably.
Whatever the plans are for the programming model, whether to extend HIP to FPGAs or to adopt SYCL as a more general heterogeneous programming model (which seems to be the inevitable direction of ISO C++), we know that SYCL can be implemented very nicely with ROCm, and consequently CUDA, as backends, as hipSYCL has so brilliantly demonstrated.
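To illustrate how invisible the backend is to the programmer, here is a small sketch of my own (not taken from the hipSYCL documentation) that simply lists whatever platforms and devices the SYCL implementation exposes; under hipSYCL the very same source, unchanged, may report a CUDA GPU, a ROCm GPU or the host CPU, depending on how the implementation was built:

```cpp
#include <sycl/sycl.hpp>   // some implementations still ship this as <CL/sycl.hpp>
#include <iostream>

int main() {
    // Enumerate every platform (i.e. backend) and its devices.
    for (const auto& platform : sycl::platform::get_platforms()) {
        std::cout << platform.get_info<sycl::info::platform::name>() << "\n";
        for (const auto& device : platform.get_devices())
            std::cout << "  " << device.get_info<sycl::info::device::name>() << "\n";
    }
}
```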
CodePlay is working on direct support for Nvidia and AMD GPUs in DPC++/oneAPI as well, and I presume they will use CUDA and ROCm as backends to achieve that. With that support in place, and with DPC++ upstreamed to the mainstream Clang C++ compiler, things are starting to look good for SYCL on Linux. If Microsoft could just get on board and replace their outdated C++ AMP dialect with SYCL, forcing AMD to put some effort into ROCm support on Windows, we would have pretty broad industry support for SYCL.
Making SPIR-V kernel compilers for [blah blah blah]
Those are backend details. SYCL concerns the programming model, akin to HIP and CUDA C++. The kernel code to be run on the accelerator is written in pure C++. What it is compiled into is an implementation detail not specified by the SYCL standard.
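To give a flavour of what that programming model looks like, here is a minimal sketch of my own (not from any of the linked material) of a SYCL 2020 vector addition; the kernel is nothing more than the lambda passed to parallel_for:

```cpp
#include <sycl/sycl.hpp>
#include <iostream>
#include <vector>

int main() {
    constexpr size_t n = 1024;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

    sycl::queue q;  // default device: could be an Nvidia, AMD or Intel GPU, or the CPU

    {   // buffers hand the host data to the runtime for the duration of this scope
        sycl::buffer<float> A(a.data(), sycl::range<1>(n));
        sycl::buffer<float> B(b.data(), sycl::range<1>(n));
        sycl::buffer<float> C(c.data(), sycl::range<1>(n));

        q.submit([&](sycl::handler& h) {
            sycl::accessor ra(A, h, sycl::read_only);
            sycl::accessor rb(B, h, sycl::read_only);
            sycl::accessor wc(C, h, sycl::write_only);
            // The "kernel" is just this lambda; whether it ends up as PTX,
            // GCN ISA or SPIR-V is the implementation's business.
            h.parallel_for(sycl::range<1>(n), [=](sycl::id<1> i) {
                wc[i] = ra[i] + rb[i];
            });
        });
    }   // leaving the scope copies the results back into c

    std::cout << "c[0] = " << c[0] << "\n";  // prints 3
}
```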
HIP is NOT a reimplementation of CUDA, so let's just get this out of the way.
Are you trying to trip us up on a technicality? Of course, due to copyright and trademark law, AMD cannot call their programming model CUDA, but HIP is as close as they can get without breaking any laws, it seems. Functions have identical specifications, with names differing only in prefix ("cu" vs "hip"). Likewise, the support libraries are similarly specified and named (e.g. "cuBLAS" vs "hipBLAS/rocBLAS"). The following porting guide for the LUMI supercomputer is a good read on the similarities:
"The HIP API syntax is very similar to the CUDA API, and the abstraction level is the same, meaning that porting between the two is easy [...] The big distinction to make here is that HIP is the parts that are portable between AMD and Nvidia hardware, whereas the rest of [the] ROCm stack is for AMD hardware. [...] The way HIP handles the GPU hardware is the same as CUDA, as such the terminology is very similar. [...] Generally, the kernel code looks the same for CUDA and HIP and the same C++ language features are supported. [...] The HIP runtime API generally mirrors the CUDA one, simply by replacing the cuda text in the call with hip gets you the equivalent HIP runtime call in most cases."
Preparing codes for LUMI: converting CUDA applications to HIP - LUMI (lumi-supercomputer.fi)
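To make that "mirroring" concrete, here is a small CUDA sketch of my own (not taken from the LUMI guide), with the HIP spelling of each call, roughly what HIPify would emit, shown in the comments; the kernel body itself needs no changes at all:

```cpp
// CUDA spelling, with the corresponding HIP spelling in the comments.
#include <cuda_runtime.h>                       // HIP: #include <hip/hip_runtime.h>

__global__ void axpy(float a, const float* x, float* y, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // kernel code is identical in HIP
    if (i < n) y[i] = a * x[i] + y[i];
}

void run_axpy(float a, const float* hx, float* hy, int n) {
    float *dx = nullptr, *dy = nullptr;
    cudaMalloc((void**)&dx, n * sizeof(float));      // hipMalloc((void**)&dx, ...)
    cudaMalloc((void**)&dy, n * sizeof(float));      // hipMalloc((void**)&dy, ...)
    cudaMemcpy(dx, hx, n * sizeof(float), cudaMemcpyHostToDevice);   // hipMemcpy(..., hipMemcpyHostToDevice)
    cudaMemcpy(dy, hy, n * sizeof(float), cudaMemcpyHostToDevice);
    axpy<<<(n + 255) / 256, 256>>>(a, dx, dy, n);    // the same triple-chevron launch compiles with hipcc
    cudaMemcpy(hy, dy, n * sizeof(float), cudaMemcpyDeviceToHost);   // hipMemcpy(..., hipMemcpyDeviceToHost)
    cudaFree(dx);                                    // hipFree(dx)
    cudaFree(dy);                                    // hipFree(dy)
}
```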
The only similarity HIP has with CUDA is their programming concepts, which AMD attempts to mimic as much as possible.
AMD is obviously aiming for the same programming model and libraries with a few name changes, which to a user is pretty much everything of essence. AMD strives to make the porting process effortless, with automated tools (HIPify) doing most of the work.
Hopefully my long reply has swayed your negative take on SYCL somewhat; if not, perhaps it has been helpful to other readers. Those with a genuine interest in SYCL are invited to participate in my SYCL thread in the Programming forum:
I've had my eye on SYCL for some time now. I don't think it is widely recognised that this is going to be the standard for heterogeneous programming models going forward (eventually to be merged into the ISO C++ standard, it seems). If you read any discussion on programming, tech and investor...
forums.anandtech.com