With all the NDAs in place, looking for a "smoking gun" is pointless. I don't even know how CD Projekt got away with saying as much as they did.
Anyhow, AMD did address this by creating Mantle, which fed into DX12 & Vulkan. With a profiler it becomes very visible what is happening inside the game when the correct or incorrect codepath is used.
Case in point, Raja himself tweeted about this VR review:
https://mobile.twitter.com/GFXChipTweeter/status/760303615516880896
A similar situation happened with Gears of War, when Nvidia's GameWorks features broke the game completely on AMD. AMD was quickly able to identify what was going on.
EDIT: I just read what CD Projekt said back then. Everything they said was already public knowledge. HairWorks is a GPU-accelerated effect and it doesn't perform well on AMD cards, BUT it can be turned OFF. Do that and the problem is gone.
---------------
That doesn't prove NVidia is actively blocking the developer from optimizing the non-NVidia code for AMD cards.
How are we sure it's GameWorks, which is CPU-based to begin with, that's breaking the game for AMD cards? It's quite possible that it was something else in some other part of the game's code. Even if that were the case, which looks more and more unlikely at this point, how do we know NVidia intentionally introduced that code just to break AMD cards?
In those tweets, Raja is just saying that it could be because of the old UE build. That could mean many things; there is no way to tell which part of the UE was causing issues. It might very well be code written by Epic.
EDIT: OK, Gears of War used HBAO+ for ambient occlusion, and no other AO mode was included. It's been known for a long time that HBAO+ is an NVidia feature that doesn't play well with AMD, and yet MS did not bother to include an AMD-friendly AO mode, label it properly, or even work with AMD before showing the game. But all of that is on MS.
Another issue was that MS didn't even bother putting an on/off switch on PhysX, which was apparently GPU-based. Again, that's the developer's fault for not implementing the switch; they should've disabled GPU PhysX if they weren't going to offer one.
Except Nvidia is the one writing the code and not Havok.
How can entirely CPU-based code mess with the GPU directly? It doesn't matter who wrote it, unless you're saying they purposefully injected bad code to mess with AMD.
EDIT 2: Huh, never knew this. The entire GameWorks package, except for WaveWorks, has been on GitHub for a while, and WaveWorks is incoming. Now AMD and any developer can change and optimize the code as they see fit. It seems NVidia isn't completely against openness either.
http://benchmarkreviews.com/36129/nvidia-hbao-source-code-now-github/