Yes it is. DX12 doesn't behave differently from DX11. If the hardware doesn't support it, the context switch is exactly the same. You don't get "negative performance" over DX11 because nothing changed. That's why it's being promoted as a huge feature.
Pages 32-34:
http://on-demand.gputechconf.com/gtc/2015/presentation/S5561-Chas-Boyd.pdf
There's nothing in p32-34 that supports your statement: "Async shaders doesnt cost any performance on hardware which doesn't support it".
Compute jobs are submitted to a queue; if the hardware can't execute that queue asynchronously, it has to fall back to the normal serial behavior, and the compute task runs on the shaders AFTER rendering is done.
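To make that concrete, here's a minimal D3D12 sketch (the "device" variable is an assumption, not anyone's real engine code): the API hands you a dedicated compute queue either way; whether its work actually overlaps with the graphics queue is decided by the hardware and driver, not the API.

```cpp
#include <windows.h>
#include <d3d12.h>

// Assumes an already-created ID3D12Device* named "device".
// D3D12 always lets you create a separate compute queue; whether its work
// actually runs concurrently with the direct (graphics) queue is up to the
// hardware and driver.
D3D12_COMMAND_QUEUE_DESC computeDesc = {};
computeDesc.Type  = D3D12_COMMAND_LIST_TYPE_COMPUTE; // compute-only queue
computeDesc.Flags = D3D12_COMMAND_QUEUE_FLAG_NONE;

ID3D12CommandQueue* computeQueue = nullptr;
HRESULT hr = device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));
// The work submitted here must still execute either way; hardware without
// async compute just ends up running it serialized with the graphics queue.
```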
If a dev calls for compute shaders to do dynamic lights on a scene and the hardware can't handle that asynchronously, do you think the hardware is going to IGNORE the call and never render the lights? That would be tantamount to cheating.
The default behavior (current APIs) is a serial pipeline: rendering gets done, then compute, then rendering, then compute. This is the expected behavior for Kepler, since it cannot do async compute at all, so it's no shock to see it lose performance when async compute is used.
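Here's roughly what that serial pipeline looks like when you spell it out with fences in D3D12 (all the queue, command list, and fence names below are made up for illustration): each stage waits for the previous one to finish on the GPU, which is exactly what non-async hardware degenerates to even if the app submits to two queues.

```cpp
// Assumes directQueue/computeQueue (ID3D12CommandQueue*), gfxListA/compList/
// gfxListB (recorded ID3D12GraphicsCommandList*), and fence (ID3D12Fence*)
// were created earlier. Each Wait forces the next queue to start only after
// the previous batch completes: render, then compute, then render -- no overlap.
UINT64 fenceValue = 0;

ID3D12CommandList* gfxA[] = { gfxListA };
ID3D12CommandList* comp[] = { compList };
ID3D12CommandList* gfxB[] = { gfxListB };

directQueue->ExecuteCommandLists(1, gfxA);   // graphics pass
directQueue->Signal(fence, ++fenceValue);

computeQueue->Wait(fence, fenceValue);       // stall until graphics is done
computeQueue->ExecuteCommandLists(1, comp);  // compute pass (e.g. lighting)
computeQueue->Signal(fence, ++fenceValue);

directQueue->Wait(fence, fenceValue);        // stall until compute is done
directQueue->ExecuteCommandLists(1, gfxB);   // next graphics pass
```

On hardware that genuinely supports async compute, those cross-queue waits aren't needed for independent work and the compute pass can overlap the graphics pass; on hardware that doesn't, you effectively get this ordering no matter what.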
Now you see why there's potential for async compute to cause delays in rendering if the hardware can't handle it. So the drop in Maxwell performance is either 1) a driver bug (remember everyone saying drivers have less impact in DX12 because devs talk to the hardware directly? So it's unlikely to be a driver issue), 2) Oxide's fault, or 3) Maxwell is also gimped for async compute/shaders.
Your statement, "Async shaders doesnt cost any performance on hardware which doesn't support it", implies that hardware which cannot support async compute ignores or discards the async compute task, never rendering it, i.e. skipping it. There's no chance of that happening, or there'd be a crapstorm as features went unrendered. It will be rendered, just serially and slower.