Indeed... I prefer to think that's because these people are fanboys rather than plain stupid.
I don't think it's that simple either. SPEC has been around a long time and has been used heavily in marketing and academic research, so it has a strong reputation, and influential, authoritative people keep reinforcing it. Most of the other CPU benchmarks used in mobile reviews are in a lot of ways worse than SPEC, but most of them also aren't broken by ICC the way SPEC is (AnTuTu was, and possibly others like some of the XPRT tests).
So given SPEC's reputation, including the prestige attached to its high price, and the people who push it, it's only natural that others will take the scores as a legitimate, holistic reflection of processor performance.
Even when people understand and accept the role the compiler plays in a benchmark's score, they tend to count it as a natural win for the platform anyway, even though it's extremely abnormal for the compiler to show those kinds of gains in other programs, and even though far from everyone uses that compiler. These things are really hard to demonstrate convincingly.
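To make that concrete, here's a minimal sketch in C, loosely modeled on the kind of hot loop people point to in SPEC CPU2006's 462.libquantum. The names (`quantum_reg`, `quantum_node`, `flip_target_bit`) and the structure are mine for illustration, not SPEC's actual code:

```c
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

/* Illustrative types, loosely modeled on libquantum's register of
 * basis states. Not copied from SPEC. */
typedef struct {
    uint64_t state;
} quantum_node;

typedef struct {
    int size;
    quantum_node *node;
} quantum_reg;

/* Flip one bit in every basis state. Every iteration is independent,
 * the access pattern is a plain linear sweep, and the loop dominates
 * the benchmark's runtime, so a compiler that pattern-matches it can
 * vectorize or even auto-parallelize it and multiply the subtest
 * score. Typical branchy, pointer-chasing application code offers no
 * such opportunity, which is why those gains don't generalize. */
static void flip_target_bit(quantum_reg *reg, int target)
{
    for (int i = 0; i < reg->size; i++)
        reg->node[i].state ^= (uint64_t)1 << target;
}

int main(void)
{
    quantum_reg reg;
    reg.size = 1 << 20;
    reg.node = calloc(reg.size, sizeof *reg.node);
    if (!reg.node)
        return 1;

    flip_target_bit(&reg, 3);
    printf("node[0].state = %llu\n", (unsigned long long)reg.node[0].state);

    free(reg.node);
    return 0;
}
```

The point isn't this specific loop; it's that a score built on a handful of loops like this measures the compiler's willingness to special-case them as much as it measures the CPU.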
Blame the test, not the compiler.
If you think Intel and AMD don't both include compiler improvements in their quoted per-generation perf gains, you don't know jack.
Everyone knows better than to trust the CPU manufacturer's word alone when talking about performance. When it comes to websites independently running benchmarks that are widely called the best in the industry, it's a different problem.
We can debate whether it's fair to criticize Intel for putting those sorts of optimizations in ICC, but that's not really the point. I don't care who blames whom; I care about people being better educated on what the compiler is doing and what the implications are for benchmark comparisons. Unfortunately, there's a lot of ugly technical background involved, so it's hard to really convince people.