A preamble, to stop any panic that might result from misinterpretation:
All your Ivy Bridge-based systems and servers are safe. This proof-of-concept cannot be applied retroactively to harm your existing Ivy systems.
And why wouldn't the same "vulnerability" exist on any chip?
Because the hardware RNG module debuted with Ivy Bridge. No earlier chip had one, and that module is exactly what their proof-of-concept exploits.
You would not only need access to the computer but you would need to disassemble the CPU, perform nanosurgery on a few specific transistors without causing any collateral damage, and reassemble the CPU before you get your "trojan". Really?
You misunderstand the point of the published paper.
It is not saying that Ivy Bridges worldwide are compromised.
It is not saying that it is easy to put an undetectable Trojan in your previously clean Ivy-Bridge-based system or server.
It is saying that at a minimum of two points in the supply chain (before the chip reaches your hands, the user), their proof-of-concept work can be used to subvert the chips so that the hardware RNG module is effectively broken, without being detected by any of the built-in self-tests. All of the systems and servers that those chips eventually end up in are immediately compromised if they rely on the IVB RNG module.
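To make that concrete, here is a minimal Python sketch of the idea behind the paper's attack. The stand-ins are my own: SHA-256 in place of the DRNG's AES conditioning stage, and 16 residual secret bits in place of the paper's reduced-entropy internal state. The whitening stage makes the output pass a naive statistical self-test, yet an attacker who knows about the trojan can brute-force the tiny secret and reproduce the entire stream:

```python
import hashlib

def trojaned_rng(seed16: int, nbytes: int) -> bytes:
    # Hypothetical trojaned generator: the conditioning/whitening stage
    # (SHA-256 here, standing in for the DRNG's AES stage) makes the output
    # look statistically perfect, but only 16 bits of real secret remain.
    out = b""
    ctr = 0
    while len(out) < nbytes:
        out += hashlib.sha256(seed16.to_bytes(2, "big") +
                              ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:nbytes]

def monobit_ok(data: bytes) -> bool:
    # A naive built-in self-test: monobit frequency check.
    ones = sum(bin(byte).count("1") for byte in data)
    return abs(ones / (8 * len(data)) - 0.5) < 0.02

def recover_seed(observed: bytes) -> int:
    # The attacker who planted the trojan brute-forces the 16 residual bits.
    for guess in range(1 << 16):
        if trojaned_rng(guess, len(observed)) == observed:
            return guess
    return -1

stream = trojaned_rng(0xBEEF, 4096)
print("passes frequency self-test:", monobit_ok(stream))
print("recovered seed:", hex(recover_seed(stream[:32])))
```

The point is exactly the one the paper makes: whitening hides the damage from statistical tests, so the output looks fine while the effective entropy is tiny.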
I know reading a scientific paper is hard work for non-scientists, and the Tech Spot article linked did a terrible job explaining it. Perhaps the writer himself had no idea how to interpret the paper. I do not blame you. I blame the writer of that Tech Spot piece.
A lot of work just to add a bug that would be corrected with a microcode update in a heartbeat. The world has nothing to fear, except the wasting of its tax dollars on incredibly unlikely hacker scenarios.
I will have to disagree a little bit, here, Phil. Although, as for the incredibly unlikely part, yeah, pretty much. I'm not sure it was a waste of tax dollars, though.
This paper is significant for the same reasons that NIST recently had to withdraw its Dual EC recommendation due to suspected NSA tampering (see the official publication here), which only recently got revealed as part of the Snowden leaks, even though as far back as 2006 security researchers already smelled something was wrong.

It is probably important that I preface this with why Dual EC came about in the first place: it was developed by NIST specifically to address a long-standing weakness in the FIPS standard. That weakness was a very limited number of approved PRG (or PRNG, or just RNG, whatever your preference in naming it) algorithms, most with known design weaknesses. They had to go, so we needed new ones. NIST made a new one. Actually, four: three symmetric ones and, strangely, a non-symmetric one, Dual Elliptic Curve. Almost right off the bat, academic cryptographers smelled stink from Dual EC, and we all smelled "NSA tampering" on it, because not only was it super slow, it also didn't come with a security proof (ha, that's a joke: NIST doesn't actually hand out security proofs; it releases standards and lets academia deal with coming up with the proofs). With no proof of an NSA conspiracy, though, no luck: maybe it was just NIST being rookies. It happens; standards bodies of all sorts often come up with mish-mashed, shoddy protocols. See, for example, the mess that was SSL 1.0 / TLS, and even the current iterations.
I don't think I can go on with what exactly was wrong with Dual EC without going into too much detail that none of the CPU crowd here will appreciate anyway. It's probably more a thing of the Security subforum we have, but even there I don't actually see chit-chat regarding academic crypto.
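That said, for the curious, the general shape of the suspected trapdoor can be shown with toy numbers. This sketch uses a tiny textbook curve (y² = x³ + 2x + 2 over GF(17), group order 19) and made-up parameters, nothing like the real standard's P-256 points or truncated outputs. The idea: anyone who knows the secret scalar d relating the two public points (P = dQ) can recover the generator's internal state from a single output and predict everything that follows:

```python
# Toy Dual-EC-style generator over y^2 = x^3 + 2x + 2 (mod 17).
# Parameters are illustrative only; the real standard uses P-256
# and truncates its outputs.
p, a, b = 17, 2, 2

def ec_add(P1, P2):
    # Point addition on the curve; None is the point at infinity.
    if P1 is None: return P2
    if P2 is None: return P1
    (x1, y1), (x2, y2) = P1, P2
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    if P1 == P2:
        m = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (m * m - x1 - x2) % p
    return (x3, (m * (x1 - x3) - y1) % p)

def ec_mul(k, P):
    # Double-and-add scalar multiplication.
    R = None
    while k:
        if k & 1: R = ec_add(R, P)
        P = ec_add(P, P); k >>= 1
    return R

G = (5, 1)                       # generator of the order-19 group
d = 3                            # the secret backdoor scalar: P = d*Q
Q = ec_mul(pow(d, -1, 19), G)    # Q = d^-1 * G
P = G

def dualec_step(s):
    # One Dual-EC-style step: next state and one output word.
    return ec_mul(s, P)[0], ec_mul(s, Q)[0]

# Honest party generates two outputs from a secret seed.
seed = 5
s1, r0 = dualec_step(seed)
_, r1 = dualec_step(s1)
print("outputs:", r0, r1)

def lift_x(x):
    # Find a curve point with the given x-coordinate (brute force; toy only).
    for y in range(p):
        if (y * y - (x**3 + a * x + b)) % p == 0:
            return (x, y)

# Attacker who knows d recovers the next state from r0 alone...
R = lift_x(r0)
s1_guess = ec_mul(d, R)[0]       # d*(s*Q) = s*P, whose x-coord is the state
# ...and predicts the next output.
r1_predicted = ec_mul(s1_guess, Q)[0]
print("predicted next output:", r1_predicted, "| actual:", r1)
```

The sign ambiguity when lifting r0 back to a point doesn't matter, since x(dR) = x(-dR). In the real standard the output is truncated, so the attacker has to try a handful of candidate points, but the principle is the same: whoever chose Q relative to P may hold the key to every "random" bit produced.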
Anyway, going back to the IVB RNG Trojan paper: the paper is not significant because such tampering has already happened (the authors clearly stated that they have not observed any tampering in real life), nor because it can retroactively apply to all your existing IVB systems and servers (it clearly can't, and the authors were clear about that). The paper is significant only because it shows us security researchers another possible attack vector, which then allows us to come up with oversight and/or new techniques to mitigate or stop attacks from that vector.
It sounds impossible now, yes: how could anyone (even the NSA?) force chipmakers (Intel, AMD, ARM or its licensees) to cripple their baked-in hardware security module? (To readers: don't feel too bad that the paper "targeted" Intel. Intel makes the only widely deployed chip with a hardware RNG module, so it's not like the authors had much choice in the matter.) But seven years ago, way back in 2006, that was also the claim: it was impossible that the NSA could force NIST to weaken cryptographic standards, so all the stink academic cryptographers raised was just that: worst case, incredibly unlikely hacker scenarios more at home in "Enemy of the State" than real life. Seven years forward to the present, we have the Snowden leaks, and the 'evidence' in them suggests we were pretty much right seven years ago. NIST still denies it now, but at the same time it has officially dropped the Dual EC recommendation after those leaks pointed to some NIST standards being weakened by the NSA on purpose.
That's the only thing this paper is really saying: it is feasible to do this, and in such a way as to be undetectable in routine tests. So if someone (like the NSA) wanted to, they could use the techniques in this paper to weaken the crypto in the CPUs before they are shipped to distributors and retailers, much in the same way that they seem to have weakened some NIST protocols. If they (the NSA) can twist the arm of NIST to weaken crypto standards, or threaten CEOs of search companies with "treason" for not complying with their orders, then maybe it isn't so far-fetched that, in the interest of national security, they would twist the arm of chipmakers to bundle security hardware in the chip that they (the NSA) can easily exploit.
The world was not this crazy before; 3 months ago, I personally would have laughed out loud at the absurdity of the scenario here - tampering with masks? Hahaha!
Then the NSA leaks happened. I don't care about the homeland spying thing, since I'm not an American and it's effectively not my problem (that's all yours, my American friends). But when your NSA gets into tampering with international standards, especially standards that are supposed to keep us all safe, secure, and private, then that specific part affects my work too, even though I'm not American.