Ivy-Bridge Hardware Trojan???

ViRGE

Elite Member, Moderator Emeritus
Oct 9, 1999
31,516
167
106
To be clear here, this is all proof-of-concept/what-if work. It's an exercise in seeing how the IVB PRNG could be modified to reduce the randomness. Retail chips are in no way flawed/compromised in this manner.

If someone gains physical hardware access then you have already lost, so it's a meh at best.
It's even more involved than that. The experiment was done entirely in simulation, with the idea of getting it into mass production, which requires access to the litho masks themselves. It's not actually practical.
 
Last edited:

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
As far as I can tell, they only simulated it. And they need physical access.

So scary? Not at all. There are a billion easier ways to get your data.
 

24601

Golden Member
Jun 10, 2007
1,683
39
86
It's even more involved than that. They have to get access to the litho masks themselves. The attack is just a theoretical way to modify the chip to reduce the randomness of the PRNG. It's not actually practical.

I understand that; however, it still crosses the same barrier of physical intrusion at some point in the chain, and physical intrusion has always been game over for security by default.

And I would argue that it is actually very practical.

All that needs to happen is that the US Government forces Intel to lower the randomness of their PRNG.

The study only establishes that this can be done in a way that is undetectable to the end user with current tools, given access to nothing but the chip itself (and assuming no outside knowledge that the chip has been compromised).
 

_Rick_

Diamond Member
Apr 20, 2012
3,937
69
91
This could be exploited by anyone at Intel, from corporate management, to design, to the people actually making the litho masks. Or someone hacking into Intel, which is not as unfeasible as it sounds, given the potential advantage this kind of exploit would give to any espionage agency.

CPU-level cryptography attacks are currently probably the scariest ones: now that the memory controller sits on the CPU rather than the motherboard, extracting a key by tapping the memory bus is no longer feasible in a way that is difficult to detect.
 

Phynaz

Lifer
Mar 13, 2006
10,140
819
126
They should say postulated, not demonstrated.

They did software simulations of a circuit derived from the description of the IB RNG.

They did not actually attempt this on any real silicon, did not run the simulation against an actual IB design, and did not demonstrate the ability of this so-called Trojan to affect an actual IB CPU.

Even if they had, I think it's pretty safe to assume that modifying Intel's layout and getting CPUs produced with that layout would be next to impossible.

In other words, nothing to see here.
 

GreenChile

Member
Sep 4, 2007
190
0
0
This is complete rubbish. These guys are talking about injecting dopant into specific areas that are commonly shared between specific transistors, thereby inducing an unwanted voltage. How is this in any way practical?

You would not only need access to the computer but you would need to disassemble the CPU, perform nanosurgery on a few specific transistors without causing any collateral damage, and reassemble the CPU before you get your "trojan". Really?

Someone actually wasted time getting this published? And why wouldn't the same "vulnerability" exist on any chip?

Edit:
Actually, after some further reading, it seems they are proposing to insert the "trojan" during manufacturing by modifying the dopant masks.
In this paper we introduced a new type of sub-transistor level hardware Trojan that only requires modification of the dopant masks.
Unless I am misunderstanding what dopant masks are, how do they propose someone do this?
 
Last edited:

Idontcare

Elite Member
Oct 10, 1999
21,118
59
91
A lot of work just to add a bug that would be corrected with a micro-code update in a heart-beat.

The world has nothing to fear, except the wasting of its tax dollars on incredibly unlikely hacker scenarios.
 

jvroig

Platinum Member
Nov 4, 2009
2,394
1
81
Preamble, to stop any panic that might result from misinterpretation:
All your Ivy Bridge-based systems and servers are safe. This proof-of-concept cannot be applied retroactively to harm your existing Ivy systems.


And why wouldn't the same "vulnerability" exist on any chip?
Because the hardware RNG module debuted with Ivy Bridge. No other chip had it before, and that module is exactly what their proof-of-concept exploits.
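(For readers who haven't poked at it: software reaches that module through the RDRAND instruction. Below is a minimal C sketch, assuming GCC/Clang built with -mrdrnd; the 10-try retry loop follows Intel's published guidance, and the rest is purely illustrative.)

```c
/* Minimal sketch: reading the Ivy Bridge DRNG through the RDRAND
 * instruction via the GCC/Clang intrinsic.  Build with -mrdrnd.
 * The 10-try retry loop follows Intel's published guidance; the
 * rest is purely illustrative. */
#include <immintrin.h>
#include <stdint.h>
#include <stdio.h>

static int rdrand64_retry(uint64_t *out)
{
    unsigned long long v;
    for (int i = 0; i < 10; i++) {
        if (_rdrand64_step(&v)) {   /* carry flag set: value is valid */
            *out = v;
            return 1;
        }
    }
    return 0;                       /* DRNG temporarily out of data */
}

int main(void)
{
    uint64_t r;
    if (rdrand64_retry(&r))
        printf("RDRAND: %016llx\n", (unsigned long long)r);
    else
        printf("RDRAND gave no data after 10 tries\n");
    return 0;
}
```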

You would not only need access to the computer but you would need to disassemble the CPU, perform nanosurgery on a few specific transistors without causing any collateral damage, and reassemble the CPU before you get your "trojan". Really?
You misunderstand the point of the published paper.

It is not saying that Ivy Bridges worldwide are compromised.

It is not saying that it is easy to put an undetectable Trojan in your previously clean Ivy-Bridge-based system or server.

It is saying that at a minimum of two points in the supply chain (before the chip gets to your hands, the user), their proof-of-concept work can be used to infect the chips so that their hardware RNG module is effectively broken, without being detected by any of the built-in self tests. All of the systems and servers that those chips eventually end up in are then effectively compromised if they rely on the IVB RNG module.
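(To make "effectively broken" concrete, here is a toy C sketch of the consequence. This is emphatically not the paper's actual circuit or conditioner: mix64() below is a stand-in for the real AES-based conditioning, and the bit counts are illustrative. The point is that if the trojan pins most of the entropy source's bits to constants the attacker knows, the attacker can reproduce the chip's "random" output by brute-forcing only the few bits that still vary.)

```c
/* Toy illustration only -- NOT the paper's circuit.  If a dopant-level
 * trojan pins most of the entropy source's state bits to constants the
 * attacker knows, the conditioned output can be reproduced by
 * brute-forcing just the remaining unknown bits.  mix64() is a cheap
 * stand-in for the real AES-based conditioner. */
#include <stdint.h>
#include <stdio.h>

#define UNKNOWN_BITS 24            /* kept small so the demo runs fast */

/* SplitMix64 finalizer: stand-in for cryptographic conditioning. */
static uint64_t mix64(uint64_t x)
{
    x ^= x >> 30; x *= 0xbf58476d1ce4e5b9ULL;
    x ^= x >> 27; x *= 0x94d049bb133111ebULL;
    x ^= x >> 31;
    return x;
}

int main(void)
{
    /* Trojaned source: high bits pinned to a constant the attacker knows,
     * only UNKNOWN_BITS low bits actually vary. */
    const uint64_t pinned = 0xDEADBEEF00000000ULL;
    const uint64_t secret = 0x4C4B40ULL & ((1ULL << UNKNOWN_BITS) - 1);
    uint64_t victim_output = mix64(pinned | secret);   /* "looks" random */

    /* Attacker: knows 'pinned', brute-forces the unknown bits. */
    for (uint64_t guess = 0; guess < (1ULL << UNKNOWN_BITS); guess++) {
        if (mix64(pinned | guess) == victim_output) {
            printf("recovered state: 0x%06llx (about 2^%d work)\n",
                   (unsigned long long)guess, UNKNOWN_BITS);
            break;
        }
    }
    return 0;
}
```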

I know reading a scientific paper is hard work for non-scientists, and the Tech Spot article linked did a terrible job explaining it. Perhaps the writer himself had no idea how to interpret the paper. I do not blame you. I blame the writer of that Tech Spot piece.

A lot of work just to add a bug that would be corrected with a micro-code update in a heart-beat. The world has nothing to fear, except the wasting of its tax dollars on incredibly unlikely hacker scenarios.
I will have to disagree a little bit here, Phil. Although, as for the "incredibly unlikely" part, yeah, pretty much. I'm not sure it was a waste of tax dollars, though.

This paper is significant for the same reasons that NIST recently had to withdraw its Dual EC recommendation due to suspected NSA tampering (see the official publication here), tampering which only came to light as part of the Snowden leaks - even though as far back as 2006, security researchers already smelled something was wrong.

It is probably important that I preface this with why Dual EC came about in the first place: it was developed by NIST specifically to address a long-standing weakness in the FIPS standard, namely a very limited number of approved PRG (or PRNG, or just RNG, whatever your preferred name) algorithms, most of which had known design weaknesses. They had to go, so we needed new ones. NIST made a new one. Actually, four: three symmetric ones and, strangely, a non-symmetric one, Dual Elliptic Curve.

Almost right off the bat, academic cryptographers smelled a stink from Dual EC, and we all smelled "NSA tampering" on it, because not only was it super slow, it also didn't come with a security proof (haha, now that's a joke - NIST doesn't actually hand out security proofs; they release standards and let academia come up with the proof). But with no proof of any NSA conspiracy, no luck - maybe it was just NIST being rookies. It happens; standards bodies of all sorts often come up with mish-mashed, shoddy protocols - see for example the mess that was SSL 1.0 / TLS and even the current iterations.

I don't think I can go on with what exactly was wrong with Dual EC without going into too much detail that none of the CPU crowd here will appreciate anyway. It's probably more a thing of the Security subforum we have, but even there I don't actually see chit-chat regarding academic crypto.

Anyway, going back to the IVB RNG Trojan paper: the paper is not significant because this has already happened (the authors clearly stated that they have not observed any such tampering in real life), and not because it can retroactively apply to all your existing IVB systems and servers (it clearly can't, and the authors were clear about that). The paper is significant only because it lets us security researchers see another possible attack vector, which then allows us to come up with oversight and/or new techniques to mitigate or stop attacks from that vector.

It sounds impossible now, yes - how could anyone (even the NSA?) force chipmakers (Intel, AMD, ARM or its licensees) to cripple their baked-in hardware security module? (To readers: don't feel too bad that the paper "targeted" Intel. Intel makes the only useful chip with a hardware RNG module on board, so it's not like the authors had much choice in the matter.) But 7 years ago, way back in 2006, that was also the claim: it was impossible that the NSA could force NIST to weaken cryptographic standards, so all the stink academic cryptographers raised was just that - worst case, incredibly unlikely hacker scenarios more at home in "Enemy of the State" than in real life. Fast-forward 7 years to today: we have the Snowden leaks, and the 'evidence' in them suggests we were pretty much right 7 years ago - NIST still denies it, but at the same time they have officially dropped the Dual EC recommendation after leaks pointed to some NIST standards being weakened by the NSA on purpose.

That's the only thing this paper is really saying: it is feasible to do this, and in such a way as to be undetectable in routine tests. So if someone (like the NSA) wanted to, they could use the techniques in this paper to weaken the crypto in the CPUs before they are shipped to distributors and retailers, much the same way they seem to have weakened some NIST protocols. If they (the NSA) can twist NIST's arm to weaken crypto standards, or threaten CEOs of search companies with "treason" for not complying with their orders, then maybe it isn't so far-fetched that, in the interest of national security, they would twist the arm of chipmakers in order to bundle security hardware in the chip that they (the NSA) can easily exploit.

The world was not this crazy before; 3 months ago, I personally would have laughed out loud at the absurdity of the scenario here - tampering with masks? Hahaha!

Then the NSA leaks happened. I don't care about the homeland spying thing, since I'm not an American and it's effectively not my problem (that's all yours, my American friends). But when your NSA gets into tampering with international standards, especially standards that are supposed to keep us safe, secure, and private, then that part affects my work too, even though I'm not American.
 
Last edited:
Aug 11, 2008
10,451
642
126
It may be a weakness, but it seems very unlikely to be exploited. I do see a chance for this to be blown way out of proportion if it gets into the popular press, especially since it is being called a "trojan", which most people think of as software that can infect your computer during normal use, not something that requires hardware access.

Also, would it be possible for the encryption programs themselves to have some subroutine that tests whether the encryption is truly 64-bit or 128-bit, or whatever it is supposed to be?
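(For what it's worth, an on-line health check usually looks something like the FIPS 140-2 monobit test sketched below in C - a rough illustration, not Intel's actual built-in self test. The catch is that a low-entropy but cryptographically conditioned stream is still expected to pass, because statistical tests measure how random the bits look, not how much entropy went into them.)

```c
/* Sketch of a FIPS 140-2 style monobit health test.  The "generator"
 * here has only ~16 bits of real secret, but its output goes through a
 * mixing function (a stand-in for cryptographic conditioning), so the
 * test is still expected to pass: statistical health tests measure how
 * random the bits LOOK, not how much entropy went into them. */
#include <stdint.h>
#include <stdio.h>

static uint64_t mix64(uint64_t x)           /* stand-in conditioner */
{
    x ^= x >> 30; x *= 0xbf58476d1ce4e5b9ULL;
    x ^= x >> 27; x *= 0x94d049bb133111ebULL;
    x ^= x >> 31;
    return x;
}

int main(void)
{
    unsigned char buf[2500];                /* 20,000 bits, per FIPS 140-2 */
    const uint64_t weak_seed = 0x1234;      /* pretend only this is secret */

    for (int i = 0; i < (int)sizeof buf; i++) {
        uint64_t w = mix64(weak_seed + (uint64_t)(i / 8));  /* one word per 8 bytes */
        buf[i] = (unsigned char)(w >> (8 * (i % 8)));
    }

    int ones = 0;
    for (int i = 0; i < (int)sizeof buf; i++)
        ones += __builtin_popcount(buf[i]);

    /* FIPS 140-2 monobit criterion: pass if 9725 < ones < 10275. */
    printf("ones = %d -> monobit test %s\n", ones,
           (ones > 9725 && ones < 10275) ? "PASSES" : "fails");
    return 0;
}
```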
 

GreenChile

Member
Sep 4, 2007
190
0
0
You misunderstand the point of the published paper.

It is not saying that Ivy Bridges worldwide are compromised.

It is not saying that it is easy to put an undetectable Trojan in your previously clean Ivy-Bridge-based system or server.

It is saying that at a minimum of two points in the supply chain (before the chip gets to your hands, the user), their proof-of-concept work can be used to infect the chips so that their hardware RNG module is effectively broken, without being detected by any of the built-in self tests. All of the systems and servers that those chips eventually end up in are then effectively compromised if they rely on the IVB RNG module.
Please correct me if I misunderstand their method, but as I stated in my previous edit, it seems they are saying they would need to modify a specific dopant mask in order to accomplish this hack. The likelihood of such a feat is in the realm of science fiction.

It would require an intimate knowledge of Intel's process flow, security measures, and quality control, and they would have to intercept not just one but dozens of litho masks that Intel creates in-house and ships to fabs worldwide. Also, how do you get these compromised masks past the inspection process or defect metrology, where a patterning change would be detected immediately?

No amount of NSA subterfuge could slip this by Intel. It would require Intel's help, but for that matter Intel would simply build in a backdoor for NSA if they were cooperating.

But of course you are correct in that I may be misinterpreting the paper.
 

beginner99

Diamond Member
Jun 2, 2009
5,223
1,598
136
I wouldn't be so sure. AES, SSL and TLS were supposed to be completely safe too, but that didn't stop the NSA.

Exposed software is never secure. Plus you can edit software; you can't edit hardware.


Together with JimmiG's quote, your reply sounds very weird to me. What are you implying? You can certainly verify whether someone tampered with software by comparing hashes or digital signatures.
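(A minimal sketch of that kind of check in C, using OpenSSL's EVP API to compare a file's SHA-256 against a known-good digest. "firmware.bin" and the reference digest are placeholders; in practice the reference value has to come from a source you already trust, such as a signed release announcement.)

```c
/* Minimal sketch of "verify by comparing hashes", using OpenSSL's EVP
 * API (build with -lcrypto).  The file name and the reference digest
 * below are placeholders. */
#include <openssl/evp.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *path = "firmware.bin";                              /* placeholder */
    const char *expected =                                          /* placeholder */
        "0000000000000000000000000000000000000000000000000000000000000000";

    FILE *f = fopen(path, "rb");
    if (!f) { perror(path); return 1; }

    EVP_MD_CTX *ctx = EVP_MD_CTX_new();
    EVP_DigestInit_ex(ctx, EVP_sha256(), NULL);

    unsigned char buf[4096];
    size_t n;
    while ((n = fread(buf, 1, sizeof buf, f)) > 0)
        EVP_DigestUpdate(ctx, buf, n);
    fclose(f);

    unsigned char md[EVP_MAX_MD_SIZE];
    unsigned int mdlen = 0;
    EVP_DigestFinal_ex(ctx, md, &mdlen);
    EVP_MD_CTX_free(ctx);

    char hex[2 * EVP_MAX_MD_SIZE + 1];
    for (unsigned int i = 0; i < mdlen; i++)
        sprintf(hex + 2 * i, "%02x", md[i]);

    printf("%s  %s\n", hex, path);
    puts(strcmp(hex, expected) == 0 ? "hash matches" : "HASH MISMATCH");
    return 0;
}
```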
 

SunnyD

Belgian Waffler
Jan 2, 2001
32,674
145
106
www.neftastic.com
To be clear here, this is all proof-of-concept/what-if work. It's an exercise in seeing how the IVB PRNG could be modified to reduce the randomness. Retail chips are in no way flawed/compromised in this manner.

It's even more involved than that. The experiment was done entirely in simulation, with the idea of getting it into mass production, which requires access to the litho masks themselves. It's not actually practical.

It's not so much the masks that they need access to, but rather the design and engineering spec. Theoretically they could sabotage the process, and no one would be any wiser. The chips would all pass QC and be 100% functional, and the RNG would even pass the certification tests. This could all be done without any alteration of the masks at all, but does require knowledge of the design and process.

Still, it would need to be practically an inside job at minimum, or require the resources of a very, very large state-backed effort plus a lot of other cards falling into the right places.

A lot of work just to add a bug that would be corrected with a micro-code update in a heart-beat.

The world has nothing to fear, except the wasting of its tax dollars on incredibly unlikely hacker scenarios.

See above. This isn't something that would even be detectable through standard certification or testing. You would need to perform fairly extensive numerical analysis to even notice the issue, and the only possible "microcode" fix for it would be to completely disable the RNG unit.
 

Vesku

Diamond Member
Aug 25, 2005
3,743
28
86
A lot of work just to add a bug that would be corrected with a micro-code update in a heart-beat.

The world has nothing to fear, except the wasting of its tax dollars on incredibly unlikely hacker scenarios.

Well, I think it just reinforces that you should not 100% rely on a hardware provided RNG or encryption module if you are operating at a near maximum state of paranoia.

Linus' response to this kind of discussion is that it is policy in Linux not to rely on a single RNG source, which reduces the feasibility of an attack through a single compromised source.
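(A rough C sketch of that policy: XOR-combine RDRAND output with bytes from /dev/urandom so that no single source fully controls the result. This is an illustration of the idea, not how the Linux kernel actually does it - the kernel mixes RDRAND into its entropy pool rather than XORing at the point of use. Assumes GCC/Clang built with -mrdrnd.)

```c
/* Sketch of "don't rely on a single RNG source": XOR RDRAND output
 * with bytes from /dev/urandom so that neither source alone controls
 * the result (assuming the sources are independent).  Illustration of
 * the policy, not the Linux kernel's implementation.  Build with -mrdrnd. */
#include <immintrin.h>
#include <stdint.h>
#include <stdio.h>

static int get_mixed64(uint64_t *out)
{
    unsigned long long hw = 0;
    int ok = 0;
    for (int i = 0; i < 10 && !ok; i++)      /* RDRAND retry loop */
        ok = _rdrand64_step(&hw);

    uint64_t os = 0;
    FILE *f = fopen("/dev/urandom", "rb");
    if (!f || fread(&os, sizeof os, 1, f) != 1) {
        if (f) fclose(f);
        return 0;                             /* no OS entropy: bail out */
    }
    fclose(f);

    /* XOR of independent sources is at least as unpredictable as the
     * stronger one; a compromised RDRAND alone can't fix the output. */
    *out = os ^ (ok ? (uint64_t)hw : 0);
    return 1;
}

int main(void)
{
    uint64_t r;
    if (get_mixed64(&r))
        printf("mixed random value: %016llx\n", (unsigned long long)r);
    return 0;
}
```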

But for anyone doing highly sensitive things there is cause to be concerned:

http://www.fierceitsecurity.com/sto...assified-network-ban-5-governments/2013-07-30
 
Last edited:

Rakehellion

Lifer
Jan 15, 2013
12,182
35
91
Well, I think it just reinforces that you should not 100% rely on a hardware provided RNG or encryption module if you are operating at a near maximum state of paranoia.

You shouldn't 100% rely on anything, but you should have a fair degree of certainty that any link in the chain is operating within specifications.

I like how when a new hack is announced someone always replies "well, you shouldn't have been using that anyway" as if you've somehow been doing something superior all your life.
 

Vesku

Diamond Member
Aug 25, 2005
3,743
28
86
You shouldn't 100% rely on anything, but you should have a fair degree of certainty that any link in the chain is operating within specifications.

I like how when a new hack is announced someone always replies "well, you shouldn't have been using that anyway" as if you've somehow been doing something superior all your life.

Unless it's software you code yourself and hardware you produce yourself (how much for a microfab?) you can't have much certainty regarding security. Open source software gives you a fair shot at digging for anything bad, but it will still be running on commercial hardware. That's just the way things are when you are dealing with complex electronic systems.

Thinking about it a bit more, this proof of concept is very interesting with regard to fabless companies. Foundries like TSMC, Samsung, GlobalFoundries, UMC, and the Chinese fabs trying to break into the market could secretly make this part of their production chain. The fabless IP companies would never know without a tip-off from an insider.
 
Last edited:

Rakehellion

Lifer
Jan 15, 2013
12,182
35
91
Unless it's software you code yourself and hardware you produce yourself (how much for a microfab?) you can't have much certainty regarding security.

Even that isn't realistic, because anything worth using is probably made by a team. And if you wrote it as a one-man team, then you can't be sure it doesn't have crippling bugs.
 

Vesku

Diamond Member
Aug 25, 2005
3,743
28
86
Yes, hardly anything is 100%, except maybe death and taxes. But at least if you are producing it all in-house, or auditing open source code before integration, you only have yourself to blame.
 