
Call me paranoid but I consider even a clean, freshly installed and fully updated Windows PC already compromised by the NSA.


Distrusting Windows was the wisest thing you did since you climbed off your horse. [1]

No, seriously. How is it paranoia to think the NSA was/is surveilling your Windows installation if we already have proof that they have the means [2] and motivation [3] to do it at scale?

[1] http://www.quotes.net/show-quote/34121

[2] https://en.wikipedia.org/wiki/EternalBlue

[3] https://en.wikipedia.org/wiki/PRISM_(surveillance_program)


There is no proof of means or motivation to use 0-days at scale. In fact, using EternalBlue "at-scale" would have caused it to not stay a 0-day for very long.


They don't need to deploy 0days if the vendor (willingly or unwillingly) cooperates. Also, Microsoft began to heavily spy on Windows users as part of normal operation, making it difficult, if not impossible, to fully opt out.


I don't understand how that would be possible. Such a change would be detected and very loudly discussed, making it pretty useless. There would be very little positive gain yet a whole lot of negative blowback from doing such a thing.


MS engineers can log in to your machine and run programs / download documents. There is also some keylogger that sends data back without warning you. I can't remember which bits you can turn off, which bits got backported to 8/7 without warning, etc.

To make a long story short: from what anyone can tell, there is no way for consumers to obtain a version of Windows that has security patches and the ability to run with sane privacy settings. There is an acceptable version called Windows LTSB, but you have to pirate it.

This has been discussed ad nauseam on HN and elsewhere.


What change?

Are you suggesting that there's a cast iron guaranteed way of saying 'this stuff should be in the OS and nothing else'?

If you are suggesting that, are you suggesting the trust root for that particular stack is something other than the vendor? If so who?

Take the example of Windows. Let's say they agree to put in a backdoor like DoublePulsar. Microsoft releases the official OS and says 'we promise this is all good and only stuff that should be in here is in here. Honest.' How do we as third parties detect they've put something in there that shouldn't be?

I see you're CEO of verify.ly and have some background in this, so I'm actually quite curious to know how you'd detect a malicious closed source vendor like Microsoft who is working with a TLA to provide backdoor access.


> so I'm actually quite curious to know how you'd detect a malicious closed source vendor like Microsoft who is working with a TLA to provide backdoor access.

"Closed-source" certainly does not mean you cannot see the changes, just that far fewer people know how to read assembly/machine code to understand what is going on.

People frequently reverse engineer patches and updates, as the addition of features means more vulnerabilities. Security companies generally get a whole lot of free marketing in the press if they find and disclose major vulnerabilities (along with building detection/prevention into their products), so there is a large incentive there. Of course, it requires trusting security companies not to hold back findings like that, a valid concern, but it is at least a step up from completely trusting the vendor to deliver non-backdoored updates.
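At its crudest, patch diffing just means comparing the old and new binaries byte by byte to see what changed (real tooling such as BinDiff or Diaphora diffs at the function level after disassembly; this is only a toy sketch, and the hex bytes below are made up for illustration):

```python
def byte_diffs(old: bytes, new: bytes):
    """Yield (offset, old_byte, new_byte) for byte positions that differ."""
    for i, (a, b) in enumerate(zip(old, new)):
        if a != b:
            yield (i, a, b)

# Toy example: a "patch" that changes a single byte.
old = bytes.fromhex("4883ec28e8f7000000")
new = bytes.fromhex("4883ec28e8f8000000")
print(list(byte_diffs(old, new)))  # one changed byte at offset 5
```

A researcher would then look at the changed region in a disassembler to work out what the vendor fixed, and from there, what the underlying vulnerability was.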

> Are you suggesting that there's a cast iron guaranteed way of saying 'this stuff should be in the OS and nothing else'?

The security researcher mindset would be along the lines of "How does this new added/changed functionality work, and how could it be abused?" (You are correct that there is no guaranteed manner to find this, otherwise all software would be un-hackable which is not the case).


Thanks.

So to go back to these two points:

> They don't need to deploy 0days if the vendor (willingly or unwillingly) cooperates.

> I don't understand how that would be possible. Such a change would be detected and very loudly discussed, making it pretty useless.

It would seem to me that these things are happening. 0days are being added (often made to look like simple bugs), security companies are detecting them, and we're talking about them... eventually. So you're both right, but there's a period of sometimes years between the addition of a backdoor and its discovery. And the NSA doesn't care too much if one is found, as you can be sure it's not the only one, as the ShadowBrokers showed.

Take the example in this thread - EternalBlue. That particular flaw was introduced in XP wasn't it? And it survived all this time despite the uncountable security researchers poring over the code for a decade and more. It took a hack to reveal these tools.

Maybe the EternalBlue exploit really did just exploit a bug. Maybe it was a backdoor. It doesn't matter though. If it was a bug, it lay undiscovered for years which means there's plenty of opportunity for an actual backdoor to remain undiscovered too. So we have to deal with the possibility that 'exploitable code' (however it originated) may be around for decades and can be in every system as a result.

Following that logic, a new piece of 'exploitable code' could be added in the next Windows update and it could lay undetected for a decade. It's happened before and we didn't find it until the ShadowBrokers did their work, so it can happen again just as easily.

What about Heartbleed? This was another piece of 'exploitable code' that was around for years undetected. The examples of this are no doubt many.

It would seem to me then that there are plenty of cases where a 'backdoor' has been placed and plenty where a genuine mistake was made, but we can't ever really know which is which.

I guess that is the problem for those of us who talk about it, as it encourages taking sides, where the reality is that paranoid people are right in certain cases and cynics who think it's just a bug are right in others.


> So you're both right, but there's a period of sometimes years following the addition of a backdoor to it being discovered. And the NSA doesn't care too much if it's found as you can be sure it's not the only one as the ShadowBrokers showed.

EternalBlue was a vulnerability, not a backdoor, as a backdoor would imply it was intentionally inserted. Again, any proof of malicious code being intentionally inserted would be huge news and would permanently kill trust in the vendor.

> Following that logic, a new piece of 'exploitable code' could be added in the next Windows update and it could lay undetected for a decade. It's happened before and we didn't find it until the ShadowBrokers did their work, so it can happen again just as easily.

This would be huge news. A negative cannot be proven, but it would not really serve much benefit to theorize about intentional backdoor insertion without proof. Anger at something like that is best saved for a provable case (Think of it this way: To a non-tech person, it would be great for them to be able to express outrage/call their reps/etc when there is definitive proof of this, versus saying "oh I heard this was already happening so whatever").

> I guess that is the problem for us who talk about it as it encourages taking sides, where the reality is paranoid people are sometimes right in certain cases and cynics who think it's just a bug are right in others.

There is nothing wrong with being overcautious. Problems arise when worrisome conclusions are reached, causing some (for example) to be unsure about the safety of automatic updates. The effect of this would be users avoiding a perceived risk of a malicious update, yet allowing them to be more exposed to real known vulnerabilities by not installing important security patches.


The theory is that back doors are designed to look like bugs, precisely so you can make the argument you just made - that they are not back doors.


I honestly cannot tell if this is brilliant sarcasm or if you've somehow missed all the "very loud discussion" about Windows 10 on HN. :)


If you are referring to the level of analytics gathered, I fully agree! My point is, there would be a similarly loud reaction (at a wider scale) if a backdoor were introduced.


How could you tell a backdoor from a regular bug?

From a code perspective, of course.


Have you installed Windows 10 lately? It's all there in plain English.


I am definitely not a fan of all the default analytics gathered, not cool, but I took "cooperates" to be referencing legitimately malicious software.


That's not true. When an exploit shows up on a computer, "How did it get there?" is often the hardest question. There's no way to know short of capturing it in a lab environment.

If you're talking about "at scale" being "the entire world," then yes. But usually the NSA tends to target their operations regionally, e.g. Iran.


To clarify, I am not talking about attribution. When I say "not stay a 0-day for very long" I am referring to the fact that 0-day use by any threat actor is generally going to be very targeted: the chance of a PSP and/or network tap logging artifacts or alerting the user poses an extreme risk of exposing the intrusion, which would likely burn the 0-day (since discovery allows detection signatures and patches to be quickly created, as well as remediations applied to affected systems).


Any use of a zero-day risks burning it, and this was one of NSA's most potent zero-days. I imagine they used it rarely and wisely; probably trying other exploits first.


>and this was one of NSA's most potent zero-days.

Says who? We have no idea what they're sitting on, even our guesses come from terrible data.


And so now it's in the hands of people who have no such foresight. Which means soon it will be mitigated. Which means that despite all the pain right now, in the long run Wikileaks actually may end up having kind of helped humanity.


> Which means soon it will be mitigated.

It was fixed in a security patch one month before the Shadow Brokers leak. All computers affected by this ransomware outbreak (and WannaCry) were those whose owners decided not to patch.


I suppose with the word "mitigation" kind of already having a connotation in the security community, I probably shouldn't have used it without making clear that I wanted the term to include its more banal implications such as "install the patch" and/or "get your systems off that old-ass OS!"


Wikileaks was not involved; they were posting CIA documents.


This is absurd nonsense, but my viewpoint is a lonely one on HackerNews.


You're not the only one who thinks the idea of wearing a tin foil hat when you use Windows because the NSA only knows how to attack Windows is demeaning to the intelligence of other tin foil hat wearers.


What should I trust more:

A trade secret proprietary and obfuscated operating system from an organization known to collude with the government

Or

Code I have read in part, know others have read, and trust that, among all of us using it, those with the money or time would also audit

Granted, we are all on predominantly x86 computers with proprietary, obfuscated control processors that can seize control of the system and do whatever they are told by the manufacturer / those the manufacturer gives access to, so the security is in general a whiff.

Or more generally, don't use Linux for a false sense of security, because the security holes go much, much deeper than just the kernel and what's running on top of it, and Linux itself is nothing outstanding from a security architectural standpoint.


From the phrasing of your question, I suspect we disagree on the answer to your theoretically rhetorical question. I don't care what people could or would like to audit with their free time, I care what people do audit with their actual time, generally because they are paid or have a financial motive to do so.

Windows is fuzzed, analyzed, traffic analyzed, attacked, and picked apart inside AND outside Microsoft with higher frequency and greater depth than Linux is, regardless of which happens to be open source and theoretically easier to examine. If Microsoft were to inject malicious stuff into Windows it would be found and reported and exploited. There is too much money, too much exploit opportunity, and too much security researcher brand cred available to anyone who discovers even a hint of malicious behavior on Microsoft's part for it to go unnoticed and unreported.

And again, the point of the comment wasn't "Windows is secure" as nothing in tech is secure. The point was that someone who advocates wearing tinfoil hats around Windows to protect against the NSA while thinking Linux somehow gets a pass from those same bogeymen is not making a rational case for how to behave or what to fear.


It makes sense if you consider that some folks will only read headlines and potentially skim news coverage without checking any further into validity.


It's compromised by Microsoft, who would willingly (and would be required to) cooperate with the NSA upon request.


Yep, forced updates + NSL = they don't need 0days anymore.


That would never happen. A network tap would be able to detect a malicious update even if the main PC was implanted very well, and a Microsoft-signed malicious update would be worldwide news.

Please correct me if I am wrong, but I don't think there has ever been a single instance of this actually occurring, only "this could possibly happen" theories. I am definitely interested to hear more if this is not the case.


> That would never happen. A network tap would be able to detect a malicious update even if the main PC was implanted very well, and a Microsoft-signed malicious update would be worldwide news.

While I don't know of that specific scenario, Stuxnet used a hardware vendor's key to install infected drivers[1]. There was also a Chinese registrar that allowed a customer to man-in-the-middle Google[2]. Depending on how Windows organizes their driver updates, I could see an adversary doing a man-in-the-middle between Microsoft and their target, and pushing a bad driver update.

1. https://www.welivesecurity.com/2010/07/22/why-steal-digital-...

2. https://www.techdirt.com/articles/20140909/03424628458/china...


I am talking specifically about a malicious Microsoft-signed OS update in this context.

I fully agree with you regarding general problems which could occur with PKI.


"That would never happen" doesn't fly as a security proof.


I will concede that phrasing may be poor; a better way to put it is that "forced updates + NSL" would result in detection and a media firestorm, giving absolutely no benefit and obliterating any trust in Microsoft.


It's extremely risky to put out a mass update, yes. But if it were a targeted attack against an individual, the risk is greatly reduced, especially if that individual won't think twice about it.

With that said, you do have individual targets that are suspicious (e.g. https://citizenlab.org/2016/08/million-dollar-dissident-ipho...). There's always risk.


> It's extremely risky to put out a mass update, yes. But if it were a targeted attack against an individual, the risk is greatly reduced, especially if that individual won't think twice about it.

At that point, you'd have to hope the target would not check the hashes of update files. If detected, then there is the same issue: A signed malicious update being detected (and easily verified cryptographically if given to a reporter) would cause a catastrophic media firestorm, eroding trust in the vendor forever.

> With that said, you do have individual targets that are suspicious (e.g. https://citizenlab.org/2016/08/million-dollar-dissident-ipho...). There's always risk.

0-day use against perceived "high value targets" is indeed a possibility and valid concern. No argument at all there.


A signed malicious update would be a Big Deal(tm), but the entity would also be able to survive it by claiming it was negligence. I don't believe negligence has been significantly penalized in the marketplace, aside from perhaps CAs, where damage can be limited (prevent new certs from being seen as valid; plenty of other options for sites). There's no such option available for penalizing Microsoft, and their lock-in is significant enough to limit nuclear options for doing so.

"We've revoked the signing key that was hacked by blah blah we have the utmost regard for security and adhered to best practices" and everyone would probably gloss over it for one instance.


Their update signing is surely performed using an HSM with strict procedures for getting production builds signed, due to the exceptional sensitivity.

I think you might underestimate the gravity of such a thing happening; it would not be glossed over.


What are the alternatives once an event occurs and Google/Microsoft/Redhat/?? claim it was an accident outside of their control (possibly due to negligence)? Yes, outside experts will be investigating to the best of their ability and there will be a statement about what measures have been put in place to mitigate the issue in the future. But what else would happen?


@willlstrafach, nothing you have said convinces me the commenter you are replying to is wrong. Especially since an NSL would prevent ANYONE who detected anything from speaking about it. Updates that tweak code to introduce vulnerabilities are not science fiction.


> Especially since an NSL would prevent ANYONE who detected anything from speaking about it

Forced malicious updates would indeed be a reasonable concern if this was somehow actually the case. It is not, though, and I am not sure how that would even work. Are you saying that when it is detected, the government would somehow become aware of the detection and threaten the finder with an NSL before they could tell anyone?


Just because YOU can't figure out how it works does not mean it's not possible, my friend. But I will say that when you have a backdoor, and suddenly that backdoor stops providing intel/data/whatever, it's usually a good indicator.


I do not know what you mean by this. Again, my point was that any backdoor is highly unlikely to stay hidden.


I point yet again to the Yahoo email debacle. Google it please.


>a Microsoft-signed malicious update would be worldwide news

https://twitter.com/craiu/status/879690795946827776

>only "this could possibly happen" theories

Pre-Snowden a lot of things had been considered "could possibly happen" tinfoil hat theories, turned out a lot of them had not been mere theories.


> https://twitter.com/craiu/status/879690795946827776

1. That screenshot clearly shows the certificate is being treated as not valid. I assume it is being shared for IOC purposes.

2. I am referring to a software update, in the context of revmoo's "forced updates + NSL" comment.

> Pre-Snowden a lot of things had been considered "could possibly happen" tinfoil hat theories, turned out a lot of them had not been mere theories.

I could believe that is the case for those outside the information security community, but nothing novel/tinfoil-hat-worthy was in the leaks, just confirmations of predictable sources/methods used for intelligence gathering and CNE work. Forcing a company to issue a blessed update containing malicious code is very different, and again, I am very interested to hear of any proof of such a thing occurring without detection (it doesn't seem possible for that to happen without it being detected and discussed very loudly).



