I'm on the engineering team at Blockstack, and wrote the client-side encryption functions used by applications.
I totally agree with the idea that if software uses encryption, it should be documented, open-source, and ideally use a standard encryption protocol. Being able to say "this is exactly how encryption works" in a system is important, and I'm glad you're asking these questions.
Encryption in Blockstack apps is performed client-side via library calls in blockstack.js (our JavaScript library). The encryption routines, implemented here [1], perform ECIES using the user's application-specific private key. That private key is passed to an application during the application authentication process [2]. All a Blockstack application has to do is pass { "encrypt": true } to the storage routines, and encryption is invoked.
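To make the flow concrete, here is an illustrative sketch of the ECIES pattern (ephemeral key exchange, then a KDF, then a symmetric cipher plus a MAC). This is not the blockstack.js implementation: real ECIES runs over an elliptic curve, while this self-contained demo substitutes toy modular Diffie-Hellman and a toy hash-based stream cipher. None of these parameters are safe for production use.

```python
import hashlib, hmac, secrets

# Toy group parameters for illustration only; real ECIES uses an
# elliptic-curve group, not integer Diffie-Hellman.
P = 2**127 - 1   # a Mersenne prime; fine for a demo, not for security
G = 3

def kdf(shared_secret: int):
    """Derive an encryption key and a MAC key from the DH secret."""
    material = hashlib.sha512(shared_secret.to_bytes(16, "big")).digest()
    return material[:32], material[32:]

def xor_stream(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher: keystream from SHA-256 in counter mode."""
    out = bytearray()
    for i in range(0, len(data), 32):
        block = hashlib.sha256(key + i.to_bytes(4, "big")).digest()
        out.extend(b ^ k for b, k in zip(data[i:i + 32], block))
    return bytes(out)

def encrypt(recipient_pub: int, plaintext: bytes):
    eph_priv = secrets.randbelow(P - 2) + 1          # fresh ephemeral key pair
    eph_pub = pow(G, eph_priv, P)
    enc_key, mac_key = kdf(pow(recipient_pub, eph_priv, P))
    ciphertext = xor_stream(enc_key, plaintext)
    tag = hmac.new(mac_key, ciphertext, hashlib.sha256).digest()
    return eph_pub, ciphertext, tag

def decrypt(priv: int, eph_pub: int, ciphertext: bytes, tag: bytes):
    enc_key, mac_key = kdf(pow(eph_pub, priv, P))
    expected = hmac.new(mac_key, ciphertext, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("MAC check failed")
    return xor_stream(enc_key, ciphertext)

# The application-specific private key plays the role of `priv` here.
priv = secrets.randbelow(P - 2) + 1
pub = pow(G, priv, P)
msg = b'stored with { "encrypt": true }'
assert decrypt(priv, *encrypt(pub, msg)) == msg
```

Only the recipient's private key recovers the shared secret, so anyone can encrypt to the public key but only the key holder can decrypt.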
We definitely would like to provide better documentation and messaging around how applications engage with and use our client libraries -- and documenting our encryption routines is part of that. However, in the meantime, you can feel free to check out our codebase (it's all open source), and we'd always welcome any kind of feedback!
Nobody who's serious about security is going to use an app that does crypto in javascript. Why not make browser plugins to avoid this complication?
Not to mention, if browser makers take their existing browser storage functionality and build more flexible interfaces for it, your app will be kind of useless, as the browser could sync user data with arbitrary cloud providers.
The normal complaint about crypto in JS is that, as a user, I cannot tell what JS is going to be delivered to me this time. Perhaps a national security letter forced an update to broken crypto.
> THEN I'LL JUST SERVE A CRYPTOGRAPHIC DIGEST OF MY CODE
1) Javascript is open source and you can audit the code you are running.
2) You can save the HTML of a page and run your local copy so that you know the JS can't change or check the hash every time
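The "check the hash every time" idea from point 2 can be sketched like this (a toy illustration; in practice you would hash the saved file on disk before opening it):

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Hex digest used as the pinned fingerprint of the page."""
    return hashlib.sha256(data).hexdigest()

# The page you saved and audited once:
saved_page = b"<html><script>/* audited crypto */</script></html>"
pinned = sha256_hex(saved_page)   # recorded after the audit

# Later, before trusting the page again, recompute and compare:
assert sha256_hex(saved_page) == pinned

# Any change -- even one byte -- breaks the pin:
tampered = saved_page.replace(b"audited", b"backdoored")
assert sha256_hex(tampered) != pinned
```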
Can you audit the code of your OS or Browser? In theory, if you are on Linux, but in practice it is too complex and voluminous for one person to do.
A browser based app is usually in the thousands of lines of open source code running in a sandbox that is very easy to debug.
The browser environment is the most secure and most easily user auditable environment there is.
Unless you expect all of your users to build your app from source on a Linux system that they themselves built from source, you can't really get better security.
"Javascript Cryptography Considered Harmful" is old FUD. It was barely coherent when first published and the only legitimate arguments it had have been fixed.
Please stop propagating this article. Its positions are out of date.
FTA: WHAT'S HARD ABOUT DEPLOYING JAVASCRIPT OVER SSL/TLS?
You can't simply send a single Javascript file over SSL/TLS. You have to send all the page content over SSL/TLS. Otherwise, attackers will hijack the crypto code using the least-secure connection that builds the page.
----
Seriously? Serving entire websites over https is the norm these days. It appears as though the article has been added to but not updated. Current advocates should re-read the article.
Yes, I didn't really want to go through each point in the article. But here are a few things to consider as you read it:
- The article reads like those arguments we have in our heads where we always win.
- The article is from 2011
- The article says "come back in 10 years when people aren't running browsers from 2008"
(we're not to 2021 yet but the majority of browsers are evergreen now)
- We have SubtleCrypto now
- We have SubResourceIntegrity
- We have CORS and CSP
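SubResourceIntegrity in particular addresses the "I can't tell what JS will be delivered" complaint: the page pins a script to its exact content, and the integrity value is just a labeled base64 digest. Computing one (the script body and filename here are made up):

```python
import base64, hashlib

# The exact bytes of the script you audited and want to pin:
script_body = b'console.log("hello");'

# SRI integrity value: "sha384-" plus the base64 SHA-384 digest.
digest = hashlib.sha384(script_body).digest()
integrity = "sha384-" + base64.b64encode(digest).decode()
print(integrity)

# Usable in markup as:
#   <script src="app.js" integrity="sha384-..." crossorigin="anonymous">
# The browser refuses to run the script if the fetched bytes differ.
```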
The article has some valid points, but I posit that it is more harmful than helpful. We need a Mozilla-style "arewesecureyet" website instead.
The article hammers home that you should not trust client-side javascript crypto. And you shouldn't. Because you can't. Because of the 30 points in that article. If we spent all day on this forum, we could go back and forth over every single one, re-establishing the truth stated above.
It's like a Cloud OS. If my whole OS is running in the cloud, you can claim it's secure, "because crypto". But it's still actually running on a random pizza box in one of Google's datacenters. There's like 10 layers of trust and assurance needed between them and me.
If my OS is running on my laptop, I only need to trust ME, and maybe Intel's dodgy engineers, and whoever wrote the rest of what's running on my laptop. The control over the security of the system stays in my hands.
That is the basic trust problem, and on top of that are all the other technical problems that make client-side javascript crypto untrustworthy. Even if you solved all the other technical problems, I still don't trust what you are delivering to me more than I trust code that lives on my machine, designed by cryptographers to the highest standards of consumer security.
The following example is pretty much the chief reason why I will not be porting much software to Python 3:
$ python3
Python 3.6.3 (default, Oct 3 2017, 21:45:48)
[GCC 7.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> 4/3
1.3333333333333333
Python 3 is basically a different language from Python 2. If I wanted to port software to a different language, I would use any number of available languages that make other kinds of improvements over Python as well. The only remaining use case of Python for me will be as a quick scripting language and a data-analysis and graphing tool.
It is a change, but you can use the // operator for floor division:
$ python3
Python 3.6.3 (default, Oct 3 2017, 21:16:13)
[GCC 7.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> 4//3
1
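For porting specifically, Python 2 can also opt in to Python 3's division semantics ahead of time, so the same source behaves identically on both interpreters before the switch:

```python
# On Python 2 this future import enables true division;
# on Python 3 it is a no-op, so the file runs unchanged on both.
from __future__ import division

assert 4 / 3 == 1.3333333333333333   # true division everywhere
assert 4 // 3 == 1                   # floor division everywhere
```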
> Even if thousands of developers from HN switch, that would hardly move the needle. Ordinary users just won't care about any of this.
Thousands of developers from HN make websites. If they switch to Firefox, at the very least, the websites they make will support Firefox. This won't necessarily make Mozilla commercially viable, but if people are really that concerned about it, they can donate to the Mozilla Foundation.
I am horrified by the number of developers who only test in Chrome. At my last gig I found glaring UI bugs (whole menus not responding to mouse clicks, for ex) in FF. Sad.
At my job I work in Firefox, a coworker works in Chrome, and a third person works in Safari (to be honest we don't bother checking the site in any Microsoft browser). That way we are able to do cross browser testing fairly easily.
Not really. Assertions like yours are themselves moral assertions: that we should ignore the moral points at issue and instead favor some unspecified pseudo-business-y ones.
But even taken on business terms, you're sweeping a lot under the rug. As developers and entrepreneurs, we've benefited hugely from the web being an open, competitively specified platform. The more one large company can control the platform, the more it will get tilted toward that company and away from the rest of us.
That may not be bad for any given business next week; these things take time. But for anybody building a serious business, you're going to have to worry about the long-term, large-scale stuff. Google's been going 20+ years; Microsoft and Apple, 40+; IBM, 100+. They didn't get there by only thinking about the next quarter, and you won't either.
if using Chrome meant you needed to step on three kittens a day, I think I would agree with you.
but it's just browser preference, so the whole "moral" thing factors in less than whatever logo is printed on the pen I take from the junk drawer. I just want a pen that works.
One of those words that is often a tell is "just". That's where people sweep a lot of things under the rug. Including here, where you've hidden the fact that you made an unsupported assertion that assumes an answer to the question we're discussing.
I'll note that it's a different bad argument, one about consumer choice, than the one I was addressing, which was about business choices. But consumer choices too always have implications. That's why, e.g., boycotts are a thing: small decisions add up.
I'm going to have to agree with 2bitencryption here.
Everything is a "moral choice" when the person demanding the choice feels strongly about it, but that typically means you just lack perspective.
At the end of the day we're talking about browsers and websites, and while people may not LIKE it, when a business writes software it's a business decision as to whether or not they'll target all browsers or a subset.
By all means, keep on asserting things without demonstrating them and ignoring arguments and examples to the contrary. It doesn't actually convince, but I'm sure it makes you feel better.
Firefox got big in large part because they had good developer tools long before anyone else did. I worked several places where management would say things like, "don't waste time, we only need this to work in IE". The developers would nod and go right back to creating in Mozilla and then fixing it in IE after, because it was faster.
In that way there was a quiet revolution toward cross browser support.
I can confirm this: everyone I knew at the time was coding on Firefox for just that reason, even when no one required any compatibility with it. It was simply much easier to code with.
When I shared my observations with coworkers, they would nod and say they had experienced the same thing. Same with peers I knew outside of work. Either we were in a very large bubble, or that was happening everywhere. And I think the rise of Mozilla aligns with those observations. It 'just worked' because everyone quietly made sure it did, even when people told them not to.
Our business decision is that Firefox needs to be supported as well. The fact that most of the developers use Chrome as their daily driver, however, results in a lot of bugs being seen and caught early (or at all) there.
This is just flat inaccurate. Given the GP comment's premise of optimizing only for the business's direct interests, the expected value of your contribution against monoculture is so negligible that it won't balance out changing damn near any habit that you had already chosen. It's a pretty basic collective action problem; if you're optimizing for yourself and your business alone, ignoring the wider picture is still the optimal decision.
The actual argument against (which others are making and which I'm sympathetic to) is that one shouldn't optimize only for direct bottom-line business interests, that businesses and people have a social responsibility, etc etc.
But that's entirely different from what you're talking about.
That's it. I didn't say we should optimize for direct bottom-line business interests. I said IT IS A BUSINESS DECISION.
It is not the decision of the developers unless the BUSINESS GIVES THEM THE ABILITY TO CHOOSE.
And even in THAT, it's a business decision.
That's all I said. The business that pays for the labor and chooses the direction they go in.
This idea that a business targeting a specific browser is some horrible social problem is silly. If I'm making a product that's meant to sit in a kiosk running Chrome OS, I'm sure as shit not going to pay for FF and Edge support. If I get it by accident, fine, but if something breaks in FF I'm not putting any effort into fixing it.
I agree about this ethical concern, but this attack also shows that reporting the holes to manufacturers is of limited use -- these exploits have been known to manufacturers since at least March, and while patches have shipped, the computers remain vulnerable. Clearly, automatic security updates are still not aggressive enough to prevent these kinds of problems. Though it isn't clear from the article how out-of-date the vulnerable systems are, which would help in planning for the future. For example, Windows 10 pushes security updates very aggressively, and I wonder how many of the infected computers were running Windows 10 -- health care providers' computer systems are often notoriously out-of-date.
No-one running a large organisation's IT systems is going to be letting individual machines just install whatever updates the software maker feels like pushing, even on Windows 10. That would be a big risk in itself: plenty of software makers, including Microsoft, have pushed horrible breaking changes in updates in the past.
Personally, where I would point the finger squarely at Microsoft is in its recent attempts to conflate security and non-security updates. Plenty of people, including organisations who are well aware of what they're doing technically, have scaled down or outright stopped Windows updates since the GWX fiasco and other breaking changes over the past few years.
This also leads to silliness like the security-only monthly rollups for Windows 7 not being available via Windows Update itself for those who do update their own systems (not that this matters much if Windows Update was itself broken on your system by the previous updates and now runs too slowly to be of any use). Instead, if you don't want whatever other junk Microsoft feel like pushing this month, you have to manually download and install the update from Microsoft's catalog site. Even then, things like HTTPS and support for non-IE browsers took an eternity to arrive, and whether the article for the relevant KB on Microsoft's support site includes things like checksums to verify the files downloaded were unmodified seems to be entirely random.
I get that Microsoft would like everyone to use Windows 10, but for some of us that isn't an option or simply isn't desirable. We bought Windows 7 with Microsoft's assurance that it would be supported with security patches until 2020, so this sort of messing around is amateur hour, and they really should be called out on it a lot more strongly than they have been.
I would be curious about this too. I'd assume many of them would be running Windows 7, maybe? (Let's hope it's not XP).
Also, does Windows 10 Pro attached to a domain controller still have the same aggressive updates? Or do domain admins dictate that policy?
At one company I worked at, everyone in IT could volunteer for the patch group to get security patches a few days before the rest of the machines. That seems to work pretty well. Is there any evidence there might have been a 0-day involved that wasn't patched? I find it disheartening that so many machines in large managed networks like telcos and hospitals could be so far behind on patches! (3 months is A LOT in Internet time).
If people are just doing really basic stuff like order entry for doctors/nurses, we really need to get away from the full PC model. Seems like most of these machines should just be Chromebooks, Linux boxes that boot straight to a browser, or something of that nature instead of full PCs or Macs. Lower the attack surface with something that's easy to update. Those machines would be lower cost too and easier to manage/patch -- moving back to the terminal/thin-client model.
BMJ released a report[0] just two days ago alleging that up to 90% of the NHS's computers are still running XP.
> Many hospitals use proprietary software that runs on ancient operating systems. Barts Health NHS Trust’s computers attacked by ransomware in January ran Windows XP. Released in 2001, it is now obsolete, yet 90% of NHS trusts run this version of Windows.
It appears that Theresa May is trying to deflect attention from the fact that there has been massive under-investment in NHS IT infrastructure by reinforcing that it is an 'international attack on a number of countries and organisations'.
Whilst this is true, it's probably also true that the impact of this attack is highly concentrated across organisations with chronic under-investment and a laissez-faire attitude to security.
>Whilst this is true, it's probably also true that the impact of this attack is highly concentrated across organisations with chronic under-investment and a laissez-faire attitude to security.
Good developers are rare enough, but good IT security and security-minded developers are even more rare. And it's even more rare that they decide to work within healthcare.
There just aren't enough of you to go around, and you can't be everywhere.
Even if you can afford to have a dedicated pentesting team (I'd like to work at a healthcare system/hospital network that did), physical security is still a major problem if only because it's very easy to impersonate people.
From [1] "in the case of blocking workloads, it’s extremely difficult to determine the number of threads that optimizes overall throughput because it’s hard to determine when a request will be completed." So the argument they make is that responding to the number of blocked threads directly could lead to an over-correction, where you add a lot of threads to the thread pool when threads are about to unblock. This reduces throughput because you now suffer context switches and poor cache locality.
This is only a problem when you're not scheduling cooperatively. If you are scheduling cooperatively, then you don't ever have to experience unnecessary context switches.
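The cooperative case can be sketched with asyncio (an illustrative sketch, not tied to the runtime discussed in [1]): a hundred in-flight waits all share one OS thread, so there is no thread-pool sizing problem to solve and no involuntary context switch when they unblock.

```python
import asyncio
import threading

async def request(i: int) -> int:
    # A "blocking" wait, but cooperatively yielded: while this request
    # sleeps, the single event-loop thread runs every other request.
    await asyncio.sleep(0.01)
    return i

async def main():
    # 100 concurrent requests, all blocked at once at some point.
    results = await asyncio.gather(*(request(i) for i in range(100)))
    return results, threading.active_count()

results, nthreads = asyncio.run(main())
assert len(results) == 100
print("threads used:", nthreads)   # typically 1 in a plain script: no pool growth
```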
Have you thought much about address space randomization? It's a reasonably effective security strategy, but it essentially requires enough unused bits in the address space. If you have a system with one big memory space, presumably your address space is now more precious, or do you still have enough bits for randomization (both in the local and the global spaces)?
Being a single address space (SAS) system with position-independent code (PIC) and data makes the way ASLR works different. But we're big fans of exploit-mitigation techniques, and I will be preparing a decent white paper or talk about it shortly.
This is actually an interesting point. A compromised user table could conceivably be used for all sorts of nefarious purposes. If the attackers "having access" to the information in that table includes the ability to modify that table, then it is pretty much open season on Slack. For example, an attacker could replace a target user's password-hash with a hash that the attacker knows the plaintext of. Depending on the implementation of the random salt, the attacker may have to replace the salt as well. Then, the attacker logs in as the user, downloads the desired chat history, logs out, and sets the password hash to the original. Not enough information was really given in the blog post, but by the sounds of it, some teams experienced more targeted attacks.
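A sketch of why the salt matters in that attack (a generic PBKDF2 scheme chosen for illustration, not Slack's actual implementation): a salted hash only verifies against the salt it was created under, so swapping the hash column alone is not enough.

```python
import hashlib, os

def hash_password(password: bytes, salt: bytes) -> bytes:
    """Generic salted password hash (PBKDF2-HMAC-SHA256)."""
    return hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)

# The victim's row in the user table:
victim_salt = os.urandom(16)
victim_hash = hash_password(b"victims-real-password", victim_salt)

# The attacker precomputes a hash for a password they know,
# under a salt of their own choosing:
attacker_salt = os.urandom(16)
attacker_hash = hash_password(b"attacker-knows-this", attacker_salt)

# Replacing only the hash column fails: the login check hashes the
# attacker's password with the victim's stored salt.
assert hash_password(b"attacker-knows-this", victim_salt) != attacker_hash

# Replacing hash AND salt succeeds -- the login check now passes:
assert hash_password(b"attacker-knows-this", attacker_salt) == attacker_hash
```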
What about this exploit really requires a US supply chain? Why do you believe that the Five Eyes have fewer exploits for hardware originating from BRICS nations? Don't intelligence agencies have explicit missions to spy on those countries?
If you think that because you are running less popular hardware, that the NSA wouldn't bother obtaining exploits for them, you may be right. In that case, though, you're simply making a popularity trade-off. (And it is a trade-off, more popular hardware is more popular for a reason.)
From all the documents released by Snowden thus far, it is clear that the Five Eyes are targeting their own supply chains. They're using ARM7 and ARM9 chips in their hardware bugs, but it's clear that they have no focus at all on ARM exploits.
Both the RK3xxx and A23 chips are fully supported in Linux 3.16, so I see no trade-off. Just low wattage and a bit of peace of mind that my government has to custom-tailor exploits for me and my companies, which isn't very cost-effective. :-)
Let the switch cost something like 1 unit and the analysis of the Wikipedia example still holds: at the switch, drivers face a choice between a 41-minute route and a 45-minute route. Yes, this paradox is a property of the specifics of the network in question, but the claim is that it may hold for real networks too.
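Worked through with the Wikipedia example's numbers (4000 drivers, variable roads costing T/100 minutes for T drivers, fixed roads costing 45 minutes) and a hypothetical 1-minute switch cost:

```python
drivers = 4000

# Without the shortcut, drivers split evenly over the two routes,
# and each route costs (2000/100) + 45 minutes.
no_shortcut = drivers / 2 / 100 + 45
assert no_shortcut == 65

# With a 1-minute shortcut between the midpoints, a driver at the
# midpoint compares: stay on the fixed road (45 min) vs switch and
# take the other variable road (1 + 4000/100 = 41 min, since in
# equilibrium everyone is on the variable roads). Switching wins.
stay = 45
switch = 1 + drivers / 100
assert (stay, switch) == (45, 41)

# New equilibrium travel time: 40 + 1 + 40 = 81 minutes, worse than
# the 65 minutes everyone had before the shortcut existed.
with_shortcut = drivers / 100 + 1 + drivers / 100
assert with_shortcut == 81
```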
In Go, and other type-safe languages, the compiler usually knows exactly which ints are being used as pointers. This is the difference between "precise" and "conservative" GC and is one of the reasons that comparing GC performance in C++ to GC performance in other languages is difficult.
I was under the impression (again, folk-lore, mailing lists etc.) that Go had a conservative collector. I can't find an official documentation link that'll tell me if it is or not at the moment.
Atom also says 1.0's was more conservative, but, as Brad also said, still didn't scan "objects such as []byte" (meaning all plain-old-data arrays? who knows). The Go 1.1 Release Notes mention the collector becoming more precise, which was a particular issue on 32-bit because big heaps could span a lot of the address space.
At some point, this sort of discussion probably gets you less useful info per unit effort than just playing with a Go distribution, trying out whatever toy programs you find interesting.
[1] https://github.com/blockstack/blockstack.js/blob/master/src/...
[2] https://github.com/blockstack/blockstack.js/blob/feature/aut...