"Hey, they reported cross-site scripting! Let's blacklist angle brackets, that'll do the trick!"
In case this is not clear to anyone in 2016: blacklisting known-dangerous characters is not an adequate bug fix. It's a rabbit hole; you will burn hours trying to blacklist every character or character combination that can cause a vulnerability, only to have someone own you anyway.
The proper fixes for common web application vulnerabilities are as follows:
Session Hijacking/Fixation/etc.: Use TLS.
SQL Injection: Prepared statements that AREN'T emulated; PHP's defaults are bad here.
EDIT: If you're writing in another language, make sure it's providing actual prepared statements, not string escaping masquerading as prepared statements. (My earlier comment was too broad; some forms of emulated prepared statements might be OK, but PHP's is dangerous.)
Encryption, Digital Signatures, Authenticated Key Exchanges, etc.: Hire an expert, don't do it yourself based on the advice contained within HN comments.
File Inclusion / Directory Traversal: Don't write your applications in a dumb way that makes these vulnerabilities possible. But if you must, use something like realpath() with a sanity check based on the expected parent directory (in PHP).
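A minimal sketch of that realpath() sanity check. The function name and base directory are illustrative, not part of the original advice; the idea is to resolve the path first, then verify it is still inside the expected parent directory.

```php
<?php
// Resolve a user-supplied file name relative to $baseDir and reject
// anything that escapes it. realpath() collapses ../ sequences and
// follows symlinks, so the comparison happens on the real location.
function resolve_inside(string $baseDir, string $userFile)
{
    $base = realpath($baseDir);
    $real = realpath($baseDir . '/' . $userFile);
    if ($base === false || $real === false) {
        return false; // nonexistent path: reject
    }
    if (strpos($real, $base . DIRECTORY_SEPARATOR) !== 0) {
        return false; // resolved outside the expected parent: reject
    }
    return $real;
}
```

So e.g. resolve_inside('/var/www/templates', $_GET['page']) returns false for inputs like "../../etc/passwd" instead of handing the path to include().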
XML External Entities: Make sure you disable the entity loader:
libxml_disable_entity_loader(true);
PHP Object Injection in PHP 5: don't ever pass user input to unserialize(); use json_decode() instead.
PHP Object Injection in PHP 7: either disable object instantiation or whitelist the allowed classes, e.g. unserialize($var, ['allowed_classes' => false]); or unserialize($var, ['allowed_classes' => ['DateTime']]);
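For reference, PHP 7's second argument to unserialize() is an options array with an 'allowed_classes' key. A small sketch of both variants (the DateTime payload is just an example object):

```php
<?php
// PHP 7 can restrict which classes unserialize() may instantiate;
// PHP 5 has no such control, hence the json_decode() advice above.
$payload = serialize(new DateTime('2016-01-01'));

// Disallow all objects: serialized objects come back as
// __PHP_Incomplete_Class stubs instead of live instances.
$stub = unserialize($payload, ['allowed_classes' => false]);

// Or whitelist specific classes:
$date = unserialize($payload, ['allowed_classes' => ['DateTime']]);

// Safer still for untrusted input, on PHP 5 or 7: plain JSON.
$data = json_decode('{"when": "2016-01-01"}', true);
```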
These are just some of the common problems I frequently find, of course. There are more basic ways to mess up an application ("not even checking that you're authenticated" being at the top of that list).
"Encryption, Digital Signatures, Authenticated Key Exchanges, etc.":
If you just want to get data from A to B over the network, use TLS 1.2 (and upgrade to 1.3 when it's ready). For an application where you control the code on both ends, add certificate pinning as well. It's probably still worth hiring an expert to make sure you're doing it right, but you have less chance of shooting yourself in the foot than if you try to roll your own.
Sometimes I think that if cryptographers wrote libraries the rest of us could use and that would "just work", security worldwide would improve. Bernstein's NaCl and the derived libsodium are a good starting point, though.
> If you just want to get data from A to B over the network, TLS 1.2 (but upgrade to 1.3 when it's ready).
Right. If you're not using TLS for your network communications, then your communications are not secure.
Some people also have other requirements (e.g. "I need to store SSNs, how can I encrypt them and still be able to search by them in MySQL?") which require separate app-layer crypto. In those situations, don't roll your own. :)
> Probably still worth hiring an expert to make sure you're doing it right but you have less chance of shooting yourself in the foot than if you try and roll your own.
Agreed.
> Sometimes I think if cryptographers wrote libraries that the rest of us could use and "just work", security worldwide would improve.
Ah yes, boring cryptography. :)
> Bernstein's NaCl and the derived libsodium is a good starting point though.
> PHP Object Injection in PHP 5: don't ever pass user input to unserialize(); use json_decode() instead.
> PHP Object Injection in PHP 7: either disable object instantiation or whitelist the allowed classes, e.g. unserialize($var, ['allowed_classes' => false]); or unserialize($var, ['allowed_classes' => ['DateTime']]);
I'd stick to not unserializing user input in both cases; that's a can of worms you just don't want to open.
Also, RNG bugs are common and exploitable enough to be worth noting: never use mt_rand(); stick to openssl_random_pseudo_bytes().
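A short sketch of the difference in practice (the token lengths are arbitrary examples): mt_rand() output can be predicted from a small sample of its state, while these functions draw from the operating system's CSPRNG.

```php
<?php
// Generate an unpredictable token with the OpenSSL extension's CSPRNG.
$token = bin2hex(openssl_random_pseudo_bytes(16)); // 32 hex characters

// PHP 7 also ships random_bytes() and random_int() in core, which are
// the preferred choice when available:
$token7 = bin2hex(random_bytes(16));
$pin    = random_int(100000, 999999); // e.g. a 6-digit one-time code
```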
Do prepared statements count as emulated if the DB doesn't support prepared statements, but the DB adapter is doing replacement during the encoding-to-typed-binary-wire-protocol step (i.e. replacement of typed tokens with other typed tokens) rather than by just concatenating strings?
By prepared statements, I mean your application actually sends the query string in a separate packet from the data, and thereby gives the data no opportunity to corrupt the query string.
What PHP does is silently perform string escaping for you instead of using a real prepared statement. This is stupid, but PHP Internals discussions are painful (so changing the default is unlikely to happen any time soon), and the userland fix is easy.
That doesn't really address my question. There are real prepared statements like you're talking about; there's the crap PHP does; and then there's what you get if you use e.g. Erlang's Postgres library, where you pass the query string and a parameter list (say, the value 5) separately.
Postgres's prepared statements aren't being used, but the distinction between "tainted" user-generated data and the "trusted" statement is maintained, because the 5 in the above is typed data being sent over the wire in a length-prefixed binary encoding, rather than string data being serialized and escaped into another string.
Which is to say, if you (or your users) tried to put a fragment of SQL in place of the 5 above, it'd just get treated as string-typed data, rather than SQL. You don't need packet-level separation to achieve that.
But is this approach still bad for "emulating" prepared statements, somehow? I don't see how.
The answer to your question is: I don't know; that's a new solution to me.
It looks like it could be safe, but I'd have to dig into its internals to know for sure. My gut instinct is that it's probably safer than escape-and-concatenate.
If any Erlang experts want to chime in with their insight, please do.
EDIT:
> Which is to say, if you (or your users) tried to put a fragment of SQL in place of the 5 above, it'd just get treated as string-typed data, rather than SQL. You don't need packet-level separation to achieve that.
>
> But is this approach still bad for "emulating" prepared statements, somehow? I don't see how.
Above you said:
> the distinction between "tainted" user-generated data and the "trusted" statement is maintained
If this holds true, then you've still solved the data-instruction separation issue, and what Erlang does is secure against SQL injection. So, yes, you don't need to send separate packets to ensure query-string integrity in that instance.
The shit PHP does is what I meant to decry when I was talking about emulated prepared statements.
Thanks for broadening my horizons a bit. I've edited my earlier post. :)
Is it too early to be suggesting Argon2? I hadn't heard of it until now, but the Wikipedia entry[1] shows that the paper was only released late last year.
Most environments don't have an implementation for it yet, and the ones that do will probably only get it through libsodium for the first few years.
> I hadn't heard of it until now, but the Wikipedia entry[1] shows that the paper was only released late last year.
Argon2 was the winner of the Password Hashing Competition, a several-year cryptography competition to find a new password hashing algorithm that would be secure against an attacker armed with a large GPU cluster.
The judges included a lot of famous cryptographers and security experts. Of particular note: Colin Percival, the author of scrypt, and Jens Steube, the project lead for hashcat.
I've read the paper and I think Argon2 will stand the test of time, but I could (of course) be wrong.
> Most environments don't have an implementation for it yet
The slowness with which environments got implementations of previous secure algorithms was half the problem with their adoption, but I think Argon2 has this nailed. The README now links bindings for Go, Haskell, JavaScript, JVM, Lua, OCaml, Python, Ruby and Rust.
I don't trust them. The various language bindings are maintained by random people who have gone through no particular vetting, and their code is not formally reviewed by anyone.
When I started looking through the node bindings, I found a number of minor bugs and a critical issue that left ~1% of passwords vulnerable.
I trust that the C developers do a good job, but phc-winner-argon2 does not appear to have ever made a formal release. Is master really always perfect?
My suggestion, if you really want to go overboard and knock it out of the park: use both. Run the password through bcrypt, then through Argon2. If one of them is ever deemed insecure or bad practice, you've still got the other.
This falls into the category of "coming up with your own system". It sounds theoretically as strong as either one, but it could end up weaker overall.
Define X as the maximum time you can allow a hash to run on your server, before it either starts to annoy users, or becomes a DoS issue. Moving from "Argon2, such that it runs for X" to "both algorithms, with a total cost X" means both of them are running with a much reduced work strength.
In the case of Argon2, there is an "iterations" counter, but t=2 is already reasonable, and on low-end hardware you may see t=1. So, as per the spec, reducing runtime in order to make the whole thing work is going to involve reducing m.
Except bcrypt is already not memory hard, and you've just reduced the only memory constraint in your algorithm.
And it's entirely possible there are bigger issues that I didn't come up with in two minutes of thinking about it.
Comodo, TrendMicro, AVG... a lot of security suites have made it into the headlines over the past couple of months because of their incredibly questionable practices. What's the reason for this?
Extremist answer: because the security industry is not about security, it's about psychology. People like to feel empowered to take responsibility for the security of their PCs, and an industry sprang up to sell them this belief. The relationship between this industry and making client PCs less vulnerable is tenuous at best.
Similarly, "compliance" is a ritual executives perform because they have found it to have a calming effect on each other. Any relationship between corporate security activities with "being less vulnerable" is accidental - mostly it is about generating reams of paper to prove that the ritual is taking place. When a security consulting firm sells an audit to a business, it's not about fixing what the audit finds, it's about the ability to say "Well, such-and-such reputable firm did an audit so we did our jobs." If you're clever you can charge a clueless SME $100,000 just to pay your technician $15/hour for a 2-hour Nessus scan and give them an auto-generated PDF audit report recommending that they turn on Automatic Updates.
A very, very small subset of the security industry is actually engaged in making software or business processes harder to exploit.
Wasn't there a similar issue in another browser here on HN recently? How does this actually happen - two different security companies both push out "secure" browsers that are fundamentally insecure? I'm not even in the security business and I know it would be fatal to publish a Chrome build without CORS. What I can't understand is why they would ever disable it. Seems almost like an act of malice.
I'd argue Comodo isn't a security company. It's a software company that markets software intended to have a positive effect on one aspect of your security (namely, malware). They're using 20-year-old, ineffective techniques to attempt that, and whether the net effect is positive, negative, or neutral is up for debate.
They continue to make hundreds of millions of dollars, so they keep going.
I just uninstalled this "Internet Security" piece of software recently, and had only kept it on my media PC because it was more of a burden to remove it. Once upon a time, they used to receive high marks for their antivirus software, but as of late, their antivirus software has done nothing but plague me with ads that pop up over the taskbar and rob me of computational resources. It isn't surprising that it is also riddled with security issues like this.
This seems to be the trend in antivirus software (like the other gangbuster revelation with Trend Micro). They've slowly turned their software into the crapware they used to defend against in response to their increasing irrelevance.
Interesting video, thanks for sharing. One question about it, if you don't mind: with Moxie's proposed client-based solution, how do I know that the communication with the notaries is safe? If there's an (active) MITM on the network, they could hijack the connections to all the notaries as well and, whatever the query from the client, respond "yeah, that cert is totally valid".
I guess I'd have to manually install notaries and somehow verify their certs myself upon installation.
Edit: well, I could rely on Firefox/Chrome to prebundle some "trusted" notaries' certs, like they do with CA certs now, but then I would be able to delete all but a few, contrary to the current situation where deleting some CA certs breaks the internet.
Comodo's browsers should not be trusted. They have jumped the shark and do not value their users' security or privacy in the slightest. Why do I say so? Because of things like this: http://forums.comodo.com/help-cd-b206.0/-t108748.0.html
Comodo's "secure" browsers have a tendency to lag rather badly behind Chrome. So a major security fix will land in Chrome and be pushed to stable along with the relevant security bug being made public but Comodo's Chrome-based browser won't land the patch for weeks or months.
Wow, that's disconcerting. Different product, but I'm almost tempted to uninstall the Comodo Firewall that's running on my Windows laptop out of fear that there's some other blatant security blunder waiting to be exploited.
Anyone have any suggestions for a free firewall alternative?
Pretty much this. I stopped using Comodo when the built-in firewall became "good enough". I did (briefly) miss the yay/nay prompt for every connection that Comodo offered.
It's reasonable to assume Windows Firewall on Windows 10 prevents the user from manually blocking connections to Microsoft advertising and telemetry domains.
You can be pretty sure that none of their software is fit for purpose if they're not using basic protections for a process which runs as a super user and parses every file it can find.
I don't understand the testcase they provide. It opens a window at https://ssl.comodo.com/ and sends a message to it with `postMessage`. However, the whole point of postMessage is to provide cross-origin communication. Continuing, the message they send is a snippet of JavaScript code.
Apparently https://ssl.comodo.com/ used to then proceed to execute that code. However, this is not a vulnerability in the browser, but in that website. Am I missing something? Was Chromodo breaking the `messageEvent.origin` property, breaking same-origin checks in JavaScript? Seems far-fetched.
No, that shouldn't fail. `postMessage` is not a random function defined inside the window at ssl.comodo.com. It's a function defined by the browser, available on every Window object, including ones returned by window.open().