What is a "Symantec-internal testing process" that leads to Google certs being leaked outside of Symantec? Did some engineer poking around just use "google.com" as an example? Seems like a pretty serious wtf moment. If I was Google I would be pissed.
If it was leaked externally, it'd be more than just pissing off Google, it'd be justification for removing Symantec's CA certs from browsers. I.e., who knows what other domains Symantec might be "testing" that would go undiscovered because the owners of those domains don't have Google's resources.
But as the other poster says, this probably wasn't leaked at all.
> it'd be justification for removing Symantec's CA certs from browsers
More than half of the CAs have publicly violated trust at some point. The governments of the US and China, who are arguably the biggest threats to HTTPS, still have CAs.
While I agree with you wholeheartedly, it doesn't look like either incompetence or malice vis-a-vis security is a substantial enough justification for the browser makers to pull the plug here.
Can you point to USG or Chinese CAs that publicly mis-issued or used certs? CNNIC comes to mind and they've been removed. Which others were you thinking about?
No, the definition of mis-issuing a cert is when you issue a Google cert to someone who isn't Google. Not doing due diligence on what the people you've issued a cert to are doing with it is a little different.
This is just semantics, though. I think everyone agrees they done bad.
The kind where they used Chrome to browse to an internal testing website? Though using a real website, like Google, instead of a mock is quite wtf. What I'm curious about is whether the fired* Symantec engineers were just scapegoats or whether they had actually been reckless and unprofessional.
> In addition, we discovered that a few outstanding employees, who had successfully undergone our stringent on-boarding and security trainings, failed to follow our policies. Despite their best intentions, this failure to follow policies has led to their termination after a thoughtful review process. Because you rely on us to protect the digital world, we hold ourselves to a “no compromise” bar for such breaches. As a result, it was the only call we could make.
> As much as we hate to lose valuable colleagues, we are the industry leader in online safety and security, and it is imperative that we maintain the absolute highest standards. At the end of day, we hang our hats on trust, and that trust is built by doing what we say we’re going to do.
Wow.
I have to say that I respect that decision. Without knowing the circumstances, I have to say that willful disregard for security policy while handling materials as sensitive as a CA cert is indeed not something I'd want to see from employees at a CA.
Agreed that the steps taken vis-a-vis these employees may have been the right ones if they indeed breached company policy.
But I do have an issue with publicizing this so openly, and using it to show off how serious "we" are.
Even with the best intentions, you will run into bad apples. You still need to have the right controls, preferably automated, to prevent sensitive material from being used for internal purposes. Blogging about how they terminated employees doesn't showcase their leadership, imho.
There is no basis for respecting a decision stemming from a "no compromise" policy. Such a policy is designed to substitute mechanical action for judgment and discretion.
Cf. the child-porn case also on the front page now (kid has picture of self on phone; https://news.ycombinator.com/item?id=10247764). It's probably also based on some kind of zero-tolerance policy or campaign promise.
Had the announcement merely referred to the "thoughtful review process" (which is good) but not then nullified the meaning of that process with a thoughtless "no compromise" standard (which is silly), then it'd be at least eligible for respect.
One has to wonder how many of these CA shenanigans were going on before these new systems to catch rogue certificates were put in place.
It would stand to reason that people are more wary of it now that there is a high risk of getting caught.
After reading DrDuh's guide to installing Yosemite, I thought a bit more about the ~200+ trusted CAs on my computer. I removed about ~50 using various heuristics, mostly arbitrary stuff like removing government agencies and international CAs that I was skeptical of or otherwise assumed I would not need.
To get to my question though, how many CAs does one need to trust for the safest browsing experience? What CAs should be trusted and how can they be evaluated? How many-ish are you guys trusting?
We really need a much better interface for managing trust. All of these security features rely on trusting something, and people need to have control over that. Maybe they have reason to distrust one of the CAs (China and their effort to catch people circumventing the Great Firewall is an obvious example), and so it should be easier for people to manage these important trust choices.
An interesting (and probably good) side-effect might be that market forces put pressure on websites/etc about their choice of CA. That is, if people distrust a CA and "break" websites that it signed, that's a good thing, as it lets the market punish shady CAs indirectly.
We really need a simple way for someone to browse the trust choices and easily say "I don't trust the government of $COUNTRY, disable all of their certs" or "Use these trust settings that my friend gave me on this $PHYSICAL_MEDIA" or "I trust $SOME_3RD_PARTY, use their recommended list". Several of these suggest the need for a portable and secure way to publish lists like the 50 CAs you just removed.
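Even something as crude as a shareable list of root fingerprints would be a start. Here's a rough Python sketch of producing one (the directory and file names are made up for illustration, and it assumes you've exported the roots you removed as PEM files):

    # Build a publishable "distrust list": SHA-256 fingerprints of the
    # DER-encoded roots, one per line. Paths here are hypothetical.
    import base64
    import glob
    import hashlib
    import re

    PEM_RE = re.compile(
        b"-----BEGIN CERTIFICATE-----(.+?)-----END CERTIFICATE-----", re.S)

    def fingerprints(pem_path):
        with open(pem_path, "rb") as f:
            data = f.read()
        for body in PEM_RE.findall(data):
            der = base64.b64decode(b"".join(body.split()))
            yield hashlib.sha256(der).hexdigest()

    with open("distrusted-roots.txt", "w") as out:
        for path in sorted(glob.glob("removed-roots/*.pem")):
            for fp in fingerprints(path):
                out.write(fp + "\n")

The SHA-256 fingerprint of the DER encoding is the same one keychain tools and browsers display, so whoever you share the list with can check it against their own store before disabling anything.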
Feel free to use that to check your own site's certificates!
(It's possible to directly query the multiple Certificate Transparency log servers for your site's certs, but non-trivial, hence why I implemented the above functionality.)
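For the curious, here's roughly what a direct query looks like; just a sketch, using the Python 3 standard library and Google's Pilot log as an example endpoint (any CT log exposes the same RFC 6962 /ct/v1/ API):

    # Minimal sketch: fetch the signed tree head and the latest few entries
    # from a CT log over the RFC 6962 HTTP API. The log URL is an example.
    import base64
    import json
    import urllib.request

    LOG = "https://ct.googleapis.com/pilot"

    def get_json(path):
        with urllib.request.urlopen(LOG + path) as resp:
            return json.loads(resp.read().decode("utf-8"))

    sth = get_json("/ct/v1/get-sth")
    print("log currently holds", sth["tree_size"], "entries")

    end = sth["tree_size"] - 1
    entries = get_json("/ct/v1/get-entries?start=%d&end=%d" % (end - 4, end))
    for entry in entries["entries"]:
        leaf = base64.b64decode(entry["leaf_input"])
        print("leaf of", len(leaf), "bytes")

The non-trivial part is what this sketch punts on: each leaf_input is a TLS-encoded MerkleTreeLeaf wrapping the (pre)certificate, so you still have to parse that structure yourself to pull out the domain names and match them against your own.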
This RSS service is useless to all Tor users, as CloudFlare attempts to serve up a captcha here. Serving a captcha on an RSS feed defeats the purpose of RSS automation.
I had issues with newsbeuter retrieving your feed with Tor.
Regardless, CloudFlare shows such disregard and contempt for privacy and anonymity that I'm not comfortable using services that use CloudFlare (Hacker News excepted!).
Certificate Transparency is a (mostly Google-developed) program which aims to require that certificates be publicized to some third party, so that mis-issuance (just like this one!) can be detected. You do this by telling a third-party log server that you're issuing a certificate for a domain, and the log server gives you back a proof (essentially a countersignature) saying "Yes, I am a third-party log and I've seen this certificate and I'm publicizing its existence."
You can submit the actual certificate to the Certificate Transparency log, but then the proof only exists after the certificate does, and the easiest way to send the proof to a client is to embed it in the cert itself. So instead, the CT protocol allows you to generate a "pre-certificate": a modified certificate with a special X.509 extension that poisons it against actually being used as a certificate. It is a promise that you are willing to issue this certificate, but it cannot be used to authenticate.
Since it can be exchanged for a CT log proof, just like an actual certificate can, it has exactly the same verification / trust requirements as actually issuing a cert. However, it cannot be used as a certificate.
Chrome now requires Certificate Transparency for all CAs that it has authorized to sign extended validation (EV, green bar with the company name) certificates, as those are held to a higher standard than the rest of the CA system (currently) is. While Google also operates the logs, they shouldn't have needed any special access: any website operator can ask the logs "Hey, what proofs have you issued recently?" and cross-check those against the certificates they actually intended to get signed.
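To make the pre-certificate / proof distinction above concrete, here's a rough sketch of checking a certificate for the two CT-related X.509 extensions. It assumes the third-party Python "cryptography" package (a reasonably recent version), and the file name is made up:

    # Check a certificate for the CT poison extension (pre-certificate) and
    # for embedded SCTs (the log's countersignatures / "proofs").
    from cryptography import x509
    from cryptography.x509.oid import ExtensionOID

    with open("example-cert.pem", "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())

    try:
        cert.extensions.get_extension_for_oid(ExtensionOID.PRECERT_POISON)
        print("Pre-certificate: loggable, but poisoned against real use.")
    except x509.ExtensionNotFound:
        pass

    try:
        scts = cert.extensions.get_extension_for_oid(
            ExtensionOID.PRECERT_SIGNED_CERTIFICATE_TIMESTAMPS).value
        print("Embedded SCTs from", len(list(scts)), "log(s)")
    except x509.ExtensionNotFound:
        print("No embedded SCTs (the proof can also arrive via a TLS "
              "extension or stapled OCSP instead).")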
CNNIC issued the intermediate to someone who was doing non-consensual, large-scale SSL MITM. CNNIC evidently didn't realize that's what they were doing, and possibly the company didn't either (and just accidentally MITM'd their own technician's connection to Google during testing -- not that this makes the story better), but in either case CNNIC should not have issued a non-HSM cert and they should have vetted the technical competence and goodheartedness of the customer. Furthermore, they said they were not issuing intermediates, and had not updated their certification practices statements before entering this "experimental" business.
In the Symantec case, the private key in question remained with Symantec at all times, with employees who legitimately had access to the EV certificate authority as part of their job, in the course of testing. It was never exposed to anyone outside the CA.
CNNIC sold a valid, unconstrained intermediate private key to a completely unqualified customer and had organizational troubles at all levels. We were lucky that the customer was also incompetent, and just got caught. Symantec had a few employees make a mistake internally, and at no point could anyone malicious (other than potential malicious employees) threaten internet security. And those specific employees got fired.
Link to this? The story was that they issued it to some Egyptian company that wanted to run a CA. This company was incompetent and didn't have an HSM. They did have a Palo Alto MITM box that had "CA capabilities", so they used that. Then an engineer at this company plugged his machine into the MITM port, loaded Chrome, and tada.
Utterly incompetent and against the CA rules. But not large-scale or non-consensual MITM, right?
Oh, yeah, you're right. They put their globally valid, unconstrained intermediate cert in a device whose primary purpose is large-scale SSL MITM, and they MITM'd themselves without realizing it, but yes, they weren't intending to do large-scale non-consensual SSL MITM. I'd forgotten the part where they were planning on using the device as a way to issue normal certs and ignoring the MITM capabilities.
Seeing as how using a trusted CA to do MITM isn't even remotely a valid business plan or idea, I think it's quite possible that they were incompetent enough to use any piece of hardware, yes. It's actually better for the world if they were planning to do MITM, as they'd have been caught so fast and their plans killed so quickly it'd be funny. As is, it's just luck that they messed up.
It's different because this was "internal testing" and the certificate wasn't used (at least, this is not mentioned). CNNIC issued a CA cert to some random intermediary. Slight difference in severity.
That's the problem with TLS trust: All it does is tell a browser that a CA trusts the certificate. The process to verify site ownership varies and is error prone.