Damn, seems so obvious in hindsight. When I learned about the CSS history information leakage in the first place I was alarmed enough. I clear history several times a day because of this leakage problem (and other reasons anyhow) but that's really not often enough.
I personally don't like whitelists as a solution to general browsing problems, but NoScript does let you limit which sites can run JS, so only the sites you trust could potentially harvest your history information. I like seeing new sites with JS magic though. In general, I am leaning towards using a separate browser entirely just for things I log in to and fully trust -- this is just another log on that fire.
Interesting attack, but it only works for CSRF tokens included in a URL. If you instead place your tokens in hidden form fields (which I recommend, since you should only need to protect POST requests) this attack won't affect you.
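To illustrate the hidden-form-field approach mentioned above, here is a minimal sketch in Python. All names (`SECRET_KEY`, `issue_token`, the `/transfer` endpoint) are illustrative assumptions, not any particular framework's API; the point is just that the token lives in a hidden field of a POST form rather than in a URL:

```python
import hmac
import hashlib

# Assumption: a per-deployment server-side secret (illustrative value).
SECRET_KEY = b"server-side-secret"

def issue_token(session_id: str) -> str:
    """Derive a per-session CSRF token server-side."""
    return hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256).hexdigest()

def render_form(session_id: str) -> str:
    """Embed the token in a hidden form field, never in the URL."""
    token = issue_token(session_id)
    return ('<form method="POST" action="/transfer">'
            f'<input type="hidden" name="csrf_token" value="{token}">'
            '</form>')

def check_post(session_id: str, submitted_token: str) -> bool:
    """Reject the POST unless the hidden-field token matches the session."""
    return hmac.compare_digest(issue_token(session_id), submitted_token)
```

Since the token never appears in a URL, it can't leak through history sniffing, referrer headers, or server logs the way a GET parameter can.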
It also only works against tokens short enough to be brute-forced, essentially in JavaScript. And it would only work for tokens that haven't already expired (isn't it pretty common practice to let a token be used only once?).
All in all, it seems like most developers who know enough about security to try to stop CSRF with tokens in the first place would end up with an implementation where this attack is useless. Still an interesting idea though.
Sure. I had noticed tokens in URLs before but didn't usually care much, since they were sent over HTTPS, which doesn't reveal the parameters on the wire. Now I do care.
Let's say that you have an application that one part of your company has developed for use in several other sites your company runs. This application sits in an iframe. You want to make sure that it's only being used on sites from the *.example.com domains.
So, you set a cookie on the .example.com domain, and then, on the server side, you create a one-way hash of that cookie and a key phrase (a crumb), and put it in the URL of the iframe.
The key phrase doesn't have to be particularly secret, it's mostly for convenience so that you can use the same cookie and hash function in multiple different applications without getting the same crumb everywhere for a given user. So, if a crumb is leaked, it's only going to expose one thing, not everything. Also, if you make it time-dependent, then it further limits the exposure.
The iframe validates that the crumb in the query string matches the one-way hash that it gets by performing the same one-way hash against the cookies and the key phrase.
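The crumb scheme described above can be sketched roughly as follows. This is a hypothetical illustration, not Yahoo's actual implementation; the key phrase, hash choice, and time-window size are all assumptions. The time bucket implements the time-dependence mentioned earlier, so a leaked crumb expires:

```python
import hashlib
import hmac
import time

# Assumption: an app-specific key phrase, distinct per application so the
# same cookie yields different crumbs in different apps.
KEY_PHRASE = "checkout-widget-v1"

def make_crumb(cookie_value: str, window: int = 3600) -> str:
    """One-way hash of the session cookie plus the key phrase,
    bucketed by time so the crumb has limited lifetime."""
    bucket = int(time.time()) // window
    msg = f"{cookie_value}|{KEY_PHRASE}|{bucket}".encode()
    return hashlib.sha256(msg).hexdigest()

def crumb_valid(crumb: str, cookie_value: str, window: int = 3600) -> bool:
    """The iframe recomputes the hash from the cookie it receives and
    compares it to the crumb on its query string.  The previous time
    window is also accepted, so requests near a bucket boundary pass."""
    now = int(time.time())
    for bucket in (now // window, now // window - 1):
        msg = f"{cookie_value}|{KEY_PHRASE}|{bucket}".encode()
        expected = hashlib.sha256(msg).hexdigest()
        if hmac.compare_digest(expected, crumb):
            return True
    return False
```

Note that the crumb's strength comes from the entropy of the cookie value, not the key phrase: anyone without the cookie can't recompute the hash, which is why a short, guessable cookie would make the whole scheme brute-forcible.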
Yahoo uses this technique all over the place. It's very effective as long as the crumb function is strong enough; if your crumb is brute-forcible, though, it can be exploited easily.
Putting session keys in a GET URL happens a lot (whether it "should" or not :-)). It's an authentication scheme for access to the URL: your GET may or may not actually have "side effects" regardless, that's orthogonal.