
I think the author is claiming that clicking on https://www.google.com/s;/ChromeSetup.bat;/ChromeSetup.bat?g... results in a file ChromeSetup.bat being downloaded, but in Chrome and Firefox the file downloaded is f.txt.

Has anyone tried this on other browsers?

EDIT:

Here is the portion of the paper explaining why this no longer works:

"However, a common implementation error could result in Reflected File Download from the worst kind. Content-Disposition headers SHOULD include a "filename" parameter, to avoid having the browser parse the filename from the URL.

This is the exact problem that multiple Google APIs suffered from until I reported it to the Google security team, leading to a massive fix in core Google components."



The author mentions a mitigation of specifying a filename in the Content-Disposition header, which that particular URL actually does:

    Content-Disposition: attachment; filename="f.txt"
Perhaps Google has fixed the problem for that URL -- I would hope the author contacted them in advance.
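
For reference, a server can set that header explicitly rather than leaving the browser to derive a name from the URL. A minimal sketch (TypeScript + Express; the endpoint name is made up, not Google's actual API):

    // Always send an explicit filename so the browser never derives one
    // from attacker-influenced URL path segments.
    import express from "express";

    const app = express();

    app.get("/api/complete", (req, res) => {
      const body = JSON.stringify({ q: req.query.q ?? "" });
      res.setHeader("Content-Type", "application/json");
      // Explicit filename: the download is saved as f.txt no matter
      // what the request path looks like.
      res.setHeader("Content-Disposition", 'attachment; filename="f.txt"');
      res.send(body);
    });

    app.listen(3000);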


Stories like these make me never want to build an HTTP web service again. HTTP(S) is just way too complicated for me to ever be confident I've done everything right. It's getting to the point where web services are like crypto: only experts should touch them.


Being aware of exploits and protecting against them comes with the territory. Luckily there are things like owasp.org to help developers keep up on web security. However, security is hard and it can't be done absentmindedly. There is no getting around that.


If the standards were more strict, some of these issues would not exist. I see this as exploiting a lot of slop in protocols. It should not be possible to interpret a URL as anything but a URL, yet here it's being reflected back and interpreted as something else entirely.


It has nothing to do with the standards being strict; the standards can be as strict as they want. If the standards are strict and useless, no one will follow them, instead implementing something less strict and more useful.

For example, when downloading a file from a website, what default name should you use for it? There is a header to tell you, but not every page supplies such a header, so the browser needs to do something. It chooses the last component of the URL as that filename. However, URLs are somewhat more complex than you might expect, so this becomes more complicated and can lead to attacker-controlled ways to manipulate this filename.
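
To illustrate how "last component of the URL" becomes attacker-controlled (a deliberately naive simplification on my part, not any browser's actual code):

    // Naive "filename from the URL" logic (my own simplification):
    function guessDownloadName(url: string): string {
      const path = new URL(url).pathname;            // e.g. "/s;/ChromeSetup.bat"
      const segments = path.split("/").filter(s => s.length > 0);
      return segments[segments.length - 1] ?? "download";
    }

    // An attacker who can append ";/ChromeSetup.bat" to an API path
    // controls what this returns:
    guessDownloadName("https://example.com/s;/ChromeSetup.bat");  // "ChromeSetup.bat"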

Now, you could make a stricter spec, for example by forbidding downloads unless the filename is properly specified, or forbidding any kind of default filename and making the user choose it themselves, or something of the sort. But any browser vendor that implemented this stricter spec would instantly annoy a lot of users, who would find things breaking that used to work and would be likely to switch to another, more permissive browser.

Security, compatibility, and robustness are hard factors to balance. Just blaming this on "slop in protocols" is a vast oversimplification.


>> For example, when downloading a file from a website, what default name should you use for it? There is a header to tell you, but not every page supplies such a header, so the browser needs to do something. It chooses the last component of the URL as that filename.

Yeah, and that's slop in the protocol. If the header were required, everything would still work; web sites would just have to fill in the header. What's easier to do: comply with a protocol where your site breaks if you don't, or have Swiss cheese and then make site developers learn a bunch of security best practices and hope they get it right?

Also in there is the good old "this site wants to blah blah" prompt that asks the user to decide. If you have to ask, the answer is "No! Fix your site so it's not on the user to decide." Broken certificates? Not my problem; the browser should just say "sorry, site security is busted" and leave it at that. It's an old debate, but AFAIC there is no debate, only laziness.


I get that. I just dread the days when malpractice for programmers is as common as it is for doctors. I like building functionality, not fortresses.


Yes, the author mentions that in his paper. He contacted both Google and Microsoft; sounds like Google rolled out fixes before publication, while Microsoft is still working on them:

  On March 2014, I reported a security feature bypass to 
  Microsoft which enables batch files (“bat” and “cmd” 
  extensions) to execute immediately without warning the 
  user about the publisher or origin of the file. Hence, 
  RFD malware that uses the bypass will execute
  immediately once clicked.

  ...

  Microsoft is working on a Defense-in-Depth fix to solve 
  this issue.
And:

  This is the exact problem that multiple Google APIs 
  suffered from until I reported it to the Google security 
  team, leading to a massive fix in core Google components.


§2.3.2 mentions that the author reported the security problems to Google, and Google fixed their APIs.


I wonder if this can be mitigated by marking your JSON actions as HTTP POST only. Since this utilises HTTP GET, the request would never be actioned, and since JSON APIs use HTTP POST almost exclusively, it wouldn't break existing code.
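
As a rough sketch of that idea (Express, made-up route name; this narrows the surface but isn't a complete fix on its own):

    // Accept the JSON action only via POST, so a plain link click
    // (a GET) never reaches the reflecting handler.
    import express from "express";

    const app = express();
    app.use(express.json());

    app.post("/api/action", (req, res) => {
      res.json({ ok: true, echo: req.body });
    });

    // A GET to the same path falls through to Express's default 404 handler.
    app.listen(3000);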


It's not so hard to build a form and submit it automatically with JavaScript in order to get a user's browser to do a POST request to any URL you want.
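
For instance (browser-side sketch; the target URL and field names are made up):

    // A page the attacker controls can fire a cross-site POST
    // without any user interaction.
    const form = document.createElement("form");
    form.method = "POST";
    form.action = "https://victim.example/api/action";  // made-up target

    const field = document.createElement("input");
    field.type = "hidden";
    field.name = "q";
    field.value = "payload";
    form.appendChild(field);

    document.body.appendChild(form);
    form.submit();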


That form wouldn't be on the same domain and therefore would hit CSRF protections.


Yes, if you require a user-specific random token in the request, the exploit doesn't work. But that's independent of GET/POST and not what you said in your earlier post.
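
Concretely, the token check might look something like this (Express sketch; express-session assumed, token issuance elided, names are my own):

    import express from "express";
    import session from "express-session";

    const app = express();
    app.use(session({ secret: "change-me", resave: false, saveUninitialized: false }));
    app.use(express.json());

    app.post("/api/action", (req, res) => {
      // Token stored in the session when the page was served (typing elided).
      const expected = (req.session as any).csrfToken;
      const supplied = req.get("X-CSRF-Token");      // sent back by the real page
      if (!expected || supplied !== expected) {
        res.status(403).send("invalid CSRF token");  // an attacker's form can't know it
        return;
      }
      res.json({ ok: true });
    });

    app.listen(3000);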


On Safari 7 on OS X, the file is downloaded as f.txt.json.


I read somewhere recently that Google used to be affected but have since patched their servers.


Ahh...that makes sense


I can confirm that on Safari, the file downloaded is f.txt



