
While I understand why Google did it (because of lawsuits), this is very annoying, and not just in this case: in general, you don't know how, or even whether, you can use a resource you fetch from a website, even if its robots.txt allows you to fetch it. This is actually an even bigger problem for small companies than for Google, since Google has already reached a size where it can manage the issue somehow.

It would be nice to have a version of robots.txt that states, in a predefined format, the license under which each resource is available. That way you would know right away what you can and cannot do with it.
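To make the idea concrete, here is a minimal sketch of how a client might read such a field. Both the "License:" directive and the parsing code are hypothetical; nothing in the actual robots.txt standard defines anything like this today.

    import urllib.request
    import urllib.parse

    def fetch_license(url):
        """Fetch a site's robots.txt and return the value of a
        hypothetical 'License:' directive, or None if absent."""
        robots_url = urllib.parse.urljoin(url, "/robots.txt")
        try:
            with urllib.request.urlopen(robots_url, timeout=10) as resp:
                body = resp.read().decode("utf-8", errors="replace")
        except OSError:
            return None
        for line in body.splitlines():
            # Strip comments, then look for the proposed directive.
            line = line.split("#", 1)[0].strip()
            if line.lower().startswith("license:"):
                return line.split(":", 1)[1].strip()
        return None

    # Example: a site could declare
    #   License: https://creativecommons.org/licenses/by-sa/4.0/
    # and a crawler would learn the terms before using the content.
    print(fetch_license("https://example.com/"))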



