
You're right. Repository size might grow slowly if you store a source tree composed of many small files where only some of them change periodically. But if you store large binary blobs and all of them change often, git won't work that well.

You'd at least want the option to blow away some history as you run out of space or become confident you'll never need it -- you can't do that in git without rewriting the whole commit history from scratch and filtering out some commits.
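For what it's worth, the usual workarounds look roughly like this -- a sketch, assuming the third-party git-filter-repo tool is installed and `https://example.com/repo.git` stands in for a real remote; both approaches change every downstream commit hash:

```shell
# Keep only recent history locally with a shallow clone:
git clone --depth 100 https://example.com/repo.git

# Or excise a large blob from the entire history
# (rewrites all commits, so collaborators must re-clone):
git filter-repo --invert-paths --path big-blob.bin
```

Neither is the "delete old snapshots in place" operation you'd want from a backup tool; both are whole-history rewrites.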

I think there's some confusion here between backup and synchronization. They can be thought of as separate problems, and git might not do well at either in the general case. It's not ideal for backup because you might want to blow away old snapshots, and for general bidirectional synchronization you'd need some way to plug in custom merge conflict resolution (can git do that?). For example, you might have a policy that a file modified on your laptop takes precedence over the same file modified on your iPhone, or that some text files can be merged while others should be replaced based on timestamp, or something like that.
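To the parenthetical: git does support per-path merge policies via custom merge drivers. Roughly (the driver name `theirs-wins` here is made up for illustration; `%O`/`%A`/`%B` are git's placeholders for the ancestor, current, and other versions):

```
# .gitattributes: route *.db files to a custom driver,
# always keep our side for *.log files ("ours" is built in)
*.db   merge=theirs-wins
*.log  merge=ours
```

```
# Register the driver; %A is also where the result must be written:
git config merge.theirs-wins.driver 'cp %B %A'
```

That covers policy at merge time within one repo, though; it's not the timestamp-based or device-based resolution a general sync tool would give you.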



> But if you store large binary blobs and all of them change often then git won't work that well.

True, default git won't work well. But you can handle large, changing files git-style with a different approach; the bup backup system, which is built on git, comes to mind (see https://github.com/apenwarr/bup and http://lwn.net/Articles/380983/): it stores files as chunks, at which point xdelta and other diffs work efficiently.
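The core trick is content-defined chunking ("hashsplitting" in bup's terminology): a rolling checksum over a sliding window decides where chunks end, so an insertion early in a file only perturbs nearby chunks instead of shifting every later one. A toy sketch of the idea -- the additive checksum and constants below are illustrative, not bup's actual algorithm:

```python
WINDOW = 64      # bytes in the rolling window
MASK = 0x1FFF    # split when the low 13 bits of the sum are all ones

def chunk(data: bytes):
    """Split data at content-defined boundaries."""
    chunks = []
    start = 0
    rolling = 0
    for i, byte in enumerate(data):
        rolling += byte
        if i >= WINDOW:
            rolling -= data[i - WINDOW]   # slide the window forward
        if (rolling & MASK) == MASK:      # boundary depends only on content
            chunks.append(data[start:i + 1])
            start = i + 1
    if start < len(data):
        chunks.append(data[start:])
    return chunks
```

Because a boundary depends only on the last WINDOW bytes of content, inserting a few bytes near the front of a file re-chunks only the region around the edit; every chunk after the next boundary is byte-identical and deduplicates against what's already stored.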



