
Dynamic linking was never about cheap storage; it was about RAM. Except that dynamically linking libraries into your program almost always uses more memory than statically linking your program. So there is no point. Most systems (including Unix, which did not have dynamic linking until the late 80s/early 90s) worked just fine without it.

Here is a good list of reasons why you should never use dynamic linking:

http://aiju.de/rant/dynamic-linking



FUD. Dynamic linking is about saving RAM. Nobody ever claimed it's faster or "secure", so the entire post is mostly built on made-up arguments.

And dynamic linking does save a lot of memory. That's one of the big wins for Google's Dalvik: the standard JVM is unable to share code between processes - that's why all JVM processes are memory hogs.

Looking at my process map right now: gnome-panel has 20MB of resident memory (not all of it is code, mind you), and 13MB of that is shared with other processes. Unity-window-decorator shares over half of its code with others. Just for kicks, open your process monitor with the "shared memory" column visible and sum up all those numbers - those are megabytes you would have lost without shared code.
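If you want to tally this yourself, here is a minimal sketch, assuming Linux and the /proc/<pid>/smaps fields documented in proc(5) - nothing in it is specific to gnome-panel. It sums the shared vs. private resident memory of every process it can read:

    #!/usr/bin/env python3
    # Minimal sketch (Linux-only): sum shared vs. private resident memory
    # per process by parsing /proc/<pid>/smaps. Field names per proc(5).
    import os, re

    FIELD = re.compile(r'^(Shared|Private)_(Clean|Dirty):\s+(\d+) kB')

    def shared_private_kb(pid):
        shared = private = 0
        with open(f'/proc/{pid}/smaps') as f:
            for line in f:
                m = FIELD.match(line)
                if not m:
                    continue
                if m.group(1) == 'Shared':
                    shared += int(m.group(3))
                else:
                    private += int(m.group(3))
        return shared, private

    for pid in sorted((p for p in os.listdir('/proc') if p.isdigit()), key=int):
        try:
            shared, private = shared_private_kb(pid)
            name = open(f'/proc/{pid}/comm').read().strip()
        except OSError:
            continue  # process exited, or we lack permission to read it
        if shared:
            print(f'{pid:>7}  {name:<20} shared {shared:>8} kB  private {private:>8} kB')

Run it as root to see every process; unprivileged, it only reports your own.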

It is about the CPU as well. Moving a sort() implementation in and out of the L2 cache is expensive; better to have a single copy of those opcodes do the sorting for multiple processes.

Sorry, but our operating systems and the programs we run on them are mostly composed of shared libraries. The debate over whether they're good or not is largely pointless unless we migrate to different OS designs.


"Sorry, but our operating systems and programs we run on them are mostly composed of shared libraries. The debate of either they're good or not is largely pointless unless we migrate to different OS designs."

What do you mean when you say that the operating system is dynamically linked? I don't even think Linux kernel modules count. I run OpenBSD, and all the base packages are statically linked. Many applications (especially graphical ones) are dynamically linked, but there aren't many reasons they have to be. The biggest one is bloatware libraries full of buggy crap you don't want or need. And dynamic linking isn't a solution for that problem.


"What do you mean when you say that the operating system is dynamically linked?"

By that I mean this: grab a debugger and attach to a random process you have running. Pause it and look at the current stack: you'll notice that the code you're looking at resides in a .so. And everything on the call stack is shared code, loads of it, sandwiched between the kernel at the bottom and a thin layer of hosting executable on top.
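You don't even need a debugger to see it. A minimal sketch, assuming Linux and the /proc/<pid>/maps format from proc(5): print every executable mapping of a process and notice how many of them are .so files rather than the binary itself.

    #!/usr/bin/env python3
    # Minimal sketch (Linux-only): list the executable mappings of a process
    # by parsing /proc/<pid>/maps (format per proc(5)).
    import sys

    pid = sys.argv[1] if len(sys.argv) > 1 else 'self'
    with open(f'/proc/{pid}/maps') as f:
        for line in f:
            parts = line.split(maxsplit=5)
            perms = parts[1]
            path = parts[5].strip() if len(parts) == 6 else '[anonymous]'
            if 'x' in perms:  # executable mapping
                print(f'{parts[0]:<28} {perms} {path}')

Run with no arguments it inspects itself - even this Python process executes mostly out of shared objects.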

Here's another way to look at it: http://paste.ofcode.org/diZdtuH8uPs2UBWHEvTYWh (try to ram all that code into gnome-panel itself, ship it with the next Ubuntu, and see what users tell you)

One more time: shared code is absolutely essential. Only the "one process per machine" datacenter approach can do without it. That's why there is no Java on the desktop.


"Except that dynamically linking libraries into your program almost always uses more memory than statically linking your program."

If your metric is the memory usage of one arbitrary program, perhaps. If your metric is the total memory usage of the system, dynamic linking is likely a win, because many applications can reuse the same pages in memory.
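A rough way to quantify that system-wide win for one library, again assuming Linux and the Rss/Pss fields from proc(5): sum Rss (approximately what each process would pay for a private copy) and Pss (the same pages divided among their sharers) across every process that maps the library. The libc path below is just an example and varies by distro.

    #!/usr/bin/env python3
    # Minimal sketch (Linux-only): compare Rss vs. Pss for one shared object
    # across all processes, via /proc/<pid>/smaps (fields per proc(5)).
    import os

    LIB = '/usr/lib/x86_64-linux-gnu/libc.so.6'  # example path; varies by distro

    def lib_rss_pss(pid):
        rss = pss = 0
        in_lib = False
        with open(f'/proc/{pid}/smaps') as f:
            for line in f:
                if not line.strip():
                    continue
                tok = line.split(None, 1)[0]
                if '-' in tok:  # mapping header: "start-end perms offset dev inode path"
                    in_lib = line.rstrip().endswith(LIB)
                elif in_lib and tok == 'Rss:':
                    rss += int(line.split()[1])
                elif in_lib and tok == 'Pss:':
                    pss += int(line.split()[1])
        return rss, pss

    total_rss = total_pss = sharers = 0
    for pid in (p for p in os.listdir('/proc') if p.isdigit()):
        try:
            rss, pss = lib_rss_pss(pid)
        except OSError:
            continue  # process exited, or we lack permission
        if rss:
            sharers += 1
            total_rss += rss
            total_pss += pss

    print(f'{sharers} processes map {LIB}')
    print(f'Rss sum: {total_rss} kB (rough cost if each had a private copy)')
    print(f'Pss sum: {total_pss} kB (actual resident cost with page sharing)')

The gap between the two sums is the memory the shared pages are saving you right now.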

"Most systems (including Unix, which did not have dynamic linking until the late 80s/early 90s) worked just fine without it."

That's neither here nor there; most systems worked just fine without operating systems until those were invented, too.



