
It would be a big step forward to have everything that runs as root written in something safer than C/C++.

(Screams come from the Old UNIX Admins.)



The biggest stumbling block there, for me, is that it's impossible to write shared libraries in go. You can't write libraries in go that can be used from anything other than go, so until you've converted everything over to go, you'll still need to write libssl.so and friends in C. Go is a great application language, but it doesn't play well enough with others to really be a systems language as long as you can't write libraries other languages can use in it.


Shared libraries are coming for Go.

https://docs.google.com/document/d/1nr-TQHw_er6GOQRsF6T43GGh...

But shared libraries aren't the only way or even (often) the best way to share code. Especially when we're talking about UNIX tools, it's pretty easy to have one process run another process. Go has a really quick startup time which makes that practical.


There are absolutely some scenarios that can be adequately (or better) handled by communicating with another process! I completely agree with you. There's still a large volume of use cases where linking code in-process is a very good fit for the problem.

I am looking forward to go someday being suitable for producing native shared libraries. If they ever prioritize and complete that work, that's something I'll absolutely be taking advantage of, and will extend the situations where I'm able to reasonably use go.


I moved from C# to Go, and one of the things I really loved is the sudden absence of DLL-Hell. I never have to worry that installing my program on another machine would suddenly cause it to bork.

I guess I'm happy importing everything into monster huge static binaries and not worrying :)


Coming to C# in the form of .NET Native and vCore on Windows.

Mono always supported static linking.

http://www.mono-project.com/docs/tools+libraries/tools/linke...

http://www.mono-project.com/docs/advanced/aot/

Sometimes I wish people would not mix toolchains with languages.


yeah, I get that. But my actual lived experience of working with C# is Visual Studio, Windows, and DLL-Hell.

Whereas the go toolchain is very much a part of the language. The go team don't tell you whether tabs or spaces are canonical, they tell you to run go fmt on your code.


While I like Go as a better C, I wouldn't use it instead of C#.


DLL hell was solved in .net a decade ago. If you need specific versions, ship with your own version in the local directory. Otherwise, .net assemblies solve the problem elegantly.

Go needs to support shared libraries if it wants to move to the next level of adoption.


Marcus, I'm also giving Go a try at the moment (C# is my day job). What are you using tooling-wise? I'm trying out LiteIDE at the moment and that's the best experience I've had yet.

So far my view is that I really like the language but I'm finding the tooling pretty unappealing. Wonder if I'm missing a trick?


I find that Sublime Text with the GoSublime plugin works pretty well. There's "Intellisense" for imported packages and GsLint actively detects syntax errors. Gofmt also takes care of indentation and formatting for you at every save.


++ to Sublime Text and GoSublime. It's what I use and anecdotally appears to be the most popular Go dev environment.


Thanks both of you - I haven't had a go with GoSublime yet. Now I will!


My experience: you kinda have to rethink what an IDE is. Go comes from the *nix tradition where the "coding environment" is a text editor and a few terminal windows.

I use GoConvey to automagically run my tests and tell me what I broke. I keep this running in a terminal window (so I can CTRL-C stop it) and a browser window.

I have another terminal tab on the same window for godoc, and that's serving another browser window.

I have a terminal tab for git commands and file manipulation (this is the one the terminal window is normally on).

I moved away from SublimeText to Atom (with go-plus) because while the load times suck, the go language support in the tooling is way better. Not that GoSublime is bad, it's not, but Atom just works better for me. I tried vim, and loved it, but the support for the go tools was never quite there, and configuring the bloody thing was a nightmare.

So every time I save a file it automagically runs gofmt, goimports, compiles and runs my tests in goconvey (and tells me what I broke), and colours my test coverage right in the editor.

It's not quite the integrated experience that Visual Studio is, but it's incredibly powerful, and conforms to the unix philosophy of lots of good, small programs working together.

And just for comparison; today I spent 20 mins trying to get a dark theme on VS2010 and had to give up (or hand-pick every colour in what looked like around 100 options)... whereas I have dark themes galore on everything else ;)


There are a lot of good choices out there. Personally, I use vim plus "gocode" to do context-sensitive code completion. gocode is able to parse go, which means that its autocompletion results are pretty good.


VS2010 ?!? We are already on 2013 with 2015 around the corner.


Tell that to Oracle, who hasn't bothered with OCI libs for VS2013 yet ;)


ODP.NET works just fine. :)


Not in C/C++ apps :-(


I see. My employer abandoned the C++ world in 2005, so I only get to play with C++ on hobby projects.


did I mention DLL-Hell? ;)

actually it's because I have a 2010 licence, but none for 2013. Another advantage of Go...


I haven't had DLL hell in our applications for years.

- DLLs are distributed with applications

- dependencies on GAC libraries are enforced with full versions

- we make use of application manifests when needed

> actually it's because I have a 2010 licence, but none for 2013.

Express and Community editions?

We just use MSDN subscriptions.

> Another advantage of Go...

I see some good uses for Go as an improved C, but tooling when compared with JVM and .NET eco-systems is not one of them.


Tooling actually is a huge advantage. I can install the latest Go distribution and get everything I need to work (minus a text editor).

On the other hand, on top of the JDK I have to install, eg, Eclipse, maven, etc. Not to mention all the crazy frameworks I would need to get any real project off the ground.

Similarly with .NET I have to install a whole host of stuff with Nuget, LINQPad, etc.


I'd rather use what VisualVM, Eclipse MAT, dotTrace... offer, not the .ps file generated by the Go runtime.

I'd rather use RAD tooling for GUI design, not spend hours doing design by code.

I also prefer to have a visual representation of DB schemas and generate code from it.

I could go on with lots of other examples. Go tooling will get there some day, when the language gets enterprise adoption.


> My experience: you kinda have to rethink what an IDE is.

An IDE is an integrated development environment; emphasis on integrated. Having to, by yourself, put together a mixed assortment of tools for your development environment doesn't seem to qualify as being integrated. Though of course someone could come and make a program that bundles a lot of tools together and present them as a coherent, integrated package.

(Personally I'm sceptical of IDEs myself and tend to believe more in being able to make your own work-flow and programming environment, which a lot of loosely coupled (ideally not coupled at all) tools makes simpler.)


Integration is a matter of degrees. The trend with recent editors (textmate, sublime, atom) is that they expose more abilities to integrate with simple editor functions. I don't see how this doesn't qualify as integrated.


They don't offer the developer experience Smalltalk, Interlisp-D, Lisp Machines, Ceres did, nor what modern IDEs are offering.


And because everything is statically compiled, go binaries tend to be very large.

Imagine each binary being 5 mb on average and multiply that by 100.


I suppose you could work around that by going the busybox route and provide multiple tools as a single binary and then just hard link them to the different command names. But I guess there might be other problems with that.


> And because everything is statically compiled, go binaries tend to be very large.

Are you sure it's because they are statically compiled?

Not because each binary has DWARF debug symbols in it so that there are useful stack traces?

> Imagine each binary being 5 mb on average and multiply that by 100.

I don't think it's a factor of 100 times, that seems a bit outrageous.


>> Imagine each binary being 5 mb on average and multiply that by 100.

> I don't think it's a factor of 100 times, that seems a bit outrageous.

I think he means 100 utilities all weighing in at 5mb


> Are you sure it's because they are statically compiled? Not because each binary has DWARF debug symbols in it so that there are useful stack traces?

Removing DWARF information with -ldflags -w reduces the size of the binary by 20%. The binary is 1.8MB originally...

    $ go build hello.go            
    $ wc -c hello                  
     1806192 hello
    $ go build -ldflags -w hello.go
    $ wc -c hello                  
     1408880 hello
The rest is pretty much all runtime; the hello world itself should take about 10k. That's more or less the size of the C or dynamically linked Rust versions (Rust also uses static linking by default, and the hello world is 300k in that case).


To see how much the runtime contributes to the size of the binary, use an empty main. For hello world you bring in the fmt package, which is a relative heavyweight:

    $ cat > t.go
    package main
    func main() {}
    $ go build t.go 
    $ wc -c t
      623280 t
    $ nm t | wc -l
        1029
    $ cat > t.go
    package main
    import "fmt"
    func main() { fmt.Println("hello world") }
    $ go build t.go 
    $ nm t | wc -l
        2541


Go binaries start at around 2mb on my machine, so not as bad as 5mb, but still much bigger than similar utils in c. Storage is getting dramatically cheaper all the time though, so the size is becoming less important.

Shared libraries for the std lib would definitely be nice for utils, but they do have drawbacks too (dll-hell).


The problem is not in storing the binary, the problem is that you have to load the binary in memory to execute it. So 5mb you have to transfer from disk to memory and keep it there until you're done (and this is not even counting the runtime memory use)


Not true at all on systems that use on-demand paging (i.e. all systems in the past 30 years). Only what's used is paged-in, and it's paged-in only when it's used. A big part of Go binaries are DWARF info, which are not paged-in in regular operation. Also, much of the code is not paged-in either. The code is currently bloated because the linker can't always statically determine which methods it can elide (it does that for functions), because Go allows to query the dynamic type of an interface at runtime (reflection, type switches and type assertions, and all that). Alan Donovan has done some promising static analysis work that will allow the linkers to elide more code than they do now.


If memory use is a huge issue, you can get down to 400KB or so without using the stdlib - just copy in the isolated code you need if your tool actually requires it. Even with liberal use of the stdlib it's not 5MB on average, more like 1-2MB - where did that 5MB figure come from? It is possible to have smaller binaries with static linking but you have to be careful. Of course it'd be nicer if they were more the size of tiny linux utils (30KB), but they'd need dynamic linking for that, which brings up other issues.

If you are severely resource constrained, and need to have lots of tiny programs resident in memory at once, then go with static linking is not a good choice, but does this preclude using go tools on modern servers, desktops, phones where many binaries are typically > 1MB today and many are only run for short periods anyway?


> Shared libraries for the std lib would definitely be nice for utils, but they do have drawbacks too (dll-hell).

There is also .a/.lib hell, the only difference is when it gets handled.

In Windows dll-hell is a solved problem since Windows 2000 for any developer that bothers to follow Microsoft guidelines instead of copying dlls into systems directories.


I didn't mean by referencing dlls to imply this was a windows-only problem, just that using dynamically linked libraries shifts the burden of dealing with dependency conflicts to the runtime machine, and away from the developer at compile time.


There is an open ticket planned for Go 1.5.

https://code.google.com/p/go/issues/detail?id=6853


Yea, I mean, that'd cost 20 cents or something to store.


Are shared libraries a memory win in production environments? For shared libraries to be a win, you have to be running different executables on the same machine (or VM) which share libraries of significant size relative to the memory use of the program itself. You also have to be using a big fraction of each shared library. With static linking, if you need "cos", you get that loaded; with a shared library, you bring in the entire math library.

Modern production environments often involve only one program per VM, or multiple instances of the same program. In such cases, shared libraries are a lose.


That is a toolchain limitation, not the language.

I think gccgo allows for .so.

And the work for Android is adding .so support to the main toolchain.


While totally unsuited for the aforementioned hypothetical situation, Go has also supported SWIG for a few years now.


You could write those command line libraries/etc in Ocaml now ;)


Yep

"Unix system programming in OCaml" http://ocaml.github.io/ocamlunix/


Oh this looks very interesting. It would be fun to read this and make Haskell versions of the programs as well as learn Ocaml at the same time!


You mean something like Perl Power Tools from the late 90s? https://metacpan.org/release/ppt :)


> It would be a big step forward to have everything that runs as root written in something safer than C/C++.

As you know, options have existed since the early days, but mentalities are hard to change.

I came to the conclusion that many issues in computing are only solved with change of generations.



