D isn't defined by performance or memory management, however. There are many D features that are just blisteringly professional in how they approach software development: using D is conducive to writing good software.

Metaprogramming in D, for example, is obvious and almost fun. The compile times are ridiculously short compared to C++, e.g. I can build the main D compiler in three different configurations in about 5 seconds on my machine.
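
To give a flavour, here's a tiny made-up example (not from any real codebase): compile-time function evaluation, templates and static if read like ordinary code:

    import std.stdio;

    // An ordinary function, but usable at compile time via CTFE.
    ulong factorial(ulong n)
    {
        return n <= 1 ? 1 : n * factorial(n - 1);
    }

    // Forcing compile-time evaluation: the value is baked into the binary.
    enum fact10 = factorial(10);

    // A template that picks its implementation with static if.
    auto twice(T)(T x)
    {
        static if (is(T : string))
            return x ~ x;   // string concatenation
        else
            return x * 2;   // numeric doubling
    }

    void main()
    {
        writeln(fact10);      // 3628800, computed during compilation
        writeln(twice(21));   // 42
        writeln(twice("ab")); // abab
    }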



I'm not saying that D is defined by that. I'm simply saying that if you're comparing to C or C++, you are talking about use cases that are defined by performance and/or memory management (or else they are being used for legacy reasons). If you have a greenfield project that doesn't have massive performance concerns, and doesn't have to be specifically in C or C++ for other reasons (toolchain availability, available developer resources, etc.), you probably aren't using C or C++.

Some people would argue that you can use D for things just as high-performance as C++, provided you use the GC very selectively, etc. However, you don't appear to be making that argument. If you aren't, then there's just no real point in comparing C++ and D. And if you don't have any of those requirements, and you are OK with obscurity, you face much stiffer competition from many other languages like Haskell, Kotlin, etc.


There are people who specifically use D for writing high-performance applications. They find it faster than C++ for a rather subtle reason: D code is more plastic, meaning it is easier to refactor while trying out different algorithms in search of a faster one.
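
As a small, made-up illustration of what I mean by plastic: because the algorithm can live behind a template alias parameter, trying a different one is a one-line change at the call site.

    import std.algorithm : sort, uniq;
    import std.array : array;
    import std.stdio : writeln;

    // The sorting strategy is a template alias parameter with a default.
    T[] dedupe(alias sorter = sort, T)(T[] values)
    {
        sorter(values);             // swap in any in-place sort here
        return values.uniq.array;   // collapse adjacent duplicates
    }

    void main()
    {
        writeln(dedupe([3, 1, 2, 3, 1]));   // default algorithm: [1, 2, 3]
        // dedupe!myExperimentalSort(data); // hypothetical alternative to benchmark
    }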


I'm sure these people exist, but they're the exception rather than the rule. The two biggest industries in SG14, the C++ low-latency study group, are games and HFT. In both of these areas, D penetration is pretty much zero. And in HFT at least, while many D features would be nice (especially reflection), I can't imagine it would be anything but a performance hit. Just the fact that the best-performing D implementation is LLVM-based, and most people do not find that LLVM produces assembly as good as GCC or ICC, is already an immediate, across-the-board performance hit.


D reflection is at compile time, not runtime, so no performance hit.
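
For anyone unfamiliar, it looks roughly like this (struct and field names invented for the example). The loop is unrolled during compilation, so there's no runtime metadata lookup:

    import std.stdio;

    struct Order
    {
        long   id;
        double price;
        int    quantity;
    }

    void printFields(T)(T value)
    {
        // __traits runs entirely at compile time; this unrolls into
        // three plain writeln calls.
        static foreach (name; __traits(allMembers, T))
            writeln(name, " = ", __traits(getMember, value, name));
    }

    void main()
    {
        printFields(Order(42, 101.5, 7));
    }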


I didn't intend to imply otherwise but I see my wording was unclear. I meant that D generally would be a hit, though it's only really speculation either way.


There is no inherent reason that D code would be slower than C++ code.

I should know, I've written a C++ compiler and know where the bloat is buried, and designed D to eschew features that would be inherently slower than their C++ counterparts.

You can even turn off the runtime array bounds checking.
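
A minimal sketch of what that looks like (the flag spellings below are the dmd/ldc ones, as far as I recall):

    // Compile with: dmd -boundscheck=off   (or ldc2 --boundscheck=off)
    // to remove the checks, or -boundscheck=safeonly to keep them
    // only in @safe code.
    @safe int sum(const(int)[] data)
    {
        int total = 0;
        foreach (i; 0 .. data.length)
            total += data[i];   // bounds-checked by default
        return total;
    }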


If you use the same backend... no. They are both native languages with "no room below".

llvm vs icc could be a hit, but that's all really.



LLVM and GCC are so close together these days that it makes almost no difference in practice, and when it does, you should start optimizing around your code, not your compiler.

Root of all evil etc. etc.


Dude I mean the context in which I'm discussing this is a large organization operating on timescales shorter than microseconds with numerous people involved in optimizing every single part of the pipeline. We are waaaaaaaaaaay past the point of "premature" optimization. This is just optimization, and optimization where a few percent difference between compilers is huge.

Your comments read like you're explaining optimization to a beginner, it's a bit bad faith tbh.


https://www.phoronix.com/scan.php?page=article&item=gcc-clan...

GCC and LLVM have broadly similar performance characteristics (i.e. for any arch/microbenchmark combo where one wins, there is another where the other comes out faster).

If GCC is appreciably faster for your purpose, then fair enough, but LLVM is not drastically slower by any means (especially when you consider that any "anything you can do, I can do better" race between LLVM and GCC moves much faster on the LLVM side, due to a much saner codebase).


You are talking about it as if comparing a Ford to a VW, where they are pretty much the same; what the poster above is talking about is more like comparing Formula 1 cars, where a single percent of engine power can win or lose you the race. The difference may be small enough for the vast majority of purposes, but this is one where a small difference can cost a lot.


Hmm... the thread started with GC; are you sure you want to allocate memory while struggling for the last percent of performance?


Both of which use their own C++ subset, while ignoring the standard library.

So even C++ isn't good enough for them.

Yet a few vocal anti-C++ devs in the game world, from Insomniac Games, are now at Unity driving HPC# efforts, so there's that.


HFT doesn't subset C++, nor does it ignore the standard library. Given that you're totally wrong about one, I'm not inclined to believe you on the other, and I have examples of the standard library being used in game dev, e.g. atomics.


The people I know using C++ for games wrote their own atomics library :-)


Well you can certainly elect to torture yourself if you really want to. Atomics in C++ compile to assembly in very straightforward ways and there isn't really an enormous design space there. Preshing certainly makes it seem like Ubisoft is using C++ atomics; if it makes sense for them with so many developers and creating AAA games... I'd be curious to know the motivation for writing their own.


> Atomics in C++ compile to assembly in very straightforward ways

in release mode.


Disabling RTTI and exceptions is subsetting C++, as it is outside of ISO C++ requirements, even if it is commonly supported among C++ compilers.

As for game developers, you don't have to look any further than Unreal, CryEngine or EASTL, or a couple of talks at GDC Vault.

So who's right?


Except that in HFT we don't generally disable RTTI or exceptions. I work at a top HFT firm, have friends/colleagues at other top firms, and have seen multiple people like Carl Cook at Optiver state in talks that they use exceptions... So what on Earth are you talking about?

Not using some or most of the standard library is precisely not an example of subsetting C++. Just because the C++ standard library has a hash table available doesn't mean that every project has to use it. Companies standardizing on their own high-performance data structures (and other components) where it makes sense is just something that happens, and it often makes sense independent of the language.


> we don't generally disable RTTI or exceptions

Keep up the good fight! "Exceptions are slow" is a really outdated opinion.


I guess Herb Sutter is all wrong on his talks then, what does he know.


Ultimately, though, this is the real tragedy of D. It started as a C clone, and continues to be named like it's a C clone, but it is no longer a C clone. It is a general-purpose memory-aware compiled programming language inspired by C++, but it is pretty different by now! People expecting a C clone miss what else it has to offer.

(Can it be a C replacement? Sure, but so can Rust/Swift/Go/etc)


I don't believe it ever was a C clone. I remember it always being presented as a C _successor_, like C++, but in a different direction.


Go (and some others, Swift surely) cannot be used as a C replacement since they need a runtime.


Just like C does, for calling into main(), doing floating-point emulation if the hardware doesn't support it, calling library initializers (a common C extension), and handling VLA allocations.

Just because it is tiny does not make it nonexistent.

Then there is the whole of POSIX, which is a kind of runtime that wasn't made part of ISO C but follows it around everywhere.


It's not necessarily tiny, but it's optional (besides main()). The Linux kernel and bare-metal embedded code do not use it.


The Linux kernel surely used it during the time it had VLAs in; they are now mostly removed, thanks to Google's efforts to reduce memory exploits in the Linux kernel.

One just links in another implementation, just like printf() vs printk().


libc.so.6 is about 2MB.



