Hacker News

You mention GC'd languages being a bad choice for games in your README. But these days Java has GCs that have < 1ms pauses. I wonder if that's still true?

https://kstefanj.github.io/2021/11/24/gc-progress-8-17.html



1ms in a frame that lasts 16ms is already huge in gamedev. We often optimize processes so that they last less than that. I will always prefer to spend 1ms on rendering, physics or audio over memory management. And the fact that it's not 1ms every frame makes things worse, because it might make you miss a frame irregularly, which will make the framerate jittery and the game feel very unpleasant.


Foolish 60hz peasant, why aren't you aiming for 240hz? Get your crap done in 4ms!
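
(Those budgets are just 1000 ms divided by the refresh rate, so the slack disappears fast:)

    1000 / 60 Hz  ~= 16.7 ms per frame
    1000 / 120 Hz ~=  8.3 ms
    1000 / 144 Hz ~=  6.9 ms
    1000 / 240 Hz ~=  4.2 ms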

I play a lot of games, from PC to console to mobile to handheld to VR, and I really don't get these kinds of takes. Until games from indie to AAA fix their existing loading and stuttering and microstuttering and various other issues, clutching pearls over GC non-determinism seems so silly. There's no GC to blame for why you can stare at a static scene with a can in a console AAA game under development for 6 years in a mature engine and see its shadow flicker on and off.

I find it hard to believe we'd be in a much worse situation if GC languages were even more prevalent. Part of this is that there are already a lot of sources of non-determinism in a C++ engine that can cause issues, including frame slowdowns. Of course with effort you can eliminate or bound a lot of it, though the same is true for GC'd languages. Real-time GCs have been a thing for decades, but sure, you probably aren't using one; nevertheless you can often tweak a non-real-time GC in useful ways if you need to -- or, as is mentioned every time this topic comes up, do the same thing you'd do in C++ by allocating big pools ahead of time and reusing things (i.e. not making garbage).

Memory management is not the hard part of making a game, though the more complex and demanding the game, or the more constrained the hardware, the more you'll need to consider it regardless of language, and change up what's idiomatic when programming in that language.

Not that it matters, because any positive t is going to mysteriously always be too much, but the 1ms number is really underselling ZGC. Maximum of 1ms pause times was the goal, in practice they saw a max of around 0.5ms, an average of 0.05ms (50us), and p99 in the above-linked article of 0.1ms (100us). A more interesting complaint would be the amount of memory overhead it needs. Tradeoffs are everywhere.
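
FWIW, the tweaking usually amounts to a flag or two on the java command line. A rough sketch on a recent JDK (the heap size is only illustrative, not a recommendation):

    -XX:+UseZGC          # opt into ZGC (JDK 15+)
    -XX:+ZGenerational   # generational ZGC (JDK 21+), reduces its overhead for many workloads
    -Xmx4g               # give it headroom: ZGC deliberately trades memory for latency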


In 1992 we still used joysticks plugged into OG gameports, the ones using NE558 timers and manual polling of IO until all the capacitors are charged: http://www.minuszerodegrees.net/oa/OA%20-%20IBM%20Game%20Con...

Reading such a gameport took >1ms, and you wanted more than one reading per second :)


The linked page shows 0.1 ms though.


Recently ran a benchmark comparing Rust and Java for a particular aspect of our code. The Rust version completed in less time, using one thread. Java required twice the time, twice the memory, and thirty-five threads. Even if your GC isn't pausing, it's still using as many CPUs as it can to trash your d-cache looking for objects to GC.


Rust takes an absolute eternity to compile code.

On that count alone I would never-ever consider it in hobby/games project of any sort.

It doesn't matter what features a programming language has, if it takes seconds if not minutes to compile even relatively small projects (less than 50-100k LoC).


That is genuinely one of our dilemmas. The product makes enough to spend an extra 20x on machines (and it scales horizontally). So does Rust's impact on developer productivity offset the 20x cost savings? New features mean new $$$, and more than enough to offset the extra runtime cost of Java.

Personally, I'm now doing hobby projects in Rust because it just feels right. But I've done entire systems in assembly, so I am a bit of a control freak. YMMV.


Why are you doing a clean build? You can't complain about the difference against binary dependencies if you're manually flushing the cache of built dependencies.

And if you're not doing that, you're either wrong, or have a terrible project structure (comparable to including literally every header in every source file in C++)


Isn't compilation time in a roughly similar ballpark for C++? Which is kinda... "quite often" used in games projects? Not sure how much in hobby ones though, if that's what is being discussed here, but I'm somewhat confused (hobby/games sounds like speaking about an either-or alternative?)


Kind of. C++ benefits from an ecosystem that embraces binary libraries, so usually you only have to care about your own code.

Then if you are using modern C++ tooling like Visual Studio, you can make use of hot code reload.

Here are examples in Visual Studio and Unreal:

https://www.youtube.com/watch?v=x_gr6DNrJuM

https://www.youtube.com/watch?v=XN1c1V9wtCk


The only thing your benchmark proves is that your Java code was not optimized as well as your Rust code. Java has overhead, but certainly not two orders of magnitude.


Our use case is definitely a pathologically bad problem for Java's GC. Nevertheless, it is a real use case, and CPU and GC are the primary impacts to our service.


You mean you're actually seeing 90% GC overhead and are not looking to improve on that? (like by tuning the GC or changing your implementation). GC impact on normally behaving applications should be less than a few percent, so you're not comparing Rust and Java, but Rust and badly written/tuned Java - which doesn't say anything about the maximum attainable performance.


Because you were comparing Apples to Oranges.

Now try that against D, any .NET language, Nim, Swift, Go, or any other GC language that supports value types and unsafe programming.


Swift is technically not a language with GC, I believe. It's probably using reference counting, like Objective-C before it.


Reference counting is a GC algorithm, despite the folk wisdom that describes it otherwise.

Chapter 5 of "The Garbage Collection Handbook":

https://gchandbook.org/contents.html

Or, if you prefer, chapter 2 of "Uniprocessor Garbage Collection Techniques": https://www.cs.cmu.edu/~fp/courses/15411-f08/misc/wilson94-g...

Plenty of SIGPLAN papers about the subject.

Despite all the marketing talk, the real reason why Swift went with ARC was easier interoperability with Objective-C; otherwise Apple would have needed to build something like .NET's RCW/CCW for COM interop on Windows, which follows the same concept as Cocoa's retain/release calls.
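
If it helps to see why it counts as a collection algorithm, here's a toy manual-refcount wrapper in Java; purely illustrative (the class and method names are made up), since Java already has a tracing GC:

    // Toy illustration of reference counting as a GC algorithm.
    // retain/release mirror Cocoa's calls: when the count reaches zero the
    // payload is dropped immediately and deterministically.
    final class Rc<T> {
        private T value;
        private int count = 1;

        Rc(T value) { this.value = value; }

        T get() { return value; }

        void retain() { count++; }

        void release() {
            if (--count == 0) {
                value = null;   // the "free": the payload becomes unreachable here
            }
        }
    }

The classic weakness is also visible: two Rc's pointing at each other never reach zero, which is exactly the cycle problem that tracing collectors (and Swift's weak/unowned references) exist to handle.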


Even before Java advanced its GC even more, a little-known 3D game called Minecraft was written in it.


To be fair, Minecraft was ported to C++ for anything but PC, and IIRC as of last year the PC version was ported as well, the only thing standing in the way being mod support. The reason Notch wrote it in Java was because that's what he knew, not exactly because it was a fantastic choice for a 3D game.


Bedrock edition is available basically everywhere and unified the non-Java platforms, yeah, though the Java edition still gets feature updates in tandem. There are still other differences between the two that will likely never be rectified (not least of which are bugs becoming relied-upon features). But all that's kind of beside the point. Clearly Java wasn't a bad choice.


As proven by his bank account, being written in Java wasn't a blocker for commercial success.


And still today, with all the advances, Minecraft is a stuttery mess, even with optimisation mods like OptiFine.


Lua with LÖVE [0] or LÖVR [1] is fine for many types of games though, even though Lua is a language with GC. Most of the heavy lifting is still done in low level languages like C or C++. And it should be easy to write performance critical parts in C/C++ anyways, if needed.

I suspect LÖVE / LÖVR would perform better than Java for games, but haven't tested or verified myself.

---

[0]: https://love2d.org

[1]: https://lovr.org


Idk why people are so worried about GC in general; just keep an object pool around (and byte arrays, if strings are immutable) and never see a GC pause. Sure it'll look a lot like C, but that's beside the point.
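
A rough sketch of the pool idea in Java (the names here are made up, and a real pool would want a capacity policy and better reset handling):

    import java.util.ArrayDeque;

    // Free-list pool: allocate everything up front and reuse it, so the
    // steady state of the game loop produces no garbage for the GC to trace.
    final class BulletPool {
        static final class Bullet {
            float x, y, vx, vy;
            void reset() { x = y = vx = vy = 0f; }
        }

        private final ArrayDeque<Bullet> free = new ArrayDeque<>();

        BulletPool(int capacity) {
            for (int i = 0; i < capacity; i++) free.push(new Bullet());
        }

        Bullet acquire() {
            Bullet b = free.poll();
            return (b != null) ? b : new Bullet();  // pool exhausted: grow (or fail loudly)
        }

        void release(Bullet b) {
            b.reset();
            free.push(b);
        }
    }

Acquire in the update loop, release when the bullet dies, and the allocator (and collector) never get involved after startup.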


At that point, you're not benefiting from the GC'ed language. Just use C/C++/Rust/etc, and you'll end up with something faster and more reliable for a small fraction of the effort.


Most modern GC-ed languages offer more than just a garbage collector. Off the top of my head:

1. Prevents memory corruption

2. Way better compilation times than C++ and Rust, both of which have absolutely horrid compile times.

3. Have all sorts of developer-ergonomics features (like being able to write functions in arbitrary order, compared to C/C++)

4. Have built-in reflection (none in C/C++) -- quick sketch below this list

5. IDEs/autocomplete/refactoring tools etc, just generally work better for C#/Java/etc.

Garbage collection is probably close to the bottom of the list of reasons why I would use a GC'd language when making an indie or hobby scale game.
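
On point 4, a quick sketch of what built-in reflection looks like in Java (class names are made up for illustration):

    import java.lang.reflect.Field;

    class Player {
        int hp = 100;
        float speed = 4.5f;
    }

    public class ReflectionDemo {
        public static void main(String[] args) throws Exception {
            Player p = new Player();
            // Walk the fields at runtime -- handy for debug overlays, save
            // systems, or editor tooling, with no codegen step involved.
            for (Field f : Player.class.getDeclaredFields()) {
                System.out.println(f.getName() + " = " + f.get(p));
            }
        }
    }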

I would never consider using Rust, simply because it takes too long to compile even relatively small projects.

Especially not for hobby or indie scale game projects.


Why downgrade my developer experience, when my GC language also offers C++ class features if I really need them?

People should learn more about what is in the toolbox.


Different languages are not downgrades/upgrades - they're different balance points between constraints.

Also, classes are merely one feature of C++. The language is a far cry from 40 years ago, when it was "C with classes".


Agree, however a large majority uses C++ as what I call C+, which is basically "C with classes" with a bit of C++11.

To the point that we have the Orthodox C++ movement, and educating people to move beyond the "C with classes" that keeps being taught around the world is a common discussion subject at C++ conferences and in ISO C++ papers.


I would certainly have agreed with this characterization of the majority 10 years ago. But - do you really believe this is still the case today? After 10 years of C++11 making headway, and a lot of effort to educate people differently? I wonder...


Object pools don't turn off-by-one errors into complete remote code execution exploits.


How does a GC help with that? That's just boundary checking.


Just? :)

And don’t forget about all the other ways such corruption could happen, use after free etc.

On top of all that, managed languages generally have stronger runtime type information, which won't accept an arbitrary memory address being implicitly read as executable code. Even explicit casts from Object to a more specific type will fail if the object is not of the expected type. Code must be defined as function objects in the language to begin with.
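
Concretely, in Java the bad cast fails at the cast site instead of silently reinterpreting memory (hypothetical snippet):

    public class CastDemo {
        public static void main(String[] args) {
            Object o = "just a string";   // the actual class is String
            try {
                // The JVM checks the object's real class here and throws a
                // ClassCastException rather than handing back a bogus reference.
                Runnable r = (Runnable) o;
                r.run();
            } catch (ClassCastException e) {
                System.out.println("refused: " + e);
            }
        }
    }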


Object pooling without GC has the same use-after-free problem as with it.


Coding imperfectly in safe languages leads to performance issues. Coding imperfectly in unsafe languages leads to correctness issues.

It’s a trade-off; performance in games is more important than correctness anyways so I tend to agree with you here.


A nitpick: correctness and type safety are very different concepts. Type-safe languages don't provide correctness (and vice versa).


They didn’t mention type safety though (Unless they’ve edited their post?)


One of the biggest mainstream engines (Unreal Engine) is C++ and utilizes garbage collection:

https://unrealcommunity.wiki/memory-management-6rlf3v4i

https://docs.unrealengine.com/4.27/en-US/ProgrammingAndScrip...


It depends. If you're not doing anything that pushes the boundaries then GC isn't a huge deal. After all, there are plenty of games out there written and running on the god-awful web stack.

But historically games have been boundary pushers, and a GC's overhead isn't really desirable in that space.



