
This video https://youtu.be/TBrPyy48vFI?t=1277 is a few years old, but it covers how the GRiSP platform combines Erlang with the RTEMS real-time OS [1] to overcome the Erlang VM's soft real-time limitations and achieve hard real-time event handling.

[1] https://www.rtems.org/



What are the soft real-time limitations of Erlang?


Erlang's BEAM, assuming no NIF chicanery, uses reduction counting to eventually yield the scheduler so that other Erlang processes get execution time. This gives you a kind of "will eventually happen" property: it can't guarantee a deadline will be met, just that everything will be serviced at some point.
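A minimal sketch of that property (module and names are illustrative, not from GRiSP): the busy loop below never yields voluntarily, yet the BEAM preempts it after a few thousand reductions, so the ticking process still runs — it just gets no deadline guarantee.

    %% reductions_demo: illustrative module name
    -module(reductions_demo).
    -export([start/0]).

    start() ->
        spawn(fun busy/0),              %% CPU-bound loop, never blocks
        spawn(fun() -> tick(5) end).    %% still gets scheduled regularly

    busy() ->
        busy().                         %% pure spin, preempted by reduction counting

    tick(0) -> ok;
    tick(N) ->
        io:format("tick ~p~n", [N]),
        timer:sleep(100),
        tick(N - 1).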


Right, GRiSP has support for creating RTOS tasks in C, IIRC.

Within BEAM itself there’s no priority mechanism; however, on an RPi3 or BeagleBone you could get about a 200 µs average response time to GPIO on Linux, even under moderate load. The jitter was pretty low too, around 10-20 µs on average, but the 99.9% tail latencies could get up to hundreds of milliseconds.

That’s fine for many use cases. Still, I now prefer programming ESP32s with Nim for anything real-time; imperative programming just makes handling arrays easier. I just wish FreeRTOS tasks had error handling akin to OTP supervisors.
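For reference, this is roughly what's being missed — a minimal OTP supervisor sketch (gpio_reader is a hypothetical worker module) where a crashed task gets restarted automatically:

    -module(gpio_sup).
    -behaviour(supervisor).
    -export([start_link/0, init/1]).

    start_link() ->
        supervisor:start_link({local, ?MODULE}, ?MODULE, []).

    init([]) ->
        %% Restart a crashed worker; give up after 5 crashes within 10 seconds.
        SupFlags = #{strategy => one_for_one, intensity => 5, period => 10},
        Child = #{id => gpio_reader,
                  start => {gpio_reader, start_link, []},  %% hypothetical worker
                  restart => permanent,
                  type => worker},
        {ok, {SupFlags, [Child]}}.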

Now BEAM/Elixir would be amazing for something like Home Assistant or large networked control systems.


Erlang does have a mechanism to modify process priority, via process_flag(priority, Level).
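For example, a small sketch (with_high_priority is a made-up helper, not a stdlib function) that temporarily raises the calling process's priority:

    %% Valid levels: low | normal | high | max (max is generally discouraged
    %% for application code). process_flag/2 returns the previous value,
    %% so we can restore it afterwards.
    with_high_priority(Fun) ->
        Old = process_flag(priority, high),
        try Fun() after process_flag(priority, Old) end.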

As of OTP 28 there's also priority messaging that a process can opt in to. Not really related, but it's new and interesting to note.


> As of OTP 28 there's also priority messaging that a process can opt in to.

That's a very important feature. Without priority messaging you can't nicely recover from queues that start backing up.


Just a reminder that, commonly, "real-time" on stuff like VxWorks isn't hard real-time either. You test a bunch of scenarios, add some CPU headroom you're comfortable with, and call it a day. With enough headroom and some more (or less, if you have money and time) hand-waving, you can more or less guarantee that deadlines will be kept.


It's all relative. Hard real-time vs. soft real-time is not clearly delineated, because in anything real-world there is always a probability distribution for every deadline.

Our observation is that Erlang's "soft real-time" already gets much harder once Linux is out of the way. We have a master's thesis worth of research on running multiple sets of schedulers in one Erlang VM at different hard real-time priorities, plus research on how a network of message-passing, garbage-collected Erlang processes can do Earliest Deadline First scheduling (sketched below).
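(Illustrative only, not the thesis code.) The selection rule at the heart of EDF is just "run the ready task with the nearest deadline":

    %% Ready is a list of {DeadlineMillis, TaskPid} tuples; Erlang term
    %% ordering compares the deadline first, so lists:min/1 picks the
    %% task whose deadline is closest.
    %% e.g. pick_next([{120, PidA}, {40, PidB}]) -> {40, PidB}
    pick_next(Ready) ->
        lists:min(Ready).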

However, that stayed at the prototype stage, because we found the "relative" real-time behavior of vanilla Erlang on RTEMS was good enough for all the practical customer problems we solved.

For very high-performance hard real-time we drop down to C, and we are currently working on a little language, programmable from the Erlang level, that avoids that.


quick question: why go the `rtems` route? would 'isolcpus' not work in this case?

--

thanks!


With Linux we can only run on larger embedded CPUs that support virtual memory well enough. With RTEMS we can go towards much smaller platforms.


Addendum: we have Buildroot- and Yocto-based platforms too. It's not clear on the website right now, but we actually have three platforms:

* GRiSP Metal - aka just GRiSP (Erlang/Elixir + RTEMS)

* GRiSP Alloy - Buildroot-based Linux; starts the Erlang/Elixir runtime as process 1, similar to Nerves but more language-agnostic (Nerves is Elixir-only), and we support RT Linux and running multiple Erlang runtimes at different priorities

* GRiSP Forge - Similar to Alloy but Yocto-based.

The idea is that, from the high-level-language perspective, they are more or less interchangeable.


ah! indeed it can. do such platforms have 18m of memory and then some?


I am not associated with the project, so I cannot answer that.



