(~gc side-track, Walter) Has anyone ever tried using a caching model for application memory? LRU-evict memory objects to disk, with pseudo-persistent-memory application behavior?
Your keyword here is 'transparent persistence' (or 'orthogonal persistence'). There was quite a lot of work on this a few decades ago - https://dblp.org/db/conf/pos/index.html
p.s.
those seem to be persistent-object focused. Persistence in a caching memory model would be a side effect (and wouldn't even have to be used). Do you know of any research primarily focused on a 'caching memory model' as an alternative to (general) GC or reference-counting strategies?
Oh, I see. I am not aware of any work in that domain. Caching tends to be rather application-specific; despite all that phk says about Varnish, mmap is not really a good general solution to the problem. Something language-specific with some awareness of the object model and perhaps access patterns could be an improvement, but not, I think, more than an incremental one.
It's an idea that's been trying to get me seriously interested /g for a while now. I didn't mean to assert novelty (but to date I haven't found any prior work).
The general idea is pretty straightforward: you have a ~fixed-size chunk of (virtual) memory that active memory objects reside in, while garbage and very rarely used references get flushed to disk. Presumably, if the operating memory requirements (i.e. the active objects) fit in the available Ln cache layer, this should be a viable alternative to 'collecting garbage': we're trading the overhead of tracing/ref-counting for the cost of the cache mechanism. If the cache is tiered, then the 'garbage' and 'rarely used' objects will end up in nice VM blocks that the OS will flush to disk. One approach basically requires a single language-level annotation to distinguish 'long-lived' objects.
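To make the shape concrete, here's a minimal sketch of how I picture it, in Rust: a `CachedHeap` holds a fixed number of resident slots, every object gets a stable handle, and the least-recently-used resident gets evicted to a backing file standing in for the disk tier. To be clear, this is all made up for illustration; the names (`CachedHeap`, `Handle`, `OBJ_SIZE`), the fixed-size objects, and the linear-scan LRU are placeholder choices, not an existing system.

```rust
use std::collections::HashMap;
use std::fs::{File, OpenOptions};
use std::io::{Read, Seek, SeekFrom, Write};

// Illustrative sketch only: fixed OBJ_SIZE and linear-scan LRU keep it short.
const OBJ_SIZE: usize = 64;   // fixed payload size, just to simplify the sketch
const CACHE_SLOTS: usize = 4; // tiny "active set" so eviction actually happens

type Handle = u64; // stable object id; survives eviction to disk

struct Slot {
    handle: Handle,
    data: [u8; OBJ_SIZE],
    last_use: u64, // logical clock for LRU
}

struct CachedHeap {
    slots: Vec<Slot>,              // the ~fixed-size chunk active objects live in
    index: HashMap<Handle, usize>, // handle -> slot, for resident objects only
    backing: File,                 // cold tier: object lives at handle * OBJ_SIZE
    clock: u64,
    next_handle: Handle,
}

impl CachedHeap {
    fn new(path: &str) -> std::io::Result<Self> {
        let backing = OpenOptions::new()
            .read(true).write(true).create(true).truncate(true)
            .open(path)?;
        Ok(CachedHeap {
            slots: Vec::new(),
            index: HashMap::new(),
            backing,
            clock: 0,
            next_handle: 0,
        })
    }

    fn alloc(&mut self, data: [u8; OBJ_SIZE]) -> std::io::Result<Handle> {
        let h = self.next_handle;
        self.next_handle += 1;
        self.install(h, data)?;
        Ok(h)
    }

    /// Touch an object; if it went cold and was evicted, fault it back in.
    fn get(&mut self, h: Handle) -> std::io::Result<[u8; OBJ_SIZE]> {
        self.clock += 1;
        if let Some(&i) = self.index.get(&h) {
            self.slots[i].last_use = self.clock;
            return Ok(self.slots[i].data);
        }
        // Cache miss: page the object back in from the backing file.
        let mut data = [0u8; OBJ_SIZE];
        self.backing.seek(SeekFrom::Start(h * OBJ_SIZE as u64))?;
        self.backing.read_exact(&mut data)?;
        self.install(h, data)?;
        Ok(data)
    }

    /// Put an object in a slot, evicting the LRU resident if the arena is full.
    fn install(&mut self, h: Handle, data: [u8; OBJ_SIZE]) -> std::io::Result<()> {
        self.clock += 1;
        if self.slots.len() < CACHE_SLOTS {
            self.index.insert(h, self.slots.len());
            self.slots.push(Slot { handle: h, data, last_use: self.clock });
            return Ok(());
        }
        // Garbage is never traced; it simply goes cold and sinks to disk.
        let victim = (0..self.slots.len())
            .min_by_key(|&i| self.slots[i].last_use)
            .unwrap();
        let (vh, vdata) = (self.slots[victim].handle, self.slots[victim].data);
        self.backing.seek(SeekFrom::Start(vh * OBJ_SIZE as u64))?;
        self.backing.write_all(&vdata)?;
        self.index.remove(&vh);
        self.index.insert(h, victim);
        self.slots[victim] = Slot { handle: h, data, last_use: self.clock };
        Ok(())
    }
}

fn main() -> std::io::Result<()> {
    let mut heap = CachedHeap::new("heap.cache")?;
    let handles: Vec<Handle> = (0u8..8)
        .map(|n| heap.alloc([n; OBJ_SIZE]).unwrap())
        .collect();
    // Allocating 8 objects into 4 slots evicted the first four; touching
    // them again faults them back in transparently.
    for &h in &handles {
        println!("object {} -> {}", h, heap.get(h)?[0]);
    }
    Ok(())
}
```

A real version would obviously hide the handle indirection behind the allocator/read barrier and let tiered VM do the actual flushing instead of explicit file I/O; the point is just that 'reclaiming memory' gets replaced by 'letting cold objects sink'.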
Something like "Address/memory management for a gigantic LISP environment or, GC considered harmful" <https://dl.acm.org/doi/10.1145/1317203.1317206>? I think that uses the usual LRU mechanisms to determine what to page, though.