Hacker News

> 2160 x 2160 LCD (per eye) 72-144Hz refresh rate

I doubt that we couldn't create a special-purpose video codec that handles this without trickery. The "per eye" part sounds spooky at first, but how much information actually differs between the two eyes' frames? The mutual information is probably 90%+ in most VR games.

If we were to enhance something like x264 to encode the second eye's frame as a residual of the first, this could become much more feasible from a channel-capacity standpoint. Video codecs already employ a lot of tricks to make nearly identical adjacent frames occupy negligible space.
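To make the intuition concrete, here's a toy sketch of encoding the right eye as a residual of the left. Everything here is made up for illustration: the frames are synthetic, the "disparity" is a fixed horizontal shift, and zlib stands in for a codec's entropy coder. It's not a real codec, just the channel-capacity argument in miniature.

```python
import zlib

import numpy as np

# Toy "left eye" frame: a smooth synthetic image. A real renderer would
# produce far richer frames; this only illustrates the idea.
x = np.linspace(0, 4 * np.pi, 256)
left = (128 + 60 * np.sin(x)[:, None] + 60 * np.sin(1.7 * x)[None, :]).astype(np.uint8)

# Toy "right eye": the left frame under a small horizontal disparity,
# i.e. the two views share almost all of their information.
right = np.roll(left, 4, axis=1)

# Residual view: widen to int16 so the subtraction can't wrap around.
residual = right.astype(np.int16) - left.astype(np.int16)

# zlib stands in for the codec's entropy coder: the low-information
# residual should shrink far more than the full right-eye frame.
direct_size = len(zlib.compress(right.tobytes()))
residual_size = len(zlib.compress(residual.tobytes()))
print(f"right eye alone: {direct_size} bytes; as residual: {residual_size} bytes")

# The scheme is lossless: left + residual reconstructs the right eye exactly.
reconstructed = (left.astype(np.int16) + residual).astype(np.uint8)
print("lossless:", np.array_equal(reconstructed, right))
```

A real stereo codec would use disparity-compensated prediction rather than a raw per-pixel subtraction, but the point is the same: the second view costs only as much as its novel information.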

This seems very similar (identical?) to the problem of efficiently encoding a 3D movie:

https://en.wikipedia.org/wiki/2D_plus_Delta

https://en.wikipedia.org/wiki/Multiview_Video_Coding



I'm entirely unfamiliar with the VR rendering space, so all I have to go on is what (I think) your comment implies.

Is the current state of VR rendering really just rendering and transporting two video streams independently of each other? Surely there has to be at least some academic prior art on the subject, no?



