I guess this is the crux of it: a NN seems like a concentration of data, more data-y than algorithmic to me (not a computer scientist). Lossless compression carries the implication that the essential data is intrinsic to the compressed file, but with the arrangement in the OP some of that data is extrinsic; some of the essential data lives in the NN. You might make that claim for any regular algorithm, but this seems to have a different complexion to it. Taken to the extreme, an algorithm could index all extant online images, run a reverse image search, and 'compress' a photo by just returning an internal link to it.
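To make that reductio concrete, here's a toy sketch in Python (everything hypothetical): the payload is just a lookup key, reconstruction is bit-exact, and yet essentially all of the information lives in the shared corpus rather than in the "compressed" file.

    # Hypothetical sketch: a "lossless compressor" whose payload is just a
    # lookup key. All of the real information lives in the shared corpus,
    # not in the compressed file.

    CORPUS = {}  # stands in for "all extant online images": key -> image bytes

    def compress(image_bytes: bytes) -> str:
        # "Reverse image search": find (or register) the image in the corpus.
        for key, stored in CORPUS.items():
            if stored == image_bytes:
                return key
        key = f"img{len(CORPUS)}"
        CORPUS[key] = image_bytes
        return key

    def decompress(key: str) -> bytes:
        return CORPUS[key]

    photo = b"...raw pixels..."
    payload = compress(photo)            # a few bytes: just an "internal link"
    assert decompress(payload) == photo  # bit-exact, yet nothing was compressed

The payload can be a handful of bytes for any image whatsoever, which only works because the corpus, not the payload, is carrying the data.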

Another way of looking at it: you could have a 3D model of a person, as used for CG in movies, plus an error map + config data that reproduce an image of the person pixel-identical to a masked region of a photo that was taken. The error map + config data wouldn't really be a "lossless compression"; much of the data would lie in the 3D model. Do you agree that this wouldn't be "lossless compression"? So there's a dividing line somewhere?!
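As a rough sketch of that accounting problem (all names hypothetical, with a random array standing in for the renderer): the decoder holds a large shared model, the transmitted payload is only config + error map, and reconstruction is bit-exact, but the shared model's size never shows up in the payload.

    import numpy as np

    # Hypothetical shared 3D model / renderer: a large asset both sides hold.
    def render(config: dict) -> np.ndarray:
        # Stand-in for rendering the CG model with pose/lighting parameters.
        rng = np.random.default_rng(config["seed"])
        return rng.integers(0, 256, size=(64, 64), dtype=np.int16)

    def encode(photo: np.ndarray, config: dict) -> np.ndarray:
        # Payload = error map: photo minus the model's prediction.
        return photo.astype(np.int16) - render(config)

    def decode(error_map: np.ndarray, config: dict) -> np.ndarray:
        # Reconstruct by adding the residual back onto the rendered prediction.
        return (render(config) + error_map).astype(np.uint8)

    photo = np.random.default_rng(0).integers(0, 256, (64, 64), dtype=np.uint8)
    config = {"seed": 42}
    error_map = encode(photo, config)
    assert np.array_equal(decode(error_map, config), photo)  # bit-exact
    # The payload (config + error_map) can be tiny when the model predicts
    # well, but the shared model's size never appears in that payload.

The better the shared model predicts the photo, the smaller the error map, which is exactly why measuring only the payload understates where the information actually sits.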


