Hacker News

On the order of 10% smaller than WebP, with substantially slower encode/decode.


The encode/decode is almost certainly not optimized: it's using PyTorch and is a research project. A 10x speedup with a tuned implementation is probably easily reachable, and I wouldn't be surprised if 100x were possible even without using a GPU.


Where did you get that from? PyTorch is already pretty optimised and relies on GPU acceleration.

The only parts that are slow in comparison are the bits written in Python and those are just the frontend application.

There's not much room for performance improvement.


PyTorch has optimized generic primitives; optimization generally means baking in safe assumptions specific to the problem you are restricting the solution to.

For example, YACC is highly optimized, but the parsers in GCC and LLVM are an order of magnitude faster because they are custom recursive-descent parsers tailored to the specific languages those compilers actually support. GCC switched away from YACC/Bison, itself highly optimized, in version 4.0, and parsing sped up dramatically.
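To make the recursive-descent idea concrete, here is a toy hand-written parser for arithmetic expressions: each grammar rule is just a plain function, with no generated tables or generic machinery. This is only an illustration of the style, not anything resembling GCC's actual parser.

```python
# Minimal recursive-descent parser/evaluator for a toy grammar:
#   expr   := term ('+' term)*
#   term   := factor ('*' factor)*
#   factor := NUMBER | '(' expr ')'
import re

def tokenize(src):
    return re.findall(r"\d+|[+*()]", src)

def parse(src):
    tokens = tokenize(src)
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def eat(tok):
        nonlocal pos
        assert peek() == tok, f"expected {tok!r}, got {peek()!r}"
        pos += 1

    def expr():
        value = term()
        while peek() == "+":
            eat("+")
            value += term()
        return value

    def term():
        value = factor()
        while peek() == "*":
            eat("*")
            value *= factor()
        return value

    def factor():
        if peek() == "(":
            eat("(")
            value = expr()
            eat(")")
            return value
        tok = peek()
        eat(tok)
        return int(tok)

    return expr()

print(parse("2+3*(4+1)"))  # → 17
```

Because the control flow directly mirrors one fixed grammar, there is no table lookup or generic dispatch in the hot path, which is where hand-written parsers get their speed.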

Additionally, a lot of the glue code in any PyTorch project is Python, which is astonishingly slow compared to C++.

So I reiterate: a 10x speedup would be mundane when moving from a generic solution like PyTorch to code written specifically for using this technique for image compression.
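The Python-glue point can be illustrated with a toy per-pixel kernel, using NumPy's compiled routines as a stand-in for hand-written C++ (a sketch with a made-up operation; actual speedups depend on the workload and hardware):

```python
# Same per-pixel operation two ways: a Python-level loop
# (one interpreter round-trip per pixel) vs. a single call
# into compiled code. For large images the compiled path is
# typically orders of magnitude faster.
import numpy as np

def clamp_loop(pixels):
    # Python loop: interpreted bytecode for every pixel.
    out = np.empty_like(pixels)
    flat_in, flat_out = pixels.ravel(), out.ravel()
    for i in range(flat_in.size):
        v = int(flat_in[i]) * 2
        flat_out[i] = 255 if v > 255 else v
    return out

def clamp_vec(pixels):
    # Same arithmetic, dispatched once to compiled code.
    return np.minimum(pixels.astype(np.int32) * 2, 255).astype(pixels.dtype)

img = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
assert np.array_equal(clamp_loop(img), clamp_vec(img))
```

A specialized codec goes further than this: it can fuse whole pipelines into one pass over the data instead of dispatching a sequence of generic primitives.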

Finally, PyTorch is optimized primarily for model training on Nvidia GPUs. Applying a trained model doesn't need a GPU for good performance, and a GPU probably isn't a net win anyway, given the need to allocate and copy data. Consumer computers often have slow integrated GPUs that can't run Nvidia GPU code, and the portable alternative (OpenCL, which basically isn't used in ML in a serious way yet) runs on the CPU on many systems anyway, since integrated GPUs are still slower than the CPU even with OpenCL.


That could be an acceptable trade-off for some applications. I could see this being useful for companies that host a lot of images. You only need to encode an image once, but you pay the bandwidth cost every time someone downloads it. Decoding speed probably isn't the limiting factor for someone browsing the web, so it shouldn't negatively impact your customers' experience.
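A back-of-envelope calculation shows why the encode-once/download-many asymmetry matters; every number here is hypothetical, chosen only for illustration:

```python
# Encode once, save bandwidth on every download.
# All numbers are made up for illustration.
webp_bytes = 100_000     # assumed WebP size of one image
smaller_pct = 10         # "on the order of 10% smaller"
downloads = 1_000_000    # times the image is served

saved_per_download = webp_bytes * smaller_pct // 100
total_saved = saved_per_download * downloads
print(f"{total_saved / 1e9:.1f} GB saved")  # → 10.0 GB saved
```

With these assumed numbers, a one-time (even slow) encode buys 10 GB of bandwidth over the image's lifetime, which is why the encode cost amortizes away for large hosts.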


> Decoding speed probably isn't the limiting factor for someone browsing the web, so it shouldn't negatively impact your customers' experience.

Unless it is, on battery-powered devices. However, I would say that in general web browsing without ad-blocking, it wouldn't count for much in terms of either bandwidth or processing milliwatts.


Is WebP lossless?


It's both lossless and lossy - https://en.wikipedia.org/wiki/WebP


WebP supports both lossy and lossless compression.



