Hacker News

> If you downsample each dimension individually you only need to process 20 pixels per pixel.

If you shrink 10x in one direction and then the other, you first turn 100 pixels into 10 before turning those 10 pixels into 1. For a non-smoothed shrink you actually do more work, sampling 110 pixels total instead of 100.
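The arithmetic for that 10x case can be sketched in a few lines (assuming a plain box filter whose width equals the shrink factor, so samples don't overlap):

```python
# Amortized source reads per output pixel for a 10x box shrink.
shrink = 10  # shrink factor; filter width equals the shrink (no overlap)

direct = shrink * shrink              # both dimensions at once: a 10x10 footprint
separable = shrink * shrink + shrink  # 100 reads in the first pass + 10 in the second

print(direct, separable)  # 100 110
```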

To benefit from doing the dimensions separately, the width of your sample has to be bigger than the shrink factor. The best case is a blur where you're not shrinking at all, and that's where 20 samples per output pixel actually happens.

If you sampled 10 pixels wide while shrinking by a factor of 3, you'd take 100 samples per output pixel doing both dimensions at the same time, and 40 samples per output pixel doing one dimension at a time.

Two dimensions at the same time: width^2 samples per output.

Two dimensions, one after the other: width * (shrink_factor + 1) samples per output.
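Those two cost formulas can be written as a small sketch (the function name is mine; it assumes a non-overlapping box/blur filter of the given width, with intermediate results shared between output pixels in the separable case):

```python
def samples_per_output(width: int, shrink: int, separable: bool) -> int:
    """Amortized source samples read per output pixel for a filter of the
    given width, shrinking by `shrink` in each dimension."""
    if separable:
        # Second pass: `width` reads of intermediate pixels per output.
        # First pass: intermediates are shared across outputs, so only
        # `shrink` new ones are computed per output, each costing `width` reads.
        return width * (shrink + 1)
    # Both dimensions at once: sample the full width x width footprint.
    return width * width

# The thread's examples:
print(samples_per_output(10, 3, False))  # 100
print(samples_per_output(10, 3, True))   # 40
print(samples_per_output(10, 1, True))   # 20 (blur, no shrink)
```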



You're right, I got confused. I was thinking of Gaussian blur, where the areas to process overlap heavily. Here there's zero overlap.




