
Dividing one 32-bit number by another is a constant-time operation, even if on some machines (e.g. ARM) it is implemented as a loop. This is what matters. Raising a number to the nth power is an O(log n) operation.
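The O(log n) bound for powers comes from exponentiation by squaring: each step halves the exponent, so the multiplication count is proportional to the exponent's bit length. A minimal Python sketch (the function name `ipow` is mine, not from the thread):

```python
def ipow(base: int, exp: int) -> int:
    """Compute base**exp with O(log exp) multiplications
    via exponentiation by squaring (exp >= 0)."""
    result = 1
    while exp > 0:
        if exp & 1:          # odd exponent: fold in current base
            result *= base
        base *= base         # square the base
        exp >>= 1            # halve the exponent
    return result
```

For exp = 13 (binary 1101) the loop runs 4 times rather than 13, illustrating the logarithmic multiplication count.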


Division takes a data-dependent number of cycles on all x86-64 processors.


But the divide instruction is still O(1): its operands have fixed width, so its cycle count is bounded by a constant regardless of the values.


To decode? Why does the user care how long it takes to decode? They care how long it takes to run. We can provide upper bounds for exp, sin, and erfc too.



