
AVL trees were more competitive when the memory hierarchy was shallower. These days an L1 cache hit is dirt cheap, a RAM fetch is ~200x as expensive, and following a pointer creates a data dependency. B-trees give you more L1 cache hits and fewer pointers to follow. Back in the 1990s, cache hits and RAM fetches were much closer to parity, and there weren't as many pipeline stages to fill, so data dependencies mattered less. Go back even farther and you had completely uniform memory that could be accessed in a fixed number of cycles no matter what the address.
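To make the pointer-chasing point concrete, here's a back-of-the-envelope sketch. The fanout of 16 is a hypothetical value for illustration; real B-tree fanouts depend on key size and node size:

```python
import math

def avl_depth(n):
    # Balanced binary tree: roughly log2(n) dependent pointer
    # dereferences per lookup, each a potential cache miss.
    return math.ceil(math.log2(n + 1))

def btree_depth(n, fanout):
    # B-tree: roughly log_fanout(n) node visits per lookup; within a
    # node the keys sit contiguously, so scanning them is cache-friendly.
    return math.ceil(math.log(n + 1, fanout))

n = 1_000_000
print(avl_depth(n))        # ~20 pointer chases
print(btree_depth(n, 16))  # ~5 node visits at fanout 16
```

Each AVL pointer chase is a serialized, possibly-uncached load, so the factor-of-four difference in depth understates the real gap on modern hardware.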

There are a lot of good books from those eras, and there are a lot of good books that teach with the uniform memory model. Most algorithms classes don't talk about the memory hierarchy at all (even though it distorts the true asymptotic runtime of many algorithms).



Agreed. It blew my mind to learn that tiled matrix multiplication is more efficient than untiled, but that you can only show tiled is faster in a hierarchical memory model: with tiling, arithmetic intensity scales with the size of your fast memory rather than being a fixed constant. Yet in the uniform memory model both are O(n^3).
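A minimal sketch of the two variants, in Python for readability (illustrative only; interpreter overhead swamps cache effects here, and the tile size b stands in for whatever fits your fast memory). Both do exactly n^3 multiply-adds, which is why the uniform model can't tell them apart; the tiled version wins in practice because each b x b tile is reused b times per load from slow memory:

```python
def matmul_naive(A, B):
    # Textbook triple loop: the inner loop walks a column of B, a
    # strided access pattern that thrashes the cache for large n.
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            s = 0.0
            for k in range(n):
                s += A[i][k] * B[k][j]
            C[i][j] = s
    return C

def matmul_tiled(A, B, b=2):
    # Blocked version: process b x b tiles so that the working set
    # (one tile each of A, B, C) fits in fast memory at once.
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, b):
        for kk in range(0, n, b):
            for jj in range(0, n, b):
                for i in range(ii, min(ii + b, n)):
                    for k in range(kk, min(kk + b, n)):
                        a = A[i][k]
                        for j in range(jj, min(jj + b, n)):
                            C[i][j] += a * B[k][j]
    return C
```

The arithmetic-intensity claim falls out of the tile reuse: loading 3b^2 words supports 2b^3 flops, so flops per word loaded grows linearly with b, and b is bounded by the size of fast memory.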



