
Could anyone explain why that would be desirable? Is it to test out a website design that you are hosting locally, and you want it to "feel" right? Something else?


I've encountered bugs that were not reproducible locally because of the speed of local socket connections. I had to use a remote server to debug them.


Testing protocol implementations. I had a recent bug where the speed on localhost was masking a race condition between the packet reader and parser. Only caught it when testing over the Internet.

Basically you want full control over your simulation parameters.
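
The post doesn't say what the protocol was, but a classic example of the kind of bug that fast local sockets hide is a reader that assumes each recv() returns exactly one complete message. A minimal Python sketch (hypothetical names, not the poster's code):

    import socket

    def read_message_naive(sock: socket.socket) -> bytes:
        # On the loopback interface a whole message usually arrives in one
        # segment, so this "works" in local testing -- but recv() may legally
        # return only part of a message (or pieces of two) once real latency
        # and fragmentation get involved.
        return sock.recv(4096)

    def read_message(sock: socket.socket, length: int) -> bytes:
        # Loop until the full, length-prefixed payload has arrived.
        chunks = []
        remaining = length
        while remaining > 0:
            chunk = sock.recv(remaining)
            if not chunk:
                raise ConnectionError("peer closed the connection mid-message")
            chunks.append(chunk)
            remaining -= len(chunk)
        return b"".join(chunks)

Adding artificial latency on localhost makes the naive version fail in the same way it would over a real link, without needing a remote server.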


For the development/testing of any client-server app where you anticipate deployment to a network with variable latency. In 2010 this means websites.

Mostly to simulate the real-world UX of ajax sites that have a lot of server roundtrips.

It's vital in these kinds of interfaces for everything to happen within the magic ~150ms threshold of perceived instantaneousness.


I'm currently developing a multiplayer game and use this (although with additional arguments to include variation/jitter and packet loss) to simulate somewhat more realistic network conditions. I find it's great for exposing bugs in the networking protocol that would otherwise only arise under poorer network conditions.
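
Assuming the tool in question is Linux's netem qdisc driven through tc (the exact command isn't quoted in this thread), adding jitter and packet loss on the loopback interface looks roughly like this:

    # 100ms delay with +/-20ms jitter and 1% random packet loss on loopback
    sudo tc qdisc add dev lo root netem delay 100ms 20ms loss 1%

    # remove the rule when you're done
    sudo tc qdisc del dev lo root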


Park a "chatty" protocol host on the other side of the planet. (E.g. a rich client.)

Watch latency kill performance.



