Node is sitting at the core of the new Viki platform, and it's been a pretty flawless part of the stack. We do zero-downtime deploys thanks to the cluster module, which also keeps workers running: I haven't seen an unhandled error take down a worker in a while, though as with most any stack, you need to be pretty vigilant about your error handling. At over 3K API requests per second to a single box, we hover at around 0.05 load (our machines range from an E3-1230 to dual E5-2620s, so it's hard to give an exact number). When asked to handle more requests, the impact on load is pretty linear.
We're also dealing with servers in 4 different locations and some requests need to be proxied to a central location. With a ~300ms round trip from Singapore to Washington, Node's asynchronous nature has been a win when it comes to handling concurrency in the face of backend latency.
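To see why the async model matters here, consider a toy simulation (not our code; the timer is a stand-in for a proxied request to the central backend): many in-flight 300ms waits overlap on the event loop instead of queuing behind each other.

```javascript
// Stand-in for an async proxied call; a timer behaves like network latency.
function fakeBackendCall(ms) {
  return new Promise((resolve) => setTimeout(resolve, ms));
}

async function main() {
  const start = Date.now();
  // 1000 concurrent "requests", each waiting ~300ms on the remote backend.
  await Promise.all(Array.from({ length: 1000 }, () => fakeBackendCall(300)));
  const elapsed = Date.now() - start;
  // All 1000 waits overlap, so total wall time is on the order of 300ms,
  // not 1000 × 300ms as it would be with one blocking call per request.
  console.log(`handled 1000 concurrent waits in ${elapsed}ms`);
}

main();
```

With blocking I/O you'd need a thread per in-flight request to get the same overlap; on the event loop, the waits are nearly free.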