Sure, it can be accomplished in a variety of ways: swapping in a migrated replica; breaking big schema changes into multiple incremental migrations (plus back-filling data) so the database is locked for as little time as possible; or spinning up an entirely new cluster, migrating everything, and then switching traffic over from the old cluster to the new one at the load balancer. These techniques can be mixed and matched to achieve minimal or zero downtime, depending on your constraints. [1] The most important thing is making sure your application code can operate against multiple schema versions at once.
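To make the incremental-migration idea concrete, here's a rough sketch of the "expand, backfill in batches, read both versions" pattern, using an in-memory SQLite database as a stand-in for production (the table and column names are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)",
                 [("ada",), ("grace",), ("linus",)])

# Expand: add the new column. Adding a nullable column is cheap on most databases.
conn.execute("ALTER TABLE users ADD COLUMN display_name TEXT")

# Backfill in small batches so no single statement holds locks for long.
BATCH = 2
while True:
    rows = conn.execute(
        "SELECT id, name FROM users WHERE display_name IS NULL LIMIT ?",
        (BATCH,)).fetchall()
    if not rows:
        break
    conn.executemany(
        "UPDATE users SET display_name = ? WHERE id = ?",
        [(name.title(), rid) for rid, name in rows])
    conn.commit()  # release locks between batches

# Application code tolerates both schema versions during the rollout:
row = conn.execute(
    "SELECT COALESCE(display_name, name) FROM users WHERE id = 1").fetchone()
print(row[0])  # -> "Ada"
```

The old column only gets dropped (the "contract" step) once every running application version has stopped reading it.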
I used to have a presentation from Facebook where they spoke about their zero-downtime migration process, but I can't find it, so this SO post [2] will do. Of course, there's a bunch more information out there to read up on.
[1] https://www.honeybadger.io/blog/2013/08/06/zero-downtime-mig...
[2] http://stackoverflow.com/questions/6740856/how-do-big-compan...