Deployment of a new version would depend on your setup. Assuming a setup similar to the author's, you can build a new Docker image with the new version of your code and run it in parallel with the old one. All you have to do after that is point the traffic from the old version to the new version (e.g. by re-running `docker compose`).
If you have a more complex setup, e.g. Kubernetes, you can do things like run both versions at the same time, perform A/B testing, or use canary deployments to ensure the new version works.
Deployment time would most likely be seconds unless the setup is complex/convoluted.
Schema modifications are another beast. For small use cases, you could run a specialized one-time container that performs the modifications, but once you need high availability, you'd have to consider a more complex approach. See https://queue.acm.org/detail.cfm?id=3300018
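One common high-availability pattern is an expand/contract migration: add the new column first (so old code keeps working), backfill it, switch readers and writers over, and only then drop the old column. A minimal sketch using Python's built-in sqlite3, with hypothetical table and column names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('ada'), ('alan')")

# Expand: add the new column as nullable so existing writers keep working.
conn.execute("ALTER TABLE users ADD COLUMN display_name TEXT")

# Backfill; on a real database you would do this in small batches
# to avoid holding long locks.
conn.execute("UPDATE users SET display_name = name WHERE display_name IS NULL")

# Contract (later, once all readers/writers use display_name):
# on databases that support it you would run
#   ALTER TABLE users DROP COLUMN name
rows = conn.execute("SELECT display_name FROM users ORDER BY id").fetchall()
print(rows)  # [('ada',), ('alan',)]
```

The key point is that each step is backward compatible, so old and new application versions can run side by side during the rollout.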
> Each student watched one video, took the corresponding quiz, watched the second video, and took the second quiz. About half of the participants did the Paxos portion first and the other half did the Raft portion first in order to account for both individual differences in performance and experience gained from the first portion of the study. We compared participants' scores on each quiz to determine whether participants showed a better understanding of Raft.
This seems to be a good strategy, but it is difficult to factor out the quality of the explanations.
The reason I'm so picky about this claim is that before you know it, you have mythical pseudo-statistical claims like "some programmers are more than 10 times as good as others" that will live a life of their own. CS has way too many of those.
> To give an example, say I have n machines in datacenter A, and n*.99 in datacenter B. datacenter A gets destroyed, permanently. Does datacenter B now reject all (EDIT: where reject = not commit) requests until a human comes along to tell it that datacenter A isn't coming back?
In CAP terms, with Raft you are choosing CP. So yes, the system is unavailable until an external agent fixes it. In other words, the system needs a majority of nodes online to be "available".
What would happen if nodes were to be added to each side of a network partition (unknown to the other side), so that each side believed they had a majority? Or is the "writing" side of the partition determined at partition time, and not changed until they are restored?
* A new node needs a round of Raft (a configuration-change entry committed through the log) to announce its presence to the other nodes.
So you can only add new nodes (automatically) when you have a 'live' system.
For a cluster of n nodes, majority = floor(n/2) + 1 (equivalently, ceil((2n + 1)/2) = n + 1 for a cluster of 2n + 1 nodes): by counting the number of reachable nodes in its partition, a node can figure out whether it is on the majority or minority side.
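To make the arithmetic concrete, here is a small sketch (function names are my own) of the quorum check a node could apply, assuming it knows the configured cluster size:

```python
def majority(cluster_size: int) -> int:
    # A quorum for a cluster of N nodes is floor(N/2) + 1.
    return cluster_size // 2 + 1

def on_majority_side(reachable_nodes: int, cluster_size: int) -> bool:
    # reachable_nodes counts the node itself plus the peers it can contact.
    return reachable_nodes >= majority(cluster_size)

print(majority(5))             # 3
print(on_majority_side(3, 5))  # True: this side can commit
print(on_majority_side(2, 5))  # False: minority side, cannot commit
```

Because membership changes themselves must be committed through the replicated log, both sides of a partition count against the same configured cluster size, so at most one side can ever see a majority.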
See section 6 in the paper for details of its implementation.