I use a similar system when deploying builds to production. The deployment itself is a multi-step process: I have to pick out two artifacts, verify that their git commit hashes are correct, then pick out the correct Jenkins jobs and run them in a specific order (providing artifact numbers as input). At every stage I point at my screen, read out the information, and only click on things that I'm pointing at. This may be anecdotal, but multiple times I have stopped myself from clicking on the wrong thing only after saying its name out loud.
Not everything can be scripted, automation doesn't always repay the work expended in creating and maintaining it[0][1], and sometimes it's just worth having a human in the loop to know what's going on.
While I love XKCD, 1205 simplifies things too much. You have to account for human psychology.
Small task times don't mean a small impact on developer time.
1. Routine makes you much more likely to make errors because you're not fully focused. Such an error can feed into the next process step, which may be longer, costing you a disproportionate amount of time if it fails.
2. If these short tasks contain wait times (say 30 seconds to 30 minutes), those waits can have an inordinate effect on the amount of developer time wasted.
Say you have a "30 second task" you do a handful of times a day. You have to do something, then wait a bit, then do something else. If you're not a robot, you won't just sit there watching it complete. So you switch to HN or whatever else you were doing. What happens after the task completes?
* You could have lost your state of flow (very likely) => that's another 15-20 minutes to get back into it.
* You might forget to context switch back. Boom, 30 minutes have gone by. Oh, it's lunch time now; I'll do that after. Then you come back and forget about it again. Two hours have gone by on your 30 second task, and it's still not done.
3. The problem could be that you don't do the task often enough. Maybe quality would improve if the task were done every few seconds. If it's arduous or takes some time, you're much less likely to do it that often. The graph in 1205 assumes that the number of times the task is done is unrelated to its speed of execution, or that it stays a manual task. I don't think that's true. An extreme example of this was my previous workplace: build times were 5 hours, and triggering a build took two steps, the second of which could only be run roughly 15 minutes after the first completed.
People would waste a lot of time finding errors that a compiler could have found right away, because compiling took half a day and wasn't something you just wanted to trigger.
The process described is not only easily scriptable, but consists of exactly the kind of steps where a human could very easily make a mistake. You can add things to keep a human in the loop (maybe hitting return at each step, or hitting y), but deploying to production should never depend on a series of manual steps except when absolutely necessary.
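A minimal sketch of what such a human-in-the-loop gate could look like, as a plain POSIX shell function (the step names and artifact numbers here are invented for illustration, not the poster's actual setup):

```shell
#!/bin/sh
# Hypothetical sketch: wrap each risky deploy step in a y/N prompt so a
# human has to explicitly confirm it before the script proceeds.

confirm() {
    printf '%s [y/N] ' "$1"   # show the prompt (no newline)
    read -r answer            # wait for the human's answer
    [ "$answer" = "y" ]       # succeed only on an explicit "y"
}

deploy_step() {
    if confirm "Run step '$1'?"; then
        echo "running $1"
    else
        echo "skipped $1"
    fi
}

# Non-interactive demo: pipe answers in instead of typing them.
echo y | deploy_step "push artifact 1234"
echo n | deploy_step "restart service"
```

Defaulting to "no" on anything other than an explicit `y` keeps a stray Enter keypress from approving a step by accident.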
They listed exactly what they are doing. The artifacts they are picking out should already be tagged with the release, and the script that pushes them to production can use those tags to know what to push and which git commit hashes to check.
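The hash check in particular is trivial to script. A sketch under the assumption that releases are lightweight git tags and the expected hash is recorded alongside the artifact (the tag name and variables are placeholders, not the poster's actual scheme):

```shell
#!/bin/sh
# Hypothetical sketch: refuse to push unless the release tag resolves to
# the commit hash recorded for the artifact.

verify_tag() {
    tag="$1"
    expected="$2"
    # Resolve the tag to the commit it points at.
    actual=$(git rev-parse "${tag}^{commit}" 2>/dev/null)
    if [ "$actual" = "$expected" ]; then
        echo "OK: $tag -> $actual"
    else
        echo "MISMATCH: $tag points at '$actual', expected '$expected'" >&2
        return 1
    fi
}

# Usage (hypothetical names), with the push gated on the check:
#   verify_tag "release-2.3.1" "$EXPECTED_SHA" && push_to_prod "release-2.3.1"
```

Because `verify_tag` returns non-zero on a mismatch, chaining the push with `&&` means a wrong build simply never goes out.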
ETA: I'll add that when pushing to production, the amount of time wasted, not just by you but by everyone down the line (users, developers, testers, etc.), can be huge. Calculating only how much of your own time you would save isn't useful.
I don't want to say you are wrong, because there are a lot of situations (most?) where deployment automation can greatly reduce errors.
However, there can also be a number of reasons why builds and deployments cannot be automated safely:
- "Production" is not a single environment but multiple customer environments, with multiple deployment version targets based on need/contract. Customer environments might not even be accessible from the same network as the build/deploy machines.
- Code is for industrial/embedded/non-networked equipment.
- Policies dictated by your own company or a regulatory body require that builds be manually checked and deployed by a human who can validate and sign off.
There really is no way of knowing. Automation can save hundreds or thousands of man-hours and reduce the margin of error, but it is not applicable to every scenario. Sometimes manual work reinforced by good habits and processes is the right tool for the job. As much as it pains me to say that, since my job is automation.
Not to leave you guys hanging about my specific case: the process is almost as automated as possible. All the heavy lifting is scripted, and deployment to all other environments is done via a single click. For prod, however, we do have additional checks, as deploying the wrong build could have some bad consequences.
I agree: if you can make a mistake doing a deploy, eventually it will happen, likely at the worst possible moment, because you're under pressure to put out a production issue. All our deploys are push-button deployments; it takes a lot of stress out of my daily life to know that once I approve a deployment, things take care of themselves.
Well, the corollary to human mistakes in repetitive processes is that the script has a critical bug that shows up exactly when it's most important that the script work flawlessly. Murphy's law always wins in the end ;).