I've worked on high-performance network file transfers. My experience is that most people who move data get very low utilization compared to the actual throughput of the network. People typically use one TCP connection and one process; high-performance data transfers use thousands of TCP connections and thousands of processes.
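As a rough illustration of the pattern, here's a minimal Python sketch of range-parallel downloading. The URL, total size, and process count are placeholders, and real tools (GridFTP and friends) tune all of this far more carefully:

    # Sketch: split a transfer into byte ranges and fetch them over many
    # parallel connections/processes instead of one.
    from multiprocessing import Pool
    from urllib.request import Request, urlopen

    URL = "https://example.com/big-file"   # hypothetical server supporting Range
    NPROCS = 16                            # thousands, in the setups described above

    def fetch_range(bounds):
        start, end = bounds
        req = Request(URL, headers={"Range": f"bytes={start}-{end}"})
        with urlopen(req) as resp:         # one TCP connection per chunk
            return start, resp.read()

    def parallel_download(total_size, out_path, nprocs=NPROCS):
        chunk = total_size // nprocs + 1
        ranges = [(i, min(i + chunk - 1, total_size - 1))
                  for i in range(0, total_size, chunk)]
        with Pool(nprocs) as pool, open(out_path, "wb") as f:
            for start, data in pool.imap_unordered(fetch_range, ranges):
                f.seek(start)              # write each chunk at its own offset
                f.write(data)

    if __name__ == "__main__":
        # total size would normally come from a HEAD request's Content-Length
        parallel_download(total_size=1 << 30, out_path="big-file")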
Many people also underestimate the time and labor involved in dealing with a Snowball.
We used a box of disks to get about 20 terabytes out of Amazon to CMU. It ended up being about 50% cheaper (from memory - may be off a bit) because we did not account for any employee costs. Startup, running on fumes, none of us drawing a salary, etc.
Technically, that's a logical fallacy: A&B -> true does not mean !B -> false.
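To make that concrete, a tiny truth-table check (illustrative Python) turns up a counterexample:

    # (A and B) -> X can hold while (not B) -> (not X) fails.
    from itertools import product

    def implies(p, q):
        return (not p) or q

    for a, b, x in product([True, False], repeat=3):
        if implies(a and b, x) and not implies(not b, not x):
            print(f"A={a}, B={b}, X={x}")   # first hit: A=True, B=False, X=True
            break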
But, really, I'm not trying to prove or disprove your point. Just noting that there was a situation where disks made sense for us, and we were satisfied with the outcome. Spending 4 hours of person time to save a thousand dollars was reasonable for us in a way it probably wouldn't be for many real companies, because we had comparatively little money and were willing to work for peanuts.
(Note that I actually share your bias on this one. I use GCP for my personal stuff, and I'm writing this from a Google cafe. :-)
Many customers want to dual-host their data so they aren't beholden to a single cloud provider, to have redundancy across providers, or to put their data closer to the compute.