"Since mainframe computers are almost always running at 100% utilization..."
That's essentially impossible, as either CPU or I/O will bottleneck first and be the limiting factor.
As an operator, I used to make note of how different jobs used resources so I could keep overall utilization as high as possible, blending I/O-bound jobs with CPU-heavy jobs. (This was the graveyard shift, when real-time users were few.)
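The blending idea can be sketched in a few lines: pair the most I/O-bound job with the most CPU-heavy one, so that together they keep both the CPU and the I/O channels busy. This is only an illustration of the principle, not any real scheduler; the job names and CPU fractions below are made up.

```python
def blend_jobs(jobs):
    """Greedily pair the most CPU-heavy job with the most I/O-bound one.

    `jobs` is a list of (name, cpu_fraction) tuples, where cpu_fraction
    is the share of wall-clock time the job spends on CPU (the rest is
    I/O wait). Complementary pairs can overlap CPU and I/O work.
    """
    ordered = sorted(jobs, key=lambda j: j[1])  # most I/O-bound first
    pairs = []
    while len(ordered) >= 2:
        io_bound = ordered.pop(0)    # lowest CPU fraction
        cpu_bound = ordered.pop(-1)  # highest CPU fraction
        pairs.append((io_bound[0], cpu_bound[0]))
    return pairs, ordered  # ordered holds at most one unpaired job

# Hypothetical job mix for illustration:
jobs = [("TAPE-COPY", 0.10), ("SORT", 0.85),
        ("DB-BACKUP", 0.20), ("PAYROLL-CALC", 0.90),
        ("PRINT-SPOOL", 0.15)]
pairs, leftover = blend_jobs(jobs)
print(pairs)     # [('TAPE-COPY', 'PAYROLL-CALC'), ('PRINT-SPOOL', 'SORT')]
print(leftover)  # [('DB-BACKUP', 0.2)]
```

A real operator (or a goal-based scheduler like WLM) weighs far more than one number, but the intuition is the same: an I/O-bound job's CPU idle time is a CPU-bound job's opportunity.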
There was a great Boole and Babbage product called Resolve that allowed the operator (or a TSO user) to dynamically change job priorities as desired.
"Mainframes provide the lowest cost of ownership. First the efficient mainframe power and cooling requirements are much cheaper than equivalent distributed UNIX or Windows platforms."
This makes no sense to me at all. For decades, you could have far lower $/MIPS using minis and micros. The need for a whole raised-floor, halon-protected environment, custom peripherals, and specialized personnel seems cost-prohibitive unless you're stuck with it for legacy reasons.
On an individual basis, sure, the smaller systems are more cost-effective. But in the aggregate, large systems just blow them away.
I know of a local mega-corporation which has been steadily moving off of their "expensive" mainframes for the past decade or so, and they're finally almost finished. But now that they have tens of thousands of servers instead, their IT and other related costs have exploded, to the point where today they're in a rather severe cost-containment mode. This is in spite of the fact that they've tried to leverage open source tools and such to keep costs down. (Wall Street has started to notice these excessive costs, too.)

And this doesn't even take into account all of the breakage that occurred during the transition, which has left the company's good name heavily smeared due to ongoing customer service issues. I suspect that their reputation may never fully recover.