They definitely are for most customers: it is extremely expensive to gather and label enough data for a deep learning model to work correctly. It's very unlikely that you'll manage to configure and train your own models, and generate input data that Google lacks, well enough to beat what Google already provides with its "generalist" API.
Say you are an insurance company and you want to build a model that uses damage photos and metadata about the car as a backstop to make sure that your repair shops aren't ripping you off.
In this case you already have a bunch of historical labeled data, and a pre-trained model is useless to your application. It doesn't help you that the pre-trained model can recognize 10 different types of cats; you need a model trained on photos of damaged cars. Obviously the insurance company's own photo data will be more useful here because it's data about the application domain.
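To make the point concrete, here is a toy sketch of the kind of "backstop" model an insurer could train purely on its own labeled history. Everything here is hypothetical and invented for illustration (the features, the 10% overbilling rate, the synthetic data); the point is only that the signal comes from the company's own audit labels, not from any generic pre-trained image model.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 500
# Hypothetical features per claim: damage severity score (0-1, e.g. derived
# from the photos) and the shop's claimed repair cost in $k.
severity = rng.uniform(0, 1, n)
cost = severity * 10 + rng.normal(0, 1, n)   # honest shops: cost tracks severity
overbilled = rng.random(n) < 0.1             # assume ~10% of claims are inflated
cost[overbilled] += rng.uniform(5, 10, overbilled.sum())

X = np.column_stack([severity, cost])
y = overbilled.astype(float)                 # labels come from historical audits

# Tiny logistic regression fit by gradient descent on the insurer's own labels.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probability of overbilling
    g = p - y                                # gradient of the logistic loss
    w -= 0.1 * (X.T @ g) / n
    b -= 0.1 * g.mean()

preds = (1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5
accuracy = (preds == overbilled).mean()
```

The model learns that a high claimed cost relative to the photo-derived severity is suspicious, a relationship that exists only in the insurer's historical claims, which is exactly the data a generic image API never saw.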
Google has collected a ton of photos for the purposes of image search and consumer photo organizing, and that model's utility has been tuned to those application areas.
The key question is: what is the overlap between all applications of image models and the photos Google has collected?
There will be overlap for some, but my guess is that the mission-critical, "I can only get this performance from Google Cloud" cases are few and far between.
I'm not saying there aren't any. But Ben's article suggests that Google's data is somehow going to be a mission-critical asset for all application areas, which I think is a terribly naive idea when it comes to ML.