If they cannot adequately protect against these scenarios, they really should not be trying to collect and monetize so much granular user data. Clearly the organization is incapable of operating what it has built.
The reality, IMO, is that it is just not financially worthwhile for them to give a shit. People will jump through stupid verification hoops because they want access. Why spend engineering time solving a problem that is more easily handled by inconveniencing your users?
Your very insightful last paragraph makes the preceding ones an unnecessary appeal to high-mindedness. They absolutely should be collecting granular user data if the user and the jurisdiction are willing to let them, and it makes them money. They absolutely are capable of operating what they've built if they're financially healthy, despite being a dark pit of nothingness to the users who randomly get screwed. Not prioritizing users can work as a business model for some time. Maybe that's the time horizon their shareholders care about. Don't judge.
> Same people that complain in this post about overzealous verification will complain in another post about misinformation and propaganda.
A bit tangential, but I actually suspect those are nearly disjoint sets. In my experience, the people who complain about misinformation and propaganda are okay with identity verification and censorship, while those who want privacy (such as myself) typically dislike censorship and don't want a central authority getting involved to judge whether something is misinformation.
It largely comes down to trust in authority and centralized versus decentralized system design.