Originally it was built as the inverse: a signal recruiters could use to tell which “passive candidates” might be more willing to change jobs.
A customer asked if it could be used internally (we already had their ATS/HRIS data), so a new feature was born.
Yes, money was a motive, but this particular feature didn’t seem like an evil idea meant to increase employee misery.
That said, we did build some things that I regret now.
I think there's an explanation that is both more charitable and more pragmatic.
Companies try to keep employees happy and committed, and part of that is making sure they see a potential future and room to grow. As a manager I try to make sure both that this is based in reality and that employees are picking up the message.
I like to think I'm good at this, but it's a difficult skill, and an external signal of "hey, you might want to check in with Bob a bit more carefully next time to make sure he's feeling as good as we think he is" could always be valuable.
So even from Bob's perspective it's positive - he may get the extra conversation that increases his options if he stays. On the flip side, what's the malicious use case? "You updated your LinkedIn so I am going to fire you" doesn't sound like a company policy that's going to be implemented anywhere, because it makes no sense.
> "You updated your LinkedIn so I am going to fire you"
You might be close to quitting (via a perceived signal), so I’m going to give the high-visibility project to someone “loyal”.
You may be perceived as a quitter, so I’m going to give the discretionary budget for the next raise to the employee who is more loyal.
You might be perceived as quitting, and my company requires me to stack rank employees. The lowest gets fired. I put you there to keep the rest of my team. You become a “sacrifice” since you were going to quit anyways.
And these are just the examples my friends at Amazon talk about. I’m sure there’s more.
Now consider all of the above, but now you’re on a visa. Losing your job means you have a few weeks to replace it or get deported.
Typically these tools are bought and used by HR or Talent Acq departments, not managers, so the type of detailed decision-making you’re describing wasn’t a use case in my experience.
It’s more like a roll-up metric that can be looked at globally, by role, department, location, etc. Yes, it can also be used at the individual level, but again, HR is the buyer, and they are the most fearfully bureaucratic department in most companies.
From a data and capability perspective, I agree it’s a little scary. But in practice I doubt it’s used this way, and if it is, there’s your retention problem.
IMO a company that would rely on this kind of invasive surveillance is not really interested in the well-being of its employees. There are far better and less invasive ways to evaluate employee satisfaction and fulfillment than hiring an outside organization to "dig up dirt", for lack of a better term.
To me it's no different than a company hiring a PI to follow me around so they can report back how many drinks I have on the weekend at a barbecue. Or following me around to find out if I bought a new suit and tie (oh no, might indicate I'm going for an interview!). Just because it's being done digitally doesn't make it any less invasive.
What's next? Grocery stores start selling my buying habits to my employer? That would definitely give them more insight into whether I'm happy and committed. Banks/Credit card companies selling my purchase history?
Stop giving them ideas! The last thing I need is for someone to figure out and monetize the correlation between my Oreo / Johnny Walker consumption rate (not together, obviously) and my job satisfaction.
I always said, “at least we’re not building weapons, we’re trying to get people jobs (or keep them in jobs).”
But aside from weapons, if you think building a retention tool for HR is bad, you certainly should not ever look at AdTech or the types of things insurance companies are doing from a data perspective.
There's a strong component of "if I don't, someone else will", but also, this is usually the kind of thing that rarely gets a general open/free solution, because no one builds it willingly, yet it's easy for an employer to justify paying for (and thus to economically incentivize its development).
"If I don't then someone else will" is only an excuse for the already morally dubious. So what if someone else does it? Sure the bad thing still exists, but at least you didn't personally make the world a worse place explicitly for your own personal gain.
The unspoken part of that phrase is the second half of "so since it will happen anyway, it's not wrong for me to reap the rewards of doing the bad thing"
I hope you understand how inherently wrong that is.