That's literally not how it works either. Agile has a robust, well-defined structure that you then pull from and adapt to make your team and processes better. Few teams have the same implementation. It's important to understand the full range of agile practices and tools, but don't try to do it 100%; that's not the idea at all.
Also, the article is not claiming she's right, nor am I. All of what you're saying might be true, except for one thing:
> who isn’t speaking in good faith
I have absolutely no reason to assume this is true. She could be transphobic or just misinformed, but nothing in her past suggests that she's a hateful bigot.
The evidence for that is that she talked about being a sexual assault survivor in her anti-trans manifesto.
If this trauma caused her to become transphobic, is this really the best reaction?
Of course there wasn’t evidence of bigotry when she mostly lived a private life, before she decided to get hooked into the Twitterverse. An analogy: you don’t have evidence of a crime before it happens.
The article is arguing that we should be doing more to help her understand.
This has been done. All kinds of people have made thoughtful criticisms of her essay (much more even-handed than mine).
One must be willing to learn in order to be taught. She is not a person who can be reasoned with. She believes what she believes, feels the need to broadcast it widely, and has made up her mind.
This is the hill she wants to die on. Even if she's factually on to something, it doesn't matter; it doesn't make this any less of a stupid thing for her to do.
While I don't want to comment on the validity of the comparison, the fact that antidepressants are overprescribed is well documented. She calls doctors who unnecessarily prescribe antidepressants lazy.
Saying it calls people who take mental health medication lazy is a gross misrepresentation. Whether that was deliberate or out of laziness, it poisons the conversation.
> One must be willing to learn in order to be taught. She is not a person that can be reasoned with.
This was before she wrote that ridiculous essay (did you read it?)
> So I want trans women to be safe. At the same time, I do not want to make natal girls and women less safe. When you throw open the doors of bathrooms and changing rooms to any man who believes or feels he’s a woman – and, as I’ve said, gender confirmation certificates may now be granted without any need for surgery or hormones – then you open the door to any and all men who wish to come inside. That is the simple truth.
Your proof is right there. She doesn’t want trans women in her bathroom because she believes it makes women less safe. Because she doesn’t believe trans women are real women, because they’re just men in dresses coming in to assault her.
It’s right there in her essay, and she tries her best (she is a good writer) to disguise it as sympathy for the vulnerability of trans people.
The completely stupid part of this argument is that nothing actually stops a man from entering a women’s bathroom. And nothing stops a woman from assaulting another woman in a bathroom.
In fact, there are plenty of unisex bathrooms and changing rooms in the world, and nothing bad seems to happen en masse.
It’s just TERF nonsense. And even if I’m totally wrong and it’s not, billionaire author JK Rowling doesn’t need to be defended. She lives a public life and if people don’t like her, that’s her problem. I’m sure she’s got plenty of friends to spend the summer with on her yacht.
No, only from the European versions of Google. Google is actually being sued by the French government, which feels that, under the EU law, searches about French citizens should be censored on Google worldwide. Which to a certain extent makes sense, because right now, if you see the red text mentioned in the article, you can just visit the US version of the site to find the original results.
However, if it were China, Russia, or any other authoritarian regime demanding that Google censor its international engine, I think the sentiment would be quite different.
This is the current behavior, but Google has received significant legal pushback over that stance. Canada's Supreme Court has also ruled that Google must remove results about its citizens globally.
Google, looking for a friendly court, decided to ask the Northern District of California for an injunction holding the order unenforceable against it in the US, and California, true to form, issued such an injunction.
Of course, the Northern District of California doesn't have jurisdiction over the Supreme Court of Canada, so by my read, Google is currently in violation of a court order in Canada. As far as I know, though, Canada hasn't taken this any further yet.
The Supreme Court had noted in their ruling that, if Google were to show that it is in fact illegal in their home jurisdiction to comply with the order, that would change the analysis. Courts are generally pretty willing to engage in comity analysis; they understand there's a problem when a Canadian court orders an American company to violate American law.
There's no law in the US that prevents Google from delisting a search result. Google essentially asked a US court to make something up so it could defy the order. I don't really feel that falls within the bounds of complying with the order.
I think if it happened then Google or search engines in general would quickly splinter into regional alternatives that provide whatever is allowed in their jurisdiction.
I have a competing hypothesis. It seems to me that a lot of the people writing the vacancies have too big an ego to acknowledge that someone without work experience can do the job they've been doing for years (and maybe still are doing). They seem to have forgotten that they also started without any experience.
I think it's quite strange that when respected and rational people like Elon Musk and Stephen Hawking warn against the dangers of AI, some people still dismiss it as irrational FUD. Did you consider they might have a point?
On the other hand, I think it's quite strange that a talented entrepreneur and a physicist, among others, are considered a source of expertise in a field they have nothing to do with, per se. I don't see any of the top AI/ML researchers voicing these kinds of concerns. And while I highly respect Musk and Hawking, and agree that they are rational people, their concerns seem to be driven by "fear of the unknown" more than anything else, as another comment pointed out.
Whenever I see discussions about the dangers of AI, they are always about those Terminator-like AI overlords that will destroy us all. Or that humans will be made redundant because robots will take all our jobs.
But there are never concrete arguments or scenarios, just vague expressions of fear. Honestly, if I think about all the things HUMANS have done to each other and the planet, I can hardly imagine anything worse than us.
Maybe, but that's the worst-case scenario. The AI can't prove the conjecture, nor disprove it, nor prove that it's unprovable, because the shortest proof requires 10^200 terabytes.
Make no mistake, FUD kills. If you're in a position of influence, you want to make goddamned sure you're right before you hold back human (and machine) progress by focusing only on Things That Could Go Horribly Wrong. Otherwise, you're basically asking for unintended consequences instead of just trying to warn humanity about them.
... which is just outrageously inappropriate at this stage. If he goes full Howard Hughes, which I'm increasingly worried about, he could set us back decades.
Can you please write what concerns they have? I have tried to google Elon Musk's quotes about AI, but all I found was him saying that we should fear AI because it is like summoning a demon... Does he have some thought-out points you refer to? Because from all of his quotes, it seems that he doesn't know how AI works.
I think the AI concerns have been summarised below in this thread:
A) We're striving to make strong AI.
B) It seems plausible that as computing and AI research continues, we'll get to strong AI eventually given that brains are "just" extremely complex computers.
C) We do not know what strong AI will be able to do or how it will act, if it exceeds human intelligence.
The concern is not with the current state of the art, but what could happen in the future if we continue improving AI without seriously considering some safeguards against making a system that at some point becomes clever enough to start making itself even smarter.
I won't claim I'm an AI expert, but I think people like Musk and Hawking deserve (based on their accomplishments) to be taken seriously when they express concerns. I very much doubt that everyone in this thread dismissing their comments as irrational fear mongering has enough knowledge of the topic to do so.
This is absolutely the best strategy, in my opinion. If they sold the company one month after you left, you'd still have almost the same percentage as they would, which is fair. If they stuck around for another 16 months and then exited, your share would be diluted, which is also fair.
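To make the dilution arithmetic concrete, here's a toy sketch in Python. All numbers are hypothetical: assume two co-founders each vest the same number of shares per month on a four-year schedule, the one who leaves stops vesting at departure, and the one who stays keeps vesting until the exit.

    # Toy vesting/dilution model; every number here is made up for illustration.
    MONTHLY_SHARES = 100  # shares each founder vests per month (hypothetical)

    def leaver_ownership(leave_month, exit_month):
        """Departing founder's share of the founders' pool at exit, in percent."""
        leaver = MONTHLY_SHARES * leave_month  # vesting stopped at departure
        stayer = MONTHLY_SHARES * exit_month   # kept vesting until the exit
        return 100.0 * leaver / (leaver + stayer)

    print(leaver_ownership(24, 25))  # exit 1 month after leaving: ~49%
    print(leaver_ownership(24, 40))  # exit 16 months later: 37.5%

The longer the remaining founders keep working after you leave, the more of the pool they vest relative to you, which is exactly the dilution described above as fair.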