
> Why did you suggest THIS as an example of what he's talking about?

He seems to be pooh-poohing the entire idea of "eliminating bias" in AI. So I felt it was important to

* point out that there are clear cases of bias in AI no matter where you stand on gender

* move on to explain a closely related case (using historical speech about race could be offensive)

* use the lesson to show that using historical speech about gender could be problematic as well

> Furthermore, that sounds like a problem of having incomplete training data.

A model trained on historical data can only reflect historical conventions. The social conventions around gender are changing rapidly and remain contentious.

> Regardless, manually tweaking a model points to a failure in the process somewhere.

Here, there's no manual tweaking of the model: merely a refusal to return results in an area where the model has proven problematic.
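
For concreteness, here's a minimal sketch of what that kind of refusal could look like, assuming a simple keyword blocklist applied after generation (the names filter_suggestion and BLOCKED_TERMS are made up for illustration; whatever Copilot actually does isn't public):

    # Hypothetical post-generation filter: the model itself is untouched,
    # its output is just withheld when it touches a flagged area.
    from typing import Optional

    BLOCKED_TERMS = {"gender"}  # placeholder for whatever terms get flagged

    def filter_suggestion(suggestion: str) -> Optional[str]:
        lowered = suggestion.lower()
        if any(term in lowered for term in BLOCKED_TERMS):
            return None  # refuse to return a result rather than tweak the model
        return suggestion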



I don't think your example is indicative of what he opposed.

If you can't effectively train something from existing data, then cherry-picking results according to different values isn't going to fix it. Your example has quietly shifted from facial recognition of different races to speech about different races. I can't even be sure what you're talking about, other than that you oppose criticism of imparting political bias into models.


> then cherry-picking results according to different values isn't going to fix it.

Again: "merely a refusal to return results in an area where the model has proven problematic."

> Your example has quietly shifted from facial recognition of different races to speech about different races.

Again, three points:

* First, no matter how you feel about gender: bias in AI is a problem, as evidenced by issues with recognizing black faces.

* Second, there are some obvious cases where we can all agree that using past training data could produce output that is currently offensive. There are pieces of language we pretty much all agree we should use differently now to avoid offense (e.g. mongoloid).

* Third, I believe that gender is one of these cases. Social mores are evolving. Using conventions from the past when our collective norms are changing over a span of months basically guarantees offense.


> If you can't effectively train something from existing data, then cherry-picking results according to different values isn't going to fix it.

Given the variance in the utility of Copilot's suggestions, this doesn't seem true on its face. Define "effectively" here, and I think cherry-picking would fall well within its range.



