Hacker News

Aren't there numerous ML techniques potentially applicable to evaluating potential future crimes, where even the people responsible for writing the algorithm and feeding in the massive dataset don't really understand how a particular person might "get to red"? Transparency seems to be only part of the problem.


The corollary to this: if an algorithm uses AI or machine learning to the extent that nobody precisely understands why it makes the decisions it does, it will be very difficult to change its behavior in specific cases, e.g. "make it stop doing that," especially when the inputs cannot be changed.

This is going to come up at some point when a self-driving car does something that appears totally irrational and ends up causing an accident. Engineers will need to come up with some kind of explanation, and I suspect the general public will not be satisfied when they learn that the explanation may be unknowable, or may reduce only to probability instead of certainty.

I deal with some of this at work in far more trivial use cases, and non-engineers just can't seem to accept that sometimes you cannot fix the imperfections without introducing worse imperfections in other areas of the system, and that ML generally leads to output that is "good enough" instead of perfect.
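(A toy illustration of the trade-off described above, with entirely invented scores: moving a classifier's decision threshold to "fix" one kind of imperfection creates the other kind.)

```python
# Hypothetical scores: how confidently a model flags each case.
scores_pos = [0.9, 0.8, 0.6, 0.4]   # cases that should be flagged
scores_neg = [0.7, 0.3, 0.2, 0.1]   # cases that should not be flagged

def errors(threshold):
    """Count both error types at a given decision threshold."""
    misses = sum(s < threshold for s in scores_pos)         # false negatives
    false_alarms = sum(s >= threshold for s in scores_neg)  # false positives
    return misses, false_alarms

for t in (0.35, 0.75):
    fn, fp = errors(t)
    print(f"threshold={t}: misses={fn}, false alarms={fp}")
```

Here a low threshold (0.35) misses nothing but raises a false alarm; raising it to 0.75 removes the false alarm at the cost of two misses. Neither setting is "perfect", only differently imperfect.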


Maybe. But at least we can demand to know the system's input data and what measure(s) the training process is designed to optimize. For example, in the case of Beware, which public records are being used, and what is the benchmark for red/yellow/green? Also, are there any systems in place to try to reduce things like racial bias or the self-reinforcement effect described in the report? Why or why not?
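(One concrete form such a demand could take, sketched with invented data: given access to a system's inputs and outputs, a basic bias check is the "disparate impact" ratio, comparing how often each group receives the adverse label. The function name and records below are hypothetical.)

```python
from collections import defaultdict

def adverse_rate_by_group(records):
    """records: list of (group, label) pairs; 'red' is the adverse outcome."""
    counts = defaultdict(lambda: [0, 0])  # group -> [adverse count, total]
    for group, label in records:
        counts[group][1] += 1
        if label == "red":
            counts[group][0] += 1
    return {g: adverse / total for g, (adverse, total) in counts.items()}

# Invented audit sample: group B is flagged "red" twice as often as group A.
records = [("A", "red"), ("A", "green"), ("A", "green"), ("A", "green"),
           ("B", "red"), ("B", "red"), ("B", "green"), ("B", "green")]
rates = adverse_rate_by_group(records)
ratio = min(rates.values()) / max(rates.values())
print(rates)
print(f"disparate impact ratio: {ratio:.2f}")
```

A ratio well below the commonly cited 0.8 rule of thumb would at least be a red flag worth explaining, even if the model itself stays a black box.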


Maybe, but I'm not aware of any that have produced predictions on a broad in vivo population that are better than existing methods. It's one thing to do a heat map of a region at a certain time of day, and express that in terms of overall statistical risk. It's quite another to try and read the future intentions of a human being based on what amounts to a wealth of noise and a paucity of signal.
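(A small simulation of that distinction, with all numbers invented: when the per-person signal is weak and the noise large, the aggregate event rate is easy to estimate, while ranking individuals barely beats the base rate.)

```python
import random

random.seed(0)
N = 100_000
BASE_RATE = 0.05   # overall event rate in the population
SIGNAL = 0.02      # tiny per-person shift in true risk

people = []
for _ in range(N):
    risky = random.random() < 0.5
    p = BASE_RATE + (SIGNAL if risky else -SIGNAL)
    event = random.random() < p
    # observable "score": the weak signal buried under heavy noise
    score = (1 if risky else 0) + random.gauss(0, 5)
    people.append((score, event))

# Aggregate view: the empirical rate tracks the true base rate closely.
agg = sum(e for _, e in people) / N
print(f"estimated aggregate rate: {agg:.3f} (true {BASE_RATE})")

# Individual view: flag the top half by score and check their event rate.
people.sort(reverse=True)
top_half = people[: N // 2]
flagged_rate = sum(e for _, e in top_half) / len(top_half)
print(f"event rate among flagged individuals: {flagged_rate:.3f}")
```

The aggregate estimate lands near the true 5%, while the flagged half's event rate sits only marginally above it: statistically real, practically useless for judging any one person.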


Just curious as the name is familiar - do you go to Georgia Tech?


I don't, sorry.



