Hacker News

Your point about the "shape" is interesting and, I think, critical to the future of AI (not to get hyperbolic or anything...).

For example, suppose we have a cancer-diagnosing/treatment-planning algorithm. It's possible that it's much better than human doctors: out of a thousand patients, human doctors will save 300 and the algorithm 500. But it's also possible that the algorithm's 500 are not a strict superset of the doctors' 300.

And to your point, it's possible that for some of the 300 who are not among the 500, the diagnosis/treatment recommended by the algorithm is obviously, even hilariously, wrong to a human.

If so, will we insert a human into the mix? How will we decide when it's correct for the human to override the algorithm? Because if they override every time, we're back to the 300. And maybe the times when it's correct to override are not all obvious.
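The override arithmetic can be made concrete with a toy model (all numbers are the hypotheticals from this thought experiment, and the function is just a sketch, not a real clinical model):

```python
def net_saved(algo_saves, doc_saves, overlap, overrides_correct, overrides_wrong):
    """Net lives saved out of 1000 patients under a human-override policy.

    algo_saves:        patients the algorithm alone would save (e.g. 500)
    doc_saves:         patients doctors alone would save (e.g. 300)
    overlap:           patients saved by BOTH doctor and algorithm
    overrides_correct: overrides where the doctor was right and the algorithm wrong
    overrides_wrong:   overrides where the algorithm was right and the doctor wrong
    """
    saved = algo_saves
    # A correct override can only rescue a patient the doctor would save
    # but the algorithm would not: at most (doc_saves - overlap) such cases.
    saved += min(overrides_correct, doc_saves - overlap)
    # A wrong override loses a patient the algorithm would have saved.
    saved -= min(overrides_wrong, algo_saves)
    return saved

# Never override: pure algorithm outcome.
print(net_saved(500, 300, 200, 0, 0))      # 500
# Defer to doctors on every disagreement: back to the doctors' number.
print(net_saved(500, 300, 200, 100, 300))  # 300
# Selective, mostly-correct overrides can beat either alone.
print(net_saved(500, 300, 200, 50, 20))    # 530
```

The middle case is the comment's point: an override policy that triggers "all the time" erases the algorithm's advantage entirely, so the value of a human in the loop hinges on how selectively and accurately the overrides happen.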

Or are we willing to simply accept the algorithm's judgment, knowing that an additional 200 will be saved? We know this is an unlikely outcome, because a substantial portion of the population is unwilling to accept the idea that vaccines save more lives than they cost, simply because the lives they cost are different from the ones they save.
