Hacker News

e.g. "This autonomous car is 4-5x safer than your human-driven car."

When people hear numbers like that, they may assume the claim is backed by sufficient data and is true, even when it's only theoretical.

But how could it be true or acceptable, given that we're only just starting to have more autonomous vehicles on the road, and the safety could be dependent on the ratio of autonomous vehicles to human drivers?



Are you talking about the study, or people claiming that about real cars? The study would need to have a wide variety of safety numbers, and all the cars are theoretical. If 4-5x lines up with a claim about a real car, it's probably a coincidence. Saying there's not enough data about real cars has no bearing on this research at all, unless the abstract is wildly misleading.

You replied to someone talking about waiting for autonomous cars to be 4-5x as safe. That means they are waiting for a car to meet that threshold with correct statistics. You can't call out nonexistent statistics as being a lie!


We have safety statistics on real cars with a minority of autonomous vehicles on the road.

What we don't have are adequate safety statistics on autonomous cars on the road with each other and human drivers at various ratios of autonomous cars to human-driven cars.

If a study could propose safety numbers based on various ratios and various autonomous cars and systems, that would be adequate, provided the theories behind those numbers are well-founded.

If you were to say, "This is how it is: given that the ratio of autonomous cars to human-driven cars isn't changing and we expect it to stay the same, we can see that the autonomous cars are 4-5x safer," then I'd not call it a lie.

But "4-5x safer" without qualification shows a gross misunderstanding of how much the ratio of autonomous to human drivers matters; variation in environment and expected behavior could increase or decrease safety, leading to false assumptions.


It would be easy to do a comparison with different ratios, though, so you shouldn't preemptively assume incompetence.

And it seems pretty unlikely that increasing the number of autonomous cars will make them significantly less safe, and that's the only result that would be a problem. Mild fluctuations don't matter on a scale as coarse as "4-5x", and an improvement would be good.


Do you think that increasing the number of autonomous cars will linearly decrease the number of accidents up to a certain point? Don’t you think that increasing their number will increase the chances of encounters with really bad human drivers? We simply don’t have sufficient info on whether those ‘meetings’ are less or more deadly than they would be with a human driver, and those encounters cause a significant chunk of accidents. And by intuition I doubt that today’s AI could react better than a competent human driver to someone cutting in front of it, and the like, simply because we are better at reading high-level patterns in others’ driving. Reaction time is not the only metric that matters.


> Do you think that increasing the number of autonomous cars will linearly decrease the number of accidents up to a certain point? Don’t you think that increasing their number will increase the chances of encounters with really bad human drivers? We simply don’t have sufficient info on whether those ‘meetings’ are less or more deadly than they would be with a human driver, and those encounters cause a significant chunk of accidents.

There's a difference between the risk changing, versus merely going from insufficient data to sufficient data.

When you have an extremely small data pool, it's also quite possible that one or two meetings with a really bad driver will give you a misleadingly bad impression of autonomous cars.

But I'll put it this way. Once we've seen either 10 billion miles or 100 fatalities from a particular tier of self-driving, we'll have a very solid idea of how dangerous it is. Getting that much data only requires a tenth of a percent of cars in the US for three years. (And if they're particularly dangerous we can easily abort the test early.)
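The back-of-the-envelope claim above can be checked with rough fleet figures. The numbers below are assumptions for the sake of the estimate (roughly 280 million registered US vehicles, roughly 13,500 miles driven per vehicle per year), not figures from the comment:

```python
# Sanity check: does 0.1% of US cars for three years cover ~10 billion miles?
# Both constants below are assumed round figures, not sourced from the thread.
US_VEHICLES = 280e6               # assumed US registered vehicle count
MILES_PER_VEHICLE_YEAR = 13_500   # assumed average annual mileage per vehicle
FRACTION = 0.001                  # a tenth of a percent of the fleet
YEARS = 3

fleet_size = US_VEHICLES * FRACTION                      # ~280,000 vehicles
total_miles = fleet_size * MILES_PER_VEHICLE_YEAR * YEARS

print(f"{total_miles / 1e9:.1f} billion miles")  # ~11.3 billion, clearing 10B
```

Under those assumptions the test fleet logs a little over 11 billion miles in three years, so the "tenth of a percent for three years" figure holds up.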

> And by intuition I doubt that today’s AI could react better than a competent human driver to someone cutting in front of it, and the like. Simply because we are better at reading high-level patterns in others’ driving. Reaction time is not the only metric that matters.

If someone's dangerously cutting people off there probably isn't much to read in their patterns. Being cut off seems to me like one of the situations that is most about reaction time and least about thinking.



