
Let's imagine your software malfunctions constantly and one day stops working, and the engineer who wrote the whole critical piece decides that he will officially leave the investigation group but says he will cooperate.

What will happen?



> What will happen?

If you happen to have subpoena power or can otherwise legally compel the engineer to cooperate, then what will happen is that they lose their seat at the table and get downgraded to a code-monkey that gets called in as needed. "What does this widget do?", "It increments the frobbs. The thinking at the -" "That is all we need to know, thank you for your time. You are free to leave."

Not to mention that you are the engineering board, and your report on this project will have a direct impact on whether they can continue to practice as an engineer.


Except that’s not what happened here. In this analogy, that engineer made public statements before the investigation was completed, and then the investigating group told them to leave.


The problem is: was it a malfunction? The driver is always required to pay full attention, and it is just a driver assist.


In your worldview, can Autopilot ever malfunction? Because the driver is always supposed to be vigilant.

Clearly, by your definition Autopilot can never malfunction. Then why raise the point that Autopilot didn't malfunction, when you think it will always be a perfect system even if it decides to mow down pedestrians?


The autopilot can (and probably did) malfunction, but the responsibility rests with the driver.


Was it a malfunction?

Are you serious? The car decided to turn and hit a divider at 60 mph without ever attempting to brake.

The reason the driver is required to pay full attention is because the software malfunctions in some cases.


If the car drove itself into a barrier because of malfunction, it's a manufacturing defect.

If the car drove itself into a barrier without a malfunction, it's a design defect.

If the car didn't drive itself into a barrier, well, that's a different story.


Let's say the pilot walks out of the cockpit to pee, doesn't notice the co-pilot is narc'd out for his knee and back pain, and the plane rolls into a banking spiral. Is it the software's fault?

The guy hadn't touched the wheel for six seconds, in traffic. WTF.


A better real-life example:

https://en.wikipedia.org/wiki/American_Airlines_Flight_965

Basically, distracted/lost pilots accidentally used an autopilot/flight management system to program a flight path that took the jet into the side of a mountain. They did receive a warning in the cockpit, but attempted to recover too late.

Now, to your question (from the Wikipedia article linked above): "American Airlines settled numerous lawsuits brought against it by the families of the victims of the accident. American Airlines filed a "third-party complaint" lawsuit for contribution against Jeppesen and Honeywell, which made the navigation computer database and failed to include the coordinates of Rozo under the identifier "R"; the case went to trial in United States District Court for the Southern District of Florida in Miami. At the trial, American Airlines admitted that it bore some legal responsibility for the accident. Honeywell and Jeppesen each contended that they had no legal responsibility for the accident. In June 2000, the jury found that Jeppesen was 30 percent at fault for the crash, Honeywell was 10 percent at fault, and American Airlines was 60 percent at fault."

So, yes - even in a situation where the automation system in question was much more rigorously tested and the users had much more specialized training, the automation system was found to be partially at fault.


>automation system

Tesla Autopilot is a driver-assist system, not automated driving.

If you want to put someone on trial, you first have to establish a crime.

If the crime is a bug in autopilot, then yes, Tesla is guilty.

If the crime is killing the driver of a vehicle because of an accident, then Tesla is not guilty.

We can talk about bugs in software outside of a manslaughter case.


First and foremost, Tesla itself has sold, and still sells, Autopilot as a self-driving system.

Not all trials are criminal trials. Some are civil trials, meaning generally that a tort occurred and one person is suing another to recover damages. That is the most likely trial in this situation.

If the crime is a bug in Autopilot, then an engineer at Tesla might be guilty of any number of crimes. How the driver dies doesn't particularly matter if the bug causes his death. But more likely, Tesla is liable for product defect resulting in the death of one customer and the near-death experiences of at least 3 other drivers.

This isn't a manslaughter case. But it is a product liability + negligence + libel + invasion of privacy + intentional infliction of emotional distress case when it didn't have to be, and those additional claims will likely destroy Tesla financially. Trial experts have been quoted as saying a verdict in excess of $100 million is likely if Tesla were stupid enough to go to trial (and right now, Musk is definitely being that stupid).


> The guy hadn't touched the wheel for six seconds

This phrasing is Tesla spin, and I recommend being a bit more cautious about accepting what they say about this at face value.

A Tesla Motors car cannot detect whether or not your hands are on the wheel -- it can only detect torque on the steering column from the driver, and only if it exceeds a specific threshold. Regular Tesla drivers have documented getting frequent false positives on this alert.
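
To make that concrete, here's a minimal sketch of the threshold idea in Python: hands are inferred only from steering-column torque, so a hand resting on the wheel without applying torque reads the same as no hand at all. Every name, unit, threshold, and timeout below is made up for illustration; this is not Tesla's actual implementation.

    # Hypothetical torque-threshold "hands on wheel" check -- not Tesla's code.
    TORQUE_THRESHOLD_NM = 0.5   # assumed minimum driver torque, in newton-metres
    TIMEOUT_S = 6.0             # assumed window without torque before a warning

    def hands_detected(torque_samples_nm, sample_period_s=0.1):
        """True if any sample in the last TIMEOUT_S seconds exceeds the threshold."""
        window = max(1, int(TIMEOUT_S / sample_period_s))
        recent = torque_samples_nm[-window:]
        return any(abs(t) >= TORQUE_THRESHOLD_NM for t in recent)

    # A hand resting on the wheel with zero applied torque is indistinguishable
    # from hands-off, while a single bump above the threshold counts as "hands on".
    print(hands_detected([0.0] * 60))           # False
    print(hands_detected([0.0] * 59 + [0.8]))   # True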


Torque is the correct measure to use. If the person is touching the wheel but not applying torque, how do I know they're not just resting a hand at the bottom?

I want to know that the person is there and moving the steering wheel, since if anything happens I need them to move the wheel, not rest their hand somewhere.


That is the correct behavior. If your hands are not required to impart measurable torque, then it'd be easy to spoof and would lead to more false negatives.

So you're suggesting this was a false negative, i.e. that the driver did have his hands on the wheel?


It is indisputable at this point that Autopilot happily steers cars into the concrete barrier.

If Tesla does not share some responsibility (in a legal sense) for this crash, then under what circumstances would they share responsibility?



