Let's imagine your software malfunctions constantly and one day stops working entirely, and the engineer who wrote the whole critical piece decides he will officially leave the investigation group but says he will cooperate.
If you happen to have subpoena power, or can otherwise legally compel the engineer to cooperate, then what happens is that they lose their seat at the table and get downgraded to a code-monkey who gets called in as needed. "What does this widget do?" "It increments the frobbs. The thinking at the -" "That is all we need to know, thank you for your time. You are free to leave."
Not to mention that you are the engineering board, and your report on this project will have a direct impact on whether they can continue to practice as an engineer.
Except that’s not what happened here. In this analogy, the engineer made public statements before the investigation was completed, and then the investigating group told them to leave.
In your worldview, can Autopilot ever malfunction? Because the driver is always supposed to be vigilant.
Clearly, by your definition Autopilot can never malfunction. Then why keep raising the point that Autopilot didn't malfunction, when you think it will always be a perfect system even if it decides to mow down pedestrians?
Let's say the pilot walks out of the cockpit to pee, doesn't notice the co-pilot is narc'd out on painkillers for his knee and back pain, and the plane rolls into a banking spiral. Is it the software's fault?
The guy hadn't touched the wheel for six seconds, in traffic. WTF.
Basically, distracted/lost pilots accidentally used the autopilot/flight management system to program a flight path that took the jet into the side of a mountain. They did receive a warning in the cockpit, but attempted to recover too late.
Now, to your question (from the Wikipedia article linked above):
"American Airlines settled numerous lawsuits brought against it by the families of the victims of the accident. American Airlines filed a "third-party complaint" lawsuit for contribution against Jeppesen and Honeywell, which made the navigation computer database and failed to include the coordinates of Rozo under the identifier "R"; the case went to trial in United States District Court for the Southern District of Florida in Miami. At the trial, American Airlines admitted that it bore some legal responsibility for the accident. Honeywell and Jeppesen each contended that they had no legal responsibility for the accident. In June 2000, the jury found that Jeppesen was 30 percent at fault for the crash, Honeywell was 10 percent at fault, and American Airlines was 60 percent at fault."
So, yes - even in a situation where the automation system in question was much more rigorously tested and the users had much more specialized training, the automation system was found to be partially at fault.
First and foremost, Tesla itself sold, and still sells, Autopilot as a self-driving system.
Not all trials are criminal trials. Some are civil trials, meaning generally that a tort occurred and one person is suing another to recover damages. That is the most likely trial in this situation.
If the crime is a bug in Autopilot, then an engineer at Tesla might be guilty of any number of crimes. How the driver dies doesn't particularly matter if the bug caused his death. More likely, though, Tesla is liable for a product defect resulting in the death of one customer and the near-death experiences of at least three other drivers.
This isn't a manslaughter case. But it is a product liability + negligence + libel + invasion of privacy + intentional infliction of emotional distress case when it didn't have to be, and those additional claims will likely destroy Tesla financially. Trial experts have been quoted as saying a verdict in excess of $100 million is likely if Tesla were stupid enough to go to trial (and right now, Musk is definitely being that stupid).
> The guy hadn't touched the wheel for six seconds
This phrasing is Tesla spin, and I recommend being a bit more cautious about accepting what they say about this at face value.
A Tesla Motors car cannot detect whether or not your hands are on the wheel -- it can only detect torque applied to the steering column by the driver, and only when it exceeds a specific threshold. Regular Tesla drivers have documented getting frequent false positives on this alert.
Torque is the correct measure to use. If the person is touching the wheel but not applying torque, how do I know they aren't just resting a hand at the bottom of it?
I want to know that the person is there and actively moving the steering wheel, because if anything happens I need them to move the wheel, not rest their hand somewhere.
That is the correct behavior. If your hands are not required to impart measurable torque, then it'd be easy to spoof and would lead to more false negatives.
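To make the tradeoff concrete, here is a minimal sketch of threshold-based "hands on wheel" detection. This is not Tesla's actual implementation; the threshold, sample rate, and timeout are invented purely to illustrate why a torque test can miss a resting hand (false negative) or trigger on vibration (false positive).

```python
# Minimal sketch of torque-threshold driver-presence detection.
# NOT Tesla's implementation -- threshold, window, and units are
# assumptions made up for illustration only.

TORQUE_THRESHOLD_NM = 0.3   # assumed: minimum driver torque that counts as "present"
TIMEOUT_S = 6.0             # assumed: seconds without torque before a warning

def hands_detected(torque_samples_nm, sample_period_s=0.1,
                   threshold_nm=TORQUE_THRESHOLD_NM, timeout_s=TIMEOUT_S):
    """Return True if driver torque exceeded the threshold within the timeout window.

    A hand resting passively on the wheel may never exceed the threshold
    (false negative), while road vibration or a knee bump can exceed it
    with no hands on the wheel at all (false positive).
    """
    window = int(timeout_s / sample_period_s)
    recent = torque_samples_nm[-window:]
    return any(abs(t) >= threshold_nm for t in recent)

# Example: six seconds of near-zero torque reads as "no driver input",
# even if a hand was physically resting on the wheel the whole time.
print(hands_detected([0.05] * 60))          # False
print(hands_detected([0.05] * 59 + [0.4]))  # True
```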
So you're suggesting this was a false negative, i.e. that the driver did have his hands on the wheel?
What will happen?