While I don't necessarily agree with how they've advertised it, I think they are legally safe. All Model 3s DO have the HARDWARE to support full self-driving, even if the software is not there yet. And regarding the software, it has been shown to be about 40% safer than human drivers where it is used, which is what they've claimed.
Because humans can drive with comparable (worse, actually) hardware. And since the computer has superhuman reflexes, software with superhuman driving ability can exist.
This is the part that can be emulated in software. All you have to prove is that whatever platform/language you're using for the programming is Turing Complete, which is the case for almost all of the most popular languages.
So can I run Crysis on my TI calculator? I'm sure whatever platform/language is running on it is Turing Complete. I think you missed the point that the brain is also hardware.
> All Model 3s DO have the HARDWARE to support full self-driving, even if the software is not there yet.
Please provide evidence.
Given that the only full self-driving system out there is Waymo's, which uses completely different hardware from Tesla's, it is impossible to back your claim unless you develop a full self-driving system on top of Tesla's hardware.
So until that is done, your claim is false, no matter how many caps you use.
Sure, they have cameras to see and a computer to do rigorous processing. We can compare this to a human, who uses their vision to perceive the outside world while driving. (You could also argue humans use their hearing; well, Teslas have mics if they really wanted to use that.)
> they have cameras to see and a computer to do rigorous processing. We can compare this to a human, who uses their vision to perceive the outside world while driving
My phone also has cameras and a processor. Are you implying that my phone is as equipped to drive a car as a human?
A monkey has eyes, a brain, arms and legs. Are you implying that a monkey is as equipped to drive a car as a human?
I'd say the monkey has most of the hardware but not the software, analogously. As for the phone, right, I was assuming the cars have enough processing power and memory to do the necessary processing. But granted that they haven't demonstrated full self-driving on their current hardware, I will have to concede that we do not know how much processing power and memory are needed to be truly self-driving. For all we know, that last 10% of self-driving capability may require an exponential increase in the amount of processing required.
If we ask what the reasonable expectation is after an advertisement tells a customer that the capability "exceeds human safety", I would say that the average customer thinks of a fully automated system.
This couldn't be further from the truth, as automated vehicles still suffer from edge cases (remember the fatal accident involving a concrete pillar) where humans can easily make better decisions.
A system that is advertised as superior to human judgement ought to strictly improve on human performance. Nobody expects that to mean the car drives perfectly 95% of the time but accidentally kills you in a weird situation. This 'idiot savant' characteristic of ML algorithms is what still makes them dangerous in day-to-day situations.
Yes, I totally agree; I think there should be some regulation in this area, at least in terms of clarity in advertising. I think it's OK to deploy such a system where in some or most cases the AI will help, but it needs to be made apparent that it can and will fail in some seemingly simple cases.
That's probably a gambit to avoid being easily sued, but it's a really bad-faith attempt to mislead. Most people are going to read that and assume that a Tesla is self-driving, as evidenced by the multiple accidents caused by drivers trusting the car too much. Until full self-driving is real, they shouldn't be allowed to advertise something that hasn't shipped.