The manufacturers of devices should be held responsible for the security of the devices they sell. This is no different from what they are already required to do for the electrical and physical parts of those devices. Require them to pass some standard tests that would catch at least the most obvious vulnerabilities: default passwords and other bad practices. Run Metasploit against the device, etc.
This would also raise attention to secure software development in other areas of the IT business. Sure, it costs more upfront, but we gain because we suffer less from this kind of attack.
I agree. The manufacturer has not only released a device that is defective for its user, but one that has actually been weaponized against innocent third parties. That goes beyond the normal problems of providing a product to buyers. When manufacturers can actually degrade the quality of the shared space, that's where we go beyond the normal consumer-protection rules and have special regulations, like the FCC in the USA or the CRTC in Canada.
While right now what you get is basically caveat emptor, I could see these security problems eventually being treated by regulators as more than a simple liability matter.
I'm counting the days until cars are weaponized remotely via a rich text message received by your onboard entertainment/computer system, which isn't physically disconnected from the vehicle controller because, at a minimum, it accesses status info.
Some grave accident is, unfortunately, the only way mandatory minimum standards and improvements will come about. Remotely disabling the brakes would be enough for regulation to appear.
It's ironic that (semi-)autonomous driving will improve safety, while all the added software and network connectivity will accelerate the need for software quality requirements like those used for airplanes, because software defects will be deemed lethal. I really hope that happens before someone remotely causes a vehicle to cost lives. That said, I suppose a vehicle would shut off the motor if everything else (sensors, actuators, ...) fails.
> I'm counting the days until cars are weaponized remotely via a rich text message received by your onboard entertainment/computer system, which isn't physically disconnected from the vehicle controller because, at a minimum, it accesses status info.
I still don't get why car companies don't just put a diode between the vehicle controller and the infotainment system, and make it physically into a one-way-push system.
On the other hand, once self-driving cars are a thing, that point is almost moot... you need an internet-connected device with incredibly complicated code to run one.
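For the non-self-driving case, though, here's roughly what I mean by a one-way push, as a Python sketch (the frame IDs, the stubbed bus read, and the UDP link to the infotainment side are all made up for illustration; a real design would use an actual hardware data diode):

    # Sketch of a one-way "status push" gateway: the vehicle-controller side
    # only ever sends; nothing from the infotainment side is ever read back.
    # The frame IDs and the stubbed read_status_frame() are hypothetical.
    import random
    import socket
    import struct
    import time

    ALLOWED_STATUS_IDS = {0x3E8, 0x3E9}          # e.g. speed, fuel level (made up)
    INFOTAINMENT_ADDR = ("192.168.100.2", 9000)  # hypothetical link to the head unit

    def read_status_frame():
        # Placeholder for reading a frame from the vehicle bus.
        frame_id = random.choice([0x3E8, 0x3E9, 0x7DF])  # 0x7DF simulates a control frame
        payload = random.randint(0, 255)
        return frame_id, payload

    def main():
        # UDP socket with no recv() calls anywhere: data can only flow outward.
        tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        while True:
            frame_id, payload = read_status_frame()
            if frame_id in ALLOWED_STATUS_IDS:   # drop everything that isn't plain status
                tx.sendto(struct.pack("!HB", frame_id, payload), INFOTAINMENT_ADDR)
            time.sleep(0.1)

    if __name__ == "__main__":
        main()

The point is the same as with a hardware diode: there is simply no code path by which anything coming from the infotainment side can reach the controller.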
The infotainment system is the least of your worries; the real problem is the telematics unit present in most newer vehicles, which is little more than a cellular radio connected to the CAN bus.
Why isn't GPS plus lidar and friends enough, given that the map data is already available at detailed resolution on local storage? The internet isn't yet as pervasive, reliable and always-available as it should be, so I wouldn't want to be waiting for a connection in my autonomous vehicle while I'm on probation and have no license.
Real-time traffic information and the ability to talk to other vehicles more than a few metres away are a couple of use cases where I think autonomous vehicles can benefit from using the internet.
Satellite internet is reliable enough for low-bandwidth operations when cellular coverage is not available. It can also be a soft dependency, i.e. your car will still run when there is no connectivity, just not as efficiently.
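As a sketch of what a soft dependency could look like on the software side (the URL, cache path and timeout are invented, and requests is just a stand-in for whatever the telematics stack really uses):

    # Sketch: treat live traffic data as optional. If the link is down or slow,
    # fall back to the on-board cached data instead of blocking the drive.
    import json
    import requests

    CACHED_TRAFFIC_FILE = "/var/cache/nav/traffic.json"   # hypothetical local cache

    def load_traffic_data():
        try:
            resp = requests.get("https://traffic.example.com/v1/region/42",
                                timeout=2)                 # short timeout: don't stall routing
            resp.raise_for_status()
            data = resp.json()
            with open(CACHED_TRAFFIC_FILE, "w") as f:      # refresh the cache for next time
                json.dump(data, f)
            return data, "live"
        except (requests.RequestException, ValueError):
            try:
                with open(CACHED_TRAFFIC_FILE) as f:       # degrade to stale data
                    return json.load(f), "cached"
            except OSError:
                return None, "none"                        # route purely on static map data

    if __name__ == "__main__":
        data, source = load_traffic_data()
        print(f"routing with {source} traffic data")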
I disagree. It should be the responsibility of the ISPs to monitor their networks and shut down the harmful devices. Going after the hardware manufacturers is pointless; many of them are far too small to do anything about the problems they've created. You can sue them out of existence, but dozens more will pop up in their place.
I thought we wanted ISPs to be "dumb pipes"? Or did I not get the new position paper memo...
Putting the burden on ISPs is never going to happen. They are in the business of moving bits. While some, unfortunately, do things like traffic shaping, deep packet inspection, and all other sorts of middleboxes, that is all for economic purposes. There is not, nor will there ever be, an economic incentive for an ISP to filter traffic traversing its network unless it is directly impacting its operations. And when that happens, they take the most direct and effective route: bit-bucket it and notify the offender to clean up their house :)
But they are big enough to pass FCC, CE (for Europe) and all the other certifications required to sell their goods. This is only one more compliance test to pass. It's not special because it's computer stuff.
Okay, so some regulator comes up with a security test, the device passes the test, and then later on some security researcher discovers a flaw in either the test or in the device in a way that was not expected. What now? The manufacturer may well be out of business or they may be too small to recall all of their devices. They cannot be counted on to remedy the situation. As much as you'd like to blame the manufacturers, blaming them won't do much good.
More secure devices are something we need no matter what. If compliance tests are required, I suspect that companies will emerge to provide management and upgrade services for devices. Manufacturers won't have to take care of all aspects of security and will work with them. This will lower costs because economies of scale will happen inside the service company. It is going to require some standardization of the software platforms (but many are using Linux anyway) and the update methods. But those companies will become targets or accomplices. Beware.
Second prong: ISPs should firewall customers from within. I have a concern that this is another step towards Big Brother. I'm not an expert in networking so I can't make suggestions here, but if that webcam from a long-since-disappeared manufacturer starts doing DDoSes, the ISP cuts it off the Internet. This is fine with me. Beware of false positives.
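Something like this is what I picture on the ISP side, as a very rough sketch; the flow-record format, the threshold and the quarantine step are all invented, and a naive cutoff like this is exactly where the false positives would come from:

    # Rough sketch of ISP-side detection: if a single subscriber device opens an
    # unusual number of outbound flows to one destination in a short window,
    # quarantine it and notify the customer. All numbers and formats invented.
    from collections import Counter

    FLOWS_PER_MINUTE_THRESHOLD = 5000   # hypothetical "this looks like a DDoS" cutoff

    def suspicious_devices(flow_records):
        # flow_records: iterable of (device_ip, destination_ip) seen in the last minute.
        per_pair = Counter(flow_records)
        return {(dev, dst, n) for (dev, dst), n in per_pair.items()
                if n > FLOWS_PER_MINUTE_THRESHOLD}

    def quarantine(device_ip):
        # Placeholder: in reality this would push an ACL / walled-garden rule
        # and trigger a notification to the account holder.
        print(f"quarantining {device_ip} and notifying the customer")

    if __name__ == "__main__":
        sample = [("10.0.0.7", "203.0.113.9")] * 6000 + [("10.0.0.8", "198.51.100.1")] * 10
        for device_ip, dest_ip, count in suspicious_devices(sample):
            print(f"{device_ip} -> {dest_ip}: {count} flows/min")
            quarantine(device_ip)

A legitimate burst (a backup, a game update) would trip a crude threshold like this, which is exactly the false-positive worry.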
They are shipping criminally negligent software with their products. A future standard should simply require a known, open, secure method for updating the firmware on their devices. Demanding it be secure from the start is... dangerously naive.
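To make "a known, open, secure method for updating firmware" concrete, here is a minimal sketch assuming detached Ed25519 signatures and the Python cryptography package; the file paths and the baked-in key handling are simplified for illustration:

    # Minimal sketch of verified firmware updates: the device refuses to flash an
    # image whose detached signature doesn't verify against the vendor public key
    # baked into the device. Paths and key distribution are hypothetical.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

    VENDOR_PUBKEY_PATH = "/etc/firmware/vendor_ed25519.pub"   # hypothetical: 32 raw bytes

    def verify_and_install(image_path, signature_path):
        with open(VENDOR_PUBKEY_PATH, "rb") as f:
            pubkey = Ed25519PublicKey.from_public_bytes(f.read())
        with open(image_path, "rb") as f:
            image = f.read()
        with open(signature_path, "rb") as f:
            signature = f.read()
        try:
            pubkey.verify(signature, image)   # raises InvalidSignature on any mismatch
        except InvalidSignature:
            print("rejecting update: signature does not verify")
            return False
        flash(image)
        return True

    def flash(image):
        # Placeholder for the actual write to the firmware partition.
        print(f"flashing {len(image)} bytes")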
That would cause a lot of trouble for many people. Imagine having an ISP that cuts off the internet connection for a customer because "some device" is misbehaving. The customer, who may not be that tech savvy, will now be forced to figure out whether it's the fridge, TV, AC, wireless router, security system or whatever that has been compromised.
Even if the customer figures out which device is to blame, the manufacturer may not have patches, or may not even be in business any more. So we end up in a situation where you need to replace devices which ordinary people expect to have a 10+ year lifetime.
I don't think you're wrong, though; the ISP should shut down harmful devices, but the manufacturer also needs to be held responsible. It's just extremely complicated to go after the manufacturer, because many are just reselling white-label solutions, or are companies created for that one product.
Honestly, many of these IoT devices shouldn't exist to begin with, or shouldn't be allowed to communicate with the internet.
Here in the UK if my car is a danger to other road users, I'm banned from using it even if the defect isn't my fault, the manufacturer has gone out of business, and the car is only a fraction of the way into its expected lifetime. Caveat emptor.
If that makes it hard for new manufacturers to enter the market, they could always form an industry association and, through some combination of source code escrow and insurance, reassure customers their products will be maintained even if the supplier goes out of business. Travel agents, window manufacturers and cavity wall insulators all have such schemes.
In that case, who's responsible for keeping the anti-malware software up to date... and paying for it?
Or maybe that's going too far and the first step is to just have them encourage / force more secure passwords.
(Assuming this is correct [1]):
> Mirai functions by infecting IoT devices by trying to brute force their passwords. The tactic it uses to brute force passwords is entering commonly used and default passwords.
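A certification test for exactly that behaviour wouldn't have to be fancy. A rough sketch, where the credential list, the prompt strings and the test IP are guesses rather than anything standardized:

    # Sketch of a compliance check: the device under test fails if any well-known
    # default credential pair gets an interactive shell over telnet.
    # Prompt strings and the credential list are illustrative, not exhaustive.
    import socket

    DEFAULT_CREDENTIALS = [("admin", "admin"), ("root", "root"), ("root", "12345")]

    def read_until(sock, token, limit=4096):
        data = b""
        while token not in data and len(data) < limit:
            chunk = sock.recv(256)
            if not chunk:
                break
            data += chunk
        return data

    def default_login_works(host, user, password, port=23, timeout=5):
        try:
            with socket.create_connection((host, port), timeout=timeout) as s:
                s.settimeout(timeout)
                read_until(s, b"login:")
                s.sendall(user.encode() + b"\n")
                read_until(s, b"Password:")
                s.sendall(password.encode() + b"\n")
                banner = read_until(s, b"#")       # crude "did we get a shell" heuristic
                return b"#" in banner or b"$" in banner
        except OSError:
            return False                           # port closed or no telnet: this pair fails

    def device_passes(host):
        return not any(default_login_works(host, u, p) for u, p in DEFAULT_CREDENTIALS)

    if __name__ == "__main__":
        print("PASS" if device_passes("192.0.2.10") else "FAIL: default credentials accepted")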
I suggest a gradual approach. Anyway, the manufacturer has more costs, so the price is going to be higher. How much higher in the long run? Maybe not much. We already went through the same progression towards safety with materials that don't poison us and appliances that don't catch fire or interfere with the radio spectrum. Every one of those steps cost money. We still have cheap stuff to buy.
The difference here is that these devices must be maintained or retired, so there could be a recurring cost. Eventually there will be ecosystems of companies taking care of the updates and maintenance of devices made by their customers (the manufacturers).
We also have to educate people, to the point that they will feel ashamed to buy insecure devices and inadvertently help the criminals behind the attacks in the news. Nobody wants to help the mob (or worse) by keeping their stuff at home, right?
This is going to become a matter of national security in every country, because those attacks can be used as a weapon to incapacitate vital infrastructures.
I can't answer because I'm not sure which mechanism you're thinking about. Would you mind elaborating on the concept?
But as with any mechanism: who's operating it, who's paying for it, what's it going to do, etc.?
One idea is a state-owned crawler that tries to get into any device inside the country and tells people that their device X is insecure because of Y and Z. I think there are already many of those crawlers around, but they don't warn people. Quite the opposite ;-)
I'm wondering whether it's possible to log the IPs of attacking devices, both at targets and at intermediaries, then inform ISPs and demand that they inform device owners. But yes, who would do that, and who would pay, are the hard parts. Still, this was an expensive day for many providers.
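Just to make the logging half concrete, a sketch of the aggregation step; the log format is invented and the actual "tell the ISP" part, which would go through WHOIS/RDAP abuse contacts, is left as a comment:

    # Sketch: aggregate source IPs seen attacking, group them by network, and emit
    # a per-network report that could be forwarded to the responsible ISP.
    import ipaddress
    from collections import Counter, defaultdict

    def parse_attack_log(lines):
        # Each line is assumed to be "<timestamp> <source_ip>" (hypothetical format).
        for line in lines:
            parts = line.split()
            if len(parts) >= 2:
                yield parts[1]

    def group_by_network(ips, prefix=24):
        buckets = defaultdict(Counter)
        for ip in ips:
            net = ipaddress.ip_network(f"{ip}/{prefix}", strict=False)
            buckets[net][ip] += 1
        return buckets

    if __name__ == "__main__":
        sample_log = [
            "2016-10-21T12:00:01Z 198.51.100.23",
            "2016-10-21T12:00:02Z 198.51.100.77",
            "2016-10-21T12:00:02Z 203.0.113.5",
        ]
        for net, counter in group_by_network(parse_attack_log(sample_log)).items():
            # In a real system: look up the abuse contact for `net` via RDAP and notify.
            print(f"{net}: {len(counter)} devices, {sum(counter.values())} events")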
The major difference between physical products and network products is that the damage caused by most physical products is limited to their immediate vicinity. Network products have no such limits: IoT devices in Calcutta can harm networks in the U.S. That isn't true of a bad lawn-mower or blender. The network operators and ISPs need to step in and do better policing of their networks. This, of course, raises other issues, but blaming device manufacturers, even if it really is their fault, just won't work.
Yes, that approach works when the companies doing the damage are based in the U.S. and have the means to pay for the damages. Many of these IoT manufacturers are based in other countries. Suing them out of existence may feel good, but it won't stop dozens of other companies from popping up and doing the same thing.
Also, in your specific examples (leaded fuels, CFCs), it's relatively easy to test for bad products. How do you test for an insecure device?
Easier said than done. Information security is different from electrical and material safety, and the key difference is the pace of technical progress. The latter areas are relatively developed and stable; significant progress occurs over decades. The infosec landscape, by contrast, can change drastically in years or even months, within the lifespan of the devices. To give you some perspective, the toxicity of asbestos was discovered half a century after the beginning of its widespread use [1], and it took another half-century to fully realize the danger and stop using it.
When designing a new device, you can't mitigate a threat that you can't imagine. And you don't have enough time and budget (time-to-market is the key) to mitigate all known threats.
I'm not advocating for the status quo, which is dire. But there is no easy solution in sight.
If they were useless no one would buy them. They are indeed useful. The problem is they also can be harmful, but not to their users.
The situation can be loosely compared to RF interference. If you use the RF spectrum irresponsibly, you harm your neighbors, police, military and other parties, so everyone understands that some regulation is needed. RF spectrum regulation is in everyone's interest.
If you use insecure IoT devices, they are exploited to harm some abstract people on the other side of the globe; who cares about them? If more people realize their devices can be used to shut down their Twitter and Instagram, maybe something will change. But I personally doubt that.
Didn't we try this same argument already with gun manufacturers? How well did that work?
The problem with holding manufacturers responsible for releasing a potentially dangerous product is that manufacturers view compliance with regulation as an obstacle to maximizing profits, and seek to avoid it. Manufacturing is also centralized, so manufacturers organize and lobby politicians to influence regulation.
Consumers are distributed and disorganized, and I predict that blame for not securing your IOT device(s) will stay with consumers for the foreseeable future.
I agree some standardized testing would be a good idea though.
Not only that: presumably gun manufacturers do not insert backdoors into their products or otherwise make them easier to steal through gross negligence. Many of these IoT manufacturers clearly paid zero attention to security. That's unacceptable.
Right, but if you keep your gun under your mattress, someone steals it, and then uses it to shoot someone halfway across the world, it's not your fault.
It's like door locks -- we use them because thieves exist and we would rather manage keys than go through the effort of recovering our stolen goods, but even if someone doesn't have one, it's still always the fault of the thief.
The telnet password used by the botnets and the admin credentials for the end user are separate in many of these devices. My understanding is that the telnet password was set up so that the end user couldn't change it, well, with the exception of technically savvy end users.
People should start thinking about consumer protection for internet services and internet-connected devices. Minimum security standards for products brought to market should be just one aspect of this.