The manufacturers of devices should be held responsible for the security of the devices they sell. This is no different from what they are already required to do for the electrical and physical parts of those devices. Require them to pass some standard tests that would catch at least the most obvious vulnerabilities, default passwords and other bad practices. Run Metasploit against it, etc.
This would also raise attention to secure software development in other areas of the IT business. Sure, it costs more upfront, but we gain because we suffer less from this kind of attack.
I agree. The manufacturer has not only released a device that is defective for its user, but one that has actually been weaponized against innocent third parties. That goes beyond the normal problems of providing a product to buyers. When manufacturers can actually degrade the quality of the shared space, that's where we go beyond the normal consumer-protection rules and have special regulations like the FCC in the USA or the CRTC in Canada.
While right now what manufacturers provide is basically caveat emptor, I could see these security problems eventually being treated by regulators as more than a matter of simple liability.
I'm counting the days until cars are weaponized remotely via a rich text message received on your onboard entertainment/computer system, which isn't physically disconnected from the vehicle controller because it at minimum accesses status info.
Some grave accident is, unfortunately, the only way mandatory minimum standards and improvements will be had. Remotely disabling the brakes would be enough for regulation to appear.
It's ironic that (semi-)autonomous driving will improve safety, while all the added software and network connectivity will accelerate the need for software quality requirements like those used for airplanes, because software defects will be deemed lethal. I really hope it happens before someone or something remotely causes a vehicle to cost lives. That said, I suppose a vehicle would shut off the motor if everything else (sensors, actuators, ...) fails.
> I'm counting the days until cars are weaponized remotely via a rich text message received on your onboard entertainment/computer system, which isn't physically disconnected from the vehicle controller because it at minimum accesses status info.
I still don't get why car companies don't just put a diode between the vehicle controller and the infotainment system and make it a physically one-way push system (rough sketch below).
On the other hand, once self-driving cars are a thing, that point is almost moot... you need an internet-connected device with incredibly complicated code to run one.
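To make the one-way-push idea concrete, here's a minimal sketch of the receiving side, assuming the controller broadcasts status frames over UDP on a made-up port with a made-up frame layout:

    # Infotainment side: listen-only socket, never transmits anything back.
    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 9000))                       # hypothetical status port
    while True:
        frame, _addr = sock.recvfrom(64)               # receive one status frame
        speed_kmh = int.from_bytes(frame[0:2], "big")  # made-up frame layout
        print(f"speed: {speed_kmh} km/h")

The software version is only a convention, of course; the point of a hardware diode is that the reverse path doesn't exist electrically, no matter what the infotainment software does.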
The infotainment system is the least of your worries; the bigger problem is the telematics device present in most newer vehicles, which is little more than a cellular radio connected to the CAN bus.
Why isn't GPS plus lidar and friends enough, given that the map data is already available at detailed resolution on local storage? The internet isn't yet as pervasively, reliably and always available as it should be, so I wouldn't want to wait for a connection in my autonomous vehicle while I'm on probation and have no license.
Real-time traffic information and the ability to talk to other vehicles more than a few metres away are a couple of use cases where I think autonomous vehicles can benefit from using the internet.
Satellite internet is reliable enough for low-bandwidth operations when cellular coverage is not available. It can also be a soft dependency, i.e. your car will still run when there is no connectivity, just not as efficiently.
I disagree. It should be the responsibility of the ISPs to monitor their networks and shut down the harmful devices. Going after the hardware manufacturers is pointless; many of them are far too small to do anything about the problems they've created. You can sue them out of existence, but dozens more will pop up in their place.
I thought we wanted ISPs to be "dumb" pipes? or did I not get the new position paper memo...
Putting the burden on ISPs is never going to happen. They are in the business of moving bits. While some, unfortunately, do things like traffic shaping, deep packet inspection, and all other sorts of middleboxes -- that is all for economic purposes. There is not, nor will there ever be, an economic incentive for an ISP to filter traffic traversing their network unless it is directly impacting their operations. And when that happens, they take the most direct and effective route: bit-bucket it and notify the offender to clean up their house :)
But they are big enough to pass FCC, CE (for Europe) and all the other certifications required to sell their goods. This is only one more compliance test to pass. It's not special because it's computer stuff.
Okay, so some regulator comes up with a security test, the device passes the test, and then later on some security researcher discovers a flaw in either the test or in the device in a way that was not expected. What now? The manufacturer may well be out of business or they may be too small to recall all of their devices. They cannot be counted on to remedy the situation. As much as you'd like to blame the manufacturers, blaming them won't do much good.
More secure devices are something we need no matter what. If compliance tests are required, I suspect that companies will emerge to provide management and upgrade services for devices. Manufacturers won't have to take care of all aspects of security themselves and will work with those companies. This will lower costs because economies of scale will happen inside the service company. It is going to require some standardization of the software platforms (but many are using Linux anyway) and the update methods. But those companies will become targets or accomplices. Beware.
Second prong: ISPs should firewall customers from within. I have a concern that this is going to be another step towards Big Brother. I'm not an expert in networking so I can't make suggestions here, but if that webcam from a long-since-disappeared manufacturer starts doing DDoSes and the ISP cuts it off the internet, that's fine with me. Beware of false positives.
They are shipping criminally negligent software with their products. A future standard should simply require a known, open, secure method for updating the firmware on their devices. Demanding it be secure from the start is... dangerously naive.
That would cause a lot of trouble for many people. Imagine having an ISP that cuts off the internet connection for a customer because "some device" is misbehaving. The customer, who may not be that tech savvy, will now be forced to figure out whether it's the fridge, TV, AC, wireless router, security system or whatever that has been compromised.
Even if the customer figures out which device is to blame, the manufacturer may not have patches, or may not even be in business any more. So we end up in a situation where you need to replace devices that ordinary people expect to have a 10+ year lifetime.
I don't think you're wrong though; the ISP should shut down harmful devices, but the manufacturer also needs to be held responsible. It's just extremely complicated to address the manufacturer, because many are just reselling white-label solutions, or are companies created for that one product.
Honestly, many of these IoT devices shouldn't exist to begin with, or shouldn't be allowed to communicate with the internet.
Here in the UK if my car is a danger to other road users, I'm banned from using it even if the defect isn't my fault, the manufacturer has gone out of business, and the car is only a fraction of the way into its expected lifetime. Caveat emptor.
If that makes it hard for new manufacturers to enter the market, they could always form an industry association and, through some combination of source code escrow and insurance, reassure customers their products will be maintained even if the supplier goes out of business. Travel agents, window manufacturers and cavity wall insulators all have such schemes.
In that case, who's responsible for keeping the anti-malware software up to date... and paying for it?
Or maybe that's going too far and the first step is to just have them encourage / force more secure passwords.
(Assuming this is correct [1]):
> Mirai functions by infecting IoT devices by trying to brute force their passwords. The tactic it uses to brute force passwords is entering commonly used and default passwords.
I suggest a gradual approach. Anyway, the manufacturer has more costs, so the price is going to be higher. How much higher in the long run? Maybe not much. We already went through the same progression towards safety with materials that don't poison us and appliances that don't catch fire or interfere with the radio spectrum. Every one of those steps cost money. We still have cheap stuff to buy.
The difference here is that those devices must be maintained or retired. So there could be a recurring cost. Eventually there will be ecosystems of companies taking care of the update and maintenance of devices made by their customers (the manufacturers.)
We also have to educate people, to the point that they will feel ashamed to buy insecure devices and inadvertently help the criminals behind those attacks in the news. Nobody wants to help the mob (or worse) by keeping their stuff at home, right?
This is going to become a matter of national security in every country, because those attacks can be used as a weapon to incapacitate vital infrastructures.
I can't answer because I'm not sure about which mechanism you're thinking about. Would you mind elaborating the concept?
But as with any mechanism: who's operating it, who's paying for it, what is it going to do, etc.
One idea is a state-owned crawler that tries to get into any device inside the country and tells people that their device X is insecure because of Y and Z. I think there are already many such crawlers around, but they don't warn people. Quite the opposite ;-)
I'm wondering whether it's possible to log IPs of attacking devices. From targets and from intermediaries. Then inform ISPs, and demand that they inform device owners. But yes, who would do that, and who would pay, are hard parts. Still, this was an expensive day for many providers.
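As a rough sketch of what the target side could do, assuming an nginx-style access log at a made-up path: tallying source IPs is enough to produce a report-to-the-ISP list, with the WHOIS/abuse-contact lookup as the next step.

    # Tally client IPs from an access log so the top offenders can be reported.
    from collections import Counter

    hits = Counter()
    with open("/var/log/nginx/access.log") as log:   # assumed path and log format
        for line in log:
            if line.strip():
                hits[line.split()[0]] += 1           # first field is the client IP

    for ip, count in hits.most_common(20):
        print(f"{ip}\t{count}")                      # then WHOIS each IP for the abuse contact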
The major difference between physical products and network products is that the damage caused by most physical products is limited to their immediate vicinity. Network products have no such limits. IoT devices in Calcutta can harm networks in the U.S. That isn't true of a bad lawn-mower or blender. The network operators and ISPs need to step in and do better policing of their networks. This, of course, raises other issues, but blaming device manufacturers, even if it really is their fault, just won't work.
Yes, that approach works when the companies doing the damage are based in the U.S. and have the means to pay for the damages. Many of these IoT manufacturers are based in other countries. Suing them out of existence may feel good, but it won't stop dozens of other companies from popping up and doing the same thing.
Also, in your specific examples (leaded fuels, CFCs), it's relatively easy to test for bad products. How do you test for an insecure device?
Easier said than done. Information security is different from electrical and material safety, and the key difference is the pace of technical progress. The latter areas are relatively developed and stable; significant progress occurs over decades. The infosec landscape, by contrast, can change drastically in years or even months, within the life-span of the devices. To give you some perspective, the toxicity of asbestos was discovered half a century after the beginning of its widespread use [1], and it took another half-century to fully realize the danger and stop using it.
When designing a new device, you can't mitigate a threat that you can't imagine. And you don't have enough time and budget (time-to-market is the key) to mitigate all known threats.
I'm not advocating the status quo, which is dire. But there is no easy solution in sight.
If they were useless no one would buy them. They are indeed useful. The problem is they also can be harmful, but not to their users.
The situation can be remotely compared to RF interference. If you use RF spectrum irresponsibly, you harm your neighbors, police, military and other parties, so everyone understands some regulation is needed. RF spectrum regulation is in everyone's interests.
If you use insecure IoT devices, they are exploited to harm some abstract people on the other side of the globe, who cares about them? If more people realize their devices can be used to shut down their Twitter and Instagram, maybe something will change. But I personally doubt that.
Didn't we try this same argument already with gun manufacturers? How well did that work?
The problem with holding manufacturers responsible for releasing a potentially dangerous product is that the manufacturers view compliance with regulation as an obstacle to maximizing profits, and seek to avoid it. Manufacturing is also centralized and they organize and lobby politicians to influence regulation.
Consumers are distributed and disorganized, and I predict that blame for not securing your IOT device(s) will stay with consumers for the foreseeable future.
I agree some standardized testing would be a good idea though.
Not only that: presumably gun manufacturers do not insert backdoors into their products or otherwise make them easier to steal through gross negligence. Many of these IoT manufacturers clearly paid zero attention to security. That's unacceptable.
Right, but if you keep your gun under your mattress, someone steals it, and then uses it to shoot someone halfway across the world, it's not your fault.
It's like door locks -- we use them because thieves exist and we would rather manage keys than go through the effort of recovering our stolen goods, but even if someone doesn't have one, it's still always the fault of the thief.
The telnet password used by the botnets and the admin credentials for the end user are separate on many of these devices. My understanding is that the telnet password was set up so that the end user had no way to change it, well, with the exception of technically savvy end users.
People should start thinking about consumer protection for internet services and internet-connected devices. Minimum security standards for products brought to market should be just one aspect of this.
It looks like it's going to be necessary to get serious about ingress filtering. See RFC 2827, which laid out the basics. Fortunately, there aren't that many ISPs left. If Comcast, AT&T, and Verizon got serious about ingress filtering, it would cut way down on the number of devices that could get through with a random IP source address. The ISPs are in a good position to tell their customers to unplug whatever is causing the trouble. (Assuming it's not the ISP's own router, which they should be able to fix remotely.)
At the big-pipe level, there should be sampling. 1 in 10000 packets has been suggested, which will reveal any massive attack without hurting privacy much. If a big pipe has an excessive fraction of attack packets, that indicates the sending end isn't doing proper ingress filtering. That's a matter to be dealt with between network operators, possibly with involvement from CERT and Homeland Security if necessary.
This problem is solvable, but it's going to take some ass-kicking. There are now enough big companies annoyed about this for that to happen.
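To make "ingress filtering" concrete, here's a toy illustration of the RFC 2827 check using an example prefix (the real thing happens in the access router or CMTS, not in Python):

    # Source-address validation at the customer edge: drop any packet whose source
    # address is outside the prefix assigned to that customer port.
    import ipaddress

    CUSTOMER_PREFIX = ipaddress.ip_network("203.0.113.0/29")   # example assignment

    def permit(src_ip: str) -> bool:
        return ipaddress.ip_address(src_ip) in CUSTOMER_PREFIX

    print(permit("203.0.113.5"))   # True  -- legitimate customer source
    print(permit("198.51.100.7"))  # False -- spoofed source, dropped at ingress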
From the ISP perspective, it is ingress filtering -- he's saying they should be filtering their customers' traffic as it enters their network, for example as it leaves the customer's cable modem and enters the CMTS.
As I understand it, this will do nothing, in these cases, because the traffic is perfectly valid. At least in the case of the "Krebs On Security" blog, the attack was "just" a massive amount of perfectly valid http/https requests. I don't see how the ISP would be able to filter that out.
Before filtering is implemented, it makes sense for ISPs to warn their customers that their hardware was involved in a ddos attack. Analyzing historical data is typically much easier than real-time.
You mean "egress", and that's only helpful in cases where the source is in fact spoofed. With a large and widely distributed number of small devices, there isn't much of a need.
If the IP address isn't spoofed, you can block by IP at intermediate points, firewall, attack the attacker, or get the responsible ISP to turn the connection off. Random IPs are a much bigger headache.
As for the ingress/egress thing, the RFC calls it ingress filtering. Not worth arguing over.
Calling for government regulation of devices is all fun and games until the government comes along and mandates closed source, irreplaceable firmware like the FCC did with Wifi access points.
These problems need to be fixed soon or government regulation will come. I think the government would be reluctant to regulate unless it seems like an emergency. If larger attacks happen then the public will call for regulation and politicians will fall over each other to do it. There will be intended and unintended consequences.
The best way to prevent regulation is to eliminate the perceived need. If another attack hits, especially one larger and more costly than the one last week, people will start getting outraged. Those businesses who lost money might even start lobbying.
I'm not hopeful at this point, as I don't even get the feeling that most people have taken the time to understand what happened last week. I see people out there repeating near-mantras outlining mitigations that, while important generally, would not have helped mitigate the attack last Friday. The solutions offered frequently involve big lag times even if somehow set into motion immediately.
It may be necessary to make some hard trade offs to show some real progress. When people don't freely make hard choices, the government will step in. That would be a preventable tragedy. I don't have much hope at this point.
Good regulation isn't impossible. For example, mandating a unique admin password for each IoT device would go a long way to helping prevent this kind of fiasco.
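A sketch of how a manufacturer might do that at the factory, assuming a per-product-line secret kept offline (a truly random password per unit, stored in the factory database and printed on the label, is even simpler):

    # Derive a unique admin password per unit from its serial number.
    import base64, hashlib, hmac

    FACTORY_SECRET = b"keep-this-in-the-HSM"   # hypothetical per-product-line secret

    def device_password(serial: str) -> str:
        digest = hmac.new(FACTORY_SECRET, serial.encode(), hashlib.sha256).digest()
        return base64.b32encode(digest)[:12].decode().lower()   # printed on the unit's label

    print(device_password("SN-000123456"))

This is only as strong as the secret, but it kills the "one default password for every unit ever shipped" failure mode.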
How would a unique admin password help in cases where there's a backdoor accessed via an open port? A lot of the devices used last Friday had port 23 open. The password used in the backdoor was completely separate from the device admin password.
Your advice is good but it wouldn't have helped last Friday.
The EU version of that WiFi regulation also has a clause that while the default firmware for the router can't be allowed to be used for such purposes, users should always be able to install their own firmware.
Which, of course, will result in a worse security posture. Interesting technical ideas can be found in the actual research being done on embedded device security out there. This comes to mind, for example: https://lwn.net/Articles/568943/
For reference, the earlier Krebs attack is quoted on Wikipedia as being 620 Gbps [1]. I'm not sure it's really comparing apples to apples though with this being at DNS level. ~2x the traffic sure, but at a layer with a lot more impact on everyone, so the outcome is much more than ~2x worse.
Is there a way to determine the electricity used during all of this? Certainly each IoT device has low bandwidth and a low power draw, but each of their service requests is honored the same as any normal human request. A small signal from a device prompts more electricity use in potentially multiple servers upstream until the host can satisfy the request. So are there potentially thousands of kWh being wasted even with mitigation? Or can requests from certain IPs be denied and thus electricity saved?
I don't think electricity here is the problem. In fact, measuring the impact is probably pointless, a lot of servers are always-on regardless of traffic.
Yes, but of course the lost productivity of the eastern continental U.S. for just a few minutes likely dwarfs the wasted electricity by many orders of magnitude.
Sure, the current draw per byte of information exchanged is negligible. But I'm curious as to how these millions of cheap, low-power IoT devices can essentially act as transistors, in the sense that their requests can prompt a relatively higher current draw elsewhere. Once again, this draw is likely minuscule.
In theory, if every internet connected device in the world could be a part of Mirai, not only would we not be able to use the internet, but wouldn't the majority of datacenters be running at full capacity?
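A back-of-envelope with made-up numbers, just to bound the electricity question: even being generous about per-device and server-side draw, the energy cost is tiny compared to the outage cost.

    bots = 400_000          # assumed botnet size
    bot_watts = 5           # assumed extra draw per device while flooding
    server_watts = 50_000   # assumed aggregate extra draw across targeted/intermediate servers
    hours = 2               # assumed attack duration

    kwh = (bots * bot_watts + server_watts) * hours / 1000
    print(f"~{kwh:,.0f} kWh")   # roughly 4,100 kWh -- real, but a rounding error next to lost productivity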
Is there a page somewhere that maps the passwords to corresponding devices? Essentially a list of what devices are affected by this botnet? I don't think anything I own is affected, and I monitor my network pretty closely, but would like to double check. And make sure not to buy any of them either.
AFAICT, Mirai gains access ONLY through telnet on port 23. I see no one mentioning that anywhere here. If you do not have a device with something answering on port 23, then you cannot be p0wned by Mirai.
Can anyone verify that statement as correct or incorrect?
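For anyone who wants to check their own network, a quick-and-dirty sketch (assuming a typical 192.168.1.0/24 home subnet): anything answering on TCP/23 from the inside deserves a closer look, though the exposure that actually matters is whether the port is reachable from the WAN side.

    # Flag anything on the LAN answering on TCP/23, the port Mirai brute-forces.
    import ipaddress, socket

    SUBNET = ipaddress.ip_network("192.168.1.0/24")   # adjust to your home subnet

    for host in SUBNET.hosts():
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(0.3)
        if s.connect_ex((str(host), 23)) == 0:
            print(f"{host} has telnet open -- check it for default credentials")
        s.close()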
I just came into this thread yesterday and today and mentioned it, but I get the feeling it may be too late for discussion, as this thread's been around for about 5 days. I'd like to discuss it more, maybe start another thread, but it might be futile to try to talk about this topic further. It's a bummer there's not more passion for this.
It's easy to call for companies to be held responsible for botnets etc. but it's important to be much more specific than that.
Google has an absolutely massive bot network in all Android devices running Play Services, which basically keeps an open root shell to the mothership at all times. However, that is used mostly for good, by keeping devices updated and removing malware remotely. A responsibility model would entail insurance, and that could easily change the situation for the worse.
But it's also important to realize the inherent risk in this. Has anyone tried to quantify it?
Then there has to be a line drawn somewhere. Holding the keys to Windows Update means a potential botnet. Maybe even being a Debian Developer. That probably shouldn't carry the same responsibilities.
I wonder what the goal of these guys is. Several high profile attacks with little impact, source code and attack vectors leaked, users and vendors scrambling to set up defenses.
Is this some "look what could happen" scenario or is someone using it to show off the skills of his other botnet?
Another speculation is that these were very specific attacks. Are there any online monitoring / logging / audit services that stopped receiving reports? I know that at least the SPF checks failed for emails. Some other infrastructure could have failed too.
Can someone ELI5 how malware infects IoT devices?
The firmware on them can only be upgraded from the OEM's site, right? So unless there is a way to modify the firmware from outside, how would malware affect it?
And what can I do to secure an IoT device if I have one?
The malware hardcodes 62 common username:password pairs like root:12345 [0] and apparently there are millions of IOT devices out there where these trivial passwords will get you root.
I think he's asking more about how the attackers get past the simple NAT. I mean, unless you deliberately expose the device as a webserver to the public internet, a device should be by-default hidden theoretically unless it goes looking for trouble online. Of course, considering the quality of most routers, it's not like the NAT itself is terribly safe.
They don't need to. Millions of devices have their telnet port open to the Internet (just check Shodan), after infecting those, Mirai has a beach head to scan and attack the rest of the network.
The simplest idea is to put your exploit as JS payload in some shady ad network or auto-cracked PHP website, and get to the devices by scanning & fingerprinting devices on the browser's rfc1918 network. Then use a reverse shell style connection to c&c.
malicious javascript can only send packets on your rfc1918 network via a DNS rebinding attack, which is not trivial to pull off and is not very reliable
All of these IoT devices use UPnP to bypass the NAT 'firewall'.
Edit: Note that the example does try to do some DNS rebinding on the router, but that's the end goal. The attack itself doesn't rely on rebinding. It does require "already logged on to the router" users, but I suspect an attack using default login credentials is possible as well.
I'd like to see someone actually name & shame the devices involved. With evidence, of course.
I've had a quick poke around my house, and the couple of devices I have (wemo, philips hue) are holding pretty persistent outbound http connections - not dropping their trousers and exposing telnet.
Persistently exposed telnet sounds more to me like industrial control devices, network appliances, etc. But "IoT" is so vague that we're immediately jumping to blaming consumer devices.
Dahua. And then there are the Alibaba Chinese Hikvision cameras that generally can't have their firmware updated. It seems like plenty of people port-forward to their cameras, even though most of the camera forums say over and over to use a VPN and not port-forward.
I wonder how many are running some linux variant though, and have some remote execution vulnerability on their web UI (maybe even without logging in)..
Sadly, the present state of software engineering is mostly incapable of achieving what your reasonable expectation is. Malware can exploit bugs in the devices to get arbitrary code execution (= run any code the attacker wants).
How difficult would it be to write an anti-Mirai virus which disables remote access on these IoT devices? Or at least warns the user in some way that the device is being used for DDoS.
Relatively trivial. Mirai didn't prevent future access as far as I know. But it would be illegal as well.
Warning the users would be much simpler. The hosts used to report infections are known. Destination port for the infection is known and normally not exposed.
I think that at this point ISPs should do the same thing with incoming port 23 as they did with outgoing 25. Disable by default, allow people to enable if they want to.
I don't understand how they can scan IoT devices if the IoT devices are behind a router, which is the usual home network setup. So we only expose a public IP, and port forwarding is usually not enabled by default anyway, so how the heck can they telnet/connect to the IoT devices to brute-force passwords?
UPnP is enabled on lots of routers by default and enables devices and services behind NAT to forward traffic for specific ports to their internal IP address to allow direct connections from the internet.
Just do a quick shodan search for port 23, there are millions of devices directly attached to the web.
Some are intentional, but my guess is that most are just badly configured by people who don't care about security. Add all the cheap IoT device vendors and "I need to be able to check my lights at home while on the road" and you get an amazingly huge attack surface.
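One way to see what your own router has been talked into forwarding, assuming the third-party miniupnpc Python binding (pip install miniupnpc); any mapping you don't recognize is worth investigating:

    # List the port mappings the router has accepted via UPnP.
    import miniupnpc

    u = miniupnpc.UPnP()
    u.discoverdelay = 200
    u.discover()        # find UPnP devices on the LAN
    u.selectigd()       # pick the Internet Gateway Device (the router)

    i = 0
    while True:
        m = u.getgenericportmapping(i)
        if m is None:
            break
        ext_port, proto, (int_ip, int_port), desc, *rest = m
        print(f"{proto} {ext_port} -> {int_ip}:{int_port}  ({desc})")
        i += 1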
Downvoting this post is kinda shitty, because it's just obliquely pointing out something that nearly every developer should already know--that having a single point of failure for your builds and your tests (the only reason Github being down should impact you) is a bad, bad idea. Your code should be building and deploying from git remotes you control, if you want to be using push-as-build CD. If you have dependencies, bring them in-house on your own Artifactory, gem server, npm mirror, whatever. Controlling the entirety of your deployment stack is not just a good idea, it's a requirement for safe and sane operations.
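A minimal sketch of the "remotes you control" part, assuming a self-hosted gitolite box at a hypothetical address; the CI system then pulls from the internal remote rather than from GitHub:

    # Mirror every ref of a local clone to a remote we control.
    import subprocess

    MIRROR = "git@git.internal.example:myproject.git"   # hypothetical internal remote

    def mirror(repo_path: str) -> None:
        subprocess.run(["git", "-C", repo_path, "remote", "add", "--mirror=push",
                        "internal", MIRROR], check=False)   # tolerate an existing remote
        subprocess.run(["git", "-C", repo_path, "push", "internal", "--mirror"], check=True)

    mirror(".")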
Thanks - I was exactly thinking this when I asked the question. If git is advertised as a DVCS, then it's hard to understand why the practice is to rely on a central authority that you don't own. It could go down anytime, like it did yesterday.
Are you aware of some well-built systems for enabling that? What you're describing, while simple in concept, is not so simple to set up. There's tooling to be written and authentication to synchronize.
I...just do it? I don't find it to be complex at all. Use one or another method of user authentication for SSH (I use SSSD, but no solution can be really universally recommended), launch gitolite (a six line Chef recipe), do the same for Boxcutter or whatever you need.
I tend to think that the problem isn't complexity; the problem is the current developer culture being so predicated on "it's easy, you don't have to understand things" that having to understand things borders on anathema. Yes, doing this does require understanding how your tools work, but that doesn't make it complex. And you should know anyway.
I've been doing this for 20 years now, and little has changed. It's not that developers don't think they have to understand things; it's that their incentives are to understand things that are relevant to the problem domains of the products they are working on. Reliable and efficient deployment strategies are not usually in their problem domains.
They may not have been in their problem domains twenty years ago (though they were for me from the start, albeit merely fourteen or so years ago =). I think that the future is obvious and that the expansion of the "stack" to the underlying runtime infrastructure is a foregone conclusion; this is ignored at a developer's own peril.
(As it happens, I was a developer first--but I was managing my own infrastructure, too, and so from a very early age I had to be damned comfortable with doing that. Now the industry is pivoting there as "devops" becomes more and more of a thing.)
In a perfect world, we would all operate our little empire, where we control every piece of infrastructure, and would not be reliant on any other provider. Alas, maintaining that infrastructure has costs... enormous costs, and to get a product to market quickly, you often can't do it on your own. Maybe industries like banking, defense, and healthcare are different, but for the majority of us, we just can't afford to build our own basic internet infrastructure and compete in the marketplace.
You're not building "basic internet infrastructure". You're standing up Gitolite next to your CI system and Gemcutter or Kappa or pick-a-Maven-repo-including-Wagon-over-FTP. It costs about twenty bucks a month to do both these tasks. If they're running on separate machines. Which they don't have to be.
The idea that a git remote is "basic internet infrastructure" that should be out of your hands because you're shipping a product raises ignorance to worshipful levels. If you want to use Github for collaboration, that's fine! But if your deploys, etc., are being held up by something out of your control that isn't systemic failures with the provider in which you're deploying your software, you done screwed up.
To get around the DNS attacks, one of the cloud providers I use posted their IP address to Twitter. Of course, when Twitter is down too, it doesn't help much.