There are clearly worldwide problems with the hardware, software and protocols we use today. For months now there has been a massive new catastrophe every week. We have an internet that has few mechanisms to identify bad actors and make them accountable, and hardware and software that allow careless or incompetent admins and users to leak sensitive data (perhaps an understatement, as it takes a gifted, motivated, experienced admin to stand any chance today). This industry should, but probably won't, rethink everything. Otherwise the world is in for unimaginably bad problems as people begin to abuse the data from tens of thousands of these leaks and hacks.
The problem is we have everything recorded digitally, managed with software and hardware of needlessly inflated complexity, mostly running on the least reliable OS ever (Windows), always connected to the Internet.
I would never allow any of my medical data to be digitized, but I have never been asked. I would never use an Internet-connected Windows PC to manage any critical machinery. I don't want my printer to order inks for itself. I would rather avoid modern libraries which add dozens of megabytes of complexity to make trivial things look fancy.
Oftentimes the optimal solution is not to add something (like another layer of encryption and automation) but to remove and make things simpler.
I object to your characterization of Windows as "the least reliable OS ever". What Windows is, is popular, and that's for a lot of reasons, but one of them is that it is actually quite good at being a desktop PC OS, unlike some other operating systems I could mention. The same things that make it a good desktop (like, say, not having to recompile software every two years to get it to run on the latest version) also make it slightly easier to trick users into running things they shouldn't, and of course allow less advanced users to have more control in the first place, which, combined with its popularity, leads to a large number of infections.
Otherwise I agree with what you're saying, software (and hardware to a certain extent) is far more bloated and complicated than it needs to be.
Even if it's not connected to the internet but runs on a completely and perfectly airgapped network, when your sensitive data fits on a single USB stick, it's extremely hard to protect, simply because of logistics. In the pre-digital age you could maybe steal individual records, but stealing large mountains of paper would have alerted the security guards. Nowadays you can carry truckloads of data in your pocket.
> Oftentimes the optimal solution is not to add something (like another layer of encryption and automation) but to remove and make things simpler.
I like that. It sounds so much like the de Saint Exupery quote: "perfection is attained, not when there is nothing left to add, but when there is nothing left to take away".
The problem is that the alternative to medical records being digitized is lots of paper files and faxes and the general difficulty of switching records to a new doctor. So in fact there is a big push to digitize things in standard formats which has its own set of problems but the paper-based methods don't really work.
When I was a schoolboy, all my medical records were hand-written in a notebook I carried with me every time I went to another medical specialist (and I visited a lot - my health was way below the norm), and nobody used fax machines or duplicated anything. I really want my medical data to only exist in a single instance, like in those days.
I don't think you can blame the net infrastructure. The problem is with utility IT. Probably not even the IT staff directly, as they are likely understaffed or underfunded or both.
There is only one way to keep utilities safe on a public network and that would be extreme simplicity and a fitting network configuration.
> We have an internet that has few mechanisms to identify bad actors and make them accountable
That isn't the problem here and the advantage from this far outweighs the risks.
> abuse the data from tens of thousands of these leaks and hacks.
The data people share voluntarily also far outweighs stolen info.
The reality is that nobody wants to spend anything on IT and that will bite you in the ass at a later time. It is not a technical problem alone.
It's not the hardware, not the software, and NOT the protocols; it's the human factor, neglected systems, and often not even an attempt to use "best practices".
The only way to change that is massive fines; there is no chance that a technical system can control human behavior...ATM at least. Like Plane-crashes, technically it's always human error.
EDIT: Corrected Plan-crashes and told Mozilla to pay a 1 Million $ fine, because the auto-correct software did not work correctly ;)
I agree it could help to look at legal changes, such as 'massive fines'. That's closer to the 'rethink' we need.
In regard to human error, I think we are approaching the point where it is useless to focus on individual hackers, companies, admins or users.
To pick the most famous example of something we should rethink (not that it would address the hack in this news story): users on the web mainly read text, yet our browsers enable Javascript by default.
To me it seems the ways we store and share data in IT are starting to fail. It's at the point where every interaction a regular person engages in (with banking, shopping, healthcare, social life, work, education, etc) has a real chance of becoming public. That's not sustainable.
There is always a balance between convenience and security. Marketing tends to hate forcing 2FA as it hurts conversions for example.
I think we can do a lot by thinking about only storing the data that is needed and for only the time that is needed. I think GDPR has been a first step in the right direction and it also adds fines for abusers.
The hardware, software and protocols are written by humans so they are definitely part of the problem!
Massive fines sound like an easy win, but they don't work in practice, in the same way that massive fines don't stop corporate polluters, don't stop medical negligence and don't stop crime in general. Are they useful? Only where it is clear where the responsibility lies.
Take the VW emissions scandal as an equivalent issue imho. Who gets a fine? VW? That isn't really fair if the decision was taken by an employee. The employees? They can't afford to pay a fine. All this would do is make it harder for disruptive startups who can't afford the potential fines.
Sadly, I think that, like most things, it will take a whole raft of measures: educating people, punishing people and companies, regulating who is allowed to produce and sign off systems, mandatory MOTs for software so systems can't be left to languish, and potentially even creating separate WWWs, one for those who follow the rules, which the general public can trust, versus one for those who believe in complete liberty to do whatever they want, and which cannot be exposed to a general public who won't understand the risks they are exposing themselves to.
>The hardware, software and protocols are written by humans so they are definitely part of the problem!
like Plane-crashes technically it's always human error
>Take the VW emissions scandal as an equivalent issue imho. That isn't really fair if the decision was taken by an employee
That's a bad example; they got massive fines in the US, not so in Germany (for obvious reasons), and yes, it was for sure the engineer who took that decision and for sure not the management ;)
An example of responsible storage of such data would be:
If you want to store my highly private data (health records), you have to use a terminal to a government mainframe...you are not allowed to export, process or store it anywhere else.
> like Plan-crashes technically it's always human error
That's a $1M fine for misspelling "Plane". ;)
Also, it's definitely the software. There's a wonderful talk by Alan Kay regarding this, but I cannot find it. It might be referenced in Jon Blow's talk:
- https://www.youtube.com/watch?v=ZSRHeXYDLko
companies need to be held accountable for the garbage they create. peddling alpha/beta software to consumers, or pushing code to production with no disclosure policy while buying ransomware insurance rather than investing in quality control, must be punished with fines too big to ignore (make it 1% of annual global turnover, _not_ profits)
secondly we need to decriminalize hackers that take down vulnerable shit. instead create a business model that allows someone like brickerbot/janit0r to take dangerous devices offline if they are known to have been unpatched for x months. similar to what the feds did with patching the recent MS Exchange exploit, but more bottom-up.
finally, criminalize all cryptocurrency so that it can no longer be converted to fiat in any legal sort of way. want to use bitcoin? fine, your only way to turn it into value is by buying a gift card that is only accepted by a corner shop in Uzbekistan. or, at the other extreme, make banks adopt cryptocurrency (https://twitter.com/JoeUchill/status/1393697279941431300) so it turns into a regulated white market while outlawing the grey areas.
taking these measures further: if you get pwned and do not disclose it within 1 week, or info surfaces that you paid a ransom without disclosing it, then your CEO and CISO bonk! go straight to jail. that would immediately put pressure on solving the boring security shit, like curating and maintaining a good quality SBOM, asset catalogue, etc. (...come to think of it, if inventory is missing in any org that processes PII, or you get hit by an OWASP top-10, then immediate jail time as well).
it all starts and fails with holding people responsible for what they do and making it impossible to hide behind the name of a company. salaries can be adjusted to the actual responsibility people carry in the company; no more blaming the intern.
^^ the reason why any of these measures will never happen is because those who would have to enforce it are themselves corrupt and totally dependent on the broken system in order to get cover and plausible deniability for their own crimes: https://www.newsweek.com/exclusive-inside-militarys-secret-u...
The unfortunate state of the industry we are in is the result of years of technological evolution during which security and reliability were either ignored or present only in marketing materials and not in the product. Attempts to add security on top of this insecure mess sometimes make the situation even worse (e.g. many anti-virus products were vulnerable to RCE at some point). Customers are complicit in this process too, especially if we are talking about enterprise software. And simplicity, which affects both security and reliability, is appreciated only by a minority of developers.
It is unfair to hold an app developer liable when there is no practical way to create even a small CRUD application that doesn't have many millions of lines of code, written by thousands of developers, as dependencies (counting an OS as a dependency). And this gets worse every year.
IMHO the only good way to create a secure system is to start by keeping the system simple and easy to audit. To keep it this way one would have to not only invest a lot of time, but also sacrifice non-essential features (something customers/users are not used to and will have a hard time accepting).
It doesn't need to be that way. The procurement process used by governments to buy software is broken beyond repair anyway. Maybe this will bring some innovation into the space.
For private companies in competitive industries I couldn't see this affecting consumers in a negative way. It may affect the bonuses of the overpaid but few would lose sleep over that.
It's both, but the humans have an excuse: they're human. We should know by now that, given enough time, humans fuck everything up. We should build things with this in mind.
If we keep assuming humans are suddenly going to stop making mistakes then these things will never go away.
I almost never blame the user or human, I blame the software.
It's reasonably well established that humans systematically underestimate low-probability, high-cost outcomes. Getting hacked is already pretty high-cost for most organizations.
Given that, I think there's more to be gained by looking at the "low probability" side of things.
Some historic examples of attacking the "low probability" side include building codes and speeding fines - both have pretty strong evidence for their efficacy.
A comparable initiative for software might fine those found keeping data after they do not need it (like GDPR), or mandate that sensitive data is stored behind appropriate access controls (like PCI does).
> few mechanisms to identify bad actors and make them accountable
We have plenty of mechanisms to track and identify actors, but the legal framework to make that work is not there. The major obstacle to getting it in place is the billion-dollar industries built on the existing frameworks, and the enormous influence they have on the political system. Getting laws in place so we can have a safe internet that does not get exploited for commercial gain will be a bigger challenge than the one people faced fighting tobacco companies on health issues.
> hardware and software that allow careless or incompetent admins and users to leak sensitive data
We have hardware and software that take responsibility for sensitive data, but that responsibility costs money. There is no feedback loop or incentive for contract negotiators to manage data leaks to the point where leaks will not occur. There are, however, direct incentives to reduce costs, so hardware and software that keep the initial costs down by putting the responsibility onto the user get a competitive advantage during contract negotiation. The industry cannot fix this unless someone finds a new profit center in preventing leaks. Laws and regulations could fix it, but then the established commercial actors who currently have a competitive advantage by providing cheap hardware and software will fight those laws and regulators.
As such, the challenge seems to be either to fight tooth and nail against money and power in order to get laws that solve the issue (like GDPR, if it actually got enforced), or for someone to invent a profit center that happens to align with eliminating leaks and hacks.
Maybe after nations have tried the concept of a citizen score, and had a few internal wars, genocides, and media storms to really demonstrate the harm of the current system, we might end up with enough international pressure worldwide to have both all the tracking and recording needed to correctly identify bad actors, and the laws and regulations that prevent harm to innocents and the commercial exploitation of fellow human beings.
Imagine a world where medical records are part of a blockchain. They're decentralized, secure, publicly available, encrypted, and only shared with interested parties.
Now imagine someone finds a flaw in the software that a company uses to access the data in the blockchain. The 200K records "United Valor Solutions" has access to are read by an attacker and leaked on to the internet.
How is the Web 3.0 version better? If someone can access the data in the blockchain then an attacker can use that same mechanism and the legitimate access credentials to exfiltrate data if it's not secure. There is no magic solution to this problem.
>> Web3 is coming and these problems are being solved
This has always been the case, and we've been consistently wrong about what "web3" is. Idk if you remember the "semantic web," for example.
These aren't easy problems to solve, and at this point, path dependency is a huge factor. The juiciest parts of the software industry are highly, if not totally, dependent on proprietary and exclusive data. Even if blockchain-like technology were capable of being an everything data store, why would FB, Google, etc. adopt it?
I would love the concept of having my private data encrypted but still accessible from everywhere...until quantum computing kicks in...NEVER give data to more hands than it needs to be in.
I cannot wait for my US-based medical record to be leaked at some point. There will probably be somebody who creates a torrent of these records to be downloaded.
I have little faith in the US government or the various US healthcare providers keeping their databases properly secured from attack. I am saying this as somebody with 2 rare immune-mediated neurological diseases affecting my peripheral nervous system, plus type 1 diabetes (autoimmune and insulin-dependent).
I have far more faith in the European Union, even though it has issues. I live in Europe, and it is a breath of fresh air.
The EU might have good intentions, but with the current system (technical and bureaucratic) you still cannot know that data has not been hacked and misused - it only takes one screwup, and with the people I have worked with over the years, there is no reason to think that a screwup is ever that far away.
I think the only saving grace at the moment is that the chance of any individual being taken advantage of is low simply because there are so many people.
I would be much more worried if I were important and more likely to be targeted.
Disclaimer: Related to projects regarding medical bills and insurance prediction.
I can assure you the data is safe due to being a total mess. Antiquated systems, archived data in different formats, cross-institution data joined based on intuition. You might get something, but everything is so disjoint you would need a magician, not a hacker, for anything sizeable.
EU countries, at least some western ones I have worked with, are OK, but their implementations mirror each country's bureaucracy.
True. But, honestly, health insurance claims would be an excellent target.
If released via a torrent at the right period of time, it could be powerful, such as right before health reform, if it is the right legal bill to prey on. The same would apply if there is some huge financial crisis in the US.
In the US, I was receiving a blood product that cost my employer (not insurance--the employer was self-insured) USD $275,000+/year.
No, but this is an excellent reason why healthcare should never be tied to your employment. This is not an uncommon occurrence, at all.
10% of the general population can theoretically benefit from an orphan drug. In America an orphan drug costs hundreds of thousands to millions of dollars per year, and it is very frequently needed for the rest of the individual's life.
I never plan on working in the US again. I am a dual US|EU (Croatian) citizen, currently living in Croatia. There are about 30 countries in Europe that I can legally live/work/retire in, so I have it good.
I'd argue against that because of how much data is in a single database. I've been at 3 different hospital systems in various capacities, and all three had a shared database/program (Epic, Cerner, Meditech) for 2-3 states' worth of patients. At one site I had read/write access to patient medical records for all hospitals and clinics across 2 different states. What's more, the systems may be antiquated (a 30-year-old MRI machine), but the hospital has paid the lowest bidder a lot of money to upload the data to a shared imaging website that only works in IE10. It meets HIPAA requirements, but I guarantee security was barely on the priority list. Everything is interconnected and unsecured.
It's on your hospital servers. Or in your hospital closets, depending on where you live. In my country you're supposed to have a small medical book with all relevant information. Now, it doesn't work, so: your public healthcare is tied to a hospital, and your private healthcare... Who knows. What I can tell you is that a lot of effort is going towards data sharing between hospitals (in data vaults) in my country, and that medical data is probably my most protected data.
Yeah, true. But, the EU is a regulatory stronghold. The EU does take action on this sort of stuff, and will create standards if necessary. We still have a long ways to go, but it is a "going to the moon" type of an issue rather than a "solving global warming" type issue.
In the US, our Congress is extremely technologically incompetent, with a few isolated exceptions with respect to individuals.
I work in a hospital and just rebooted a network connected computer running windows XP. Not isolated from the main hospital network as far as I can tell. Your lack of faith is warranted.
Ha, mine got exposed a couple years ago (local office let a contractor load thousands of them on a thumb drive and leave, ended up online). They signed me up for "ID.me" for 3 years to protect me from identity theft.
ID.me is basically a glorified coupon/advertisement delivery platform targeting veterans, which sells itself as identity protection with perks. Then ID.me got hacked, and all of the info I'd given them to prove my identity got shared with the world. I got this sweet apology letter though.
So moral? They don't give a fuck. No one is held responsible. The general feeling is that "this kinda stuff just happens" and it's not really their fault. I'd love to see large numbers of people go to jail over this...
It would happen pretty quickly, based on how these cyberattacks are going. Also, Russia would like it.
Even the tip of the iceberg has not been identified in the recent US SolarWinds cyberattack.
Then again, nationalizing care would bring up the standard. Not like I trust the US government, however, the third leading cause of death is believed to be preventable medical errors. The level of care is particularly poor and inconsistent outside of the east and west coasts, with a few notable areas as exceptions geographically. As a chronically ill individual with 2 rare immune mediated neurological diseases affecting my peripheral nervous system, it's not like I trust the system. I basically fell through the cracks, over and over again.
A company's finances are their own data, so they haven't harmed anyone else by leaking them. A company leaking your finances is, or perhaps should be, a crime of negligence.
While you're there, can you also ban the exchange of crypto for fiat in the USA? This will have a huge positive effect on carbon emissions, chip shortages, and actually reducing ransomware.
> ban the exchange of crypto for fiat in the USA? This will have a huge positive effect on carbon emissions, chip shortages, and actually reducing ransomware.
Makes sense, we did it with drugs and there's no more drugs.
Not that I encourage violating peoples privacy, but this might be a great option for citizen scientists to do some data analysis that wouldn't otherwise be done.
.. so what happened with the ransomware finding? do we presume the ransom was paid and honoured, and the data then remained publicly available? or that it was an automated empty threat which went unnoticed by both parties? it kind of glossed over what is, in my view, the most important part of the story
Adversarial red team bug bounty hunters to the rescue?
What if there were a law that any responsibly disclosed security vulnerability must be rewarded with a bounty based on the damage the vulnerability would have caused? Combined with immunity from hacking charges for companies that have a license to operate this way (and strict monitoring on those companies).
It's self-funding, self-enforcing, and as a bonus, trains more "cyberwarfighters" nationally. Maybe the market can solve this problem.