It boggles my mind that languages and package managers do not support ACLs for libraries.
leftpad has no need of install scripts, `eval`, reflection, or access to my disk or the network. Nor should it be allowed to gain them in the future, at least not without a million alarm bells ringing and explicit approval.
ACLs would allow establishing "moats" of dramatically-more-difficult-to-attack libraries, and encourage libraries to voluntarily reduce their attackable surface to make them more likely to be approved. Instead we have this.
This seems more suitable to a capability-based security model, where dangerous features like eval could be kept in separate packages, and only those packages which depend on them (non-transitively) are able to import and use that functionality.
The reason to disallow importing transitive dependencies is that we can write packages which use that functionality, but only expose a safer/restricted form; e.g. we might forbid dependencies on `eval`, but allow dependencies on a `mock` library, even if `mock` happens to depend on `eval`.
A sibling points out that restrictions can be broken when different packages pass callbacks to each other. Thankfully, that's exactly what capabilities are for (code is written without any "ambient authority", and instead requires capabilities to be passed in as arguments; e.g. see the Confused Deputy Problem)
Agreed, capabilities seem like a fundamentally better approach in basically all ways that matter. I should clarify that I mean "ACL" in the sense of "some kind of control, I hardly care what", since it's at least an easily-recognized term. And there probably will still be an ACL somewhere, e.g. to allow the library to use that eval at all, and then we trust it to pass capabilities correctly to others (or perhaps we don't trust it! maybe I don't use that feature, and don't want it to silently appear later!).
In general, as the sibling points out, runtimes can't tell which dependency a particular API call is coming from if it's all bundled into one blob. Unfortunately most APIs (in WASI, for example, they're shaped for ease of writing POSIX code) don't take an argument through which a capability assertion could be made. So you can realistically associate only one set of capabilities with a single VM unit.
So what you actually need to do is split your program into more VM units, not unlike code splitting in bundler-integrated JavaScript HTML5 routers. You can build that atop a WASM runtime, if you pass the ability to call other WASM modules into other WASM modules. And then you could pass literal capability tokens around. You just need to split your code up.
Then you can bridge them back again with trusted code, and you can add additional constraints on API calls between modules (e.g. callbacks don't work across units, or if they do, then they remember the capabilities they were instantiated with, something like that).
If there were an easier way to have your dependency management system do this code splitting for you, then it would be feasible.
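As a rough sketch of the wiring (the module handles `modA`/`modB` and the import names are hypothetical, and string marshalling across the WASM boundary is elided):

    // Two compiled WebAssembly.Module objects from the same dependency tree.
    // Only modA's import object contains the dangerous host function, so only
    // modA can ever reach it; modB has no way to even name it.
    const hostEval = (code) => eval(code); // the capability, held by the host

    const privileged = await WebAssembly.instantiate(modA, {
      env: { host_eval: hostEval }, // capability granted
    });
    const restricted = await WebAssembly.instantiate(modB, {
      env: {}, // no capability passed in
    });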
The setup/constraints/etc. you're describing sound more like a permission-based, ACL-like approach.
Capability security is generally based on the following:
(1) A "capability" to perform some action is (by definition) something which is necessary and sufficient to perform that action. If that action is calling a particular function (like 'eval'), then we can treat the function's name/reference as a capability. This fits the definition since (a) we can only call functions which we have a name/reference for, and (b) anything with the name/reference to a function can call that function. For example:
    function somethingWhichCanUseEval(eval) {
      return eval("foo");
    }

    function somethingWhichCannotUseEval() {
      throw "Not given a reference to 'eval', so we don't have anything we can call";
    }
(Note the contrast to your statement that "runtimes can't tell which dependency a particular API call is coming from". Such functionality would actually violate (b) above, making it much harder to implement capability security!)
(2) Capabilities are kept secret to begin with, and we pass them around to whatever needs them. In particular, this requires that (i) we can't ask for a list of capabilities (e.g. a list of references to every function), and (ii) we can't guess a capability (e.g. names should be cryptographically-random strings). For example:
    // All guessable references to the eval function need to be shadowed, e.g.
    function eval() {
      throw "Error: Something attempted to call 'eval' without using the package";
    }
    window.eval = eval;
    // etc.
    // The package manager can give each package a cryptographically-random "import-name".
    // When a package is imported, it's given the import-names of those packages it directly
    // depends on.
    function myPackage(deps) {
      // The 'deps' argument is a map from package names to (randomly generated) import-names.
      // We can import a package if we know its import-name, hence import-names are
      // capabilities for importing packages.

      // We can import the 'eval' package by getting its import-name from the 'deps' map
      const evalPkg = require(deps['eval']);

      // The 'eval' package provides a reference to the 'eval' function
      const eval = evalPkg.eval;

      // At this point, we have the capability to call 'eval' (or pass it around to other
      // functions, etc.)
    }
(3) If we find ourselves "checking permissions" then it's already too late (those without permission shouldn't have even been capable of asking in the first place). Likewise if we try to "validate the caller's identity", since (i) that's a case of ambient authority (the "caller" may be a Confused Deputy), and (ii) it would prevent us delegating capabilities to others to act on our behalf, which in turn requires granting sweeping access to a broad range of systems 'just in case' anyone might want to use them.
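A tiny sketch of the contrast (the names `logFile` and `generateReport` are hypothetical): delegation works by narrowing and handing over capabilities, with no identity checks anywhere along the way:

    // Wrap a read/write file capability so that only the ability to read escapes
    function readOnly(file) {
      return { read: () => file.read() };
    }

    // The helper acts on our behalf: it can read the log but can never write
    // to it, and nothing needed to ask "who is calling?" to get here.
    const report = generateReport(readOnly(logFile));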
(4) We can wrap capabilities in a more restricted interface, which acts as a capability for some more restricted action. For example, the following code provides the capability to evaluate code of the form 'foo = bar;':
    function namerPkg(deps) {
      const eval = require(deps['eval']).eval;
      // Regex for valid Javascript names
      const jsNameRegex = /[_a-zA-Z][_a-zA-Z0-9]*/;
      return {
        // A function which evals '<newName> = <oldName>;'
        'namer': function namer(oldName, newName) {
          // Parse/sanitise the input: extract the first Javascript-compatible
          // name from each argument
          const oldJSNames = oldName.match(jsNameRegex);
          const newJSNames = newName.match(jsNameRegex);
          if (oldJSNames == null ||
              newJSNames == null ||
              oldJSNames.length == 0 ||
              newJSNames.length == 0) {
            throw "Error: namer needs to be given Javascript-compatible names";
          }
          const oldJSName = oldJSNames[0];
          const newJSName = newJSNames[0];
          // Now it's safe to call eval
          eval(newJSName + " = " + oldJSName + ";");
        }
      };
    }
This simplistic model breaks as soon as a more privileged package passes a callback into a less privileged package without fully constraining what that callback can do. Which, I suspect, is impossible to achieve in practice.
Perfection is unachievable, yeah. But "privileged package passes values to less privileged package and may be exploited" is so much smaller of a surface than "anything anywhere can os.Exec(whatever it damn well pleases)".
Security is not binary, nor can it be. Air-gapping isn't even enough, as is repeatedly demonstrated. That's not a reason to be yolo.
Perhaps it could be achieved with static analysis?
1. Each package declares what permissions it needs in package.json (if any).
2. I declare in my project's top-level package.json what should be allowed.
3. npm install fails if any dependency's permissions exceed what I've defined in 2, and points me to the offending package(s). Then at runtime just my project's restrictions can be applied across the board.
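As a sketch, with invented field names (`permissions` / `allowedPermissions`; nothing like this exists in npm today), a dependency's package.json for step 1 might look like:

    {
      "name": "some-http-client",
      "permissions": ["net"]
    }

and the project's top-level package.json for step 2, which `npm install` would check the whole dependency tree against in step 3:

    {
      "name": "my-app",
      "allowedPermissions": {
        "net": ["some-http-client"]
      }
    }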
Yeah, even something this simple would likely be enough to prevent several of these supply-chain attacks - they tend to have to add unsafe abilities to historically-safe (but widely used, in part due to their safety) libraries. And this would be fairly simple to add onto existing languages/libraries... which is part of why I'm stunned it hasn't happened yet in practice, even on frequently-exploited systems like NPM.
The same kind of permissions system would also help warn you when you're importing a "does far more than you want" sub-tree. If `pad` imports `leftpad` and tomorrow `leftpad` imports the entire Kubernetes codebase, you might actually notice it and fix that instead of your builds just getting a bit bigger and hoping a reviewer notices the lockfile diff (which is probably hidden by default!). Again, not perfect, but better... and applies pressure to do better, possibly improving the ecosystem as a whole.
Future languages/tools/etc could be dramatically more intelligent about these kinds of things, but even the crappy basics don't exist in the extreme majority of cases. You could build it yourself many times, but almost nobody does that. Default behaviors matter.
Is the way Deno security works kinda what you have in mind?
Some Linux distros also ship SELinux or AppArmor policies with packages by default, but that seems like a kind of inside-out proposition from what you're describing.
SELinux does a heck of a lot to improve this on a per-binary / per-user basis, yeah.
But I want this in process. Because libraries are in practice very frequently treated as black boxes (like binaries)[1], but without any ability to limit their access like we can for users/binaries/etc.
There are some WASM things that do kinda what I want and pull off neat end results, but they're kinda tough to use together, and have obvious perf/debuggability/etc costs. e.g.: https://github.com/dtolnay/watt
[1] unfortunate in many ways, but largely reasonable IMO (simpler abstractions of things we don't want to think about is the whole point of shared libraries), and I don't believe the industry is capable of reversing course on this either way.
> (simpler abstractions of things we don't want to think about is the whole point of shared libraries), and I don't believe the industry is capable of reversing course on this
Far from reversing course, the industry seems interested in accelerating on the current one: containerization as a deployment mechanism is about doubling down on the black-box-ness of binaries. Paradoxically, this doubling down on 'the whole point' of shared libraries is also a destructive innovation with respect to the advantages of shared libraries, because it also means surrendering the fine-grained resource sharing that shared libraries, as traditionally packaged, allow.
I disagree with how this conflates layers and spheres of responsibility.
The package manager fetches dependencies recursively into your chosen directory, running as your chosen account. The runtime executes under your chosen account with whatever permissions have been granted.
Having the runtime track whether the caller of a library is another library or your application, or having the runtime allow or reject runtime APIs, seems better left to the OS to control with containers, sandboxes, or traditional ACLs.
This is roughly what you're able to do with the Java SecurityManager - it was especially intended for code mobility scenarios where you can't be certain that everything is well-behaved (and not just applets - we used it server-side).
But wrangling all those capabilities and policies gets really really difficult on a decent sized system, and comes with a runtime overhead to boot.
Fairly similar at least, yeah. Java has enough power in its reflection APIs, and widely-enough used reflection, that this kind of thing is practically required to be part of the runtime + have a cost. And that's probably the safest way to do things anyway.
I do think, though, that since it seems to be only a runtime thing, it's probably warning you far too late. If it were baked in statically, so that when you pulled in a new http lib (or just updated it) you approved the popup that said "let X access network and files?", it'd probably be a lot less painful 95% of the time (since sometimes you'll want it more fine-grained).
---
Since I can't glean it from what I've skimmed so far: does SecurityManager let you do capabilities-like things? E.g. can you be given a SecurityManager as an argument, allowing code using that manager to temporarily access a file, while normally blocking it? Or is it something closer to "applies to a ClassLoader"? Though with enough effort and runtime cost you could make those equivalent, of course.
I strongly support this. There are of course other ways of compromising people's computers when running untrusted code, but let's get the low-hanging fruit.
I have a very simple vue frontend app that I wrote a few years ago, and it somehow has >4000 dependencies (including dev dependencies). The fact that npm install could run code from all of those (which might not be obvious to a newer dev) is flat out dangerous.
That's definitely relevant for nodejs apps, but running js client side is a very different risk profile from running arbitrary scripts on the computer or server doing an npm install.
You might think that, but if we’re talking about a frontend service that doesn’t really handle much sensitive info (if any), there’s a lot juicier stuff on my machine than just pushing malware to that codebase (not least other codebases on my machines)
If we're going to develop our software like this (there are no signs of changing), then we need development environments that are fully hermetic.
Production deploys tend to be if you use the right tools. Docker images, cert signing, ACLs, network policies, etc. But we have no equivalent for developer machines. And engineers have access to lots of dangerous things. Docker alone isn't good enough.
I've recently switched from Linux to macOS (now that those shiny new MacBook Pros exist), and because of this I've started using VS Code dev containers along with Gitpod as my pretty much exclusive development environments. I did this for convenience rather than for security, but I must say I felt a strong sense of relief when I saw the advisory this morning, typed `npm` into my terminal, and found that it wasn't even installed on my host OS.
I think that longer term, we're going to find things like Qubes's VM-for-everything model becoming more normalised.
This. I would love to have a minimal host machine setup while having all those elaborate dev dependencies and command line tools I downloaded from somewhere run inside their own isolated container (or at least within one single container for my entire $SOFTWARE_PROJECT).
Basically, I'm trying to imagine a world where every Bash script needs to come with headers specifying binary dependencies and file paths (and env vars) it wants to access. Upon invoking said script, it gets executed within a lightweight container, the binary dependencies then get installed on-the-fly, and the file paths get mounted into the container (after I've confirmed that the script may access them).
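Something like this, say (the header conventions are entirely made up for illustration):

    #!/usr/bin/env bash
    # requires-bin: curl jq
    # requires-path: ./build (rw)
    # requires-env: API_TOKEN
    curl -s -H "Authorization: Bearer $API_TOKEN" https://example.com/data \
      | jq '.items' > ./build/items.json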
> Basically, I'm trying to imagine a world where every Bash script needs to come with headers specifying binary dependencies and file paths (and env vars) it wants to access. Upon invoking said script, it gets executed within a lightweight container, the binary dependencies then get installed on-the-fly, and the file paths get mounted into the container (after I've confirmed that the script may access them).
There are pieces of this that have been implemented in Nix, Distri, and Flatpak.
Binary dependencies getting installed on the fly used to be something you could enable systemwide on NixOS, and parts of it still can be. There are also FUSE filesystems that can automatically build/fetch paths as they're requested. DwarfFS, for fetching debug symbols, is the first one I'd heard of, and then there were some experiments like this in Distri.
In Nixpkgs, scripts built by Nix are created in a way that bundles in the executables they call, by interpolating in full paths to deterministically built versions.
In Flatpak you have this notion of portals, where you declare the file paths and other channels through which your program is allowed to talk to anything else. It seems a little more seamless than sharing the filesystem on Docker.
I think this is pretty achievable in the moderate term.
I'm aware of Nix and know this is achievable – in theory. To really work well and be convenient it would have to be used everywhere, though. https://xkcd.com/927/ comes to mind.
Why not a virtual machine? I've been doing that for nearly a decade now. Back when it was actually sort of painful. Now we have 32+ gigs of RAM and 8 core CPUs.
You get snapshots and if your drive image is small enough you can toss it on a USB SSD drive and take it wherever you need to.
Docker was never good enough, especially if you're on a Mac. It's a Linux VM with more overhead added and greater access to the host, making it less secure than just using a VM in the first place.
I think they mean 'hermetic' in a more restrictive sense. But you can use Nix to generate Docker images and VMs and stuff, so you could use Nix to produce environments that are sandboxed if you want.
And builds are sandboxed, which helps with the OP issue
This seems like a step in the right direction for sure, but what is the threat model here exactly? When would I be concerned about code in an install script but not in the package itself?
What we really need are content security policies for node. I want to define at the top level of my project exactly which file system directories and internet domains can be accessed, then have that enforced by the runtime.
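For instance, something along these lines at the project root (a purely hypothetical policy file; no Node runtime enforces anything like this today):

    {
      "fs": {
        "read": ["./src", "./node_modules"],
        "write": ["./tmp"]
      },
      "net": {
        "connect": ["registry.npmjs.org", "api.example.com"]
      }
    }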
This is a great question, and there are actually a number of different reasons:
1. Package installation often happens under different privileges than actually running your end user app. There is unfortunately a lot of "just use --unsafe-perm to make the install work" advice out there which means a lot of people are installing packages as root even though they're not running as root. Also consider that a lot of npm packages ultimately get run in the browser, so the attack surface there should have very little to do with the files on your computer, but install scripts make this not the case. For the same reason, this means that this might be the only kind of attack that has the ability to run on your build machine, since your build machine may not actually "run" your app.
2. The fact that the code doesn't have to be run to be effective makes it quite difficult to spot, and thus increases its likelihood of sneaking into your dependency chain. For example, GitHub will often straight up just hide large `package-lock.json` file diffs, and no one wants to read through those. So a very innocuous patch version change in an otherwise unrelated bug fix could completely fool the PR reviewer: this is because there's no code "history" that would imply any sort of entryway into attack. All the code looks totally reasonable. By having to explicitly allow the package to use install scripts however, all of a sudden the PR would contain a clear indication that new foreign code that isn't represented anywhere in the commit will be run. This is huge.
3. This also takes advantage of the fact that the vast majority of npm installs aren't done by humans, but by CI and production deploys. Again, the attack is completely "passive": as long as you get installed, you've succeeded. A lot of the time this can be triggered just by opening a PR against a repo with automatic builds and CI machines. The correct outcome when something tries to run a new script on your CI machine from a submitted PR isn't infection, it should be a failing test that says "package X tried to use an install script".
All great points, thanks. It would definitely reduce the surface area for attacks. In practice, it may just change which kinds of packages are targeted (packages in the build-time dependency tree instead of runtime), but it would still be a win.
Ultimately, I still think we need real sandboxing built in to the runtime to really solve this problem. Is there something inherent to node's architecture that would make this impossible or is it just a ton of work? Could Chrome's sandbox and CSP implementation be piggy-backed on perhaps? Deno takes some steps in this direction but they stopped well short of what's really needed.
I’m not a Deno user but my understanding is that permissions for the file system or network are all-or-nothing for a whole project. There’s no ability to specify directories or domains, or for dependencies to declare permissions they need. It’s better than nothing, but only makes a difference for projects that don’t need the file system and/or network at all. It wouldn’t have helped with any node project I’ve ever seen in the wild.
That said, perhaps it’s just a starting point and will be expanded into something more useful in time. It would be a killer feature imo if done right so I hope they go in that direction.
One thing I do not see proposed that would clean up part of the npm repository: Google, MS, and Firefox could work out which packages are the most popular, then make a list of what should be put in the browser or in Node, or maybe into a blessed optional extended standard library.
For example, if we detect that 1 million packages depend on leftpad, then add leftpad to the standard library; if 500K packages depend on isValidUrl or isValidEmail, then put those in the standard library. I am sure Google devs could over-engineer this to make it optional and include only what you need.
> it takes time to get adoption and all browsers have to implement it.
You can make it a library and name it STL-extras.
So instead of having 10 packages with 10 copies of leftpad, 12 packages with some copies of isOdd, and other ridiculousness, you have this one STL-extras library that is reviewed by Google and Mozilla.
How does this make for a solution that meaningfully differs from just declaring that any problem can be taken care of by someone else doing all the work? In this case, the work is certainly doable; that's not the problem. "We'll make sure all the code we ship or consume is audited" works as a strategy against the problem that exists here. "We'll make sure it's audited, but we're not going to pay for any of it" would also "work" so long as you're able to achieve that, but achieving it is what presents difficulties. How does your proposal take those difficulties into account? It sounds a lot like when you tell a child that you don't have enough money for something and they tell you to go get money at the bank.
My solution does not fix the problem, it reduces it: if we get all the missing STL functions into a trusted STL package, then you as a web developer will have 200 fewer packages to review. And maybe some other lucky folks will have no packages to review at all, since most of what they need is in this STL-extras.
You can come up with other solutions to reduce the problem further, like some of the other ideas suggested here; one is to make having an npm package less CV-worthy, to discourage the behavior we've seen where someone creates or forks some package and then pushes it into other projects just for fame.
I apologize if I misunderstood your point; honestly, it's not clear to me what you disliked about an optional STL library that is trusted and that people could use instead of 100 untrusted leftpad-like ones.
It's sad to see people reject this suggestion on such weak grounds. "It won't stop all attacks" is true, but if it raises the costs of attacks, and protects more people than it harms, then it is worthwhile. Similarly, "It's a backwards-incompatible change" is not a sufficient argument, as changes like that are made all the time (of course requiring a major version bump).
To address the resistance, I would propose a compromise, namely that install scripts won't be run by default unless the publishing account is secured by 2FA, or the previous version of the package also included an install script. That should greatly reduce the attack surface, and pave the way towards requiring 2FA for all packages with install scripts as a later step.
> but if it raises the costs of attacks, and protects more people than it harms, then it is worthwhile.
But how does it raise the cost of attacks? I don't see why it would be harder for someone uploading a malicious package to embed said malicious code in the index.js instead of in the install scripts.
If I was installing the module only for use in the frontend, e.g. to be bundled by webpack, the code in your index.js won't execute on the installing machine but inside the browser sandbox.
It raises the cost of attack, sure. That said, just about every developer I know is not going to think about it and is just going to hit "OK" at the first opportunity. The VSCode "this directory isn't trusted" warning comes to mind. I know of no one who actually takes that to heart. Perhaps we should, but few will actually take the time, sadly.
The default install process should stop and prompt you with something like:
    Package ua-parser-js wants to run a script before installing.
    The description of the package is:
    "Detect Browser, Engine, and Device type/model from User-Agent data."
    The reason for the pre-install script is:
    "Configuring the local user agent thing for reasons."
    This script has been unchanged since version 0.7.29, which was published:
    14 hours ago
    The hash of the script is:
    0123456789abcdef
    Press Y to examine the script, or N to cancel installation.
After npm echoes out the script, the user should decide whether it looks obfuscated or does anything suspicious. If the user is still unsure, they can search the web for the hash of the script to see if other people have audited it.
For automated installs, such as a CI server, there would need to be a command line argument or config file entry with something like:
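    # hypothetical flag, pinning the approved script by package and hash
    npm install --allow-script=ua-parser-js:0123456789abcdef

(The flag name is invented; the point is that CI only ever runs install scripts that a human has already examined and pinned.)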
I don't think this would have much of an impact, since it would only require the attacker to change the injection point from the install script to swapping out the "main" entry in the package.json for a compromised file, right?
I think the problems of npm and the js world in general is the depth and breadth of the dependency trees and the misunderstanding of transitive dependencies. I've heard devs say that they only have a single dependency when using CRA (which IIRC pulls in 1500ish transitive dependencies). I've heard devs say that a dependency is always better (even if it replaces a single line of code) than your own code.
Besides that even simple dependencies seem to be updated even when they were "done". Many devs see a repo that has not had a commit for a few months and consider the project dead instead of done, so there is an incentive to keep updating things that didn't need updating or for dependents to switch to "actively developed" projects for their dependencies.
So yes, requiring 2FA would be great, making the install steps unable to run arbitrary code would be great, and most of all requiring repeatable builds from source would be great. I still think the core problem would remain, which is that the ecosystem is too hooked on excessive dependency usage and sees newness as a virtue.
IIRC, yarn disables script invocation when installing a package, and there is a separate mechanism to run them if you need to. I don't remember the full details of how it works, though.
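If memory serves, the real knobs today are npm's `--ignore-scripts` flag and, in Yarn 2+, the `enableScripts` setting:

    # npm (and classic yarn): skip all lifecycle scripts for this install
    npm install --ignore-scripts

    # Yarn 2+ (.yarnrc.yml): never run install-time scripts
    enableScripts: false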
Given NPM's increasing viability for large scale supply-chain attacks, there are other things to worry about still. Perhaps those other things are more fundamental to NPM's design and can't be changed overnight. It's still helpful to solve these simpler problems.
That's not really a solution to the problem because the attacker might change the contents of the package instead of adding `postinstall` or `preinstall` hooks.
The more realistic solution would be teams of volunteers auditing the packages and checking the differences between specific versions. This doesn't block all possible infected packages, but it blocks most of them, which is better than what we have now. Everything is based on trust, so you can't stop this entirely, but you can maybe prevent it.
> the attacker might change the contents of the package instead of adding `postinstall` or `preinstall` hooks.
Ultimately, any code inside an npm package needs to be run by default in the context of a sandbox, such as vm2 or SES. That way a developer would have to opt in to granting permissions for a package to run executable code.
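A rough sketch of what the SES flavor of that looks like (`untrustedPackageSource` is a placeholder for the package's code, loaded however you like):

    require('ses');
    lockdown(); // freeze the shared primordials so sandboxed code can't tamper with them

    // The package only sees what we explicitly endow it with: no fs, no
    // network, no process, unless we deliberately pass capabilities in.
    const compartment = new Compartment({ console });
    compartment.evaluate(untrustedPackageSource);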
> Package maintainers implement install scripts for a reason. It should be opt-out, not opt-in.
Yeah, because they're allowed to. The only time install scripts are sane and reliable is when they are part of a community that polices them and requires them to be minimal (e.g., in Linux distros). That's basically never the case with language-specific package managers.
Almost all npm packages are open, with the code available on GitHub - so if npm package maintainers are not being policed, then it's the community's fault for leeching and not looking into the code.
I run an install script for my package [0]; this install script forces 2 lines into users' .gitignore files to prevent EXTREMELY sensitive data from being committed into a public repo - despite me telling everyone in the community not to commit these files on god-knows-how-many occasions. I do this because I care about my users, and I know that if I tried some funny business they would rightly call me out for it - NOT "because I'm allowed to".
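For anyone curious, the general shape of a script like that is roughly this (the ignored filenames here are invented, not the real ones):

    // postinstall.js: append any missing ignore entries to the user's .gitignore
    const fs = require('fs');
    const path = require('path');

    // npm sets INIT_CWD to the directory `npm install` was invoked from
    const target = path.join(process.env.INIT_CWD || process.cwd(), '.gitignore');
    const required = ['.secrets', 'credentials.json']; // hypothetical entries

    const current = fs.existsSync(target) ? fs.readFileSync(target, 'utf8') : '';
    const lines = current.split(/\r?\n/);
    const missing = required.filter((entry) => !lines.includes(entry));
    if (missing.length > 0) {
      fs.appendFileSync(target, '\n' + missing.join('\n') + '\n');
    }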
Sadly, security by blacklisting always turns into a game of whack-a-mole. If there's an unavoidable need for an install script, that may be a case for maturing the facilities the package manager offers, rather than requiring packages to run arbitrary code.
You have no idea what's in the packages you are installing. Like, literally none, unless you use very narrow, pure-JS ones. Nobody can grok the dependency tree of the top 50 popular packages. Asking for opt-out is the best way to ensure nothing happens.
At RunKit, we've actually installed every version of every package, and I can assure you that it is not the case that there's tons of packages using install scripts for really great reasons. For starters, the vast majority of packages do not use this feature. Then, you have to consider that install scripts are a very old feature designed for a series of constraints that are either no longer representative of the state of the ecosystem, or quickly won't be. For example:
1. The most "legitimate" use of install scripts is to build binary node addons. N-API is a much better alternative with ABI compatibility that will allow the package to just ship built, which aside from being a much friendlier and faster install experience, will also allow for more deterministic builds and better caching.
2. Many install scripts just transpile code on installation. It would be much better to instead just transpile the code on publish. This is something that requires nothing new and everyone should absolutely just do today. It's crazy that people re-download and retranspile the same code over and over.
3. Many install scripts simply download non-npm resources, like apt dependencies or things like Chromium. It would be a lot better, a decade after npm's introduction, to standardize this common use case instead of having everyone roll their own. We actually implemented our own version of this where we can just list apt dependencies in our package.json (a sketch follows this list). It gives you so many more options on how to optimize builds and provides you so much better information about your dependencies to have this be a "declarative" piece of the puzzle as opposed to having every package figure out their own slightly different way of doing this.
4. Many install scripts just present the user with ads. NPM should also just add a package.json field for donation links or whatever and provide a good standard way of disabling this as opposed to spinning up a process just to present ads.
5. And of course, some install scripts are used maliciously.
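For point 3, a sketch of the kind of declarative field described there (the `systemDependencies` name is invented for illustration, not an npm standard):

    {
      "name": "my-service",
      "version": "1.0.0",
      "systemDependencies": {
        "apt": ["libvips-dev", "chromium-browser"]
      }
    }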
All this to say, 10 years later, there aren't actually a bunch of "wow, what an interesting use of the install script" examples, but rather just a bunch of fairly limited common uses (that could be better handled by npm itself), and yes, some interesting stuff, but also a worrying amount of abuses, that are easily duplicated and that we are currently doing nothing to stop. Given a proper transitional period, where npm just warns you about an install script rather than failing, I think most packages could be made to not require custom install scripts. And again, this is for a small percentage of packages, nowhere near the undertaking of converting packages on NPM from CommonJS to ESM for example. And, it's not that it would be disallowed, it's just that the rare package that really did need to do something really out there would need a flag. Having actually queried all the packages, I am certain this could be done with basically no disruption for end users, certainly no disruption to most package authors that don't use these scripts, and almost no disruption for the few package authors that currently use this feature.
> Point 3 is interesting, does it work across non-apt package managers?
At the very least it's not harmful. You could use something like repology or your own index to identify equivalent packages and then install them.
Really what you'd want would be a package manager-agnostic way to declare dependencies on things outside the Node universe, and then plugins for NPM that could use to assert they were satisfied in an appropriate per-package-manager way, and maybe optionally try to install them for you using the appropriate external package manager.
But the packagers for Node projects wouldn't be writing the mechanisms for that, they'd just be declaring the deps.
This functionality is a must nowadays. To reduce the risk, I would lock the packages to specific versions and upgrade only after two weeks or immediately after a critical vulnerability is fixed.
I may be thinking silly but wouldn't it be better if these dependency reliability issues were fixed by the platform that released the package? Are there any barriers to publishing a malicious package via npm right now?
Because it's not always the case that the code runs on your machine. For example, a package that runs inside a web browser can't do a lot of damage since it's sandboxed; the install scripts, however, always run on the target machine.
Also, the install script may run with higher privileges than the program will have when it's executed. For example, there's the case of installing packages globally with root privileges - ideally something you shouldn't do, but how many people run `sudo npm i -g something`? In that case the install script runs with root privileges.
Of course there are rare cases where install scripts are needed: but those cases should be opt-in, meaning that the user is prompted to run the script (if running interactively), or has to provide a command line option allowing specific packages to run scripts, or adds an option in package.json to whitelist specific packages.