Keeping the build system working in any medium/large project is constant churn in that ecosystem, way more so than with any other backend runtime or package manager I've used.
It's hard to quantify, maybe, but it just feels like constant stress. I've worked in bad Java shops where you wonder if the build will compile today. The average Node shop is much worse in terms of the tooling and build system not working for whatever fancy reason on <insert day>.
Recent example: Did someone at Google decide to merge a PR to a library two dependencies down? Because now shit won't compile, the compiler OOMs. Just what I wanted to figure out today, a crashing compiler on a minor library version change that nobody asked for.
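If you hit the same thing, npm's overrides field (npm 8.3+; Yarn calls it resolutions) can pin a transitive dependency from your top-level package.json until upstream fixes it. The package name and version here are placeholders:

```json
{
  "overrides": {
    "some-transitive-lib": "2.4.1"
  }
}
```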
It’s a challenge, but it’s not that bad. All the package managers have lock files, so nothing changes from day to day until you update it. You could even enable auto-updates via Dependabot or Renovate, and the smallest amount of CI would stop a breaking update from getting through. I’ve worked in a lot of Node projects and never worried about whether they’ll compile today. Lock files have been standard/default practice for a long time.
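To be concrete about "the smallest amount of CI": a single workflow that installs strictly from the lock file is enough to keep a bad auto-update PR from merging. A minimal sketch, assuming GitHub Actions and npm; the build/test scripts are whatever your project already defines:

```yaml
# .github/workflows/ci.yml
name: ci
on: [push, pull_request]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      # npm ci installs exactly what package-lock.json records,
      # and fails if the lock file is out of sync with package.json
      - run: npm ci
      - run: npm run build
      - run: npm test
```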
The hard part is actually doing the updates. So many breaking changes, so many new (often better) approaches, so many different tools that interact with each other in weird ways…
It happens regularly to the point that it becomes the happy path.
The trick is to pin all dependencies rather than letting auto-updates happen, and to treat even patch updates as if they could be major-version bumps.
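Concretely, with npm that means exact versions in package.json (no ^ or ~ ranges) and telling npm to record new installs the same way. A minimal sketch; the packages and versions are just illustrative:

```ini
# .npmrc: make `npm install <pkg>` save exact versions instead of ^ranges
save-exact=true
```

```json
{
  "dependencies": {
    "express": "4.18.2",
    "react": "18.2.0"
  }
}
```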
> It's hard to quantify, maybe, but it just feels like constant stress.
I agree, but the key factor is that auto-updates cause the pain. Once you disable those, you no longer experience any of it. For instance, no one needs to constantly upgrade dev dependencies, and even patch upgrades to TypeScript are hard to justify without full regression tests.
If you put off all updating, doesn't that lead to a lot of technical debt down the road? Most of the stuff I work with has to work for a long time, and putting off updates just makes for a worse pile to sort out later, once you find something new to integrate and it won't work with the older existing stuff. For us, keeping up to date incrementally helps prevent that maintenance work from building up. I guess it depends on how long you'll be maintaining the code. If it's something that won't need to work for long and you have enough devs, you can probably skip keeping things up to date and just rewrite it when it needs updating.
> If you put off all updating, doesn't that lead to a lot of technical debt down the road?
It depends on what you interpret as being technical debt.
For example, are you piling up technical debt if you skip a patch update to the TypeScript compiler? What about Jest? What if you skip a point release of React? Are you piling up technical debt if you're not on the bleeding edge?
And if this does count as technical debt, what's the productivity hit of staying on that treadmill? Are you actually gaining anything by systematically being on the latest and greatest?
> For us, keeping up to date incrementally helps prevent that maintenance work from building up.
In my experience with modern frontend frameworks, the vast majority of these incremental changes do not add any significant value and in fact amount to wasteful noise. Some patch releases introduce regressions that are ironed out in following releases, and automatically applying these updates only buys you problems.
Also, it's very hard to justify refreshing your whole dependency tree at each commit, or risking problems because switching local development branches causes dependency issues. Installing dependencies should be a rare occurrence.
I can't overstate the impact that pinning all dependencies and updating them less frequently has on a team's development workflow. No one will get hurt if you upgrade Jest or TypeScript on a monthly basis instead of at each commit.
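If you want that monthly cadence without tracking it by hand, a scheduled Renovate config is one way to get it. A minimal sketch, assuming Renovate is installed on the repo:

```json
{
  "extends": ["config:recommended"],
  "rangeStrategy": "pin",
  "schedule": ["before 5am on the first day of the month"]
}
```

Here "rangeStrategy": "pin" rewrites version ranges to exact versions, and the schedule batches update PRs to once a month instead of on every upstream release.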
I think pinning dependencies helps, but I have literally never seen an organization do this with Node. At one job, the "architecture review board" decided that all services would always use "latest" for all dependencies...
I don't disagree, but there are other issues too. Namely, diagnosing performance issues is much harder on the Node runtime than with even async Java, and every time this comes up it's a huge resource sink for most teams I've seen.
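For context on what "diagnosing" usually means in practice: the built-in V8 profiler is the typical starting point, and it's CLI flags plus DevTools rather than the JFR-style tooling you get on the JVM. A minimal sketch; app.js stands in for your real entrypoint:

```sh
# Write a .cpuprofile on exit; load it in Chrome DevTools' Performance tab
node --cpu-prof --cpu-prof-dir=./profiles app.js

# Or attach the inspector to a running process via chrome://inspect
node --inspect app.js
```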
I think Node is a great tool. Lots of FastComments is built with Node. I just don't advocate for building large systems or monoliths with it.
The problem is this never happens.