Hacker News | VPenkov's comments

> The vite plus idea is that you'll pay for visual tools.

From what I understand, Vite+ seems like an all-in-one toolchain. Instead of maintaining multiple configurations with various degrees of intercompatibility, you maintain only one.

This has the added benefit that linters and similar tools can share information about your dependency graph, and even ASTs, so each tool doesn't have to compute them individually. That has real potential to improve your overall pre-merge pipeline. Then, on top of that, caching.

The focus here is of course enterprise customers, and it looks like it is supposed to compete with the likes of Nx/Moonrepo/Turborepo/Rush. Nx and Rush are big beasts and can be somewhat unwieldy and quirky. Nx lost some trust with its community by retracting some open-source features and took a very long time to (partially) address the backlash.

Vite+ has a good chance to be a contender on the market with clearer positioning if it manages to nail monorepo support.


Oxc is not the first Rust-based product on the market that handles JS; there is also SWC, which is now reasonably mature. I maintain a fairly large frontend project (tens of thousands of components) and SWC has been our default for years. SWC has made sure that there is actually very decent support for JS in the Rust ecosystem.

I'd say my biggest concern is that the same engineers who use JS as their main language are usually not as adept with Rust and may experience difficulties maintaining and extending their toolchain, e.g. writing custom linting rules. But most engineers seem to be interested in learning so I haven't seen my concern materialize.


It's not like JS isn't already implemented in a language that's a lot more similar to Rust anyhow though. When the browser or Node or whatever other runtime you're using is already in a different language out of necessity, is it really that weird for the tooling to also optimize for the out-of-the-box experience rather than people hacking on them?

Even as someone who writes Rust professionally, I also wouldn't necessarily expect every Rust engineer to be super comfortable jumping into the codebase of the compiler or linter or whatever to be able to hack on it easily because there's a lot of domain knowledge in compilers and interpreters and language tooling, and most people won't end up needing experience with implementing them. Honestly, I'd be pretty strongly against a project I work on switching to a custom fork of a linting tool because a teammate decided they wanted to add extra rules for it or something, so I don't see it as a huge loss that it might end up being something people will need to spend personal time on if they want to explore.


> you have no appetite for a better security model

For what it's worth, there are some advancements. PNPM - the package manager used in this case - doesn't automatically run postinstall scripts. In this case, either the engineer allowed it explicitly, or a transitive dependency that was previously considered safe (and allowed by default) stopped being safe.

PNPM also lets you specify a minimum package age, so you cannot install packages younger than X. The combination of these would stop most attacks, but it becomes less effective if everyone specifies a minimum age: with no early adopters installing the compromised version, no one falls victim and reports it.
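As a sketch of what that looks like in practice (the key names below come from recent pnpm 10.x releases; exact names and units may differ in your version, so check the docs):

```yaml
# pnpm-workspace.yaml — assumes a recent pnpm 10.x
# Refuse to install versions published less than ~3 days ago (value in minutes).
minimumReleaseAge: 4320

# Build/postinstall scripts run only for packages on this explicit allowlist.
onlyBuiltDependencies:
  - esbuild
```

Everything not on the allowlist gets installed without running its lifecycle scripts, which is exactly the class of attack seen here.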

It's a bit grotesque because the system relies on either the package author noticing in time, or someone falling victim and reporting it.

NPM now supports publishing signed packages, and PNPM has a trustPolicy flag. This is a step in the right direction, but it's still not enough, because it relies on publishers knowing and caring about signing packages, and on consumers requiring it.

There _is_ appetite for a better security model, but a lot of old, ubiquitous packages are unmaintained and won't adopt it. The ecosystem is evolving, but very slowly, and breaking changes seem necessary.


I had the chance to finish reading, and it looks like Trigger was using an older version of PNPM which didn't do any of the above. They have since implemented everything I mentioned in my post, plus some additional Git security.

So a slight amendment there on the human error side of things.


37signals [0] famously uses their own Stimulus [1] framework on most of their products. Their CEO is a proponent of the whole no-build approach, both because of the additional complexity a build step adds, and because minified output makes it difficult for people to open up your code and learn from it.

[0]: https://basecamp.com/

[1]: https://stimulus.hotwired.dev/


It's impossible to look at a Stimulus-based site (or any similar SSR/hypermedia app) and learn anything useful beyond superficial web design, because all of the meaningful work is being done on the other side of the network calls. Seeing a "data-action" or an "hx-swap" in the author's original markup doesn't really teach anyone anything without the server code in hand. That basically makes the point moot: anyone who could learn from it - an internal team member or an open-source contributor - would have access to the original source as well as the minified source anyway.

It's also more complex to do JS builds in Ruby, because Ruby isn't up to the task of doing builds performantly and the only good option is shelling out to other binaries. That can also be viewed from the outside as "we painted ourselves into a corner, and now we will discuss the virtues of standing in corners". Compared to Bun, this feels like a dated perspective.

DHH has a lot of opinions; he's not wrong about many things, but he's also not universally right for all scenarios, and the world moved past him back in like 2010.


Well you do learn that a no-build process can work at some scale, and you can see what tech stack is used and roughly how it works.

But regardless, I didn't mean to make any argument for or against this, I'm saying this was one of the points DHH made at some point.


Dunno. You can build without minifying if you want it to be (mostly) readable. I wouldn’t want to give up static typing again in my career.


The repository introduces it as indeed based on Helium [0].

The cool part about Helium is that it's based on patches, rather than forking the full source code. I don't know how sustainable this is in the long term, but it's an interesting approach for sure.

[0]: https://helium.computer/


Not sure what's cool about that. A fork is a patch set, with a ton more ergonomics on top. Passing around sets of patches was what we did before VCSs were common/easy-to-set-up, and it was always brittle and annoying.


Here is a homework for you to see why they do it:

  1. Check out Chromium's codebase.
  2. Make a commit and see how long it takes.
  3. Try to push it to any Git hosting service.
You will discover what's actually brittle and annoying.

And yes, having tens of devs in the same repo as tens of thousands isn't fun.


Standard practice for Chromium forks. Chromium's repo is huge, slow, and impossible to diff for your changes among tens of thousands of other commits. It's also painful to host anywhere.


Same here. A few years ago I thought maybe the ringing isn't normal. It hadn't occurred to me before that.

I found a YouTube video of a "tinnitus demo" with the right sound and frequency. I could only start hearing it at about 80% volume. I gave my headphones to my partner and she said it was unbearable. I guess I'm used to my normal.

I slightly regret knowing about it, I seem to be paying more attention to it now.


One is impulsive, the other requires structure. The two are not mutually exclusive though, because both conditions are pretty diverse. AuDHD is a term used to describe people with both.


This is a massive oversimplification of both autism and ADHD which approaches uselessness. Impulsivity is one possible symptom of ADHD, but doesn't even begin to describe the experience, and by itself paints an incorrect picture of the experience. Same for autism and structure. I know plenty of people with autism who absolutely do not deal in structure.

I know it feels nice to be able to craft a simple narrative, but this narrative feels more harmful and misconstrued than useful.


I have autism and have a lot of trouble with routine and rigid timelines. But I also have ADHD, so I suspect there is some internal struggle there. I want to have routine for a lot of things, I just can’t seem to make it happen.



Yes it does, since the ignore-scripts option is not enabled by default.


Yes it does, you're correct and I have misread. I can't edit, delete, or flag my initial reply unfortunately.


I'm really not a fan of CSS-in-JS, however it does have its use cases. Class mangling is very convenient with it and lets you be prescriptive about how you do theming, which is great when building libraries that 3rd parties embed on their websites.

The trade-off is that of course your customers can't style things you haven't anticipated, but it means you can control what changes are breaking.

And you can always add an extra variable in a new version if a customer wants to change a border color.


vanilla-extract accommodates this with an API you can use at runtime, if you really really want, to expose CSS variables that the user can then override.
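To make that concrete, here's a dependency-free TypeScript sketch of the pattern. The real vanilla-extract APIs are createVar (build time) and assignInlineVars (runtime); the names below are simplified stand-ins to illustrate the idea, not the library's actual implementation:

```typescript
// The library generates hashed class names at build time, but the author can
// expose chosen CSS custom properties as a deliberate runtime escape hatch.
const accentColor = "--lib-accent"; // a variable name the library would generate

// Roughly what assignInlineVars does: turn variable assignments into an
// inline style string that the consumer puts on a wrapper element.
function assignVars(vars: Record<string, string>): string {
  return Object.entries(vars)
    .map(([name, value]) => `${name}: ${value}`)
    .join("; ");
}

// A consumer can only override what the author chose to expose:
const inlineStyle = assignVars({ [accentColor]: "rebeccapurple" });
// "--lib-accent: rebeccapurple"
```

The mangled class names stay private, so the public API surface is exactly the set of variables the author exported - which is what makes breaking changes controllable.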


Not a package manager, but the Renovate bot has a setting like that (minimumReleaseAge). Dependabot does not (Edit: it does now).
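For reference, the Renovate config looks roughly like this (minimumReleaseAge is the real option name; the 7-day delay is just an example value):

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": ["config:recommended"],
  "minimumReleaseAge": "7 days"
}
```

With this, Renovate won't open an update PR until a release has been public for a week, giving the ecosystem time to catch a compromised version.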

So while your package manager will install whatever is newest, there are free solutions to keep your dependencies up to date in a reasonable manner.

Also, the JavaScript ecosystem seems to be slowly consolidating, and tooling to address supply chain attacks is (again, slowly) appearing.

Additionally, current versions of all major package managers (NPM, PNPM, Bun, I don't know about Yarn) don't automatically run postinstall scripts - although you are likely to run them anyway because they will be suggested to you - and ultimately you're running someone else's code, postinstall scripts or not.
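If you'd rather be explicit than rely on each manager's defaults, both npm and pnpm read the real ignore-scripts option from .npmrc:

```ini
# .npmrc — never run (pre/post)install lifecycle scripts automatically
ignore-scripts=true
```

You then run build scripts deliberately for the few packages that genuinely need them, instead of letting every transitive dependency execute code at install time.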



Oh, happy days!

