Freenginx: Core Nginx Developer Announces Fork of Popular Web Server (infoq.com)
136 points by kungfudoi on March 13, 2024 | 96 comments


> “Nginx is an incredible software and platform, but I was wondering if it wouldn't be time to just bite the bullet and create a more modern solution based on Rust.”

I actually don't understand why I am seeing arguments like this all the time. A rewrite takes time and effort. It may be OK for new projects. But for a mature project that has a quarter million lines of code and has been load-tested for cross-platform compatibility and security, it is way more than "just biting the bullet". (Not to mention that memory safety in Rust is ... still improvable. [1])

Rust is not a panacea. [2] When a corporation tries to seize control of an open source project, someone (or some entity) has to do something, and rewriting it in Rust by no means reclaims the freedom and openness of the project.

[1] https://news.ycombinator.com/item?id=39440808

[2] https://ariadne.space/2023/12/07/most-breaches-actually-begi...


> Rust is not a panacea.

Amen to that.

I don't understand why more people don't highlight Rust's main weakness: the lack of a strong stdlib.

No stdlib means you have two choices:

    (a) Re-invent the wheel; or
    (b) Use a crate.
Most people choose (b) because they don't have the time, inclination or expertise to do (a).

Which means you open yourself up to supply chain attacks, unless you constantly audit the upstream crates, which, let's face it, most people won't; they probably won't even audit the original version they pulled in, let alone the updates.

Which means that with something like an nginx rewrite in Rust, a complex piece of software with many moving parts, you end up importing a whole ton of random crates. Half of which will probably become abandonware within a few years, because that's the way of the world with open source: maintainers move on to bigger and better things.

There is a difference between rewrite in Rust and rewrite properly in Rust.

No doubt there are people out there doing solid open source work with Rust (e.g. the guys at tweede golf) but that takes time and effort, and in the case of tweede golf, the extra effort is financially sponsored, something that is not the case with most open source projects, especially the single-maintainer ones.


Also: No bootstrapping rustc from source. There are a bunch of projects trying to get that fixed, but currently pretty much everybody relies on binaries coming from upstream. Having a core infrastructure component like nginx rely on a compiler we can't verify would make a supply chain attack against the compiler very attractive.


Yeah, the poor stdlib is the main reason Go is my go-to (no pun intended) language for writing servers rather than Rust. That and the slow compiler, which a lot of people think shouldn't matter, but does matter.


> language for writing servers

Or indeed clients.

In 2024 you shouldn't need to import third-party code to make a call to a REST endpoint and parse the JSON. Or the same with crypto.

All the stuff that's bread-and-butter in today's cloud-centric world, you shouldn't need to farm out to third-party code; it should be in the stdlib.
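
To make it concrete, here is roughly what a stdlib-only version of that looks like in Go. A minimal sketch only; the endpoint URL and struct fields are made up purely for illustration:

    // Hypothetical example: GET a REST endpoint and decode its JSON reply
    // using nothing but the Go standard library.
    package main

    import (
        "encoding/json"
        "fmt"
        "net/http"
        "time"
    )

    // Release mirrors whatever shape the endpoint returns; these fields
    // are placeholders, not a real API.
    type Release struct {
        Name    string `json:"name"`
        TagName string `json:"tag_name"`
    }

    func main() {
        client := &http.Client{Timeout: 10 * time.Second}

        resp, err := client.Get("https://api.example.com/releases/latest")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()

        var rel Release
        if err := json.NewDecoder(resp.Body).Decode(&rel); err != nil {
            panic(err)
        }
        fmt.Println(rel.Name, rel.TagName)
    }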


That’s a bit of a slippery slope though and causes maintenance nightmares for mature projects. Let’s say I want to upgrade to a new, more secure TLS protocol. If it’s tied to the language version, that may mean changing the entire concurrency model, or something equally burdensome on a million-line code base.


I'd be tempted to think that if an application is designed in a way where switching your TLS implementation has consequences for the concurrency model, you have some serious design challenges.

That being said, a good reason for using Go and its standard library is that the Go team has been very good at keeping important things in good shape, and they put a lot more care into smoothing upgrades than is usually the case for random libraries. That's much of the motivation for using standard libraries: they have to be maintained in a manner that doesn't blow up billions of lines of running code. Third party libraries tend to be a far more mixed bag.

As for crypto: I shudder to think what it was like to use SSL/TLS in the C/C++/Java world. Horrific implementations that are so fiddly to work with that you won't be inclined to mess with them more than you absolutely need to, in regimes that rarely, if ever, push you to improve security as algorithm preferences shift over time. We've made heavy use of the crypto parts of Go to build our own PKI solution for an IoT platform and it was a lot easier than in any language we've used before. By a good margin.
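
To give a flavour of it, minting a self-signed certificate is roughly this much standard library code. This is a rough sketch only; the key type, names and validity period are placeholders, not taken from our actual setup:

    // Generate a throwaway self-signed certificate with only the Go stdlib.
    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "os"
        "time"
    )

    func main() {
        // Fresh P-256 key for the certificate.
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }

        tmpl := x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "device-0001.example.internal"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }

        // Self-signed: the template is also its own parent.
        der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }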

I think you inadvertently made the argument for using Go and its standard library :-)

(Of course, 10 years down the road, perhaps people will have abandoned Go and it will have become a wasteland and there's some other language with a better stdlib you should be using. But ask yourself this: how much of the code you have written in the past is still running unchanged? You'll have to evolve, develop and adapt over time no matter what, so if Go were to fade away, it isn't going to happen overnight and you'll have plenty of time to move on.)


> I'd be tempted to think that if an application is designed in a way where switching your TLS implementation has consequences for the concurrency model, you have some serious design challenges.

I think you're misunderstanding OP's theoretical. If TLS is in the stdlib, that means its version is tied to the version of the language you're using. I.e. if you want to upgrade to a newer TLS, you need to upgrade your entire language version (compiler, interpreter, whatever).

So you can end up in a spot where you need to go from v1 of a language to v2 to get a modern TLS implementation. However if v2 of the language also contains changes to how the language handles threading, you now need to rewrite your app's threading to be compatible with v2 of the language, just so you can get a safe TLS implementation.

Basically because TLS is in the stdlib, TLS can force you to upgrade your language version.

If TLS was 3rd party, you could upgrade the library without upgrading your language version (presuming the TLS library supports your language version).


I understand the argument. Fortunately, Go makes a lot of promises that makes this scenario unlikely. So in real life this becomes mostly a theoretical issue. It could happen. It just isn’t terribly likely.

And if it does happen, it is likely a lot of other people will find themselves in the same boat, making it more likely that a practical solution emerges.

And note: the same thing could happen, and is more likely to happen, if you use a third party library. Some of the libraries I’ve used for years are no longer actively maintained, for instance.


> That’s a bit of a slippery slope though and causes maintenance nightmares for mature projects.

I'm sorry, but I'm having great difficulty following your line of argument.

Are you seriously suggesting that Go does not have any mature projects?

If not, are you seriously suggesting that mature Go projects are encountering maintenance nightmares because of the stdlib?

Your line of argument is giving off a whiff of FUD, sadly.


Well, they’re right. When the changes you want are part of the stdlib, then you may have to take on many changes you don’t want just to get the one change you do want. It’s a well-known trade-off.

A good middle ground here is to have official crates/pkgs managed by the core organization that can be upgraded separately.


Do you have any real-world examples of this happening in Go?


Discussing objective engineering tradeoffs is not the same as spreading FUD. It's baffling that you are so quick to shut down conversation and get combative. I never implied that there are no mature Go projects, just that the burden of maintenance is different than that of other languages that choose to break out more functionality into libraries, and that having a comprehensive stdlib is not 100% upsides as you implied.


I think it is worth trying to relate it to lived experience and ask "is this actually a problem or are we imagining problems that are unlikely to manifest?".

I used to develop in Java in large organizations for many years and be responsible for deciding how we use the language, what libraries we use etc. I used to be pretty hard on developers who would choose third party libraries for anything the standard library already delivered. Even in cases where third party libraries were perhaps a bit better than the standard library. They'd have to offer a clear and significant advantage over the standard library over time to warrant consideration. Especially if we're talking about core functionality or something that would be expensive to replace later.

Every third party dependency you introduce is a potential problem because you have to track them. You have to care about their development and release practices, you have to track their license (which may change), the health of the project, you have to know how they respond to defects being discovered etc. You have to do this for every single piece of third party code if the code you work on is critical to your business.

My real-world experience is that third party code carries a much higher risk than whatever you depend on from a standard library and you can make mistakes that end up being extremely costly.


> Discussing objective engineering tradeoffs is not the same as spreading FUD. It's baffling that you are so quick to shut down conversation and get combative.

What on earth are you on about?

An assertion was made that the Go stdlib is somehow "a slippery slope though and causes maintenance nightmares for mature projects".

It is a rather bizarre assertion to make, especially in relation to Go.

I am therefore merely asking for evidence of that assertion. Because, frankly, it smells of FUD and you know it.

But instead you are the one looking to shut down the conversation by not answering the perfectly reasonable questions I posed.


If you want an opposing opinion, check out Python. I haven't seen anyone use urllib in the stdlib in like a decade.

Putting things like that in the stdlib makes them subject to the same API stability constraints as stuff like the threading API.

They tend to ossify and get abandoned when a 3rd party lib makes a faster, or better, or just more convenient API.

Go has a fair bit of this. I haven't touched the stdlib logger in forever. The HTTP server side is kinda meh, the client side is okay but not fantastic.

I am a fan of Go's experimental packages. I kind of wish things could live there permanently as "things the core devs believe are a good implementation, but don't want to bind stability guarantees to the language".


> If you want an opposing opinion, check out Python.

I would argue that Go is a fair comparison, Python is not.

Python is just a mess of a language, dependency hell etc.

> Go has a fair bit of this. I haven't touched the stdlib logger in forever.

You do realise Go has log/slog in stdlib now? So structured logging is now in stdlib.
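
For anyone who hasn't looked at it yet, the API is small. A quick sketch (Go 1.21+; handler choice and field names are made up for illustration):

    // Structured logging with the stdlib log/slog package.
    package main

    import (
        "log/slog"
        "os"
    )

    func main() {
        // JSON handler writing to stdout; slog.NewTextHandler also exists.
        logger := slog.New(slog.NewJSONHandler(os.Stdout, nil))

        logger.Info("request handled",
            slog.String("method", "GET"),
            slog.String("path", "/healthz"),
            slog.Int("status", 200),
        )
    }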


> You do realise Go has log/slog in stdlib now? So structured logging is now in stdlib.

I do, it's just twice as slow as the old stdlib logger, which was already twice as slow as 3rd party loggers, and it makes like 8x as many allocs as zap. If I were going to change my logger, it would be to zerolog, not slog.

> Python is just a mess of a language, dependency hell etc.

We're talking about stdlib though, which are the only packages dependency hell doesn't apply to.

> I would argue that Go is a fair comparison, Python is not.

Go is the youngest of the major languages, making it one of the worst places to check if you want to see how packages age in the stdlib. Look at a language that's actually old if you want to see the kind of problems old apps face.

It also has a weird place, language-wise, because it's developed by Google, which has an unusual amount of influence over developer patterns. Other languages have to follow devs to where they are; Go can kind of lead the way via Google.


Of the large "rewrite in language X" projects, I can't think of any that succeeded without significant cost of time, effort and users.

Maybe there's a survivorship bias in that people only talk about the ones that go poorly?


Fish shell would be the big counterexample right now. C(++?) to Rust in about a year, IIRC. I believe they were still pushing out updates to the original codebase during that time, although I assume they could have done more if they weren't also rewriting.


With all due respect, is there anything that relies on production grade, highly scalable fish shell scripts, and in an adversarial security context?


With all due respect, I don't think that was the question! ;)


I'm not sure a shell comes anywhere close to the complexity of a modern fully fledged web server.


>The large "rewrite in language X" projects, [...] significant cost of time, effort and users.

Well... if you're pre-qualifying the hypothetical rewrite with "large", that already means we take as a given it will require significant time & effort. (Unless there's a sufficiently smart automatic transpiler tool like a future ChatGPT-v17.)

There have been some successful big rewrites. Google rewrote the web crawlers from Java to C++. They also rewrote the search results front page from Java to C++. Microsoft rewrote C# compiler from C++ to C#. Go rewrote the compiler from C to Go.

In any case, out of curiosity... I ran the CLOC tool to count lines of code in the main NGINX repo ./src/* subdir. A project to convert ~165k lines of C to Rust is small enough to be doable by a small team. "Small" being 1 or 2 developers. (I'm not recommending they rewrite in Rust but just comparing the scope of the work to the resources.)

Semi-related footnote is Cloudflare replaced NGINX with in-house Pingora and that was written in Rust: https://hn.algolia.com/?q=pingora

  -----------------------------------------------------------------------------------
  Language                         files          blank        comment           code
  -----------------------------------------------------------------------------------
  C                                  256          54041           6165         153818
  C/C++ Header                       134           4810           1077          10041
  Perl                                 2             45             18            112
  SKILL                                3             24              0             99
  Bourne Shell                         1             30              5             78
  make                                 1              9              0             21
  C++                                  1              9              3             19
  Windows Resource File                1              3              2              1
  -----------------------------------------------------------------------------------
  SUM:                               399          58971           7270         164189
  -----------------------------------------------------------------------------------


A lot of the JS ecosystem is going the "rebuild it in Rust" route right now, ironically.

JS has a different lifecycle though, I suppose. Everything gets rebuilt like 5-10 times more often than on other platforms.


I'd have to be pretty carefully persuaded that someone who wrote infrastructure tools in JS makes decent design decisions.


People use the tools they have. When you want to build an infrastructure tool, you won't (or shouldn't) start with "how to write Rust".

Getting to the finish line matters more than using the absolute best fit, so to speak.

Same goes for orgs that have a lot of JS talent but no Rust for example.


It's not advocacy for Rust, but what would be a fairly introductory-level shell script would likely end up as a very sophisticated JavaScript program.

I've seen such efforts and they tend to be extremely complicated and not really doing much.

I don't assume rational agency but instead that many people make poor decisions.


If they add Typescript and primarily stick to wrappers around C libraries and calling out to other binaries it could be okay.

The core language is okay enough, not my favorite but it works. It's the surrounding ecosystem that's a pain.

I would have to think really hard if someone asked me if I'd rather have all the infra scripts written in Perl or TS. TS is more likely to break, but easier to read and fix. The odds of Perl breaking are slim, but it is a fucking nightmare to fix. I'm pretty sure I'd rather have Perl than JS, though. JS without a type system is rough, to me at least.


These rewrites are fantastic for performance of build tools, but add complexity and churn when the base language, usually Javascript or Typescript, was already memory safe anyways. I'm hesitant to depend on most of this newer tooling for a while when existing Javascript-based alternatives exist.


Not having proper static typing (and having weak typing on top of that) in any medium-to-large project adds, I think, far more problems and complexity. JS is simply a bad language design-wise, with TS being an improvement but still carrying the original sins of JS.


Performance is key. The performance differential of biome compared to prettier or eslint, for any monorepo of decent size, is a few orders of magnitude, for tools that should ideally be run on every file save.


While I don't see the sense in doing that versus managed compiled languages, as this isn't kernel-driver or game-engine levels of performance, eventually I will get an excuse to use Rust at work exactly because of those rewrites.


Everything in JS gets rebuilt until they are no longer in JS.


I'm not sure which ones have gone poorly.

I guess you'd consider librsvg in the "poorly" bucket, whereas its maintainer considers it a success. I think there's a distinction to be had between "maintainer-led RIIR" and "rando RIIR".

https://lwn.net/Articles/771355 https://viruta.org/librsvg-rust-and-non-mainstream-architect...


It of course depends on what you mean by "large", but one obvious case is rsvg.


Is it a rewrite...or a completely new piece of software that simply takes the NginX experience into consideration when designing its features?


The blogger keeps saying "Fork", but I can't find anything from the actual developer that says that. Instead, they keep saying it's a new project and that it "understands the nginx configuration language".

So I think it's pretty clear that it's not a fork.

To add to the confusion, it seems like they copied the site with all the bug reports as-is, which obviously would only apply to a fork, not to a project that hasn't even got any code yet.


A quick glance at the commit log (or whatever it's called for Mercurial) makes it look like a fork to me, but so far with not much more than name changes.

http://freenginx.org/hg/nginx/

https://hg.nginx.org/nginx


A lot of the needed functionality might already be available in the Rust ecosystem, while a large amount of the code in Nginx predates general HTTP utility or even Unicode libraries. I assume that software that has grown organically with the web has gathered a lot of cruft and hacks that might not be needed anymore and make further development a lot harder.

It might be a worthwhile experiment to take a common set of HTTP libraries in Rust, add Nginx-specific configuration parsing and some other bits, and see how far that kind of hackery would get. With a good BDFL and some community involvement, it would be a great way to get the ball rolling.


> I actually don't understand why I am seeing arguments like this all the time.

Have a look at:

https://github.com/nginx/nginx/blob/master/src/http/modules/...

It's got the whole checklist: nginx's idiosyncratic module system, inline parsing, custom UTF conversion, buffer preallocation and adjustments, linked lists, comments about side effects of the custom allocator, and probably other things.

It's not easy to deal with source like that and any serious improvement to that area would effectively be a rewrite anyway.

Since anything doing work in nginx is a module anyway, it wouldn't even have to be a full rewrite in one go.


Anecdotally, in the cache module it's unclear what invariants are supposed to be maintained for a big ol' struct full of information about the current cache lookup, which functions expect to only be called while holding a lock, etc. The module design and the issues you mentioned also cause suboptimal performance :(


While I agree that a rewrite of Nginx in Rust probably isn't the best use of resources, I wouldn't be so quick to dismiss the idea of writing an Nginx equivalent. And you might as well do that in Rust.

There are things I'd love to see in this space. For instance I'd love a proxy that explores the idea of being fully configurable via a gRPC API and which can delegate decisions to some other service (over gRPC).

(Doesn't the commercial Nginx have some of these features?)


> When a corporate tries to seize control of a open source project

Yeah, they wanted to issue some CVEs for a bunch of bugs in experimental code; the author of the fork didn't.


What a weird hill to die on. Perhaps someone can illuminate the situation better, but I'm not keen to trust a project that seems to take CVE assignment against their software personally.


I obviously don't have any inside information on this, but I would presume that this isn't the first disagreement between the two parties.

I, just as lots of others, have had to terminate business relationships over "last straw" types of infractions and misbehavior. I've never regretted doing so, but cutting ties after trying to make a situation work is not something I've ever been eager to do.

I'm glad that I've never had to make that decision with something as well known as nginx.


Was the code in question included in stable releases?

If not, I might agree that assigning CVEs is not helpful.

But if the code was included in one or more stable releases, then I think it should probably have CVEs assigned, depending on how serious or real the problems were.


I have personally reported bugs in the past that got closed as WONTFIX because they required some obscure configuration that the developer didn't believe would exist in the real world, but the way I found them was by breaking into actual real world servers owned by a client that I couldn't talk about due to an NDA. This happens a lot, and getting blown off constantly is a big reason why a lot of researchers just don't report bugs they find.

I don't know much about these nginx bugs, but it seems like they were found in real live production systems, not by researchers trying to bump their CVE credits. Also it seems like quite a few people in the world are using nginx for HTTP/3 and QUIC, so trying to say "it doesn't count because it's not on by default" seems like a bit of a stretch.

https://trac.nginx.org/nginx/ticket/2585

https://trac.nginx.org/nginx/ticket/2586


These bugs only occurred if you compiled nginx yourself with custom flags during build.

Should you be able to file CVEs against code posted in GitHub comments? StackOverflow Answers? Experimental branches on a maintainer's personal fork of the project?


The vulnerability also affected the official NGINX Plus R30 release from August last year [1], and its release notes strongly suggest that the feature is no longer experimental.

The steps required to use HTTP/3 in the open source version are also outlined in the official documentation, and you can find numerous guides for using Nginx with HTTP/3, demonstrating that the feature has reached some level of public adoption.

This is in no way comparable to random snippets of code you find online, or in some experimental branch of a code repository that's not intended for public consumption. Filing a CVE for software that early adopters are likely to be using in production is completely justified, I'd certainly want to know about it.

[1] https://www.nginx.com/blog/nginx-plus-r30-released/

> Native support for QUIC+HTTP/3 – NGINX Plus now has official support for HTTP/3. [...]

> The QUIC+HTTP/3 support in NGINX Plus R30 is available as a single binary – unlike the experimental HTTP/3 support introduced in NGINX Plus R29, which had a separate binary for nginx quic. [...]

> Full HTTP/3 support is added. NGINX 1.25.0 mainline version introduced support for HTTP/3, and this support has been merged into NGINX Plus R30. The NGINX Plus R30 implementation has the following changes when compared to the experimental packages delivered in NGINX Plus R29 [...]


And this is the cause of the conflict.

Nginx, the open source project, was safe. No CVE should be assigned to it.

NGINX PLUS, a separate and independent corporate product, was vulnerable. A CVE should be assigned specifically to it.

The issue is that the maintainers of the open source nginx project were paid by the company behind NGINX PLUS, and the company demanded the CVE be assigned to the open source part.


Nginx, the open source project, wasn't "safe". The code was merged, released, documented, and in public use.

I think that's all that matters. How else do you propose that users are informed about vulnerabilities in their installation? That's what CVEs are for.


Whether you develop new features on main gated by compile flags or on separate branches shouldn't have an effect on CVE assignments.

I'm pretty sure that setting arbitrary compile flags is enough to cause vulnerabilities in most software.

I personally ran nginx without this feature enabled because it was explicitly marked as experimental and potentially unsafe.


No, but code behind compile flags in a stable release seems like something we should file CVEs against.


There is no valid reason for "won't fix" to be a bug status, even if you aren't going to fix the bug. There's a status for that: "open".


Open is for tickets that are intended to be fixed.

Having nine gazillion open tickets is helpful for no one.

Devs will have to wade through open tickets that are not meant to be worked on, time and time again.

Tickets that should be worked on will be lost in the sea of irrelevant tickets.

People affected by the issue will see the open ticket and think it means that it’s going to be fixed.


Yes, we've all had or heard horror stories of managers like you who flip out over open and closed ticket metrics. That doesn't make it good practice.


That ("open") seems reasonable for a bug in the current release or a previous release that is still in wide use.

Is there ever a situation where a bug exists so far back in the history of the project that it doesn't make sense to fix? I'm asking this as an honest question and not a challenge. I can expect that some users still use what I'd consider to be legacy S/W but I wonder how far that goes back. If the developer(s) designate a version as EOL does it make sense to keep the bug open?

And OTOH, it conveys information if the developer marks a bug as "won't fix" rather than leaving the expectation that it might be addressed, someday.


If the bug only exists in old versions that sounds like a great case for marking it "fixed"! (Or possibly a cousin like "moot because this subsystem was deleted".)


> Was the code in question included in stable releases? If not, I might agree that assigning CVEs is not helpful.

An F5 employee says the code was marked as "experimental", but the issue is that customers were running it in production:

https://news.ycombinator.com/item?id=39378523

https://news.ycombinator.com/item?id=39379984

It's worth going through that entire thread and Ctrl+F search userid "MZMegaZone" to get F5's rationale for assigning a CVE.

Why are some customers running experimental code in production?!? I can only assume they do it to solve a problem and take early advantage of a new feature that doesn't exist in the stable release.


Apparently F5 shipped it in production in their proprietary version, while it was not compiled into the free version.

So technically the CVE should be filed against NGINX Plus, not nginx ;)


It was included in stable releases but behind a compile time feature flag. Personally I think it was a very weird hill to die on. Weird to take CVEs so personally.


AFAIR it was in a stable release BUT in test code that's behind a flag (disabled by default), so... it's a bit "Schrödinger".


Rust has been getting A LOT of traction in the last few years.

It seems many programs are getting Rust replacements in one shape or another. If not being replaced outright, they are, at the very least, seeing Rust included in existing projects. The Linux kernel now supports Rust, which I believe is, at this time, for device drivers. I have also heard that Rust is now included in the Windows kernel.

Whether you/we like it or not, Rust is a serious contender! As a developer myself, I would prefer to focus my energy on Odin or Zig, but the reality is Rust is where job security is likely to be, alongside languages like Java, Python, C#, etc.

Over the years, job adverts have evolved from "must know OOP" to "must know design patterns" to "SOLID principles", and the next one is likely to be "must focus on memory safety!"

For many, whom Paul Graham referred to as "the pointy-haired boss" in his Revenge of the Nerds piece, these are just the cool buzzwords of the time. It is just the current thing we need to do/use. Rust is going to gain the edge with these buzzwords.

This brings me back to projects like freenginx. The reality is new developers coming in are not focusing on C or C++. Academia is likely to move away from them in the coming years, in favour of Rust. The reality is new projects in C or C++ might not last when alternatives are being written in Rust. As it has "memory safety", it will likely be the default choice many look for, especially in a business environment.

As I have mentioned, I have been focusing my energy on Odin and (a bit of) Zig. However, the reality is there are not going to be many jobs using them compared to Rust, C#, Python, etc. This means I will have to bite the bullet, and soon, and learn Rust.


I think you're confusing two things. "Create a more modern solution" is not the same statement as "rewrite it in Rust". The latter implies "make the same software in a new language"; the former suggests "create new software because the use cases and problems the software solves have changed".

The software was initially released 20 years ago. It was designed in an era where most HTTP requests served static assets and HTML without JS, and HTTPS was for special situations like banks and order forms. At that time multicore CPUs were just emerging, and servers still shipped with 1Gb Ethernet. JSON was a quirky idea, PHP was king, and Gmail was shaking up web dev with this funky new Ajax thing[1].

Point being, things have changed - these days there are a lot of situations where you wouldn't even consider nginx since it's not the right tool, there are situations where nginx is no longer the easiest component to use and/or deploy, there are situations where it's a giant hassle to maintain an nginx deployment compared to tools written for a "servers are cattle" world.

Put another way, the web was 13 years old and the internet was a little over 30, and both had really only gotten big over the previous decade. If we were talking aviation, it would be like arguing to retrofit the best fighters from WW1 at the dawn of WW2, or asking why we can't just strap jet engines on a DC-3.

nginx has been an amazing bit of software and a solid workhorse getting us here. But more and more people are looking at other solutions, because the problem to solve isn't the problem nginx was designed for. That's ok - the world changes all the time.

Software isn't done; we haven't even had computers for a hundred years. Heck, we haven't even finished figuring out all the ways to use computers yet. I'm leery of any claim that we fully solved some aspect of it, and more so when the claim is applied to tech from the early days of that field.

[1] it wasn't new new, but this was the first time most people really started to grasp what you could do on the web with javascript.


The person who actually wrote your quote explicitly referred to doing this as a “new project” within the quote.


Discussed previously: "Freenginx: Core Nginx developer announces fork" - https://news.ycombinator.com/item?id=39373327 (2024-02-14, 475 comments)


Now that Pingora is open source, it might make a lot of sense to build that project on top of Pingora. You'd get the battle-tested quality that nginx is lauded for. I wouldn't switch to a "fork" of nginx if it's a from-scratch rewrite with no proven track record, regardless of the track record of the author.


I know it's early days, but I think it's really broken to directly copy the upstream website, and it impacts how usable the product appears.

For example, https://freenginx.org/en/security_advisories.html states "All nginx security issues should be reported to security-alert@freenginx.org." and then has a complete history of bugs from prior to the fork (and thus were never freenginx issues).


> For example, https://freenginx.org/en/security_advisories.html states "All nginx security issues should be reported to security-alert@freenginx.org." and then has a complete history of bugs from prior to the fork (and thus were never freenginx issues).

What exactly to display on the security advisories page on a brand new fork project with no vulnerabilities yet seems like it could be a particularly hairy problem. You want users to visit the page and see the schema of information they would get in a security advisory, so they are prepared to use the website if they need to. So you could display an example advisory if none are available. But at that point it would be easier to keep the old advisories, because they also help to inform users on why they should upgrade from pre-fork versions.


Because it’s a fork those issues were in fact issues with freenginx IMO.


This might be a controversial opinion, but with the current Russian invasion of Ukraine and tensions between East and West, forking a project like Nginx to be maintained by a Russian company, by a person living in Russia, raises additional questions, especially considering how security-critical nginx is when it sits in front of all the traffic of your services. I'm thinking here as well of the case of Pavel Durov and his struggles with the Russian government.

From the post it seems their intentions are good, and towards better-secured software, and against business interfering with security. But the part where they say "I'm no longer able to control which changes are made in nginx within F5" might be a good thing. Should a single person have that much control over a critical piece of infrastructure?


The same argument based on the “Russia vs the global west” narrative can be leveled perfectly accurately against western-citizen maintained software.

Everyday Russian developers are no more or less involved in their country's empire-building than everyday American developers are in theirs (lest we forget the 12(!) foreign countries which US forces are currently invading). "Russian" is not synonymous with "suspect"; or, if indeed it is, perhaps "American" should be, too.


Like some guy from Finland?


To the best of my knowledge, the Finnish government hasn't made any attempts at interfering with software development in Finland, or at spying on software users. The same cannot be said of Russia.

I'm not saying Russian people cannot make safe and secure software, just saying that people living in Russia have historically been targets of pressure.

* and to be honest, the USA and China do not have the cleanest of records either.


Pretty sure this is a reference to Linus Torvalds.


More power to him! But calling the fork freenginx is perhaps not the best decision since there is nothing unfree about nginx. Call it rustnginx instead.


Agreed about the current name, but a "rusty engine" doesn't inspire confidence in a web server. :)



Excellent, I wish this project success!


Absolutely. This is great news.


[flagged]


What?


He might be referring to Angie, a fork of Nginx created after a bunch of Russian software devs were fired from F5.

I guess there were implications there.


I'm glad I stuck with Apache httpd whilst everyone was moving to nginx. None of this drama with httpd.


Drama? This feels like your own projection.

Nginx is a stable and reliable piece of software. It was before this, and it will be after this. What happened is that someone who used to work on it now works on a new thing. This happens; people are not slaves to their previous projects. They can and should do new things when they wish to.

In time, if the new thing becomes useful in some way, perhaps some people will choose to use it instead of nginx. Perhaps that day will never come. Both of those are fine.

What is certain is that nginx has such critical mass behind it that even if a core developer moves away to do something else, it won’t shrivel up and die. If you are an nginx user today, in a personal or professional capacity, most likely you will be able to use it until the end of your life with no special worries.

I have no problem with you using Apache httpd, and I am happy that you are happy with your choice. On the other hand, it does not feel useful to portray this as some special reason why one should not have chosen nginx.


Is it that much of a drama? Seems more like this is the whole selling point of open source - if you don't like something, you fork it, and people can vote with their installations. This kind of stuff happens all the time in private companies, but we as users have much less influence over the outcome.


I'm having a great time with Caddy. My whole Jupyter stack is reverse proxied through Caddy, and the Caddy config is a single line compared to the 100-line mess that they recommend people use for Apache or nginx.

(Yes, Caddy takes care of upgrading your headers, websockets, certificates, all of it.)


once you actually need to configure Caddy, it is an abomination beyond imagination — the docs, Caddyfile, JSON config — all of them lack coherence or sensible design

never touching it again

good defaults matter; nginx configs were very concise and readable in 2005-2015 when I was using it heavily; http/3, ws, acme — perhaps new tech is yet to be incorporated properly (and maybe it never will)


I genuinely haven't had a problem with it -- I use it for all my projects, and it's seamless at rewriting base URLs, handling subdomains, proxies, all of it.


I'd rather have gotten 15 years out of a superior solution personally.


A fork is not necessarily bad for users. Nginx is already a great product and I believe neither the Nginx Inc version nor freenginx will become worse over time. Likely they will gain different features, have different release schedules, etc.


Apache needs slowloris mitigation, and a dozen other tweaks to get it secure (mpm-itk, fail2ban rules, and anti-spammer mods etc.) It can be good, but most people run the vanilla install. Sometimes it is the only solution... ;-)

However, Nginx gets over 20% more concurrent users per host... Hence the main reason people put up with its lack of features and silly HA design.

Forks can be healthy, but it usually quickly degrades into a competition for users.

Hope everyone has fun =)


Er, does nginx have some fail2ban built-in? Because it really sounds like that's apples-to-oranges.


Apache on the web is constantly probed for remote exploits, and it requires more preemptive precautions. These issues go well beyond the standard checks for bad bot traffic.

You are correct in that many of the CVEs were not technically Apache itself, but rather supporting libraries and old mods.

Best of luck =)


Obviously, other web servers are also continuously being probed...


This is how Apache came into being (without so many hard feelings, maybe), though -- as (what we would now call) a fork of NCSA httpd.



