The real problem is software teams being given deadlines without ever being consulted about estimates: "This needs to be done in 60 days." Then we start trading features for time, and the customer winds up with a barely functioning MVP, just so we can say we made the deadline and can fix all the problems in phase 2.
OK, so that sounds fine to me. Software delivers value when having it beats not having it, even only some of the time. If it barely functions, the customer is probably still happier with it than without it, and may be willing to fund improvements.
I've experimented with all those platforms, and macOS feels as much like "Unix" to me as a Sun SPARCstation or SGI Indy did 30 years ago.
What is "Unix" to you? To me, it's the common shell commands / utilities and a POSIX API. If I can download some GNU source, run ./configure; make; make install ... it's Unix.
Certainly, macOS is a "weird" Unix if you compare it to Solaris and look at the administrative bits. But, then again, IBM's AIX is very weird, too. And that's one of the few commercial Unix implementations still kicking.
> Certainly, macOS is a "weird" Unix if you compare it to Solaris and look at the administrative bits. But, then again, IBM's AIX is very weird, too. And that's one of the few commercial Unix implementations still kicking.
That's why I said "Unix" has always been kind of confusing as a name: a lot of Unixes are very different from one another. I've never used AIX personally, but I know it's pretty funky. And there have been weirder Unixes; Domain/OS was another odd one. At least a few others had split BSD/SysV "personalities", I've read.
> If I can download some GNU source, run ./configure; make; make install ... it's Unix.
On the one hand, I agree with this.
But then, by that standard, you could call basically every OS in use today "Unix", including Windows via Cygwin or WSL.
To me, "Unix" is epitomized by Sun's fix for SunOS 4 for disabling Yellow Pages and using only DNS for hostname lookups.
Their official advice? To unpack the libc shlib, delete the object code for the Yellow Page functions, then repackage it into a new libc version.
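From memory, the procedure looked roughly like this (a sketch based on the old SunOS 4 resolver FAQ; the file names and versioning details are approximate):

    cd /usr/lib/shlib.etc        # where Sun shipped libc's pieces
    mkdir tmp && cd tmp
    ar x ../libc_pic.a           # unpack the PIC objects that make up libc
    rm yp*.o                     # drop the Yellow Pages lookup code
    cd .. && vi lorder-sparc     # remove the same objects from the link order
    make libc.so                 # relink; install under a bumped minor version, then ldconfig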
That feels like Unix to me, in a way that macOS just never will be. Which is also perfectly okay with me.
So Unix has to feel like dealing with old cruft to you? ;) I remember the SunOS 4 days and the annoying setup process for DNS. Those were the first Unix systems I worked with in a professional capacity.
I have a SPARC in my collection, but it's running Solaris and is too new to run SunOS 4. I'm considering getting a SPARCstation 10 or something so I can relive the SunOS days. That was my favorite early-'90s Unix; most open source software had first-class support for SunOS.
Linux is "Unix" in my mind, though not UNIX (TM). WSL follows, since it is really virtualization under the hood. (WSL2, at least.). Cygwin seems like a gray area... Unix-like environment maybe?
> So Unix has to feel like dealing with old cruft to you? ;)
Well, maybe :)
It's something about the system being made of a lot of messy parts that can be split apart and taped back together. Reductively, all computers are like this, but SunOS and the other "unixes" come apart and go back together more easily.
For instance, besides enabling DNS, I've extended its libc quite a bit, to get modern OpenSSL and curl to build, as well as KDE 1 just for kicks.
You can do the same with almost any OS (one that doesn't lock you out in the name of security), but it feels easier with a "Unix". Linux is also very much like this!
> I have a SPARC in my collection, but it's running Solaris and is too new to run SunOS 4.
You could always run NetBSD and use COMPAT_SUNOS to run a SunOS chroot ;) I haven't tried running Xsun this way but it'd probably go...
Haha, I haven't yet, but I want to. Maybe soon! Though extending libc isn't that exciting, really; that's kind of the cool (Unix-y?) thing about it. You just (IIRC) extract the static archive, add whatever .o object files with whatever symbols, add those symbols to the manifest, and pack it back up, and any C program on the system can call them. Since all C functions were implicitly declared at the time (header files only had structs and enums), you can trivially add whatever you want.
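Roughly, that's the other direction of the YP trick: adding a symbol instead of removing one. A hedged sketch (strnew.c and its function are hypothetical, and the file names come from my recollection of the shlib.etc layout):

    cc -pic -c strnew.c                    # compile the new function as a PIC object
    cp strnew.o /usr/lib/shlib.etc/tmp/    # drop it in with the unpacked libc objects
    echo strnew.o >> /usr/lib/shlib.etc/lorder-sparc   # add it to the link manifest
    (cd /usr/lib/shlib.etc && make libc.so)            # repack libc
    # any C program can now call strnew() with no header,
    # thanks to implicit declarations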
> Ultra 5
You should give it a shot! Even NetBSD/sparc64 has support for SunOS 4 binaries... allegedly.
If you were so inclined, you could abuse the kernel a bit and tell the compat layers to use root as their search path. I did this to build a "Linux system" with a NetBSD kernel and a full GNU/Linux userland, just for kicks.
In my mind, you could do the same for SunOS. There's also COMPAT_MACH and COMPAT_DARWIN... imagine NeXT/SPARC binary compatibility alongside SunOS.
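A sketch of what that involves (option names as they appeared in NetBSD kernel configs of that era; the symlink below is a userland approximation of the root-as-search-path kernel hack, not the same patch):

    # kernel side: build in the compat layers
    #   options COMPAT_SUNOS
    #   options COMPAT_MACH
    #   options COMPAT_DARWIN
    # userland side: the compat layers try /emul/<os>/<path> before falling
    # back to the native path, so a symlink makes "root" the search path:
    mkdir -p /emul
    ln -s / /emul/sunos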
Neither of them prevents inbound connections, individually or in combination.
I don't really think "inbound connections work fine and you're basically just praying that the people who can make them simply won't" counts as being secure. I'll admit that using RFC 1918 does limit the set of people who can make them; if you made that your argument, you'd have more of a point than one based on NAT.
Okay! I didn't say it was absolutely secure; a firewall is obviously preferable. I'm just saying it's shades of gray: non-routable addresses provide a level of security.
No, the router doesn't forward it because it doesn't get there in the first place. Your 192.168.1.0/24 private network is not going to be routed across the internet.
Non-routable internal addresses are pretty effective at keeping external actors out. When most people say "NAT", that is what they mean.
You are technically correct that 1) keeping external actors out is not a property of NAT itself, and 2) someone could theoretically establish routing to your RFC 1918 network if they controlled your ISP or had layer-2 adjacency.
Practically speaking, this is not a problem. NAT plus RFC 1918 addressing provides a layer of security. Is a firewall better? Of course.
Yes, but NAT combined with RFC 1918 private addresses does provide a layer of security. That is the NAT configuration used by 99.99% of residential users, and it is what most people mean by "NAT".
If your address cannot be routed across the Internet, it can't be accessed, firewall or not.
I have worked in corporate environments where we NATed public, routable addresses for historical reasons. That setup would be insecure without a firewall, but it's not what most people are discussing.
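For concreteness, here's a minimal sketch of the residential setup everyone is describing, in iptables terms (the wan0 interface name and the subnet are assumptions):

    # rewrite outbound source addresses to the router's public address
    iptables -t nat -A POSTROUTING -o wan0 -s 192.168.1.0/24 -j MASQUERADE
    # the firewall part: NAT alone does not drop inbound packets that reach
    # wan0 with a deliverable path to an internal host
    iptables -A FORWARD -i wan0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
    iptables -A FORWARD -i wan0 -j DROP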
Worse, there are sales in NOT doing it. When I buy a Mac, I get extra memory "just in case." I would've been fine with 24 GB on my MacBook Pro, but I got 48.
Back in 1993, I remember booting SLS Linux on a 386 laptop with 3 MB of RAM (1 MB on the motherboard, a 2 MB expansion). I could barely get it to startx and open an xterm, so I mostly used it from the console!