
Third party? BlackLotus was the first case we saw actually targeting individuals, and that was a vulnerability in the Windows bootloader.

No, there's nothing special about the spec's secure boot variables as far as boot services go - you can modify those at runtime as well. We use boot service variables to protect the MOK key in Shim, but that's outside what the spec defines as secure boot.

I really don't understand why people keep misunderstanding this post so badly. It's not a complaint about C as a programming language. It's a complaint that, due to so much infrastructure being implemented in C, anyone who wants to interact with that infrastructure is forced to deal with some of the constraints of C. C has moved beyond merely being a programming language and become the most common interface for in-process interoperability between languages[1], and that means everyone working at that level needs to care about C even if they have no intention of writing C.

It's understandable how we got here, but it's an entirely legitimate question - could things be better if we had an explicitly designed interoperability interface? Given my experiences with cgo, I'd be pretty solidly on the "Fuck yes" side of things.

(Of course, any such interface would probably end up being designed by committee and end up embodying chunks of the ALGOL ABI or something instead, so this may not be the worst possible world, but that doesn't mean we have to like it)

[1] I absolutely buy the argument that HTTP probably wins out for out-of-process
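
To make that concrete, here's a hedged sketch (all names hypothetical, invented for illustration) of the kind of C header that ends up acting as the neutral meeting point: the Rust side would implement it behind #[repr(C)]/extern "C", the Go side would consume it through cgo, and both have to agree on C's type sizes, struct layout and calling convention to do so.

    /* widget.h - a hypothetical C ABI shared between a Rust implementation
     * and a Go caller. Everything is expressed in C's terms: C integer
     * types, C struct layout rules, C calling convention. */
    #include <stddef.h>
    #include <stdint.h>

    typedef struct {
        uint32_t id;      /* fixed-width to dodge the "wobbly" int sizes */
        uint8_t  flags;
    } widget_config;      /* layout follows C's padding/alignment rules */

    /* Rust side: #[no_mangle] extern "C" functions taking #[repr(C)] types.
     * Go side: imported through cgo and called like any C function. */
    void *widget_new(const widget_config *cfg);
    int   widget_process(void *handle, const uint8_t *buf, size_t len);
    void  widget_free(void *handle);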


I don't see that as a problem. C has been the bedrock of computing since the 1970s because it is the most minimal way of speaking to the hardware in a mostly portable way. Anything can be done in C, from writing hardware drivers to GUI applications and scientific computing. In fact, I deplore the day people stopped using C for desktop applications and moved to bloated, sluggish web frameworks instead. Today's desktop apps are slower than Windows 95 era GUI programs because of that.

You're still thinking of C as a programming language, but the blog post isn't about that; it's about using C to describe interfaces between other languages.

> because it is the most minimal way of speaking to the hardware in a mostly portable way.

C is not really the most minimal way to do so, and a lot of C is not portable anyway unless you want to go mad. It's just the most minimal and portable thing that we settled on. It's "good enough", but it still has a ton of resolvable problems.


OK, you're still missing the point. This isn't about C being good or bad, or suitable or unsuitable. It's about whether it's good that C has, through no deliberate set of choices, ended up embodying the interface that lets us build Rust that can be called by Go.

Yes, because C is, by virtue of its history and central role in the development of all mainstream operating systems, the lowest common denominator.

Also, if I remember correctly, the first Rust and Go compilers were written in C.


Yes! It's easy to see why we got here, but that doesn't mean it's the optimal outcome!

Rust used OCaml, and Go only used C because it was partially created by the C authors, who repurposed the Plan 9 C compiler for Go.

Usually it helps to know why a decision was taken; it isn't always because of the technology alone.


OCaml was used for Rust.

> Yes, because C is, by virtue of its history

Sure, history is great and all, but in C it's hard to reliably say "this int is 64 bits wide", because of the wobbly type system. Plus there's the whole historical baggage of not having 128-bit wide ints. Or sane strings (not null-terminated).


> in C it's hard to reliably say "this int is 64 bits wide"

That isn't really a problem any more (since C99). You can define it as uint64_t.

But we have a ton of existing APIs that are defined using the wobbly types, so we're kind of stuck with them. And even new APIs use the wobbly types because the author didn't reach for the fixed-width ones for whatever reason.

But that is far from the only issue.

128-bit ints are definitely a problem though; you don't even get agreement between different compilers on the same OS on the same hardware.
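
For what it's worth, a minimal sketch of what "define it as uint64_t" looks like in practice; the PRIu64/PRId64 macros are the tell that the wobbly underlying types still leak through at the printf level:

    #include <inttypes.h>
    #include <stdio.h>

    int main(void) {
        uint64_t x = UINT64_C(1) << 40;  /* guaranteed exactly 64 bits wide */
        int64_t  y = -42;                /* exactly 64 bits, two's complement */
        /* "%lu" vs "%llu" depends on which wobbly type uint64_t maps to
         * on a given platform, hence the format macros. */
        printf("%" PRIu64 " %" PRId64 "\n", x, y);
        return 0;
    }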


> 128-bit ints are definitely a problem though; you don't even get agreement between different compilers on the same OS on the same hardware.

You technically have _BitInt(128) in C23, but I'm not sure that would even generate what you expect it to.
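
A hedged illustration of the 128-bit situation: GCC and Clang offer __int128 as a nonstandard extension (MSVC has no equivalent integer type), and C23 adds _BitInt(N), but neither is guaranteed to exist everywhere, so nothing here is portable the way uint64_t is:

    #include <limits.h>   /* BITINT_MAXWIDTH in C23 */

    /* GCC/Clang extension, absent on MSVC; ABI handling differs by target. */
    #if defined(__SIZEOF_INT128__)
    typedef __int128 i128_gnu;
    #endif

    /* C23 keyword, but only valid up to BITINT_MAXWIDTH, which the
     * standard only requires to be at least 64. */
    #if __STDC_VERSION__ >= 202311L && BITINT_MAXWIDTH >= 128
    typedef _BitInt(128) i128_c23;
    #endif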


Of some computing platforms.

> I don't see that as a problem.

It kinda is. Because it was made in the 1970s, and it shows (cough null-terminated strings uncough).

Or, you know, having a 64-bit wide integer. Reliably.

You did read the article, right?
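
For the null-terminated-strings complaint above, a minimal sketch of the usual workaround pattern (an illustrative struct, not any particular library's API): carry the length explicitly instead of relying on the terminating NUL.

    #include <stddef.h>

    /* Pointer + length: embedded '\0' bytes are fine, and the length is
     * known in O(1) rather than via a strlen() walk. */
    typedef struct {
        const char *data;
        size_t      len;
    } str_view;

    /* Only valid for string literals (sizeof includes the trailing NUL). */
    #define STR_VIEW(lit) ((str_view){ (lit), sizeof(lit) - 1 })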


> could things be better if we had an explicitly designed interoperability interface?

Yes, we could define a language-agnostic binary interoperability standard with its own interface definition language, or IDL. Maybe call it something neutral like the Component Object Model, or just COM[1]. :)

[1] https://en.wikipedia.org/wiki/Component_Object_Model
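
For anyone who hasn't looked under the hood, COM's binary contract is simple enough to write out directly in C as a struct of function pointers. A very rough sketch (not real COM headers, and glossing over IUnknown, HRESULTs, and the IDL-generated glue):

    #include <stdint.h>

    /* A COM-style interface is a pointer to a vtable with a fixed,
     * language-agnostic layout. Real COM vtables always begin with
     * QueryInterface/AddRef/Release inherited from IUnknown. */
    typedef struct ICounter ICounter;

    typedef struct {
        uint32_t (*AddRef)(ICounter *self);
        uint32_t (*Release)(ICounter *self);
        int32_t  (*Increment)(ICounter *self);
    } ICounterVtbl;

    struct ICounter {
        const ICounterVtbl *lpVtbl;  /* first member of every COM object */
    };

    /* Callers in any language that can follow this layout just do:
     *   obj->lpVtbl->Increment(obj); */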


The general idea is sound. The implementation less so.

VHDL vs Verilog is a good parallel from the chip world. VHDL was designed from the ground up.

Verilog is loosely based on C. Most designs are done in Verilog.


VHDL tends to reign in European hardware companies.

And Japan, I'm told.

I wonder why there aren't many successful European hardware products.

I would assert that any company that doesn't go bankrupt is successful; it doesn't need to be late-stage capitalism.

Other than that: Nokia (until Microsoft placed an agent on it), Philips (which contributed to CDs), ASML...


Three is not many

ST, Infineon, ARM(!!), NXP, Raspberry Pi Holdings, Nordic Semiconductor (original developers of AVR, now known for Bluetooth chips and such)...

" any company that doesn't go bankrupt is successful"

Well, VHDL was heavily based on Ada.

Of course things could be better. That doesn’t mean that we can just ignore the constraints imposed by the existing software landscape.

It’s not just C. There are a lot of things that could be substantially better in an OS than Linux, for example, or in client-server software and UI frameworks than the web stack. It nevertheless is quite unrealistic to ditch Linux or the web stack for something else. You have to work with what’s there.


It's a somewhat weird product. There's no real access to any of the hardware that made the Amiga impressive at the time, without an add-on graphics card you're going to have a bad time in X, and it replaces AmigaOS entirely so you don't have any ability to run Amiga software at the same time (it's not like A/UX in that regard). It's an extremely generic Unix, and I don't know who Commodore really thought they were selling it to. But despite all this it was cheaper than a comparable Sun? Extremely confusing.

Wasn't there some government procurement rule that required any computers they bought be able to run UNIX? At least, that's the reason commonly cited for why Apple created A/UX, their Unix for 68k Macs, originally released in 1988.

Commodore wanted to shake their "just for games" reputation. They wanted to put the 'business' in Commodore Business Machines.

Their SVR4 was the first real port of SVR4 to a commercially available system that I know of.


It wasn't given enough time or resources to be awesome. Being an SGI alternative was probably being floated.

The early versions of most products suck. It's a matter of throwing down enough time and resources to get through that phase.


Well that sounds disappointing. These days you're probably better off just running Linux or NetBSD on your old Amigas. But the ability to run true multiuser Unix on cheap desktop hardware was probably immensely valuable to businesses at the time, so it might've been worth it, even if you forgo much of the Amiga's Amiganess. The Tandy Model 16 family was not an Amiga by any stretch, but they had 68000 CPUs and were Unix capable in the form of Xenix. So they ran a lot of small business back office stuff until well into the 90s I'm guessing, despite first coming out in 1982.

Atari Corp was doing the same thing around the same time as Commodore was, with their own branded SysV fork. Both were trying to get into the later stages of the workstation market because it was seen as a new revenue source at a time when the "home computer" market was disappearing.

http://www.atariunix.com/

and the background:

https://web.archive.org/web/20001001024559/http://www.best.c...

But I distinctly remember an editorial in UnixWorld magazine (yes, we had magazines like that back then you could buy in like... a drug store...) with the headline "Up from toyland" talking about the Atari TT030 + SysV. Not exactly flattering.

The reality is by 1992, 93, 94 the workstation market was already being heavily disrupted by Linux (or other x86 *nix/BSD) on 386/486. The 68k architecture wasn't compelling anymore (despite being awesome), as Motorola was already pulling the rug out from under it.

And, yeah, many people just ran NetBSD on their Atari TTs or Falcon030s anyways.


>The reality is by 1992, 93, 94 the workstation market was already being heavily disrupted by Linux

From Wikipedia:

> Linux (/ˈlɪnʊks/ LIN-uuks)[16] is a family of open source Unix-like operating systems based on the Linux kernel,[17] a kernel first released on September 17, 1991, by Linus Torvalds

The first Linux was not able to run much.


I personally bought a 486 (and left the Atari ST world) in the winter of 1992 precisely so I could run the earliest versions of Linux. Which were next to useless, but so was running most of the Unix-ish stuff on 68k platforms.

I imagine any home computer manufacturer looked at 68000 workstation machines like Sun's and said "we have the same CPU; if we have a Unix we can market our computers as workstations at a fraction of the cost". You also had Apple release A/UX for their 68k Macs.

May be of interest - here's what running Linux and NetBSD on Amiga is like these days: https://sandervanderburg.blogspot.com/2025/02/running-netbsd... https://sandervanderburg.blogspot.com/2025/01/running-linux-...

Sun and NeXT also sold 68k Unix workstations at the time. IMHO, the thing about the Amiga was that it was not seen as a business machine. Commodore in general was seen as a home computer company, and really one aimed at gaming first. AFAIK they didn't even have computers with the specs to compete with what Sun, SGI, HP, and others were doing.

The Sun and NeXT machines were pricey. Commodore may well have been trying to break into the business market by releasing an affordable business-attractive OS for the Amiga. They were also starting to sell PCs around this time. It certainly tracks with their scattershot marketing efforts late in their history.

There were video and multimedia applications at the time that could ONLY be tackled by an Amiga unless you wanted to pay $10,000 or more for specialized equipment. Besides the Video Toaster, which 'nuff said, Amigas also provided teletext-like TV information services in the USA, such as weather forecasts and the Prevue Channel (a cable channel that scrolled your cable system's program listings). Teletext itself never really caught on here.

Anime fan subtitling was also done almost exclusively on Amiga hardware.

Amiga gained a reputation as a glorified game console in the wider market, but those who knew... knew.


The vast majority of it is just recompiled AT&T code. The Amiga-specific stuff is provided in object form and largely shipped with debug symbols, so it'd be pretty easy to get something approximating the original.

What software? The Apricot ran DOS but didn't implement a full PC-compatible BIOS, so some software would work and some wouldn't. Even back in 1984 people didn't call it a PC compatible.


The transition enabled faster and more frequent service, which is something you probably do care about if you need to get into the office and are deciding how to get there.


/ has to be writeable (or have separate writeable mounts under it), /usr doesn't. The reasons for unifying under /usr are clearly documented and make sense and it's incredibly tedious seeing people complain about it without putting any effort into understanding it.


Documented where?



OK, there are 4 reasons listed there:

> Improved compatibility [...] That means scripts/programs written for other Unixes or other Linuxes and ported to your distribution will no longer need fixing for the file system paths of the binaries called, which is otherwise a major source of frustration. [..]

Script authors should use the binary name without a path and let the user's $PATH choose which binary to use and from where.

This union denies me the choice of using the statically linked busybox in /bin as a fallback if the "full" binaries in /usr are corrupted or segfault after some library update.

> Improved compatibility with other Unixes (in particular Solaris) in appearance [...]

I don't care about appearances and I care even less about what Solaris looks like.

Did they take a survey of what Linux users care about, or just impose their view on all of us because they simply know better? Or were they paid to "know better" - I never exclude corruption.

> Improved compatibility with GNU build systems. The biggest part of Linux software is built with GNU autoconf/automake (i.e. GNU autotools), which are unaware of the Linux-specific /usr split.

Yeah, right. Please explain to me how GNU, the userspace of 99% of all Linux distributions, isn't aware of the Linux-specific /usr split.

And how is this any different from #1 ?

> Improved compatibility with current upstream development

AKA the devs decided and users' opinions are irrelevant. This explains why GNU isn't aware of the Linux /usr split - they simply don't want to be aware.


A meaningful gamble IBM made at the time was whether the BIOS was copyrightable - Williams v. Artic wasn't a thing until 1982, and it was really Apple v. Franklin in 1983 that left the industry concluding they couldn't just copy IBM's ROMs.


The limiting factor is the horizontal refresh frequency. TVs and older monitors were around 15.75kHz, so the maximum number of horizontal lines you could draw per second is around 15750. Divide that by 60 and you get 262.5, which is therefore the maximum vertical resolution (real world is lower for various reasons). CGA ran at 200 lines, so was safely possible with a 60Hz refresh rate.

If you wanted more vertical resolution then you needed either a monitor with a higher horizontal refresh rate or you needed to reduce the effective vertical refresh rate. The former involved more expensive monitors, the latter was typically implemented by still having the CRT refresh at 60Hz but drawing alternate lines each refresh. This meant that the effective refresh rate was 30Hz, which is what you're alluding to.

But the reason you're being downvoted is that at no point was the CRT running with a low refresh rate, and best practice was to use a mode that your monitor could display without interlace anyway. Even in the 80s, using interlace was rare.
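
The arithmetic in one place, as a trivial sketch using the NTSC-ish numbers above:

    #include <stdio.h>

    int main(void) {
        double h_freq_hz = 15750.0;  /* ~15.75 kHz horizontal scan rate */
        double v_freq_hz = 60.0;     /* 60 Hz vertical refresh */

        /* Lines per field, including blanking: 262.5 for NTSC. */
        double lines_per_field = h_freq_hz / v_freq_hz;
        printf("lines per field: %.1f\n", lines_per_field);

        /* Interlace doubles the addressable lines per full frame but
         * halves how often any individual line is redrawn. */
        printf("interlaced: ~%.0f lines, each redrawn at %.0f Hz\n",
               2.0 * lines_per_field, v_freq_hz / 2.0);
        return 0;
    }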


Interlace was common on platforms like the Amiga, whose video hardware was tied very closely to television refresh frequencies for a variety of technical reasons which also made the Amiga unbeatable as a video production platform. An Amiga could do 400 lines interlaced NTSC, slightly more for PAL Amigas—but any more vertical resolution and you needed later AmigaOS versions and retargetable graphics (RTG) with custom video hardware expansions that could output to higher-freq CRTs like the SVGA monitors that were becoming commonplace...


Amigas supported interlace, but I would strongly disagree that it was common to use it.


CGA ran pretty near 262 or 263 lines, as did many 8-bit computers. 200 addressable lines, yes, but the background color accounted for about another 40 or so lines, and blanking took up the rest.


Everything runs at 262.5 lines at 60Hz on a 15.75kHz display - that's how the numbers work out.


The irony is that most of those who downvote didn't spend hours in front of those screens as I did. And I do remember these things were tiring, particularly in the dark. And the worst of all were computer CRT screens that weren't interlaced (in the mid 90s, before higher refresh frequencies started showing up).


I spent literally thousands of hours staring at those screens. You have it backwards. Interlacing was worse in terms of refresh, not better.

Interlacing is a trick that lets you sacrifice refresh rates to gain greater vertical resolution. The electron beam scans across the screen the same number of times per second either way. With interlacing, it alternates between even and odd rows.

With NTSC, the beam scans across the screen 60 times per second. With NTSC non-interlaced, every pixel will be refreshed 60 times per second. With NTSC interlaced, every pixel will be refreshed 30 times per second since it only gets hit every other time.

And of course the phosphors on the screen glow for a while after the electron beam hits them. It's the same phosphor, so in interlaced mode, because it's getting hit half as often, it will have more time to fade before it's hit again.
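
In numbers, a trivial back-of-the-envelope assuming ideal NTSC timing:

    #include <stdio.h>

    int main(void) {
        double field_rate_hz = 60.0;  /* NTSC fields per second */
        /* How long a given scanline's phosphor waits between refreshes. */
        printf("non-interlaced: %.1f ms between refreshes\n",
               1000.0 / field_rate_hz);          /* ~16.7 ms */
        printf("interlaced:     %.1f ms between refreshes\n",
               1000.0 / (field_rate_hz / 2.0));  /* ~33.3 ms */
        return 0;
    }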


Have you ever seen high speed footage of a CRT in operation? The phosphors on most late-80s/90s TVs and color graphic computer displays decayed instantaneously. A pixel illuminated at the beginning of a scanline would be gone well before the beam reached the end of the scanline. You see a rectangular image, rather than a scanning dot, entirely due to persistence of vision.

Slow-decay phosphors were much more common on old "green/amber screen" terminals and monochrome computer displays like those built into the Commodore PET and certain makes of TRS-80. In fact there's a demo/cyberpunk short story that uses the decay of the PET display's phosphor to display images with shading the PET was nominally not capable of (due to being 1-bit monochrome character-cell pseudographics): https://m.youtube.com/watch?v=n87d7j0hfOE


Interesting. It's basically a compromise between flicker and motion blur, so I assumed they'd pick the phosphor decay time based on the refresh rate to get the best balance. So for example, if your display is 60 Hz, you'd want phosphors to glow for about 16 ms.

But looking at a table of phosphors ( https://en.wikipedia.org/wiki/Phosphor ), it looks like decay time and color are properties of individual phosphorescent materials, so if you want to build an RGB color CRT screen, that limits your choices a lot.

Also, TIL that one of the barriers to creating color TV was finding a red phosphor.


There are no pixels in a CRT. The guns go left to right, \r\n, left to right, while True for line in range(line_number).

The RGB stripes or dots are just stripes or dots; they're not tied to pixels. There are three guns that are physically offset from each other, coupled with a strategically designed mesh plate, in such a way that the electrons from each gun sort of moiré into hitting only the right stripes or dots. Apparently fractions of an inch of offset were all it took.

The three guns, really more like fast-acting lightbulbs, received brightness signals for their respective RGB channels. Incidentally, that means they could swing between zero and maximum brightness at something like 60[Hz] * 640[px] * 480[px] times per second.

Interlacing means the guns draw every other line, but not necessarily every other pixel, because CRTs have a finite beam spot size, at the least.
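
Roughly quantified, as a back-of-the-envelope sketch (real 640x480@60 VGA timing includes blanking intervals, so the actual pixel clock is 25.175 MHz rather than this naive figure):

    #include <stdio.h>

    int main(void) {
        /* Naive visible-pixel rate: refreshes * columns * rows. */
        double naive_rate_hz = 60.0 * 640.0 * 480.0;  /* ~18.4 million/s */
        printf("naive pixel rate: %.1f MHz\n", naive_rate_hz / 1e6);
        printf("real VGA 640x480@60 pixel clock: 25.175 MHz (blanking included)\n");
        return 0;
    }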


No, you don't sacrifice refresh rate! The refresh rate is the same. 50 Hz interlaced and 50 Hz non-interlaced are both ~50 Hz, approx 270 visible scanlines, and the display is refreshed at ~50 Hz in both cases. The difference is that in the 50 Hz interlaced case, alternate frames are offset by 0.5 scanlines, the producing device arranging the timing to make this work on the basis that it's producing even rows on one frame and odd rows on the other. And the offset means the odd rows are displayed slightly lower than the even ones.

This is a valid assumption for 25 Hz double-height TV or film content. It's generally noisy and grainy, typically with no features that occupy less than 1/~270 of the picture vertically for long enough to be noticeable. Combined with persistence of vision, the whole thing just about hangs together.

This sucks for 50 Hz computer output. (For example, Acorn Electron or BBC Micro.) It's perfect every time, and largely the same every time, and so the interlace just introduces a repeated 25 Hz 0.5 scanline jitter. Best turned off, if the hardware can do that. (Even if it didn't annoy you, you'll not be more annoyed if it's eliminated.)

This also sucks for 25 Hz double-height computer output. (For example, Amiga 640x512 row mode.) It's perfect every time, and largely the same every time, and so if there are any features that occupy less than 1/~270 of the picture vertically, those fucking things will stick around repeatedly, and produce an annoying 25 Hz flicker, and it'll be extra annoying because the computer output is perfect and sharp. (And if there are no such features - then this is the 50 Hz case, and you're better off without the interlace.)

I decided to stick to the 50 Hz case, as I know the scanline counts - but my recollection is that going past 50 Hz still sucks. I had a PC years ago that would do 85 Hz interlaced. Still terrible.


You assume that non-interlaced computer screens in the mid 90s were 60Hz. I wish they were. I was using Apple displays and those were definitely 30Hz.


Which Apple displays were you using that ran at 30Hz? Apple I, II, III, Macintosh series, all ran at 60Hz standard.

Even interlaced displays were still running at 60Hz, just with a half-line offset to fill in the gaps with image.


I think you are right; I had the LC III and Performa 630 specifically in mind. For some reason I remember they were 30Hz, but everything I find googling it suggests they were 66Hz (both video card and screen refresh).

That being said, they were horrible on the eyes, and I think I only got comfortable when 100Hz+ CRT screens started being common. It's just that the threshold for comfort is higher than I remember it, which explains why I didn't feel any better in front of a CRT TV.


Could it be that you were on 60Hz AC at the time? That is near enough to produce something called a "Schwebung" (a beat) when artificial lighting is used, especially with fluorescent lamps like those that were common in offices. They need to be "phasenkompensiert" (phase compensated/balanced), meaning they have to be on a different phase of the mains electricity than the computer screens are on. Otherwise even not-so-sensitive people notice that as interference, a sort of flickering. It happens less when you are on 50Hz AC and the screens run at 60Hz, but with fluorescents on the same phase it can still be noticeable.


I doubt that. Even in color.

In 1986 I got an Atari ST with a black-and-white screen. Glorious 640x400 pixels across 11 or 12 inches. At 72Hz. Crystal clear.

https://www.atari-wiki.com/index.php?title=SM124


If they weren't interlaced then they were updating at 60Hz, even in the 80s. You're being very confidently wrong here.

