
Chips and Cheese's GPU analyses are pretty detailed, but they need to be taken with a huge grain of salt because the results only really apply to OpenCL, and nobody buying NVIDIA or AMD GPUs for compute runs OpenCL on them; it's either CUDA or HIP, which differ widely in parts of their compilation stack.

After reading the entire analysis, I'm left wondering: which observations in this analysis, if any, actually apply to CUDA?


For benchmarking code like this, CUDA, HIP and OpenCL are almost the same. You will only see the difference in big codebases, where you launch multiple kernels and move data between them.

Otherwise OpenCL is very good as well, with the added benefit of running on all GPUs.


> it's either CUDA or HIP, which differ widely in parts of their compilation stack.

This is an ironic comment - OpenCL uses the same compiler as CUDA on NVIDIA and HIP on AMD.


Sort of. Same compiler backend, mostly, but the sets of intrinsics and the semantic rules are different.


I have no idea what your point is: same compiler, different frontend. Yes, that's literally what I said.


Asynchronous clean-up is a solved problem in C++ senders & receivers.

I wonder what's so different about Rust that they can't solve it in the same way.


> My biggest concern with Rust is the sloppiness around the distinction between a "compiler bug" and a "hole in the type system."

That bug is marked as I-unsound, which means that it introduces a hole in the type system.

And so are all other similar bugs. So your concern seems to be unfounded: you can actually click on the I-unsound label and view all current bugs of this kind (and past closed ones as well!).


Perhaps I should have said "hole in the type theory" to clarify what I meant.

To be clear, I wasn't trying to imply the rustc maintainers were ignorant of the difference. I meant that Rust programmers seem to treat fundamental design flaws in the language as if they were temporary bugs in the compiler (e.g., the comment I was responding to). There's a big difference between "this buggy program should not have compiled but somehow rustc missed it" and "this buggy program will compile because the Rust language has a design flaw."


But it's not a fundamental design flaw in the language, nor is it a "hole in the type theory". It's a compiler bug. The compiler isn't checking function contravariance properly. Miri catches the problem, while rustc doesn't.


You mean that Miri catches it at runtime, right? If so, that hardly demonstrates anything about the difficulty or lack thereof of fixing rustc’s type checker, since Miri is not a type checker and doesn’t know anything about “lifetimes” in the usual sense.

I agree that this isn’t a “fundamental design flaw in the language”, but Miri is irrelevant to proving that.


Yes. I'm not saying that Miri demonstrates anything other than that it's not a language issue.


I’m not sure it even demonstrates that. For comparison, C’s inability to prevent undefined behavior at compile time definitely is a fundamental weakness of the language. Yet C has its own tools that, like Miri, can detect most undefined behavior at runtime in exchange for a drastic performance cost. (tis-interpreter is probably the closest analogy, though of course you also have ASan and Valgrind.)

Or to put it another way, the reason that Rust’s implied bounds issue is not a fundamental language issue is that it almost certainly can be fixed without massive backwards compatibility breakage or language alterations, whereas making C safe would require such breakage and alterations. But Miri tells us nothing about that.


I believe it really is a flaw in the language: it's impossible for any compiler to check contravariance properly in this edge case. I don't think anything in this link is incorrect: https://counterexamples.org/nearly-universal.html?highlight=... (and it seems the rustc types team endorses this analysis)

I am not at all familiar with Miri. Does Miri consider a slightly different dialect of Rust where implicit constraints like this become explicit but inferred? Sort of like this proposal from the GH issue: https://github.com/rust-lang/rust/issues/25860#issuecomment-... but the "where" clause is inferred at compile time. If so I wouldn't call that a "fix" so much as a partial mitigation, useful for static analysis but not actually a solution to the problem in rustc. I believe that inference problem is undecidable in general and that rustc would need to do something else.


I think you're misinterpreting what "the implicit constraint 'b: 'a has been lost" in that link means. What it means is that the compiler loses track of the constraint. It doesn't mean that the language semantics as everyone understands them allows this constraint to be dropped. That sentence is describing a Rust compiler bug, not a language problem.


I think you're misinterpreting what "implicit" means here. It doesn't mean "a constraint rustc carries around implicitly but sometimes loses due to a programming bug." It means "a constraint Rust programmers use to reason about their code but which rustc itself does not actually keep track of." "The implicit constraint 'b: 'a has been lost" should more precisely read "the implicit constraint 'b: 'a no longer holds."

Look closely at what happens:

  fn foo<'a, 'b, T> (_: &'a &'b (), v: &'b T) -> &'a T
There is an implicit "where 'b : 'a" clause at the end of this declaration; this clause would be explicit if Rust were more like OCaml. The reason this clause is there implicitly is that in correct Rust code you can't get &'a &'b if 'b : 'a doesn't hold. So when a human programmer reasons about this code, they implicitly assume 'b: 'a even though there's nothing telling rustc that this must be the case.

This runs into a problem with the requirements of contravariance, which let us replace any instance of 'b in foo with any lifetime 'd such that 'd : 'b, in particular 'static:

  fn foo<'a, 'b, T> (_: &'a &'static (), v: &'b T) -> &'a T
Since contravariance allows us to replace the &'a &'b with &'a &'static without changing the second &'b, the completely implicit constraint 'b : 'a is no longer actually being enforced, and there's no way for it to be enforced. This is not a compiler bug! It's a flaw in the design.
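For concreteness, the full exploit from the linked GitHub issue goes roughly like this (a sketch from memory, untested, with my own variable names; the canonical version is in the issue thread):

    static UNIT: &'static &'static () = &&();

    fn foo<'a, 'b, T>(_: &'a &'b (), v: &'b T) -> &'a T { v }

    fn bad<'a, T>(x: &'a T) -> &'static T {
        // Coerce `foo` to a fn pointer type; the implied 'b: 'a bound
        // on `foo` is lost during this coercion.
        let f: fn(_, &'a T) -> &'static T = foo;
        f(UNIT, x)
    }

    fn main() {
        let dangling;
        {
            let s = String::from("temporary");
            dangling = bad(&s); // a &'static String forged from a short-lived borrow
        } // `s` is dropped here...
        println!("{}", dangling); // ...yet we still read through the reference
    }

rustc accepts this, and (as mentioned elsewhere in the thread) Miri flags the use-after-free at runtime.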

In particular, I don't think there's anything especially magical about 'static here except that it works for any lifetime 'a[1]. If you had a specific lifetime 'd such that 'd : 'a, then I think you could trigger a similar bug, converting references to 'b into references to 'd.

[1] Actually maybe "static UNIT: &'static &'static () = &&();" is more critical here since I don't think &'d &'d will work. Perhaps there's a more convoluted way to trigger it. This stuff hurts my head :)


Because Rust doesn't have a specification, we can argue back and forth forever about whether something is part of the "language design". The point is that the design problem has been fixed by desugaring to bounds on "for" since 2016 [1]. The only thing that remains is to implement the solution in the compiler, which is an engineering problem. It's not going to break a significant amount of code: even the bigger sledgehammer of banning contravariance on function types was measured to not have much of an impact.

As far as I can tell, the main reason to argue "language design" vs. "compiler bug" is to imply that language design issues threaten to bring down the entire foundation of the language. It doesn't work like that. Rust's type system isn't like a mathematical proof, where either the theorem is true or it's false and one mistake in the proof could invalidate the whole thing. It's a tool to help programmers avoid memory safety issues, and it's proven to be extremely good at that in practice. If there are problems with Rust's type system, they're identified and patched.

[1]: https://github.com/rust-lang/rust/issues/25860#issuecomment-...


> Rust's type system isn't like a mathematical proof, where either the theorem is true or it's false and one mistake in the proof could invalidate the whole thing

Interestingly enough, in practice even mathematical proofs aren't like that either: flaws are routinely found when papers are submitted but most of the time the proof as a whole can be fixed.

Wiles's first submission of his proof of Fermat's Last Theorem in 1993 is the best-known example, but it's in fact pretty frequent.


His point seems to be that there are reasons in principle you cannot do this (holding the design fixed).

It's a little like making a type system Turing-complete and then saying, "we can fix the halting problem in a patch".

If you change the design, then you can fix it. This is what I interpret "design flaw" to mean.


The design problem has been fixed, as I noted above.


In the language,

    fn foo<'a, 'b, T>(_: &'a &'b (), v: &'b T) -> &'a T { v }
should be equivalent to

    fn foo<'a, 'b, T>(_: &'a &'b (), v: &'b T) -> &'a T where 'b: 'a { v }
because the type &'a &'b is only well-formed if 'b: 'a. However, in the implementation, only the first form where the constraint is left implicit is subject to the bug: the implicit constraint is incorrectly lost when 'static is substituted for 'b. This is clearly an implementation bug, not a language bug (insofar as there is a distinction at all—ideally there would be a written formal specification that we could point to, but I don’t think there’s any disagreement in principle about what it should say about this issue).


See my other comment above: https://news.ycombinator.com/item?id=39448909

The point is that rustc does not even implicitly have the "where 'b: 'a" clause. The programmer knows 'b: 'a holds because otherwise &'a &'b would be self-contradictory, and in most cases that's more than enough. But in certain edge cases this runs into problems with the contravariance requirement, since we can substitute either one of the 'bs with a lifetime that outlives 'b (e.g. 'static), and the design of contravariance lets us do this pretty arbitrarily.


We could choose to design the language as I wrote where the constraint is implicitly present and variance consequently ought to preserve it, or as you wrote where it is not present at all and variance is unsound. But we want Rust to be sound, so we should choose the former design. We would need a very compelling reason to prefer the latter—for example, some evidence that “it would break too much code” or “it’s impossible to implement”—but I don’t see anyone claiming to actually have such evidence. The only reason the implementation has not been updated to align with the sound design is that it takes work and that work is still ongoing.


The issue isn't contravariance but that higher-ranked lifetimes can't have subtyping relations between them in rustc's implementation, so the fn pointer type is treated as a more general type than it is. This user is still wrong; the problem with rustc just isn't how it handles contravariance.


Why is this distinction important? It's still something you fix by changing what programs the compiler accepts or rejects. Or were you trying to imply this is unfixable?


Probably better to be maximally pedantic here:

- Assume our language has a specification, even if it's entirely in your head

- a "correct" program is a program that is 100% conformant to the specification

- an "incorrect" program is a program which violates the specification in some way

Let's say we have a compiler that compiles correct programs with 100% accuracy, but occasionally compiles incorrect programs instead of erroring out. If the language specification is fine but the compiler implementation has a bug, then fixing the compiler does not affect the compilation behavior of correct programs. (Unless of course you introduce a new implementation-level bug.) But if the language specification has a bug, then this does not hold: the specification has to change to fix the bug, and it is likely that at least some formerly correct programs would no longer obey this new specification.

So this is true:

> It's still something you fix by changing what programs the compiler accepts or rejects

But in one case you are only changing what incorrect programs the compiler accepts, and in the other you are also changing the correct programs the compiler accepts. It's much more serious.


> - Assume our language has a specification, even if it's entirely in your head

It does not, and at the current pace, it might never have a spec.

The reason there is no spec - not even a hypothetical spec in my head - is that the exact semantics of Rust have not been settled.

With the constraints the Rust project is operating under, the only way forward I can think of is following the ideas laid out in this post:

https://faultlore.com/blah/tower-of-weakenings/

With the understanding that you can have multiple specs if one is entirely more permissive than the other (and as such, programmers must conform to the least permissive spec, that is, the spec that allows the smallest number of things)

But the problem is, Rust doesn't even have this least permissive spec. Or any other.


> Or were you trying to imply this is unfixable?

If it's been known since 2015 and not fixed, that's pretty suggestive.


It's been backed up behind a giant rewrite of more or less the entire trait system. This has been an enormous, many-year project that's been years late in shipping. The issue isn't so much the difficulty of the problem as the development hell that Chalk (and traits-next) has been stuck in for years.


Is there anything we users can do to help get that project out of development hell? Besides not constantly dragging the developers down, or even burning them out, with uninformed complaints and criticism, which is something I try not to do.


> If it's been known since 2015 and not fixed, that's pretty suggestive.

That means nothing. In Rust, "everyone is a volunteer" and you're not allowed to expect things to be fixed unless you do it yourself - so the fact that this hasn't been fixed is simply an artifact of the culture, not necessarily of the difficulty of the problem.


The distinction matters because any existing code that breaks with the compiler fix was either relying on "undefined behavior" (in the case of a compiler bug incorrectly implementing the spec), so you can blame the user, or it was relying on "defined behavior" (in the case of a compiler correctly implementing a badly designed spec), so you can't blame the user.

I suppose the end result is the same, but it might impact any justification around whether the fix should be a minor security patch or a major version bump and breaking update.


Rust's backwards compatibility assurances explicitly mention that breaking changes to fix unsoundness are allowed. In practice the project would be careful to avoid breaking more than strictly necessary for a fix.

In the case of user code that isn't unsound but breaks with the changes to the compiler/language, that would be breaking backwards compatibility, in which case there might be a need to relegate the change to a new edition.


Well, first of all, Rust doesn't even have a spec. And I would also advocate for not blaming anyone; let's just fix this bug ;)


It’s a consequence of not having a formal and formally-proved type system.


LUMI as an "AI customer" has:

- a low budget: a taxpayer-funded supercomputer for taxpayer-funded PhD students

- high risk tolerance: it can tolerate the AI cluster arriving 5 years late (Intel and Aurora), a missing AI SW stack, etc.

- a high FP64 FLOPs constraint: nobody doing AI cares about FP64

Private companies whose survival depends on very expensive engineers (10x an EU PhD student's salary) quickly generating value from AI in a very competitive market are a completely different kind of "AI customer".


Absolutely. We could definitely chalk this up to being the "exception that proves the rule".


> OpenCL based machine learning libraries, no?

Have you used OpenCL and CUDA C++?

CUDA C++ is single-source and easy to use, has good tools, and has lots of libraries you can build your new library on top of (empowered by generics, it can use all pre-existing C++ libraries), ...

OpenCL... is not single-source, not easy to use, does not have good tools, does not have good libraries that you can reuse (does not have generics), ...

AI is like an F1 race: you either race with an F1 car (CUDA C++) or you don't go at all. Trying to race with the horse cart that OpenCL is would be a waste of time and money. Every other AI startup is going to lap you a million times.

> Nvidia deliberately undermined that at every turn, and pushed a proprietary solution, hard.

Undermined whom, at what? Nobody was interested in OpenCL succeeding: not Intel, not AMD, not Apple. All these companies pushed, and continue to push, their own proprietary, incompatible ecosystems to try to create a platform like NVIDIA has with CUDA. Intel pushes OneAPI, which is Intel-only; AMD pushes its CUDA clone; and Apple pushes Metal.

It's easy to create a new standard. You and I can get together, write something on a napkin, call it a standard, and we are done. We could go around telling people that it's going to be "The Future", as happened with OpenCL, but that doesn't mean anyone will actually ship something that's usable. It is particularly easy to create a Khronos "standard", of which there are 200, and of which _none_ standardizes standard practice: create the standard first, try to see if it solves the problem later. Claiming that you are entitled to NVIDIA implementing them all is... well... just that... pure entitlement.

Of all vendors, ironically, it actually seems that the only one working on a portable way to program GPUs is NVIDIA itself, since they have regularly sent a bunch of people to the ISO Fortran and ISO C++ standards committees to extend these languages to allow portable code to run on GPUs without changes. In contrast to OpenCL, ISO Fortran and ISO C++ are actual international standards.

I think Fortran has supported this since the 2018 standard, and C++ since 2017. Ironically, NVIDIA is the only vendor that actually ships this, and has for years. The NVIDIA Fortran and C++ compilers can run these languages on their GPUs. Intel and AMD will talk about how "portable and cross-vendor" OneAPI and ROCm are, yet their Fortran and C++ implementations still, 5 years later, can't actually use their GPUs. The reason is simple: AMD and Intel don't care about / believe in / want a portable programming model for GPUs. They want their own closed platforms. Unfortunately, since they are at a disadvantage, they need their closed platforms to be able to run on NVIDIA GPUs, because otherwise nobody would use them. But that doesn't mean their platforms are portable or open, and they don't care about code using their platforms running well on NVIDIA GPUs, since that would be counterproductive. So in practice, people still need to use CUDA C++ to target NVIDIA GPUs.

Anyway, OpenCL didn't fail "because of NVIDIA". It failed because it is a bad standard; it was a bad standard when it was created (it was significantly worse than the standard practice back then, which is why nobody wanted to use it in practice), and it turns out no vendor is actually interested in it, so it is a worthless standard.


> NVIDIA itself, since they have sent a bunch of people regularly to the ISO Fortran and ISO C++ standard committees to extend these languages to allow portable code to run on GPUs without changes.

Could you provide more details? I could only find C++ AMP by Microsoft.

> Claiming that you are entitled to NVIDIA implementing them all, is... well... just that... pure entitlement.

It is? What happened to "the customer is always right"?


Accelerating Standard C++ with GPUs Using stdpar

https://developer.nvidia.com/blog/accelerating-standard-c-wi...



Once, in a Khronos-promoted session, someone asked about the Fortran support roadmap for OpenCL. Everyone was clueless that this was really something the scientific community cared about, and it ended with "if you care about this kind of stuff please come talk to us".

Meanwhile, CUDA has had Fortran support for ages.


The intended C++ equivalent for OpenCL is SYCL, which in practice is still under development.


And since Intel acquired CodePlay, it is pretty much an Intel thing now.


Which is (EDIT: NOT) the most widely sold console ever.


Not by a long shot.

The PS2 and DS outsell it by about 50 million units.


"PS2? That can't possibly be right..."

https://www.vgchartz.com/analysis/platform_totals/

Holay molay.


It was the most affordable DVD player. I think Sony owned patents on some DVD player tech? Same with the PS4/5 and Blu-ray, if I'm remembering correctly.


This was also kind of the case with the PS3. Its sales weren't fantastic at release, partially because of its... $600 (?) price tag. But even at that price, at its release, it was one of the cheapest ways to get a Blu-ray player, and many people bought it for that.


Not just a Blu-ray player, but one that is guaranteed to be able to play practically all Blu-ray discs for as long as Blu-ray discs are made, or until the console hardware fails.

Sony pushed updates to the firmware. Most commodity Blu-ray players don't have an (easy) way to update.


"Five Hundred Ninety Nine US Dollars!"

But for both the PS2 and PS3, getting folks to adopt the new formats was definitely a factor.

In the case of the PS2, I think less so; it wasn't the cheapest way to get a DVD player, but IIRC it wasn't that much more than a DVD player with component out at the time (note: all PS2s can do component out, but only later models can play DVDs at 480p), and that made it a lot easier for families to buy in.


A lot of early Blu-ray players had terrible load times. Long enough to be pretty annoying. The PS3 had the CPU horsepower to play discs quickly.


If memory serves, fewer than one game per PS3 was sold at launch.


I think it has more to do with the fact that they managed to reduce its price down to $99. They haven't been able to do that with subsequent consoles.


(clicks link) Time to get /sad/ about being a SEGA fan again.

More seriously, I wish some of the old consoles were officially opened up, because the absolute install base of PS1- and NES-compatible hardware must be insane. Indie NES games specifically have become popular lately, but I don't think any of the 3D-capable consoles are popular or open targets.


There were eight new Dreamcast games just last year.

https://en.wikipedia.org/wiki/List_of_Dreamcast_homebrew_gam...


Indeed, wow.


I just use inotifywait for that. What does entr do differently?

This will run a binary when any file in the directory tree changes:

    #!/usr/bin/env sh
    set -e
    while true; do
        # Block until something under "$1" changes, then run the binary "$2".
        inotifywait -e modify,create,delete,move -r "$1"
        "$2"
    done
and then just call it: run_on_modify path/to/dir path/to/bin

If someone thinks this is magic or wizardry, instead of teaching them that fire exists (entr), teach them how to make fire first (inotify, Linux inodes, etc.). Knowing what kinds of events inodes support and how to do something when they happen is pretty powerful.

If you then want to use "entr" or similar instead of just calling inotifywait yourself, that's fair game. But TBH I have a hard time justifying a C or Rust program that's not going to be available / installed everywhere when a 3 LOC POSIX shell script can solve the same problem (and many many more).


inotifywait does a decent job of exposing the inotify primitive from the Linux kernel to userspace, but it's, well, primitive. Using that loop, if you change a file twice in quick succession, what happens? It'll trigger off the first change, but potentially ignore the second change if $2 takes longer to run than you can edit a file.

You can improve the basic loop somewhat, but a more thoroughly written program (whatever the language) rather than a 3 LOC loop is going to have more features and be more ergonomic. In particular, to truly be useful, it should be able to kill the command and restart it every time the file is saved.


Buy professional appliances, for restaurants.

For some reason, they are often better along many axes: cheaper, easier to clean, more customizable, more powerful, more flexible, etc.

I really like the professional induction stoves, with actual physical dials, no touch crap.

And the best part: there is a huge used market for professional appliances in really good shape, at even lower prices.


Commercial induction is pretty solid in a home. Microwaves too, if you have a spare 220v line hanging around in your kitchen. Mostly just because of the better controls, as you mentioned, though. A non-commercial appliance with a free-spinning knob would be about as good.

Other than that, though (and I have a lot of professional cooking experience), I wouldn't. They're generally uninsulated, made of just sheets of stainless steel. Touching them can burn you, and they can have sharp edges and corners that can cut you badly.

Their safety assumes frequent and thorough cleaning regimes. Where a residential system will have safer failure modes if not maintained correctly, a commercial one may just become a grease fire instead.

Also, the dimensions and dynamics are often just not good for the kinds of cooking you do at home. It takes an hour to heat a full-size uninsulated commercial stove's oven to baking temp. Commercial gear is generally built around that assumption: that you will turn it on once, every single day, and run it for 12-18 hours straight. That completely changes the design, efficiency, and maintenance constraints in ways that may not work at home.

I used to want a big two-basin stainless commercial sink with a sprayer. I stayed in an Airbnb with one once, and it turns out it takes several gallons of water to fill it even a few inches, the minimum needed to wash dishes. A restaurant's water line can do that in seconds; it can take minutes on a home line, depending.

There are definitely specific, individual pieces where the commercial versions are easily adapted to home use and superior. There are many more cases where they aren't. A blanket "buy commercial" is not good advice imo.


Thank you. I learned quite a bit from this.


This. When my Cuisinart food processor bowl broke for the 2nd time, I went and got a RobotCoupe professional model. The cheese grater attachment I got for it says that you shouldn't run it for longer than 4 hours (!) at a time. Once you start going commercial, you will not look back. Pay more (if you buy new), they work forever, and if they do break, there are people/places that can service them.


Until the day it starts asking for your support contract number…


I think they're designed to be more durable too. A consumer microwave / food processor isn't intended to run 8 hours a day.

Any particular brands on the induction stoves? I may want to move on from gas someday.


Can you recommend any particular sellers?

My oven stops working with some nonsensical error message when I try to use it above 350F, so I'm in the market for a new one.


Not OP, and not making any recommendations, but even something mainstream like webstaurantstore.com carries a fair amount of commercial cooking equipment. Enough to give you some ideas, at least.


Odds are there are a few restaurant supply stores around you, most independent of any chain. They tend to be open to the public only during working hours. Find one that wants to serve you.


Aren’t they too big for a regular home kitchen?


Not necessarily. Some might be, but generally there's demand for smaller appliances in commercial kitchens too. For example, think of the now ubiquitous food trucks -- they don't have a ton of room for the fridge or freezer, but still need one.

If anything, the consumer grade stuff tends to all be made roughly the same, and in many cases is just someone slapping their brand on an OEM appliance.


> Isn't there a single GPU benchmark that actually does the same work so that comparisons can actually be made?

Apple does not support OpenGL or Vulkan, only Metal, and most app devs have better things to do than rewriting code for the Mac.

The recommended way of gaming on a Mac is to use emulation: emulating whatever API the game uses with a Metal wrapper.

IMO the claim that these benchmarks aren't fair is naive. I don't care about how good the hardware is, but about what performance I get. If I get poor performance because the software, drivers, etc. are poor, I want to know that.


> I don't care about how good the hardware is, but about what performance I get.

Of course, I'm not at all saying that you're wrong to want that, or that these benchmarks don't show what you're interested in.

However this and many other articles are using these benchmarks to derive comparative hardware performance, which is simply wrong to do. That's what I'm criticizing.


Since you can't really buy Apple CPUs or GPUs and put them in a PC for benchmarking... today, at least, one can't compare Apple's hardware against Intel's hardware.

What one can compare is the performance of the "Apple platform" and the "PC platform" at similar price points, power budgets, and feature sets. This is more meaningful for most people, who mostly care about how fast the computer can do X (whether it uses the CPU, GPU, or some other chip, most people don't care).


Is this a serious comment? An API wrapper doesn't use emulation; it simply translates calls to the appropriate API. But more to the point, if the benchmark isn't properly ported to the platform it's testing, then the benchmark is BROKEN. It isn't naive to expect them to do their job right.


> Is this a serious comment? An API wrapper doesn't use emulation; it simply translates calls to the appropriate API.

QEMU simply translates calls from one hardware API to another hardware API, so it isn't an emulator either right?


It depends on your perspective; yours is a subjective one, caring only about your own experience, which makes sense for a consumer deciding what to buy. (But then everyone's subjective bias has different weights, so to speak.)

But from a technological perspective, the logical way to test is to eliminate as many variables as possible and really compare the hardware itself. Using software that's not optimized for the hardware is not an objective test of the hardware.

There are many ways to try to conduct as objective a test as possible. By those criteria, I found that only the reviews from Anandtech are up to standard. Articles like this are more like clickbait.

Now I'm not an expert, but if I were to compare the performance somewhat objectively, I might start with TensorFlow, for which Apple has released a Metal backend. Then maybe write some naive kernels in Julia using CUDA and the pre-release Metal library. (It might not be fair, but that's where I would start given what I know.)


Can't you just install Linux on both machines?


Linux for Macs with the M1 architecture is still in development. There's a team currently reverse-engineering the M1 GPU to create Linux drivers for it... but until that's finished, unfortunately no, you can't just install Linux on both systems :/


Using the in-progress Asahi Linux to benchmark the hardware could arguably hamper it even more than macOS's limited support for mainstream graphics APIs does.


The GPU is not supported yet in Asahi Linux.


If you can re-compile all your source code, then ABI stability is not that important.

In the real world,

- people use operating systems to get work done,

- such work is often done with proprietary software that people buy, and that a lot of people get paid to develop

- breaking such software is not acceptable, because it means that suddenly dozens of thousands of people can't work

- telling all those people to "stop working" until their software is fixed is not acceptable either.

People use Linux for work in the real world, and that's why Linux tries very hard not to break ABIs.

Microsoft has had a stable ABI for 30 years, and people still run software compiled 30 years ago on the latest Windows version today.

What this tells you about OpenBSD users is another story.


And even if you can recompile everything, it's still a pain to do so, and it's better to avoid it. Also, recompiling the software may not be that simple; for example, it can require modifications to its source code to adapt it to newer versions of the compiler or other libraries that changed, and of course you can't build it on an older system since the ABI changed.

This is also the reason why containers are so popular these days: ship the software with all its dependencies to avoid having to recompile stuff each time you upgrade or change the operating system.


The source-based Linux distros (Gentoo, in my experience) do try to get better at keeping things working and moving forward when ABI breaks occur. Recompiling downstream dependencies (FFIs, libraries/packages), keeping the old libraries/packages around until new ones are available, and keeping track of breaks/automating fixes with versioning all happen when, for example, a new Boost or Qt version comes out. All of this is automated by the package manager. Proprietary packages that require fixed ABIs from certain libraries have *-compat packages in the Gentoo repos, and other distros have this too.

Ultimately, it is about the APIs when you're compiling from source: a package that isn't using a new API is going to need the old library, and both might need to be compiled with and linked against the same toolchain. I think Linux and the BSDs are closer than one might think here. Packaging and upstreams have both gotten better over time, I think, at least over the last couple of decades I've used Linux. I've only played around with the BSDs in VMs.


no one uses Debian on s/390 for anything important anyway

