
I’d argue that the optics still matter, and that it’s not Apple who Intel are worried about.

Intel have to contend with: “this Apple chip over here is more efficient, why can’t you do the same?”. Apple blew a hole in the side of the age-old x86 armour by proving the arch isn’t untouchable.

Then other companies like Qualcomm can finally capitalize on that damage to the x86 armour.



Imagine if NVIDIA had been able to buy ARM. That could be a threat in a couple directions.


Perhaps. Though I don’t think they needed to. I don’t see what owning Arm would have done for them, since they can already make Arm processors and could do custom cores.

Though perhaps the opposite direction is more interesting and what you were getting at: what if the off-the-shelf Arm cores had NVIDIA’s engineering behind them?


Nvidia could have shifted the model by firing the customers and bringing things in-house. That'd involve phasing out ARM licensees and buying up leading node fab capacity to manufacture their own stuff.

If they pulled that off, that Nvidia would have been very powerful. CUDA + ARM + leading node would have put them in a position similar to the one Intel had with x86 back in the day.

The competition would have struggled there - nobody has a CUDA equivalent, only AMD has a GPU equivalent, and nobody has an installed software base like ARM’s. Even executing super fast, RISC-V (or Intel) would have been years away from being competitive across the board like this.

It might have also wrecked AMD, since the weak point of their GPU strategy is reliance on x86. If Nvidia+ARM came on strong, that'd weaken x86's stranglehold on PCs, which would ultimately weaken AMD since they don't have a leading position in other markets to fall back on.

Nvidia would have a lock on mobile devices and the increasingly AI-based datacenters, as well as some of the PC side, including gamers. Microsoft would pick up on the software side and we’d have Winvidia instead of Wintel.


This sounds like one of the easiest antitrust suits; it would be foolish for Nvidia to do this.


Well antitrust is basically what stopped the deal from happening [1]. If they had gone through with the deal and done this strategy I imagine it would have been a slow 5-10 year shift, not an immediate action.

> The main concern among regulators was that Nvidia, after the transaction, would have the ability and incentives to restrict access to its rivals to Arm’s technology, which would eventually lead to higher prices, less choice and reduced innovation. [2]

[1]: https://www.ftc.gov/legal-library/browse/cases-proceedings/2...

[2]: https://www.pymnts.com/news/regulation/2022/collapse-of-nvid...


>Nvidia could have shifted the model by firing the customers and bringing things in-house.

That wouldn’t work, since many companies had already been licensing the ARM architecture, not just cores.


Well, there’d be a transition phase and an honoring of existing contracts to some extent, but the way to do it would be to slow-roll the release of customer ISA updates or stop providing ISA updates altogether, e.g. anything past ARMv9.4-A would be in-house only. After 4-5 years the competition would be struggling.


NVIDIA would charge their customers an ARM and a leg.


amd a leg


or 2 arms and a peg


I wish they'd capitalize on it by making ARM as well supported as x64 is, where any program, any OS, any line of code ever written will most likely run flawlessly. Meanwhile the average ARM computer struggles to support even a single version and distro of Linux. Like why is ARM such a fucking travesty when it comes to compatibility?


Your expectations seem unrealistic to me.

1. Even x86(_64) can’t run every line of code ever written flawlessly. It can only run code built for a compatible architecture, and compatibility largely breaks based on which intrinsics are used or how old the arch is. You already cannot run everything on a modern x86 processor, only a subset, and that’ll get worse when Intel drops support for 16-bit soon.

2. That leaves translation or emulation for the arches that are non-native. Which is exactly what ARM does today on all three of the big OSes: macOS, Windows and Linux all have translation/emulation layers now, so they can run most x86 code with varying levels of performance penalty.

3. Many Linux distros support ARM. I’m not sure where you’re coming from on this. It’s been multiple decades of support at this point, and even the Raspberry Pi is a decade old itself now as the poster child for consumer arm64 Linux. They may not support every flavor of it, but that’s also true for x86 systems.


> even the Raspberry Pi is a decade old itself now as the poster child for consumer arm64 Linux

That’s a pretty good example actually; the various Pi versions are probably the best-supported ARM boards in existence, and even they are incredibly limited in what they can run.

For some reason the way OS support works on ARM is that every OS needs to explicitly support the exact underlying hardware or it doesn’t run. For example, the recently released Pi 5 can only really run two OSes right now: Pi OS 12 and Ubuntu 23.10. How is that possible, I ask? Why the fuck isn’t the required firmware shipped with the SoC and made compliant to run any aarch64 build of anything? It’s not like it’s new hardware either; it uses a dated five-year-old Cortex-A76.

Meanwhile x64 has apparently done the opposite and standardized hardware to the point where explicit software support is almost irrelevant. Pick any new or old version of Windows or Linux or FreeBSD or whatever, pick any motherboard, CPU, GPU, disk combo, and it’ll install and just work (with a few exceptions). It baffles me that this standardization that’s been a blessing on x64 is impossible to achieve with ARM. I don’t need a specific release of Debian with firmware from Gigabyte to work with their motherboard; it doesn’t give a shit if it’s an Intel or AMD CPU or something third entirely. But for ARM this level of support is apparently like asking for cold fusion.

> that’s also true for x86 systems.

Really? I mean I suppose there must be some very specific OSes out there that aren't compatible, but I've yet to hear of any. Hell, you can even run Android.


That exists, actually; you’re looking for systems like https://libre.computer/products/ that support UEFI by default, so you can just grab a generic OS image and have it work, because device enumeration works the same as on a PC and not the... “interesting” hard-coded stuff that’s weirdly common in ARM. The official marketing name is, I think, the Arm SystemReady program - https://www.arm.com/architecture/system-architectures/system... - but I find it easier to just say UEFI.


I have two of their Sweet Potato AML-S905X-CC-V2 boards running Fedora IoT for some containers under Podman. Very fun devices to work with so far.


I’ll have to keep an eye on that list; one of these might actually be a solid RPi alternative eventually if they can get good long-term OS support this way. All I’m seeing on it right now are somewhat obsolete boards though: 2 or 4 GB of memory at most, LPDDR3 and 4, no wifi chip.


Your arguments seem to be around device tree support rather than the actual cores and arch.

That’s largely where the holdup is. Most ARM devices use a variety of fairly unique supplementary hardware whose vendors often only distribute support as binary blobs. So, due to the lack of ubiquity, support in distros varies.

If you could skip the rest of the device and focus on the processor itself, the distros would largely all run as long as they didn’t remove support explicitly.

This is the same process as on x86. It’s just that the hardware vendors are also interested in selling the components by themselves, and therefore have a vested interest in adding support to the Linux kernel.

It’s very much the case that when new hardware comes out you need a new kernel version to support it properly. That is true of processors, GPUs and even motherboards. They don’t just magically function; a lot of work goes into submitting the patches to the kernel prior to their availability.

Since ARM manufacturers right now have no interest in that market, they don’t do the same legwork. They could. If Intel or AMD entered the fray it would definitely change the makeup.

The one other big issue is there’s no standard BIOS system for ARM. But again, it’s just down to the hardware manufacturers having no interest, as you’re not going to be switching out cores on their devices.


Device Trees don’t magically cause incompatibilities either. They’re just a declarative specification of the non-discoverable hardware that exists. The old BIOS systems are so much worse than DT and harder to test properly.


Yeah great point. I think people just take today’s state of things for granted and don’t realize how much has been going on behind the scenes to enable it today. And it’s still not great.


DTs are part of the problem.

It is one of those “devil in the details” kinds. In theory, DT would be okay, but it’s not. The issue starts with HW vendors failing to create 100% (backwards) compatible hardware. For example, if they need a UART, they fail to hook up the standard baud rate divisor and instead use a clock controller somewhere else in the machine because it’s already there. So now DT describes this relationship, and someone needs to go hack up the UART driver to understand it needs to twiddle a clock controller rather than use the standard registers. Then, of course, it needs to be powered up/down, but DT doesn’t have a standard way to provide an ACPI-like method for that functionality. So now it either ends up describing part of the voltage regulation/distribution network, or it needs a custom mailbox driver to talk to the firmware to power the device on/off. Again, this requires kernel changes. And that is just a UART; it gets worse the more complex the device is.
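To make the uart/clock-controller scenario concrete, here is a hypothetical device-tree fragment in the spirit of what's described above. All node names, compatible strings, addresses and clock indices are invented for illustration:

```dts
/* Hypothetical: a UART whose baud clock comes from a SoC-wide
   clock controller instead of a standard internal divisor. */
clkc: clock-controller@ff000000 {
    compatible = "vendor,soc-clkc";   /* made-up vendor binding */
    reg = <0xff000000 0x1000>;
    #clock-cells = <1>;
};

uart0: serial@ff010000 {
    compatible = "vendor,soc-uart";   /* not a generic "ns16550a" */
    reg = <0xff010000 0x100>;
    clocks = <&clkc 42>;              /* baud clock routed via clkc */
    clock-names = "baudclk";
};
```

The `clocks = <&clkc 42>` phandle is exactly the cross-device relationship in question: the DT can describe it, but a generic UART driver still has to be taught about the vendor's clock controller before the port works.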

On x86, step one is hardware compatibility, so usually nothing needs to be changed in the kernel for the machine to understand how to set up an interrupt controller/UART/whatever. The PC also went through the plug and play (PnP) revolution in the 1990s and generally continues to utilize self-describing buses (PCI, USB), or at least makes things that aren’t inherently self-describing look that way. Ex: Intel making the memory controller look like a PCI root complex integrated endpoint, which is crazy but solves many software detection/configuration issues.

Second, the UEFI specification effectively mandates that all the hardware is released to the OS in a configured/working state. This avoids problems where Linux needs to install device-specific firmware for things like USB controllers, because there is already working firmware, and unless Linux wants to replace it, all the HW will generally work as is. ARM UEFIs frequently fail at this, particularly u-boot ones, which only configure enough hardware to load grub/etc.; then the kernel shows up and has to reset/load firmware/etc. as though the device were just cold powered on.

Thirdly, ACPI provides a standard power management abstraction that scales from an old Pentium from the 1990s, where it is just trapping to SMM, to the latest servers and laptops with dedicated power management microcontrollers, removing all the clock/regulator/phy/GPIO/I2C/SPI/etc. logic from the kernel - the kinds of things that change not only from SoC to SoC but from board to board or board revision to board revision. So it cuts out loads and loads of cruft that would need kernel drivers just to boot. Nothing stops AMD/Intel from adding drivers for this stuff, but it is simply unnecessary to boot and utilize the platform. While on ARM, it’s pretty much mandated with DT because the firmware->OS layers are all over the place and are different on every single ARM machine that isn’t a server.
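For contrast with DT's purely descriptive model, here is a minimal ACPI ASL sketch of the kind of standard power method the firmware can provide; the resource name, register address and bit field are all invented:

```asl
// Hypothetical ASL: the firmware encodes HOW to power a device,
// behind the standard _STA/_ON/_OFF methods, so the OS needs no
// clock/regulator driver at all. Addresses and names are made up.
PowerResource (PUR0, 0, 0)
{
    OperationRegion (PWRC, SystemMemory, 0xFED40000, 0x4)
    Field (PWRC, DWordAcc, NoLock, Preserve) { PWEN, 1 }

    Method (_STA) { Return (PWEN) }   // is the rail on?
    Method (_ON)  { PWEN = One }      // board-specific detail stays here
    Method (_OFF) { PWEN = Zero }
}
```

Whatever the board revision changes underneath, the OS just evaluates `_ON`/`_OFF`; with DT, an equivalent change usually means new kernel drivers.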

So, the fact that someone can hack a DT and some drivers allows the hardware vendors to shrug and continue as usual. If they were told, “Sorry, your HW isn’t Linux compatible,” they would quickly clean up their act. And why put in any effort? Random people will fund Asahi-like efforts to reverse engineer it and make it work. Zero effort on Apple’s part, and they get Linux support.


> "For some reason the way OS support works on ARM is that every OS needs to explicitly support the exact underlying hardware or it doesn't run."

x64 servers and PCs are compatible because all of them adhere to an architectural standard backed by a suite of compatibility tests defined by Microsoft: https://learn.microsoft.com/en-us/windows-hardware/design/co... and https://learn.microsoft.com/en-us/windows-hardware/test/hlk/ . The ARM world has no similar agreed-on standard.


Per another comment, there apparently is something along the lines of that now: https://www.arm.com/architecture/system-architectures/system...

Took them long enough.


It seems almost like it’s all a historical accident from the IBM PC wars. Now all the companies want vendor lock-in, and developers don’t seem to get excited anymore.

Back in the day people liked tech and wanted to use it everywhere and have everything just work every time. Now it seems like everyone just wants to tinker with small projects.

Like, look at all the stuff Bluetooth 5 can do. If that existed in 1997, I think there would be about 5x the excitement. But there are probably more people working on customizing their window manager than doing anything with the big name standards.


> It baffles me that this standardization that's been a blessing on x64 is impossible to achieve with ARM.

At one point Intel sold only CPUs. They needed compatibility with all sorts of hardware. ARM manufacturers, on the other hand, sell SoCs, so they don’t want any sort of compatibility.


Not sure what you mean by that.

The only issue I see compatibility-wise is that ARM doesn't have a standard method for an OS kernel to auto-discover peripherals. x86_64 has UEFI and ACPI. ARM manufacturers could adopt that if they wanted to, but apparently they (mostly) don't want to.

Otherwise, non-assembly code written for x86_64 tends to run just fine on ARM, when compiled for it.


> Not sure what you mean by that.

Traditionally the Linux-on-ARM market has sucked because every single-board computer and phone and tablet has needed its own specially compiled kernel with its own set of kernel patches, and probably its own half-assed Linux distro that never gets any updates.

You can’t even buy a USB wifi or bluetooth stick and expect it to work without checking the chipset - let alone use a single distro across the BeagleBone Black, the Xilinx Zynq, the iMX6 and the RPi.


>ARM manufacturers could adopt that if they wanted to, but apparently they (mostly) don't want to.

ARM manufacturers don't want compatibility.


>I wish they'd capitalize on it by making ARM as well supported as x64 is, where any program, any OS, any line of code ever written will most likely run flawlessly.

Who is “they”? x86 had IBM to establish a standard. And when IBM wasn’t interested anymore, it was Intel who stepped in.


Nobody cares outside the enthusiast market - all they care about is price, that it runs Windows, the latest CPU, availability, and the name recognition of Intel and of makers like Dell, Acer or Lenovo.

Office, Youtube and WWW.


This isn’t relevant when one CPU can go anywhere and the other can’t.


Qualcomm sell chips into more diverse markets than Intel do.

If Apple has opened the gateway for them to finally crack into the desktop/laptop market by removing the stigma of their previous ARM offerings, then I’d argue they’re very serious competition to Intel.

The only market I don’t see them tackling is consumer sales of just the chips, but that’s honestly such a small percent of sales to begin with for these companies.


The CPU market is dominated by cloud sales. Desktop and laptop CPUs are just such a small business in comparison to everything else these days.

And in the cloud, custom arm seems to be the path that was chosen


>And in the cloud, custom arm seems to be the path that was chosen

Doesn't the cloud run on x86?


More and more of AWS is available to run on Graviton. It can often be cheaper to use the Graviton instances when possible. And why not: when all you care about is having a PostgreSQL RDS instance or OpenSearch or an Athena query, it doesn’t really make much difference whether it’s an Intel or AMD or Graviton CPU under the hood, so long as the performance per dollar is right.

Microsoft and Google have had Ampere-based instances available for over a year. They’re still very x86-heavy though.

And don't forget, if you're really craving PowerPC in the cloud you can get LPARs on IBM's cloud for another flavor of non-x86 cloud compute.


The more relevant question is AMD vs. Apple. Intel is pretty much out of the equation now. Anybody have insights into the advantages and disadvantages from both sides from that perspective?


> Anybody have insights into the advantages and disadvantages from both sides from that perspective?

Do either of them have the volume to supply the server market? At the end of the day that's why Intel is still relevant.


Are there actual volume and production issues at TSMC?

I figured Intel is still "relevant" because of past momentum, not because of actual specs or production capacity.

So while AMD is relevant, Intel is only "relevant" with quotation marks.


I think Intel are still relevant on the basis of having broadly competitive specs (not quite as far along as others, but pretty close), and their own production capacity.

There is enough demand for both TSMC’s and Intel’s capacity.


In terms of performance for the money, they aren’t relevant. If you have cash and want the best bang for your buck, Intel is not a rational option. That’s just reality, but some people are in denial.

In terms of potential for future success and raw ranking, yeah, Intel is still relevant in this area. But given that most people on HN are consumers, I would think "relevant" applies to my first statement: they aren’t currently a rational choice when you are making a purchase decision. But I guess I’m wrong. People will buy a shittier product just out of patriotism, I guess? Nothing wrong with that.


HPC nowadays seems to use AMD EPYC exclusively.


AWS is pushing their ARM (Graviton) chips over the Intel ones. The price/performance is just better, generally speaking.


> Anybody have insights into the advantages and disadvantages from both sides from that perspective

Yes. Having a lot of cash buys you exclusivity for TSMC 3nm and then for TSMC 2nm.


Yeah, so which team is winning on that front? Apple or AMD?


> Apple blew a hole in the side of the age old x86 armour by proving the arch isn’t untouchable.

The fact that they had to create an emulation/virtualization layer (Rosetta) says otherwise. The x86 instruction set is so entrenched that I don’t think Intel has any fear of it being uprooted any time in this half-century.


> The fact that they had to create an emulation/virtualization layer (Rosetta) says otherwise.

That is a nonsensical and ahistorical take.

Apple created an emulation layer because they were migrating ISAs and thus had a backlog of legacy software which they needed to support for users to switch over.

They did the same from PPC to x86, and from 68k to PPC, and nobody would have accused those of being entrenched.


A half century is a pretty long time. I wouldn't be surprised if even Intel has moved on in 30 years.



