"Apple appears to be using the HTTP/2 protocol only as a mechanism to maintain persistent connections, passing empty HEADERS frames to open new streams. This breaks section 8.2.1 of the HTTP/2 specification. We work around this by modifying the h2 module to allow empty HEADERS frames.
Apple’s use of the HTTP/2 protocol is also irregular for two other reasons. First, the HTTP/2 specification mandates that stream IDs must be created in a monotonically increasing order. However, Apple’s application endpoints appear to hardcode a handful of particular stream IDs for certain communications, which when opened do not conform to this monotonic ordering requirement. Second, client-server communication patterns often use multiple streams in unusual ways. For instance, heartbeat messages send requests over stream 1 and responses on stream 3."
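For anyone curious what a "bare" HEADERS frame actually looks like on the wire, here is a minimal sketch that builds one by hand, following the frame layout in RFC 7540 §4.1 (24-bit length, 8-bit type, 8-bit flags, 31-bit stream ID). The stream ID is illustrative; a spec-conforming frame would carry an HPACK-encoded header block as its payload, whereas Apple's apparently has a zero-length payload:

```python
import struct

HEADERS_TYPE = 0x01   # HEADERS frame type (RFC 7540 §6.2)
END_HEADERS = 0x04    # flag: no CONTINUATION frames follow

def empty_headers_frame(stream_id: int) -> bytes:
    """Build an HTTP/2 HEADERS frame with a zero-length payload.

    Frame header layout (RFC 7540 §4.1):
      3 bytes  payload length (here: 0)
      1 byte   frame type
      1 byte   flags
      4 bytes  reserved bit + 31-bit stream identifier
    """
    length = 0  # no header block fragment at all
    return (
        length.to_bytes(3, "big")
        + bytes([HEADERS_TYPE, END_HEADERS])
        + struct.pack(">I", stream_id & 0x7FFFFFFF)
    )

frame = empty_headers_frame(1)
assert frame == b"\x00\x00\x00\x01\x04\x00\x00\x00\x01"
```

A stock HTTP/2 stack would reject this on receipt because the required pseudo-header fields (`:method`, `:path`, etc.) are missing, which is presumably why the authors had to patch the h2 module's validation rather than use it as shipped.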
Apple tends to do that a lot. Back when I worked at Apple I noticed the same thing: two services sharing memory end up communicating over HTTP, and I always found it silly.
If you've worked at Apple recently, I'm sure you'd know that "two services sharing memory" will probably stop doing that and end up isolated in different processes as soon as it's possible to do so…
> Two services that are sharing memory end up communicating over HTTP and I always found it silly.
Why is that silly? It'd be easy to migrate the services away from a shared-memory architecture when required, and HTTP is well known and easy to use and debug.
My understanding is that there's not much constant communication going on: bridgeOS hands over control to macOS and that's it. So HTTP is not a major overhead. But I could be wrong.
> Passing baseDirectory sets an internal variable on the sysdiagnose server, but it is unclear from our manual analysis how this variable is used.
It looks like this might be the directory the completed sysdiagnose is saved to? I don't have a Mac with a Touch Bar to test this, so I'm guessing based on analysis of /usr/bin/sysdiagnose pulled from a bridgeOS firmware.
Tangentially related, I have a wild idea: assuming Apple transitions to ARM processors, Macs could execute both ARM and x86 apps, just as they did through the Intel transition; but instead of translating machine code the way Rosetta did, any Mac with a T2 chip could simply execute machine code on whichever processor the app was built for.
Here's how it would work: macOS already works with more than one CPU core. What if several of those cores just happened to be a different architecture? When an app is launched, the OS would read the executable's header and schedule the app to run on the appropriate CPU. Things like producing sound, drawing to the screen, or getting user input are all done via the OS anyway; it doesn't really matter which architecture the app or the OS is on, things would just get translated, like how communication is shipped over the bus with this protocol.
To me, it seems like the T2 chip is the real processor of newer Macs and the x86 chip is just some co-processor that the T2 chip offloads a bunch of work to. The T2 controls many of the internal components already and the x86 chip must interact with the T2 chip to interact with said components (e.g. booting, webcam, audio, SSD, etc). It will be interesting to see if this is the route that Apple takes with a supposed ARM transition.
Processors have not decreased all that much in price, though. It's unlikely that even Apple wants to make each computer $100 or $200 more expensive when software emulation works almost as well and is much less work.
Apparently, Apple's A12 costs around $72. It wouldn't be entirely ridiculous to include one of these in Macs (it would be awesome for iOS app development, if nothing else).