Deciphering the Messages of Apple’s T2 Coprocessor (duo.com)
130 points by walterbell on Feb 17, 2019 | hide | past | favorite | 25 comments


Why did Apple choose to use a networking protocol like HTTP to interface with two chips in the same hardware device?


The amount of money and eyeballs testing an HTTP stack far exceeds that of a proprietary protocol.

The real question is why don't more RPC-style protocols just use HTTP?


Because HTTP protocols give you lots of extra chances to screw things up.

https://justi.cz/security/2019/01/22/apt-rce.html

> Unfortunately, the HTTP fetcher process URL-decodes the HTTP Location header and blindly appends it to the 103 Redirect response:
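To illustrate the class of bug the apt advisory describes: URL-decoding an attacker-controlled header before splicing it into a line-oriented internal message lets an encoded newline smuggle in extra fields. A minimal Python sketch of the pattern (not apt's actual code; `build_redirect_message` is a hypothetical stand-in):

```python
from urllib.parse import unquote

def build_redirect_message(location_header: str) -> str:
    # BUG (illustrative): URL-decoding the attacker-controlled header
    # before appending it to a line-oriented internal message means an
    # encoded newline (%0A) can inject additional fields.
    decoded = unquote(location_header)
    return f"103 Redirect\nNew-URI: {decoded}\n\n"

# A malicious mirror replies with an encoded newline in its Location header:
evil = "http://deb.debian.org/x%0AInjected-Field: owned"
msg = build_redirect_message(evil)
# The decoded newline has split one field into two:
assert "\nInjected-Field: owned" in msg
```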


But then the article states that they do non-standard/non-compliant things with HTTP/2, so they are using modified or new stacks...


"Apple appears to be using the HTTP/2 protocol only as a mechanism to maintain persistent connections, passing empty HEADERS frames to open new streams. This breaks section 8.2.1 of the HTTP/2 specification. We work around this by modifying the h2 module to allow empty HEADERS frames.

Apple’s use of the HTTP/2 protocol is also irregular for two other reasons. First, the HTTP/2 specification mandates that stream IDs must be created in a monotonically increasing order. However, Apple’s application endpoints appear to hardcode a handful of particular stream IDs for certain communications, which when opened do not conform to this monotonic ordering requirement. Second, client-server communication patterns often use multiple streams in unusual ways. For instance, heartbeat messages send requests over stream 1 and responses on stream 3."
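For reference, RFC 7540 §4.1 defines a fixed 9-octet frame header, so an "empty HEADERS frame" is just that header with a zero-length payload. A minimal Python sketch of the framing (wire format only, not Apple's code; the stream ID here is just an example):

```python
import struct

HEADERS_TYPE = 0x1  # HEADERS frame type, RFC 7540 §6.2
END_HEADERS  = 0x4  # flag: no CONTINUATION frames follow

def empty_headers_frame(stream_id: int) -> bytes:
    """Build an HTTP/2 HEADERS frame with a zero-length payload,
    like the ones the article says Apple uses to open streams."""
    length = 0  # empty header block fragment
    # 24-bit length, 8-bit type, 8-bit flags, 31-bit stream ID
    return (struct.pack(">I", length)[1:]
            + struct.pack(">BB", HEADERS_TYPE, END_HEADERS)
            + struct.pack(">I", stream_id & 0x7FFFFFFF))

frame = empty_headers_frame(1)
assert frame == b"\x00\x00\x00\x01\x04\x00\x00\x00\x01"
assert len(frame) == 9  # just the fixed frame header, no payload
```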


gRPC, arguably the RPC framework with the most momentum behind it right now, uses HTTP/2 as its transport.


Apple tends to do that a lot. Back when I worked at Apple I had noticed the same thing. Two services that are sharing memory end up communicating over HTTP and I always found it silly.


If you've worked at Apple recently, I'm sure you'd know that "two services sharing memory" will probably stop doing that and end up isolated in different processes as soon as it's possible to do so…


> Two services that are sharing memory end up communicating over HTTP and I always found it silly.

Why is that silly? It'd be easy to migrate the services away from a shared-memory architecture when required, and HTTP is well known and easy to use and debug.


My understanding is that there’s not much constant communication going on; bridgeOS hands over control to macOS and that’s it. So HTTP is not a major overhead. But I could be wrong.


Don't tell Jonathan Blow about this unless you want to kill him.


I have no special knowledge, but I assume it's because it is an encrypted-only protocol with significant research behind it.


Although many libraries only implement HTTP/2 with encryption, mandatory encryption never actually made it into the specification.


You are right-- good catch.


Simple explanation: maybe there was simply nobody who could come up with non-web programming? Webdevs are everywhere these days...


Maybe they implemented it with Electron and React?


> Passing baseDirectory sets an internal variable on the sysdiagnose server, but it is unclear from our manual analysis how this variable is used.

It looks like this might be the directory the completed sysdiagnose is saved to? I don't have a Mac with a Touch Bar to test this, so I'm guessing this based on analysis of /usr/bin/sysdiagnose pulled from a BridgeOS firmware.


Tangentially related, I have a wild idea: assuming Apple makes a transition to using ARM processors, existing Macs could be able to execute both ARM and x86 apps, just like they did with the Intel transition, but instead of translating machine code like with Rosetta, all Macs with a T2 chip can simply execute machine code on whichever processor the app was built for.

Here's how it would work: macOS already works with more than one CPU core. What if several of those cores just happened to be for a different architecture? When an app is launched, the OS would read the executable's header and schedule the app to run on the appropriate CPU. Things like producing sound, drawing to the screen, or getting user input are all done via the OS anyway; it doesn't really matter what architecture the app or OS is on, things would just get translated, like how communication is being shipped over the bus with this protocol.
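The header check described above essentially comes down to reading the `cputype` field of the Mach-O header at launch. A hypothetical sketch in Python (`pick_cpu` is made up for illustration; real Mach-O headers carry more fields, and universal binaries would need fat-header handling first):

```python
import struct

MH_MAGIC_64     = 0xFEEDFACF  # 64-bit Mach-O magic
CPU_TYPE_X86_64 = 0x01000007
CPU_TYPE_ARM64  = 0x0100000C

def pick_cpu(header: bytes) -> str:
    """Read the cputype field of a 64-bit Mach-O header and decide
    which kind of core to schedule the process on (hypothetical)."""
    magic, cputype = struct.unpack_from("<II", header, 0)
    if magic != MH_MAGIC_64:
        raise ValueError("not a 64-bit Mach-O binary")
    if cputype == CPU_TYPE_ARM64:
        return "arm64 cores"
    if cputype == CPU_TYPE_X86_64:
        return "x86_64 cores"
    raise ValueError(f"unsupported cputype {cputype:#x}")

# Fake minimal headers for illustration:
x86_hdr = struct.pack("<II", MH_MAGIC_64, CPU_TYPE_X86_64)
arm_hdr = struct.pack("<II", MH_MAGIC_64, CPU_TYPE_ARM64)
assert pick_cpu(x86_hdr) == "x86_64 cores"
assert pick_cpu(arm_hdr) == "arm64 cores"
```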

To me, it seems like the T2 chip is the real processor of newer Macs and the x86 chip is just some co-processor that the T2 chip offloads a bunch of work to. The T2 controls many of the internal components already and the x86 chip must interact with the T2 chip to interact with said components (e.g. booting, webcam, audio, SSD, etc). It will be interesting to see if this is the route that Apple takes with a supposed ARM transition.


Processors have not decreased all that much in price, though. It's unlikely that even Apple is interested in making each computer $100 or $200 more expensive when software emulation works almost as well and is much less work.


Apparently Apple's A12 costs around $72. It wouldn't be entirely ridiculous to include one of these in Macs (it would be awesome for iOS app dev, if nothing else).


Except that Apple is already shipping Macs with both processors.


I suspect that T2 is too slow for such a purpose.


T2 is a variant of A10 so it should handle apps.


Do you have a source on that? I was under the impression that it was closer to their Watch SoCs.


0x42133742 is clearly someone having a bit of fun. 42 from Hitchhiker’s and then 1337 for “leet”, short for elite.
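That reading is easy to check by splitting the hex digits:

```python
magic = 0x42133742
digits = f"{magic:08x}"     # "42133742"
assert digits[:2]  == "42"  # Hitchhiker's Guide
assert digits[2:6] == "1337"  # "leet"
assert digits[6:]  == "42"
```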



