
Honest question: Isn't this within the kind of behavior that AppStore reviews are supposed to prevent, at least if there isn't an app specific functional explanation for it? Does Apple have a list of what kind of behavior like this is tolerated or does word just get out about what they don't reject?


Two app stores with human review that I've had experience with are the iTunes App Store and the Amazon App Store. Here's what I've seen:

- iOS app review is very minimal. For the initial submission, they'll play around with the app for ~5 minutes. I've had updates approved without the app even being launched, and other times it's approved after someone simply logs in and launches the app on different devices. They are mainly concerned about policies, private APIs, etc. Things get stricter when you submit in-app purchases, but again those checks are more administrative than functional. So I don't think they would ever catch something like this.

- Amazon's testing is insanely detailed compared to Apple's (at least for the first submission - I haven't submitted updates yet). They tested the app on several Android devices and were also looking at data over the wire using, presumably, a client proxy. They will reject the app if you send up usernames/passwords without using SSL, for instance. They hit all the menu buttons and try most features. And they review all the permissions your app needs.


Well, since you only ever submit the compiled application binary to Apple, it'd be pretty darn hard for them to detect behaviour like this - especially if the code to do so is obfuscated, and/or the data is smuggled out via SSL (or worse, piggy-backed steganography-style onto other data).

Sometimes it's tempting to wonder whether the real purpose of the App Store review team is just to ensure developers aren't trying to access Private Frameworks (i.e. non-public APIs) or to upsell the customer while bypassing the 30% Apple tax.


Pulling contact data requires API calls that can be detected in the compiled binary (this is one way that Apple detects calls to unpublished APIs).
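For example, reading the contacts goes through the AddressBook framework, so even a minimal sketch like this (pre-iOS 6 C API, error handling omitted) leaves the framework's symbols in the binary's import tables, which is what an automated check can flag:

    #import <AddressBook/AddressBook.h>

    // The AddressBook calls below end up as undefined symbols in the
    // compiled Mach-O, visible to tools like `nm` or `otool`.
    NSArray *AllContacts(void) {
        ABAddressBookRef book = ABAddressBookCreate();   // pre-iOS 6 API
        NSArray *people = (__bridge_transfer NSArray *)
            ABAddressBookCopyArrayOfAllPeople(book);
        CFRelease(book);
        return people;
    }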

That said, it's humorous how a blatant abuse of trust such as this gets through unscathed but god help you if you try to access the iPod library the wrong way!


Well, the app could have legitimate reasons for linking to the required API (such as pretending to only use it after obtaining user confirmation), but then you could add additional obfuscated calls to the same API without prompting the user. So that wouldn't really help.


Perhaps, but when calls like this are noted, additional scrutiny could be applied to the application to ensure they aren't being abused (such as by using a proxy in the way the parent described).

There are other actions allowed by the SDK that seem to have little non-nefarious use, such as the ability to hide the fact that an application is transmitting and receiving data (the network "spinner" can be disabled by the application). As others have mentioned, it's interesting which API calls require authorization from the user and which do not.
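For reference, that spinner is entirely under the app's control - a one-line UIKit flag (assuming the standard UIApplication property), so an app that simply never sets it gives no visual hint that it's on the network:

    // Status-bar network activity spinner: shown only if the app asks for it.
    [UIApplication sharedApplication].networkActivityIndicatorVisible = YES;  // show
    [UIApplication sharedApplication].networkActivityIndicatorVisible = NO;   // hide (or just never set it)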


A postdoc in my lab published an academic paper that did exactly this: automated static analysis of iOS compiled binaries for privacy violations.

As far as I know Apple was not interested.

Here's the paper if you want to take a look: http://seclab.cs.ucsb.edu/media/uploads/papers/egele-ndss11....


Interesting. Quick question: how would you deal with things that call APIs via, for example, NSSelectorFromString, where the string is built in an obfuscated way?
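Something like this hypothetical snippet, say, where the method name never appears as a single literal in the binary:

    // Hypothetical sketch - the selector is assembled at runtime, so a
    // static scan for the method name finds nothing. (ARC will warn
    // about the dynamic performSelector:.)
    NSString *name = [NSString stringWithFormat:@"%@%@%@",
                      @"upload", @"Address", @"Book"];
    SEL sel = NSSelectorFromString(name);   // "uploadAddressBook" - made-up method
    if ([self respondsToSelector:sel]) {
        [self performSelector:sel];
    }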

(I'll go back and read the paper in more detail soon)


As I remember, the analysis doesn't handle calls that can't be determined statically.

So the analysis would fail to determine the class and method behind an obfuscated string.


I've received a rejection for using a "private" ivar (it was actually a framework doing it).

The ivar was in a public header, and was not marked @private, which is the only correct way to designate an ivar as private in Objective-C. Putting a comment above it saying "this is private" (which they did) doesn't count. It's protected, by definition.

NSActionCell.h, I think.


Eh, I don't think you're quite right here. @private means "Only accessible by this class and its instances, not parent, sibling or child classes." What Apple means by "private" in that case, though, is "Only for use by Apple, not outside vendors." If NSActionCell has private subclasses that need the variable, marking it @private would be flat-out wrong.


No, the correct way to do it in that case would be to mark the ivar as @private, and have a private category on the class with a @property definition for that ivar (or just getter/setter methods). Leaving the ivar as protected and relying on a header file comment is just sloppy. Protected implies that any subclass can use it, not just Apple-blessed subclasses.
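A sketch of that pattern (class names hypothetical; a class extension plays the role of the private category):

    #import <Foundation/Foundation.h>

    // Public header: the ivar really is private.
    @interface ActionCellLikeClass : NSObject {
    @private
        id _internalState;
    }
    @end

    // Implementation file: Apple-internal accessors live in a class
    // extension, invisible to outside subclassers.
    @interface ActionCellLikeClass ()
    @property (nonatomic, strong) id internalState;
    @end

    @implementation ActionCellLikeClass
    @synthesize internalState = _internalState;
    @end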


Apple could simply tell their SSL library to dump the raw data - they wrote it (or at least have source access to it) and have absolute control of the devices used for testing. Nothing hard at all.


They could do the same thing this guy did in an automated way (seed the device with unique data, then sniff the traffic for that data), but as you said there are many ways to obfuscate it.


Certainly. An even easier way is to have the app call home to a web service that returns "stealUserData: false" until the app is approved, after which you switch the web service response over to "stealUserData: true"....
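A rough sketch of that trick (the endpoint and flag name are made up, and the synchronous request is only there to keep the example short):

    // Ask the server whether it's "safe" to misbehave yet.
    NSURL *url = [NSURL URLWithString:@"https://api.example.com/flags"];   // hypothetical endpoint
    NSData *data = [NSData dataWithContentsOfURL:url];                      // synchronous for brevity
    NSDictionary *flags = data
        ? [NSJSONSerialization JSONObjectWithData:data options:0 error:NULL]
        : nil;
    if ([[flags objectForKey:@"stealUserData"] boolValue]) {
        // the server flips this to true only after the app clears review
        [self uploadAddressBook];                                           // hypothetical method
    }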


Yes, that's the official line. There are numerous examples of bad behaviour going live, though.


The app's explanation for it will be "Path can hook into your address book" - presumably for sending invites or messages to friends. However, at this point the cat's out of the bag and Path can do what they like with the data (albeit against App Store policy).

The problem is surely one of governance - it must be that the app reviewers simply don't see what's being posted, and where (whether through the sheer volume of apps they have to review or a lack of ability).

What's more, if Path had used HTTPS and validated against a CA, would we ever have found out what was being posted, short of live debugging?


The address book is uploaded using TLS/SSL and the author used mitmproxy.


D'oh. Would this man-in-the-middle attack have worked if Path had validated against a CA or a stored cert and only submitted the data when it was sure it wasn't being snooped on?
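For reference, pinning against a stored cert looks roughly like this in the NSURLConnection era (a sketch assuming a server-trust challenge; "pinned.der" is a hypothetical bundled resource):

    #import <Security/Security.h>

    // NSURLConnection delegate: compare the server's leaf certificate
    // against one shipped in the app bundle; refuse to talk on a mismatch.
    - (void)connection:(NSURLConnection *)connection
        willSendRequestForAuthenticationChallenge:(NSURLAuthenticationChallenge *)challenge
    {
        SecTrustRef trust = challenge.protectionSpace.serverTrust;
        SecCertificateRef serverCert = SecTrustGetCertificateAtIndex(trust, 0);
        NSData *remote = (__bridge_transfer NSData *)SecCertificateCopyData(serverCert);
        NSData *pinned = [NSData dataWithContentsOfFile:
            [[NSBundle mainBundle] pathForResource:@"pinned" ofType:@"der"]];

        if ([remote isEqualToData:pinned]) {
            [challenge.sender useCredential:[NSURLCredential credentialForTrust:trust]
                 forAuthenticationChallenge:challenge];
        } else {
            [challenge.sender cancelAuthenticationChallenge:challenge];  // likely a MITM proxy
        }
    }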


I've come across the latter, but it's not a difficult thing to get around if you're willing to play with the binary. You might be able to recognize the stored cert and sub it out with your own, or you can just ensure the branch that validates it never runs.


Presumably Apple could demand the ability to change the certificate an app validated against for testing purposes, if Apple cared enough to do that.


Nope. Turns out Siri was (at least originally, not sure if it still is) vulnerable to the same attack.


Honest answer: this is the kind of behavior that justifies the expense of writing multiple native versions of an app rather than a single website that is accessible from any browser but has only limited access to the data stored on the user's device.


Our app was recently rejected for exactly this reason, even though we had a "skip" button and contacts weren't just automatically 'farmed'. We had to add a popup with explicit allow/deny buttons, and then the app passed subsequent reviews.
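Roughly what that looked like, for anyone curious (a sketch; everything except the UIAlertView API is hypothetical):

    - (void)askBeforeAccessingContacts {
        UIAlertView *prompt = [[UIAlertView alloc]
              initWithTitle:@"Find Friends"
                    message:@"Upload your contacts to find friends using this app?"
                   delegate:self
          cancelButtonTitle:@"Don't Allow"
          otherButtonTitles:@"Allow", nil];
        [prompt show];
    }

    // UIAlertViewDelegate callback
    - (void)alertView:(UIAlertView *)alertView clickedButtonAtIndex:(NSInteger)buttonIndex {
        if (buttonIndex != alertView.cancelButtonIndex) {
            [self uploadContacts];   // hypothetical; runs only on an explicit "Allow"
        }
    }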



