Hacker News

I think the EFF is probably doing good by calling attention to the issue, but let's... actually look at the feature before passing judgment, e.g.:

https://twitter.com/josephfcox/status/1423382200880439298/ph...

- It's run in Messages in cases where a child is potentially viewing sexually explicit material.

- It's run _before upload to iCloud Photos_ - where it would've already been scanned anyway, as they've done for years (and as all other major companies do).

To me this really doesn't seem that bad. Feels like a way to actually reach encrypted data all around while still meeting the expectations of lawmakers/regulators. Expansion of the tech would be something I'd be more concerned about, but considering the transparency of it I feel like there's some safety.

https://www.apple.com/child-safety/ more info here as well.



True. But, first, it also means anyone, anywhere, as long as they use iOS, is vulnerable to what the US considers to be proper. Which, I will agree, likely won’t be an issue in the case of child pornography. But there’s no way to predict how that will evolve (see Facebook’s ever expanding imposing of American cultural norms and puritanism).

Next, it also means they can do it. And if it can be done for child pornography, why not terrorism? And if it can be done for the US’ definition of terrorism, why not China's, Russia's or Saudi Arabia's? And if terrorism and child pornography, why not drugs consumption? Tax evasion? Social security fraud? Unknowingly talking with the wrong person?

Third, there apparently is transparency on it today. But who is to say it's possible expansion won't be forcibly silenced in the same way Prism's requests were?

Fourth, and that's only because I'm slightly a maniac: how can anyone unilaterally decide to waste the computing power, battery life and data plan of a device I paid for, without my say-so? (Probably one of my main gripes with ads.)

All in all, it means I am incorporating into my everyday life a device that can and will actively snoop on me and potentially snitch on me. Now, while I am not worried today, it definitely paves the way for many other things. And I don't see why I should trust anyone involved to stop here or let me know when they don’t.


What transparency? The algorithm is secret. The reporting threshold is secret. The database of forbidden content is secret.


Like always with Apple. But at least they’re saying they are doing it and supposedly detailing how.

Regarding the amount of secrecy on the how, it’s the way they do everything.

As for "sharing" the database of "forbidden content", freely or not, that would be sharing child pornography.


They could reveal the algorithm, but then abusers could find ways to alter an image just enough to bypass it.

The safest step is not to start this in the first place, because it will create more problems than it solves.



The secret threshold is a parameter in that protocol. It runs after the secret hash algorithm.
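For context, Apple's published overview describes a threshold secret sharing scheme: each hash match releases one share of a decryption key, and the server learns nothing until a (secret) threshold of matches is crossed. A toy sketch of that threshold property using textbook Shamir secret sharing (illustrative only, not Apple's actual PSI construction; the prime, threshold, and key below are made up):

```python
import random

PRIME = 2**61 - 1          # a Mersenne prime; the field for the toy scheme
random.seed(0)             # deterministic demo

def make_shares(secret, threshold, count):
    """Split `secret` into `count` Shamir shares; any `threshold` of them recover it."""
    coeffs = [secret] + [random.randrange(1, PRIME) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, count + 1)]

def recover(shares):
    """Lagrange interpolation at x=0; yields garbage with fewer than `threshold` shares."""
    secret = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

key = 123456789                                    # stand-in for a voucher decryption key
shares = make_shares(key, threshold=3, count=10)   # one share per hash match
assert recover(shares[:3]) == key    # 3 matches: key recoverable
assert recover(shares[:2]) != key    # 2 matches: statistically useless
```

The point of the construction is that below the threshold the server holds shares that are indistinguishable from random; the threshold value itself is just another parameter it can keep private.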


> is vulnerable to what the US considers to be proper

This stirs up all sorts of questions about location and the prevailing standards in the jurisdiction you're in. Does the set of hashes used to scan change if you cross an international border? Is the set locked to whichever country you activate the phone in? This could be a travel nightmare.


As this isn't a list of things the U.S. finds prudish, but actual images of children involved in being/becoming a victim of abuse, it doesn't look like there are borders, at least according to the official Apple explanation[0].

If the situation OP suggests happens in the form of FBI/other orgs submitting arguably non-CSAM content, then Apple wouldn't be complicit or any wiser to such an occurrence unless it was after-the-fact. If it happens in a way where Apple decides to do this on their own dime without affecting other ESPs, I imagine they wouldn't upset CCP by applying US guidance to Chinese citizens' phones.

0: https://www.apple.com/child-safety/


It’s still a US database, and from its goal it would fit the US definition of child abuse.

I am no expert on what it means in the US, but I’d assume there can be a lot of definitions of what “child”, “abuse” and “material” mean depending on beliefs.


I think your points are mostly accurate, and that's why I led with the bit about the EFF calling attention to it. Something like this shouldn't happen without scrutiny.

The only thing I'm going to respond to otherwise is this:

>Fourth, and that's only because I'm slightly a maniac: how can anyone unilaterally decide to waste the computing power, battery life and data plan of a device I paid for, without my say-so? (Probably one of my main gripes with ads.)

This is how iOS and apps in general work - you don't really control the amount of data you're using, and you never did. Downloading a changeset of a hash database is not a big deal; I'd wager you get more push notifications with data payloads in a day than this would be.

Battery life... I've never found Apple's on-device approaches to be the culprit of battery issues for my devices.

I think I'd add to your list of points: what happens when Google inevitably copies this in six months? There really is no competing platform that comes close.


> what happens when Google inevitably copies this in six months? There really is no competing platform that comes close.

Then you have to make a decision about what matters more. Convenience and features, or privacy and security.

I've made that decision myself. I'll spend a bit more time working with less-than-perfect OSS software and hardware to maintain my privacy and security.


> This is how iOS and apps in general work - you don't really control the amount of data you're using, and you never did. Downloading a changeset of a hash database is not a big deal; I'd wager you get more push notifications with data payloads in a day than this would be.

Oh, definitely. But I am given the ability to remove those apps, or to disable those notifications, and I consider the ones I leave to be of some value to me. This? On my phone? It’s literal spyware.

But, as I said, it's only because I am a maniac regarding how tools should behave.

The point you add about Google, however, is a real issue. I’ve seen some people mention LineageOS and postmarketOS, but that isn’t really a solution for most people.


To be clear: don't use iCloud Photos and you won't get compared to hashes.

This is, currently, the same opt-out of hash checking anyway, as it's already checked on the server.


The problem with the “there’s no way to predict how this will evolve” argument is that it would apply equally as well years before this was announced, and to literally anything that Apple could theoretically do with software on iPhones.


Well it does - people have been pointing out downfalls of the walled garden, locked boot loaders and proprietary everything on "iDevices" for years, pointing to scenarios similar to the one unfolding right now.


That’s the problem with closed-source software in general, and one that can be remotely updated in particular.

And I am writing that as a (now formerly?) happy iPhone user. It’s just that I don’t trust it or Apple as much anymore.

And although there is no way of predicting with certainty how it will evolve, most past successful similar processes and systems usually went down the anti-terrorism & copyright enforcement roads.


Indeed. That's why some people have been consistently arguing against proprietary software and against locked-down platforms for decades.


> it also means anyone, anywhere,

It's only in the US, not in Europe or the rest of the world.


The US is very openly, publicly moving down the road called The War on Domestic Terrorism, which is where the US military begins targeting, focusing in on the domestic population. The politicians in control right now are very openly stating what their plans are. It's particularly obvious what's about to happen, although it was obvious at least as far back as the Patriot Act. The War on Drugs is coming to an end, so they're inventing a new fake war to replace it, to further their power. The new fake war will result in vast persecution just as the last one did.

You can be certain what Apple's scanning is going to be used for is going to widen over time. That's one of the few obvious certainties with this. These things are a Nixonian wet dream. The next Trump type might not be so politically ineffectual; more likely that person will be part of the system and understand how to abuse & leverage it to their advantage by complying with it rather than threatening its power as an outsider. Trump had that opportunity, to give the system what it wanted; he was too obtuse and rigid to understand he had to adapt or the machine would grind him up (once he started removing the military apparatus that was surrounding him, like Kelly and Mattis, it was obvious he would never be allowed to win a second term; you can't keep that office while being set against all of the military industrial complex including the intelligence community, it'll trip you up on purpose at every step).

The US keeps getting more authoritarian over time. As the government gets larger and more invasive, reaching ever deeper into our lives, that trend will continue. One of the great, foolish mistakes that people make about the US is thinking it can be soft and cuddly like Finland. Nations and their governments are a product of their culture. So that's not what you're going to get if you make the government in the US omnipotent. You're going to get either violent Latin American Socialism (left becomes dominant) or violent European Fascism (right becomes dominant). There's some kind of absurd thinking that Trump was right-wing, as in anti-government or libertarian; Trump is a proponent of big government, just as Bush was, that's why they had no qualms about spending like crazy (look at the vast expansion of the government under Bush); what they are is the forerunners to fascism (which is part of what their corporatism is), they're right wingers that love big government, a super dangerous cocktail. It facilitates a chain of enabling over decades; they open up Pandora's boxes and hand power to the next authoritarian. Keep doing that and eventually you're going to get a really bad outcome (Erdogan, Chavez, Putin, etc) and that new leadership will have extraordinary tools of suppression.

Supposed political extremists are more likely to be the real target of what Apple is doing. Just as is the case with social media targeting & censoring those people. The entrenched power base has zero interest in change, you can see that in their reaction to both Trump and Sanders. Their interest is in maintaining their power, what they've built up in the post WW2 era. Trump and Sanders, in their own ways, both threatened what they constructed. Trump's chaos threatened their built-up system, so the globalists in DC are fighting back, they're going to target what they perceive as domestic threats to their system, via their new War on Domestic Terrorism (which will actually be a domestic war on anyone that threatens their agenda). Their goal is to put systems in place to ensure another outsider, anyone outside of their system, can never win the Presidency (they don't care about left/right, that's a delusion for the voting class to concern themselves about; the people that run DC across decades only care if the left/right winner complies with their agenda; that's why the Obamas and Clintons are able to be so friendly with the Bushes (what Bush did during his Presidency, such as Iraq, is dramatically worse than anything Trump did, and yet Bush wasn't impeached, wasn't pursued like Trump was, the people in power - on both sides - widely supported his move on Iraq), they're all part of the same system so they recognize that in each other, and reject a Trump or Sanders outsider like an immune system rejecting a foreign object).

The persistent operators in DC - those that continue to exist and push agenda regardless of administration hand-offs - don't care about the floated reason for what Apple is doing. They care about their power and nothing else. That's why they always go to the Do It For The Kids reasoning, they're always lying. They use whatever is most likely to get their agenda through. The goal is to always be expanding the amount of power they have (and that includes domestically and globally, it's about them, not the well-being of nations).

We're entering the era where all of these tools of surveillance they've spent the past few decades putting into place will start to be put into action against domestic targets en masse, where surveillance tilts over to being used for aggressive suppression. That's what Big Tech is giddily assisting with the past few years, the beginning of that switch-over process. The domestic population doesn't want the forever war machine (big reasons Trump & Sanders are so popular is that both ran on platforms opposed to the endless foreign wars); the people that run DC want the forever war machine, it's their machine, they built it. Something is going to give, and it's obvious what that's going to be (human liberty at home - so the forever wars and foreign adventurism can continue unopposed).

Systems of power always act to defend and further that power. Historically (history of politics, war, governmental systems) or psychologically (the pathology of power lusting) there isn't anything surprising about any of it, other than perhaps that so many are naive about it. I suspect most of that supposed naivety is actually fear of confrontation though (you see the same thing in the security/privacy conflicts), playing dumb is a common form of self-defense against confrontation. To recognize the growing authoritarianism, requires a potent act of confrontation mentally (and then you either have to put yourself back to sleep (which requires far more effort), or deal with the consequences of that stark reality laid bare).


The US is imposing puritanism? Oh man, on which planet do you live?


The one on which a social network used by nearly 3 billion people worldwide (Facebook) bans pictures of centuries old world famous paintings containing naked women, as if it were pornography.

The one on which a video hosting platform used by over 2 billion people (YouTube) rates content as 18+ as soon as it, even briefly, shows a pair of breasts.

Why? Which planet do you live on?


Try not living in the US for example.


So your argument is, if you've done nothing wrong, you have nothing to worry about. Really? Will you feel the same when Apple later decides to include dozens more crimes that they will screen for, surreptitiously? All of which are searches without warrants or legal oversight?

Let me introduce you to someone you should know better. His name is Edward Snowden. Or Louis Brandeis, who is spinning in his grave right about now.

The US Fourth Amendment exists for a damned good reason.


You do realize you could get this message across without the needlessly arrogant tone, yeah? All it does is make me roll my eyes.

Anyway, that wasn't my stated position. I simply pointed out that this is done for a subset of users (where there's already existing reasons to do so, sub-13 and all) and that on syncing to iCloud this _already happens anyway_.

I would gladly take this if it removes a barrier to making iCloud E2E encrypted; they are likely bound to do this type of detection, but doing it client-side before syncing feels like a sane way to do it.


Actually, I don't think it will remove a barrier for iCloud E2E encryption at all. On the contrary. All it will remove is the barrier for what we find acceptable for companies like Apple to implement. I think Apple made a very intrusive move, one that we will come to accept over time. After that, a next move follows... and so on. That's the barrier being moved. A point will be reached when E2E encryption is nothing more than a hoax, a non-feature with no added value. A mirage of what it is supposed to be.

All of these things are implemented under the Child Protection flag. Sure, we need child protection, we need it badly, but the collateral is huge, and quite handy too for most three-letter agencies.

I don't have the solution. The other day my 3-year-old son had a rash, and I took pictures of it over the course of a few days. A nude little boy, pictures from multiple angles. I showed my dermatologist. What will happen in the future? Will my iPhone "flag" me as a potential child predator? Can I tell it I'm a worried dad? Do I even have to be thinking about these things?


> I would gladly take this if it removes a barrier to making iCloud E2E encrypted; they are likely bound to do this type of detection, but doing it client-side before syncing feels like a sane way to do it.

But there is an issue there. Now there is a process on your phone capable of processing unencrypted data on your phone and communicating with the outside world. That is spyware which will almost certainly be abused in some way.


> Now there is a process on your phone capable of processing unencrypted data on your phone and communicating with the outside world.

What? That’s what all apps by definition do. My retinas can’t do decryption yet!


From the proposal it seems that this system 1. cannot be opted out of and 2. can run at any time the phone is powered on without user consent.

That puts it clearly in a different category from apps.


E2E encryption isn’t meaningful if third parties get to look at the decrypted content.


Hmm, it seems to me that since most smart criminals understand not to leave a digital footprint, what Apple will catch is those who are careless and make an honest mistake, and those who are dumb enough to put their illegality online.

So I would ask US lawmakers: why can't the phone companies make the same commitments? The stated reason seems to be that we have bad people doing crime using digital communication devices.

Last time I checked, the digital pipeline, i.e. phone lines, is still under FCC rules, is it not?

If they answer that it's too hard tech-wise, then why can't Apple make the same argument to lawmakers?


Teens are also children. Apple has no business checking if they send or receive nude pics. Let alone tell their parents. This is very creepy behavior from Apple.

Edit: I'm talking about this https://pbs.twimg.com/media/E8DYv9hWUAksPO8?format=jpg&name=...


The fact that teens are children means that if, say, a 16-year-old sends a nude selfie to their s.o., they've just committed a felony (distributing child pornography) that can have lifelong consequences. Thanks to hysterical laws about sex offender registries, both kids could end up having to register as sex offenders for the rest of their lives and will be identified as having committed a crime that involved a minor. Few if any of the registries would say more than this, and anyone who looks in the registry will be led to believe that they molested a child, not that they shared a selfie or had one shared with them. The laws may not be just or correct, but they are the current state of the world. Parents need to talk to their kids about this sort of thing, and this seems one of the less intrusive ways for them to discover that there's an issue. If it were automatically shared with law enforcement? That would be a big problem (and a guarantee that my kids don't get access to a device until they're 18), but I'm not ready¹ to be up in arms about this yet.

1. I reserve the right to change my mind as things are revealed/developed.


> they've just committed a felony (distributing child pornography)

In the US, maybe (not sure if this is even true in all states), but not in most other countries in the world, where a 16-year-old is not a child, nudity is not a problem, and "sex offender registries" don't exist.

The US is entitled to make its own (crazy, ridiculous, stupid) laws, but we shouldn't let them impose those on the rest of us.


Yet for the largest part this is where we are ending up. Just look at Facebook and Twitter deciding what is right and wrong. I think that's wrong in a lot of ways, but apparently there is very little the EU and others can do about it.


There is a lot the EU could do if it wanted to, or had any kind of courage. But we are weak and fearful.


I’d argue that the problem of minors declared sex offenders for nude pictures has reached a critical mass that scares me. At this point, sex offenders of truly vile things can hide by saying that they are on a sex offender registry because of underage selfies. And I think most people will believe them.


Am I missing something? Why would they say they're sex offenders at all?


As I understand it, one of the primary purposes of sex offender registries is to get information out about who is on them. I believe some people are forced to knock on doors in their neighbourhood and disclose it. In other situations they would just be getting ahead of the story.


They wouldn't. But if they were sex offenders, they could claim that their offense was simply sending a nude when they were 16. While their real offense may have been rape.

I don't actually know what the law is in regard to sex offense. I'm simply explaining what I understood from the previous comment.


I worked with someone that claimed this, years ago. And they were still young enough that I believed them.


> if, say a 16-yo sends a nude selfie to their s.o., they've just committed a felony ... The laws may not be just or correct, but they are the current state of the world.

Hence, strong E2E encryption designed to prevent unjust government oppression, without backdoors.

Parents should talk to their teenagers about sex regardless of if they get a notification on their phone telling them they missed the boat.


I get your points, but the end result is that the client in an E2EE system can no longer be fully trusted to act on the client's behalf. That seems alarming to me.


> The laws may not be just or correct, but they are the current state of the world

America, not Europe. Or Japan.


> Apple has no business checking if they send or receive nude pics. Let alone tell their parents.

Some people might disagree with you.

There are people out there who are revolted by the "obviously okay" case of 2 fully-consenting teenagers sending each other nude pics, without any coercion, social pressure, etc.

Not to mention all the gray areas and "obviously not okay" combinations of ages, circumstances, number of people involved, etc.


It will be the parents who are deciding this - particularly if they are buying these phones.

If parents don't like this feature, they can buy a lineage OS type phone. If parents do they will buy this type of phone for their kids.


Big correction: it will be the other kids’ parent who will decide for your kid. Apple will give your children’s picture to the other kid’s parents.

That’s terrifying.


What a lie - I'm really noticing an overlap between folks fighting this type of stuff (which as a parent I want) and folks just lying horribly.

"if a child attempts to send an explicit photo, they’ll be warned before the photo is sent. Parents can also receive a message if the child chooses to send the photo anyway." - https://techcrunch.com/2021/08/05/new-apple-technology-will-...

So this gives kids a heads up that they shouldn't send it, and that if they do, their parent will be notified. So that's me in case you are not reading this clearly.

Now yes, if someone is SENDING my child porn from a non-child account, I as the parent will be notified. Great.

If this is terrifying - that's a bit scary! HN is going a bit off the rails these days.

Allow me to make a prediction - users are going to like this - it will INCREASE their trust in apple in terms of a company trying to keep them and their family safe.

I just looked it up - Apple is literally the #1 brand globally supposedly in 2021. So they are doing the right thing in customers minds so far.


Nobody's lying. Consider the scenario where the sender is an Android user and the recipient is an iOS user.


"it will be the other kids’ parent who will decide for your kid. Apple will give your children’s picture to the other kid’s parents."

This is an absolute falsehood. Literally a total flat lie. Have people not read the paper?

Apple does not give android phone users the child porn. They block it on device. So no, it will not be the other children's parents who will decide; it will be me, if my child is sending it.

And if someone sends my child a photo of child porn etc, then apple will block it and will notify me and that is also fine - if you are sending my child child porn - then I have a right to know - and I will pursue you aggressively as almost any parent will I think.

The idea that apple is doing something wrong here is ridiculous. The lying about what apple is doing is also ridiculous.


Could you elaborate on what you think would happen in this case, vs. an iOS user sending an image to another iOS user?


Presumably the sender won't get a notification that the picture they are about to send will be flagged, but it will be flagged by the recipient's device automatically.

Neither participant will have the opportunity to be warned in advance and avoid getting flagged.


The claim that if you buy an apple phone that it will notify the OTHER kids parents is a flat out lie.

Yes, android users sending porn to kids will be flagged by apple devices if the kid has one. My point: parents may like this.


That's entirely GP's point: preferring to cater to those people affects the rest of us in a way we find detrimental.


Children as defined by Apple differs per legal region, for the US it is set to 13 years or younger. Also, your parents need to have added your account to the iCloud family for the feature to work.


Call me crazy, but if your 13yo is sending nudes, I think that as a parent you want to know that.

Current society is pushing a lot of adult behavior into kids, and they don’t always understand the consequences of their actions.

Parents can’t inform their kids if they aren’t aware.


Wanting to know as a parent, and the way Apple is going about this are two different issues.

The government also wants to know about potential terrorist attacks. Why not scan all our phones for all kinds of data to protect innocent people from being killed by mass shootings?

That's nonsense. I'm saying that and I'm deeply locked in Apple's ecosystem. Which is pissing me off.


> Parents can’t inform their kids if they aren’t aware

Why not inform your children of the potential consequences when giving them a phone? Why do you need Apple to notify you of inappropriate behavior before having that conversation? That's like waiting until you find a pregnancy test in the garbage before talking to your children about sex.


Yes, that’s great, but that’s not how kids work, especially 13-year-olds.


I wouldn't give a 13 year old an iphone, but I guess that makes me odd.


Yeah good luck with that.


> Apple has no business checking if they send or receive nude pics.

Furthermore, if Apple is deliberately, by design resending them to additional people beyond the addressee, as a feature it markets to the people that will be receiving them, that seems to...raise problematic issues.


Who said Apple re-sends pictures?


It would be if that were what they were doing. They are not.


Yes, it is, and you need to read the entire article.


I think you're both partially right.

> In these new processes, if an account held by a child under 13 wishes to send an image that the on-device machine learning classifier determines is a sexually explicit image, a notification will pop up, telling the under-13 child that their parent will be notified of this content. If the under-13 child still chooses to send the content, they have to accept that the “parent” will be notified, and the image will be irrevocably saved to the parental controls section of their phone for the parent to view later. For users between the ages of 13 and 17, a similar warning notification will pop up, though without the parental notification.

This specifically says that it will not notify the parents of teens, as GGP claims. So GP is right that Apple isn't doing what GGP claimed. However I still think you might be right that GP didn't read the full article and just got lucky. Lol.
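The quoted flow can be sketched as a simple decision function (a reconstruction from the article's description, not Apple's code; the return strings are invented for illustration):

```python
def message_flow(age, classifier_flags_explicit, child_proceeds):
    """Toy sketch of the described Messages warning flow for child accounts.

    Per the quote: under-13 accounts get a warning plus parental notification
    if they proceed; 13-17 accounts get the warning only.
    """
    if not classifier_flags_explicit:
        return "delivered normally"
    if not child_proceeds:
        return "warned; not sent"
    if age < 13:
        # Under-13: parent is notified and the image is retained for review.
        return "sent; parent notified"
    if age < 18:
        # 13-17: warning only, no parental notification.
        return "sent after warning; no parental notification"
    return "delivered normally"   # feature applies only to child accounts

assert message_flow(12, True, True) == "sent; parent notified"
assert message_flow(15, True, True) == "sent after warning; no parental notification"
```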


That's not what this solution is doing, it's checking a hash of the photo against a hash of known offending content.

If someone sends nude pics there is still no way to tell that it's a nude pic.
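For the iCloud Photos feature, the check is indeed a lookup of a perceptual hash against a database of known material. A toy sketch of that general idea, using a naive average-hash (nothing like Apple's NeuralHash, which is a learned function, and the real matching happens inside a private set intersection protocol rather than a plain lookup; the image and "blocklist" here are invented):

```python
def average_hash(pixels):
    """Toy perceptual hash: one bit per pixel, set if above the image mean.

    `pixels` is a small grayscale grid (already downscaled); real perceptual
    hashes are far more robust to cropping, re-encoding, etc.
    """
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def matches_blocklist(pixels, blocklist, max_hamming=2):
    """Flag an image whose hash is within `max_hamming` bits of a known hash."""
    h = average_hash(pixels)
    return any(bin(h ^ known).count("1") <= max_hamming for known in blocklist)

# Invented example: a 4x4 "image" whose hash is in the blocklist.
img = [[10, 200, 30, 220],
       [15, 210, 25, 215],
       [12, 205, 35, 225],
       [18, 195, 28, 230]]
blocklist = {average_hash(img)}
assert matches_blocklist(img, blocklist)        # exact match flags
tweaked = [row[:] for row in img]
tweaked[0][0] += 5                              # small perturbation
assert matches_blocklist(tweaked, blocklist)    # still within tolerance
```

The key property is that matching is against specific known images, so a never-before-seen nude photo produces no match at all.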


You're conflating the CSAM detection of photos uploaded to iCloud with the explicit detection for child devices. The latter is loosely described here: https://www.apple.com/child-safety/.

> Messages uses on-device machine learning to analyze image attachments and determine if a photo is sexually explicit. The feature is designed so that Apple does not get access to the messages.


Ah yeah you are right, two posts about different Apple features for photo scanning on a single day has thrown me!


That’s only the first part of what was announced and addressed in the article.

The other part is on-device scanning for nude pics a child is intending to send using machine learning and securely notifying the child, and then parents within the family account. The alert that the kids get by itself will probably be enough to stop a lot of them from sending the pic in the first place.


I'm a parent. It's weird seeing HN push against this.

This sounds like a feature I'd like


If you are worried about the photos your kids are sending/receiving the problem isn't in the device.

I won't use my own kids devices as information weapons against them. I will teach them what kind of information weapons they are however.


If you are talking about parents giving their children a device to use as a weapon against them, you are implying some really bad parenting and really bad general human behavior by some other parents.

Not that there aren't some really lousy parents out there.

I suspect/hope the exclusion parenting style (you don't get a phone because I don't trust you with it or the rest of the world with you) and the guidance parenting style (you get a phone but I'm going to make sure to talk to you about responsibility and about lousy people in the world, and I want to know if something happens so we can keep talking) both far outweigh this sort of proposed 'entrapment' parenting style.


Good lord - people have scary parenting styles - "information weapons"???

A phone is a tool. You start slowly. Do you give a 3 year old a chainsaw or machine gun right away? No.

I grew up and learned to shoot. I started with a BB gun, then a .22LR rifle, etc. You start with stuff less dangerous.

A phone potentially exposes children to all sorts of crap.

It's not some horrible thing to start slowly with something a bit locked down (and yes, blocking porn is part of that), then it opens up as they get mature.

This conversation has really made me look at the HN community in a different way. I've worked with kids in lots of contexts and with parents etc - the HN feelings about apple's efforts in this area are going to be somewhat extreme outliers relative to population as a whole and particularly vis a vis parents.


It seems like the "guiding parent" style is acceptable to you.

And yes, a phone - and the web and computing in general - are information weapons. Same as a knife or a hammer they have utility too which is great and also something to be lauded. But dangers must be recognised.

I won't trust Apple or Google to draw the line in the sand for me. Both the kid and myself will know what the expectations are and be talking about it the whole way through growing up.


I agree with you in principle, but I also know that kids will soon find methods of sharing that defeat any scans. Other apps and ephemeral websites can be used to escape Apple’s squeaky-clean version of the world.


Sure - that's fine.

But if I'm picking a phone for my kid, and my choice is this (even if imperfect) and the HN freedomFone - it's going to be Apple. We'll see what other parents decide.


@philistine is right that Apple's solution can only work for the short term. Parents want an easy, automatic, "out of sight, out of mind" solution, but the only solution that will work long-term is talking with kids and educating them instead of outsourcing parenting to a tech company.

It's easy to design a workaround that lets either criminals or kids send nude pictures under the radar, e.g.:

1) Someone will eventually write a web app that uses WebRTC or similar to snap the nude picture (so that no picture is stored on the device)

2) The photos get encrypted on the server

3) A link to the nude image is sent, and the image is rendered in an HTML canvas (again, no image is stored on the device)

4) The web app that renders the image sits behind a captcha, so automated bots cannot scan it

Now, do we want to go down the rabbit hole of 'protecting the kids' and make Apple build camera drivers that filter the entire video stream for nudity?


> the only solution that will work long-term is talking with kids and educating them instead of outsourcing parenting to a tech company.

Sometimes otherwise good, smart kids, with good parents, can be caught up by predation. It isn't always as simple as making sure the kid is educated; they'll still be young, they'll still be immature (< 13!), and it will sometimes be possible to manipulate them. I'm not saying that what Apple is doing is a good answer, it probably isn't, but it's an answer to a genuine felt need.


Under 13 seems like an OK cutoff, but I'd be very concerned that they push it to under 18. Pretty much everyone is sharing adult content responsibly at 17.


False, and this shows that you didn't read the entire article. You should go and do that.


Nude pic ID is routine online. Facebook developed this capability over 5 years ago and employs it liberally today, as do many other net service providers.


Not true. We don't know how the fuzzy hash works. It's very likely a lot of nudes would fall within the threshold Apple has set.
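For what it's worth, Apple hasn't published NeuralHash's internals, so nobody outside Apple knows the real threshold behavior. But the general shape of perceptual ("fuzzy") hashing can be sketched with a toy average hash: a near-duplicate of a known image lands within a small Hamming distance of its hash, while unrelated images land far away. Everything here (the 8x8 grids, the threshold of 10) is illustrative, not Apple's actual algorithm:

```python
# Toy perceptual "average hash" -- NOT NeuralHash, just an illustration of
# how fuzzy image hashing works in general.

def average_hash(pixels):
    """pixels: 8x8 grid of grayscale values (0-255). Returns a 64-bit int."""
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        # Set a bit for each pixel brighter than the image's average.
        bits = (bits << 1) | (1 if p >= avg else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def matches(img_hash, known_hashes, threshold=10):
    """A 'fuzzy' match: within `threshold` bits of any known hash."""
    return any(hamming(img_hash, h) <= threshold for h in known_hashes)

# A known image and a slightly brightened copy hash almost identically...
original   = [[(r * 8 + c) * 4 for c in range(8)] for r in range(8)]
brightened = [[min(255, p + 5) for p in row] for row in original]
# ...while an unrelated image lands far away in Hamming distance.
unrelated  = [[255 - r * c * 3 for c in range(8)] for r in range(8)]

db = [average_hash(original)]
print(matches(average_hash(brightened), db))  # True: near-duplicate
print(matches(average_hash(unrelated), db))   # False: no match
```

The open question the parent raises is exactly where that threshold sits for NeuralHash, and how often visually distinct photos collide.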


This is not how the system works at all.




Answered incorrectly. You need to read the rest of the article.


Parents do have a legal and moral responsibility to check on their children’s behaviour, and that includes teens. It’s somewhat analogous to a teacher telling parents about similar behaviour taking place at school.

I suspect a lot of how people feel about this will come down to whether they have kids or not.


I don't know about your country but in mine in Europe teens have privacy rights OVER their parents' paranoia.

That includes secrecy in private communications and OFC privacy within their own data in smartphones.


You describe it as if Apple's got people in some room checking each photo. It's some code that notifies their parents in certain situations. ;P

I know several parents in just my extended circle alone that would welcome the feature, so... I just don't think I agree with this statement. These parents already resort to other methods to try and monitor their kids but it's increasingly (or already) impossible to do so.

I suppose we should also take issue with Apple letting parents watch their kids location...?


Since nobody would ever object to it, protecting against child abuse gets used as a wedge. As the article points out, the way this story ends is with this very backdoor getting used for other things besides preventing child abuse: anything the government asks Apple to give them. It's an almost inevitable consequence of creating a backdoor in the first place, which is why you have to have a zero-tolerance policy against it.


My big issue is what it opens up. As the EFF points out, it's really not a big leap for oppressive governments to ask Apple to use the same tech (as demoed by using MS's tech to scan for "terrorist" content) to remove content they don't like from their citizens' devices.


That's my concern: what happens the first time a government insists that they flag a political dissident or symbol? The entire system is opaque by necessity for its original purpose, but that seems to suggest it would be easy to do things like serve custom fingerprints to particular users without anyone being the wiser.


My heart goes to the queer community of Russia, whose government will pounce on this technology in a heartbeat and force Apple to scan for queer content.


They’d have many other countries keeping them company, too.

One big mess: how many places would care about false positives if that gave them a pretext to arrest people? I do not want to see what would happen if this infrastructure had been available to the Bush administration after 9/11 and all of the usual ML failure modes played out in an environment where everyone was primed to assume the worst.


First, standard disclaimer on this topic that there were multiple independent technologies announced - I assume you are speaking to content hash comparisons on photo upload specifically to Apple's photo service, which they are doing on-device vs in-cloud.

How is this situation different from an oppressive government "asking" (which is a weird way we now use to describe compliance with laws/regulations) for this sort of scanning in the future?

Apple's legal liability and social concerns would remain the same. So would the concerns of people under the regime. Presumably the same level of notification and ability of people to fight this new regulation would also be present in both cases.

Also, how is this feature worse than other providers which already do this sort of scanning on the other side of the client/server divide? Presumably Apple does it this way so that the photos remain encrypted on the server, and release of data encryption keys is a controlled/auditable event.

You would think the EFF would understand that you can't use technical measures to either fully enforce or successfully defeat regulatory measures.


> It's run _before upload to iCloud Photos_ - where it would've already been scanned anyway

Right, so ask yourself, why is it on the device? Why not just scan on the server?

To me (agreeing with much of the commentary I've seen), the likeliest answer is that they are confining the scan to pre-upload photos for now, not for any technical reason, but to make the rollout palatable to the public. Then they're one update away from quietly changing the rules. There's absolutely no reason to do the scan on your private device if they plan to confine this to stuff they could scan away from your device.


Well for one they get to use YOUR compute power (and energy) to create the hashes. Sounds like a nice way to save money.


> - It's run _before upload to iCloud Photos_ - where it would've already been scanned anyway, as they've done for years (and as all other major companies do).

Then why build this functionality at all? Why not wait until it's uploaded and check it on their servers and not run any client side code? This is how literally every other non-encrypted cloud service operates.


I assume (and this is my opinion, to be ultra-clear) that it's a blocker for E2E encryption. As we've seen before, they wanted to do it but backed off after government pressure. It wouldn't surprise me if this removes a blocker.

Apple has shown that they prefer pushing things to be done on-device, and in general I think they've shown it to be a better approach.


From what I remember iCloud is only encrypted at rest but not E2E. Apple can decrypt it anytime.

The password manager (Keychain) is the only fully encrypted part of iCloud; if you lose your devices or forget the main password, the manager will empty itself. This does not happen with any other part of iCloud.


...yes, I'm saying that I think they _can't_ get to E2EE on iCloud without moving something like this client side.


Oh that would make sense.


That really makes little to no sense - it's not E2EE if you're going to be monitoring files that enter the encrypted storage. That's snakeoil encryption at that point.

I sincerely doubt Apple is planning to do E2EE with iCloud storage considering that really breaks a lot of account recovery situations & is generally a bad UX for non-technical users.

They're also already scanning for information on the cloud anyway.


Eh, I disagree - your definition feels like moving the goalposts.

Apple is under no obligation to host offending content. Check it before it goes in (akin to a security checkpoint in real life, I guess) and then let me move on with my life, knowing it couldn't be arbitrarily vended out to x party.


Going on with your life in this situation means police officers have been given copies of the photos that triggered the checkpoint. Do you want that?


Any image that would trigger _for this hashing aspect_ would already trigger _if you uploaded it to iCloud where they currently scan it already_. Literally nothing changes for my life, and it opens up a pathway to encrypting iCloud contents.


Apple's paper talks about decrypting suspect images. It isn't end to end.[1]

[1] https://www.apple.com/child-safety/pdf/CSAM_Detection_Techni...


Feel free to correct me if I'm wrong, but this is a method for decrypting _if it's matching an already known or flagged item_. It's not enabling decrypting arbitrary payloads.

From your link:

>In particular, the server learns the associated payload data for matching images, but learns nothing for non-matching images.

Past this point I'll defer to actual cryptographers (who I'm sure will dissect and write about it), but to me this feels like a decently smart way to go about this.
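The quoted property (server learns the payload only for matching images) can be toy-modeled. To be clear, Apple's actual protocol uses private set intersection and threshold secret sharing, which this sketch does not implement; it just derives the payload's encryption key from the image hash itself, so the server can only open vouchers whose hash is already in its database. All names and the XOR "cipher" here are purely illustrative:

```python
# Grossly simplified sketch of "server learns the payload only on a match".
# Not Apple's protocol -- their real design uses PSI + threshold secret
# sharing. Here the payload key is simply derived from the image hash.
import hashlib

def key_for(image_hash):
    # Derive a payload key from the image's (perceptual) hash.
    return hashlib.sha256(b"payload-key:" + image_hash).digest()

def xor_cipher(key, data):
    # Toy symmetric stream cipher: XOR against a SHA-256-derived keystream.
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

# Client side: encrypt a "safety voucher" payload under the image's hash.
def make_voucher(image_hash, payload):
    return xor_cipher(key_for(image_hash), payload)

# Server side: try every hash in the known-image database; only a match
# yields the payload, non-matching vouchers stay opaque.
def try_open(voucher, known_hashes):
    for h in known_hashes:
        candidate = xor_cipher(key_for(h), voucher)
        if candidate.startswith(b"OK:"):  # toy integrity check
            return candidate
    return None

db = [b"hash-of-known-image"]
flagged = make_voucher(b"hash-of-known-image", b"OK:visual derivative")
benign  = make_voucher(b"hash-of-vacation-photo", b"OK:visual derivative")

print(try_open(flagged, db))  # b'OK:visual derivative' -- payload revealed
print(try_open(benign, db))   # None -- server learns nothing
```

The real scheme is far more careful (the server shouldn't even learn which images didn't match, and decryption requires crossing a match-count threshold), but this is the core asymmetry the paper describes.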


Matching means suspect. It doesn't have to be a true match.

It could be worse. But end to end means the middle has no access. Not some access.


And remember the E2EE is pure speculation at this point.


As long as you're using an iPhone, Apple's got access. To be E2E, the screen still needs to be showing the encrypted values, not the real image


> To be E2E, the screen still needs to be showing the encrypted values, not the real image

No that is literally not the definition of end to end encryption.

End to end encryption means that only the final recipients of data can see what the data is. In this case, it's the user.


Then don't offer "E2EE"


"Feels like a way to actually reach encrypted data all around while still meeting the expectations of lawmakers/regulators"

And isn't that a problem? Encrypted data should be secure, even if lawmakers don't want math to exist.


Your data should be encrypted on Apple's servers and unreadable by them; at least, that's what I want from Apple. They are likely obligated to scan for and detect this kind of abusive content.

This handles that client-side instead of server side, and if you don't use iCloud photos, it doesn't even affect you. If syncing? Sure, decrypt it on device and check it before uploading - it's going to their servers after all.

Don't want to even go near this? Don't use Messages or iCloud, I guess. It's very possible to use iOS/iDevices in a contained manner.


> considering the transparency of it

What transparency? Apple doesn't publish iOS source.


There are always scenarios that one cannot catch. EFF highlights one such.

It sounds like it could be quite common. And it could be an absolute nightmare scenario for the kid who does not have the feature turned on.

> This means that if—for instance—a minor using an iPhone without these features turned on sends a photo to another minor who does have the features enabled, they do not receive a notification that iMessage considers their image to be “explicit” or that the recipient’s parent will be notified. The recipient’s parents will be informed of the content without the sender consenting to their involvement. Additionally, once sent or received, the “sexually explicit image” cannot be deleted from the under-13 user’s device.


Now it's "before upload". In 1-2 years it will be "scan all local photos", in the name of "making the world a better place". It's such a small technical step for Apple to change this scanning behaviour in the future and scan even offline photos. All the necessary software will already be on all Apple i-devices by then.

Everybody is a potential criminal with photos on their phone unless they prove otherwise by scanning. This is the future we are heading towards. Doing the scanning on device is actually the weakest point of their implementation, IMHO.


This seems even worse. If the images are only scanned before upload to iCloud then Apple has opened a backdoor that doesn’t even give them any new capability. If I am understanding this right an iPhone can still be used to distribute CSAM as long as the user is logged out of iCloud? So it’s an overreach and ineffective?


The point of encrypted data is not to be “reached.”


> Expansion of the tech would be something I'd be more concerned about

Yeah, and that’s precisely what will happen. It always starts with child porn, then they move on to “extremist content”, of which the term expands to capture more things on a daily basis. Hope you didn’t save that “sad Pepe” meme on your phone.


It runs on my device and uses my CPU, battery time and my network bandwidth (to download/upload the hashes and other necessary artifacts).

I'd be fine with them scanning stuff I uploaded to them with their own computers, because I don't have any real expectation of privacy from huge corporations.


I feel like this argument really doesn't add much to the discussion.

It runs only on a subset of situations, as previously noted - and I would be _shocked_ if this used more battery than half the crap running on devices today.

Do you complain that Apple runs code to find moments in photos to present to you periodically...?


What is the point of running this on device? The issue is that Apple has built and is now shipping what is essentially home-phoning malware that can EASILY be required, with a court order, to do something entirely different from what it is designed to do.

They're opening themselves to being forced by 3 letter agencies around the world to do some really fucked up shit to their users.

Apple should never have designed something that allows for fingerprinting of files & users for stuff stored on their own device.


Your entire argument could be applied to iOS itself. ;P


Not really; iOS didn't previously have the capability of scanning and reporting files based on a database received from the FBI/other agencies.

There is a big difference when this has been implemented & deployed to devices. Fighting questionable subpoenas and stuff becomes easier when you don't have the capability.


Given that iOS is totally proprietary and runs on a heavily locked-down device, making even inspecting the binary blobs complicated, how can anyone be sure what it is doing? Not to mention that any such capability missing now is just one upgrade away from being added, with no ability for the user to inspect and reject it.


No one is. Just that it's harder to develop tech like this without it leaking.

So Apple is just outright saying it is.


> I feel like this argument really doesn't add much to the discussion.

Oh, I guess I should have just regurgitated the Apple press release like the gp?

> It runs only on a subset of situations...

For now. But how does that fix the problem of them using my device and my network bandwidth?

> I would be _shocked_ if this used more battery than half the crap running on devices today.

You think you'll be able to see how much it uses?

> Do you complain that Apple runs code to find moments in photos to present to you periodically...?

Yes. I hate that feature, it's a waste of my resources. I'll reminisce when I choose to, I don't need some garbage bot to troll my stuff for memories. I probably already have it disabled, or at least the notifications of it.


Apple already uses ‘your’ CPU, battery time and network bandwidth for its Find My / AirTag product.


You can turn it off.


Is it just me, or is the list of "things to turn off" getting ridiculously long?


It’s not just you. It’s fucking enraging at this point. I feel like I woke up one day, got a good look at Finder and the various iCloud/background-service junk, and realized it is to me what the bloatware on 2010 PCs (and presumably today's) was/is.

I just want general purpose computation equipment @ reasonably modern specifications - albeit largely devoid of rootUser-privileged advertisement stacks (included libraries etc).

I mean, what the fuck, is that so fucking hard? This is hellworld, given the obviously plausible counterfactual where we just… don't… do this


> I just want general purpose computation equipment @ reasonably modern specifications

So ... LineageOS (without GApps) on Pixel or OnePlus hardware? (Purchased directly from the manufacturer, not a carrier!)


I think that is intentional. When they really want people to do something they make it opt-out. When they don’t they make it opt-in.


As many, many people have pointed out, building a mechanism to scan things client-side is something which could easily be extended to encrypted content, and perhaps, is intended to be extended at a moment's notice to encrypted content, if they see an opportunity to do so.

It's like having hundreds of nukes ready for launch, as opposed to having the first launch being a year away.

If they wanted to "do it as all major companies do", then they could have done it on the server-side, and there wouldn't have been a debate about it at all, although it is still extremely questionable, as far as privacy is concerned.



