Apple's CSAM Scanning Is Causing People To Lose Their Minds


The announcement that Apple will be scanning iCloud storage for images which match known child abuse material seems to have caused a whole section of the population to lose their minds. Or at least to take leave of common sense temporarily.

Let's be very clear about what Apple is and isn't doing here. iMessage will attempt to prevent children on Family accounts from viewing or sending images which might be harmful to them. If they choose to ignore the warning, the family organiser will be notified and can take whatever action might be appropriate.

This is emphatically a good thing. Teens and pre-teens do not always make the right choices. When it comes to sending images of themselves, that's a mistake that can haunt them for the rest of their lives and have severely detrimental effects on their long-term mental health. I can see no parent doing anything but welcome this change. Children might find it an invasion of their rights, but as of now Apple is not actually preventing anything from happening, only warning and notifying.

iCloud scanning is a different matter entirely. Here Apple will take a set of known child abuse images from child protection agencies, hash them, and then run a process on macOS and iOS devices which will search the connected iCloud storage for matches. If any are found, a manual review of the match will be undertaken and anything of concern will be reported to the appropriate authorities.
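For anyone curious about the basic shape of that mechanism, here is a minimal sketch in Swift. It is an illustration of hash-and-threshold matching only, not Apple's implementation: the names are hypothetical, SHA-256 stands in for Apple's perceptual NeuralHash, and a plain set lookup stands in for the blinded private set intersection Apple describes.

    import Foundation
    import CryptoKit

    // Hashes derived from known CSAM images supplied by child-protection agencies.
    // In the real system this database ships inside the OS in blinded form.
    let knownHashes: Set<String> = []

    // Number of matches required before anything is escalated
    // (Apple has cited a threshold of 30).
    let reviewThreshold = 30

    // Stand-in for the on-device image hash (the real system uses a
    // perceptual hash, not a cryptographic one).
    func imageHash(_ data: Data) -> String {
        SHA256.hash(data: data).map { String(format: "%02x", $0) }.joined()
    }

    // Count matches across the photos queued for iCloud; only when the
    // threshold is crossed does anything go forward for manual review.
    func shouldEscalateForReview(_ photos: [Data]) -> Bool {
        let matches = photos.filter { knownHashes.contains(imageHash($0)) }.count
        return matches >= reviewThreshold
    }

The point of the design is that a single match, on its own, triggers nothing; the real system layers blinding and threshold secret sharing on top of this basic shape so Apple cannot even see which images matched until the threshold is reached.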

Again, it is hard to see how this is anything but a good thing. Apple's scanning is undertaken on the user's device, and only images which have already been identified as abusive will result in action. This is not a process of looking at the user's images and trying to decide whether they depict abusive practices, so the chances of a false positive are low and the manual review process should catch any that do occur.

The only people with anything to be concerned about are the very people we, as a community, would want to identify and protect children from.

Now, there is an additional concern which has been raised: that governments could intervene to demand that Apple update the algorithms to scan for images which those governments find of interest for political reasons rather than child protection. Apple could be asked to do this, and it could also refuse. If it were willing to accede to demands of this nature simply because it has the technology to scan for child abuse imagery, chances are it would already be doing plenty more, and worse, at those governments' request anyway.

So, the clamour against Apple can be safely ignored. CSAM monitoring already goes on in some form on every cloud storage service. Apple's way of doing it is new and a lot smarter than some of the blunt instruments which have been used elsewhere, so it really should be applauded.

Comments

Anonymous said…
"This is not a process of looking at the user's images and trying to decide if they of abusive practices, so the chances of a false positive are low and the manual review process should address any of those."

Given that you acknowledge there's a manual review process that can constitute a stranger deciding -- without my knowledge -- whether a NeuralHash-collision image of my daughter frolicking in the rain is sexually gratifying or not, this seems both contradictory and ironic.

Is it your intent to be respectful or dismissive of my concerns as a parent? Because, just like with Apple (from whose cloud services I will withdraw) I'm getting mixed messages from this...
elbowz said…
Hi, thanks for your comment.

I think you might be overstating the chances of false positives. The deliberately engineered SHA-1 hash collision took 6,500 years of computer time to achieve. A false positive on an image you had taken yourself would be unlikely to an almost unthinkable degree. And even if you were to manage it for one image, Apple takes no action until a threshold has been reached, one large enough to ensure that the effects of false positives are negated. At that point manual review validates the findings. There is no review of your pictures to decide whether they are in any way gratifying.

Other services have been doing this for years. Microsoft developed PhotoDNA back in 2009 and has been applying it on its servers ever since, as well as making it available to other parties. Google has its own offering.

If you aren't happy with the process, it's absolutely right that you should withdraw from Apple's services. As a parent, though, I believe the greater benefit far outweighs any perceived loss of privacy, and I will continue to use Apple's services (along with Microsoft's and Google's).

Anonymous said…
Thanks, in turn, for your reply. I don't believe I'm overestimating the odds of my family's personal exposure, however:

To your example of SHA-1, 6,500 years of computer time is a significant number of collisions per year across a billion devices. Without additional detail on the thresholding mechanism, it is unclear what risk is posed by self-similar sets of images. (Perhaps that has been addressed in technical white papers, which admittedly would take me a long time to parse.)

Your assessment that "there is no review of your pictures" is at odds with Apple's own statements; e.g., "These visual derivatives are then examined by human reviewers who confirm that they are CSAM material..." and "All positive matches must be visually confirmed by Apple as containing CSAM before Apple will disable the account and file a report with the child safety organization."

You might object that a derivative isn't the image itself... but it is intended to be evaluated as to whether or not it is CSAM material. As for the "everyone else is doing it" argument, I haven't invested trust in those systems and have no interest in their behavior.

Ultimately, we agree that it is absolutely right for parents to evaluate the cost-benefit of the system and come to our own conclusions (although our own participation ostensibly contributes no meaningful benefit). My assertion is simply that there is a small but real existential threat to anyone who objects to strangers evaluating personal photos in this way without any parental notification -- and that "leave of common sense" (or "screeching voice of the minority") rhetoric is misplaced and unhelpful.
elbowz said…
Yes, you are right, 6,500 years divided across one billion users does amount to a significant chance of collision; although I believe that was the time taken to craft an image which would replicate a specific hash. The chance of one of your own images randomly matching a particular CSAM hash would be orders of magnitude lower - although I also accept that the larger the database of images, the greater the probability.

Apple has confirmed that manual review of the derivatives will only occur after 30 CSAM matches on an account, which swings the balance firmly the other way.
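To put a rough number on that, here's a back-of-envelope sketch in Swift. The per-image false-match rate is an assumed figure of my own, not anything Apple has published for NeuralHash; the point is simply how hard it is to reach a 30-match threshold by bad luck alone.

    import Foundation

    let p = 1e-6            // assumed per-image false-match probability (made-up figure)
    let n = 20_000          // photos in a hypothetical library
    let threshold = 30      // matches required before manual review

    // Log of the binomial probability mass function, computed in log space
    // to avoid floating-point underflow.
    func logBinomialPMF(k: Int, n: Int, p: Double) -> Double {
        let logChoose = lgamma(Double(n) + 1) - lgamma(Double(k) + 1) - lgamma(Double(n - k) + 1)
        return logChoose + Double(k) * log(p) + Double(n - k) * log(1 - p)
    }

    // Probability of at least `threshold` false matches, summed over the
    // upper tail (terms beyond k = 200 are negligible here).
    let tail = (threshold...200).reduce(0.0) { $0 + exp(logBinomialPMF(k: $1, n: n, p: p)) }
    print("Chance of reaching the threshold by chance alone: \(tail)")

Even with that deliberately pessimistic false-match rate, the result is vanishingly small.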

As an aside, I've read that scanning of cloud-stored images will be mandated by the US and UK governments (and possibly others too), and that this is Apple's solution for complying while keeping images stored in iCloud encrypted, in the manner which most closely fits its privacy stance.

Whilst you have a reasoned and cohesive argument for why you don't support this (and have also considered the option to opt out), much of what I saw immediately after Apple's announcement fell far below that mark - instant hot takes without any understanding of the issue - hence the headline.