Researchers show that Apple’s CSAM scan can be easily fooled

A team of researchers from Imperial College London has presented a simple method for evading detection by image content analysis mechanisms, such as Apple's CSAM detection system.

CSAM (Child Sexual Abuse Material) detection was a controversial proposal put forward by Apple earlier this year. The rollout was put on hold in September, following a strong backlash from customers, advocacy groups and researchers.

Apple has not abandoned CSAM detection but has instead postponed its rollout to 2022, promising further rounds of improvements and a more transparent approach to its development.

The main idea is to compare perceptual hashes (image IDs) of images shared privately between iOS users against a hash database provided by NCMEC and other child safety organizations.
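In rough terms, the matching step boils down to comparing a compact fingerprint of each photo against a blocklist of known hashes. The sketch below illustrates this with the open-source imagehash library's pHash standing in for Apple's proprietary NeuralHash; the blocklist, the example hash and the Hamming-distance threshold are all made-up illustration values, and Apple's actual protocol wraps the comparison in cryptography (private set intersection) so non-matching hashes are never revealed.

```python
# Illustrative sketch only: the open-source `imagehash` pHash stands in for
# Apple's proprietary NeuralHash, and a plain in-memory set stands in for the
# NCMEC database. The Hamming-distance threshold is a made-up parameter, not
# Apple's actual matching rule.
from PIL import Image
import imagehash

# Hypothetical blocklist of known hashes (hex strings). Real deployments
# would use blinded/encrypted hashes, never raw values on the device.
KNOWN_HASHES = {imagehash.hex_to_hash("d1d1d1d1d1d1d1d1")}

MATCH_THRESHOLD = 4  # max Hamming distance (in bits) to count as a match

def is_flagged(path: str) -> bool:
    """Return True if the image's perceptual hash is close to a known hash."""
    h = imagehash.phash(Image.open(path))  # 64-bit DCT-based hash
    return any(h - known <= MATCH_THRESHOLD for known in KNOWN_HASHES)

if __name__ == "__main__":
    print(is_flagged("photo.jpg"))  # placeholder file name
```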

If a match is found, human reviewers at Apple examine the content and alert the authorities to the distribution of child abuse material, all without compromising the privacy of those who share legal images (non-matches).

In theory, this sounds like a good system to prevent the spread of harmful material, but in practice it inevitably opens up a “Pandora’s box” for mass surveillance.

However, the question posed by researchers at Imperial College London is: Would such a detection system work reliably in the first place?

Trick the algorithm

Research presented at the recent USENIX Security Symposium by the UK researchers shows that neither Apple's CSAM detection nor any system like it would effectively detect illegal material.

As the researchers explain, it’s possible to trick content detection algorithms 99.9% of the time without visually changing the images.

The trick is to apply a special hash filter to the images, making them appear different to the detection algorithm even though the processed result looks identical to the human eye.
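The filter itself is an optimized, visually imperceptible perturbation and is not reproduced here; the sketch below only shows how one might verify that such a filter succeeded, by checking that the perceptual hash of the filtered image has drifted past a matcher's distance threshold while the pixels barely change. The imagehash pHash again stands in for the targeted algorithm, the file names are placeholders, and the threshold is hypothetical.

```python
# Sketch of an evasion check, assuming a paper-style filter has already
# produced `filtered.png` from `original.png`. The filter itself (an optimized,
# visually imperceptible perturbation) is not reproduced here.
from PIL import Image
import imagehash
import numpy as np

MATCH_THRESHOLD = 4  # hypothetical matcher threshold, in bits

orig = Image.open("original.png").convert("RGB")
filt = Image.open("filtered.png").convert("RGB")

# 1) The perceptual hashes should now differ by more than the threshold ...
hash_distance = imagehash.phash(orig) - imagehash.phash(filt)
evades_matcher = hash_distance > MATCH_THRESHOLD

# 2) ... while the pixel content stays essentially identical to the human eye.
pixel_delta = np.abs(np.asarray(orig, dtype=int) - np.asarray(filt, dtype=int))
print(f"hash distance: {hash_distance} bits, evades matcher: {evades_matcher}")
print(f"max per-pixel change: {pixel_delta.max()} / 255")
```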

The paper presents two white-box attacks and one black-box attack against perceptual hashing algorithms based on the discrete cosine transform (DCT), successfully altering an image's unique signature on the device and helping it slip under the radar.

Applying the filter gives the images a new hash identity without modifying their content; the images before and after filtering are visually identical.
Source: Imperial College London
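For context, DCT-based perceptual hashes such as pHash derive an image's signature from the low-frequency coefficients of a discrete cosine transform, which is why a carefully crafted low-amplitude perturbation can flip hash bits without visibly altering the picture. The following is a deliberately simplified pHash-style implementation, not the exact algorithm attacked in the paper or proposed by Apple.

```python
# Minimal pHash-style sketch: a simplified DCT-based perceptual hash,
# not the exact algorithm targeted in the paper or deployed by Apple.
import numpy as np
from PIL import Image
from scipy.fftpack import dct

def dct_perceptual_hash(path: str, hash_size: int = 8) -> np.ndarray:
    """Return a (hash_size * hash_size)-bit hash as a boolean array."""
    # Grayscale and downscale: perceptual hashes deliberately discard detail.
    img = Image.open(path).convert("L").resize((32, 32), Image.LANCZOS)
    pixels = np.asarray(img, dtype=np.float64)

    # 2-D DCT, then keep only the top-left (low-frequency) block.
    coeffs = dct(dct(pixels, axis=0, norm="ortho"), axis=1, norm="ortho")
    low_freq = coeffs[:hash_size, :hash_size]

    # Each bit records whether a coefficient is above the block's median.
    return (low_freq > np.median(low_freq)).flatten()

def hamming(a: np.ndarray, b: np.ndarray) -> int:
    return int(np.count_nonzero(a != b))

if __name__ == "__main__":
    h1 = dct_perceptual_hash("original.png")   # placeholder file names
    h2 = dct_perceptual_hash("filtered.png")
    print(f"{hamming(h1, h2)} of {h1.size} bits differ")
```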

Countermeasures and Complications

A possible countermeasure to the evasion methods presented in the article would be to use a larger detection threshold, which would lead to an increase in false positives.

Another approach would be to report users only after the number of image ID matches reaches a certain threshold, but this introduces probability complications.
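One way to see those complications is to model a user's photo library as independent trials with some per-image false-match probability and ask how likely an innocent user is to cross a reporting threshold of k matches. All numbers in the toy calculation below are illustrative assumptions, not figures from Apple or the paper; the binomial tail gives the chance of a wrongful report, and raising the threshold lowers that chance at the cost of letting more evasions through.

```python
# Toy model of the per-user reporting threshold. All numbers are illustrative
# assumptions, not values from Apple or the paper.
from math import comb

def prob_wrongly_flagged(n_images: int, p_false_match: float, k_threshold: int) -> float:
    """P(at least k_threshold false matches among n_images independent photos)."""
    below_threshold = sum(
        comb(n_images, i) * p_false_match**i * (1 - p_false_match) ** (n_images - i)
        for i in range(k_threshold)
    )
    return 1.0 - below_threshold

if __name__ == "__main__":
    # e.g. 10,000 photos, a 1-in-a-million per-image false match, threshold of 30
    print(prob_wrongly_flagged(10_000, 1e-6, 30))
```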

Applying an additional image transformation before calculating the perceptual hash of the image is also unlikely to make the detections more reliable.

Increasing the hash size from 64 to 256 bits would help in some cases, but it raises privacy concerns, because longer hashes encode more information about the image.
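In pHash-style schemes, the hash length is simply the square of the retained DCT block's side, so moving from 64 to 256 bits means keeping a 16x16 block of coefficients instead of 8x8. As a quick, hypothetical illustration with the imagehash library, whose hash_size parameter is that side length:

```python
# 64-bit vs 256-bit perceptual hashes with the open-source `imagehash` library;
# hash_size is the side length of the retained DCT coefficient block.
from PIL import Image
import imagehash

img = Image.open("photo.jpg")              # placeholder file name
h64 = imagehash.phash(img, hash_size=8)    #  8 x  8 =  64 bits
h256 = imagehash.phash(img, hash_size=16)  # 16 x 16 = 256 bits
print(h64.hash.size, h256.hash.size)       # 64, 256
```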

Overall, the research demonstrates that current perceptual hashing algorithms are not as robust as they would need to be for adoption in strategies aimed at curbing the distribution of illegal content.

“Our results cast serious doubt on the robustness to black-box attacks of hash-based client-side perceptual analysis as currently proposed. The detection thresholds required to make the attack more difficult are likely to be very high, possibly requiring over a billion images to be wrongly reported daily, raising serious privacy concerns,” the paper concludes.

This is an important finding at a time when governments are considering invasive hash-based surveillance mechanisms.

The paper shows that, for illegal-image detection systems to work reliably in their current form, people would have to give up their privacy, and there is currently no technical way around this.

