Apple has announced that it will start scanning your personal files on your devices for "Child Sexual Abuse Material", as identified by a perceptual hash (Apple's "NeuralHash"). It will apparently match those hashes against a database of hashes of "known" bad content distributed by some sort of well-meaning activist organization. A match will apparently trigger an automatic disabling of one's Apple account, just for starters.

Not creepy at all, right? Hey, don't worry, it's not like political activists are trying to stop the spread of right-wing hate memes via the exact same image-hash-matching technology. Oh wait.

But it's okay, you're safe, you're a conformist, you would never snicker at that icky frog meme. Yet, yet, yet, don't be too comfortable. You might have enemies. Enemies who want you to lose your accounts, online presence, or even livelihood, and who are too smart to simply send you one of these Verboten bits of digital horror directly.

What if an enemy were cunning enough to mount a hash-collision attack (strictly speaking, a second-preimage attack): crafting a brand-new file whose hash matches one already on these designated bad-bad-bad lists? Against a perceptual hash, it just takes some time & possibly a brief rental of cloud computing resources. They could create an innocent-looking get-rich-quick document, cute kitten video, or social-issue awareness email. Something so good that you'd be tempted to save & forward it.
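To see why this is plausible, consider how much weaker a perceptual-hash match is than a cryptographic one. Here's a toy sketch of an "average hash" (the pixel values and images are invented for illustration, and real systems like NeuralHash are far more sophisticated): any two images with the same brighter-than-average pattern hash identically, no matter how different they look.

```python
# Toy "average hash" (aHash), a simple perceptual hash: downscale an image
# to a small grayscale grid, then record which pixels are above the mean.
# Two visually different images can trivially share the same hash.

def average_hash(pixels):
    """pixels: flat list of grayscale values (e.g. a downscaled 8x8 image).
    Returns a 64-character bit string: '1' where pixel > mean, else '0'."""
    mean = sum(pixels) / len(pixels)
    return ''.join('1' if p > mean else '0' for p in pixels)

# Two different 8x8 "images": one dark checkerboard, one bright checkerboard.
# The pixel values differ everywhere, but the above/below-mean pattern matches.
img_a = [10 if (i + i // 8) % 2 else 200 for i in range(64)]
img_b = [90 if (i + i // 8) % 2 else 250 for i in range(64)]

assert img_a != img_b
assert average_hash(img_a) == average_hash(img_b)  # same hash, different content
```

An attacker targeting a real perceptual hash has more work to do than this, but the underlying point stands: perceptual hashes are designed to tolerate changes, which is exactly the property that makes forged matches feasible.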

A few days later, Apple's (and other Big Tech's) surveillance software detects it as a hit and automatically accuses you of abusing felines or children or whatever. Accounts shut down, scarlet letter issued, literal police reporting can all come next. Heck, your friends and colleagues may also get flagged by that kitten video you shared, and be rather miffed at you. Depending on whether you can get through to Big Tech's phone support, you may or may not be able to clear your name.

Good luck! What could possibly go wrong!