Chairman Meow
  • 0 Posts
  • 10 Comments
Joined 1Y ago
Cake day: Aug 16, 2023


Both WhatsApp and Signal show the same number of chats to me (9 for both). WhatsApp does show a small sliver of a tenth chat, but it's not properly visible. There is a compact mode for the navigation bar in Signal, which helps a bit here.

From what I can see, Signal has slightly more whitespace between chats and uses the full height of each row for the chat (e.g. the same height as the profile picture), whereas WhatsApp adds whitespace above and below, pushing the name and message preview together.

Within chats the sizes seem about the same to me, but Signal's coloured message bubbles might make it appear a bit more bloated. Not sure.


The PR had some issues: files were pushed that shouldn't have been, it added refactors that should have been in separate PRs, etc…

Though the main reason is that Signal doesn’t consider this issue a part of their threat model.


Aaand here’s your misunderstanding.

All messages detected by whatever algorithm/AI the provider implemented are sent to the authorities. The proposal specifically says that even if there is some doubt, the messages should be sent. Family photo or CSAM? Send it. Is it a raunchy text to a partner or might one of them be underage? Not 100% sure? Send it. The proposal is very explicit about this.

Providers are additionally required to review a subset of the messages that get sent over, to tune for false positives. They do not do a manual review as an additional check before the messages are sent to the authorities.
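
To make that flow concrete, here's a toy sketch of the pipeline as I read the proposal. The classifier score, the threshold value, and the function names are all my own illustration, not anything taken from the actual text or any real provider:

```python
from dataclasses import dataclass

@dataclass
class ScanResult:
    message_id: str
    score: float  # hypothetical classifier confidence that content is CSAM/grooming

# "Even if there is some doubt, send it" -> a deliberately low bar, not near-certainty.
REPORT_THRESHOLD = 0.5

def scan_and_report(results: list[ScanResult]) -> list[str]:
    """Forward every flagged message, including uncertain ones."""
    reported = [r.message_id for r in results if r.score >= REPORT_THRESHOLD]
    # The provider's human review happens on a sample of what was already
    # reported, to tune false positives -- not as a gate before reporting.
    return reported

# A borderline family photo (0.55) gets reported right alongside a near-certain match (0.98).
print(scan_and_report([ScanResult("msg-1", 0.98), ScanResult("msg-2", 0.55)]))
```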

If I send a letter to someone, the law forbids anyone from opening the letter if they’re not the intended recipient. E2E encryption ensures the same for digital communication. It’s why I know that Zuckerberg can’t read my messages, and neither can the people from Signal (metadata analysis is a different thing of course). But with this chat control proposal, suddenly they, as well as the authorities, would be able to read a part of the messages. This is why it’s an unacceptable breach of privacy.
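
To illustrate what that guarantee looks like in practice, here's a minimal sketch of public-key E2E encryption using PyNaCl. The library choice and the names are mine for illustration; Signal's actual protocol is far more involved (ratcheting, forward secrecy, etc.), but the core property is the same: only the two keyholders can read the message.

```python
from nacl.public import PrivateKey, Box

# Each party generates a keypair; only the public halves are ever exchanged.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts for Bob using her private key and Bob's public key.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"hi Bob")

# The server relaying `ciphertext` holds neither private key,
# so it sees only random-looking bytes.

# Bob decrypts with his private key and Alice's public key.
plaintext = Box(bob_key, alice_key.public_key).decrypt(ciphertext)
assert plaintext == b"hi Bob"
```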

Thankfully this nonsensical proposal didn’t get a majority.


https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=COM:2022:209:FIN

Here’s the text. There are no limits on which messages should be scanned anywhere in this text. Even worse: to address false positives, point 28 specifies that each provider should have human oversight to check if what the system finds is indeed CSAM/grooming. So it’s not only the authorities reading your messages, but Meta/Google/etc… as well.

You might be referring to when the EU can issue a detection order. That is not what is meant by the continued scanning of messages, which providers are always required to do, as outlined in the text. So either you are confused, or you're a liar.

Cite directly from the text where it imposes limits on the automated scanning of messages. I’ll wait.


The point is that it should never, under any circumstances, monitor and eavesdrop on private chats. It's an unacceptable breach of privacy.

Also, please explain what “specific circumstances” you are referring to. The current proposal doesn’t limit the scanning of messages in any way whatsoever.


It does require invasive oversight. If I send a picture of my kid to my wife, I don't want some AI algorithm to have a brainfart and instead upload the picture to Europol for strangers to see, putting me on some list I don't belong on.

People sharing CSAM are unlikely to use apps that force these scans anyway.


The financial sector offers an order of magnitude more services than just “transactions”. It's a stupid comparison.


I tried this but can’t reproduce your results. AdGuard doesn’t seem to be sending any weird DNS or tracking requests on my phone.

I’m fairly certain you’re seeing some kind of false positive, but I don’t quite know what’s going on exactly.
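
For reference, this is roughly the kind of check I mean: watch the DNS queries leaving the device while the app is open and look for anything unexpected. Here's a minimal sketch using scapy, which is just my choice (tcpdump or Wireshark work equally well); it assumes capture privileges and a machine that can actually see the phone's traffic, e.g. the router or a laptop acting as a Wi-Fi hotspot.

```python
# Print the queried domain name of each DNS packet as it goes by.
from scapy.all import sniff
from scapy.layers.dns import DNSQR

def log_query(pkt):
    if pkt.haslayer(DNSQR):
        print(pkt[DNSQR].qname.decode(errors="replace"))

# Capture 50 DNS packets on the default interface (requires root/admin).
sniff(filter="udp port 53", prn=log_query, count=50)
```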