Thank you 😊
I actually thought about this. Adapting the same approach to other kinds of content like images, audio, or video would be a game changer!!
Imagine uploading videos to YouTube that only viewers with a key would be able to understand!
But it is a challenge, as it might require advanced knowledge of image and audio processing.
But why? Why do you people hate AI so much?
I don’t think it’s a question of “hating” AI or not. Personally, I have nothing against it.
As always with Privacy, it’s a matter of choice: when I publish something online publicly, I would like the choice of whether or not this content is going to be indexed or used to train models.
It’s a dual dilemma. I want to benefit from the hosting and visibility of big platforms (Reddit, LinkedIn, Twitter, etc.), but I don’t want them doing literally anything with my content because, lost somewhere in their T&Cs, it says “we own your content, we do whatever tf we want with it”.
But in general, if Google can’t read it, few eyeballs will ever see it.
You bring up a good point. The Internet is full of spider bots that crawl the web to index it and improve search results (e.g. Google). In my case, I don’t want any comment I post here or on big platforms like Reddit, Twitter, or LinkedIn to be indexed. But I still want to be part of the conversation. At the very least, I would like the choice of whether or not any text I publish online is indexed.
Exactly!
For example, here’s a Medium article with encrypted content: https://redakt.org/demo/
You’re right. “Securing” is a bad word. “Obfuscating” might be more appropriate. I actually had the same feedback from Jonah of Privacy Guides.
I use AES encryption with a single public key at the moment. That way, if I want to give users the option to encrypt with a custom key, I won’t have to change the encryption method.
EDIT: Editing the title of this thread: ~~Protect~~
You have a point. Or even malicious links!
We have to be careful with the decrypted output. Redakt is an open-source and collaborative project, just saying… 😜
Slowing them down and preventing them from scaling is actually not that bad. We are in the context of public content accessible to anyone, so by definition it cannot be bulletproof.
Online Privacy becomes less binary (public vs. private) when the Internet contains content encrypted with various methods, making it challenging to collect data efficiently and at scale.
Thank you so much for your comment though <3
You are absolutely right! Using a single public encryption key cannot be considered secure. But it is still better than having your content in the clear.
I intend to add more encryption options (shareable custom key, PGP) so users can choose the level of encryption they want for their public content. Of course, future versions will still be able to decrypt legacy encrypted content.
In a way, it makes online Privacy less binary:
Instead of having an Internet where we choose to have our content either “public” (in the clear) or “private” (E2E encrypted), we would have an Internet full of content encrypted with heterogeneous methods (single key, custom key, key pairs). It would be impossible to scale data collection under those conditions!
Captcha was just an example :-)
What I’m trying to say is that any small change we add to the extension will have little (or no) effect on real users, but will force the scrapers to adapt. That could require significant human and machine resources to collect data at a massive scale.
EDIT: And thank you for your feedback <3
You’re right, app traffic is something we’ll need to crack. But as a first step, covering any traffic that goes through a web browser is already significant.