• 1 Post
• 16 Comments
Cake day: Nov 16, 2023

You’re right, app traffic is something we’ll need to crack. But as a first step, any traffic going through a web browser is already significant.


Thank you 😊

I actually thought about this. Adapting the same approach to other kinds of content like images, audio, or video would be a game changer!!

Imagine uploading videos to Youtube that only viewers with a key would be able to understand!

But it is a challenge, as it might require advanced knowledge of image and audio processing.


What do you mean by non private platforms?

In this POC, you can only encrypt content using Redakt’s public key. That way, you are guaranteed to be able to see the content, since the key is already bundled with the extension.

I intend to add the option to encrypt with a custom shareable key in v2.


Image injection is something I will need to stress-test.


But why? Why do you people hate AI so much?

I don’t think it’s a question of “hating” AI or not. Personally, I have nothing against it.

As always with privacy, it’s a matter of choice: when I publish something online publicly, I would like to have the choice of whether or not this content is going to be indexed or used to train models.

It’s a dual dilemma. I want to benefit from the hosting and visibility of big platforms (Reddit, LinkedIn, Twitter, etc.), but I don’t want them doing literally anything with my content just because, buried somewhere in their T&Cs, it says “we own your content, we do whatever tf we want with it”.



But on topic: I see the same problem as with link shorteners. One single service or extension disappears and all good content or links are gone.

Not exactly. The extension is open source so even if the official extension is gone, you would still be able to decrypt previously “redakted” content.


but in general, if Google can’t read it, few eyeballs will ever see it.

You bring up a good point. The Internet is full of spider bots that crawl the web to index it and improve search results (e.g., Google). In my case, I don’t want any comment I post here, or on big platforms like Reddit, Twitter, or LinkedIn, to be indexed. But I still want to be part of the conversation. At the very least, I would like to have the choice of whether or not any text I publish online is indexed.


Exactly!

For example, here’s a Medium article with encrypted content: https://redakt.org/demo/


You’re right. “Securing” is a bad word; “obfuscating” might be more appropriate. I actually had the same feedback from Jonah of Privacy Guides.

I use AES encryption with a single public key at the moment. That way, if I later give users the option to encrypt with a custom key, I won’t have to change the encryption method.

EDIT: Editing the title of this thread: ~~Protect~~
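For the curious, here is a minimal Python sketch of how a single-shared-key AES setup can later accommodate custom keys without changing the cipher. It assumes the OpenSSL `Salted__` convention (the `U2FsdGVkX1` prefix visible in redakted strings decodes to `Salted__`, the format CryptoJS-style AES output uses); whether Redakt actually derives keys this way is an assumption, and the passphrase below is a hypothetical placeholder:

```python
import hashlib
import os

def evp_bytes_to_key(passphrase: bytes, salt: bytes,
                     key_len: int = 32, iv_len: int = 16) -> tuple[bytes, bytes]:
    """OpenSSL-style key derivation (EVP_BytesToKey with MD5).

    Repeatedly hashes passphrase + salt until there are enough bytes
    for an AES-256 key and a 16-byte CBC IV.
    """
    derived = b""
    block = b""
    while len(derived) < key_len + iv_len:
        block = hashlib.md5(block + passphrase + salt).digest()
        derived += block
    return derived[:key_len], derived[key_len:key_len + iv_len]

# The "public" default passphrase shipped with the extension and a
# user-chosen custom passphrase go through the exact same derivation:
# only the passphrase changes, never the encryption method.
DEFAULT_PASSPHRASE = b"redakt-public-key"  # hypothetical placeholder
salt = os.urandom(8)

key, iv = evp_bytes_to_key(DEFAULT_PASSPHRASE, salt)
```

Swapping in a custom key is then just a different `passphrase` argument, which matches the claim that the encryption method does not have to change.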


You have a point. Or even malicious links!

We have to be careful with the decrypted output. Redakt is an open source and collaborative project, just saying… 😜


Slowing them down and preventing them from scaling is actually not that bad. We are in the context of public content accessible to anyone, so by definition it cannot be bulletproof.

Online privacy becomes less binary (public vs. private) when the Internet contains content encrypted with various methods, making it challenging to collect data efficiently and at scale.

Thank you so much for your comment though <3


I don’t think AI is bad as a whole. At least I would like to choose if the content I post online can be used (or not) to train models.


You are absolutely right! Using a single public encryption key cannot be considered secure. But it is still better than having your content in the clear.

I intend to add more encryption options (shareable custom key, PGP), so that users can choose the level of encryption they want for their public content. Of course, future versions will still be able to decrypt legacy encrypted content.

In a way, it makes online Privacy less binary:

Instead of having an Internet where we choose to have our content either “public” (in the clear) or “private” (E2E encrypted), we have an Internet full of content encrypted with heterogeneous methods (single key, custom key, key pairs). It would be impossible to scale data collection at that rate!


Captcha was just an example :-)

What I’m trying to say is that any small change we add to the extension will have little (or no) effect on real users, but will force the scrapers to adapt. That might require significant human and machine resources to keep collecting data at a massive scale.

EDIT: And thank you for your feedback <3


r3d4kt-U2FsdGVkX1/lGJZ5fHhIJPQ8w7fdKIrvJKGa4C6hVzgxa99BNXMr7LQFL9Rur05EFVITe2pREZaianyq1F5k4dQEovbUKXWwjoj7R2ZXmu3z836vItVgTHh/Wen4p0pp&&&
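An aside on the payload above: the body after `r3d4kt-` looks like base64 in the OpenSSL salted layout, since its first bytes decode to `Salted__`, the magic that CryptoJS-style AES output carries. A small Python sketch that unpacks it; the `r3d4kt-`/`&&&` framing is inferred from this one sample, not from any published spec:

```python
import base64

# The redakted comment from this thread, verbatim.
token = ("r3d4kt-U2FsdGVkX1/lGJZ5fHhIJPQ8w7fdKIrvJKGa4C6hVzgxa99BNXMr7LQF"
         "L9Rur05EFVITe2pREZaianyq1F5k4dQEovbUKXWwjoj7R2ZXmu3z836vItVgTHh/Wen4p0pp&&&")

# Strip the framing markers (inferred from the sample, not a spec).
payload = token.removeprefix("r3d4kt-").removesuffix("&&&")
raw = base64.b64decode(payload)

# OpenSSL salted layout: 8-byte magic, 8-byte salt, then the ciphertext.
magic, salt, ciphertext = raw[:8], raw[8:16], raw[16:]
print(magic)                 # b'Salted__'
print(len(ciphertext) % 16)  # 0 -> consistent with the AES block size
```

The salt changing per message is why the same plaintext produces a different redakted string each time.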


Hey everyone, for the past few months I have been working on this project and I'd love to have your feedback on it.

As we all know, any time we publish something publicly online (on Reddit, Twitter, or even this forum), our posts, comments, and messages are scraped and read by thousands of bots for various legitimate or illegitimate reasons. With the rise of LLMs like ChatGPT, we know that the "understanding" of textual content at scale is more efficient than ever.

So I created **Redakt**, an open source zero-click decryption tool that encrypts any text you publish online so that it is only understandable to other users who have the browser extension installed.

**Try it!** Feel free to install the Chrome/Brave extension *(Firefox coming soon)*: [https://redakt.org/browser/](https://redakt.org/browser/)

EDIT: For example, here's a Medium article with encrypted content: [https://redakt.org/demo/](https://redakt.org/demo/)

Before you ask: what if the bots adapt and also use Redakt's extension or encryption key? Well, first, they don't at the moment (they're too busy gathering billions of data points "in the clear"). If they do use the extension, then any changes we add to it (captcha, encryption method) will force them to readapt and prevent them from scaling their data collection.

Let me know what you guys think!
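The "zero-click" part presumably amounts to a content script scanning page text for redakted tokens and swapping in the plaintext. A minimal Python sketch of that scan, assuming the `r3d4kt-…&&&` token shape seen in this thread; `reveal` and the toy `decrypt` stand-in are hypothetical names, not the extension's actual API:

```python
import base64
import re

# Token shape inferred from the sample payload in this thread:
# "r3d4kt-" prefix, a base64 body, and a "&&&" terminator.
REDAKT_TOKEN = re.compile(r"r3d4kt-([A-Za-z0-9+/=]+)&&&")

def reveal(text: str, decrypt) -> str:
    """Zero-click pass over page text: swap each redakted token for
    whatever `decrypt` returns for its decoded payload."""
    return REDAKT_TOKEN.sub(
        lambda m: decrypt(base64.b64decode(m.group(1))), text)

# Build a dummy token; real ciphertext would come from AES encryption.
body = base64.b64encode(b"Salted__" + b"\x00" * 16).decode()
page = f"Great point! r3d4kt-{body}&&& Totally agree."

# Toy decrypt: just report payload size (the real key lives in the extension).
shown = reveal(page, lambda raw: f"[{len(raw)} encrypted bytes]")
print(shown)  # Great point! [24 encrypted bytes] Totally agree.
```

A bot without the extension (or the key) only ever sees the opaque `r3d4kt-…&&&` blob, while installed users get the substituted plaintext with no interaction.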