cultural reviewer and dabbler in stylistic premonitions

  • 10 Posts
  • 78 Comments
Joined 3Y ago
Cake day: Jan 17, 2022


in other news, the market price of hacked credentials for MAGA-friendly social media accounts:

📈

note

in case it is unclear to anyone: the above is a joke.

in all seriousness, renaming someone else’s account and presenting it to CBP as one’s own would be dangerous and inadvisable. a more prudent course of action at this time is to avoid traveling to the united states.


were you careful to make sure you redacted the parts that have the key’s name and email address?

It should be; if there are chunks missing, it’s unusable. At least that’s my thinking, since gpg is usually a binary and ascii armor makes it human readable. As long as a person cannot guess the blacked-out parts, there shouldn’t be any data.

you are mistaken. A PGP key is a binary structure which includes the metadata. PGP’s “ascii-armor” means base64-encoding that binary structure (and putting the BEGIN and END header lines around it). One can decode fragments of a base64-encoded string without having the whole thing. To confirm this, you can use a tool like xxd (or hexdump): try pasting half of your ascii-armored key into base64 -d | xxd (hit enter and then ctrl-D to terminate the input) and you will see the binary structure as hex and ascii, including the key metadata. i think either half will do, as PGP keys typically have their metadata in there at least twice.
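To make that concrete, here is a small Python sketch using a made-up stand-in (not a real OpenPGP key) which shows that even a fragment of the armored text decodes to readable metadata:

```python
import base64

# Simulate an ASCII-armored key body: binary packet data containing a user ID.
# (This is a hypothetical stand-in, not a real OpenPGP key structure.)
binary_key = b"\x99\x01\x0d" + b"\x00" * 20 + b"Alice Example <alice@example.org>" + b"\x00" * 60
armored_body = base64.b64encode(binary_key).decode()

# Keep only the first half of the armored text (as a partial redaction might
# leave visible), trimmed to a multiple of 4 characters so it decodes cleanly.
half = armored_body[: len(armored_body) // 2]
half = half[: len(half) - len(half) % 4]

print(base64.b64decode(half))  # the user ID bytes are plainly readable
```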


how did you choose which areas to redact? were you careful to make sure you redacted the parts that have the key’s name and email address?


TLDR: this is way more broken than I initially realized

To clarify a few things:

- No JavaScript is sent after the file metadata is submitted

So, when I wrote “downloaders send the filename to the server prior to the server sending them the javascript” in my first comment, I hadn’t looked closely enough - I had just uploaded a file and seen that the download link included the filename in the query part of the URL (the part between the ? and the #). This is the first thing a user sends when downloading, before the server serves the javascript, so the server can clearly decide whether or not to serve malicious javascript based on the filename (as well as the user’s IP).

However, looking again now, I see it is actually much worse - you are sending the password in the URL query too! So, there is no need to ever serve malicious javascript because currently the password is always being sent to the server.

As I said before, the way other similar sites do this is by including the key in the URL fragment which is not sent to the server (unless the javascript decides to send it). I stopped reading when I saw the filename was sent to the server and didn’t realize you were actually including the password as a query parameter too!

😱

The rest of this reply was written when I was under the mistaken assumption that the user needed to type in the password.


That’s a fundamental limitation of browser-delivered JavaScript, and I fully acknowledge it.

Do you acknowledge it anywhere other than in your reply to me here?

This post encouraging people to rely on your service says “That means even I, the creator, can’t decrypt or access the files.” To acknowledge the limitations of browser-based e2ee I think you would actually need to say something like “That means even I, the creator, can’t decrypt or access the files (unless I serve a modified version of the code to some users sometimes, which I technically could very easily do and it is extremely unlikely that it would ever be detected because there is no mechanism in browsers to ensure that the javascript people are running is always the same code that auditors could/would ever audit).”

The text on your website also does not acknowledge the flawed paradigm in any way.

This page says "Even if someone compromised the server, they’d find only encrypted files with no keys attached — which makes the data unreadable and meaningless to attackers." To acknowledge the problem here, this sentence would need to say approximately the same as what I posted above, except replacing “unless I serve” with “unless the person who compromised it serves”. That page goes on to say that “Journalists and whistleblowers sharing sensitive information securely” are among the people who this service is intended for.

The server still being able to serve malicious JS is a valid and well-known concern.

Do you think it is actually well understood by most people who would consider relying on the confidentiality provided by your service?

Again, I’m sorry to be discouraging here, but: I think you should drastically re-frame what you’re offering to inform people that it is best-effort and the confidentiality provided is not actually something to be relied upon alone. The front page currently says it offers “End-to-end encryption for complete security”. If someone wants/needs to encrypt files so that a website operator cannot see the contents, then doing so using software ephemerally delivered from that same website is not sufficient: they should encrypt the file first using a non-web-based tool.

update: actually you should take the site down, at least until you make it stop sending the key to the server.


Btw, DeadDrop was the original name of Aaron Swartz’ software which later became SecureDrop.

it’s zero-knowledge encryption. That means even I, the creator, can’t decrypt or access the files.

I’m sorry to say… this is not quite true. You (or your web host, or a MITM adversary in possession of a certificate authority key) can replace the source code at any time - and can do so on a per-user basis, targeting specific IP addresses - to make it exfiltrate the secret key from the uploader or downloader.

Anyone can audit the code you’ve published, but it is very difficult to be sure that the code one has audited is the same as the code that is being run each time one is using someone else’s website.

This website has a rather harsh description of the problem: https://www.devever.net/~hl/webcrypto … which concludes that all web-based cryptography like this is fundamentally snake oil.

Aside from the fundamentally flawed paradigm of doing end-to-end encryption using javascript that is re-delivered by a webserver on each use, there are a few other problems with your design:

  • allowing users to choose a password and using it as the key means that most users’ keys can be easily brute-forced. (Since users need to copy+paste a URL anyway, it would make more sense to require them to transmit a high-entropy key along with it.)
  • the filenames are visible to the server
  • downloaders send the filename to the server prior to the server sending them the javascript which prompts for the password and decrypts the file. this means you have the ability to target maliciously modified versions of the javascript not only by IP but also by filename.

There are many similar browser-based things which still have the problem of being browser-based but which do not have these three problems: they store the file under a random identifier (or a hash of the ciphertext), and include a high-entropy key in the “fragment” part of the URL (the part after the # symbol) which is by default not sent to the server but is readable by the javascript. (Note that the javascript still can send the fragment to the server, however… it’s just that by default the browser does not.)
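A minimal sketch of that URL layout (the host, path, and key here are hypothetical), showing which parts reach the server and which stay in the browser:

```python
import secrets
from urllib.parse import urlsplit

# A high-entropy key generated client-side and placed in the URL fragment.
key = secrets.token_urlsafe(32)
url = f"https://files.example/d/3fk29sk1#{key}"  # hypothetical download link

parts = urlsplit(url)
print(parts.path, parts.query)  # sent to the server: '/d/3fk29sk1' and an empty query
print(parts.fragment)           # not sent by the browser; readable by the page's javascript
```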

I hope this assessment is not too discouraging, and I wish you well on your programming journey!


When it’s libre software, we’re not banned from fixing it.

Signal is a company and a network service and a protocol and some libre software.

Anyone can modify the client software (though you can’t actually distribute modified versions via Apple’s iOS App Store, for reasons explained below) but if a 3rd party actually “fixed” the problems I’ve been talking about here then it really wouldn’t make any sense to call that Signal anymore because it would be a different (and incompatible) protocol.

Only Signal (the company) can approve of changes to Signal (the protocol and service).

Here is why forks of Signal for iOS, like most seemingly-GPLv3 software for iOS, cannot be distributed via the App Store:

Apple does not distribute GPLv3-licensed binaries of iOS software. When they distribute binaries compiled from GPLv3-licensed source code, it is because they have received another license to distribute those binaries from the copyright holder(s).

Apple does not distribute GPLv3-licensed binaries for iOS because they cannot: the way that iOS works inherently violates the “installation information” (aka anti-tivoization) clause of GPLv3. Apple requires users to agree to additional terms before they can run a modified version of a program, which is precisely what that clause prohibits.

This is why, unlike the Android version of Signal, there are no forks of Signal for iOS.

The way to have the source code for an iOS program be GPLv3 licensed and actually be meaningfully forkable is to have a license exception like nextcloud/ios/COPYING.iOS. So far, at least, this allows Apple to distribute (non-GPLv3!) binaries of any future modified versions of the software which anyone might make. (Legal interpretations could change though, so, it is probably safer to pick a non-GPLv3 license if you’re starting a new iOS project and have a choice of licenses.)

Anyway, the reason Signal for iOS is GPLv3 and they do not do what NextCloud does here is because they only want to appear to be free/libre software - they do not actually want people to fork their software.

Only Signal (the company) is allowed to give Apple permission to distribute binaries to users. The rest of us have a GPLv3 license for the source code, but that does not let us distribute binaries to users via the distribution channel where nearly all iOS users get their software.


Downvoted as you let them bait you. Escaping WhatsApp and Discord, anti-libre software, is more important.

I don’t know what you mean by “bait” here, but…

Escaping to a phone-number-requiring, centralized-on-Amazon, closed-source-server-having, marketed-to-activists, built-with-funding-from-Radio-Free-Asia (for the specific purpose of being used by people opposing governments which the US considers adversaries) service which makes downright dishonest claims of having a cryptographically-ensured inability to collect metadata? No thanks.

(fuck whatsapp and discord too, of course.)


it’s being answered in the github thread you linked

The answers there are only about the fact that it can be turned off and that by default clients will silently fall back to “unsealed sender”.

That does not say anything about the question of what attacks it is actually meant to prevent (assuming a user does “enable sealed sender indicators”).

This can be separated into two different questions:

  1. For an adversary who does not control the server, does sealed sender prevent any attacks? (which?)
  2. For an adversary who does control the server, how does sealed sender prevent that adversary from identifying the sender (via the fact that they must identify themselves to receive messages, and do so from the same IP address)?

The strongest possibly-true statement i can imagine about sealed sender’s utility is something like this:

For users who enable sealed sender indicators AND who are connecting to the internet from the same IP address as some other Signal users, from the perspective of an adversary who controls the server, sealed sender increases the size of the set of possible senders for a given message from one to the number of other Signal users who were online from behind the same NAT gateway at the time the message was sent.

This is a vastly weaker claim than saying that “by design” Signal has no possibility of collecting any information at all besides the famous “date of registration and last time user was seen online” which Signal proponents often tout.


False.

edit: it’s funny how people downvoting comments about signal’s sealed sender being a farce never even attempt to explain what its threat model is supposed to be. (meaning: what attacks, with which adversary capabilities specifically, is it designed to prevent?)


You can configure one or more of your profiles’ addresses to be a “business address”, which means that when people contact you via it, a new group is created automatically. Then you can (optionally, on a per-contact basis) add your other devices’ profiles to it (as can your contact with their other devices, after you make them an admin of the group).

It’s not the most obvious/intuitive system but it works well and imo this paradigm is actually better than most systems’ multi-device support in that you can see which device someone is sending from and you can choose to give different contacts access to a different subset of your devices than others.


You can just make a group for each contact with all of your (and their) devices in it.


Messages are private on signal and they cannot be connected to you through sealed sender.

No. Signal’s sealed sender has an incoherent threat model and only protects against an honest server, and if the server is assumed to be honest then a “no logs” policy would be sufficient.

Sealed sender is complete security theater. And, just in case it is ever actually difficult for the server to infer who is who (eg, if there are many users behind the same NAT), the server can also simply turn it off and the client will silently fall back to “unsealed sender”. 🤡

The fact that they go to this much dishonest effort to convince people that they “can’t” exploit their massive centralized trove of activists’ metadata is a pretty strong indicator of one answer to OP’s question.


StartPage/StartMail is owned by an adtech company whose website boasts that they “develop & grow our suite of privacy-focused products, and deliver high-intent customers to our advertising partners” 🤔

They have a whitepaper which actually does a good job explaining how end-to-end encryption in a web browser (as Tuta, Protonmail, and others do) can be circumvented by a malicious server:

The malleability of the JavaScript runtime environment means that auditing the future security of a piece of JavaScript code is impossible: The server providing the JavaScript could easily place a backdoor in the code, or the code could be modified at runtime through another script. This requires users to place the same measure of trust in the server providing the JavaScript as they would need to do with server-side handling of cryptography.

However (i am not making this up!) they hilariously use this analysis to justify having implemented server-side OpenPGP instead 🤡


Tuta’s product is snake oil.

A cryptosystem is incoherent if its implementation is distributed by the same entity which it purports to secure against.

If you don’t care about their (nonstandard, incompatible, and snake oil) end-to-end encryption feature and just want a freemium email provider which (purports to) protect your privacy in other ways, the fact that their flagship feature is snake oil should still be a red flag.


https://digdeeper.club/articles/browsers.xhtml has a somewhat comprehensive analysis of a dozen of the browsers you might consider, illuminating depressing (and sometimes surprising) privacy problems with literally all of them.

In the end it absurdly recommends something which forked from Firefox a very long time ago, which is obviously not a reasonable choice from a security standpoint. I don’t have a good recommendation, but I definitely don’t agree with that article’s conclusion: privacy features are pointless if your browser is trivially vulnerable to exploits for a plethora of old bugs, which will inevitably be the case for a volunteer-run project that diverged from Firefox a long time ago and thus cannot benefit from Mozilla’s security fixes in each new release.

However, despite its ridiculous conclusion, that page’s analysis could still be helpful when you’re deciding which of the terrible options to pick.


short answer: because nobody flagged that other one. (it is deleted now too.)

re: riseup, is it even possible to use their VPN without an invite code? (i don’t think it is?)

in any case, riseup says clearly that their purpose is “to provide digital self-determination for social movements” - it is not intended for torrenting, even if it might work for it.

feel free to PM me if you want to discuss this further; i am deleting this post too. (at the time of deletion it has 8 upvotes and 33 downvotes, btw.)



some of the privacy messengers here (like Briar) have blogging/forum features

many people incorrectly assume briar aims to provide some sort of anonymity, because it uses tor onion services and is a self-described “secure messenger”. however, that is not the case:

https://code.briarproject.org/briar/briar/-/wikis/FAQ#does-briar-provide-anonymity (answer: no)

tldr: briar contacts, even when only actually using onions, exchange their bluetooth MAC addresses, their most recent IPv6 link-local address, and the last five IPv4 addresses briar has seen bound to their wlan interfaces, just in case you’re ever physically near a contact and want to automatically connect to them locally.



Those instructions will likely still work, but fwiw MotionEyeOS (a minimal Linux distro built on buildroot rather than Debian) appears to have ceased development in 2020.

The MotionEye web app that distro was built for is still being developed, however, as is Motion itself (which is packaged in Debian/Ubuntu/etc and is actually the only software you really need).
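For anyone going that route, a minimal Motion setup on Debian/Ubuntu looks roughly like the sketch below. Option names vary a bit between Motion versions, so treat the exact names here as assumptions to check against the documentation shipped with your motion.conf:

```
# apt install motion, then edit /etc/motion/motion.conf

# the USB webcam device
videodevice /dev/video0
width 1280
height 720
framerate 15

# where snapshots and movies are written
target_dir /var/lib/motion

# live MJPEG stream, viewable from other machines on the LAN
stream_port 8081
stream_localhost off
```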


CSI camera modules can be a pain; it’s easier to use a normal USB webcam and have more options for positioning it.

Also, you don’t need to limit yourself to a Raspberry Pi; you can use any single-board computer - hackerboards.com has a database of them.





he wouldn’t be able to inject backdoors even if he wanted to, since the source code is open

Jia Tan has entered the chat


If you use systemd’s DHCP client, since version 235 you can set Anonymize=true in your network config to stop sending unique identifiers as per RFC 7844 Anonymity Profiles for DHCP Clients. (Don’t forget to also set MACAddressPolicy=random.)
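For example, a minimal sketch (the interface match below is hypothetical, and note that MACAddressPolicy= lives in a .link file rather than the .network file):

```ini
# /etc/systemd/network/20-wired.network
[Match]
Name=en*

[Network]
DHCP=ipv4

[DHCPv4]
# older systemd versions call this section [DHCP]
# RFC 7844 anonymity profile: minimizes identifying options (e.g. stops sending the hostname)
Anonymize=true
```

```ini
# /etc/systemd/network/00-random-mac.link
[Match]
OriginalName=*

[Link]
MACAddressPolicy=random
```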


They only do that if you are a threat.

Lmao. Even CBP does not claim that. On the contrary, they say (and courts have so far agreed) that they can perform these types of border searches without any probable cause, and even without reasonable suspicion (a weaker legal standard than probable cause).

In practice they routinely do it to people who are friends with someone (or recently interacted with someone on social media) who they think could be a threat, as well as to people who have a name similar to someone else they’re interested in for whatever reason, or if the CBP officer just feels like it - often because of what the person looks like.

It’s nice for you that you feel confident that you won’t be subjected to this kind of thing, but you shouldn’t assume OP and other people don’t need to be prepared for it.


If they ask for a device’s password and you decline to give it to them, they will “detain” the device. See this comment for some links on the subject.


I’m pretty sure that immigration in the US can just confiscate your devices if you are not a citizen.

CBP can and does “detain” travelers’ devices at (or near) the border, without a warrant or any stated cause, even if they are US citizens.

Here is part of the notice they give people when they do:

Screenshot of the initial paragraphs of CBP Publication No. 3160-0423, Revised April 2023, titled "Border Search of Electronic Devices", with text:

All persons, baggage, and merchandise arriving in, or departing from, the United States are subject to inspection by U.S. Customs and Border Protection (CBP). This search authority includes all electronic devices crossing our nation’s borders.

What to Expect

You are receiving this document because CBP intends to conduct a border search of your electronic device(s). This may include copying and retaining data contained in the device(s). The CBP officer conducting the examination will speak with you and explain the process.

Travelers are obligated to present electronic devices and the information resident on the device in a condition that allows for the examination of the device and its contents. Failure to assist CBP in accessing the electronic device and its contents for examination may result in the detention of the device in order to complete the inspection.

Throughout CBP’s inspection, you should expect to be treated in a courteous, dignified, and professional manner. As border searches are a law enforcement activity, CBP officers may not be able to answer all of your questions about an examination that is underway. If you have concerns, you can always ask to speak with a CBP supervisor.

CBP will return your electronic device(s) prior to your departure from the port of entry unless CBP identifies a need to temporarily detain the device(s) to complete the search or the device is subject to seizure. If CBP detains or seizes your device(s), you will receive a completed written custody receipt detailing the item(s) being detained or seized, who at CBP will be your point of contact, and how to contact them. To facilitate the return of your property, CBP will request contact information.


Or just removing my biometrics?

Ultimately you shouldn’t cross the US border carrying devices or encrypted data which you aren’t prepared to unlock for DHS/CBP, unless you’re willing to lose the hardware and/or be denied entry if/when you refuse to comply.

If they decide to, you’ll be handed this: “You are receiving this document because CBP intends to conduct a border search of your electronic device(s). This may include copying and retaining data contained in the device(s). […] Failure to assist CBP in accessing the electronic device and its contents for examination may result in the detention of the device in order to complete the inspection.”

Device searches were happening a few hundred times each month circa 2009 (the most recent data i could find in a quick search) but, given other CBP trends, presumably they’ve become more frequent since then.

In 2016 they began asking some visa applicants for social media usernames, and then expanded it to most applicants in 2019, and the new administration has continued that policy. I haven’t found any numbers about how often they actually deny people entry for failing to disclose a social media account.

In 2017 they proposed adding the authority to also demand social media passwords but at least that doesn’t appear to have been implemented.


It seems to me that switching SIMs provides little privacy benefit, because carriers, data brokers, and the adversaries of privacy-seeking people with whom they share data are obviously able to correlate IMEIs (phones) with IMSIs (SIMs).

What kind of specific privacy threats do you think are mitigated by using different SIMs in the same phone (especially the common practice of using an “anonymous” SIM in a phone where you’ve previously used a SIM linked to your name)?


If you’re ready to break free of Android, I would recommend https://postmarketos.org/ though it only works well on a small (but growing!) number of devices.

imho if you want to (or must) run Android and have (or don’t mind getting) a Pixel, Graphene is an OK choice, but CalyxOS is good too and runs on a few more devices.


It’s literally a covert project funded by google to both sell pixels and harvest data of “privooocy” minded users. It seems to be working well.

Is it actually funded by Google? Citation needed.

I would assume Graphene users make up a statistically insignificant number of Pixel buyers, and most of the users of it I’ve met opt to use it without any Google services.


Indeed, the only thing WhatsApp-specific in this story is that WhatsApp engineers are the ones pointing out this attack vector and saying someone should maybe do something about it. A lot of the replies here don’t seem to understand that this vulnerability applies equally to almost all messaging apps - hardly any of them even pad their messages to a fixed size, much less send cover traffic and/or delay messages. 😦
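For illustration, a minimal sketch of the kind of fixed-size padding being described (the bucket size is an arbitrary hypothetical choice; a real messenger would encrypt the padded payload):

```python
import os

BUCKET = 4096  # hypothetical fixed bucket size in bytes

def pad(message: bytes) -> bytes:
    """Length-prefix the message and fill the rest of the bucket with random
    bytes, so every (subsequently encrypted) payload is the same size."""
    if len(message) + 4 > BUCKET:
        raise ValueError("message too large for a single bucket")
    prefix = len(message).to_bytes(4, "big")
    return prefix + message + os.urandom(BUCKET - 4 - len(message))

def unpad(padded: bytes) -> bytes:
    n = int.from_bytes(padded[:4], "big")
    return padded[4 : 4 + n]

assert unpad(pad(b"hello")) == b"hello"
assert len(pad(b"hello")) == len(pad(b"a much longer message"))
```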




So then send the URL to the play store page from the app posted in ops photo. Go ahead, waiting.

lol, what? i did, in another comment, shortly before you posted this. here it is again: https://play.google.com/store/apps/details?id=com.google.android.apps.devicelock



You act like it is Google’s fault that someone found questionable software on the phone they got from Rent-a-center or Alibaba.

Google made the app.


It sure is convenient for law enforcement and others to have the ability to immediately get the IP addresses of all visitors to a specific URL. (They just need to circumvent the OHTTP by asking fastly and google to collude…)


they basically agree with you

yes, I realize :)

I should’ve made clear in my comment that, aside from a bit of imperfect English and incorrect use of the term snake oil, I think this is an excellent blog post.


post-quantum cryptography can be compared with a remedy against the illness that nobody has, without any guarantee that it will work. The closest analogy in the history of medicine is snake oil.

Good on them for saying that.

A “remedy against the illness that nobody has” is a good analogy, but it is important to note that it’s an illness which there is a consensus we are likely to eventually have and a remedy that there is good reason to believe will be effective.

It isn’t a certainty that there will ever be a cryptographically relevant quantum computer, and it also isn’t a certainty that today’s post-quantum algorithms (like most classical cryptography) won’t turn out to be breakable even by yesterday’s computers. The latter point is why it’s best to deploy post-quantum cryptography in a hybrid construction, such that the system remains secure even if one of the primitives turns out to be breakable.
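As a rough sketch of what a hybrid construction means here (simplified; real protocols derive keys with a proper KDF such as HKDF over both shared secrets):

```python
import hashlib

def hybrid_key(classical_secret: bytes, pq_secret: bytes, context: bytes) -> bytes:
    """Combine a classical shared secret (e.g. from X25519) with a post-quantum
    one (e.g. from an ML-KEM encapsulation) so the result stays secret as long
    as EITHER of the two inputs remains unbroken."""
    return hashlib.sha256(classical_secret + pq_secret + context).digest()
```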

That said, I think it is totally wrong to call PQC snake oil because that term in the context of cryptography specifically means that a system is making dishonest claims: https://en.wikipedia.org/wiki/Snake_oil_(cryptography)






...but participating websites aren't supposed to use it unless you "consent" 🤡