cultural reviewer and dabbler in stylistic premonitions

  • 10 Posts
  • 57 Comments
Joined 3Y ago
Cake day: Jan 17, 2022




he wouldn’t be able to inject backdoors even if he wanted to, since the source code is open

Jia Tan has entered the chat


If you use systemd’s DHCP client, since version 235 you can set Anonymize=true in your network config to stop sending unique identifiers as per RFC 7844 Anonymity Profiles for DHCP Clients. (Don’t forget to also set MACAddressPolicy=random.)
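
For anyone who wants a concrete starting point, here is a minimal sketch of what that can look like with systemd-networkd (the interface name and file names below are just placeholders for your own setup):

    # /etc/systemd/network/20-wired.network
    [Match]
    Name=eth0

    [Network]
    DHCP=yes

    [DHCPv4]
    # send only minimal, non-identifying DHCP options per RFC 7844
    Anonymize=true

    # /etc/systemd/network/00-default.link
    [Match]
    OriginalName=*

    [Link]
    # use a randomly generated MAC address instead of the hardware one
    MACAddressPolicy=random

Note that MACAddressPolicy= lives in a .link file rather than the .network file, and randomizing the MAC only helps if the DHCP traffic itself stops sending unique identifiers (which is what Anonymize=true does).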


They only do that if you are a threat.

Lmao. Even CBP does not claim that. On the contrary, they say (and courts have so far agreed) that they can perform these types of border searches without any probable cause, and even without reasonable suspicion (a weaker legal standard than probable cause).

In practice they routinely do it to people who are friends with (or recently interacted with on social media) someone they think could be a threat, to people whose name is similar to that of someone else they’re interested in for whatever reason, or simply because the CBP officer feels like it - often because of what the person looks like.

It’s nice for you that you feel confident that you won’t be subjected to this kind of thing, but you shouldn’t assume OP and other people don’t need to be prepared for it.


If they ask for a device’s password and you decline to give it to them, they will “detain” the device. See this comment for some links on the subject.


I’m pretty sure that immigration in the US can just confiscate your devices if you are not a citizen.

CBP can and does “detain” travelers’ devices at (or near) the border, without a warrant or any stated cause, even if they are US citizens.

Here is part of the notice they give people when they do:

Screenshot of the initial paragraphs of CBP Publication No. 3160-0423, Revised April 2023, titled "Border Search of Electronic Devices", with text:

All persons, baggage, and merchandise arriving in, or departing from, the United States are subject to inspection by U.S. Customs and Border Protection (CBP). This search authority includes all electronic devices crossing our nation’s borders.

What to Expect

You are receiving this document because CBP intends to conduct a border search of your electronic device(s). This may include copying and retaining data contained in the device(s). The CBP officer conducting the examination will speak with you and explain the process.

Travelers are obligated to present electronic devices and the information resident on the device in a condition that allows for the examination of the device and its contents. Failure to assist CBP in accessing the electronic device and its contents for examination may result in the detention of the device in order to complete the inspection.

Throughout CBP’s inspection, you should expect to be treated in a courteous, dignified, and professional manner. As border searches are a law enforcement activity, CBP officers may not be able to answer all of your questions about an examination that is underway. If you have concerns, you can always ask to speak with a CBP supervisor.

CBP will return your electronic device(s) prior to your departure from the port of entry unless CBP identifies a need to temporarily detain the device(s) to complete the search or the device is subject to seizure. If CBP detains or seizes your device(s), you will receive a completed written custody receipt detailing the item(s) being detained or seized, who at CBP will be your point of contact, and how to contact them. To facilitate the return of your property, CBP will request contact information.


Or just removing my biometrics?

Ultimately you shouldn’t cross the US border carrying devices or encrypted data which you aren’t prepared to unlock for DHS/CBP, unless you’re willing to lose the hardware and/or be denied entry if/when you refuse to comply.

If they decide to, you’ll be handed this: “You are receiving this document because CBP intends to conduct a border search of your electronic device(s). This may include copying and retaining data contained in the device(s). […] Failure to assist CBP in accessing the electronic device and its contents for examination may result in the detention of the device in order to complete the inspection.”

Device searches were happening a few hundred times each month circa 2009 (the most recent data i could find in a quick search) but, given other CBP trends, presumably they’ve become more frequent since then.

In 2016 they began asking some visa applicants for social media usernames, and then expanded it to most applicants in 2019, and the new administration has continued that policy. I haven’t found any numbers about how often they actually deny people entry for failing to disclose a social media account.

In 2017 they proposed adding the authority to also demand social media passwords but at least that doesn’t appear to have been implemented.


It seems to me that switching SIMs provides little privacy benefit, because carriers and data brokers (and the adversaries of privacy-desiring people with whom they share data) are obviously able to correlate IMEIs (phones) with IMSIs (SIMs).

What kind of specific privacy threats do you think are mitigated by using different SIMs in the same phone (especially the common practice of using an “anonymous” SIM in a phone where you’ve previously used a SIM linked to your name)?


If you’re ready to break free of Android, I would recommend https://postmarketos.org/ though it only works well on a small (but growing!) number of devices.

imho if you want to (or must) run Android and have (or don’t mind getting) a Pixel, Graphene is an OK choice, but CalyxOS is good too and runs on a few more devices.


It’s literally a covert project funded by google to both sell pixels and harvest data of “privooocy” minded users. It seems to be working well.

Is it actually funded by Google? Citation needed.

I would assume Graphene users make up a statistically insignificant number of Pixel buyers, and most of the users of it I’ve met opt to use it without any Google services.


Indeed, the only thing WhatsApp-specific in this story is that WhatsApp engineers are the ones pointing out this attack vector and saying someone should maybe do something about it. A lot of the replies here don’t seem to understand that this vulnerability applies equally to almost all messaging apps - hardly any of them even pad their messages to a fixed size, much less send cover traffic and/or delay messages. 😦
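
For anyone wondering what “pad their messages to a fixed size” would look like in practice, here is a rough sketch of bucketed padding in Python - the bucket sizes are made up for illustration and aren’t taken from any real app:

    # example size classes (made up); each message is padded up to the
    # smallest class it fits in before encryption, so an observer of the
    # ciphertext only learns the bucket, not the exact message length
    BUCKETS = [256, 1024, 4096, 16384]

    def pad_to_bucket(plaintext: bytes) -> bytes:
        # 4-byte length prefix lets the receiver strip the padding again
        framed = len(plaintext).to_bytes(4, "big") + plaintext
        for size in BUCKETS:
            if len(framed) <= size:
                return framed + b"\x00" * (size - len(framed))
        raise ValueError("message too large for the largest bucket")

    def unpad(padded: bytes) -> bytes:
        length = int.from_bytes(padded[:4], "big")
        return padded[4:4 + length]

Padding hides how much you said; cover traffic and randomized delays hide when (and whether) you said anything, which is the other half of the problem.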




So then send the URL to the play store page from the app posted in ops photo. Go ahead, waiting.

lol, what? i did, in another comment, shortly before you posted this. here it is again: https://play.google.com/store/apps/details?id=com.google.android.apps.devicelock



You act like it is Google’s fault that someone found questionable software on the phone they got from Rent-a-center or Alibaba.

Google made the app.


It sure is convenient for law enforcement and others to have the ability to immediately get the IP addresses of all visitors to a specific URL. (They just need to circumvent the OHTTP by asking fastly and google to collude…)


they basically agree with you

yes, I realize :)

I should’ve made clear in my comment that, aside from a bit of imperfect English and incorrect use of the term snake oil, I think this is an excellent blog post.


post-quantum cryptography can be compared with a remedy against the illness that nobody has, without any guarantee that it will work. The closest analogy in the history of medicine is snake oil.

Good on them for saying that.

A “remedy against the illness that nobody has” is a good analogy, but it is important to note that it’s an illness which there is a consensus we are likely to eventually have, and a remedy which there is good reason to believe will be effective.

It isn’t a certainty that there will ever be a cryptographically relevant quantum computer, and it also isn’t a certainty that the post-quantum algorithms which exist today (as with most classical cryptography) won’t turn out to be breakable even by yesterday’s computers. The latter point is why it’s best to deploy post-quantum cryptography in a hybrid construction, so that the system remains secure even if one of the primitives turns out to be breakable.
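
To make “hybrid construction” a bit more concrete, here is a rough sketch of the usual idea in Python. The two input secrets are just random placeholders standing in for, say, an X25519 shared secret and an ML-KEM shared secret - this is not any particular protocol’s actual KDF:

    import hashlib
    import hmac
    import os

    # HKDF (RFC 5869) with SHA-256, using only the standard library
    def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
        return hmac.new(salt, ikm, hashlib.sha256).digest()

    def hkdf_expand(prk: bytes, info: bytes, length: int = 32) -> bytes:
        okm, block, counter = b"", b"", 1
        while len(okm) < length:
            block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
            okm += block
            counter += 1
        return okm[:length]

    # placeholders for the outputs of a classical key exchange and a
    # post-quantum KEM; in a real protocol these come from the handshake
    classical_secret = os.urandom(32)
    pq_secret = os.urandom(32)

    # the session key depends on both inputs, so an attacker has to break
    # both the classical and the post-quantum primitive to recover it
    session_key = hkdf_expand(
        hkdf_extract(salt=b"hybrid-example", ikm=classical_secret + pq_secret),
        info=b"session key",
    )

If either primitive is later broken, the other input still keeps the derived key out of reach, which is the whole point of deploying them as a hybrid.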

That said, I think it is totally wrong to call PQC snake oil because that term in the context of cryptography specifically means that a system is making dishonest claims: https://en.wikipedia.org/wiki/Snake_oil_(cryptography)




fwiw, besides the “Proton’s Free plan now offers up to […] after completing certain tasks.” post earlier, i also just deleted some adverinfonewstainment tutanota spam blogpost ("Chat Control May Finally Be Dead: European Court Rules That Weakening Encryption Is Illegal") from this community.

tutanota is just like protonmail except there is more evidence indicating that they are primarily a honeypot for privacy-seeking rubes (as opposed to protonmail where it is maybe only obvious to people knowledgeable about the history of the privacy industry).

People should be skeptical of anyone selling a service involving cryptography software which has nearly no conceivable purpose except to protect against the entity delivering the software. Especially if they re-deliver the software to you every time you use it, via a practically-impossible-to-audit channel, and require you to identify yourself before re-receiving it (as almost any browser-based e2ee service which doesn’t require installing software does, due to the current web architecture).

If you think this kind of perfect-for-targeted-exploitation architecture isn’t regularly used for targeted exploitation… well, you’re mistaken. In the web context specifically, it has been happening since the 90s.

imo this community should not tolerate advertising (or other posts whose purpose is to encourage using/purchasing) this type of deceptively-marketed service.


almost every proprietary thing, including windows and macos, has some open source components.




Briar has even fewer N/As than SimpleX and all greens otherwise. Second column in the table.

Briar has a yellow Yes in row 12 ('requires global identity')

… presumably because (if you have a single instance of Briar installed) two different people you’re talking to can compare notes and confirm you’re the same person, while in SimpleX you can create disposable/ephemeral identities for different chats.

I haven’t reviewed this thoroughly, but I can see that there are a lot of attributes that could be added to this table with regard to metadata protection against various parties, including revealing online presence to servers and contacts (which is an area where Briar falls short).


This is worthy of a more usable interface than this spreadsheet widget.

It took me a fair bit of scrolling to identify which attributes the six purple “N/A” values for SimpleX correspond to, but now that I have, I agree they’re accurate (though I think there is an argument to be made for just writing a green “no” for each of them).

It is noteworthy that SimpleX is the only one of these (currently 34) messengers without a single red or yellow cell in its column. well done, @epoberezkin@lemmy.ml! 😀

edit: istm that SimpleX (along with several other things) getting a “no” in the “can hand IP address to the police” row is not really accurate. SimpleX does better than many things here in that they don’t have a lot of other info to give to the police along with the IP, but, if Bob has their phone seized (or remotely compromised) and then the police reading Alice and Bob’s messages from Bob’s phone want to know Alice’s IP address… they can compel a server operator to give it to them. (And it is the same for a user who posts a SimpleX contact link publicly.)


It’s possible that it had some vulnerability which was automatically exploited by one of her majesty’s secret services (perhaps with help from their US counterparts) to make it a component of their covert infrastructure.

Sounds outlandish, but this was happening in 2010:


(The only client that implements Material You in a fun and usable way, Sync is usable one-handed)

Touchscreen keyboards and their consequences have been a disaster for the human race.


Sure, fuck WhatsApp, but Telegram isn’t even end-to-end encrypted most of the time. Their group chats never are, and their “secret chat” encryption for non-group chats must be explicitly enabled and hardly ever is because it disables some features. And when it is encrypted, it’s with some dubious nonstandard cryptography.

It’s also pseudo open source; they do publish source code once in a while but it never corresponds to the binaries that nearly everyone actually uses.

And the audacity to talk about metadata when Telegram accounts still require a phone number today (as they did five years ago when this post was written) is just… 🤯

State-sponsored exploits against WhatsApp might be more common than against Telegram, or at least we hear about them more, but it’s not because the app is more vulnerable: it’s because governments don’t need to compromise the endpoint to read your Telegram messages: they can just add a new device to your account with an SMS and see everything.

(╯°□°)╯︵ ┻━┻

Anything claiming to prioritize privacy yet asking for your phone number (Telegram, WhatsApp, Signal, …) is a farce.


I haven’t had a chance to check anything yet, but given who (Mozilla) is reacting and how, I suspect this is just another case of EU authorities acting to protect their citizens from (American) corporate abuse

Not in this case. I suggest you read the open letter (which is signed by 335 scientists and researchers from 32 countries so far).

Or, do you consider it to be corporate abuse when Mozilla prevents governments from using their certificate authorities to launch MITM attacks and impersonate websites for the purpose of intercepting internet traffic? Because that is what we’re talking about.


This article makes some good points generally, but it is ultimately marketing for a commercial snakeoil service which has a gigantic backdoor in its very threat model: when a tutanota user sends an “end to end encrypted email” to a non-tutanota user, what actually happens is that the recipient receives a link to a web page into which they type the encryption key.

Even if the javascript on that page is open source and audited, it is not possible (even for sophisticated users) to verify that the server is actually sending the correct javascript each time a user accesses it. So, the server can easily target specific users and circumvent their encryption. The same applies to tutanota users emailing each other when one of them is using the webmail interface.

This effectively reduces the security of their e2ee to “it works as long as the server remains honest”. But, if you fully trust the server to always do what it says it will, why bother with e2ee at all? They may as well just promise not to read your email.

I am removing this from !privacy@lemmy.ml with the reason “advertising for snakeoil”. (If you’re reading this on another instance and the post isn’t deleted, ask your instance admins to upgrade… outdated versions of lemmy have a bug which prevents some moderation actions from federating.)



where you insert yourself as an expert on what Open Source is/not is

this is not really a controversial topic; assuming you were just confused, I linked to the definition and (in another comment you replied to) to the list of governments and other entities which all agree about it. i again encourage you to read those links as it sounds like you haven’t.

since you’ve declined to remove the inaccurate statement “The Software is open-source” from your post here in !privacy@lemmy.ml I am removing the post. (since I am an admin rather than a mod of the community, the moderation action will only federate to instances running the latest version of lemmy, which your instance isn’t, but fyi it should be removed from lemmy.ml and any other instances running updated software.)

fwiw i think this is the first time i’ve used my admin privileges to remove something in a discussion i participated in myself, which tbh feels a little weird, but since this is a clear case of someone declining to remove a post making an objectively false claim, i’m going to.



Still i would argue that it is open source, since it is open for everyone to see.

You are mistaken. Please read The Open Source Definition and the Open-source software wikipedia article, and then kindly edit your post to remove the inaccurate statement “The Software is open-source”.


yes, as i said, it is not free software.

it is also not open source software.

hey @ToxicWaste@lemm.ee can you please edit your post to remove the inaccurate statement “The Software is open-source”? you could say it is “source-visible software” or some other 🤡 term, but “open source” has a definition and this software’s license ain’t it.


where did you find that gitlab link? it isn’t linked from the project website; looking at the website i would assume it isn’t free software.

edit: oh, i see it isn’t actually free software after all, it is under a ‘source visible’ proprietary license. 🥱




What stops them from being able to? They could actually infer a lot of the metadata just from the encrypted network traffic, without even looking inside the VMs at their execution state. But, they can also see inside, so they can keep the kind of logs (outside the VM) which Signal [says that they] wouldn’t.



They say that they don’t, and I think it is extremely likely that Signal employees are entirely sincere when they say that.

But, even if they truly don’t keep metadata, they can’t actually know what their hosting provider (Amazon) is doing. And, their cryptographic “sealed sender” thing doesn’t really solve the problem. If someone with the right access at Amazon really wants the Signal metadata, they can get it, and if they can, anybody who can coerce, compel, or otherwise compromise those people (or their computers) can get it too.

One can say they’re confident that the kind of adversaries they care to protect against don’t have that kind of capability, but it isn’t reasonable to say that Signal’s no-logging policy protects metadata without adding the caveat that routing all the traffic through Amazon makes the metadata of the protocol’s entire userbase available in a single place for the kind of adversaries that do.



...but participating websites aren't supposed to use it unless you "consent" 🤡