
Matrix messaging apps. It’s nice to have modern messaging features, end-to-end encrypted, with no single point of failure, no Google involvement, and no phone numbers. I expect to start recommending it widely when the 2.0 features land in the popular clients.

WireGuard VPN. It’s fast, even on low-power devices.
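For reference, a WireGuard client needs only a tiny config file. This is a hedged sketch, not a working setup: the keys, addresses, and endpoint are placeholders you would replace with your own.

```ini
[Interface]
# Hypothetical client keypair and tunnel address
PrivateKey = <client-private-key>
Address = 10.8.0.2/32
DNS = 10.8.0.1

[Peer]
# Hypothetical server public key and endpoint
PublicKey = <server-public-key>
Endpoint = vpn.example.com:51820
# Route all IPv4 and IPv6 traffic through the tunnel
AllowedIPs = 0.0.0.0/0, ::/0
# Keep NAT mappings alive behind home routers
PersistentKeepalive = 25
```

Bring it up with `wg-quick up wg0` (assuming the file is saved as wg0.conf).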

Self-hosted Mumble. Excellent low-latency voice quality for chatting or gaming with friends.

Radicale, DAVx⁵, and Thunderbird, for calendar and contact sync between mobile and desktop, without handing the data over to Google or anyone else.
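For anyone curious what that setup involves server-side, Radicale's configuration is a short INI file. This is a minimal sketch with assumed paths and the default port; your distro's defaults may differ.

```ini
[server]
# Listen on Radicale's default port
hosts = 0.0.0.0:5232

[auth]
# htpasswd-based logins; filename and encryption are assumptions
type = htpasswd
htpasswd_filename = /etc/radicale/users
htpasswd_encryption = bcrypt

[storage]
# Where calendars and address books are stored on disk
filesystem_folder = /var/lib/radicale/collections
```

DAVx⁵ and Thunderbird then sync against the server's CalDAV/CardDAV URL (e.g. https://your-server:5232/username/).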


The security provided by a browser is constantly changing, as the vulnerabilities, attacks, and countermeasures are constantly changing. It’s a cat-and-mouse game that never ends.

The privacy provided by a browser would be difficult to measure, since it depends a lot on browsing habits, extensions, code changes between versions, etc.

There’s no good way to calculate a metric for either type of protection, and even if there were, the metrics would be obsolete very quickly. For these reasons, I wouldn’t have tried what you attempted here.

However, there is a very simple way to compare the major browsers on privacy and reach a pretty accurate conclusion: Compare the developers’ incentives.


Yes, it’s safe, because no, they don’t relay it. The brilliant thing about it is that it’s all done locally, on your machine.


Cryptocurrencies are not reliably fungible, nor stable, nor widely accepted. They have their uses, but they are not suitable replacements for PayPal and not what OP asked for.


There is no privacy-focused PayPal alternative in the US, in part because US money transfer laws and policies (e.g. Know Your Customer) directly oppose privacy.

However, there are a couple of new projects that might eventually lead to something less bad for privacy than PayPal is:

  • GNU Taler, if they ever get any exchanges, and they either figure out how to mitigate the high fees for wire transfers or use some other settlement method when people on different exchanges make small payments. (Their plan to use batch wire transfers won’t help until the exchanges get a lot of adoption and frequent use. Of course, high fees discourage adoption and use, so this might not ever happen.)
  • FedNow, if banks ever use it to offer appealing person-to-person payment services instead of just using it for themselves and their business customers.

The rest of the sentence you truncated points out forwarding services. Yes, others exist beyond the four I mentioned, of course.

Edit to clarify: Your “it doesn’t” argument is that you can use forwarding from other domains that you own. Indeed you can, but that’s not a counterargument, because those are forwarding services; they do exactly what the example services in my original comment do. You still have to maintain them, as well as the extra domains.


I don’t know if Element Web/Desktop was affected by the vulnerabilities in the title, but another one (also announced today) is fixed in Element Web/Desktop v1.11.81.

https://github.com/element-hq/element-web/security/advisories/GHSA-3jm3-x98c-r34x


The correct fix is to get the site maintainers to stop rejecting email addresses based on the characters they contain. They shouldn’t be doing that. Sadly, some developers believe it’s an appropriate way to deter bots, and it can be difficult to educate them.

If they won’t fix it, the workarounds are to either not use those sites, or to give them a different address. Unfortunately, the latter means having to maintain multiple email accounts, or forwarding services like Addy.io, SimpleLogin, Firefox Relay, or DuckDuckGo Email.
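To illustrate the first point: strict character filters inevitably reject valid addresses (plus-tags, uncommon TLDs), while a permissive check followed by a confirmation email catches real typos just as well. A minimal Python sketch, where the strict pattern is a made-up but typical example of the kind of filter sites use:

```python
import re

# An overly strict pattern of the kind some sites use; it wrongly
# rejects valid addresses such as ones with "+" tags or newer TLDs.
STRICT = re.compile(r"^[a-z0-9._]+@[a-z0-9-]+\.(com|net|org)$")

def naive_accepts(addr: str) -> bool:
    return STRICT.match(addr) is not None

def permissive_accepts(addr: str) -> bool:
    # The full RFC 5321/5322 grammar is complex; in practice, checking
    # for a single "@" separating non-empty local and domain parts,
    # then sending a confirmation mail, is the robust approach.
    local, sep, domain = addr.rpartition("@")
    return bool(sep) and bool(local) and bool(domain)

print(naive_accepts("user+tag@example.com"))       # False: valid address rejected
print(permissive_accepts("user+tag@example.com"))  # True
print(permissive_accepts("not-an-address"))        # False
```

The strict filter blocks a perfectly deliverable address, which is exactly the behavior worth reporting to site maintainers.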


I no longer consider any email app to be okay for privacy if I can’t build it from source code. There are just too many opportunities and incentives for someone to exploit it. That could be the developer, or the maintainer of some obscure code library, or a company that buys one of them out, or an attacker who found a vulnerability. We no longer live in a world where it’s reasonable to think we’ll get privacy from communications software that we can’t inspect.

Thankfully, we also no longer live in a world without options. There are more than a few email apps with nothing to hide. :)



People in privacy circles do talk about phone numbers, but it’s usually about them being collected in the first place. Most of us realize that corporate promises to delete them later are easily reneged and impossible to verify, and therefore next to worthless. We need laws forbidding data collection. We don’t have them yet.

By the way, that title is useless to people who are browsing Lemmy to see which posts might interest them.



I don’t know why VPN providers promote themselves as like they are going to make your connection more private, everything is already encrypted (except DNS).

It’s true that most popular web sites have moved to HTTPS, but even if all of them had, not all network traffic is web traffic. Also, even if someone uses the network only for web browsing, DNS is not the only privacy-relevant data that gets exchanged outside the HTTPS connection.

You are just shifting the trust from your ISP to the people that run the VPN.

Some people have reason to distrust their ISP more than their VPN provider, so this is a valid use case.

VPN isn’t really comparable to HTTPS. The former protects all traffic, and with a relatively small attack surface, but only up to the VPN edge. The latter protects all the way to the network peer (the web server), but only web traffic, and with a massive attack surface: scores of certificate authorities in countries all over the world, any of which could be compromised to nullify the protection. They address different problems.


In other words, it’s the same effect as when you make separate identities to share with different contacts on any messaging service. SimpleX has adopted that as the normal way to operate.


Worth mentioning just in case you’re not aware: versioning is present not just on the protocol spec, but on individual rooms. That ought to ease any semantics changes that might be needed.
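For context, each room pins its version in its m.room.create event, so existing rooms keep their semantics while new rooms can opt into newer behavior. A sketch of what a client sees (field names per the Matrix spec; the values are illustrative):

```json
{
  "type": "m.room.create",
  "sender": "@alice:example.org",
  "content": {
    "room_version": "10"
  }
}
```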


I think this could use some elaboration on what you mean by half dead.


I don’t remember the statement in the bug report verbatim, but it indicated that they intend to fix it, which is about what I had previously seen on other issues that they did subsequently fix. I expect it’s mainly a matter of prioritizing a long to-do list.

I can’t think of a reason why it wouldn’t be possible. The protocol is continually evolving, after all, and they already moved message content to an encrypted channel that didn’t originally exist. Moving other events into it seems like a perfectly sensible next step in that direction.


There are a few that do a good job of protecting our messages with end-to-end encryption, but no single one fits all use cases beyond that, so we have to prioritize our needs.

Signal is pretty okayish at meta-data protection (at the application level), but has a single point of failure/monitoring, requires linking a phone number to your account, can’t be self-hosted in any useful way, and is (practically speaking) bound to services run by privacy invaders like Google.

Matrix is decentralized, self-hostable, anonymous, and has good multi-device support, but hasn’t yet moved certain meta-data into the encrypted channel.

SimpleX makes it relatively easy to avoid revealing a single user ID to multiple contacts (queue IDs are user IDs despite the misleading marketing) and plans to implement multi-hop routing to protect meta-data better than Signal can (is this implemented yet?), but lacks multi-device support, lacks group calls, drops messages if they’re not retrieved within 3 weeks, and has an unclear future because it depends on venture capital to operate and to continue development.

I use Matrix because it has the features that I and my contacts expect, and can route around system failures, attacks, and government interference. This means it will still operate even if political and financial landscapes change, so I can count on at least some of my social network remaining intact for a long time to come, rather than having to ask everyone to adopt a new messenger again at some point. For my use case, these things are more important than hiding which accounts are talking to each other, so it’s a tradeoff that makes sense for me. (Also, Matrix has acknowledged the meta-data problem and indicated that they want to fix it eventually.)

Some people have different use cases, though. Notably, whistleblowers and journalists whose safety depends on hiding who they’re talking to should prioritize meta-data protection over things like multi-device support and long-term network resilience, and should avoid linking identifying info like a phone number to their account.


So you are basically saying that root CAs are unreliable or compromised?

Not exactly. They are pointing out that HTTPS assumes all is well if it sees a certificate from any “trusted” certificate authority. Browsers typically trust dozens of CAs (nearly 80 for Firefox) from jurisdictions all over the world. Anyone with sufficient access to any of them can forge a certificate. That access might come from a hack, a rogue employee, government pressure, a bug, improperly handled backups, or various other means. It can happen, has happened, and will happen again.

HTTPS is kind of mostly good enough for general use, since exploits are not so common as to make it useless, but if a government sees it as an obstacle, all bets are off. It is not comparable to a trustworthy VPN hosted outside of the government’s reach.

Also, HTTPS doesn’t cover all traffic like a properly configured VPN does. Even where it is used and not compromised, it’s not difficult for a well positioned snooper (like an internet provider that has to answer to government) to follow your traffic on the net and deduce what you’re doing.


If you care about keeping your domain enough that you don’t want there to be an excuse for someone to take it from you, then you use your real info, and choose a registrar that only exposes a proxy contact in your WHOIS entry.

If you don’t care about losing your domain, then you can use fake contact info.


All desktop environments are fancy compared to a simple window manager.


The unfortunately paradoxical thing about opt-out services is that using them requires giving out your details, and hoping that they aren’t (deliberately or accidentally) leaked.


CoreLogic defended its practices as legal, saying it’s too difficult to verify consent or anonymise personal data.

And this is what needs changing. It should not be legal for them to have it, nor for anyone to give it to them, in the first place.



“Feel,” “happy,” “comfortable”… Privacy doesn’t care about your feelings.

The motivation to do the work, spend time learning the risks and available mitigations, disrupt existing social relationships in order to adopt better tools, inconvenience friends and family, partially isolate oneself by avoiding the popular systems… all of these things are part of improving privacy in the real world, and for many people they are fueled by feelings. Don’t discount the human factors just because you can’t quantify them.


  1. distributed server network controlled by many entities (resilience)

It only fully meets the first criterion, yes. But personally I give it a bit of credit for the second too, in that it belongs to a non-profit foundation with multiple stakeholders, somewhat like Wikimedia.

These two things are not at all equivalent, or even comparable.


Signal is not my tool of choice, so I’ll answer from a more general perspective:

Having multiple friends and social groups on an e2ee chat system for the past few years feels great. Knowing that our words aren’t being recorded and exploited by half a dozen companies, we no longer feel the need to self-censor. The depth and value of our online conversations have grown noticeably.

Yes, there is more work to do, both at the endpoints and in the protocols. No, not all of us have flipped all the switches to maximize our privacy yet. That’s okay. Migrating is a gradual process. We do it together, helping each other along the way, rather than trying to force it all at once. Every step an improvement.


I think how often this is a problem varies widely from person to person. I don’t remember the last time I gave a mobile number out to a company, but it was more than a few years ago. The last few that strictly required one were non-essential; I just took my business elsewhere.


many results say to install custom ROMs which I can’t since its a US model and the bootloader is locked.

Are you sure it can’t be unlocked?

https://xdaforums.com/t/guide-to-root-galaxy-s22-plus-b-e-n-0-unlock-bootloader-and-flash-official-firmware-noob-friendly.4404351/

Many phones that don’t officially support unlocking can be exploited to do so anyway. Some will lose relatively minor functionality in the process (camera enhancements were lost on mine, but the camera still works fine) but the tradeoff is often worth it.


Is it true that Telegram doesn’t encrypt group chats at all? Maybe that would get their attention?

My biggest criticism of Telegram (but not the only one) is that they use homebrew crypto. Of course, I don’t know if your family would understand why that’s bad.



What are you on about?

When legislation aiming to restrict people’s rights fails to pass, it is very common for legislators/governments to try again shortly thereafter, and then again, and again, until some version of it eventually does pass. With each revision, some wording might be replaced, or weak assurances added, or the most obvious targets changed to placate the loudest critics. It might be broken up into several parts, to be proposed separately over time. But the overall goal remains the same. This practice is (part of) why vigilance and voting are so important in democracies.

There’s nothing “deep state” about it. It’s plainly visible, on the record, and easily verifiable.

As someone who knows two people that worked for the Swiss government closely

This is an appeal to authority (please look it up) and a laughably weak one at that.

There is no big plan to weaken encryption or anything.

You obviously have not been keeping up with events surrounding this topic over the past 30 years.


Not against a government that can compel the organizations who issue the https certificates and run the https servers. And not against leaks that occur outside of https.


The Tor network cannot protect against that, because the attack circumvents it. Certain tools, like the Tor browser, do have protection against it (as much as they can) when you use them correctly, but they cannot keep users from inadvertently opening a link in some other tool. Nor can they protect against other software on a user’s device, like a spyware keyboard or the OS provider working with law enforcement.


It would be easy to dismiss the headline’s claim because Telegram’s design makes it arguably not a privacy tool in the first place.

However, it is possible that this arrest was chosen in part for that reason, with the knowledge that privacy and cryptography advocates wouldn’t be so upset by the targeting of a tool that is already weak in those areas. This could be an early step in a plan to gradually normalize outlawing cryptographic tools, piece by piece. (Legislators and spy agencies have demonstrated that they want to do this, after all.) With such an approach, the people affected might not resist much until it’s too late, like boiling the proverbial frog.

Watching from the sidelines, it’s impossible to see the underlying motivations or where this is going. I just hope this doesn’t become case law for eventual use in criminalizing solid cryptography.


It is very important to mention that you mean end-to-end encryption. The data is stored encrypted when using cloud chat.

In response, it is very important to mention that point-to-point encryption and encryption at rest are next to meaningless with respect to the chat participants’ privacy. They might be relevant to the case against Durov, but they don’t protect against leaks or compromised servers. Please don’t rely on them for your safety.


It has been a while since I looked at Wire, and I didn’t look very deep, but here’s what I noted:

Self-hosting was unavailable at the time.

I believe they violated their privacy policy a while back, by accepting new owners/investors without notifying their users. That kind of behavior is telling of what to expect from an organization, and potentially dangerous (depending on your threat model) if you’re trusting them with anything, such as…

I have read complaints that they stored cleartext contact lists on the server, but I haven’t verified this myself. (The first two points were already deal-breakers for me, so I didn’t bother.)




That does seem like a decent workaround for the multi-device problem, if you only communicate in small groups and each member only has a couple of devices. Directly addressing each other could get unwieldy fast as a group (or the number of devices) grows, but I’m guessing you’re not in that situation. Nice work.


I feel like this is being unnecessarily harsh to the majority of potential users.

I don’t know why you would think it harsh to point out shortcomings in software. It’s not a matter of opinion. These limitations exist, plain and simple, and some of them are not easily discovered from a quick visit to the SimpleX home page.

By listing them here, it saves everyone else the time and trouble of having to investigate on their own. (Unless they assume I’m lying or don’t know what I’m talking about, but I can’t help them with that.) It might also save some people from starting to build their network of contacts on a particular messenger, only to later discover a deal-breaking problem and have to start all over, asking all their contacts to switch again.

What would you consider ready for general use?

I can’t make a single suggestion to fit everyone else’s needs, because there is no messenger that addresses everyone’s needs. All of them have different tradeoffs, so we have to prioritize the things we want.

For myself and my contacts, Matrix does all the things we must have: Free, anonymous, good crypto, audited, multi-platform, multi-device, not centralized, self-hostable, reasonably easy to use, and delivers messages (without time limits) even when we’re offline. It even supports some nice extras, like screen sharing and voice calls.

Matrix detractors generally complain about certain metadata not being encrypted, which is technically true: A few things like the usernames that have joined a room, and avatars (if you set one), have not yet been moved to the encrypted channel and can therefore be seen by your homeserver admin. Frankly, it’s not a high enough priority for us to be driven away from a tool that meets our needs. Protecting the content is our priority. We could self-host a server to protect the metadata, but we don’t bother, because it’s not part of our threat model.

Would I recommend Matrix for high-risk work, where state authorities finding out who you’re talking to could threaten your safety? No, at least not in its current state. Communications like that demand very specific protections, and those protections don’t exist in any messenger that has the conveniences and features that I expect from a modern chat service. That’s (one of the reasons) why whistleblowers and targeted journalists turn to special tools. Having a separate tool/platform for high-risk work is fine; giving up features to meet those needs is a perfectly appropriate tradeoff.

But again, that metadata issue is not a risk factor for us. We’re certainly not going to reject a uniquely useful chat platform because of it.

Back to your question:

I don’t post on social media telling everyone to use the same tool I do, because I don’t know everyone’s needs, and I do know that a few people have very specific needs that don’t match mine.

However, it turns out that the vast majority of the people I’ve talked to about this stuff have needs similar to mine, so Matrix (the protocol) often ends up at the top of the list of things to consider.

My main reservation in suggesting Matrix for general use right now is that the official reference clients (they’re called Element on every platform) still have some rough edges. For example, they occasionally send messages that the recipient cannot immediately decrypt without jumping through some troubleshooting hoops, and search isn’t implemented in all clients yet. The underlying bugs have been steadily disappearing, so these issues are becoming less and less common, but since they’re not entirely solved, I use an alternative client and avoid suggesting Matrix to mom and dad for now.

I already use it daily with friends (who I can help if a problem comes up) and people who are comfortable with troubleshooting on their own. It’s visibly moving in the right direction.


Their queue IDs are user IDs. Each one points to a specific user. You can call it a queue ID, or an account ID, or a user ID, or an elephant, but that doesn’t change what it is.

They create a different ID to share with each contact in 1:1 chats, but that doesn’t make them anything less than user IDs. You can do the same thing on any other chat service by creating a different account to reveal to each contact. (This is obviously easier to manage on clients that support multiple accounts, but again, that doesn’t change what the IDs do.)

And in group chats, they don’t even do that; they reveal the same ID to all group members.


On SimpleX? No, there is not. LAN tethering is not multi-device support.