That isn’t what that document says. It says that they can impersonate you in non-E2EE scenarios. The clients I use warn me when a message isn’t properly encrypted so someone without E2EE keys can’t impersonate someone in an E2EE room.
That being said the general concept is a problem. I would love to see progress where all events from a user are signed by a device key and non-forgeable. There is some thinking about this with portable identities (such as MSC2787) where your server is basically just storing and forwarding events but the root of trust is your identity and keys that you control. But none of this will land soon, not for many years.
Probably yes, it depends on your threat model.
If you are using E2EE on a matrix.org account then your message content, attachments (images) and most other traffic aren’t accessible to anyone but the people in the chat. However Matrix isn’t the most private option; it has a number of leaks such as reactions and chat topics (fixes are being worked on but aren’t close to happening).
For most people Matrix is a very private and secure option and the fact that it is federated is a huge plus. If you want something more secure you are probably looking at Signal (which you don’t want to use and isn’t federated) or Simplex Chat (which doesn’t have multi-device support).
require a separate device that looks like a calculator to use online banking
To be fair this actually provides a very high level of security? At least in my experience with AIB (in Ireland) you needed to enter the amount of the transaction and some other core details (maybe part of the recipient’s account number? can’t quite recall). Then you entered your PIN. This signed the transaction, which provides very strong verification that you (via the PIN) authorized the specific transaction via a trusted device that is very unlikely to be compromised (unless you give someone physical access to it).
It is obviously quite inconvenient. But it provides a huge level of security. Unlike this SafetyNet crap which is currently quite easy to bypass.
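To make the transaction-signing idea above concrete, here is a purely hypothetical sketch. It is not the real protocol these readers implement (typically EMV CAP); the key, the PIN handling and the account number are all made up.

```python
import hashlib
import hmac

# Hypothetical sketch only: real card readers implement EMV CAP, this just shows the idea.
card_secret = b"secret key that never leaves the chip card"

def sign_transaction(pin_ok: bool, amount: str, recipient: str) -> str:
    # The PIN only unlocks the card; the code is a MAC over the transaction details,
    # so a tampered amount or recipient produces a different code.
    if not pin_ok:
        raise ValueError("wrong PIN")
    message = f"{amount}|{recipient}".encode()
    return hmac.new(card_secret, message, hashlib.sha256).hexdigest()[:8]

# The short code you copy back into the banking website to authorize the transfer.
print(sign_transaction(True, "250.00", "IE12BOFI90001234567890"))
```

The bank knows the same secret and the same transaction details, so it can recompute the code; a changed amount or recipient simply won’t verify.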
which is supposed to enforce to run apps in secured phones
The point of the Google Play Integrity API is to ensure that the user is not in control of their phone, but that one of a small number of megacorps are in control.
Can the user pull their data out of apps? Not acceptable. Can the user access the app file itself? Not acceptable. Can the user modify apps? Not acceptable.
Basically it ensures that the user has no control over their own computing.
It used to be common and useful. I did this even after Valve shipped a native Linux TF2, as at the beginning the Wine method gave better results on my hardware. But that time has long passed as Valve has integrated Wine (Proton) and in almost all cases the Linux native builds will outperform Wine (and Steam will let you use the Windows version via Proton if you want even if there is a native Linux build).
So while I suspect that there are still a few people doing this out of momentum, habit or reading old tutorials I am not aware of any good reasons to do this anymore.
There are three parts to the whole push system: (1) the interface between the app server and the push server (the WebPush protocol and endpoint), (2) how the client app subscribes and gets that endpoint, and (3) how the push server delivers the message to the app on the device.
My point is that 1 is the core and it is already available across devices, including over Google’s push notification system, and making custom push servers is very easy. It would make sense to keep that interface, but provide alternatives to 2 and 3. This way browsers can use the JS API for 2 and 3, but other apps can use a different API. The push server and the app server can remain identical across browsers, apps and anything else. This provides compatibility with the currently reigning system, the ability to provide tiny shims for people who don’t want to self-host, and still maintains the option to fully self-host as desired.
IMHO UnifiedPush is just a poor re-implementation of WebPush, which is an open and distributed standard that supports (and in the browser requires, so support is universal) E2EE.
UnifiedPush would be better as a framework for WebPush providers and a client API, but using the same protocol and backends as WebPush (how to get a WebPush endpoint is defined as a JS API in browsers, which would need to be adapted).
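For a sense of how simple the shared app-server side already is, here is a rough sketch assuming the pywebpush library; the endpoint, keys, key file and contact address are all placeholders you would get from a real subscription.

```python
from pywebpush import webpush

# Placeholder subscription, as a client (browser or a UnifiedPush-style app) would hand it back.
subscription = {
    "endpoint": "https://push.example.com/send/abc123",
    "keys": {"p256dh": "<client public key>", "auth": "<client auth secret>"},
}

# pywebpush handles the RFC 8291 payload encryption and VAPID authentication,
# so the push server never sees the plaintext.
webpush(
    subscription_info=subscription,
    data="You have a new message",
    vapid_private_key="vapid_private_key.pem",
    vapid_claims={"sub": "mailto:admin@example.com"},
)
```

The same request works whether the endpoint is run by Mozilla, Google or a self-hosted server; only parts 2 and 3 would need non-browser equivalents.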
I mean it is always better to have more open source. But the point of the multi-hop system is that you don’t need to trust the server. Even if the server was open source you couldn’t verify what code it is actually running.
The open source client is enough to verify this and the security of the whole scheme.
Yeah, I can’t believe how hard it is to target other consoles, for basically no reason. I love this Godot page that accurately showcases the difference:
https://docs.godotengine.org/en/stable/tutorials/platform/consoles.html
Currently, the only console Godot officially supports is Steam Deck (through the official Linux export templates).
The reasons other consoles are not officially supported are:
- To develop for consoles, one must be licensed as a company. As an open source project, Godot has no legal structure to provide console ports.
- Console SDKs are secret and covered by non-disclosure agreements. Even if we could get access to them, we could not publish the platform-specific code under an open source license.
Who at these console companies thinks that making it hard to develop software for them is beneficial? It’s not like the SDK APIs are actually technologically interesting in any way (maybe some early consoles were, the last “interesting” hardware is probably the PS2). Even if the APIs were open source (the signatures, not the implementation) every console has DRM to prevent running unsigned games, so it wouldn’t allow people to distribute games outside of the console maker’s control (other than on modded systems).
So to develop for the Steam Deck:
To develop for Switch (or any other locked-down console):
What it could be (after you register with Nintendo to get access to the SDK download):
All they need to do is grant an open source license on the API headers. All the rest is done for them and magically they have more games on their platform.
Mullvad is one of the best options if you care about privacy. They take privacy seriously, both on their side and pushing users towards private options. They also support fully anonymous payments. Their price is also incredibly reasonable.
I’m actually working on a VPN product as well. It is a multi-hop system so that we can’t track you. But it isn’t publicly available yet, so in the meantime I happily recommend Mullvad.
These are all good points. This is why it is important to match your recommendations to the person. For example if I know they have Chrome and a Google account I might just recommend using that. Yes, it isn’t end-to-end encrypted and Google isn’t great for privacy but at least they are already managing logins over all of their devices.
In many cases perfect is the enemy of better. I would rather them use any password manager and unique passwords (even “a text file on their desktop”) than them sticking to one password anywhere because other solutions are too complicated.
It depends on your threat model. It does mostly reduce the benefit of 2FA, but you are probably still very safe if you use a random password per site. I mostly use 2FA only when forced (other than for a few high-value accounts) so I don’t worry about it. For most people, having a random password which is auto-filled so that you don’t type it into the wrong site is more than sufficient to keep them secure.
Honestly nothing. I recommend this to everyone because it is the easiest way to get set up and offers huge advantages.
I think these are the two biggest benefits and every browser password manager will accomplish both.
These are real issues however they are pretty easy to mitigate, and I would say that the upsides of a password manager far outweigh the downsides.
Make sure that you are regularly typing your master password for the first bit. After that you’ll never forget it. You can also help them out by saving a copy of their master password for them, at least until they are sure they have memorized it. There are also password managers where you can recover your account as long as you have the keys cached on at least one device.
This is far, far outweighed by the risk of password reuse. When a single one of the sites you use gets hacked, people will take that credential list and try it on every other site. So with a password manager there is just one target; without one it is any of the hundreds of sites where you reused your password. Many password managers also use end-to-end encryption, so a breach of the sync service doesn’t expose your passwords (it never has access to them without your master password).
Basically they license out the system to companies. You can get a rough idea here: https://what3words.com/business
The idea is that by making it free to individuals they build up market familiarity and expectation. Free personal use is just marketing for the paid product. Then they can turn to businesses and convince them that they should offer their system as a service and charge them for it.
The closest alternative is probably Plus Codes. They are driven by Google but are free to use for everything with a pretty plain and simple Terms of Use.
Instead of words they use an alphanumeric encoding. The main downside is that this can be less memorable, but the upside is that it works for users of all languages, and you can shorten the codes by using a country or city reference as well as control the precision.
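As a rough sketch of that shortening, assuming the openlocationcode Python package from Google’s open-location-code repo (the coordinates are just an example for downtown Toronto):

```python
from openlocationcode import openlocationcode as olc

# Example coordinates for downtown Toronto.
lat, lng = 43.6532, -79.3832

full_code = olc.encode(lat, lng)               # full, globally unique code
short_code = olc.shorten(full_code, lat, lng)  # shortened against a nearby reference point
print(full_code, short_code)

# A short code plus a reference location (e.g. the city) expands back to the full code.
print(olc.recoverNearest(short_code, lat, lng))
```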
what3words is proprietary and the owner is profit-hungry and litigious; I would recommend avoiding it.
Some basic info: https://en.wikipedia.org/wiki/What3words#Proprietary
The best option is probably using a geo: URL. This should open on all devices in their favourite mapping application. Example. If you want to link to a specific store or similar beyond just a location you can add a “query” which some apps will use to highlight that. Example.
Another decent option is Plus Codes. These are a bit shorter and easier to manage but lack a URL format as far as I can tell. MJ75+P3 Toronto, Ontario.
You can also just link to an alternative service such as Open Street Maps. This avoids Google but still imposes a particular service on others.
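For reference, the geo: links mentioned above are trivial to build by hand; a small sketch (the coordinates and label are just examples):

```python
from urllib.parse import quote

lat, lng = 43.6532, -79.3832

# Plain location link (RFC 5870); most phones open this in the default maps app.
print(f"geo:{lat},{lng}")

# With a query/label that some apps will use to highlight a specific place.
print(f"geo:{lat},{lng}?q={quote('Example Store')}")
```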
I regularly consider doing this. Obviously it is great from a privacy perspective. But I hate dealing with cash, especially change. With cards I just have one thing in my wallet and it just works forever. My bank account is automatically charged at the end of the month. With cash I need to keep refilling my wallet and carry around annoying change.
I would love to have something digital but also private (like Monero). But so far I have been picking convenience over privacy.
This is sort of a scam though. Credit cards give rewards, but then charge the business for the processing fees. So the business needs to raise prices to cover the fees. So really no one is getting that 2% except for the card network. And if you don’t use a card you lose 2%.
It is basically a protection racket. “It would be a shame if you didn’t use our credit card and had to pay 2% more everywhere”
Yes, I know it is complicated. Handling cash also costs non-trivial amounts. I know that the EU has limits on fees (and that is why basically no credit cards have rewards there). I also know that some businesses see the fee as more of a marketing cost because higher spenders tend to use cards and people tend to spend more on cards.
I found https://github.com/cyrinux/push2talk implements this idea for proper PTT on all apps.
Instead of per-app PTT, you may consider some software that mutes your mic for all apps and use that as the PTT mechanism, then just leave the mic “active” in each app.
I don’t know of a tool that will do this, but on my mouse I have configured a mic mute toggle. So I push to start and stop. However technically I don’t think there is any restriction preventing you from setting up proper PTT via this mechanism.
It depends a lot on the hash function. Lots of hashes are believed to be difficult to parallelize on GPUs, and memory-hard hash functions have different scaling properties. But even then you need to assume that an adversary has lots of computing power and a decent amount of time. These can all be estimated, then you give yourself a wide margin.
Yeah, but my point is that I use my master password enough that random characters are still memorable while being faster to type. For me personally there isn’t really a use case where the easier memorability is worth the extra characters to type. But of course everyone is different, so it is good that this system is laid out for them with a great guide.
Yeah, that is what I meant by “strength of the hash”. Probably should have been more clear. Basically the amount of resources it takes to calculate the hash will have to be spent by the attacker for each guess they make. So if it takes 1s and 100MiB of RAM to decrypt your disk it will take the attacker roughly 1s and 100MiB of RAM for each guess. (Of course CPUs will get faster and RAM will get cheaper, but you can make conservative estimates for how long you need your password to be secure.)
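As a rough illustration of that per-guess cost with a memory-hard KDF, here is a sketch using Python’s built-in scrypt (the parameters are examples, not a recommendation):

```python
import hashlib
import os
import time

password = b"correct horse battery staple"
salt = os.urandom(16)

# With n=2**17 and r=8, scrypt needs roughly 128 * r * n bytes, i.e. about 128 MiB per guess.
start = time.time()
key = hashlib.scrypt(password, salt=salt, n=2**17, r=8, p=1, maxmem=2**28, dklen=32)
print(f"one guess took {time.time() - start:.2f}s and ~128 MiB of RAM")
```

An attacker has to pay roughly that same time and memory for every single guess they make.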
It is a good technique to be sure, but I haven’t found it useful in my everyday life. In practice 99% of my passwords are stored in my password manager. I only remember like 3 passwords myself. For those I want them to be easy to type as I do it semi-regularly (whenever I turn on my computer or phone, my phone sometimes re-verifies, …). These may be slightly easier to remember but end up being much longer. I find that I don’t have issues remembering the 3 passwords that I actually regularly type.
In fact I recently switched my computer passwords to be all lowercase, just to make it easier to type. I’ve offset this reduced entropy by making them longer (basically shift+key is similar entropy to key+key and easier to type, especially on phones or on-screen keyboards).
The recommended 6 words produces incredibly strong passwords. The equivalent with all lowercase would be 16.5 characters. Personally I went for 14 characters and in my threat model that is very, very secure. But this will also depend on your attack model. If it is a disk encryption password or another case where you expect that the attacker can get the hash then it will depend on the strength of the hash and the possible attacker’s computing power. If it is protected by an HSM that you trust you can get away with short PINs because they have strict rate limits. Any decent online service should also have login rate limits, reducing the required entropy (unless they leak the hash without resetting passwords, then see the above point where the attacker gets the hash). All of my memorized passwords fall into the category of needing very strong security but I still found that remembering a random-character password only took about a week when entering it once a day.
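The arithmetic behind those numbers, as a quick check (assuming the 7776-word EFF-style list):

```python
import math

words = 7776                               # 6^5 words on the dice list
bits_per_word = math.log2(words)           # ~12.9 bits per word
print(6 * bits_per_word)                   # ~77.5 bits for six words
print(6 * bits_per_word / math.log2(26))   # ~16.5 equivalent random lowercase characters
print(14 * math.log2(26))                  # ~65.8 bits for 14 random lowercase characters
```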
Technically yes. But the method is by far strong enough that this isn’t an issue. This is sort of always the issue with calculating entropy. We say that `password` has less entropy than `8(A>Ni'[`. But that is baking in assumptions about the search space. If `password` is a randomly generated string of lower, upper, numbers and symbols it is just as secure as the latter (80^8 ≈ 10^15 candidates), but if it was generated as just lowercase characters it is far less secure (26^8 ≈ 10^11 candidates), but if it was a random dictionary word it is not very secure at all (≈ 10^5 candidates), and if it was chosen as one of the most popular passwords it is even less secure. How can one password have different entropy?
The answer is basically it matters how the attacker searches. But in practice the attacker will search the more likely smaller sets first, then expand to the larger. So the added time to search the smaller sets is effectively negligible.
What may be more useful is the “worst case” entropy. Basically the attacker knows exactly what set you picked. For the `password` case that is 1, because I just picked the most common password. For the rolling method described above it is 7776^6 ≈ 10^23, because even if they know the word list they don’t know the rolls. You may be able to go slightly higher by building your own word list, but the gains will probably be fairly small and you would likely get far more value just by rolling one more word on the existing list than spending the time to generate your own.
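To put that 10^23 in perspective (the attacker speed below is a made-up, purely illustrative number):

```python
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

candidates = 7776 ** 6         # ~2.2e23 possible six-word passphrases
guesses_per_second = 10 ** 10  # made-up attacker speed, purely illustrative

years = candidates / guesses_per_second / SECONDS_PER_YEAR
print(f"{years:.1e} years to exhaust the whole space")  # ~7e5 years
```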
select a set of words from the list
I would be very careful doing this. It is very easy to introduce significant bias. Humans are terrible at picking random numbers.
If you can’t find dice I would recommend using a cryptographically secure random number generator to pick the words instead of choosing them yourself.
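A minimal sketch of that without dice, assuming the EFF large word list saved locally as eff_large_wordlist.txt (the filename is just an example):

```python
import secrets

# Each line of the EFF list is "<dice digits>\t<word>"; keep just the word.
with open("eff_large_wordlist.txt") as f:
    wordlist = [line.split()[-1] for line in f if line.strip()]

# secrets uses the OS CSPRNG, so there is no human bias in the selection.
passphrase = " ".join(secrets.choice(wordlist) for _ in range(6))
print(passphrase)
```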
I don’t know about YouTube but the chunks are often a fixed length, for example 1 or 2 seconds. So as long as the ad itself is a whole number of chunks long (which YouTube can require, or just pad the ad to the nearest second) there is no concrete difference between the 1s “content” chunks and the 1s “ad” chunks.
If you are trying to predict the ad chunks you are probably better off doing things like detecting sudden loudness changes, different colour tones or similar. But this will always be imperfect as these could just be scene changes that happened to be chunk aligned in the content.
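As a toy sketch of the loudness idea (all the numbers and the threshold are made up):

```python
import numpy as np

# Hypothetical per-chunk RMS loudness, one value per 1s chunk.
loudness = np.array([0.21, 0.22, 0.20, 0.55, 0.57, 0.56, 0.23, 0.22])

# Flag boundaries where loudness jumps sharply -- could be an ad start/end, or just a scene change.
jumps = np.abs(np.diff(loudness)) > 0.25
print(np.nonzero(jumps)[0] + 1)  # indices of chunks that begin right after a big jump
```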
It’s definitely an option. It will do the things that you want (as long as your phone is online, but that is the same for any other solution).
Yes, this is because Beeper converts the Signal protocol to the Matrix protocol and vice versa. In order to do this it needs to access the messages. So it needs to decrypt the messages, then re-encrypt them on the other side. This means that the bridge (in this case operated by Beeper) has access to your messages. This is often referred to as “end-to-bridge” encryption, as it isn’t end-to-end anymore.
This is going to be true of any bridge you use that is hosted by a third party. You are always adding one additional trusted party into your communication.
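Conceptually the bridge is doing something like the following. This is a toy sketch using Fernet as a stand-in; it is not the real Signal or Megolm crypto, it just shows why the bridge necessarily sees plaintext.

```python
from cryptography.fernet import Fernet

# Toy stand-in keys, one per side of the bridge.
signal_side = Fernet(Fernet.generate_key())  # "key" shared with the Signal contact
matrix_side = Fernet(Fernet.generate_key())  # "key" shared with the Matrix room

def bridge(incoming: bytes) -> bytes:
    plaintext = signal_side.decrypt(incoming)  # the bridge sees the plaintext here
    return matrix_side.encrypt(plaintext)      # then re-encrypts it for the other network

message = signal_side.encrypt(b"hello from Signal")
print(matrix_side.decrypt(bridge(message)))
```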
Yes, to practically operate a bridge you need your own Matrix server. This is because the bridge will create a new Matrix user for every remote participant (every phone number you communicate with in this case). Doing this with regular mechanisms would be difficult (as signup is likely restricted in some ways) and inefficient (as each account would need to be checked for new messages separately). Beeper runs their own homeserver so that they can operate their bridges. However Beeper’s bridges are only available to users on the same homeserver (this is not a protocol limitation, just their choice). So in order to use their bridges you need to make an account with them (which you can, it is free IIUC). Beeper also offers custom clients which have special features for interacting with their bridges (for example making it easier to start a conversation with a new phone number).
The alternative would be to run your own server and bridge (or hire someone to do it on your behalf).