It used to be common and useful. I did this even after Valve shipped a native Linux TF2, because at first the Wine method gave better results on my hardware. But that time has long passed: Valve has integrated Wine (as Proton), and in almost all cases the native Linux builds will outperform Wine (and Steam will let you use the Windows version via Proton if you want, even if there is a native Linux build).
So while I suspect that there are still a few people doing this out of momentum, habit, or old tutorials, I am not aware of any good reasons to do this anymore.
There are three parts to the whole push system:

1. The protocol that the app server uses to submit messages to the push server.
2. The API that the client uses to register and get a push endpoint.
3. The API that the client uses to receive the pushed messages.
My point is that 1 is the core and already available across devices including over Google’s push notification system and making custom push servers is very easy. It would make sense to keep that interface, but provide alternatives to 2 and 3. This way browsers can use the JS API for 2 and 3, but other apps can use a different API. The push server and the app server can remain identical across browsers, apps and anything else. This provides compatibility with the currently reigning system, the ability to provide tiny shims for people who don’t want to self host and still maintains the option to fully self host as desired.
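As a sketch of what part 1 looks like on the wire: it is standardized as a plain HTTP POST in RFC 8030. The endpoint URL and payload below are placeholders; a real sender must also encrypt the payload (RFC 8291) and usually attach a VAPID token (RFC 8292).

```python
# Sketch of an RFC 8030 push message submission. The endpoint URL is
# hypothetical; real payloads must be encrypted per RFC 8291 first.
def build_push_request(endpoint: str, encrypted_payload: bytes, ttl: int = 60) -> dict:
    return {
        "method": "POST",
        "url": endpoint,  # opaque URL handed out by the push server
        "headers": {
            "TTL": str(ttl),  # how long the push server may queue the message
            "Content-Encoding": "aes128gcm",  # RFC 8291 payload encryption
        },
        "body": encrypted_payload,
    }

req = build_push_request("https://push.example.com/s/abc123", b"...ciphertext...")
```

Because this part is just HTTP, the same app server can push to browsers, native apps, or self-hosted receivers without caring which one owns the endpoint.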
IMHO UnifiedPush is just a poor re-implementation of WebPush, which is an open and distributed standard that supports E2EE (and in the browser requires it, so support is universal).
UnifiedPush would be better as a framework for WebPush providers and a client API, while using the same protocol and backends as WebPush (though since getting a WebPush endpoint is defined as a JS API in browsers, that part would need to be adapted).
I mean it is always better to have more open source. But the point of the multi-hop system is that you don’t need to trust the server. Even if the server was open source:
The open source client is enough to verify this and the security of the whole scheme.
Yeah, I can’t believe how hard targeting other consoles is for basically no reason. I love this Godot page that accurately showcases the difference:
https://docs.godotengine.org/en/stable/tutorials/platform/consoles.html
Currently, the only console Godot officially supports is Steam Deck (through the official Linux export templates).
The reasons other consoles are not officially supported are:
- To develop for consoles, one must be licensed as a company. As an open source project, Godot has no legal structure to provide console ports.
- Console SDKs are secret and covered by non-disclosure agreements. Even if we could get access to them, we could not publish the platform-specific code under an open source license.
Who at these console companies thinks that making it hard to develop software for them is beneficial? It’s not like the SDK APIs are actually technologically interesting in any way (maybe some early consoles were; the last “interesting” hardware is probably the PS2). Even if the APIs were open source (the signatures, not the implementation), every console has DRM to prevent running unsigned games, so it wouldn’t allow people to distribute games outside of the console maker’s control (other than on modded systems).
So to develop for the Steam Deck:
To develop for Switch (or any other locked-down console):
What it could be (after you register with Nintendo to get access to the SDK download):
All they need to do is grant an open source license on the API headers. All the rest is done for them and magically they have more games on their platform.
Mullvad is one of the best options if you care about privacy. They take privacy seriously, both on their side and pushing users towards private options. They also support fully anonymous payments. Their price is also incredibly reasonable.
I’m actually working on a VPN product as well. It is a multi-hop system so that we can’t track you. But it isn’t publicly available yet, so in the meantime I happily recommend Mullvad.
These are all good points. This is why it is important to match your recommendations to the person. For example, if I know they have Chrome and a Google account I might just recommend using that. Yes, it isn’t end-to-end encrypted and Google isn’t great for privacy, but at least they are already managing logins across all of their devices.
In many cases perfect is the enemy of better. I would rather they use any password manager and unique passwords (even “a text file on their desktop”) than stick to one password everywhere because other solutions are too complicated.
It depends on your threat model. It does mostly reduce the benefit of 2FA, but you are probably still very safe if you use a random password per site. I mostly use 2FA only when forced (other than for a few high-value accounts), so I don’t worry about it. For most people, having a random password which is auto-filled (so that you don’t type it into the wrong site) is more than sufficient to stay secure.
Honestly nothing. I recommend this to everyone because it is the easiest way to set up and offers huge advantages.
I think these are the two biggest benefits and every browser password manager will accomplish both.
These are real issues however they are pretty easy to mitigate, and I would say that the upsides of a password manager far outweigh the downsides.
Make sure that you are regularly typing your master password for the first while. After that you’ll never forget it. You can also help them out by saving a copy of their master password for them, at least until they are sure they have memorized it. There are also password managers where you can recover your account as long as you have the keys cached on at least one device.
This is far, far outweighed by the risk of password reuse, because when a single one of the sites you use gets hacked, people will take that credential list and try it on every other site. So with a password manager there is just one target; without one, it is one of the hundreds of sites where you reused your password. Many password managers also use end-to-end encryption, so even if the sync service is hacked your passwords are safe (the service never has access to them).
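To illustrate why an end-to-end encrypted sync service has nothing useful to steal, here is a minimal sketch of the general idea (not any particular manager's actual scheme): the encryption key is derived from the master password on the client, so the server only ever stores ciphertext. The iteration count here is illustrative.

```python
import hashlib
import os

def derive_key(master_password: str, salt: bytes) -> bytes:
    # A slow, salted key derivation: guessing the master password costs
    # the attacker 600,000 hash iterations per attempt. The derived key
    # encrypts the vault client-side; the server never sees it.
    return hashlib.pbkdf2_hmac("sha256", master_password.encode(), salt, 600_000)

salt = os.urandom(16)
key = derive_key("correct horse battery staple", salt)
```

The important property is that the derivation happens on the client: the sync service stores only the salt and the encrypted vault.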
Basically they license out the system to companies. You can get a rough idea here: https://what3words.com/business
The idea is that by making it free to individuals they build up market familiarity and expectation. Free personal use is just marketing for the paid product. Then they can turn to businesses and convince them that they should offer their system as a service and charge them for it.
The closest alternative is probably Plus Codes. They are driven by Google but are free to use for everything with a pretty plain and simple Terms of Use.
Instead of words they use an alphanumeric encoding. The main downside is that this can be less memorable, but the upsides are that it works for users of all languages, you can shorten the codes by using a country or city reference, and you can control the precision.
what3words is proprietary and the owner is profit-hungry and litigious; I would recommend avoiding it.
Some basic info: https://en.wikipedia.org/wiki/What3words#Proprietary
The best option is probably using a geo: URL. This should open on all devices in the user’s favourite mapping application. Example. If you want to link to a specific store or similar beyond just a location, you can add a “query”, which some apps will use to highlight that. Example.
Another decent option is Plus Codes. These are a bit shorter and easier to manage but lack a URL format as far as I can tell. For example: MJ75+P3 Toronto, Ontario.
You can also just link to an alternative service such as OpenStreetMap. This avoids Google but still imposes a particular service on others.
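A geo: URI (RFC 5870) is just latitude and longitude; the q= query form used for highlighting a named place is a widely supported convention popularized by Android rather than part of the RFC. A small sketch of building both forms (the coordinates and label are arbitrary examples):

```python
from urllib.parse import quote

def geo_uri(lat: float, lon: float, query: str = "") -> str:
    # RFC 5870 geo: URI. The optional ?q= form is an Android-style
    # extension that many map apps use to search for a named place.
    uri = f"geo:{lat},{lon}"
    if query:
        uri += f"?q={quote(query)}"
    return uri

plain = geo_uri(43.6532, -79.3832)               # just a location
labeled = geo_uri(43.6532, -79.3832, "City Hall")  # with a highlighted query
```

Either string can be used as a regular hyperlink; devices hand it to whatever mapping app the user prefers.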
I regularly consider doing this. Obviously it is great from a privacy perspective. But I hate dealing with cash, especially change. With cards I just have one thing in my wallet and it just works forever. My bank account is automatically charged at the end of the month. With cash I need to keep refilling my wallet and carry around annoying change.
I would love to have something digital but also private (like Monero). But so far I have been picking convenience over privacy.
This is sort of a scam though. Credit cards give rewards but then charge the business processing fees, so the business needs to raise prices to cover them. So really no one is getting that 2% except the card network, and if you don’t use a card you lose 2%.
It is basically a protection racket. “It would be a shame if you didn’t use our credit card and had to pay 2% more everywhere”
Yes, I know it is complicated. Handling cash also costs non-trivial amounts. I know that the EU has limits on fees (which is why basically no credit cards have rewards there). I also know that some businesses see the fee as more of a marketing cost, because higher spenders tend to use cards and people tend to spend more on cards.
I found that https://github.com/cyrinux/push2talk implements this idea, giving proper PTT across all apps.
Instead of setting up system-wide PTT per app, you might consider software that mutes your mic for all apps as PTT, and then just leave the mic “active” in each app.
I don’t know of a tool that will do this, but on my mouse I have configured a mic mute toggle, so I push to start and stop. That said, I don’t think there is any technical restriction to setting up PTT via this mechanism.
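For example, assuming PipeWire/PulseAudio and the Sway compositor, a pair of keybindings can turn any key into a global push-to-talk switch by toggling the default source’s mute state. This is a hypothetical config fragment, not a tested setup; the key choice is arbitrary:

```
# Hypothetical Sway config fragment: hold F9 to talk, release to mute.
bindsym --no-repeat F9 exec pactl set-source-mute @DEFAULT_SOURCE@ 0
bindsym --release F9 exec pactl set-source-mute @DEFAULT_SOURCE@ 1
```

Because the mute happens at the audio-server level, every application sees the same muted source, so no per-app configuration is needed.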
It depends a lot on the hash function. Lots of hashes are believed to be difficult to parallelize on GPUs, and memory-hard hash functions have different scaling properties. But even then you need to assume that an adversary has lots of computing power and a decent amount of time. These can all be estimated, and then you give yourself a wide margin.
Yeah, but my point is that I use my master password enough that random characters are still memorable while being faster to type. For me personally there isn’t really a use case where the easier memorability is worth the extra characters to type. But of course everyone is different, so it is good that this system is laid out for them with a great guide.
Yeah, that is what I meant by “strength of the hash”. Probably should have been more clear. Basically the amount of resources it takes to calculate the hash will have to be spent by the attacker for each guess they make. So if it takes 1s and 100MiB of RAM to decrypt your disk it will take the attacker roughly 1s and 100MiB of RAM for each guess. (Of course CPUs will get faster and RAM will get cheaper, but you can make conservative estimates for how long you need your password to be secure.)
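As a concrete illustration of “the attacker pays the same cost per guess”, Python’s hashlib.scrypt exposes memory-hardness parameters directly. The parameters below are illustrative, not a recommendation: n=2**14, r=8 requires roughly 16 MiB of RAM per evaluation, and an attacker must spend that RAM and time for every single guess.

```python
import hashlib
import os
import time

salt = os.urandom(16)

# scrypt cost parameters: n (CPU/memory cost), r (block size), p (parallelism).
# Memory use is about 128 * n * r bytes = ~16 MiB here, per evaluation.
start = time.perf_counter()
key = hashlib.scrypt(b"hunter2", salt=salt, n=2**14, r=8, p=1, dklen=32)
elapsed = time.perf_counter() - start
```

Tuning n upward makes each legitimate login slower but multiplies the attacker’s total cost by the same factor.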
It is a good technique to be sure, but I haven’t found it useful in my everyday life. In practice 99% of my passwords are stored in my password manager. I only remember like 3 passwords myself. For those I want them to be easy to type as I do it semi-regularly (whenever I turn on my computer or phone, my phone sometimes re-verifies, …). These may be slightly easier to remember but end up being much longer. I find that I don’t have issues remembering the 3 passwords that I actually regularly type.
In fact I recently switched my computer passwords to all lowercase, just to make them easier to type. I’ve offset the reduced per-character entropy by making them longer (two lowercase keys give at least as much entropy as one shifted key, and are easier to type, especially on phones or on-screen keyboards).
The recommended 6 words produce incredibly strong passwords. The equivalent with all lowercase would be about 16.5 characters. Personally I went with 14 characters, and in my threat model that is very, very secure. But this will also depend on your attack model. If it is a disk-encryption password or another case where you expect that the attacker can get the hash, it will depend on the strength of the hash and the possible attacker’s computing power. If it is protected by an HSM that you trust, you can get away with a short PIN because of strict rate limits. Any decent online service should also have login rate limits, reducing the required entropy (unless they leak the hash without resetting passwords; then see the above point where the attacker gets the hash). All of my memorized passwords fall into the category of needing very strong security, but I still found that memorizing a random-character password only took about a week when entering it once a day.
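Those numbers can be checked directly: entropy in bits is length × log2(alphabet size), so six words from a 7776-word Diceware-style list and about 16.5 lowercase letters land in the same place.

```python
import math

words = 6 * math.log2(7776)              # six dice-rolled words
lowercase_equiv = words / math.log2(26)  # lowercase letters with equal entropy
fourteen_chars = 14 * math.log2(26)      # the 14-character choice above

print(round(words, 1))            # 77.5 bits
print(round(lowercase_equiv, 1))  # 16.5 characters
print(round(fourteen_chars, 1))   # 65.8 bits
```

So the 14-character lowercase password gives up about 12 bits versus the full six-word phrase, which is the trade-off being described.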
Technically yes. But the method is by far strong enough that this isn’t an issue. This is sort of always the issue with calculating entropy. We say that “password” has less entropy than “8(A>Ni'[”. But that is baking in assumptions about the search space. If “password” is a randomly generated string of lower, upper, numbers and symbols, it is just as secure as the latter (80^8 ≈ 10^15 candidates). But if it was generated as just lowercase characters it is far less secure (26^8 ≈ 10^11 candidates), if it was a random dictionary word it is not very secure at all (≈ 10^5 candidates), and if it was chosen as one of the most popular passwords it is even less secure. How can one password have different entropy?
The answer is basically it matters how the attacker searches. But in practice the attacker will search the more likely smaller sets first, then expand to the larger. So the added time to search the smaller sets is effectively negligible.
What may be more useful is the “worst case” entropy, where the attacker knows exactly what set you picked. For “password” that is 1, because I just picked the most common password. For the rolling method described above it is 7776^6 ≈ 10^23, because even if they know the word list they don’t know the rolls. You may be able to go slightly higher by building your own word list, but the gains will probably be fairly small, and you would likely get far more value by rolling one more word on the existing list than by spending the time to generate your own.
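The search-space sizes quoted above are easy to reproduce, assuming roughly 80 printable characters in the full set, a ~100,000-word English dictionary, and the standard 7776-word Diceware list:

```python
import math

full_charset = 80 ** 8    # random 8 chars from ~80 symbols  (~10^15)
lowercase    = 26 ** 8    # random 8 lowercase letters       (~10^11)
dictionary   = 10 ** 5    # one random English dictionary word
diceware     = 7776 ** 6  # six words rolled from a 7776-word list (~10^23)

for name, n in [("charset", full_charset), ("lowercase", lowercase),
                ("dictionary", dictionary), ("diceware", diceware)]:
    print(f"{name}: 10^{math.log10(n):.1f} candidates")
```

The point about search order falls out of these magnitudes: exhausting the 10^5 dictionary set first adds effectively nothing to the cost of searching the 10^15 set.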
“select a set of words from the list”
I would be very careful doing this. It is very easy to introduce significant bias. Humans are terrible at picking random numbers.
If you can’t find dice I would recommend:
I don’t know about YouTube specifically, but the chunks are often a fixed length, for example 1 or 2 seconds. So as long as the ad itself is a whole number of chunks long (which YouTube can require, or it can just pad the ad out to the chunk boundary), there is no concrete difference between the 1 s “content” chunks and the 1 s “ad” chunks.
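For example, with a hypothetical fixed 2-second segment length (an assumption, not YouTube’s actual value), the server only needs to pad each ad to the next segment boundary for its segments to be indistinguishable from content by duration:

```python
import math

SEGMENT_SECONDS = 2  # hypothetical fixed chunk duration

def padded_ad_length(ad_seconds: float) -> int:
    # Pad the ad up to the next whole segment boundary so that its
    # chunks have exactly the same duration as content chunks.
    segments = math.ceil(ad_seconds / SEGMENT_SECONDS)
    return segments * SEGMENT_SECONDS

padded_ad_length(17.3)  # 18 seconds, i.e. 9 segments of 2 s each
```

After padding, a blocker looking only at segment timing has nothing to key on.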
If you are trying to predict the ad chunks you are probably better off doing things like detecting sudden loudness changes, different colour tones or similar. But this will always be imperfect as these could just be scene changes that happened to be chunk aligned in the content.
I feel the same way. Or felt. It is a wonderful platform that will let anyone upload and share videos at absolutely no cost. Video hosting isn’t as expensive as we are often led to believe, but it isn’t cheap, especially if you want to provide a great experience with different resolutions and qualities.
I used to subscribe to YouTube Premium and was quite happy about it. However they slowly made the platform worse and worse. At some point it hurt to give them money, even if the subscription was “worth it”. I just didn’t like giving money to people destroying a great platform.
Luckily YouTube still supports RSS. This means that I can easily mix in other video platforms with no bother. I now subscribe to Nebula and have 35 subscriptions there. I also have a handful of PeerTube, video podcast and other self-hosted creators. It isn’t the “majority” of my subscriptions (apparently I subscribe to ~200 YouTube channels, but a huge number of them are dead, second channels or incredibly infrequent), but it doesn’t matter. All of my subscriptions come to the same “inbox” and it doesn’t really matter what platform they are on.
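YouTube’s channel feeds live at a fixed, well-known URL pattern, so adding a channel to an ordinary feed reader is just string formatting (the channel ID below is made up):

```python
def youtube_feed_url(channel_id: str) -> str:
    # Every YouTube channel exposes an Atom feed at this well-known URL,
    # which any standard feed reader can consume.
    return f"https://www.youtube.com/feeds/videos.xml?channel_id={channel_id}"

url = youtube_feed_url("UCxxxxxxxxxxxxxxxxxxxxxx")  # hypothetical channel ID
```

Feeds from Nebula, PeerTube, and self-hosted sites slot into the same reader, which is what makes the mixed “inbox” work.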
The answer is yes. The receiver can do whatever they want with the “localpart” of the email address.
However you will need to find a provider that supports it. For available services you are probably looking at one of two options: a provider that supports subaddressing (such as user+tag@example.com), or a catch-all domain that accepts mail to any address, which you can then route however you want.

If you want full control you can run your own email server. For example, that is what I do. I generate addresses in the form of {description}-{signature}@me.example. So if they try to remove stuff, the signature will fail and the mail will get rejected (well, actually just heavily weighted as spam). I do this using Rspamd with a custom rule written in Lua. Full details of this setup are here: https://kevincox.ca/2022/07/07/signed-email-addresses/
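The linked post implements this inside Rspamd with Lua; the core idea is just a truncated MAC over the description part of the address. Here is a Python sketch of the same idea, where the secret key, domain, and signature length are made-up parameters, not the ones from the post:

```python
import hashlib
import hmac

SECRET = b"hypothetical-secret-key"  # placeholder; use a real random key
DOMAIN = "me.example"

def make_address(description: str) -> str:
    # Sign the description with a truncated HMAC so the address can be
    # verified without storing a list of issued addresses.
    sig = hmac.new(SECRET, description.encode(), hashlib.sha256).hexdigest()[:8]
    return f"{description}-{sig}@{DOMAIN}"

def is_valid(address: str) -> bool:
    localpart, _, domain = address.partition("@")
    description, _, sig = localpart.rpartition("-")
    expected = hmac.new(SECRET, description.encode(), hashlib.sha256).hexdigest()[:8]
    return domain == DOMAIN and hmac.compare_digest(sig, expected)

addr = make_address("newsletter")
# Stripping or altering the signature breaks verification, so the mail
# server can reject (or heavily penalize) forged variants.
```

The nice property is statelessness: the server never needs a database of generated addresses, only the key.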
Usually when you “delete” data on a storage medium you really just remove a reference to it. The data is still sitting on the disk if you know where to look. TRIM is a command that tells the storage device “I don’t need this anymore”, and usually the hardware will return empty data the next time you read it. (Really the hardware is doing the same thing of just forgetting that there is data there, it is turtles all the way down, but it will track that this block is supposed to be empty and clear it when you next read it.)
However I think this is an unlikely theory. It would require two bugs: the filesystem reading blocks that it never wrote, and the encryption layer still holding the old key needed to successfully decrypt them.
Both of these would be very significant and unlikely to last long without being discovered. Having both be present at the same time therefore seems very improbable to me.
It seems unlikely that this is accidentally reading old encrypted data blocks. The filesystem wouldn’t even try to access data that it hasn’t written to yet. So you would need both filesystem bugs and bugs with encryption key management.
I think the theory that iCloud is accidentally restoring images based on the device ID is much more likely. It is also quite concerning but seems more plausible to me.