• 0 Posts
  • 67 Comments
Joined 5Y ago
Cake day: Oct 02, 2020


which is why i’d say clearly identify the source so people can discern.

afaict if you want to fairly criticise kagi, it’s for not moving fast enough on adding clear source referencing & adding methods for users to filter out source types.

is there something i’m missing? are there other reasons to suspect bad faith on kagi’s part?


any nation’s major search engine is going to be turning up propaganda.

imo it’s better to include it & state where it came from so people can discern.

personally i don’t believe the entire population of any country are all bad people, and increasingly siloing global citizens from each other is only going to increase the kinds of toxic nationalism which dangerous regimes use as fuel.


i think i can answer this

personally, when i first encountered graphene’s radical statements about the terrible security landscape we’re all subjected to, i reacted quite negatively & assumed they were crazy.

then i actually checked the technical details of their claims, and fuck me, they turned out to be SCARILY correct.

most people don’t actually bother with the second part. and you end up with a classic “shoot the messenger” scenario, where the bearer of bad news is equated with the bad news itself & punished by the mob (because they feel it’s easier than actually facing the uncomfortable reality of the bad news).

that scenario can only play out for so long before the messenger gets sick of being shot every day & reacts badly to the crowd. then the crowd points at their poor reaction & uses it as further “evidence” against their character.


well said!

imo it’s not a coincidence the public are being steered away from supporting graphene. it’s one thing to see the general public do this (they will do whatever they’re told), but seeing countless people who supposedly should know better do it is quite disturbing.


admittedly i’m not up to date on all the drama, but i thought that graphene saw themselves as victims of alt attacks?


some of it is kind of inevitable when you see how far ahead of everyone else they are technically. when the people shitting on their work just aren’t at their (technical) level, it must be very draining, and eventually leads to drama.


I don’t think they’re disputing any of that if it’s hosted locally (including safely remotely accessed by you). i think they’re talking about it being fed to the cloud & commoditised, which is a valid concern imo.


without further explanations of OP’s intent i’m inclined to think this is perhaps the best approach


exactly

default: on

user: explicitly turns off

random “update”: defaults back on

Now wait 1 year


I fucking hate this timeline.

my first thought as well…how did we get to the point that this is a valid topic? (not a comment about you OP, just the state of the world)


can you pls explain what you mean in more depth?

your original post is sufficiently vague that tbh i don’t blame people for assuming you were just bootlicking? [which probably says more about the state of the world than you as an individual, but honestly it’s not clear what you’re trying to say?]

we all know a random citizen/local business presenting an identical calibre of evidence of repeated crimes would be extremely unlikely to routinely receive this degree of resource allocation.

so if it’s an idealised aspirational universal “order” you’re talking about, then obviously no one’s buying it - and i don’t think you are either. so what do you mean?


tar pits target the scrapers.

were you talking also about poisoning the training data?

two distinct (but imo highly worthwhile) things

tar pits are a bit like turning the tap off (or to a useless trickle). fortunately it’s well understood how to do it efficiently and it’s difficult to counter.

poisoning is a whole other thing. i’d imagine if nothing comes out of the tap the poison is unlikely to prove effective. there could perhaps be some clever ways to combine poisoning with tarpits in series, but in general they’d be deployed separately or at least in parallel.

bear in mind that to meaningfully deploy a tar pit against scrapers you usually need some level of access to the server, so it may not help much for the exact problem in the article (except for some short term fuckery perhaps). poisoning, otoh, is probably important for this problem.
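to make the tar pit idea concrete, here’s a minimal sketch of the link-maze flavour (in the spirit of tools like Nepenthes). every path and word here is illustrative; a real deployment would also drip each response out a few bytes per second:

```python
import itertools
import random

def tarpit_pages(seed=0):
    """Endless generator of fake pages, each linking to five more
    fake pages, so a crawler that follows links never escapes."""
    rng = random.Random(seed)
    words = ["alpha", "beta", "gamma", "delta", "epsilon"]
    for page_id in itertools.count():
        links = "".join(
            f'<a href="/trap/{page_id}-{i}">{rng.choice(words)}</a>\n'
            for i in range(5)
        )
        yield f"<html><body>\n{links}</body></html>"

pages = tarpit_pages()
first = next(pages)  # the maze is bottomless: next() never runs out
```

the other half of the trick (serving each page at a useless trickle) is just rate-limiting the response stream, which is where the server permissions come in.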


Imo signal protocol is mostly fairly robust, and the signal service itself is about the best middle ground available to get the general public off bigtech slop.

It compares favorably against whatsapp while providing comparable UX/onboarding/rendezvous, which is pretty essential to get your non-tech friends/family out of meta’s evil clutches.

For the sheer number of people signal’s helped protect from eg. meta alone, you gotta give it credit.

It is lacking in core features which would bring it to the next level of privacy, anonymity and safety. But it’s not exactly trivial to provide ALL of the above in one package while retaining accessibility to the general public.

Personally, I’d be happier if signal began to offer these additional features as options, maybe behind a consent checkbox like “yes i know what i’m doing (if someone asked you to enable this mode & you’re only doing it because they told you to, STOP NOW -> ok -> NO REALLY, STOP NOW IF YOU ARE BEING ASKED TO ENABLE THIS BY ANYONE -> ok -> alright, here ya go…)”.


i think they mean future devices, not previously sold.

either way the thread is 99% invalid criticism of what is afaict one of the best projects of our generation


Google could snap its fingers tomorrow and lock down the ability to unlock bootloaders.

only valid point in the post afaict


It’s not any more conductive

quick note: you’re likely correct the conductivity may not be higher, but the conductance likely is.

in other words, i second your suggestion of heavier duty foil (for EM reasons, skin effect etc) alongside the mechanical factors you mentioned.
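to illustrate the conductivity vs conductance distinction numerically (the aluminium conductivity figure is approximate and the geometry is made up):

```python
SIGMA_AL = 3.5e7  # conductivity of aluminium in S/m (approximate)

def sheet_conductance(thickness_m, width_m, length_m, sigma=SIGMA_AL):
    """Conductance G = sigma * A / L for a rectangular sheet:
    same material (fixed conductivity sigma), but a larger
    cross-section A means a larger conductance G."""
    area = thickness_m * width_m   # cross-sectional area in m^2
    return sigma * area / length_m  # conductance in siemens

thin = sheet_conductance(16e-6, 0.3, 0.3)   # ~16 um household foil
heavy = sheet_conductance(32e-6, 0.3, 0.3)  # ~32 um heavy-duty foil
# doubling the thickness doubles the conductance; conductivity is unchanged
```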


edit: nvm i re-read what you wrote

i agree it does mostly fulfill the criteria for libre software. perhaps not in every way to the same spirit as other projects, but that is indeed a separate discussion.

~~how many communities are doing that right now? i suspect you may be drastically understating the barriers for that. but would be delighted to be proven wrong...~~


afaict the topic of the article seems to be focusing on trust as in privacy and confidentiality

for the discussion i think we can extend trust as in also trusting the ethics and motivation of the company producing the “AI”

imo what this overlooks is that a community or privately made “AI” running entirely offline has the capacity to tick those boxes rather differently.

trusting it to be effective is perhaps an entirely different discussion however

feeling like you’ve been listened to can be therapeutic.

actionable advice is an entirely different matter ofc.


ah fair enough. i think that was the initial confusion from myself and perhaps the other user in this discussion. i didn’t realise your use cases.

it’s always a fun topic to discuss and got me thinking about some new ideas :)


cool, sounds like you have most of the principles down.

what i didn’t yet see articulated with chat-e2ee is how the actual code verifies itself to the user in the browser. it sounds to me like it assumes the server which serves the code is ‘trusted’, while the theoretically different server(s) which transmit the messages can be ‘untrusted’.


out of interest, do you actually mean no login, or do you mean no email-verified login?


i’m trying to understand your exact scenario.

but in general, the problem is where do you get your original key, or original hash to verify from? if they are both coming from the server, along with the code which processes them, then if the server is compromised, so are you.

thankfully browsers provide a lot of crypto APIs lately (as discussed in your link)

but you still need at minimum a secure key, a hash and trusted code to verify the code the server serves you. there are ofc solutions to this problem, but if the server is untrusted, you absolutely can’t get it from them, which means you have to get it from somewhere else (that you trust).
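a toy sketch of the bootstrap problem (names and values are illustrative). the key point is that the reference digest has to arrive via a channel you already trust, never from the untrusted server itself:

```python
import hashlib

def verify_served_code(served_bytes: bytes, trusted_hex_digest: str) -> bool:
    """Check code served by an untrusted server against a digest
    obtained out-of-band from a channel you already trust."""
    return hashlib.sha256(served_bytes).hexdigest() == trusted_hex_digest

# if this digest came from the same server as the code, a compromised
# server could simply serve matching code + digest - game over.
code = b"console.log('e2ee client v1.0');"
pinned = hashlib.sha256(code).hexdigest()  # imagine this was pinned out-of-band

ok = verify_served_code(code, pinned)          # True
bad = verify_served_code(b"tampered", pinned)  # False
```

this is essentially the same trust split that subresource integrity makes in browsers: the hash lives somewhere other than the server delivering the bytes.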


Thanks for the distinctions and links to the other good discussions you’ve started!

For the invasive bits that are included, it’s easy enough for GrapheneOS to look over the incremental updates in Android and remove the bits that they don’t like.

That’s my approximate take as well, but it wasn’t quite what I was getting at.

What I meant is to ask ourselves why that is the case. A LOT of it is because google wills it to be so.

Not only in terms of keeping it open, but also in terms of making it easy or difficult - it’s almost entirely up to google how easy or hard it’s going to be. Right now we’re all reasonably assuming they have no current serious incentive to change their mind. After all, why would they? The minuscule % of users who go to the effort of installing privacy enhanced versions of chromium (or android based os) are a tiny drop in the ocean compared to the vast majority running vanilla, who have probably never even heard of privacy enhanced versions.


excellent writeup with some high quality referencing.

minor quibble

Firefox is insecure

i’m not sure many people would disagree with you that FF is less secure than Chromium (hardly a surprise given the disparity in their budgets and resources)

though i’m not sure it’s fair to say FF is insecure if by comparison we are implying Chromium is secure? ofc Chromium is more secure than FF, as your reference shows.


another minor quibble

projects like linux-libre and Libreboot are worse for security than their counterparts (see coreboot)

does this read like coreboot is proprietary? isn’t it GPL2? i might’ve misunderstood something.


you make some great points about open vs closed source vs proprietary etc. again, it shouldn’t surprise us that many proprietary projects or Global500 funded opensource projects, with considerably greater access to resources, often arrive at more robust solutions.

i definitely agree you made a good case for the currently available community privacy enhanced versions based on open source projects from highly commercial entities (Chromium->Vanadium, Android/Pixel->GrapheneOS) etc. something to note here is that without these base projects actually being opensource, i’m not sure eg. the graphene team would’ve been able to achieve their technical goals in the time they have, and they would likely have fared even worse legally.

so in essence, at least in their current forms, we have to make some kind of compromise: choosing something we know is technically more robust means needing to blindly trust the organisation’s (likely malicious) incentives. therefore, as you identify, obviously the best answer is to privacy enhance the project, which then involves some semi-blind trust in the extent of the privacy enhancement process - even assuming good faith in the organisation providing the privacy enhancement, there is still an implicit arms race where privacy corroding features might be implemented at various layers and degrees of opacity vs the inevitably less resourced team trying to counter them.

is there some additional semi-blind ‘faith’ we’re also employing, where we assume the corporate entity currently has little financial incentive to undermine the opensource base project because they can simply bolt on whatever nastiness they want downstream? it’s probably not a bad assumption overall, though i often wonder how long that will remain the case.

and ofc on the other hand, we have organisations whose motivations we supposedly trust (mostly…for now), but where we know we have to compromise on technical robustness. eg. while FF lags behind the latest hardening methods, it’s somewhat visible to the dedicated user where they stand from a technical perspective (it’s all documented, somewhere). so then the blind trust is in the purity of the organisation’s incentives, which is where i think the politically-motivated, wilfully-technically-ignorant mindset can sometimes step in. meanwhile mozilla’s credibility will likely continue to be gradually eroded unless we as a community step up and fund them sufficiently. and even then, who knows.

there’s certainly no clear single answer for every person’s use-case, and i think you did a great job delineating the different camps. just wanted to add some discussion. i doubt i’m as up to date on these facets as OP, so welcome your thoughts.


I’m sick of privacy being at odds with security

fucking well said.


Sorry for my poor phrasing, perhaps re-read my post? i’m entirely supporting your argument. Perhaps your main point aligns most with my #3? It could be argued they’ve already begun from a position of probable bad faith by taking this data from users in the first place.


TLDR edit: I’m supporting the above comment - ie. i do not support apple’s actions in this case.


It’s definitely good for people to learn a bit about homomorphic computing, and let’s give some credit to apple for investing in this area of technology.

That said:

  1. Encryption in the majority of cases doesn’t actually buy absolute privacy or security, it buys time - see NIST’s criterion of ≥30 years of protection for AES. It will almost certainly be crackable one day, either by weakening or other advances… How many people are truly able to give genuine informed consent in that context?

  2. Encrypting something doesn’t always work out as planned, see example:

“DON’T WORRY BRO, ITS TOTALLY SAFE, IT’S ENCRYPTED!!”

Source

Yes, Apple is surely capable enough to avoid simple, documented mistakes such as the above, but it’s also quite likely some mistake will be made. And note, apple is also very likely capable of engineering leaks and concealing them or making them appear accidental (or, even if truly accidental, leveraging them later on).

Whether they’d take the risk, whether their (un)official internal policy would support or reject that is ofc for the realm of speculation.

That they’d have the technical capability to do so isn’t at all unlikely. Same goes for a capable entity with access to apple infrastructure.

  3. The fact they’ve chosen to act questionably regarding users’ ability to meaningfully consent, or even consent at all(!), suggests there may be some issues with assuming good faith on their part.

can you please explain in a little more depth? are you saying pluton is basically dead in the water and is likely to disappear from implementations in silicon in the near future?


that’s great buddy. but while recapping basic IT facts might make you feel smart on facebook, this is lemmy, where the average user[1] is perfectly familiar with the principles. here it just telegraphs to us that you didn’t read the fucking article (which would’ve taken less time than spamming the thread & insulting users btw).

[1] before the influx of reddit api refugees - on that topic, do you ever reflect on how corporate bootlicking might relate to the over-corporatisation of reddit which led to users fleeing? only to come here and do unpaid simping for the corporations, slowly ruining this place too?


perhaps dial back the attitude a bit there? if you think you know better than someone (even if you’re wrong), then you should have no trouble kindly educating instead of insulting them.

you may also wish to revisit your highly questionable claim that graphene properly configured on pixel is less secure than stock rom on some random android device.


+1 for the lockbox idea. with appropriate selection it could also provide (varying degrees of) electromagnetic shielding. useful in general, and increasingly as the line for actual device shutdown becomes more and more blurry.


my guess is it’s just another flavour of cope.

imo likely because recent history has begun to undermine the delusions which were propping up the former flavour.


hey man, i think you may have misinterpreted who i was replying to /what i was saying, or perhaps i didn’t communicate perfectly.

i am 10,000% on your side with this, and very much appreciate your post and appreciate your support in this thread/community on this topic. it’s actually giving me a tiny bit of hope that this community isn’t entirely lost.

i’ve really grown absolutely weary of the ridiculous denialism in society and especially in so-called tech communities on this topic.

the kindest thing i think you could say about the rampant denialism is they emotionally do not want to believe it could be happening, and therefore all rationality has gone out the window.

these threads are always a circle jerk of denialists repeating popular media headlines which say “its not happening”, and then if you read the article IT DOESN’T SAY THAT AT ALL. and these denialists WON’T EVEN FUCKING READ THE ARTICLES THEY POST.

apart from the emotional cope, perhaps there’s also partial exposure to eg. basic consumer stuff like installing steam or downloading a movie, so they assume the bandwidth is too high to exfiltrate audio cos their music/game/movie audio files are big - completely ignoring the fact that the telecoms industry has put around 50 years and untold $ into producing efficient voice codecs. they probably think nyquist is a brand of cough medicine
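a rough back-of-envelope makes the point. the bitrates are approximate (Codec 2 runs around 3.2 kbit/s at its higher modes, GSM-era codecs around 13 kbit/s), and the hours-of-speech figure is an assumption:

```python
def daily_audio_megabytes(bitrate_kbps, hours_of_speech):
    """Storage/bandwidth cost of compressed speech per day, in MB."""
    bits = bitrate_kbps * 1000 * hours_of_speech * 3600
    return bits / 8 / 1e6

# two hours of actual (gated) speech per day at a low-bitrate voice codec
low = daily_audio_megabytes(3.2, 2)   # ~2.9 MB/day
# even a GSM-era ~13 kbit/s codec over those two hours stays tiny
gsm = daily_audio_megabytes(13, 2)    # ~11.7 MB/day
```

a few MB a day disappears into the noise of ordinary app traffic, which is exactly why intuitions built on music/movie file sizes mislead here.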

same goes for all the other erroneous ‘consumer tech’ false facts they parrot back and forth.

eg. the lunacy of the tired old statement “if they were listening ALL THE TIME, we’d know”, completely ignoring that threshold-based noise gates have been a thing for well over half a century.

these self-proclaimed know-it-alls can’t even put in 10 minutes reading BASIC topics in an encyclopedia to realise this shit was solved over half a century ago. (actually you don’t even need tech knowledge or an encyclopedia to imagine such a fundamental thing as…i don’t know…not recording when nothing’s happening 🤯). they can’t put in even BASIC effort, yet are SOOO smug in not only telling us “it’s absolutely not happening”, but they actually can’t wait to be rude and ridicule randoms for even asking the question.
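the noise-gate principle is trivial to sketch. a real gate operates on frame energy with attack/release smoothing, but the core idea - store/transmit nothing while the input sits below a threshold - is the same:

```python
def noise_gate(samples, threshold):
    """Keep only samples whose amplitude exceeds the threshold;
    silence produces nothing to record or transmit."""
    return [s for s in samples if abs(s) > threshold]

# mostly-quiet input: only the loud bits survive the gate
signal = [0.01, 0.02, 0.9, 0.8, 0.05, 0.7]
kept = noise_gate(signal, threshold=0.1)  # [0.9, 0.8, 0.7]
```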


correct.

the level of unsubstantiated cope in this thread is mind boggling. from people who should honestly know better.


cos the majority in this thread can’t even read the articles they cite, mistakenly thinking they support the unscientific claim that this topic is settled.

afaict no researcher has formally claimed a full coverage binary analysis.

if you know of such a study please link?

afaict the researchers are very upfront about the limits to the coverage of their studies and the importance of that uncovered ground being covered.

when the researchers themselves are saying the work isn’t over, why are all the super geniuses in this thread so smugly announcing this topic is wrapped up?

i guess they know better than the actual researchers do. amazing, someone should tell them not to worry cos the geniuses in the forums have it all worked out 🤣

[if you’re unable to reply with a direct excerpt from actual formally issued research (not some pop media headline) i will not bother responding]


yeah the level of technical competence on this site has plummeted since the influx of the reddit crowd.

just enough consumer tech enthusiast knowledge to delude themselves they can smugly and self righteously shit on the average non-tech person.

and now they’re the majority, drowning out legitimate curiosity by loudly parroting headlines from articles they didn’t even read. slowly turning lemmy into the regurgitated reddit pop media shithole they wanted to escape.

this topic is especially difficult because of the clear emotional desire for it not to be true. hence the degree of fragile cope in this thread.

thankfully not everyone here is a lost cause, and you’ve been given some good advice on delineating the other possible causes for what you’ve observed. when we do a careful analysis we must ofc consider all possibilities.

what i’ve not seen properly acknowledged in this thread, however, is that the possibility of alternative explanations doesn’t preclude the possibility of voice-based surveillance either.


always listening

i never claimed always, i specifically advised op to refrain from claiming always.

how about putting in a minimum of effort and reading your own sources before citing them. how can you claim to represent a sound scientific approach when you misrepresent the scientific claims made in the sources you cite?


piss easy

many domain experts dedicating significant resources to its study

pick one.

when your sources repeatedly don’t say what you claim they say, maybe its time to revisit your claims ;)


Of course a researcher is never sure something is 100% ruled out. That’s part of how academic research works.

once again, that isn’t what they were reported to have said. [and researchers don’t need to repeat the basic precepts of the scientific method in every paper they write, so perhaps it’s worthwhile to note what they were reported to say about that, rather than writing it off as a generic ‘no one can be 100% certain of anything’.] it’s a bit rich to blame someone for lacking rigor while repeatedly misrepresenting what your own article says.

what the article actually said is

because there are some scenarios not covered by their study

and even within the subset of scenarios they did study, the article notes various caveats of the study:

Their phones were being operated by an automated program, not by actual humans, so they might not have triggered apps the same way a flesh-and-blood user would. And the phones were in a controlled environment, not wandering the world in a way that might trigger them: For the first few months of the study the phones were near students in a lab at Northeastern University and thus surrounded by ambient conversation, but the phones made so much noise, as apps were constantly being played with on them, that they were eventually moved into a closet

there’s so much more research to be done on this topic; we’re FAR FAR from settling it conclusively (to the standards of modern science, not some mythical scientifically-impossible certainty).

presenting this to the public as settled science, when afaict the state of research has made no such claim, is muddying the waters.

if you’re as absolutely correct as you claim, why misrepresent what’s stated in the sources you cite?


no, they don’t

Please be careful with your claims.

In my experience, when investigating these claims and refutations, digging past the pop media headlines into the actual academic claims usually reveals that no one has proven it’s not happening. If you know of a conclusive study, please link it.

Regarding the article you have linked we don’t even need to dig past the article to the actual academic claims.

The very article you linked states quite clearly:

The researchers weren’t comfortable saying for sure that your phone isn’t secretly listening to you in part because there are some scenarios not covered by their study.

(Genuine question, not trying to be snarky) Will you take a moment to reflect on which factors may have contributed to your eagerness to misrepresent the conclusions of the studies cited in your article?