• 0 Posts
  • 9 Comments
Joined 1Y ago
Cake day: Jul 01, 2023


It relies on slow legal mechanisms that vary widely by jurisdiction. It also highlights the huge problem with forcing users to find workarounds for legal manipulation. Instead of an “economies of scale” approach where authorities crack down on obvious bullshit, you have to go through this process yourself (or pay someone to do it for you), pay companies for their credit reports on you, pay to file the lawsuit, and so on.

Additionally, any of these companies can close down and then open back up with a new name at any time, forcing you to start the process all over again. It’s called a “phoenix company” where I am.

I also consider it pretty likely that trying to remove your information just verifies it, and therefore makes it more valuable to brokers. There’s no reason to assume they handle information ethically, or that they’re doing anything more than providing the opt-out for plausible deniability.


This is even worse when we factor in that many accessibility issues are addressed through simple measures which can often only be carried out during basic maintenance, like rewiring or fitting renewal.

I completely agree. I would love to have the option to use non-networked solutions, but for multiple reasons, tinkering with my residence’s electrical supply is outside my control.

I can still control my network and my lightbulbs, though. So here I am, somewhere I never anticipated, looking at networkable lightbulbs and FOSS repos. Like I said, I’m just happy to have an option.
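
For what it’s worth, here’s the sort of thing I landed on: a minimal sketch of flipping a bulb over the LAN with no cloud round trip. The IP address is a placeholder, and the endpoint assumes the bulb runs open firmware like Tasmota, which exposes a local HTTP command interface; your bulb’s own docs are the real authority here.

```python
# Minimal sketch: toggle a LAN-only smart bulb via a local HTTP API.
# Assumes open firmware (e.g. Tasmota) exposing its "cm?cmnd=Power"
# style command endpoint; the IP below is a placeholder for your bulb.
import requests

BULB_IP = "192.168.1.50"  # hypothetical LAN address of the bulb

def set_power(on: bool) -> None:
    """Send a power command directly to the bulb, no cloud involved."""
    resp = requests.get(
        f"http://{BULB_IP}/cm",
        params={"cmnd": "Power On" if on else "Power Off"},
        timeout=5,
    )
    resp.raise_for_status()

if __name__ == "__main__":
    set_power(False)  # e.g. kill the lights without getting up
```

Pair that with a firewall rule keeping the bulb’s network segment off the internet and you get the accessibility benefit without the phone-home problem.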


I’m glad your relatives were able to make permanent modifications to their living spaces that sufficiently accommodated their accessibility needs! Many of us do not share those circumstances, and the number of people with a huge variety of different medical problems is steadily increasing. I, for one, am very happy to have some implementation options to choose from.


If you’re really old, odds are you’ve experienced physical pain that makes “forgetting to turn off the light/appliance/device” a difficult experience rather than just an inconvenient one. I never liked the idea of IoT devices until chronic pain fucked up the whole mobility thing for me; now I realise they’re a total necessity, especially for societies with rapidly growing older demographics, increased rates of chronic illness, and inadequate social and medical systems.


Even as someone who declines all cookies where possible on every site, I have to ask: how do you think they are going to improve their language-based services without using large language models or other algorithmic evaluation of user data?

I get that the combination of AI and privacy has huge consequences, and that Grammarly’s opt-out limits are genuinely shit. But it seems like everyone is so scared of the concept of AI that we’re harming research on tools that can help us, while the tools that hurt us are developed without consequence because they don’t bother with any transparency or announcement.

Not that I’m any fan of Grammarly; I don’t use it. I think that might be self-evident, though.


Honestly, I’m not sure. Privacy has always been a spectrum, but we’re now living in a world where it’s near impossible to get anywhere close to 100% privacy for any action from the start. I suspect the only current remedies are to ensure the people and organisations which use/abuse surveillance are heavily regulated and that compliance is heavily enforced, which ironically requires transparency.

Realistically, there need to be lengthy legal procedures before authorities and companies are granted use of such techniques. Legislation like that is complicated and slow to develop, though. It also risks pinning the core privacy concepts to specific versions of specific tech, which complicates enforcement over time.

Even once it’s very illegal to do this to someone, though, there will always be people who use it for whatever purposes. Obviously, making it illegal under wiretapping laws in the absence of explicit opt-in consent is something that would need to happen. I’d also like to see mandatory source-attribution laws.

That won’t stop everyone though. Which means we maybe need to start looking into construction legislation to ensure RF-blocking materials are used in external walls. That’s assuming shielding is an effective remedy for Van Eck phreaking at all; I have no idea what resolution of information can be recovered from devices that aren’t purpose-built broadcasting and receiving equipment.

And all of that requires good-will and sensible decisions from the existing legal systems and legislators. Which can’t be completely achieved, and in many cases is… currently very poor.

Tl;dr: a very hard problem which will need work from a bunch of different parts of society and likely cannot be completely solved for everyone. The only solution for this specific technique right now, I think, is to go fully off-grid with no electricity. Even then, you’ll still have satellites and drones to intrude.


The fact they’re able to do this is no surprise to me. The fact they’re able to do it with very easily accessible equipment, to that degree of accuracy, is scary impressive.

While this obviously has huge consequences for privacy, the part that concerns me most is its use in developing deepfakes. I worry about the consequences of no longer being able to distinguish real video evidence from deliberate manipulation.


The irony of AI-generated responses being difficult to distinguish from writing that follows the rules educators harassed me into complying with is something I’ve found pretty amusing lately. It’s a bias built into the system, but it has the unintended opposite effect of delegitimising actual human opinions. What an own-goal for civilisation.

I am regrettably all too human. I have even been issued hardware keys to prove it!


Mozilla already has a huge amount of information, submitted by volunteers, with which to train their own subject-specific LLM.

And as we saw from Meta’s nearly ethical-consideration-devoid CM3Leon (no, I will not pronounce it “Chameleon”) paper, you don’t need a huge dataset for training if you supplement with your own preconfigured biases. For better or worse.

Just because something is “AI-powered” doesn’t mean the training datasets have to be acquired unethically. Even if there is something to be said for making material public and the inevitable ways it can then be used.

I hope whoever gets the job can help pave the way for ethics standards in AI research.