• 0 Posts
  • 11 Comments
Joined 1Y ago
Cake day: Jul 21, 2023


I’m blocked from the article, but as someone who used to work in the industry, I’m going to hazard what I think is a safe guess, one that touches on an underdiscussed aspect of intelligence work.

If you put five CCTV cameras in the worst parts of a city, you can pay someone $20/hour and have them monitored 24/7. That person’s one job is to call in a crime when it occurs and vector in the police. As long as they’re not terminally addicted to Instagram, you have the area covered.

Bump that up to 10,000 cameras and you run into a problem. You’re not going to hire 2,000 people to watch them. Maybe you come up with something clever that lets you scrub back through footage once a crime has been reported some other way, but real-time responses are out the window.

Even those who supported the development of the levels of surveillance that Snowden exposed have to acknowledge that looking at everything means you’re looking at nothing. The signal-to-noise ratio goes to absolute shit. It’s actually worse than useless, because you think you’re monitoring when you’re really not; you’re drowning in noise. It’s like they teach every yuppie in B-school: if everything is a priority, then nothing is a priority. There’s a known tendency in defense and intelligence to zero in on the gee-whiz aspects of a technology and lose sight of the actual mission.

I’m not a conspiracy theorist, and as much as I dislike the current government of Israel, I don’t think this was some kind of nefarious plot. I think it was a massive fuck-up that’s going to have a body count in the tens of thousands and that will change the history of the region for a decade.


Do we really need ‘taken’ in quotes? I’m pretty sure everyone knows by now that stealing data doesn’t actually remove the data, and that identity theft doesn’t mean you no longer have a name.


Honestly, I wouldn’t mind getting a notification when my washer is done. If I’m doing too many things at once, I can forget that I had laundry going and it ends up sitting there until it gets musty and needs a re-wash.

That said, I did disconnect my smart TV from the internet when I found out it was sending data, including captured ambient audio, to the manufacturer. I just use an Apple TV. I know I’m still feeding data to each of my streaming services, but the TV manufacturer has no need for my watching habits, much less for the sound of people talking in my living room.

The one I’ve never figured out was the refrigerator that connects to Twitter.


The problem is twofold. The first part is that companies cannot be trusted to act in good faith when complying with the intent of laws they disagree with. That doesn’t apply to every company, but it applies to enough of them to make life difficult. I think it was Enron that, when ordered to supply prosecutors with emails, opted to print them out and hand over reams of paper that then had to be re-scanned. It’s the same spirit as companies that require physical mail to delete a record, and that only honor the request in jurisdictions where the law demands it. There’s no reason it couldn’t be done more easily with a login and password. When I was deleting my Reddit accounts, I had to use a script to delete all of my posts and comments because Reddit didn’t support that functionality.

The second, related problem is that the legislators writing the laws aren’t skilled technologists, and technology keeps evolving. It’s like having people with no background in finance write laws to regulate Wall Street (which also happens). Cynical people might think this is seen as a feature, not a bug.


I think my gmail account by itself is of legal age.


Thanks for the clarification!

I remember having to learn about floating-point representations in a numerical analysis class, along with the things you had to worry about back then, but by the time I ended up doing work where I’d actually have to worry about it, most of the gotchas had been taken care of, so I largely stopped paying attention to the topic.
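For anyone who never sat through that class, a couple of the classic gotchas are easy to demonstrate in a few lines of Python (the exact printed digits assume standard IEEE 754 doubles):

```python
import math

# 1. Representation error: 0.1 and 0.2 have no exact binary
#    representation, so their sum is not exactly 0.3.
print(0.1 + 0.2 == 0.3)  # False
print(0.1 + 0.2)         # 0.30000000000000004

# 2. Catastrophic cancellation: subtracting nearly equal numbers
#    destroys most of the significant digits.
x = 1e-8
# (1 - cos(x)) / x**2 should be ~0.5, but for x this small cos(x)
# rounds to 1.0 in a double, so the numerator collapses.
naive = (1 - math.cos(x)) / x**2
# Algebraically identical via the half-angle identity, but numerically
# stable because nothing nearly equal is subtracted.
stable = 2 * math.sin(x / 2) ** 2 / x**2
print(naive, stable)
```

These are exactly the kinds of traps numerical analysis courses drilled: the fix is usually rewriting the expression, not adding precision.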


For some floating-point-heavy code, it could potentially be major, but not disastrous.

That’s a really interesting point (no pun intended).

I had run into a few situations where a particular computer architecture (e.g., the Pentium for a time) had issues with floating-point errors, and I remember thinking about them in largely the same way. It wasn’t until later that I started working in complexity theory, by which time I had completely forgotten about those issues.

One of the earliest discoveries in what would eventually become chaos and complexity theory was the butterfly effect. Edward Lorenz was doing weather modeling back in the 60s. The calculations were complex enough that a model run could span several sessions, stopping and restarting from partial results at each stage. Internally, the model stored floating-point data to six significant figures, but when Lorenz re-entered the values to continue a run, he typed in only three. He found that this trivial difference in significant digits led to wildly different results. That’s the nature of systems that use present states to determine next states and that also have feedback loops and nonlinearities. Like most complexity folks, I’ve learned and retold the story many times over the years.
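The effect is easy to reproduce. Here’s a toy Python sketch (forward Euler on the Lorenz-63 equations, with a step size and run length I picked purely for illustration, not Lorenz’s actual program), restarting from the three-sig-fig value in the standard telling of the anecdote:

```python
def lorenz_step(x, y, z, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 equations."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dt * dx, y + dt * dy, z + dt * dz

def run_pair(steps=10000):
    """Integrate two nearby starting points and track their separation."""
    a = (0.506127, 1.0, 1.0)  # "full precision" state
    b = (0.506, 1.0, 1.0)     # re-entered at three significant figures
    max_sep = 0.0
    for _ in range(steps):
        a = lorenz_step(*a)
        b = lorenz_step(*b)
        sep = max(abs(p - q) for p, q in zip(a, b))
        max_sep = max(max_sep, sep)
    return max_sep

# The 1e-4 initial difference eventually grows to the scale of the
# attractor itself: the two runs end up on entirely different paths.
print(run_pair())
```

Nothing about the arithmetic is wrong here; the divergence comes purely from feeding each state back into the next step, which is exactly why short-rounding a restart file wrecked Lorenz’s runs.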

I’ve never wondered until just now whether anyone working on those kinds of models ran into problems with floating point bugs. I can imagine problematic scenarios, but I don’t know if it ever actually happened or if it would have been detected. That would make for an interesting study.


I’ve been out of the builder world long enough that I didn’t follow the 2018 bug. I’m more from the F00F generation in any case. I also took a VLSI course somewhere in the mid-90s that convinced me to do anything other than design chips. I seem to remember something else from that era, a firmware-based security bug related to something I want to say was browser-based, but it wasn’t the CPU, iirc.

In any case, I get the point you and others are making about evaluating the risks of a security flaw before taking steps that might hurt performance or worrying about it too much.


From the description, it sounds like you upload a picture, then show a face to a video camera. It’s not like they’re going through Face ID, which has anti-spoofing hardware and software. If they’re supporting normal webcams, they can’t check for things like 3D markers.

Based on applications that have rolled out for use cases like police identifying suspects, I would hazard a guess that

  1. It’s not going to work as well as they imply
  2. It’s going to perform comically badly in a multi-ethnic real world scenario with unfortunate headlines following
  3. It will be spoofable.

I’m betting this will turn out to be a massive waste of resources, but that has never stopped something from being adopted. Several municipalities even had to ban their police departments from using the technology, because the cops liked being able to identify and catch suspects even when it was likely the wrong person. In one case I read about, researchers had to demonstrate that the software the PD was using identified several prominent local politicians as robbery and murder suspects.



I’m curious: does this kind of report make people less likely to go with an AMD CPU? The last time I was thinking about building a new PC, AMD had just definitively taken the lead in speed per dollar, and I would have gone with one of the higher-end chips. I’m not sure whether this would have affected my decision, but I’d probably be concerned with performance degradation as well as the security issue. I’d have waited for the patch and bought a system with updated firmware, but I’d still want to see what the performance impact was, as well as learn more about the exploit and whether there were additional issues.

I ended up just getting a Steam Deck, and all of my other computers are Macs, so it’s hard to put myself back into the builder’s/buyer’s headspace.