Privacy has become a very important issue in modern society. With companies and governments constantly abusing their power, more and more people are waking up to the importance of digital privacy.
In this community everyone is welcome to post links and discuss topics related to privacy.
much thanks to @gary_host_laptop for the logo design :)
TLDR: Bot generated a random number that happened to be a real person’s phone number
I don’t understand what is “terrifying” about that. Even without the bot, anyone with malicious intent could imagine up a random phone number.
These kinds of articles with thin content are just used by news agencies to fit the “bots are bad” narrative that makes them money. Of course bots are bad in many ways, but not for such flimsy reasons.
Ima be honest, I’m not surprised. Go ahead, introduce AI into your critical systems. Don’t be surprised when it fucks shit up.
Bait
If working with AI has taught me anything, it’s to ask it absolutely NOTHING involving numbers. It’s fucking horrendous. Math, phone numbers, don’t ask it any of that. It’s just advanced autocomplete and it does not understand anything. Just use a search engine, ffs.
You’d think it’d be able to do math right, since, ya know, we’ve kinda had working calculators for a long time.
What models have you tried? I used local Llama 3.1 to help me with university math.
It seemed capable of solving differential equations and doing Laplace transforms. It made some mistakes during the calculations, like a math professor in a hurry.
What I found best was getting a solution from Llama and validating each step using WolframAlpha.
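For illustration, this is the kind of step-by-step check that workflow involves. The ODE here is a made-up toy example, not one from the thread; each intermediate expression can be pasted into WolframAlpha on its own:

```latex
y' + 2y = 0,\quad y(0) = 1
\;\xrightarrow{\;\mathcal{L}\;}\;
sY(s) - y(0) + 2\,Y(s) = 0
\;\Rightarrow\;
Y(s) = \frac{1}{s+2}
\;\xrightarrow{\;\mathcal{L}^{-1}\;}\;
y(t) = e^{-2t}
```

If the model botches the algebra in the middle step, verifying `Y(s)` and the inverse transform separately catches it.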
Copilot and ChatGPT suuuuck at basic maths. I was doing coupon discount shit, and it failed every one of them. It presented the right formula sometimes but still fucked up really simple stuff.
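The thing is, a coupon discount is one line of deterministic arithmetic; there’s nothing for a model to do. A minimal sketch (hypothetical function, not the commenter’s actual spreadsheet):

```python
# A percentage discount is just multiplication; no LLM required.
def apply_coupon(price: float, percent_off: float) -> float:
    """Return the price after a percentage discount, rounded to cents."""
    return round(price * (1 - percent_off / 100), 2)

print(apply_coupon(80.00, 25))   # 25% off $80.00
print(apply_coupon(19.99, 10))   # 10% off $19.99
```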
I asked copilot to reference an old sheet, take column A find its percentage completion in column B and add ten percent to it in the new sheet. I ended up with everything showing 6000% completion.
Copilot is integrated into Excel, and it’s woeful.
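One plausible way to land on “6000%” (a guess at the bug, not confirmed from the thread): column B already holds whole-number percentages, like 60 meaning 60%, and the result then gets percent-formatted again, multiplying by 100 a second time. In Python the same unit confusion looks like this:

```python
completion = 60  # column B value, already expressed as a percentage

# Percent formatting multiplies by 100, so a value that is already
# a percentage gets inflated a second time:
print(f"{completion:.0%}")               # wrong: prints 6000%

# Normalize to a fraction first, then add ten percentage points:
print(f"{completion / 100 + 0.10:.0%}")  # prints 70%
```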
Or, and hear me out on this, you could actually learn and understand it yourself! You know? The thing you go to university for?
What would you say if, say, it came to light that an engineer had outsourced the static analysis of a bridge to some half-baked autocomplete? I’d lose any trust in that bridge and respect for that engineer, and I’d hope they’re stripped of their title and held personally responsible.
These things are currently worse than useless precisely because they’re sometimes right: it gives people the false impression that you can actually rely on them.
Edit: just came across this MIT study regarding the cognitive impact of using LLMs: https://arxiv.org/abs/2506.08872
Now, I’m not saying you’re wrong, but having AI explain a complicated subject in simple terms can be one of the best ways to learn. Sometimes the professor is just that bad and you need a helping hand.
Agreed on the numbers, though. Just use WolframAlpha.
Getting an explanation is one thing, getting a complete solution is another. Even if you then verify with a more suited tool. It’s still not your solution and you didn’t fully understand it.
It was the last remaining exam before I’d have been dropped from university. I wish I could attend the lectures, but, due to work, it was impossible. Also, my degree is not fully related to my work field. I work as a software developer, and my degree is in electronics engineering. I just need a degree to get promoted.
I asked my work’s AI to just give me back, comma-separated, a list of strings I gave it; it returned a list where every string was “CREDIT_DEBIT_CARD_NUMBER”. The numbers were 12 digits, not 16. I asked 3 times for the raw numbers and had to say exactly “these are 12 digits long, not 16. Stop obfuscating it” before it gave me the right thing.
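For a task like that, a one-liner does it deterministically, with no redaction filter to argue with. A minimal sketch with made-up 12-digit values (the thread doesn’t include the real ones):

```python
# Joining strings with a separator is built in; no model needed.
values = ["123456789012", "234567890123", "345678901234"]
joined = ",".join(values)
print(joined)
```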
I’ve even had it be wrong about simple math. It’s just awful.
Either you are bad at ChatGPT, or I am a machine whisperer, but I have a hard time believing Copilot couldn’t handle that. I regularly have it rewrite SQL code.
I was using Amazon Q, so it could just be the shitty LLM.
Oh yeah, that’s definitely shitty then, copilot does shit like that really easily
Yeah because it’s a text generator. You’re using the wrong tool for the job.
Exactly. But they tout this as “AI” instead of an LLM. I need to improve my kinda-OK regex skills. They’re already better than almost anyone else’s on my team, but I can improve them.
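Since the failure case upthread was 12-digit numbers getting misread as 16-digit card numbers, a small regex handles that extraction deterministically. A hedged sketch; the sample text and pattern are made up for illustration:

```python
import re

# \b\d{12}\b matches exactly 12-digit runs, so a 16-digit card
# number is not matched (there is no word boundary mid-run).
twelve_digits = re.compile(r"\b\d{12}\b")

text = "ids: 123456789012, 999988887777; card: 4111111111111111"
print(twelve_digits.findall(text))
```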
It’s really crappy at trying to address its own mistakes. I find that it will get into an infinite error loop where it hops between 2-4 answers, none of which are correct. Sometimes it helps to explicitly instruct it to format the data provided and not edit it in any way, but I still get paranoid.
I really love this new style of journalism where they bash the AI for hallucinating and making clear mistakes, to then take anything it says about itself at face value.
It’s a number on a public website. The guy googled it right after and found it. It’s simply in the training data; there is nothing “terrifying” about this, imo.
Right. There’s nothing terrifying about the technology.
What is terrifying is how people treat it.
LLMs will cough up anything they have learned to any user. But they do it while successfully giving all the human social cues of an intelligent human who knows how to keep a secret.
This often creates trust for the computer that it doesn’t deserve yet.
Examples, like this story, that show how obviously misplaced that trust is, can be terrifying to people who fell for modern LLM intelligence signaling.
Today, most chat bots don’t do any permanent learning during chat sessions, but that is gradually changing. This trend should be particularly terrifying to anyone who previously shared (or keeps habitually sharing) things with a chatbot that they probably shouldn’t.
Also, the first five digits were the same between the two numbers. Meta is guilty, but they’re guilty of grifting, not of giving a rogue AI access to some shadow database of personal details… yet? Lol
It’s a case of Gell-Mann Amnesia.
It’s as if some people will believe any grammatically & semantically intelligible text put in front of their faces.
Especially if it’s anti-AI drivel. People eat this crap up.
Some eat up pro-AI drivel, some others anti-AI drivel. Tech bubbles are a wild ride. At least it’s not a bullshit bubble like crypto or web3/nft/metaverse.
It is so awful that it’s funny!
Ah yes, what else to expect from »the most intelligent AI assistant that you can freely use«.