A place to discuss privacy and freedom in the digital world.
Privacy has become a very important issue in modern society. With companies and governments constantly abusing their power, more and more people are waking up to the importance of digital privacy.
In this community everyone is welcome to post links and discuss topics related to privacy.
Some Rules
- Posting a link to a website containing tracking isn’t great; if the contents of the website are behind a paywall, consider copying them into the post
- Don’t promote proprietary software
- Try to keep things on topic
- If you have a question, please search previous discussions first; it may have already been answered
- Reposts are fine, but should have at least a couple of weeks in between so that the post can reach a new audience
- Be nice :)
Many thanks to @gary_host_laptop for the logo design :)
LLMs are less magical than upper management wants them to be, which is to say they won’t replace the creative staff who make art and copy and movie scripts, but they are useful as a tool for those creatives to do their thing. The scary thing was not that LLMs can take tons of examples and create a Simpsons version of Cortana, but that our business leaders are eager to replace their staff at the slightest promise of automation.
But yes, LLMs are figuring into advances in science and engineering, including research into treatments for Alzheimer’s and diabetes. So it’s not just a parlor trick; rather, its useful applications are different from the ones originally sold to us.
The power problem remains: LLMs consume a lot of energy to train and run.
I’m unaware of any substantial research on Alzheimer’s or diabetes that has been done using LLMs. As generative models, they’re basically just souped-up Markov chains. I think the best you could hope for is something like a meta-study, and probably a slightly worse one than the usual kind.
I agree: the things that occur most often in the training data set will have the highest weights/probabilities in the Markov chain, so it is useless for finding the one tiny relation that humans would not see.
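To make that point concrete, here is a toy word-level Markov chain in Python. This is a deliberate oversimplification of the analogy above (real LLMs learn transformer parameters rather than raw transition counts), but the frequency-becomes-probability behaviour is the same idea; the corpus and function names are made up for illustration:

```python
from collections import Counter, defaultdict
import random

def build_chain(tokens):
    # Count how often each word follows each other word.
    chain = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        chain[prev][nxt] += 1
    return chain

def sample_next(chain, word):
    # Sample the next word with probability proportional to its count,
    # so the most frequent continuation in the data dominates.
    counts = chain[word]
    return random.choices(list(counts.keys()), weights=list(counts.values()))[0]

corpus = "the cat sat on the mat and the cat ate the fish".split()
chain = build_chain(corpus)
# "the" is followed by "cat" twice but "mat" and "fish" once each,
# so "cat" is sampled twice as often as either alternative.
print(sample_next(chain, "the"))
```

A pattern that appears only once in the data gets a correspondingly tiny sampling weight, which is exactly why a frequency-driven generator would tend to miss a rare relation rather than surface it.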