Hello folks. I want to hear your opinions about the advances in AI and how they make you feel. This is a privacy community, so I already kind of know that you’re against it, at least when AI is implemented in a way that violates people’s privacy.

I recently attended a work-related event, and the conclusion was that AI will come and change everything in our field, a field that has generally been dominated by human work, although various software has been used for it. Without revealing too much, the event was for people who work with texts. I’m a student, but the event was for people working in the field I plan to work in. The speakers did not talk about privacy concerns (not in detail, at least) or things such as microwork (people who get paid very little to clean illegal content out of AI training data, for example).

You can probably guess that I care about privacy: I’m writing this on Lemmy, for a privacy community. I’m a Linux user (the first distro I used was Ubuntu 10.04) and I transitioned to Linux as my daily driver in November last year. I care about the open-source community (most of the programs I used on Windows were FOSS), and I donate to the programs I use. I use a privacy-respecting search engine, plus uBlock and Privacy Badger on Firefox. I use a secure instant messenger and detest Facebook. But that’s where it ends, because I use a stock Android phone. At least I care about these things, and I’m eager to learn more. When it comes to privacy, I’m pretty woke, for lack of a better word.

But AI is coming, or rather, it’s already here. Granted, the people who spoke at that event were somewhat biased, as they worked in the AI industry, so even if they weren’t marketing ChatGPT, they were trying to hype up the industry. But apparently, AI can already help so-called knowledge workers. It can help with brainstorming and generating ideas, produce translations, summarize texts, give tips…

The bottom line seems to be that I need to start using AI: either I use it and keep my job in the future, or I don’t use it and risk being made redundant by AI at some point.

But I want to get other perspectives. What are your views on AI, and has it affected your job, and if so, how? I know some people have said here that AI is just a bunch of algorithms and that it’s just hype and that the bubble will burst eventually. But until it does, it seems it’ll have a pretty big impact on how things work. Can we choose to ignore it?

Sims

On the big scale there’s only one main concern for the current system: can people adapt to new knowledge and functions as fast as the changes occur? If they cannot, the labor market collapses. It took a while to adapt to cars, but people succeeded, and some might suggest that we can do the same now. I doubt it.

We are already in a much more accelerated world compared to then, and what’s worse is that the AI boom has only just started and will accelerate faster and faster. All levels of the entire AI tech stack are accelerating: hardware, algorithms, models, cognitive networks (agents), and a shitload of new papers every single day. All the big tech companies are using current AI at every level to accelerate development of the next AI, which will develop its successor, and so on.

Besides that, a lot of other global events will push the system towards a transition to something different.

Slowness in adopting AI in business is perhaps the only delay workers can hope for, so imho you can only prolong your current job and adapt as far as you can, not keep it. The timeline is difficult to predict, tho.

The Doctor

I think it’s interesting that limited AI technology has made it to street level. There was talk of keeping it entirely in-house as a “secret sauce” for competitive advantage (I used to work for one of the companies that was building large-scale practical LLMs), so when OpenAI started gaining notice it raised an eyebrow.

Security-wise it’s a pretty big step backward, because the code it hashes together tends to have older vulns in it. It’s not like secure software development practices are commonly employed right now anyway. I’m not sure when that’s going to become a huge problem, but it’s just a matter of time.

One privacy-compromising problem has already been stumbled over (ChatGPT could be tricked into dumping memory buffers containing other conversations into a chat session), and there will undoubtedly be more in the future. This also has implications for business use, because folks are already putting sensitive client information into chats with LLMs, which means it’s going to leak eventually.

I really hope that entirely self-hosted LLMs become common and easy to deploy. If nothing else, they’re great for analyzing and finding stuff in your personal data that other forms of search aren’t well suited for. Then again, I hoard data so maybe I’m projecting a little here.
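The “finding stuff in your personal data” use case is essentially semantic search: embed documents and a query as vectors, then rank by similarity. A minimal sketch of the ranking idea in Python, with a toy bag-of-words counter standing in for a real embedding model (a self-hosted LLM or embedding model would replace `embed`; the documents here are made-up examples):

```python
from collections import Counter
from math import sqrt

def embed(text):
    # Toy stand-in for a real embedding model: bag-of-words counts.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(query, docs):
    # Rank documents by similarity to the query, best match first.
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)

docs = [
    "tax return 2021 scanned pdf",
    "holiday photos from norway",
    "notes on self-hosted llm setup",
]
print(search("local llm notes", docs)[0])
```

With a real embedding model the same ranking loop finds documents that share no literal keywords with the query, which is exactly what plain keyword search is bad at.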

As for my job, I’m of two minds about it. LLMs can already be used for generating boilerplate for scripts, Terraform plans, and things like that (but then again, keeping a code repo of your own boilerplate files is a thing, or at least it used to be). It might be useful for rubber ducking problems (see also, privacy compromise).

It wouldn’t surprise me if LLMs become a big reason for layoffs, if they’re not already. LLMs don’t have to be paid, don’t have tax overhead, don’t get sick, don’t go BOFH, and don’t unionize. The problem with automating yourself out of a job is that you no longer have a job, after all. So I think it’s essential for mighty nerds to invest the time into learning a trade or two just in case (I definitely am - companies might be shooting themselves in the foot by laying off their sysadmins, but if it means bigger profits for shareholders they’ve demonstrated that they’re more than happy to do so).

I’m in life sciences, and AI was recently disallowed for grant writing and papers because of IP concerns. Additionally, the chance of it hallucinating fake papers, while being unable to evaluate the real ones it trawls through, makes it difficult to use at a professional level. ML is very helpful in certain design/prediction/measurement areas, but I’m not worried that this type of AI will steal a job. I am a bit worried that learning via these AIs will cause issues, though.

I’m a developer, and AI is already changing what we do, because we sometimes build AI now. It isn’t changing how we do it, though.

What I do for work is very niche, so imagining exactly how it will be affected is kind of difficult. There is design work above me, which very well could be affected. I kind of get the impression that the advancements of AI will possibly lock out any kind of lateral moves that I might be able to make…

Automation would be a bigger concern for what I currently do, but the robots still have a ways to go (I hope).

I think AI will help a lot with the boring stuff and leave the bigger, more interesting, more creative work to us. It will take some time to work all this out, though.

When I went to school as a kid, the degree I got at university didn’t exist yet. When I finished university, the job I now have didn’t exist yet. The world has always changed, and always will.

I work in university admissions, and the programs require a motivation letter. While I absolutely hate writing cover letters or motivation letters myself, I do see their advantages for admissions (although I absolutely hate the system).

Mainly, it is a great way to give applicants with weaker grades a shot, and a good motivation letter that gives me a feeling for who they are will almost always put them higher in my recommendations. However, I am so sick of the same ChatGPT motivation. And it is always the same. Oh, you honed your ability to do this? You’re drawn to it because of that? I have read your letter 50 times before, and I don’t mean the contents. Let’s be real, most people do not have an inspiring story about why they want to study, and that is okay; the program sounding good is a perfectly valid reason. But show me who you are (or who you want me to think you are). I have really developed an adverse reaction to these AI letters. I hate them because I know I’m reading a robot’s “thoughts”. By all means use the tools available to polish, but don’t polish out your personality.

This will lead to motivation letters being abolished. And while for most people that’s great, and a CV should speak for itself, it will remove a chance to get into a prestigious program for people who are not perfect on paper or weren’t lucky enough to grow up rich.

That whole motivation letter thing honestly sounds more like AI exposing a flaw in the education system and less like a problem with AI in general.

You might frame it as people who are not perfect getting a chance, but I would frame it as people who are better at words than at exams getting an edge. The genius but socially awkward person, whom the exams bored to tears and whose anxiety prevented them from writing the letter, still won’t get in.

Boredom is an excuse; the reality is that no matter where or what you work as, there will be boring things involved at some point, to some degree. We are hundreds of years past the time when nobles would sponsor some eclectic dude to do weird science or art just to say they were that weirdo’s sponsor. You have to be able to work past boredom to function in society.

A “genius” who can’t even write a letter isn’t meaningful. How can they communicate their ideas and thoughts if they can’t write a letter? If Newton had never published the Principia, would we know him? No, we’d have to wait for the guy who could talk and write.

I’m in the medical device field, and user error is the most common patient killer. No matter how many treatment recommendations you put into the UI, Dr. Smartass overrides it all and then you have a casualty. Can’t wait for AI to fix stupid.

At the radiology clinic where my dad worked, they ran a trial with image recognition trained to detect anomalies in MRI images. The AI would draw a red circle around every suspicious spot it detected.

What they noticed is that the doctors started to look only at the red circles and would miss a lot more of the non-obvious nuances. This resulted in more completely wrong diagnoses and lower diagnostic quality overall.

So I doubt that it will fix stupid for now. Even if it is implemented as a sanity-check review after the doctor has done their work, they might get sloppier, relying on the AI check to catch their oversights.

Afaik the best ways to improve the quality of a doctor’s work are longer education and more time per patient, or more rigorous processes where multiple doctors have to give their independent analysis of each patient. But all of that is too expensive for profit-oriented commercial clinics.

Sadly, it is more economically viable to diagnose as quickly as possible, let some patients die due to errors, and fight the lawsuits than to employ twice as many highly skilled doctors.

I’m already using AI for coding. It helps me find AND fix bugs much faster, while teaching me exactly what I did wrong and why the solution works. It’s insane.

I think the only things that could really stop or slow down AI’s impact on jobs would be some sort of large economic crash, a war, or a major supply-chain issue with computing parts. It’s proving to have actual, real-world use cases now in many lines of work. And the sky’s the limit.

I think what might stop it is the end of the free cloud AIs, once the companies running them realize they are losing money that way. AI uses up a ridiculous amount of computing resources for what it does, so unless we manage to optimize it better soon, it might go away again in many areas where it is not really needed, and/or be replaced with more traditional approaches to solving the same problems.

I simply cannot see how using an AI that doesn’t run locally and isn’t fully contained would work with the secrecy requirements in the (wider) engineering fields. There would certainly be situations where it could help, e.g. the translation work mentioned earlier. Sure, you’d still need an actual human to check what the AI produced, but I can see time savings in those areas.

Many programs used in those fields already use algorithms, rules, and filter sets in the daily workflow, so maybe that could be further improved. But overall? No, very unlikely to work.

I’m waiting for ChatGPT to start slipping product recommendations and mentions into its responses. It’s only a matter of time before ads ruin whatever good there is in AI.

They won’t put that into the API, and I exclusively use FOSS tools that utilize the API. For just regular chat I can recommend BetterChatGPT; GitHub will host an instance for you for free. The API does have some costs, but you get the newer models far cheaper than ChatGPT Plus.
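For anyone curious what “using the API” actually involves: it’s just an HTTP POST with a JSON body, which is why FOSS front ends can wrap it so easily. A rough sketch of the kind of request body a chat-completions-style API expects (the model name and message content here are hypothetical placeholders, not a recommendation):

```python
import json

# Hypothetical example payload for a chat-completions-style endpoint;
# the model name and message content are placeholders.
payload = {
    "model": "gpt-4o-mini",
    "messages": [
        {"role": "user", "content": "Summarize this paragraph for me."},
    ],
}

# A client POSTs this JSON (plus an Authorization header carrying the
# API key) and pays per token, rather than a flat monthly subscription.
body = json.dumps(payload)
print(body)
```

Per-token billing is why light users come out cheaper on the API than on a subscription: you only pay for what the front end actually sends and receives.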
