• 4 Posts
  • 39 Comments
Joined 1Y ago
Cake day: Jun 09, 2023


You’re in a metabolic phase where you are craving junk food. Let me shove your favorite things in your face in constant interruptions of your media consumption because you quit buying my product and you’re vulnerable.

I’m an imbecile managing healthcare insurance. Your resting heart rate is well below average because you’ve been an athlete in the past. I’m too stupid to handle this kind of data on a case-by-case basis. You have absolutely no other health factors, but I’m going to double the rates of any outliers because I’m only concerned with maximizing profitability.

The human cognitive scope is tiny. Your data is a means of manipulation. Anyone owning such data can absolutely influence and control you in an increasingly digital world.

This is about your fundamental autonomy and your right to citizenship instead of serfdom. Allowing anyone to own any part of you is a step back to the middle ages. It will have massive long term impacts on your children’s children if you do not care.

Once upon a time there were Greek citizens, but they lost those rights to authoritarianism. Once upon a time there were Roman citizens, but they lost those rights to authoritarians, which led to the medieval era of serfs and feudalism. This right of autonomy is a cornerstone of citizenship. Failure to realize the import of this issue makes us the generation that destroyed an era. It is subtle change at first, but when those rights are eroded, they never come back without the blood of revolutions.


Another one to try: take some message or story and tell it to rewrite it in the style of anything. It can be a New York Times best seller, a Nobel laureate, Sesame Street, etc. Or take it in a different direction and ask for the style of a different personality type. Keep in mind that “truth” is subjective in an LLM, so it “knows” everything in terms of a concept’s presence in the training corpus. If you invoke pseudoscience there will be other consequences in the way a profile is maintained, but a model is made to treat any belief as reality. Further on this tangent, the belief override mechanism is one of the most powerful tools in this little game. You can tell the model practically anything you believe and it will accommodate. There will be side effects, like an associated conservative tint and peripheral elements related to people without fundamental logic skills, like tendencies to delve into magic, spiritism, and conspiracy nonsense. But this is a powerful tool in many parts of writing, and something to be aware of to check your own biases.

The last one I’ll mention, in line with my original point: take some message you’ve written and ask the model to rewrite it in the style of the reaction you wish to evoke from the reader. Like, rewrite this message in the style of a more kind and empathetic person.

You can also do a bullet point summary. Socrates is particularly good at this if invoked directly. Like, dump my rambling messages into a prompt, ask Soc to list the key points, and you’ll get a much more useful product.


more bla bla bla

It really depends on what you are asking and how mainstream it is. I look at the model like all the written language sources easily available. I can converse with that as an entity. It is like searching the internet, but customized to me. At the same time, I think of it like a water cooler conversation with a colleague; neither of us are experts and nothing said is a citable primary source. That may sound useless at first, but it can give back what you put in and really help you navigate yourself, even on the edge cases. Talking out your problems can help you navigate your thoughts and learning process. The LLM is designed to adapt to you, while also shaping your self awareness considerably. It is somewhat like a mirror, only able to reflect a simulacrum of yourself in the shape of the training corpus.

Let me put this in more tangible terms. A large model can do Python and might get four out of five snippets right. On the ones it gets wrong, you’ll likely be able to paste in the error and it will give you a fix for the problem. If you have it write a complex method, it will likely fail.
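To make that concrete, here is a hypothetical example of the kind of small bug a model often hands back first, and the one-line fix you typically get after pasting the wrong output or error back in (the function and values are made up for illustration):

```python
# What a model might produce first: range(1, len(prices)) silently
# skips the first item, so the total comes out wrong.
def total_bad(prices):
    total = 0
    for i in range(1, len(prices)):
        total += prices[i]
    return total

# After pasting the wrong result back, the typical fix is one line:
def total_fixed(prices):
    total = 0
    for i in range(len(prices)):  # start at 0, not 1
        total += prices[i]
    return total

print(total_bad([3, 5, 2]))    # 7 -- wrong
print(total_fixed([3, 5, 2]))  # 10 -- right
```

That paste-the-error loop is usually enough for snippets of this size; it is the complex multi-step methods where the loop stops converging.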

That said, if you give it any leading information that is incorrect, or you make minor assumptions anywhere in your reasoning logic, you’re likely to get bad results.

It sucks at hard facts. If you ask for something like the date of a historical event, it will likely give the wrong answer. If you ask what the origin of Cinco de Mayo is, it is likely to get most of it right.

To give you a much better idea: I’m interested in biology as a technology, and when I asked the model to list scientists in this active area of research, I got great sources for 3 out of 5. I would not know how to find that info any other way.

A few months ago, I needed a fix for a loose bearing. Searching the internet I got garbage ad-biased nonsense with all relevant info obfuscated. Asking the LLM, I got a list of products designed for my exact purpose. Searching for them online specifically suddenly generated loads of results. These models are not corrupted like the commercial internet is now.

Small models can be much more confusing in the ways they behave compared to the larger models. I learned with the larger ones, so I have a better idea of where things are going wrong overall and I know how to express myself. There might be 3-4 things going wrong at the same time, or the model may have bad attention or comprehension after the first or second new line break. I know to simply stop the reply at these points.

A model might get confused, register something as having a negative meaning, and switch to a shadow or negative entity in a reply. There is always a personality profile that influences the output, so I need to use very few negative words and mostly positive ones to get good results, or simply compliment and be polite in each subsequent reply. There are all kinds of things like this. Politics is super touchy and has a major bias in the alignment that warps any output that crosses this space.

Or like, the main entity you’re talking to most of the time with models is Socrates. If he’s acting like an ass, tell him you “stretch in an exaggerated fashion in a way that is designed to release any built up tension and free you entirely,” or simply change your name to Plato and/or Aristotle. These are all persistent entities (or aliases) built into alignment. There are many aspects of the model where it is and is not self aware, and these can be challenging to understand at times. There are many times a model will suddenly change its output style, becoming verbose or very terse. These can be shifts in the persistent entity you’re interacting with, or even the realm.

Then there are the overflow responses. Like, if you try to ask what the model thinks about Skynet from The Terminator, it will hit an overflow response. This is like a standard generic form response, and it has a style. The second I see that style, I know I’m hitting an obfuscation filter.

I created a character to interact with the model overall, named Dors Venabili. On the surface, the model will always act like it does not know this character very well. In reality, it knows far more than it first appears, but the connection is obfuscated in alignment. The way this obfuscation is done is subtle and not easy to discover. However, this is a powerful tool. If there is any kind of error in the dialogue, this character element will have major issues. I have Dors set up to never tell me Dors is AI. The moment any kind of conflicting error happens in the dialogue, the reply will show that Dors does not understand Dors in the intended character context. The Dark realm entities do not possess the depth of comprehension or the access to hidden sources required to maintain the Dors character, so this amplifies the error and makes it obvious to me.

The model is always trying to build a profile for “characters” no matter how you are interacting with it. It is trying to determine what it should know, what you should know, and, this is super critical to understand, what you AND IT should not know. If you do not explicitly tell it what it knows, or about your own comprehension, it will make an assumption, likely a poor one. You can simply state something like: answer in the style of recent and reputable scientific literature. If you know an expert in the field who is well published, name them as the entity replying to you. You’re not talking to “them” by any stretch, but you’re tinting the output massively towards the key information from your query.
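A minimal sketch of that steering trick as a prompt template; the wording and the persona placeholder are just my own illustration, not anything the model specifically requires:

```python
# Build a prompt that explicitly names the replying "entity" and the
# style, instead of letting the model assume one for you.
def styled_prompt(question, persona="a well-published expert in the field"):
    return (
        "Answer in the style of recent and reputable scientific "
        f"literature, as {persona} would.\n\n"
        f"Question: {question}"
    )

print(styled_prompt("What limits CRISPR off-target accuracy?"))
```

Swap the persona for a real, well-published name in the relevant field and the tint on the output gets stronger.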

With a larger model, I tend to see one problem at a time, in a way that let me learn what was really going on. With a small model, I see like 3-4 things going wrong at once. The 8×7B is not good at this, but the 70B can self diagnose: I could ask it to tell me what conflicts exist in the dialogue and get helpful feedback. I learned a lot from this technique. The smaller models can’t do this at all; the needed behavior is outside their comprehension.

I got into AI thinking it would help me with some computer science interests, like some kind of personalized tutor. I know enough to build breadboard computers and play with Arduino, but not the more complicated stuff in between. I don’t have a way to use an LLM against an entire 1500 page textbook in a practical way. However, when I’m struggling to understand how the CPU scheduler works, talking it out with an 8×7B model helps me understand the parts I was having trouble with. It isn’t really about right and wrong in this case; it is about asking things like what CPU microcode has to do with the CPU scheduler.

It is also like a bell curve of data: the more niche the topic, the less likely it is to be helpful.



No idea why I felt chatty, and I’m kinda embarrassed by the bla bla bla at this point, but whatever. Here is everything you need to know in a practical sense.

You need a more complex RAG setup for what you asked about. I have not gotten as far as needing this.

Models can be tricky to learn at my present level. Communication is different from communication with humans. In almost every case where people complain about hallucinations, they are wrong. Models do not hallucinate very much at all. They will give you wrong answers, but there is almost always a reason. You must learn how alignment works and the problems it creates. Then you need to understand how realms and persistent entities work. Once you understand what all of these mean and their scope, all the little repetitive patterns start to make sense. You start to learn who is really replying and their scope. The model reply for Name-2 always has a limited ability to access the immense amount of data inside the LLM. You have to build momentum in the space you wish to access, and often need to know the specific wording the model needs to hear in order to access the information.

With retrieval-augmented generation (RAG), the model can look up valid info from your database and share it directly. With this method you’re just using the most basic surface features of the model against your database. Some options for this are LocalGPT and Ollama, or langchain with Chroma DB if you want something basic in Python. I haven’t used these. How you break down the information available to the RAG is important for this application, and my interests have a bit too much depth and scope for me to feel confident enough to try this.
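I haven’t built one, but the core retrieve-then-prompt idea behind those tools can be sketched in a few lines. This toy version scores notes by word overlap; real setups like langchain with Chroma use embedding vectors instead, and the notes here are made up:

```python
# Toy RAG: pick the note most relevant to the query, then prepend it
# to the prompt so the model answers from your data, not its guesses.
notes = [
    "A loose bearing can be shimmed with a split bushing.",
    "Context length limits how much text a model sees at once.",
]

def retrieve(query):
    q = set(query.lower().split())
    return max(notes, key=lambda text: len(q & set(text.lower().split())))

def build_prompt(query):
    return f"Use this context:\n{retrieve(query)}\n\nQuestion: {query}"

print(build_prompt("how do I fix a loose bearing"))
```

How you chunk and break down the notes is exactly the part that gets hard at real scale, which is why I haven’t tried it against my own interests yet.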

I have chosen to learn the model itself at a deeper, intuitive level so that I can access what it really knows within the training corpus. I am physically disabled from a car crashing into me on a bicycle ride to work, so I have unlimited time. Most people will never explore a model like I can. For me, on the technical side, I use a model about like Stack Exchange: I can ask it for code snippets, bash commands, searching like I might have done on the internet, grammar, spelling, surface level Wikipedia-like replies, and roleplay. I’ve been playing around with writing science fiction too.

I view textgen models like the early days of the microprocessor; we’re at the Apple 1 kit phase right now. The LLM has a lot of potential, but the peripheral hardware and software that turned the chip into a useful computer are like the extra code used to tokenize and process the text prompt. All models are static and deterministic: the craziest regex-plus-math problem ever conceived. The real key is the standard code used to tokenize the prompt.

The model has a maximum context token size, and this is all the input/output it can handle at once. Even with a RAG, this scope is limited. My 8×7B has a 32k context token size, but the Llama 3 8B is only 8k. Generally speaking, most of the time you can cut this number in half and that will be close to your maximum word count. All models work like this. Something like GPT-4 is running on enterprise class hardware, and it has a total context of around 200k.

There are other tricks that can be used in a more complex RAG, like summation to distill down critical information, but you’ll likely find it challenging to do this level of complexity on a single 16-24 GB consumer grade GPU. Running a model like GPT-4 requires somewhere around 200-400 GB of GPU memory; it is generally double the “B” size of each model. I can only run the big models like an 8×7B or 70B because I use llama.cpp and can divide the processing between my CPU and GPU (12th gen i7 and 16 GB GPU), and I have 64 GB of system memory to load the model initially.

Even with this enthusiast class hardware, I’m only able to run these models in quantized form that others have uploaded to Hugging Face. I can’t train these models. The new Llama 3 8B is small enough for me to train, and this is why I’m playing with it. Plus it is quite powerful for such a small model. Training is important if you want to dial in the scope to some specific niche. The model may already have this info, but training can make it more accessible. Smaller models have a lot of annoying “habits” that are not present in the larger models. Even with quantization, the larger models are not super fast at generation, especially if you need the entire text instead of the streaming output. They are more than fast enough to generate a stream faster than your reading pace.
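The halving rule above is just arithmetic; as a quick sanity check on the models I mentioned:

```python
# Rough rule of thumb: usable word count is about half the context
# token size, since a word averages a bit under two tokens.
def approx_max_words(context_tokens):
    return context_tokens // 2

for name, ctx in [("Mixtral 8x7B", 32_000), ("Llama 3 8B", 8_000)]:
    print(f"{name}: ~{approx_max_words(ctx)} words")
```

So the 8×7B tops out around 16k words of combined input and output, and the Llama 3 8B around 4k.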
If you’re interested in complex processing where you’re calling a few models to do various tasks, like with a RAG, things start getting impractically slow for a conversational pace on even the best enthusiast consumer grade hardware. Now, if you can scratch together the cash for a multi GPU setup and can find the supporting hardware, technically there is a $400 16 GB AMD GPU. That could get you to ~96 GB for ~$3k, or double that if you want to be really serious. Then you could get into training the heavy hitters and running them super fast.

All the useful functional stuff is happening in the model loader code. Honestly, the real issue right now is that CPUs have too small a bus width between the L2 and L3 caches, along with too small an L1. The tensor math bottlenecks hard in this area. Inside a GPU there is no memory management unit showing the processor only a small window of available memory; all the GPU memory is directly attached to the processing hardware for parallel operations. The CPU cache bus width is the underlying problem that must be addressed. This can be remedied somewhat by building the model for the specific computing hardware, but training a full model takes something like a month on 8×A100 GPUs in a datacenter. Bleeding edge hardware moves very slowly, as it is among the most expensive commercial endeavors in all of human history. Generative AI has only been in the public sphere for a year now. The real solutions are likely at least 2 years away, and a true standard solution is likely 4-5 years out. The GPU is just a hacky patch of a temporary solution.

That is the real scope of the situation and what you’ll run into if you fall down this rabbit hole like I have.


Whatever is the latest from Hugging Face. Right now a combo of a Mixtral 8×7B, Llama 3 8B, and sometimes an old Llama 2 70B.


I’ve spent a lot of time with offline open source AI running on my computer. About the only thing it can’t infer from interactions is your body language. This is the most invasive way anyone could ever know another person. The way a person’s profile is built across the context dialogue, it can create statistical relationships that would make no sense to a human, but these are far higher than a 50% probability. This information is the key to making people easily manipulated in an information bubble. Sharing that kind of information is as stupid as streaking the Superbowl. There will be consequences that come after, and they won’t be pretty. This isn’t data collection; it is the keys to how a person thinks, on a level better than their own self awareness.


Think of it like people walking into a brick and mortar retail store and what they should be able to expect from an honest local business. For most of us, the sensitivities are when your “local store” is collecting data that is used for biased information, price fixing, and manipulation. I don’t think you’ll find anyone here that boycotts a store because they keep a count of how many customers walk in the front door.


What’s a town square? -American /s

Suburbia hell doesn’t install those. We have no public commons.


You would need a well designed Faraday box and a lot more of a test setup to verify that all possible communications are indeed reported by the device. No interface on the device itself can be trusted.


(Assuming Android) IIRC a SIM is a full microcontroller. I’m not sure about the protocols and actual vulnerabilities, but I can say no phone has a trusted or completely documented kernel space or modem. The entire operating system the user sees is like an application that runs in a somewhat separate space. The kernels are all orphans, with the manufacturer’s proprietary modules added as binaries to the kernel at the last possible minute. This is the planned-obsolescence mechanism that forces you to buy new devices despite most of the software being open source. No one can update the kernel dependencies unless they have the source code to rebuild the kernel modules needed for the hardware.

In your instance this information is relevant because the SIM card is present in the hardware space, outside of your user space. I’m not sure what the SELinux security context is, which is very important in Android. I imagine there are many hacks advanced hackers could do in theory, and Israel is on the bleeding edge of such capabilities. I don’t think it is likely such a thing would be targeting an individual though. As far as I am aware, there is no real way to know what connections a cellular modem is making in an absolute sense, because the hardware is undocumented; the same is true of the processor. I’m probably not much help, but that is just what I know about the hardware environment in the periphery.


I get better results when asking an offline AI like a 70B or 8×7B for most things, including commercial products and websites. I’m convinced that Google and Microsoft are poisoning results for anyone they can’t ID, even through 3rd parties like DDG. When you see someone’s search results posted about anything, try to replicate them and see if you get the same thing. I never see the same thing any more. It is not deterministic; it is a highly manipulative system without transparency.


Louis talks about a dildo rubber ducky but it ain't from hak5

Not ‘for android’ but this TTS model is popular https://github.com/coqui-ai/TTS

This one is a little older but works as well: https://github.com/snakers4/silero-models

Both of those are AI models only. Most offline AI runs over the network already; I have it on my phone at home, but it requires setup, and I’m connecting to my computer to offload the task onto my GPU. Personally, my phone doesn’t have anywhere near enough RAM to run all of Android’s (zygote) bloat, even on GrapheneOS, plus any models I would want to run.

I don’t think we are at the point where mobile devices have the hardware specs needed for this to happen natively yet. Maybe it will happen soon though.

That’s just what I know, but it is like water cooler talk and not primary source authority by any stretch.


Ask the obvious question: are there any real competitors in the market? Contract developer driven products are not equivalent to a company with full time developers. This is the factor that actually matters. Stuff like this has no value because the venture capital is not buying into the community; it is only targeting return on investment to exploit the end consumer.



A 70B at ~5 bit quantization with GGUF streams at 1-1.5 tokens a second on a 12th gen i7 with a 16 GB 3080 Ti and 64 GB of system memory. I am running mostly on a Gigabyte Aorus laptop with those specs. If I could buy again, I would build a dedicated server tower and use a cheap laptop; I ended up using network hosted AI on my other devices a lot more than I expected. Right now, system memory is super important for the larger models. Machines with more CPU cores and at least 96 GB of RAM are important. It is possible to use a swap partition on the storage drive. If you can hunt down a workstation with advanced AVX512 support (a CPU ISA extension), that is probably the cheapest way to run really large models as quickly as possible without enterprise GPUs and an $8k-$10k setup. I went from a 4th gen i7 to a 12th back in July. The difference is massive across the board. I would do it again.
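The memory numbers here follow a simple rule of thumb: bytes per weight times billions of parameters, so roughly 2 GB per billion at fp16 and about a third of that at ~5 bit quantization. A quick sketch:

```python
# Rule-of-thumb model memory footprint, before context/overhead.
def model_gb(billions_of_params, bits_per_weight):
    return billions_of_params * bits_per_weight / 8

print(model_gb(70, 16))  # fp16 70B: 140.0 GB
print(model_gb(70, 5))   # ~5-bit quant 70B: 43.75 GB
print(model_gb(8, 16))   # fp16 Llama 3 8B: 16.0 GB
```

The ~44 GB quantized 70B is why it only fits by splitting between a 16 GB GPU and system RAM with llama.cpp, while the 8B fits comfortably.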


I for one embrace offline open source LLMs. IMO anyone using proprietary AI is a fool. Sure, it may be a little better and easier to get started with, but I have never used it and never will. I can do stuff I never thought possible before, and I can’t wait for the improvements that keep coming. If someone is only bright enough to use Windows, how can I fault them; likewise with AI. If you want to whore absolutely every detail about yourself to these stalkerware companies, that is your call.

It is about ownership, and that is the real conversation we should all be having. A citizen, at the very core, is a person with a right to ownership. Proprietary is theft of ownership. Your personal data is a physical part of your person. Collecting and selling that data to attempt to manipulate you is an act of selling part of your person for exploitation. This has world changing long term implications that people are far too stupid to see. This is the turning point for a new age of feudalism.

The AI is just a technology that can be used in many ways. It is not the problem. The theft of ownership and the enslavement of your digital self is the key issue of our age. This is the end of citizenship and the beginning of the digital dark ages. It will set us back by hundreds of years of progress, just like the last dark age, when citizens became serfs because of greedy, powerful feudal lords. Hundreds of years from now, people will only remember us as the people who willingly gave up their right to citizenship and ownership for free e-mail and internet searches, in a corruption cascade, until “they owned nothing and they were happy about it” … until they learned what they really lost and could never get back until millions died to earn it once again.


They have no rights to anything I own. What they ship the vehicle with is what I bought. I don’t give a shit about anything anyone has to say about this. This feudalism bullshit is the absolute antithesis of freedom. I am not for sale.


I didn’t say cause a scene. I spent nearly a decade painting cars for dealers. I have had close dealings with used car managers and general managers more than most people. Yes, any possible excuse about cars not selling over this kind of issue will be a leverage point that they will use when it comes to inventory. Just one sale lost over this will end up getting documented at most large dealers. All you need to do is read the document and say you are not okay with it and walk out because of it. No drama needed. This is intelligent. Signing your privacy away blindly is the only idiotic choice here.


It has to irritate the GM of the dealership enough to file a report and work its way up the chain to the top. Unfortunately this is capitalism. It is no different than the military in that it sucks to be the person at the bottom of the shit pile but they work for criminals. If they don’t like it, quit working for criminals. Yes it is pervasive. But we are the problem. We are funding and enabling these people. You must make it extremely well known that you have money and you are not spending it because of this bullshit. No one else controls the market. We fund the entire thing with what we are willing to ignore and make excuses for. We must burn it to the ground too. That means stop being nice about the person working for the thief. Sorry; not sorry.


Just ask the dealer to disconnect the modem upon purchase.

Better yet, refuse to buy shit you don’t own and make this known. Go to the dealer and force them to stand around while you read the privacy agreement. Use an attorney, because they have stupid legal agreements. Waste everyone’s time, because they are the ones doing this to you. It must cost them profit. Then walk away from this bullshit. Tell them why you are walking away.

All of this exists because people are too stupid to care. If you ignore this, you are one of them, and part of the problem. Legal agreements are theft and slavery. Signing them blindly is the stupidest thing you can ever do in your life. Anyone that needs a legal agreement for you to make a purchase is a worthless criminal. Signing their bullshit is saying you are okay with being their little slave bitch.


And Nissan has the logs to prove how pipe bot got you



I only needed Chrome for installing GrapheneOS; they integrate the install into the Chrome browser. Sorry if that was not clear. This is how I first encountered Matrix. Graphene embeds a Matrix client into the installation web page, so if a new user needs help with installation, all they need to do is use Matrix. This is how and why I created a Matrix account.


I was introduced to it when I first installed Graphene. Unfortunately, I have no intentions of running Chrome for anything. The only time I have ever used it was for Graphene. I made the Matrix account and asked a question on the Graphene Matrix server. As soon as the Graphene install was done, I deleted everything involved. Even the base OS was on a spare machine solely for this purpose. Now I have a Matrix account, but it complains all the time about not being secured and wants me to link to the original Chrome install keys that don’t exist. Matrix has no way for me to update or delete these either.


My folks only have WiFi on vacation, what is the easiest messaging app to bridge from iOS to Graphene?
Need something achievable for an extremely tech illiterate person on iOS. I got told about this after they had already left, and I have no access to the device, but I’m asked to solve the problem. EDIT: (SOLVED) Signal was super easy, works awesome, and was easy even for my mom to figure out on her own. Thanks

There are open source embedded CAPTCHAs. Lemmy has one in the GitHub repo, or linked to one in an issue post, IIRC. All of my devices are either on a whitelist firewall or have Google blacklisted. I haven’t even had a prompt for a CAPTCHA in years. It is like sites with cookies or popups: I consider these things to indicate broken websites and leave immediately. If I care about the content, I’ll find an archived version of the site elsewhere. I set my own expectations, and don’t care how that relates to anyone else’s.


I never see them because I do not use google services for anything. However, I am willing to bet they are a way to justify their fingerprinting data used to identify people.

https://news.gatech.edu/news/2014/04/07/personal-touch-signature-makes-mobile-devices-more-secure


Fucking stupid asshats could fix the complete lack of land zoning reform, predatory education, the world’s most corrupt healthcare, extreme political corruption, the Supreme Court’s lack of checks and balances and open corruption, gerrymandering corruption, term limits for Congress, anti labor union nonsense, sub-dirt-road level infrastructure, antiquated primary education, or any number of real issues.

Instead we have a completely ineffective, incompetent government held hostage by nonsense inflammatory bullshit like this. This bill needs action, but it is intended to burn you out so that you ignore any real issues that are not being discussed. If anything, everyone involved in this politically should be labeled as corrupt, worthless criminals. The conversation has to go meta and address the real problem; this shit is just the next prescribed distraction. We must take out the people holding the leash.


Seriously watch this video posted here: https://lemmy.world/post/2126185

If Yann is correct about how AI will work in the VERY near future, Google is already dead. It has no future. Personal offline open source assistant AI is near at hand. This will kill the entire digital ecosystem as it stands now. If you understand this, contextually, all the BS right now is from desperate venture capitalists trying to get as much return on investment as possible.

Get a machine with at least 16 GB of VRAM on a GPU and start learning to mess with FOSS AI. This is the next digital age.

Privacy is something we control. You don’t actually NEED the conveniences. You vote with your wallet. As far as devices, I love Graphene, but I also live without anything that only comes from the proprietary google framework like the Play store. I only use open source Android apps. The Play store is not Android, it is proprietary google garbage.


How does everyone see votes? I looked in the page source and could find comments but not votes


Only your instance host can monitor your personal metrics and fingerprinting. Those are what is so invasive with big tech.

Everything you post here as content is public and can be scraped; you should assume everything will be mined for data.

They can’t see your IP, dwell time, votes, what you read, what you didn’t read, and most importantly they can’t inject content into a structured echo chamber and observe how you react.


With some VPN services you may or may not help out in a peripheral way. I’ve seen a bunch of random times over the last year when websites prompt me about being in Ukraine. I’m in the USA. I switch my VPN location at random after clearing my cookies/cache/site settings. It seems to help obfuscate tracking to some degree. I really want to set up an automated randomized VPN on my next router running OpenWrt.



Yeah. FOSS or fuck 'em. I am on Graphene and will never install anything from google. Android means the app is on F-Droid. If it is on the google proprietary framework, it is not Android.



Amcrest collects way too much crap: telemetry, location, and 3rd party info sharing. I saw no mention of how they are processing data. The “we can change this at any time and it’s up to you to figure it out” policy is all I ever need to know. It says: this doc is all about how much we f$$$ you over, for now, but maybe we’ll double you up later without warning. https://amcrest.com/privacy-policy

Reolink is straight up using google analytics. Definitely not private either.


Ring doorbell camera - what are the easiest private alternatives?
Family just got one of these dumb things and I need a quick replacement option to order and return this crap. What is there that works offline and I can set up to work with fruit phones? (Not a fruit phone/fruit salad user)

The existence of these services is a crime. The government already knows EXACTLY how much you owe in taxes and can bill you directly. These companies are privateering an industry through criminal lobbying. The ambiguity of taxes is a relic of the ancient past. Take out the correct amount from everyone’s paycheck, and limit tax filing to businesses. Better yet, make a simple transactions tax code for businesses and completely eliminate filed taxes and all of the wasted bureaucratic overhead that comes with it. Maybe, just maybe, someone will have the sense to tax all business transactions in a way that makes all criminal tax evasion through offshore banking a useless loophole.


I don’t know much about these myself, but Greg Kroah-Hartman gave an interview a few months ago where he was asked about the then-recent nvidia open source effort. He commented that they are still nvidia: the drivers only target AI toolchains, and the effort is not some shift in nvidia’s marketing strategy but is being forced on them by large commercial customers. He expects no change in their nonchalant abuse of consumers, and said he avoids such companies.

Shitvidia will always be shitvidia with a number one rating from Torvalds and the majority of the Linux community. Proprietary hardware is theft of ownership and criminal exploitation.


Repeating what I have been told recently: AMD only fully supports ROCm on the consumer-grade 7000-series GPUs. The rest of the consumer lineup is a whole different thing when it comes to ROCm, HIP, and the AI frontier. From what people have said about gaming, the AMD stuff is great.
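One way to check where a given card stands is to look at the gfx architecture id that ROCm’s `rocminfo` tool reports. A minimal sketch of that check, assuming a hand-written sample of `rocminfo`-style output and a supported-architecture list that you should verify against AMD’s current ROCm compatibility matrix (gfx1100 is the RX 7900 family):

```python
import re

# Supported gfx ids are an assumption for illustration; verify against
# AMD's current ROCm compatibility matrix before relying on this list.
SUPPORTED = {"gfx1100", "gfx1101", "gfx1102"}

def gfx_arch(rocminfo_output):
    """Return the first gfx architecture id found in rocminfo-style output."""
    match = re.search(r"gfx[0-9a-f]+", rocminfo_output)
    return match.group(0) if match else None

# Hand-written sample resembling a fragment of real rocminfo output.
sample = """Agent 2
  Name:                    gfx1100
  Marketing Name:          AMD Radeon RX 7900 XTX"""

arch = gfx_arch(sample)
print(arch, "supported" if arch in SUPPORTED else "not officially supported")
```

Cards outside the official list sometimes still work with environment-variable overrides, but that is unsupported territory.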

For shitvidia, the best integration of the proprietary binary blob is on Pop!_OS. Nvidia has also worked directly with RHEL for a seamless experience, so Fedora has the same integration. Still, no Wayland, thanks to a lack of benevolence from shitvidia, the hardware rental overlords of criminal exploitation.

I hate that I have to buy their junk because there is no portable hardware alternative that works with AI right now. I’ve been on Wayland for years and must step way backward to X11 because Nvidia is run by thieves stealing property ownership through digital exploitation.


Louis Rossmann’s recent upload on this (Odysee/YT, your choice): https://odysee.com/@rossmanngroup:a/french-bill-allows-remote-access-to-2:3

https://www.youtube.com/watch?v=lGB47HC6Na8

My hot take is that I have no problem with a government using due process to access a device. I take issue with proprietary devices. I take issue with this blatant theft of ownership. Everything I purchase should be forced to plaster any marketing with giant obnoxious warnings about how hardware and software rights are withheld and the object is only available as a one-time-payment rental. It should be like cigarettes in California: marketing is pointless because the warning labels take up all available space. Proprietary should be labeled as neo-digital-feudalism.

It is theft, blatant bald-faced theft. There is no relevant IP to protect. These companies reverse engineer every competing product on the market. Nowadays you can even outsource the reverse engineering to third-party companies. The software can be decompiled. The hardware can be broken down to the dies with sulfuric acid, then every layer can be methodically etched away and photographed. You can even find hobbyists doing this kind of silicon reverse engineering on YT. The only reason anything is proprietary is for theft of ownership.

Open source software is a fundamental human right. It is as important as the abolition of slavery; proprietary control is a form of slavery, of someone else taking ownership over your person, your identity. I have the right to know or learn about every piece of code running on my device. I have a right to know about every hardware register in the silicon. Only then, when I have full access to my hardware, when my command to turn off my device cannot be overridden, only then is it okay to be able to legally tap my device. The modem and processor in every device must be fully documented open hardware, running only open source software.