This is a very desktop-centric view. Distros like Fedora target servers, desktops, and other use cases, and each has a separate, competing set of expectations.
For example, servers want everything using a common set of libraries so security patches only need to happen in one place. Desktops, however, are more interested in getting the latest updates quickly with minimal effort, and security is a bit less important. And then you have cases like kiosks where security and rapid updates aren’t particularly important; they want an immutable image so the thing has minimal chance of breaking.
My main complaint at a high level is that Flatpaks tend to use a lot more disk space, but storage is relatively cheap these days, so that’s a pretty minor concern. My more practical issue is that switching from repo-managed packages to Flatpaks is a bit of a pain, since that release channel often isn’t well tested, though that’s not a technical fault of Flatpak itself. The practical result is a weird mix of repo packages and Flatpaks, which is harder to maintain than committing to one or the other.
This is one of the things I never really got about the Linux community: first people are really eager to invite Windows users over to the Linux side (who are regular desktop users for the most part), then when those people raise criticism about matters like these, it’s always met with “but server usage”. Why is the average desktop user expected to care about servers?
For servers there’s Docker/Kubernetes/Podman, which is well-established and serves a similar purpose as Flatpak on the desktop. Containers actually became popular on servers first.
90 % or more of my desktop apps (Fedora Kinoite and Silverblue) are already Flatpaks. I only have four rpm-ostree overlays (native packages) left: android-tools, brasero/k3b, syncthing (I could switch to the SyncThingy Flatpak) and virt-manager/virtualbox.
With Flatpak there is “flatpak override”, which lets you grant additional permissions or restrict them even further. E.g. I use it to connect KeePassXC with Firefox, or to disallow access to the X server so that almost all apps are forced to use Wayland instead of X. It also lets me prevent apps from creating and writing to arbitrary directories in my home.
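For reference, overrides along those lines can be set from the command line. This is just a sketch; the app IDs are examples, not necessarily the exact apps involved:

```shell
# Deny X11 socket access so the app has to fall back to Wayland
flatpak override --user --nosocket=x11 org.mozilla.firefox

# Revoke blanket home-directory access, then re-grant a single directory
flatpak override --user --nofilesystem=home --filesystem=~/Downloads org.gnome.TextEditor

# Inspect the effective overrides for an app
flatpak override --user --show org.mozilla.firefox
```

Per-user overrides like these live under `~/.local/share/flatpak/overrides/` and layer on top of the permissions the app shipped with.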
Once I reinstall my home server, all its server software will be containerised as well (five years ago I didn’t see the necessity yet). I am tired of having to manage dependencies with every (Nextcloud) upgrade. I want something that can auto update itself completely with minimal or no breakage, just like my desktops.
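That kind of self-updating setup is commonly sketched with Docker Compose plus an auto-updater such as Watchtower. A minimal, hypothetical `docker-compose.yml` (database, networking and backups deliberately left out):

```yaml
# Nextcloud plus Watchtower, which polls the registry for new image
# tags and restarts containers on the updated image automatically.
services:
  nextcloud:
    image: nextcloud:stable
    ports:
      - "8080:80"
    volumes:
      - nextcloud-data:/var/www/html
  watchtower:
    image: containrrr/watchtower
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
volumes:
  nextcloud-data:
```

Pinning to a `stable` tag rather than `latest` is one way to get the “minimal breakage” part of the bargain.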
For servers there’s Docker/Kubernetes/Podman, which is well-established and serves a similar purpose as Flatpak on the desktop
Originally, containerization was for security sandboxing: if one service gets exploited, the attacker only has access to the unprivileged container runtime, not the wider system, and simplified deployment was a nice side effect. I have serious concerns that the shift to containers means security updates will be applied less frequently, because updating now has to be done separately for many more services.
For example, the app I work on gets delayed security updates because we have to make similar changes for each of our microservices, which is a fair amount of effort and a relatively low-priority task. If we had a cluster of similarly configured servers, it would be as simple as updating system libraries; but since everything is wrapped in a container, each image needs to be rebuilt and redeployed. As it stands, our software stack has a number of security advisories flagged by our container hosting service (none seem realistically exploitable), but they are largely being ignored because of the effort required to keep everything updated.
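The asymmetry is roughly this (a sketch with made-up service and registry names; the traditional path is one package upgrade, the containerised path is one rebuild and redeploy per service):

```shell
# Traditional fleet: one command patches the shared libraries everywhere
sudo apt-get update && sudo apt-get upgrade -y

# Containerised fleet: every image must be rebuilt against a patched
# base (--pull re-fetches it) and pushed out, one service at a time
for svc in auth billing gateway; do
  docker build --pull -t registry.example.com/$svc:latest ./$svc
  docker push registry.example.com/$svc:latest
done
```

Each rebuilt image then still has to be rolled out to whatever runs it, which is where the per-service effort really adds up.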
Flatpak/containers largely revive the old static vs. dynamic linking debate, except with a sandbox layer added to keep an exploited app contained.
Once I reinstall my home server, all its server software will be containerised
Same. However, that’s because I care a bit more about ease of (re)deployment and less about exploits, because my home server isn’t particularly critical, and certainly not a big target for attackers. I am more likely to migrate to new hardware than to need to pass a security audit.
The fact that apps can be deployed at different paces is definitely a double-edged sword. On one hand, it prevents an app that deprioritizes a fix from holding back other apps on the same system, so everything else can update ASAP. On the other hand, it means the slower-updating apps face less community/business pressure to get fixed.
@sugar_in_your_tea @fossisfun It’s also just straight-up more daunting to update an application running inside a container, and a lot harder to troubleshoot when it goes wrong.
Yeah, I can see that, especially for an end-user. But as a developer deploying my code somewhere, it’s not that much different, provided logging is configured.