Hastily read around in the related issue threads, and it seems like on its own vm.max_map_count doesn't do much… as long as apps behave. It's some sort of "guard rail" which prevents processes from getting too many "maps". Still kinda unclear what these maps are and what happens if a process gets to have excessive amounts.

That said: https://access.redhat.com/solutions/99913

According to kernel-doc/Documentation/sysctl/vm.txt:

This file contains the maximum number of memory map areas a process may have. Memory map areas are used as a side-effect of calling malloc, directly by mmap and mprotect, and also when loading shared libraries.

While most applications need less than a thousand maps, certain programs, particularly malloc debuggers, may consume lots of them, e.g., up to one or two maps per allocation.

The default value is 65530.

Lowering the value can lead to problematic application behavior because the system will return out of memory errors when a process reaches the limit. The upside of lowering this limit is that it can free up lowmem for other kernel uses.

Raising the limit may increase the memory consumption on the server. There is no immediate consumption of the memory, as this will be used only when the software requests, but it can allow a larger application footprint on the server.

So, at the risk of higher memory usage, applications can go wroom-wroom? That's my takeaway from this.
edit: ofc. I pasted the wrong link first. derrr.
edit: Suse’s documentation has some info about the effects of this setting: https://www.suse.com/support/kb/doc/?id=000016692
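For the curious, here's what hitting the limit actually looks like. A toy C program of my own sketching (not from any of the links above): it maps anonymous pages one at a time with alternating protections, so the kernel can't merge neighbouring map areas, and every mmap() call adds one map until the limit bites.

```c
/* Toy demo, my own sketch: create unmergeable anonymous mappings
 * until the kernel says no. Each page gets a protection different
 * from its neighbour, so adjacent areas can't be coalesced and
 * every mmap() call costs one map area. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    long page = sysconf(_SC_PAGESIZE);
    long count = 0;
    for (;;) {
        /* alternate protections to defeat map-area merging */
        int prot = (count & 1) ? PROT_READ
                               : PROT_READ | PROT_WRITE;
        void *p = mmap(NULL, (size_t)page, prot,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) {
            /* the "out of memory errors" vm.txt talks about:
             * ENOMEM from mmap, not the OOM killer */
            printf("mmap failed after %ld extra maps: %s\n",
                   count, strerror(errno));
            return 0;
        }
        count++;
    }
}
```

With the default limit of 65530 this dies with ENOMEM long before any real memory pressure; with the limit raised sky-high it would just keep going until something else gives out (each map area costs a bit of kernel memory too).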
My read is that it matters for servers where a large number of allocations could indicate a bug/denial of service, so it’s better to crash the process.
That’s not relevant on a gaming system, since you want one process to be able to use all the resources.
no, it’ll go vroom-vroom
Just checked and the Steam Deck has it set to 2147483642. My Gentoo systems are 65530.
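For anyone else checking along: the value lives in /proc/sys/vm/max_map_count, and /proc/self/maps lists one line per map area a process holds. A small C sketch of my own (nothing Deck-specific assumed) that prints both:

```c
/* Read the system-wide limit and count this process's own map
 * areas; /proc/self/maps has one line per map area. */
#include <stdio.h>

static long line_count(const char *path) {
    FILE *f = fopen(path, "r");
    if (!f) return -1;
    long n = 0;
    for (int c; (c = fgetc(f)) != EOF; )
        if (c == '\n') n++;
    fclose(f);
    return n;
}

int main(void) {
    long limit = -1;
    FILE *f = fopen("/proc/sys/vm/max_map_count", "r");
    if (f) {
        if (fscanf(f, "%ld", &limit) != 1) limit = -1;
        fclose(f);
    }
    printf("vm.max_map_count: %ld\n", limit);
    printf("map areas in this process: %ld\n",
           line_count("/proc/self/maps"));
    return 0;
}
```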
On one hand, I'd assume Valve knows what they're doing, but setting the value that high seems like it's effectively removing the guardrail altogether. Is that safe? Also, what's the worst that can happen if an app starts using maps in the billions?
The whole point is to prevent one process from using too much memory. The whole point of the Steam Deck is to have one process use all the memory.
So it makes sense to keep it relatively low for servers where runaway memory use is a bug that should crash the process, but not in a gaming scenario where high memory usage is absolutely expected.
OOM killer is what happens. But that can happen with the default setting as well.
no arguments there. Still, I kinda feel that raising the limit high enough to effectively turn it off is probably a bit overboard. But if it works, it works; the kernel devs probably put the limit in place for a reason too.
@Malix @psycho_driver From what I can remember, this limitation (which either Fedora or Nobara overturned a year ago) was set way before video games that take up a lot of memory were a thing.
I got curious and decided to check this out. This value was set to the current one in 2009: https://github.com/torvalds/linux/commit/341c87bf346f57748230628c5ad6ee69219250e8. The reasoning makes sense, but I guess it's not really relevant to our situation, and according to the newest version of the comment, 2^16 is not a hard limit anymore.
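Side note on the two numbers in this thread, just my own arithmetic rather than anything stated in the commit: both limits sit exactly five below a C type maximum, which matches the kernel defining the old default as USHRT_MAX - 5 (if I'm reading the source right), and makes the Steam Deck value look like the same pattern applied to a 32-bit int:

```latex
\begin{aligned}
65530      &= (2^{16} - 1) - 5 = \mathtt{USHRT\_MAX} - 5\\
2147483642 &= (2^{31} - 1) - 5 = \mathtt{INT\_MAX} - 5
\end{aligned}
```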