The third mitigating feature the article forgot to mention is that tmpfs can get paged out to the swap partition. If you drop a large file there and forget about it, it will all end up in swap when applications demand more memory.
The mentioned periodic cleanup of tmp files is not enabled out of the box in the case of an upgrade from previous Debian versions; see https://www.debian.org/releases/trixie/release-notes/issues.... .
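For anyone enabling it by hand, the cleanup is driven by systemd-tmpfiles; a minimal sketch of a config, using the age values from systemd's upstream defaults (adjust to taste):

```
# /etc/tmpfiles.d/tmp.conf (sketch; ages match upstream systemd defaults)
# Age-based cleanup: /tmp entries after 10 days, /var/tmp after 30 days.
q /tmp 1777 root root 10d
q /var/tmp 1777 root root 30d
```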
Actually quite handy and practical to know about, specifically in the context of a "low end box" where I personally would prefer that RAM exist for my applications and am totally fine with `/tmp` tasks being a bit slow (let's be real, the whole box is "slow" anyway, and slow here is some factor of "vm block device on an ssd" rather than 1990s spinning rust).
I'm surprised to discover that tmpfs hadn't already been used for /tmp for a long time; that part of the change is nice.
But the auto-cleanup feature looks awful to me. Be it desktops or servers, on machines with uptimes of more than a year, I have never seen /tmp filled up just by forgotten garbage. Only occasionally filled by unzipping too big a file or something like that, and that you notice on the spot.
It used to be the place where you could store a cache or other things like that and count on them lasting until the next reboot. Having files there automatically deleted after some arbitrary time looks like a source of random, unexpected bugs.
I don't know where this feature comes from, but when stupid risky things like this show up, I would easily bet that it is yet another systemd "I know best what is good for you" broken feature shoved down our throats...
And if it comes from systemd, expect that one day it will accidentally delete important files of yours, say by following symlinks into your home dir or your NVMe EFI partition...
If I am satisfied with my disk speed, why would I want to use system memory? What are the specific use cases where this is warranted?
I haven't used a non-tmpfs (disk-based) /tmp in over 15 years.
Didn't need it on NetBSD; memory could go to zero and the system would (thrash, but) not crash. When I switched to Linux, the OOM issue was a shock at first, but I learned to avoid it.
I use small form factor computers, with the userland mounted in and running from memory, no swap; I only use long-term storage for non-temporary data.
https://www.kingston.com/unitedkingdom/en/blog/pc-performanc...
I'm still a fan of polyinstantiated /tmp and PrivateTmp (systemd). This may confuse/annoy admins who are not aware of namespaces, but I know that it definitely closes the attack vector of /tmp abuse by bad actors.
https://www.redhat.com/en/blog/polyinstantiating-tmp-and-var...
Files in tmpfs will swap out if your system is under memory pressure.
If that happens, reading the file back is DRAMATICALLY slower than if you had just stored the file on disk in the first place.
This change is not going to speed things up for most users; it will slow them down. Instead of caching important files, you waste memory on useless temporary files. Then the system swaps them out to get the cache back, and then they are really slow to read back.
This change is a mistake.
Why is there no write-through unionfs in Linux? It feels like a very useful tool to have. Does no one else need this? I have half a mind to write one with an NFS interface.
EDIT: Thank you, jaunty. But all of these are device-level. Even bcachefs was block-device level. It doesn't allow a union over a FUSE FS, etc. It seems strange not to have it at the filesystem level.
I feel like this is mixing agendas. Is the goal freeing up /tmp more regularly (so you don't inadvertently rely on it, to save space, etc.) or is the goal performance? I feel like with modern NVMe (or just SSD) the argument for tmpfs out of the box is a hard one to make, and if you're under special circumstances where it matters (e.g. you actually need RAM speeds or are running on an SD card or eMMC) then you would know to use a tmpfs yourself.
(Also, sorry but this article absolutely does not constitute a “deep dive” into anything.)
Using the example from the article, extracting an archive: surely that use case is entirely impossible in-memory? What happens if you're dealing with a not-unreasonable 100 GB archive?
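One escape hatch is to steer the extraction and any scratch files at disk-backed /var/tmp instead; a sketch below, demonstrated with a tiny stand-in archive (all paths and names are illustrative):

```shell
# Sketch: when /tmp is RAM-backed, point TMPDIR (and the extraction target)
# at disk-backed /var/tmp so a huge archive doesn't eat memory.
# Demonstrated with a tiny archive; substitute the real 100 GB one.
workdir=$(mktemp -d /var/tmp/extract-demo.XXXXXX)
echo "payload" > "$workdir/file.txt"
tar -cf "$workdir/archive.tar" -C "$workdir" file.txt

mkdir "$workdir/out"
# TMPDIR steers tools that create scratch files; -C steers the output.
TMPDIR=/var/tmp tar -xf "$workdir/archive.tar" -C "$workdir/out"
cat "$workdir/out/file.txt"
```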
Who runs around with 100 GB+ of swap?!
The part that's more likely to bite people here, and that's easily overlooked, is that files in /var/tmp will survive a reboot but will still be automatically deleted after 30 days.
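If that bites you, systemd-tmpfiles can be told to leave specific paths alone; a sketch of a drop-in (the path glob is hypothetical):

```
# /etc/tmpfiles.d/keep-myjob.conf (sketch; the path glob is hypothetical)
# 'x' excludes matching paths (and their contents) from age-based cleanup.
x /var/tmp/myjob-*
```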
I read the article and I really don't understand. Linux already caches files in RAM when there's memory to spare, so why would you do this?
'systemctl mask tmp.mount' - the most important command to run in these situations.
It's a really bad idea to put /tmp in memory. Filesystems already use memory when available and spill to disk when memory is under pressure. If they don't do this correctly (which they do), then fix your filesystem! That will benefit everything.
This is precisely what /dev/shm is for... And it can be used explicitly without any gotchas. If someone really wanted tmp to be in memory (to reduce SSD endurance writes or speed it up nominally) they can edit their mounts.
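For anyone who does want the in-memory behavior, the opt-in is a single fstab line; a sketch, with an illustrative size cap:

```
# /etc/fstab (sketch; the 2G size cap and options are illustrative)
tmpfs  /tmp  tmpfs  size=2G,mode=1777,nosuid,nodev  0  0
```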
This feels like a very unnecessary change and nothing in that article made a convincing argument for the contrary.
As someone who sort of needs to juice all my ram, this is annoying but at least it can be turned off.
I thought /dev/shm was ramdisk?
Does /dev/shm stay? Surely it does, but it is also capped at 50% of RAM. Does that mean /dev/shm + /tmp can now get to 100% of RAM? Or do they share the same RAM budget?
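As far as I can tell they are separate tmpfs instances, each with its own 50% cap, competing only for the same physical RAM; it's easy to check on a given box:

```shell
# Show the size cap and usage backing each mount. On a default systemd
# setup, /dev/shm and /tmp are independent tmpfs instances, each sized
# at 50% of RAM, so together they can nominally reach 100%.
df -h /dev/shm /tmp
```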
Why this change? Writing to it will be faster than disk, but if RAM is a precious commodity, I'd rather it just be part of the disk I was writing to.
We did this song and dance in RHEL. It's fine. Just use /var/tmp if you need persistent tmp storage. GNOME and X and tmux will not make you swap, and if they do, run Xfce instead.
Swap on servers somewhat defeats the purpose of ECC memory: your program state is now subject to a complex I/O path that is not end-to-end checksum-protected. You also get unpredictable performance.
So typically: swap off on servers. Do they have a server story?