Yep, folks, it’s that time again – time for one of old Grandpa Adam’s tall tales of root causing adventure…
There’s a sort of catch-22 situation in Fedora that has been a personal bugbear for a very long time. It mainly affects Branched releases – each new Fedora release, when it has branched from Rawhide, but before it has been released. During this period the Bodhi update system is in effect, meaning all new packages have to go through Bodhi review before they are included in the composes for the release. This means, in theory, we should be able to make sure nothing really broken lands in the release. However, there’s a big class of really important updates we have never been able to test properly at all: updates that affect the installer.
The catch-22 is this – release engineering only builds install media from the ‘stable’ package set, those packages that have gone through review. So if a package under review breaks the installer, we can’t test whether it breaks the installer unless we push it stable. Well, you can, but it’s quite difficult – you have to learn how to build an installer image yourself, then build one containing the packages from the update and test it. I can do that, but most other people aren’t going to bother.
I’ve filed bugs and talked to people about ways to resolve this multiple times over many years, but a few months back I just got sick of the problem and decided to fix it myself. So I wrote an openQA update test which automates the process: it builds an installer image, with the packages from the update available to the installer image build tool. I also included a subsequent test which takes that image and runs an install with it. Since I already had the process for doing this manually down pat, it wasn’t actually very difficult.
Only…when I deployed the test to the openQA staging instance and actually tried it out, I found the installer image build would frequently fail in a rather strange way.
The installer image build process works (more or less) by creating a temporary directory, installing a bunch of packages to it (using dnf’s feature of installing to an alternative ‘root’), fiddling around with that environment a bit more, creating a disk image whose root is that temporary directory, then fiddling with the image a bit to make it into a bootable ISO. (HANDWAVE HANDWAVE). However, I was finding it would commonly fail in the ‘fiddling around with the environment’ stage, because somehow some parts of the environment had disappeared. Specifically, it’d show this error:
FileNotFoundError: [Errno 2] No such file or directory: '/var/tmp/lorax.q8xfvc0p/installroot//proc/modules'
lorax was, at that point, trying to touch that directory (never mind why). That's the /proc/modules inside the temporary root, basically. The question was, why was it disappearing? And why had neither myself nor bcl (the lorax maintainer) seen it happening previously in manual use, or in official composes?
I tried reproducing it in a virtual machine…and failed. Then I tried again, and succeeded. Then I ran the command again…and it worked! That pattern turned out to repeat: I could usually get it to happen the first time I tried it in a VM, but any subsequent attempts in the same VM succeeded.
So this was seeming really pretty mysterious. Brian couldn’t get it to happen at all.
At this point I wrote a dumb, short Python script which just constantly monitored the disappearing location and told me when it appeared and disappeared. I hacked up the openQA test to run this script, and upload the result. Using the timestamps, I was able to figure out exactly what bit of lorax was running when the directory suddenly disappeared. But…I couldn’t immediately see why anything in that chunk of lorax would wind up deleting the directory.
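For illustration, the watcher could be as simple as this – a sketch from memory rather than my exact script, with a made-up poll interval and log format:

```python
import os
import time
from datetime import datetime

def watch(path, interval=0.5, duration=None, log=print):
    """Poll `path`, logging a timestamped line every time it appears or disappears."""
    present = os.path.exists(path)
    log(f"{datetime.now().isoformat()} initial state: {'present' if present else 'absent'}")
    start = time.monotonic()
    while duration is None or time.monotonic() - start < duration:
        exists_now = os.path.exists(path)
        if exists_now != present:
            word = "appeared" if exists_now else "DISAPPEARED"
            log(f"{datetime.now().isoformat()} {word}: {path}")
            present = exists_now
        time.sleep(interval)
```

Point something like that at the problem path while the build runs, then line the "DISAPPEARED" timestamp up against lorax's own log output, and you know exactly which step was executing when the directory went away.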
At this point, other work became more important, and I wound up leaving this on the back burner for a couple of months. Then I came back to it a couple of days ago. I picked up where I left off, and did a custom build of lorax with some debug logging statements strewn around the relevant section, to figure out precisely where we were when things went wrong. But this turned out to be a bit of a brick wall, because at the time the directory disappeared, lorax was just…running mksquashfs. And I could not figure out any plausible reason at all why a run of mksquashfs would cause the directory to vanish.
After a bit, though, the thought struck me – maybe it’s not lorax itself wiping the directory out at all! Maybe something else is doing it. So I thought to look at the system logs. And lo and behold, I found my smoking gun. At the exact time my script logged that the directory had disappeared, this message appeared in the system log:
Jan 18 01:57:30 ibm-p8-kvm-03-guest-02.virt.pnr.lab.eng.rdu2.redhat.com systemd: Starting Cleanup of Temporary Directories...
Now, remember our problem directory is in /var/tmp. So this smells very suspicious indeed! I figured out what that service actually is – to do this, you just grep for the description ("Cleanup of Temporary Directories") in /usr/lib/systemd/system – and it turned out to be /usr/lib/systemd/system/systemd-tmpfiles-clean.service, which is part of systemd's systemd-tmpfiles mechanism. You can read up on that in great detail in man systemd-tmpfiles and man tmpfiles.d.
I had run into it a few times before, so I had a vague idea what I was dealing with and what to look for. It’s basically a mechanism for managing temporary files and directories: you can write config snippets which systemd will read and do stuff like creating expected temporary files or directories on boot (this lets packages manage temporary directories without doing it themselves in scriptlets). I poked through the docs again and, sure enough, it turns out another thing the system can do is delete temporary files that reach a certain age:
Age: The date field, when set, is used to decide what files to delete when cleaning. If a file or directory is older than the current time minus the age field, it is deleted. The field format is a series of integers each followed by one of the following suffixes for the respective time units: s, m or min, h, d, w, ms, and us, meaning seconds, minutes, hours, days, weeks, milliseconds, and microseconds, respectively. Full names of the time units can be used too.
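To make that field format concrete, here's a rough sketch of parsing such an age field into seconds. This is just an illustration of the suffix rules quoted above, not systemd's actual parser – it skips the "full names" forms and bare integers, for instance:

```python
import re

# Multipliers for the suffixes listed in the man page excerpt above.
_UNITS = {
    "s": 1, "m": 60, "min": 60, "h": 3600, "d": 86400, "w": 604800,
    "ms": 1e-3, "us": 1e-6,
}

def parse_age(field):
    """Parse a tmpfiles-style age field like "30d" or "1h30min" into seconds."""
    total = 0.0
    for value, unit in re.findall(r"(\d+)\s*([a-z]+)", field):
        total += int(value) * _UNITS[unit]
    return total
```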
systemd-tmpfiles-clean.service does that job. So I went looking for tmpfiles.d snippets that cover /var/tmp, and sure enough, found one, in Fedora's stock config file:

q /var/tmp 1777 root root 30d

The 30d there is the 'age' field. So this tells the tmpfiles mechanism that it's fine to wipe anything under /var/tmp which is older than 30 days.
Of course, naively we might think our directory won’t be older than 30 days – after all, we only just ran lorax! But remember, lorax installs packages into this temporary directory, and files and directories in packages get some of their time attributes from the package. So we (at this point, Brian and I were chatting about the problem as I poked it) looked into how systemd-tmpfiles defines age, precisely:
The age of a file system entry is determined from its last modification timestamp (mtime), its last access timestamp (atime), and (except for directories) its last status change timestamp (ctime). Any of these three (or two) values will prevent cleanup if it is more recent than the current time minus the age field.
So since our thing is a directory, its mtime and atime are relevant. Brian and I both looked into those. He did it manually, while I hacked up my check script to also print the mtime and atime of the directory when it existed. And sure enough, it turned out these were several months in the past – they were obviously related to the date the filesystem package (from which /proc/modules comes) was built. They were certainly more than 30 days in the past.
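The rule quoted above can be sketched in a few lines – a hypothetical check for illustration, not systemd's real implementation – where an entry is kept as long as the most recent of its relevant timestamps is newer than the cutoff:

```python
import os
import stat
import time

def eligible_for_cleanup(path, max_age_seconds):
    """True if every relevant timestamp is older than now minus the age field."""
    st = os.stat(path)
    timestamps = [st.st_mtime, st.st_atime]
    if not stat.S_ISDIR(st.st_mode):
        # ctime only counts for non-directories, per the rule quoted above
        timestamps.append(st.st_ctime)
    return max(timestamps) < time.time() - max_age_seconds
```

This also shows why package-installed files are usually safe in practice: even if their mtime comes from the package, writing them updates ctime, which on a file is enough to protect it. A directory gets no such protection.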
Finally, I looked into what was actually running systemd-tmpfiles-clean.service; it's run on a timer, systemd-tmpfiles-clean.timer. That timer is set to run the service 15 minutes after the system boots, and every day thereafter.
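If you're curious, the relevant settings in that timer unit look something like this (from my recollection of the stock unit – double-check /usr/lib/systemd/system/systemd-tmpfiles-clean.timer on your own system):

```ini
[Timer]
OnBootSec=15min
OnUnitActiveSec=1d
```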
So all of this hooked up nicely into a convincing story. openQA kept running into this problem because it always runs the test in a freshly-booted VM – that ’15 minutes after boot’ was turning out to be right in the middle of the image creation. My manual reproductions were failing on the first try for the same reason – but then succeeding on the second and subsequent tries because the cleaner would not run again until the next day. And Brian and I had either never or rarely seen this when we ran lorax manually for one reason or another because it was pretty unlikely the “once a day” timer would happen to wake up and run just when we had lorax running (and if it did happen, we’d try again, and when it worked, we’d figure it was just some weird transient failure). The problem likely never happens in official composes, I think, because the tmpfiles timer isn’t active at all in the environment lorax gets run in (haven’t double-checked this, though).
Brian now gets to deal with the thorny problem of trying to fix this somehow on the lorax side (so the tmpfiles cleanup won't remove bits of the temporary tree even if it does run while lorax is running). Now that I know what's going on, it was easy enough to work around in the openQA test – I just have the test run systemctl stop systemd-tmpfiles-clean.timer before the image build.