AdamW on Linux and more (Posts about relval) https://www.happyassassin.net/categories/relval.atom 2023-06-20T12:09:42Z Adam Williamson Nikola openQA and Autocloud result submission to ResultsDB https://www.happyassassin.net/posts/2017/02/06/openqa-and-autocloud-result-submission-to-resultsdb/ 2017-02-06T21:18:23Z 2017-02-06T21:18:23Z Adam Williamson <p></p><p>So I've just arrived back from a packed two weeks in Brno, and I'll probably have some more stuff to post soon. But let's lead with some big news!</p> <p>One of the big topics at <a href="https://devconf.cz/">Devconf</a> and around the RH offices was the ongoing effort to modernize both Fedora and RHEL's overall build processes to be more flexible and involve a lot more testing (or, as some people may have put it, "<a href="https://en.wikipedia.org/wiki/Continuous_integration">CI</a> CI CI"). A lot of folks wearing a lot of hats are involved in different bits of this effort, but one thing that seems to stay constant is that <a href="https://fedoraproject.org/wiki/ResultsDB">ResultsDB</a> will play a significant role.</p> <p>ResultsDB started life as the result storage engine for AutoQA, and the concept and name were preserved as AutoQA was replaced by <a href="https://fedoraproject.org/wiki/Taskotron">Taskotron</a>. Its current version, however, is designed to be a scalable, capable and generic store for test results from any test system, not just Taskotron. Up until last week, though, we'd never quite got around to hooking up any other systems to it to demonstrate this.</p> <p>Well, that's all changed now! In the course of three days, <a href="https://fedoraproject.org/wiki/User:Jsedlak">Jan Sedlak</a> and I got both Fedora's <a href="https://openqa.fedoraproject.org">openQA instance</a> and <a href="https://apps.fedoraproject.org/autocloud/">Autocloud</a> reporting to ResultsDB.
As results come out of both those systems, <a href="http://www.fedmsg.com">fedmsg</a> consumers take the results, process them into a common format, and forward them to ResultsDB. This means there are groups with <a href="https://taskotron.fedoraproject.org/resultsdb/results?groups=ff6e4a45-0eff-5338-9210-7ca878b9fdce">results from both systems for the same compose together</a>, and you'll find metadata in a very similar format attached to the results from both systems. This is all deployed in production right now - the results from every daily compose from both openQA and Autocloud are being forwarded smoothly to ResultsDB.</p> <p>To aid in this effort I wrote a thing we're calling <a href="https://pagure.io/taskotron/resultsdb_conventions">resultsdb_conventions</a> for now. I think of it as being a code representation of some 'conventions' for formatting and organizing results in ResultsDB, as well as a tool for conveniently reporting results in line with those conventions. The attraction of ResultsDB is that it's very little more than a RESTful API for a database; it enforces a pretty bare minimum in terms of required data for each result. A result must provide only a test name, an 'item' that was tested, and a status ('outcome') from a choice of four. ResultsDB <em>allows</em> a result to include as much more data as it likes, in the form of a freeform key:value data store, but it does not <em>require</em> any extra data to be provided, or impose any policy on its form.</p> <p>This makes ResultsDB flexible, but also means we will need to establish conventions where appropriate to ensure related results can be conveniently located and reasoned about.
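To make that concrete, here's a rough sketch of what building and submitting such a minimal result might look like. The field names (a test case name, an 'item', one of four 'outcome' values, and a freeform 'data' store) follow ResultsDB's design as described above, but the endpoint URL and the exact payload shape here are illustrative, not the production submitters:

```python
import json
from urllib import request

# The four outcomes ResultsDB accepts for a result.
OUTCOMES = ("PASSED", "FAILED", "INFO", "NEEDS_INSPECTION")

def make_result(testcase, item, outcome, **extra):
    """Build a ResultsDB-style result payload: only a test name, an
    'item' and an 'outcome' are required; anything else goes into the
    freeform key:value data store alongside the item."""
    if outcome not in OUTCOMES:
        raise ValueError("outcome must be one of %s" % (OUTCOMES,))
    return {
        "testcase": {"name": testcase},
        "outcome": outcome,
        "data": dict(extra, item=item),
    }

def submit(result, url="https://resultsdb.example.org/api/v2.0/results"):
    """POST the result to a ResultsDB instance (placeholder URL)."""
    req = request.Request(
        url,
        data=json.dumps(result).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    return request.urlopen(req)
```

A consumer taking, say, an Autocloud fedmsg would call something like <code>make_result("compose.cloud.all", compose_id, "PASSED", source="autocloud")</code> and hand the dict to <code>submit()</code>.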
<code>resultsdb_conventions</code> is my initial contribution to this effort, originally written just to reduce duplication between the openQA and Autocloud result submitters and ensure they used a common layout, but intended to perhaps cover far more use cases in the future.</p> <p>Having this data in ResultsDB is likely to be practically useful either immediately or in the very near future, but we're also hoping it acts as a demonstration that using ResultsDB to consolidate results from multiple test sources is not only possible but quite easy. And I'm hoping <code>resultsdb_conventions</code> can be a starting point for a discussion and some consensus around what metadata we provide, and in what format, for various types of result. If all goes well, we're hoping to hook up <em>manual</em> test result submission to ResultsDB next, via the <code>relval-ng</code> project that's had some discussion on the QA mailing lists. Stay tuned for more on that!</p> <p></p> wikitcms, relval, fedfind and testdays moved to Pagure https://www.happyassassin.net/posts/2016/10/14/wikitcms-relval-fedfind-and-testdays-moved-to-pagure/ 2016-10-14T14:54:56Z 2016-10-14T14:54:56Z Adam Williamson <p></p><p>Today I moved several of my pet projects from <a href="https://www.happyassassin.net/cgit/">the cgit instance on this server</a> to <a href="http://pagure.io/">Pagure</a>.
You can now find them here:</p> <ul> <li><a href="https://pagure.io/fedora-qa/python-wikitcms">(python-)wikitcms</a></li> <li><a href="https://pagure.io/fedora-qa/relval">relval</a></li> <li><a href="https://pagure.io/fedora-qa/fedfind">fedfind</a></li> <li><a href="https://pagure.io/fedora-qa/testdays">testdays</a></li> </ul> <p>The home page URLs for each project on this server - e.g. https://www.happyassassin.net/fedfind - also now redirect to the Pagure project pages.</p> <p>I also deleted some other repos that were hosted in my cgit instance entirely, because I don't think they were any longer of interest to anyone and I didn't want to maintain them. Those were mostly related to Fedlet, which I haven't been working on for 2-3 years now.</p> <p>For now the repos for the three main projects - wikitcms, relval and fedfind - remain in my cgit instance, containing just a single text file documenting the move to Pagure; in a month or so I will remove these repositories and decommission the cgit instance. So, update your checkouts! :)</p> <p>This saves me maintaining the repos, provides pull review and issue mechanisms, and it's a good thing to have all Fedora-ish code projects in Pagure in general, I think.</p> <p>Many thanks to <a href="http://blog.pingoured.fr/">pingou</a> and everyone else who works on Pagure, it's a great project!</p> <p></p> Fedora 24 and Rawhide: What's goin' on (aka why is everything awful) https://www.happyassassin.net/posts/2016/02/29/fedora-24-and-rawhide-whats-goin-on-aka-why-is-everything-awful/ 2016-02-29T16:32:10Z 2016-02-29T16:32:10Z Adam Williamson <p></p><p>Hi folks!</p> <p>Welp, I was doing a Fedora 24 status update in the QA meeting this morning, and figured a quick(ish) summary of what all is going on in Fedora 24 and Rawhide right now might also be of interest to a wider audience.</p> <p>So, uh, the executive summary is: stuff's busted. Lots of stuff is busted. We are aware of this, and fixing it. Hold onto your hats.</p> <h2>glibc langpacks</h2> <p>A <a href="https://lists.fedoraproject.org/archives/list/devel@lists.fedoraproject.org/message/K56XYH5AK4SBFQD36IFS2KDWUN2NY44E/">rather big change</a> landed in Fedora 24 and Rawhide last week. glibc locales are now split into subpackages using the 'langpack' mechanism that yum introduced and dnf supports.
This lets us drop a somewhat ugly hack we were using to remove unneeded locales from space-sensitive images (Cloud and container images). However, it broke...quite a lot of stuff.</p> <h3>Locales lost on update/upgrade</h3> <p>As it initially landed, Fedora 24 / Rawhide users who updated to the new glibc packages <a href="https://bugzilla.redhat.com/show_bug.cgi?id=1312956">lose all their locales</a>; after the update you may have no locales except C and C.UTF-8. Various apps have trouble when you have a locale configured but not available, including but probably not limited to ssh and gnome-terminal. If you've hit this, you probably will want to do something like <code>dnf install glibc-langpack-en</code> at least (substitute your actual locale group for <code>en</code>). If you just want to have all locales back (so you can test apps in other locales and so on), you can do <code>dnf install glibc-all-langpacks</code>.</p> <h3>Installer doesn't run any more</h3> <p>anaconda tries to set <code>os.environ["LANG"]</code> as the default locale when starting up. There's also no dependency or lorax configuration to pull any glibc langpacks into the installer environment. The result is that in recent Rawhide and F24 nightly installer images, <a href="https://bugzilla.redhat.com/show_bug.cgi?id=1312607">anaconda blows up during startup</a>, trying to set a locale that isn't installed.</p> <h3>Live and cloud images don't build</h3> <p>This is actually a consequence of the previous issue. Cloud images are built via anaconda. As of last week, with the Pungi 4 switchover, live images are also now built using anaconda (via <a href="https://lorax.readthedocs.org/en/latest/livemedia-creator.html">livemedia-creator</a>). 
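The startup crash, by the way, is easy to reproduce in miniature: Python's <code>locale</code> module (which anaconda goes through) raises <code>locale.Error</code> when asked for a locale that isn't installed. A rough sketch of the kind of fallback that would avoid blowing up; the function name is mine, not anaconda's:

```python
import locale

def set_locale_with_fallback(wanted, fallback="C"):
    """Try to activate the configured locale; if its langpack isn't
    installed, setlocale() raises locale.Error, so fall back to a
    locale that is always present instead of crashing at startup."""
    try:
        return locale.setlocale(locale.LC_ALL, wanted)
    except locale.Error:
        return locale.setlocale(locale.LC_ALL, fallback)
```

With the glibc langpacks missing, asking for e.g. <code>en_US.UTF-8</code> would take the fallback path rather than aborting.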
anaconda hits the locale bug in both those workflows, <a href="https://bugzilla.redhat.com/show_bug.cgi?id=1312951">blows up</a>, and consequently we get no live or cloud images in current Rawhide and Fedora 24 nightly composes.</p> <h2>Pungi 4</h2> <p>Speaking of <a href="https://pagure.io/pungi/">Pungi 4</a>...yep, as of the middle of last week, Fedora 24 and Rawhide composes are being done with that tool, as I've been talking about for a while. You can see evidence of this in the <a href="https://download.fedoraproject.org/pub/fedora/linux/development/">Rawhide and Branched trees</a>. They now look more like release trees, with variant directories at the top level and all the regular images produced daily (well, they <em>should</em> be, except see above for why half of them are missing). If you've got scripts or anything which expect a certain layout of these trees, you're probably going to have to update them.</p> <p>Up until glibc threw a spanner in the works this seems to have turned out quite well, but there are a few known consequences so far. There <em>was</em> <a href="https://bugzilla.redhat.com/show_bug.cgi?id=1311795">a bug</a> with the Server DVD installer image not booting properly due to an incorrect <code>inst.stage2</code> kernel parameter, but that seems to be fixed now.</p> <h3>No name resolution on live images</h3> <p>If you manage to find a Pungi 4-created live image (from the few days before glibc broke 'em) and get it to boot, you'll probably find networking is busted. In fact basic connectivity works, but name resolution doesn't. This is because <code>/etc/resolv.conf</code> is a dangling symlink. 
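The 'dangling symlink' state is simple to detect programmatically: the path is a symlink, but its target doesn't exist. A small sketch of that check and the repair (the helper names are mine; the repair mirrors the manual <code>rm</code>/<code>ln</code> fix described in the next paragraph):

```python
import os

def is_dangling_symlink(path):
    """True if path is a symlink whose target does not exist --
    the state /etc/resolv.conf ends up in on these live images."""
    return os.path.islink(path) and not os.path.exists(path)

def fix_resolv_conf(path="/etc/resolv.conf",
                    target="/var/run/NetworkManager/resolv.conf"):
    """Re-point a dangling resolv.conf symlink at NetworkManager's
    own resolv.conf."""
    if is_dangling_symlink(path):
        os.remove(path)
        os.symlink(target, path)
```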
This is the <a href="https://bugzilla.redhat.com/show_bug.cgi?id=1313085">latest incarnation</a> of a <a href="https://bugzilla.redhat.com/show_bug.cgi?id=1116651">longstanding</a>...<a href="https://bugzilla.redhat.com/show_bug.cgi?id=1264364">disagreement</a> among the systemd developers, NetworkManager developers, and everyone else unfortunate enough to get caught up in the crossfire. No doubt it'll get bodged up somehow this time, too, soon enough. You can easily resolve the problem manually with <code>rm -f /etc/resolv.conf; ln -s /var/run/NetworkManager/resolv.conf /etc/resolv.conf</code>. The change here isn't Pungi 4 per se, but the fact that under the Pungi 4 regime, live images are now created by livemedia-creator rather than livecd-creator. livecd-creator stuffed a <code>/etc/resolv.conf</code> into the live image it created, which avoided this bug by preventing systemd-tmpfiles from creating it as a dangling symlink on boot. livemedia-creator does not do this, so when the live image boots <code>/etc/resolv.conf</code> does not exist, systemd creates it as a dangling symlink, and NetworkManager refuses to replace the dangling symlink with its own symlink.</p> <h3>Rawhide / Branched reports missing depcheck information</h3> <p>The 'Rawhide report' and 'Branched report' emails are still going out, but they're now generated by a <a href="https://pagure.io/compose-utils">new tool</a> and look a bit different. I kinda like the added information, but some people don't like the new format so much; send patches ;) It is known that at present the new reports are missing information on broken dependencies, and releng are working to get this back ASAP.</p> <h3>compose check report emails not appearing</h3> <p>I've mentioned this before, but briefly, the 'compose check report' emails sent out by <a href="https://git.fedorahosted.org/cgit/fedora-qa.git/tree/check-compose">my tool</a> aren't happening at all at the moment. 
The process for producing them runs through <a href="https://www.happyassassin.net/cgit/fedfind">fedfind</a> and needed rather a lot of rework for the new Pungi 4-ish world. I have code that works now and am aiming to get it deployed this week. Right now all the reports would basically say "all the tests failed and half the images are missing" due to the above-mentioned problems anyhow.</p> <p>Long term I'd like to move the image checks from <code>check-compose</code> into <code>compose-utils</code> and thus have the 'missing expected images' and 'image diff to previous compose' bits appear in the 'Rawhide report' / 'Branched report' emails; <code>check-compose</code> would then just generate an openQA test report, basically. Doing that cleanly requires a <a href="https://github.com/release-engineering/productmd/pull/15">change to the productmd metadata format</a>, though, which I need to work through with pungi and productmd folks.</p> <h3>Release validation test events not happening</h3> <p>Also due to the compose process changes, we can't really create <a href="https://fedoraproject.org/wiki/QA/SOP_Release_Validation_Test_Event">release validation events</a> at present. Well, we could create nightly ones, but the image download tables would be missing, and we'd have to do it manually; the stuff for creating them automatically is kind of outdated now (it relied on some assumptions about the compose process which no longer really hold true). We can't do Alpha TCs and RCs (and thus the events for them) until we work out with releng how we want to handle TCs and RCs with Pungi 4.</p> <p>This week I'm aiming to at least update <a href="https://www.happyassassin.net/cgit/wikitcms/">python-wikitcms</a> and <a href="https://www.happyassassin.net/cgit/relval">relval</a> so we can have proper nightly validation events again and they'll have correct download links. 
Probably this will just involve changing the page names a bit to add the 'respin' component of Pungi 4 nightly compose IDs (so we'll have e.g. <code>Test Results:Fedora 24 Branched 20160301.0 Installation</code> or <code>Test Results:Fedora 24 Branched 20160301.n.0 Installation</code> instead of <code>Test Results:Fedora 24 Branched 20160301 Installation</code>) and tweaking wikitcms a bit to add the 'respin' concept to its event/page versioning design, and writing a fedmsg consumer which replaces the <code>relval nightly --if-needed</code> mode to create the nightly events every so often.</p> <p>It'll probably take a bit longer to figure out what we want to do for non-nightly composes.</p> <h2>Other bits: Wayland and SELinux</h2> <p>We also have a couple of other fairly prominent issues related to other changes.</p> <h3>Lives don't boot or don't work properly with SELinux in enforcing mode</h3> <p>A <a href="https://github.com/systemd/systemd/pull/2508">change to systemd</a> seems to result in <a href="https://bugzilla.redhat.com/show_bug.cgi?id=1308771">several things in /run being mislabelled</a> in Fedora live images. (Yeah, yeah, systemd and SELinux...please put down the comment box and step away from the keyboard, trolls, moderation is in effect). With SELinux in enforcing mode (the default), this seems to result in Workstation lives not booting (it sits there looping over failing to set up the live user's session, basically). KDE lives boot, but then lots of stuff is broken (you can't reboot, for instance, and probably lots of other bits, that's just the one our tests noticed). I didn't check other desktops yet.</p> <p>You can work around this one quite easily by booting with <code>enforcing=0</code>.</p> <h3>Installer doesn't run on Workstation lives</h3> <p>Workstation live images for F24 and Rawhide were flipped over to running on Wayland by default (in most cases) quite recently. 
Unfortunately, the live installer relies on using <a href="http://linuxcommand.org/man_pages/consolehelper8.html">consolehelper</a> to run as root, but <a href="https://bugzilla.redhat.com/show_bug.cgi?id=1274451">consolehelper doesn't work on Wayland</a>. So if you find a recent Rawhide / F24 Workstation nightly live, and you get it to boot, and you ignore the fact that networking is busted, you won't be able to install it (bet you were just <em>dying</em> to do that after this blog post, weren't you?) unless you just run <code>/usr/sbin/liveinst</code> directly as root. Well, I mean, I'm not guaranteeing you'll actually be able to install it if you do that. I haven't got that far in testing yet.</p> <h2>IN CONCLUSION</h2> <p>So, um, yeah. We know everything's busted. We know! We're sorry. It's all gettin' fixed. Return to your homes, and your Fedora 23 installs. :)</p> <p></p> Identifying Fedora media redux: What is Atomic? https://www.happyassassin.net/posts/2015/09/18/identifying-fedora-media-redux-what-is-atomic/ 2015-09-18T11:11:18Z 2015-09-18T11:11:18Z Adam Williamson <p></p><p>You may remember <a href="https://www.happyassassin.net/2015/09/17/identifying-fedora-media/">my post</a> from yesterday, on whether a 'Fedora Atomic' flavor made sense, and how I think about identifying Fedora media for my <a href="https://www.happyassassin.net/fedfind">fedfind</a> project.</p> <p>Well, after talking it over with a few folks this morning and thinking some more, I've come to the conclusion that post was kind of wrong, on the major question: what <em>is</em> Atomic?</p> <p>In that last post I came to the conclusion that Atomic was a <em>deployment method</em>. I still think <em>deployment method</em> is a perfectly viable concept, but I no longer think Atomic really <em>is</em> one.
Rather, <code>ostree</code> is a deployment method, and Atomic really is - as I previously was considering it - something more like a <em>payload</em>.</p> <p>Reading the <a href="http://www.projectatomic.io/">Project Atomic site</a>, it seems I was kind of working from an outdated understanding of the 'Atomic' concept. I'd always understood it as being more or less a branding of <code>ostree</code>, i.e. an 'Atomic' system was just one that used the <code>ostree</code> deployment method. But it seems the concept has now been somewhat changed upstream. To quote the <a href="http://www.projectatomic.io/docs/introduction/">Introduction to Project Atomic</a>:</p> <p>"The core of Project Atomic is the Project Atomic Host. This is a lightweight operating system that has been assembled out of upstream RPM content. ...</p> <p>Project Atomic builds on these features, using the following components, which have been tailored for containerized-application management:</p> <ul> <li>Docker, an open source project for creating lightweight, portable, self-sufficient application containers.</li> <li>Kubernetes, an open source project that allows you to manage a cluster of Linux containers as a single system.</li> <li>rpm-ostree, an open source tool for managing bootable, immutable, versioned filesystem trees from upstream RPM content.</li> <li>systemd, an open source system and service manager for Linux."</li> </ul> <p>There is still some conceptual confusion about <em>exactly</em> what Atomic means - for instance, there's a concept called <em>Atomic Apps</em>, and it is apparently fine to deploy <em>Atomic Apps</em> on systems which are not <em>Atomic Hosts</em>. 
But after discussing it with some folks this morning, I think it's reasonable to take the following as a principle:</p> <p><strong>Any Fedora image with 'Atomic' in its name deploys as an Atomic Host as defined by Project Atomic.</strong></p> <p>Assuming that rule holds, for 'image identification' purposes under the fedfind concepts covered in the previous post, <code>atomic</code> really does work best as something in the line of a <code>flavor</code>, <code>loadout</code> and/or <code>payload</code>. It also <strong>implies</strong> the <em>deployment method</em> <code>ostree</code>, but I think I'm OK with leaving that concept out of fedfind for now as it's functionally useless: as all Atomic images have <code>ostree</code> deployment and all non-Atomic images have <code>rpm</code> deployment, distinguishing the properties is no use at this point. I'm going to hold it in reserve for the future, in case we get other images that use non-RPM <em>deployment methods</em> but are not Atomic hosts.</p> <p>For now I think I'll more or less go back to the <em>subflavor</em> concept I initially threw out in favour of <em>deployment</em>, and for current purposes, all Atomic images will have <em>flavor</em> <code>cloud</code> and <em>subflavor</em> <code>atomic</code>. 
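<p>As a rough illustration of the rule above, here is a hypothetical sketch (not fedfind's actual code) of how an image-identification function might apply it: anything with 'Atomic' in its name gets <em>flavor</em> <code>cloud</code> and <em>subflavor</em> <code>atomic</code>, which in turn implies the <code>ostree</code> deployment method.</p>

```python
# Hypothetical sketch of the identification rule discussed above;
# the function name and attribute keys are illustrative, not fedfind API.

def identify(filename):
    """Return a dict of identification attributes for an image filename."""
    attrs = {"flavor": None, "subflavor": None, "deployment": "rpm"}
    name = filename.lower()
    if "atomic" in name:
        # Any image with 'Atomic' in its name deploys as an Atomic Host...
        attrs["flavor"] = "cloud"
        attrs["subflavor"] = "atomic"
        # ...which implies the ostree deployment method.
        attrs["deployment"] = "ostree"
    elif "cloud" in name:
        attrs["flavor"] = "cloud"
    return attrs
```

<p>Here a filename containing 'Atomic' comes out with deployment <code>ostree</code>, while other Cloud images keep the default <code>rpm</code>.</p>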
The alternative is to have a concept that's a bit more parallel to <em>flavor</em> and <em>loadout</em> which feeds into <em>payload</em>, but for now keeping the interface stable seems better.</p> <p></p> Identifying Fedora media https://www.happyassassin.net/posts/2015/09/17/identifying-fedora-media/ 2015-09-17T23:49:33Z 2015-09-17T23:49:33Z Adam Williamson <p></p><p><strong>EDIT</strong>: With the immensely greater wisdom that comes with being a day older, I'm now more or less convinced this whole post is wrong. But instead of editing it and confusing people who read the original, I've left it here and written a <a href="https://www.happyassassin.net/2015/09/18/identifying-fedora-media-redux-what-is-atomic/">follow-up</a>. Please read that.</p> <p>Thanks to <a href="http://blog.linuxgrrl.com/2015/09/15/fedora-atomic-logo-idea/">mizmo</a> for inspiring this post.</p> <h3>On 'Fedora Atomic' as a flavor</h3> <p>Mo wrote that the Cloud WG has decided that, from Fedora 24 onwards, they will focus on <a href="http://www.projectatomic.io/">Atomic</a>-based images as their primary deliverables. In response to this, Mo and Matt Miller were kicking around the idea of - as I understand it - effectively rebranding the 'Cloud' flavor of Fedora as the 'Atomic' flavor.</p> <p>This immediately struck me as problematic, and after a bit of thought I was able to identify why.</p> <p>The current Fedora flavors - Cloud, Server, and Workstation - can be characterized as <em>contexts</em> in which you might want to deploy Fedora.
"I want to deploy Fedora on a Cloud!", or "I want to deploy Fedora on a Workstation!", or "I want to deploy Fedora on a Server!" Yes, there's arguably a <em>bit</em> of overlap between 'Cloud' and 'Server', but we do have a reasonable answer to that ('Cloud' is about new-fangled cattle, 'Server' is about old-fashioned pets).</p> <p>Atomic is not a context in which you might deploy Fedora. You can't say "I want to deploy Fedora on an Atomic!", it just doesn't work. Atomic is rather what I'm referring to as a <em>deployment method</em>. It's a mechanism by which the system is deployed and updated. Another <em>deployment method</em> - the counterpart to 'Atomic' - would be 'RPM'. Atomic happens to a deployment method that's quite appropriate for Cloud deployments, but that doesn't mean you can just do <code>s/Cloud/Atomic/g</code> and everything will make sense.</p> <p>So for me the idea is simply not conceptually compatible with how we define the flavors. We might even, for instance, want to build 'Atomic Workstation' instances of Fedora. I believe there's even been interest in doing that, before.</p> <p>Mo's post suggests that we might treat 'cloud' rather like we currently treat 'ARM', as a sort of orthogonal concept to the flavors, with a kind of parallel identity on the download site. I would suggest that it would make more sense to do exactly the opposite: it's <em>Atomic</em> that should be given a sort of parallel existence. We might want to have an Atomic landing page which focused on all the Atomic implementations of Fedora. But it's not, for me, a flavor.</p> <h3>Thinking about identifying Fedora media</h3> <p>So you might be aware, I have a little project called <a href="https://www.happyassassin.net/fedfind">fedfind</a>, which is broadly about finding Fedora media. About half of the work fedfind does is <em>finding</em> the media. The other half of the work it does is <em>identifying</em> them. 
Fedora 23 Beta RC1 has 59 'images', approximately (depending on whether you count <code>boot.iso</code> files separately from their hardlinked alternate names). Trying to come up with a set of attributes that would serve as a sort of conceptual framework for identifying images has been an interesting (and ongoing!) challenge. In fact, as a result of thinking about the 'Atomic' proposal, I've just revised fedfind's approach a little. So, I thought I'd write quickly about how fedfind does it.</p> <p>I first thought about this stuff before starting fedfind, when I wrote an <a href="https://fedoraproject.org/wiki/User:Adamwill/Draft_fedora_image_naming_policy">image naming policy</a>. fedfind initially followed that policy precisely. Events have overtaken it a bit, though, so the current implementation isn't quite the same. And today, I've added the <em>deployment method</em> concept as an attribute in fedfind's system. The changes described here are sitting in my editor right now, but should show up in git master soon!</p> <p>Some of the attributes fedfind uses are fairly obvious (<em>arch</em> is just the arch, and <em>release</em>, <em>milestone</em> and <em>compose</em> are just version identifiers), so I'll focus on the more interesting ones. fedfind provides a couple of properties - <em>desc</em> and <em>shortdesc</em> - for images which can be used as descriptions; taken together, all these properties allow us to give a unique identity to every one of the 59 images in 23 Beta RC1, as well as covering all historical releases (in my checks anyway). <em>shortdesc</em> contains <em>payload</em>, <em>deployment</em> (if not 'rpm'), <em>imagetype</em>, and <em>imagesubtype</em> (if present); <em>desc</em> adds <em>arch</em>. For any given release's images, there should be no duplicated <em>desc</em>s.</p> <h4>imagetype and imagesubtype</h4> <p>These identify the 'type' of the image specifically. 
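<p>Taken together, these attributes feed the <em>desc</em> and <em>shortdesc</em> properties mentioned above. A rough sketch (hypothetical helper names, not fedfind's real implementation) of how that composition rule might look:</p>

```python
# Sketch of the composition rule described above: shortdesc is payload,
# plus deployment (only if not 'rpm'), plus imagetype, plus imagesubtype
# (only if present); desc adds the arch. Hypothetical, not fedfind code.

def shortdesc(img):
    """Compose a short description from an image's attribute dict."""
    parts = [img["payload"]]
    if img.get("deployment", "rpm") != "rpm":
        parts.append(img["deployment"])
    parts.append(img["imagetype"])
    if img.get("imagesubtype"):
        parts.append(img["imagesubtype"])
    return " ".join(parts)

def desc(img):
    """The full description just appends the arch."""
    return "{0} {1}".format(shortdesc(img), img["arch"])
```

<p>For a libvirt Vagrant cloud image this would yield <code>cloud vagrant libvirt</code> as the <em>shortdesc</em> and <code>cloud vagrant libvirt x86_64</code> as the <em>desc</em>.</p>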
What's the difference between the Fedora 22 x86_64 Workstation live image and the Fedora 22 x86_64 Workstation network install image? They're the same version, same arch, same <em>payload</em> (see below) - but different <em>imagetype</em>. A few images also have an <em>imagesubtype</em>. For instance, we provide two types of Vagrant image, for use with libvirt and virtualbox; these have <code>vagrant</code> as the <em>imagetype</em> and <code>libvirt</code> or <code>virtualbox</code> as the <em>imagesubtype</em>.</p> <h4>flavor, loadout and payload</h4> <p>It's intended that most every fedfind image should have either a <em>flavor</em> or a <em>loadout</em>; for convenience, whichever one it has is also available as the <em>payload</em>. This can be thought of as 'what's the actual content of the image'. Flavors are <code>workstation</code>, <code>server</code>, and <code>cloud</code>; loadouts are things like <code>kde</code>, <code>xfce</code>, <code>mate</code> etc. (for live images and ARM disk images), plus some oddballs. There is a <code>minimal</code> for ARM disk images, and <code>source</code> and <code>desktop</code> for older releases where we had source CDs/DVDs and 'Desktop' live images. There's one fairly special case: pre-Fedora.next DVDs and network install images, <code>boot.iso</code>s, and current nightly images have no <em>flavor</em> or <em>loadout</em>, but have the special <em>payload</em> <code>generic</code>.</p> <p>Current fedfind releases have a concept of <em>subflavor</em>, which is only used to give some Cloud images a <em>subflavor</em> of <code>atomic</code>. After thinking about this (above) it seemed completely wrong, so today I've changed things around so the <em>subflavor</em> property is gone, and instead we have...</p> <h4>deployment</h4> <p>As discussed above, this is the 'deployment method'. 
Currently it's either <code>rpm</code> or <code>atomic</code>.</p> <p></p> Flock 2015 report, and Fedora nightly compose testing https://www.happyassassin.net/posts/2015/08/21/flock-2015-report-and-fedora-nightly-compose-testing/ 2015-08-21T16:44:41Z 2015-08-21T16:44:41Z Adam Williamson <p></p><p>Hi, folks! I've been waiting to write my post-Flock report until I had some fun stuff to show off, because that's more exciting than just a bunch of 'I went to this talk and then talked to this person', right?</p> <h3>Fedora nightly compose testing</h3> <p>So let me get to the shiny first!
Without further ado:</p> <ul> <li><a href="https://lists.fedoraproject.org/pipermail/devel/2015-August/213679.html">Fedora Rawhide 20150821 compose check report</a></li> <li><a href="https://lists.fedoraproject.org/pipermail/devel/2015-August/213680.html">Fedora 23 Branched 20150821 compose check report</a></li> </ul> <p>Cool, right? That's what I've been working on this whole week since Flock. All the bits are now basically in place such that, each night, openQA will run on the Branched and Rawhide nightly composes when they're done, and when openQA is done, the compose reports will be mailed out.</p> <h3>Flock report</h3> <p>The details behind that get quite long, so before I hit that, here's a quick round-up of other stuff I did at Flock! I'm not going to cover the talks and sessions many others have already blogged about (the keynotes, etc.) as it seems redundant, but I'll mention some stuff that hasn't really been covered yet.</p> <p><a href="https://jskladan.wordpress.com/">Josef</a> ran a workshop on getting started with openQA. It was a bit tricky, though, due to poor networking on site; the people trying to follow along and deploy their own Docker-based openQA instances couldn't quite get all the way. So we turned the last bit of the talk into a live demo using <a href="https://openqa.happyassassin.net">my openQA instance</a> instead, and created a new test case LIVE ON STAGE. We didn't quite get it all the way done before getting kicked out by a wedding party, but I finished it up shortly after the session. 
Josef did a great job of explaining the basics of setting up openQA and creating tests, and I hope we'll have a few more people following the openQA stuff now.</p> <p>Mike McLean did a great <a href="http://flock2015.sched.org/event/6c081a6c232a559796f9dfce3aaaba15">talk</a> on Koji 2.0, which has been kinda under the radar (at least for me) compared to Bodhi 2, but sounds like it'll come with a lot of really significant improvements and a better design. As someone who's spent a lot of time staring at <code>kojihub.py</code> lately, I can only say it'd be welcome...</p> <p>Denise Dumas gave the <a href="http://flock2015.sched.org/event/99525762a4e5197d5fa7fe59c96aa989">now-traditional What Red Hat Wants</a> talk, which I'm really glad is happening now. I'm totally behind the idea that we're up-front about the relationship between Red Hat and Fedora, instead of some silly arrangement where Red Hat pretends Fedora just 'happens' and is a totally community-based distro; it's much better for RH to be saying a couple of years in advance 'hey, this is where we'd like to see things going', rather than every so often a bunch of Features/Changes 'mysteriously' appearing four months out from a release and lots of people suddenly caring a lot about them (but it just all being a BIG COINCIDENCE!)</p> <p>Paul Frields did a nice talk on working remotely, which had a lot of great ideas that I don't do at all (hi Paul, it's 4:30pm and I'm writing this in my dressing gown...) - but it was great to compare notes with a bunch of other folks and think about other ways of doing things.</p> <p>I did a lightning talk on <a href="https://www.happyassassin.net/fedlet-a-fedora-remix-for-bay-trail-tablets/">Fedlet</a>, showing it off running and talking a bit about what Fedlet involves and how well (or not) it runs. 
Folks seemed interested, and a few people came by to play with my fedlet afterwards.</p> <p>Stephen Gallagher ran a <a href="https://fedorahosted.org/rolekit">rolekit</a> hackfest. I was hoping to use it to come up with an openQA role, but failed for a couple of reasons: Stephen doesn't recommend creating new roles right now as the format is likely to change a lot quite soon, and since I last worked on the package openQA has added a few more dependencies which need packaging. But I did manage to move forward with work on the package a bit, which was useful. In the session Stephen explained the rolekit design and current state to people, and talked about various work that needs doing on it; hopefully he'll get some more help with it soon!</p> <p>Of course, as always, there was lots of hallway track and social stuff. We had a couple of excellent poker games - good to see the FUDCon/Flock poker tradition continues strong - and played some <a href="http://www.explodingkittens.com/">Exploding Kittens</a>, which is a lot of fun. My favourite bit is the NOPE cards. As many others have said, the <a href="http://www.museumofplay.org/">Strong Museum</a> was awesome - got to play a bunch of pinball, and see Will Wright's notebooks(!) and John Romero's Apple ][(!!!!).</p> <h3>Fedora compose testing: development details and The Future</h3> <p>So, back to the 'compose CI' stuff I spent a lot of time talking about/working on!</p> <p>A lot of what I did at Flock centred around the big topic you can call '<a href="https://en.wikipedia.org/wiki/Continuous_integration">CI</a> for Fedora'. We still have lots of plans afoot for big, serious test and task automation based on <a href="https://taskotron.fedoraproject.org/">Taskotron</a>, which is now getting really close to the point where you'll see a lot more cool stuff using it. 
<p></p><p>Hi, folks! I've been waiting to write my post-Flock report until I had some fun stuff to show off, because that's more exciting than just a bunch of 'I went to this talk and then talked to this person', right?</p> <h3>Fedora nightly compose testing</h3> <p>So let me get to the shiny first! Without further ado:</p> <ul> <li><a href="https://lists.fedoraproject.org/pipermail/devel/2015-August/213679.html">Fedora Rawhide 20150821 compose check report</a></li> <li><a href="https://lists.fedoraproject.org/pipermail/devel/2015-August/213680.html">Fedora 23 Branched 20150821 compose check report</a></li> </ul> <p>Cool, right? That's what I've been working on this whole week since Flock. All the bits are now basically in place such that, each night, openQA will run on the Branched and Rawhide nightly composes when they're done, and when openQA is done, the compose reports will be mailed out.</p> <h3>Flock report</h3> <p>The details behind that get quite long, so before I hit that, here's a quick round-up of other stuff I did at Flock! I'm not going to cover the talks and sessions many others have already blogged about (the keynotes, etc.) as it seems redundant, but I'll mention some stuff that hasn't really been covered yet.</p> <p><a href="https://jskladan.wordpress.com/">Josef</a> ran a workshop on getting started with openQA.
It was a bit tricky, though, due to poor networking on site; the people trying to follow along and deploy their own Docker-based openQA instances couldn't quite get all the way. So we turned the last bit of the talk into a live demo using <a href="https://openqa.happyassassin.net">my openQA instance</a> instead, and created a new test case LIVE ON STAGE. We didn't quite get it all the way done before getting kicked out by a wedding party, but I finished it up shortly after the session. Josef did a great job of explaining the basics of setting up openQA and creating tests, and I hope we'll have a few more people following the openQA stuff now.</p> <p>Mike McLean did a great <a href="http://flock2015.sched.org/event/6c081a6c232a559796f9dfce3aaaba15">talk</a> on Koji 2.0, which has been kinda under the radar (at least for me) compared to Bodhi 2, but sounds like it'll come with a lot of really significant improvements and a better design. As someone who's spent a lot of time staring at <code>kojihub.py</code> lately, I can only say it'd be welcome...</p> <p>Denise Dumas gave the <a href="http://flock2015.sched.org/event/99525762a4e5197d5fa7fe59c96aa989">now-traditional What Red Hat Wants</a> talk, which I'm really glad is happening now. I'm totally behind the idea that we're up-front about the relationship between Red Hat and Fedora, instead of some silly arrangement where Red Hat pretends Fedora just 'happens' and is a totally community-based distro; it's much better for RH to be saying a couple of years in advance 'hey, this is where we'd like to see things going', rather than every so often a bunch of Features/Changes 'mysteriously' appearing four months out from a release and lots of people suddenly caring a lot about them (but it just all being a BIG COINCIDENCE!)</p> <p>Paul Frields did a nice talk on working remotely, which had a lot of great ideas that I don't do at all (hi Paul, it's 4:30pm and I'm writing this in my dressing gown...) 
- but it was great to compare notes with a bunch of other folks and think about other ways of doing things.</p> <p>I did a lightning talk on <a href="https://www.happyassassin.net/fedlet-a-fedora-remix-for-bay-trail-tablets/">Fedlet</a>, showing it off running and talking a bit about what Fedlet involves and how well (or not) it runs. Folks seemed interested, and a few people came by to play with my fedlet afterwards.</p> <p>Stephen Gallagher ran a <a href="https://fedorahosted.org/rolekit">rolekit</a> hackfest. I was hoping to use it to come up with an openQA role, but failed for a couple of reasons: Stephen doesn't recommend creating new roles right now as the format is likely to change a lot quite soon, and since I last worked on the package openQA has added a few more dependencies which need packaging. But I did manage to move forward with work on the package a bit, which was useful. In the session Stephen explained the rolekit design and current state to people, and talked about various work that needs doing on it; hopefully he'll get some more help with it soon!</p> <p>Of course, as always, there was lots of hallway track and social stuff. We had a couple of excellent poker games - good to see the FUDCon/Flock poker tradition continues strong - and played some <a href="http://www.explodingkittens.com/">Exploding Kittens</a>, which is a lot of fun. My favourite bit is the NOPE cards. As many others have said, the <a href="http://www.museumofplay.org/">Strong Museum</a> was awesome - got to play a bunch of pinball, and see Will Wright's notebooks(!) and John Romero's Apple ][(!!!!).</p> <h3>Fedora compose testing: development details and The Future</h3> <p>So, back to the 'compose CI' stuff I spent a lot of time talking about/working on!</p> <p>A lot of what I did at Flock centred around the big topic you can call '<a href="https://en.wikipedia.org/wiki/Continuous_integration">CI</a> for Fedora'. 
We still have lots of plans afoot for big, serious test and task automation based on <a href="https://taskotron.fedoraproject.org/">Taskotron</a>, which is now getting really close to the point where you'll see a lot more cool stuff using it. But in the meantime, the 'skunkworks' openQA project we spun up during the Fedora 22 cycle has grown quite a bit, and the <a href="https://www.happyassassin.net/fedfind/">fedfind</a> project I mostly built to back openQA has grown quite a lot of interesting capabilities.</p> <p>So while we were talking about properly engineered plans for the future, I realized I could probably hack together some stupidly-engineered stuff that would work right now! In Kevin Fenzi's <a href="http://www.flocktofedora.net/schedule/">Rawhide session</a> I threw out a few ideas and then figured that, hell, I should just do them.</p> <p>So I started out by <a href="https://www.happyassassin.net/cgit/fedfind/commit/?id=9ed7afd955933a89d23d41b8d4d35090398c2a9a">teaching fedfind some new tricks</a>. It can now 'diff' two releases: that is, it can tell you what images are in one, but not the other. It can also check a release for 'expected' images - basically it has some knowledge about what images we'd most want to be present all the time, and it can tell you if any are missing. (<strong>FIXME</strong>: I didn't know which of the Cloud images were the most important, so right now it has no 'expected' Cloud images: if some Cloud-y people want to tell me which images are most important, I can add them).</p> <p>Then I wrote a <a href="https://git.fedorahosted.org/cgit/fedora-qa.git/tree/check-compose">little script called check-compose</a> which produces a handy report from that information. It also looks for openQA tests for the compose it's checking, and includes a list of failures if it finds any. 
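Conceptually, the 'diff' and 'expected images' checks are just set operations over image identifiers. Here's a minimal sketch of the idea - the identifiers and the EXPECTED set are invented for illustration, not fedfind's actual data model or API:

```python
# Toy sketch of release 'diffing' and 'expected image' checking, in the
# spirit of fedfind. Image identifiers and the EXPECTED set are invented
# for illustration; fedfind's real data model is richer.

EXPECTED = {
    ("Workstation", "live", "x86_64"),
    ("Server", "dvd", "x86_64"),
    ("KDE", "live", "i386"),
}

def diff_releases(old_images, new_images):
    """Return (images only in old, images only in new)."""
    old, new = set(old_images), set(new_images)
    return old - new, new - old

def missing_expected(images):
    """Return 'expected' images absent from a release."""
    return EXPECTED - set(images)

rawhide = {
    ("Workstation", "live", "x86_64"),
    ("Server", "dvd", "x86_64"),
}
branched = {
    ("Workstation", "live", "x86_64"),
    ("KDE", "live", "i386"),
}

dropped, added = diff_releases(rawhide, branched)
print(dropped)                    # images in one release but not the other
print(missing_expected(rawhide))  # expected images this release lacks
```

check-compose's report is essentially a pretty-printed version of those two results, plus the list of openQA failures.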
It can email the report and also write the results in JSON format (which seemed like a good idea in case we want to look back at them in some programmatic way later on). The 'compose check reports' that have been showing up this week (and that I linked above) are the output of the script.</p> <p>I had all of that basically done by Tuesday, so what have I been wasting the rest of my week on? Read on!</p> <p>What was missing was the 'C' part of 'CI'. There was nothing that would actually run the compose report at appropriate times, and we weren't actually running openQA tests nightly. For the past few days I've been kind of faking things up by manually kicking off openQA jobs and firing off the compose report when they're done. This kind of <a href="https://en.wikipedia.org/wiki/The_Turk">mechanical Turk</a> CI doesn't really work in the long run! So for the last few days I've worked on that.</p> <p>We were not actually scheduling nightly openQA runs at all. The <a href="https://phab.qadevel.cloud.fedoraproject.org/diffusion/OPENQA/browse/develop/tools/openqa_trigger/openqa_trigger.py">openQA trigger script</a> has an <code>all</code> mode which is intended to do that, but we weren't running it. I suggested we turn it back on, but I also wanted to fix one big problem it had: it didn't know whether the composes were actually done. It just got today's date and tried to run on the nightlies for it. If they weren't actually done whenever the script ran, you got no tests.</p> <p>This definitely hooks in with one of the big topics at Flock: <a href="https://fedorahosted.org/pungi/">Pungi</a> 4, which is the pending major revision. Pungi is the tool which runs Fedora composes. 
Well, that's not quite right: there are actually a <a href="https://git.fedorahosted.org/cgit/releng/tree/scripts/pungify">couple</a> of releng <a href="https://git.fedorahosted.org/cgit/releng/tree/scripts/run-pungi">scripts</a> which produce the composes (the first of those is for nightlies, the second is for TCs/RCs). They run pungi and do lots of other stuff too, because currently pungi only actually does some of the work involved in a compose (a lot of the images are just built by the trigger scripts firing off Koji tasks and other...stuff). The current revision of the compose process is something of a mess (it's grown chaotically as we added ARM images and Cloud images and Docker images and Atomic images and flavors and all the rest of it). With the Pungi 4 revision and associated changes to the releng process, it should be trivial to follow the compose process.</p> <p>Right now, though, it isn't. Nightly composes and TC/RC composes are very different. TCs/RCs really don't emit any information on their progress at all. Nightlies emit some <a href="http://www.fedmsg.com/">fedmsg</a> signals, but crucially, there's no signal when the Koji builds complete: you get a signal when they <em>start</em>, but not when they're <em>done</em>.</p> <p>So it was time to teach fedfind some new tricks! I decided not to go the fedmsg route yet since it's not sufficient at present. Instead I <a href="https://www.happyassassin.net/cgit/fedfind/commit/?id=60967e2ab532c613341624363a7abb61b6bbf8a8">taught it to tell if composes are complete</a> in lower-tech ways. For the Pungi part of the process it looks for a file the script creates when it's done.
For Koji tasks, it finds all the Koji tasks that look like they're a part of this nightly, and only considers the nightly 'done' when there are at least <em>some</em> tasks (so it doesn't report 'done' before the process starts at all) and none of the tasks is 'open' (meaning running or not yet started).</p> <p>So now we could make the openQA trigger script or the compose-check script wait for a compose to actually exist before running against it! Great. Only now I had a different problem: the openQA trigger script was set up to run for both nightlies. This is fine if it's not waiting - it just goes ahead and fires one, then the other. But how to make it work with waiting?</p> <p>This one had to go through a couple of revisions. My first thought was "I have a problem. I know! I'll use threads", and <a href="http://nedbatchelder.com/blog/201204/two_problems.html">we all know how that joke goes</a>. Sure enough, all three of the revisions of this approach (using <a href="https://docs.python.org/2/library/threading.html">threading</a>, <a href="https://docs.python.org/2/library/multiprocessing.html">multiprocessing</a> and <code>multiprocessing.dummy</code>) turned out to have problems. I eventually decided it wasn't worth carrying on fighting with that, and came up with some different approaches. One is a <a href="https://phab.qadevel.cloud.fedoraproject.org/D516">low-tech round-robin waiting approach</a>, where the trigger script alternates between checking for Branched and Rawhide. The other is even simpler: by just <a href="https://phab.qadevel.cloud.fedoraproject.org/D525">adding a few capabilities</a> to the mode where the trigger runs on a single compose, we can simply schedule two separate runs of that mode each night, one for Rawhide, one for Branched.
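Either way, both approaches gate on the same completeness test described above: Pungi's part is done when the marker file shows up, and the Koji part when there is at least one related task and none is still open. A rough sketch - the marker file name and task shapes are invented for illustration, and fedfind's real checks are more involved:

```python
import os

def pungi_done(compose_dir):
    # The compose script drops a marker file when Pungi's part of the
    # nightly is finished ('finish' is an invented name for illustration).
    return os.path.exists(os.path.join(compose_dir, "finish"))

def koji_done(tasks):
    # tasks: dicts like {"state": "open"} for the Koji tasks that look
    # like part of this nightly. 'Done' only if we found *some* tasks
    # (so the run has at least started) and none is still open.
    return bool(tasks) and all(t["state"] != "open" for t in tasks)

def compose_done(compose_dir, tasks):
    return pungi_done(compose_dir) and koji_done(tasks)
```

With the 'two separate runs' approach, each single-compose trigger invocation (say, one cron job for Rawhide and one for Branched) just polls a check like this until it passes, then schedules its jobs.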
That keeps the code simple and means either one can get all the way through the 'find compose, schedule jobs, run jobs, run compose report' process without waiting for the other.</p> <p>And that, finally, is about where we're at right now! I'm hoping one or the other openQA change will be approved on Monday and then we can have this whole process running unattended each night - which will more or less finally implement some more of the near-legendary <a href="https://fedoraproject.org/wiki/Israwhidebroken.com_Proposal">is Rawhide broken?</a> proposal. Up till then I'll keep running the compose reports by hand.</p> <p>Along the way I did some other messing around in fedfind, mostly to do with optimizing how it does Koji queries (and fixing some bugs). For all of a day or so, it used multiprocessing to run queries in parallel; I decided the parallelism just wasn't worth it for the moderate performance increase, though, so I switched to using a batched query mode provided by xmlrpclib, which speeds things up a little less but keeps the code simpler. I also implemented a query cache, and spent an entire goddamn afternoon coming up with a reasonable way to make it handle fuzzy matches (when e.g. we run a query for 'all open or successful tasks', then run a query for 'all successful live CD tasks', we can derive the results for the latter from the former and not waste time talking to the server again). But I got there in the end, I think.</p> <p>It was quite a lot of work, in the end, but I'm pretty happy with the result. I'm really, really looking forward to the releng improvements, though. fedfind is more or less just the things releng is aiming to do, only implemented (unavoidably) stupidly and from the wrong end.
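The 'fuzzy match' part of the query cache is the fun bit: if we already hold the results of a broader query, a narrower query can be answered by filtering the cached superset locally instead of asking the server again. Here's a toy version, with invented task/filter shapes rather than fedfind's actual implementation:

```python
class TaskCache:
    """Toy query cache that can answer a narrower query from a cached
    broader one. A filter is a dict mapping a task field to the set of
    allowed values; a field absent from the filter means 'any value'."""

    def __init__(self, fetch):
        self.fetch = fetch   # callable that does the real server query
        self.cached = {}     # frozen filter -> task list

    @staticmethod
    def _covers(broad, narrow):
        # broad covers narrow if it constrains no field more tightly:
        # every field broad filters on, narrow filters to a subset of it.
        return all(field in narrow and narrow[field] <= values
                   for field, values in broad.items())

    @staticmethod
    def _matches(task, flt):
        return all(task.get(field) in values for field, values in flt.items())

    def query(self, flt):
        for key, tasks in self.cached.items():
            if self._covers(dict(key), flt):
                # Derive the result locally: no server round-trip needed.
                return [t for t in tasks if self._matches(t, flt)]
        tasks = self.fetch(flt)
        key = tuple(sorted((field, frozenset(values))
                           for field, values in flt.items()))
        self.cached[key] = tasks
        return tasks

# Fake server-side fetch so the sketch runs standalone.
def fake_fetch(flt):
    data = [
        {"id": 1, "state": "open", "method": "livecd"},
        {"id": 2, "state": "closed", "method": "livecd"},
        {"id": 3, "state": "closed", "method": "createImage"},
    ]
    return [t for t in data if TaskCache._matches(t, flt)]

cache = TaskCache(fake_fetch)
cache.query({"state": {"open", "closed"}})                       # hits the 'server'
live = cache.query({"state": {"closed"}, "method": {"livecd"}})  # answered from cache
print(live)
```

The same idea generalizes to any field the server can filter on, so long as the cached filter is provably looser than the new one.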
As I understand it, releng's medium-term goals are:</p> <ul> <li>all composes to contain sufficient metadata on what's actually in them</li> <li>compose processes for nightlies to be the same as that for TCs/RCs</li> <li>compose process to notify properly at all stages via fedmsg</li> <li><a href="https://fedoraproject.org/wiki/ReleaseEngineering/ComposeDB">ComposeDB</a> to track what composes actually exist and where they are</li> </ul> <p>right now we don't really have any of those things, and so fedfind exists to reconstruct all that information painfully, from the other end. It will definitely be a relief when we can get all that information out of sane systems, and I don't have to maintain a crazy ball of magic knowledge, Koji queries and rsync scrapes any longer. For now, though, the whole crazy ball of wax seems to actually work. I'm really glad that folks like <a href="https://www.scrye.com/wordpress/nirik/">Kevin</a>, <a href="https://www.ausil.us/">Dennis</a>, <a href="http://nullr0ute.com/">Peter</a>, <a href="http://threebean.org/blog/">Ralph</a>, <a href="http://pseudogen.blogspot.ca/">Adam</a> and others are all thinking down the same general lines: I'm hopeful that with Pungi, ComposeDB (when it happens), and further work on Taskotron and openQA and even my stupid little scripts, we'll have continuously (see what I did there?!) better stories to tell as we move on for the next few releases.</p> <p></p> Testdays: an app for dealing with Test Day wiki pages https://www.happyassassin.net/posts/2015/07/02/testdays-an-app-for-dealing-with-test-day-wiki-pages/ 2015-07-02T15:42:18Z 2015-07-02T15:42:18Z Adam Williamson <p></p><p>Just wanted to quickly announce another app in the ever-growing <a href="https://www.happyassassin.net/wikitcms/">wikitcms/relval</a> conglomerate: <a href="https://www.happyassassin.net/testdays/">testdays</a>. 
testdays is to Test Day pages as relval is to release validation pages: it's a CLI app for interacting with Test Day pages, using the python-wikitcms module.</p> <p>Right now it only does one thing, really - generates statistics. You can't (yet?) use it to create Test Day pages or report results. But you <em>can</em> generate some fairly simple statistics about a single Test Day page or some group of them, with various ways of specifying which pages you want to operate on.</p> <p>For each page it will give you a count of testers, tests, and bug references, a ratio of bugs referred per tester, and an attempt to count what percentage of valid, unique bugs referred to in the page has been fixed. If you use it on a set of pages, it'll also give you overall statistics for all the pages combined, and a list of the top testers by number of results and bug references.</p> <p>I'd like to thank Chris Ward for prodding me into making this - he asked for some statistics about the Fedora 22 Test Day cycle, and I figured I may as well write a tool to produce them...</p> <p>testdays is available from the wikitcms/relval repository - if you have it set up, you can do <code>(yum|dnf) install testdays</code>.</p> <p></p> Post-F22 plans https://www.happyassassin.net/posts/2015/06/09/post-f22-plans/ 2015-06-09T19:21:57Z 2015-06-09T19:21:57Z Adam Williamson <p></p><p>Hi, folks! I've been away on vacation for a few weeks, but I'm back now - if you'd been holding off bugging me about something, please commence bugging now.
Of course, I have a metric assload of email backlog to dig out from under.</p> <p>I'll probably have lots more on the list fairly soon, but just thought I'd kick off with a quick list of stuff I'm intending to do in the post-F22 timeframe:</p> <ul> <li>Replace <a href="https://www.happyassassin.net/wikitcms/">python-wikitcms</a>' homegrown, regex-based mediawiki syntax parsing with <a href="https://github.com/earwig/mwparserfromhell">mwparserfromhell</a> (which will imply packaging it)</li> <li>Improve python-wikitcms' handling of Test Days and add a new tool (a <code>relval</code> analog for Test Days) - initial focus on getting useful stats out, but I may set up a similar wiki template-based Test Day page creation system so you can create the initial page for a Test Day with a single command</li> <li>Write openQA tests for more of the release validation test cases</li> </ul> <p>I should also add better logging and unit tests to python-wikitcms, fedfind and relval, but let's be honest, I'll probably wind up doing shiny exciting things instead...</p> <p>I'm also going to update the ownCloud packages to 8.0.4 (everything except EPEL 6) and 7.0.6 (EPEL 6) soon.</p> <p></p> LinuxFest NorthWest 2015, ownCloud 8 for stable Fedora / EPEL https://www.happyassassin.net/posts/2015/05/01/linuxfest-northwest-2015-owncloud-8-for-stable-fedora-epel/ 2015-05-01T18:09:49Z 2015-05-01T18:09:49Z Adam Williamson <p></p><h3>LinuxFest NorthWest 2015</h3> <p>As I have for many of the last few years, I attended <a href="http://linuxfestnorthwest.org/2015">LinuxFest NorthWest</a> this year. It's been fun to watch this conference grow from a couple hundred people to nearly 2,000 while retaining its community-based and casual atmosphere - I'd like to congratulate and thank the organizers and volunteers who work tirelessly on it each year (and a certain few of them for being kind enough to drive me around and entertain me on Sunday evenings!)</p> <p>The Fedora booth was extra fun this year.
As well as the <a href="https://en.wikipedia.org/wiki/OLPC_XO-1">OLPC XO</a> systems we usually have there (which always do a great job of attracting attention), <a href="https://fedoraproject.org/wiki/User:Paradoxguitarist">Brian Monroe</a> set up a whole music recording system running out of a Fedora laptop, with a couple of guitars, bass, keyboard, and even a little all-in-one electronic drum...thing. He had multitrack recording via <a href="http://ardour.org/">Ardour</a> and guitar effects from <a href="http://guitarix.org/">Guitarix</a>. This was a great way to show off the capabilities of <a href="https://fedoraproject.org/wiki/Fedora_jam">Fedora Jam</a>, and was very popular all weekend - sometimes it seemed like every third person who came by was ready to crank out a few guitar chords, and we had several bass players and drummers too. I spent a lot of time away from the booth, but even when I was there we had pretty much a full band going quite often.</p> <p>It was good to meet Brian and also <a href="https://fedoraproject.org/wiki/User:Immanetize">Pete Travis</a>, who does fantastic things for <a href="https://docs.fedoraproject.org/en-US/index.html">Fedora Docs</a>, as well as <a href="https://fedoraproject.org/wiki/User:Steelaworkn">Jeff Fitzmaurice</a>. <a href="https://fedoraproject.org/wiki/User:Jsandys">Jeff Sandys</a> was there as usual as well, so we got to catch up over lunch.</p> <p>I didn't have a talk this year (I proposed one but it didn't make it through the voting process), so I was able to take it nice and easy and just meet up with folks and watch talks. In between all of that I also got myself 3D scanned by <a href="https://twitter.com/pythondj">Diane Mueller</a>, who had herself set up in a trailer with a big lazy Susan and a Kinect and software I know nothing at all about which managed to produce a scarily accurate model of me and my terrible posture.
She's promised I'll get a tiny plastic bust of myself in the mail sometime soon, though I'm not sure exactly what to do with it...so thanks, Diane!</p> <p>Hard to mention everyone else I ran into or met, but of course there were the (thankfully) inimitable <a href="http://lunduke.com/">Bryan Lunduke</a> and <a href="https://en.opensuse.org/User:Bear454">James Mason</a> of openSUSE fame, who took up their traditional spot opposite us and cried all weekend as they watched the huge crowds flock to our booth...</p> <p>There were a lot of really good presentations. I particularly enjoyed <a href="http://franceshocutt.com/">Frances Hocutt</a>'s <a href="http://linuxfestnorthwest.org/2015/sessions/developers-eye-view-api-client-libraries"><em>A developer's-eye view of API client libraries</em></a>, which sounds a little dry but was very well presented and full of good notes for API client library producers and consumers. Frances wrangles API client libraries for the <a href="https://wikimediafoundation.org/">Wikimedia Foundation</a>, so it was good to get to thank her for her work on the <a href="https://www.mediawiki.org/wiki/API:Client_code">list of Mediawiki client libraries</a> and the <a href="https://www.mediawiki.org/wiki/API:Client_code/Gold_standard">Gold standard</a> set of guidelines for Mediawiki client library designers and all the other things she's done to improve API client libraries for Mediawiki - obviously this has been invaluable to <a href="https://www.happyassassin.net/wikitcms/">wikitcms</a> development.</p> <p>It was also great to meet <a href="http://karlitschek.de/">Frank Karlitschek</a> at his <a href="http://linuxfestnorthwest.org/2015/sessions/crushing-data-silos-owncloud"><em>Crushing data silos with ownCloud</em></a> talk.
I've been packaging ownCloud and making small upstream contributions for a while now so I've chatted with several of the devs on IRC and GitHub - I didn't even know Frank was going to LFNW, so it was an unexpected bonus to be able to say 'hi' in real life, and inspired me to do some work on OC, of which more later!</p> <p>Diane's <a href="http://linuxfestnorthwest.org/2015/sessions/building-next-gen-paas-openshift-docker-kubernetes-project-atomic-oh-my">presentation on the latest bleeding-edge bits of the OpenShift stack</a> went partly over my head - my todo list includes items like 'learn what the hell is going on with all this cloudockershifty stuff already' - but she always presents effectively and it was interesting to learn that OpenShift 3.0 is a real production thing built on <a href="http://www.projectatomic.io/">Project Atomic</a>, which is a bit astonishing since in my head Atomic is still 'this weird experimental thing <a href="http://blog.verbum.org/">Colin Walters</a> keeps bugging me about'. These cloud people sure do move fast. Kids these days, I don't know.</p> <p>Finally it was great to see <a href="https://www.eff.org/about/staff/seth-schoen">Seth Schoen</a> present <a href="http://linuxfestnorthwest.org/2015/sessions/lets-encrypt-free-robotic-certificate-authority"><em>Let's Encrypt</em></a>. I'd heard a little about this <a href="https://letsencrypt.org/">awesome project</a> but it was good to get some details on exactly how it's being implemented and how it will work. Their goal is, pretty simply, to make it possible to get and install a TLS certificate <em>that will be trusted by all clients</em> for any web server by running a single command. 
They're basically automating domain validation: the server comes up with a set of actions that will demonstrate control of the domain, the script running on your web server box (the 'client' in this transaction) demonstrates the ability to make those changes, the server checks they were done and issues a certificate, the script installs it. None of it is rocket science, but it's so immeasurably superior to doing it all one awkward step at a time with openssl-generated CSRs and janky <a href="http://www.startssl.com/">web interfaces</a> that the only wish I have is for it to be in production already. The real goal is to enable a web where unencrypted traffic simply doesn't happen - make it sufficiently easy to get a trusted certificate that, simply, everyone does it. It was pretty cool that at the end of the talk Seth was mobbed by Red Hat / Fedora folks offering help with integration - I'm guessing you'll be able to use this stuff on RHEL and Fedora servers from day 1.</p> <p>All that and we had the now-traditional Friday night games night (beer, pizza, M&amp;Ms, and Cards Against Humanity - really all you need on a Friday night!) too. It was a very enjoyable event as always.</p> <h3>ownCloud 8 for Fedora 21, Fedora 20, and EPEL 7</h3> <p>The other big news I have: ownCloud 8.0.3 came out recently, and it seemed like an appropriate time to kick most of the still-maintained stable Fedora and EPEL releases over to it. 
So there are now <a href="https://admin.fedoraproject.org/updates/owncloud-8.0.3-1.fc21,php-Assetic-1.2.1-1.fc21,php-bantu-ini-get-wrapper-1.0.1-1.fc21,php-doctrine-dbal-2.5.1-1.fc21,php-natxet-cssmin-3.0.2-2.20141229git8883d28.fc21,php-Pimple-3.0.0-1.fc21">Fedora 21</a>, <a href="https://admin.fedoraproject.org/updates/owncloud-8.0.3-1.fc20,php-Assetic-1.2.1-1.fc20,php-bantu-ini-get-wrapper-1.0.1-1.fc20,php-doctrine-dbal-2.5.1-1.fc20,php-natxet-cssmin-3.0.2-2.20141229git8883d28.fc20,php-Pimple-3.0.0-1.fc20,php-PHPMailer-5.2.9-1.fc20">Fedora 20</a> and <a href="https://admin.fedoraproject.org/updates/owncloud-8.0.3-1.el7,php-Assetic-1.2.1-1.el7,php-bantu-ini-get-wrapper-1.0.1-1.el7,php-doctrine-dbal-2.5.1-1.el7,php-natxet-cssmin-3.0.2-2.20141229git8883d28.el7,php-Pimple-3.0.0-1.el7">EPEL 7</a> testing updates providing ownCloud 8.0.3 for those releases. Please do read the update notes carefully, and back everything up before trying the update!</p> <p>EPEL 6 is still on ownCloud 7.0.5 for now; I'd have to bump even more OC dependencies to new versions in order to have ownCloud 8 on EPEL 6. That might still happen, but I decided to get it done for the more up-to-date releases first.</p> <p>This build also includes some fixes to the packaged nginx configuration from <a href="https://blogs.gnome.org/ignatenko/">Igor Gnatenko</a>, so I'd like to thank him very much for those! I still haven't got around to testing OC on nginx, but Igor has been running it, so hopefully with these changes it'll now work out of the box.</p> <p></p> Fedora 22 Alpha, Ipsilon Test Day, openQA progress, and ownCloud status https://www.happyassassin.net/posts/2015/03/11/fedora-22-alpha-ipsilon-test-day-openqa-progress-and-owncloud-status/ 2015-03-11T19:23:44Z 2015-03-11T19:23:44Z Adam Williamson <p></p><h3>Fedora 22 Alpha</h3> <p>The big news this week is definitely that <a href="http://fedoramagazine.org/fedora-22-alpha-released/">Fedora 22 Alpha is released!</a> We even managed to ship this one on time, though as it's an Alpha, it is of course <a href="https://fedoraproject.org/wiki/Common_F22_bugs">not bug-free</a>. For me the nicest thing about Fedora 22 is all the improvements in GNOME 3.16, which Alpha has a pre-release of. The new notification system is a huge improvement. I'm fighting some bugs in Evolution's new composer, but mostly it's working out.</p> <h3>Ipsilon Test Day</h3> <p>We also have the next Fedora 22 Test Day coming up tomorrow, <a href="https://fedoraproject.org/wiki/Test_Day:2015-03-12_Ipsilon">Ipsilon Test Day</a>. This is one of those pretty specialized events that requires some interest in and probably knowledge of a specialist area; it also requires a fairly powerful test system with a decent amount of spare disk space.</p> <p><a href="https://fedorahosted.org/ipsilon/">Ipsilon</a> provides SSO services together with separate identity management systems; it's being implemented into Fedora's identity/authentication system to replace Fedoauth (or this may have already happened, I'm not quite sure on the timelines!).
It's one of those things where you go to some website, and you need to log in, and you get redirected through an authentication system that might be shared with lots of other websites - like when you log in to various Fedora sites through the shared Fedora authentication process.</p> <p>If you're already familiar with <a href="http://www.freeipa.org/page/Main_Page">FreeIPA</a> and curious about web application-focused SSO on top of it, you may well be interested in this Test Day. So come join us in #fedora-test-day on Freenode IRC and help out!</p> <h3>openQA automated install testing for Fedora</h3> <p>Personally, I've been working on our <a href="https://os-autoinst.github.io/openQA/">openQA</a>-based automated install testing system for the last few days.</p> <p><em>Waaaaaaaaait</em>, I hear you say, <em>Fedora has an automated install testing system</em>?</p> <p>Well, it does now. We are still expecting <a href="https://taskotron.fedoraproject.org/">Taskotron</a> to be our long-term story in this area, but after the awesome <a href="https://www.rebelmouse.com/openSUSE/richard-brown-testing-fedora-w-907001233.html">Richard Brown demonstrated</a> the feasibility of getting a quick-and-dirty Fedora deployment of openQA running, some of the Red Hat folks working on Fedora QA in our Brno office - <a href="http://jansedlak.cz/">Jan Sedlak</a> and <a href="https://fedoraproject.org/wiki/User:Jskladan">Josef Skladanka</a> - threw together an implementation in literally a few days, as a stopgap measure until Taskotron is ready.</p> <p>The original couple of deployments of the system are behind the Red Hat firewall, just because it works out easier for the devs that way, but I now have my <a href="https://openqa.happyassassin.net">own instance</a> which is running on my own servers and is entirely public; you can look in on all my experiments there.
(Please don't download any really huge files from it - all the images I'm testing are available from the public Fedora servers and mirrors, and I have limited bandwidth).</p> <p>The <a href="https://bitbucket.org/rajcze/openqa_fedora">Fedora tests</a> and the <a href="https://bitbucket.org/rajcze/openqa_fedora_tools">dispatcher and other miscellaneous bits</a> are available, and there's a <a href="https://bitbucket.org/rajcze/openqa_fedora_tools/src">deployment guide</a> - click on <code>InstallGuide.txt</code>, I'm not giving a direct link because BitBucket doesn't seem to make it possible to do a direct link to the current master revision of an individual file - if you're feeling adventurous and want to set up your own deployment. We are running our instances on openSUSE at the moment, because packaging openQA for Fedora and making any necessary adjustments to its layout would sort of defeat the object of a quick-and-dirty system. And hey, openSUSE is a pretty cool distro too! We're all friends.</p> <p>The first thing I did was make the system use <a href="https://www.happyassassin.net/fedfind/">fedfind</a> for locating, downloading and identifying (in terms of version and payload) images. This had the benefit of making it work for builds other than Rawhide nightlies.</p> <p>I have my own forks of the <a href="https://www.happyassassin.net/cgit/openqa_fedora/">tests</a> and <a href="https://www.happyassassin.net/cgit/openqa_fedora_tools/">tools</a> where I'm keeping topic branches for the work I'm doing. Currently I'm working on the <em>live</em> branches, where I'm implementing testing of live images (and really just uncoupling the system from some expectations it has about only testing one image per build per arch in general). It's mostly done, and I'm hoping to get it merged in soon. After that I'm probably going to work on getting the system to report bugs when it encounters crashes.
It's painful sometimes figuring out how to do stuff in perl, but mostly it's been pretty fun working with the system; you can argue that a test system based on screenshot recognition is silly, but really, just about any form of automated testing for interactive interfaces is silly in <em>some</em> way or another, and at least this one's out there and working.</p> <p>The openQA devs have been very open to questions and suggestions, so thanks to them! I have a <a href="https://github.com/os-autoinst/openQA/commit/d26324790aedff77b022a5b1fb69706c2796361e">couple</a> of trivial <a href="https://github.com/os-autoinst/openQA/commit/ec0479d6578fcfca78764f2d2f73ad489e187e21">commits</a> upstream to fix issues I noticed while running the development packages on my instance.</p> <p>When you look at Fedora 22 (and later?) release validation pages, results you see from 'coconut' and 'colada' are from the openQA instances - 'coconut' results are from the deployments managed by the main devs, and 'colada' results are from my deployment.</p> <h3>ownCloud updates</h3> <p>Finally, before I dived into openQA I did find a bit of time to work on ownCloud. I got ownCloud 8 into good enough shape (I think!) to go in Rawhide and Fedora 22: it's now in Rawhide, and there's an <a href="https://admin.fedoraproject.org/updates/FEDORA-2015-3524/owncloud-8.0.1-0.1.rc1.fc22,php-Pimple-3.0.0-1.fc22,php-doctrine-dbal-2.5.1-1.fc22">update pending</a> for Fedora 22.</p> <p>The other distro which gets a major update is EPEL 6; there's an <a href="https://admin.fedoraproject.org/updates/FEDORA-EPEL-2015-1155/owncloud-7.0.4-3.el6,php-google-apiclient-1.1.2-2.el6">update pending</a> for 7.0.4 (current stable for EPEL 6 is OC 6).</p> <p>7.0.4-3 is also pending as an update for the stable Fedora releases and EPEL 7; for those releases it's a minor update which mostly tweaks the Apache HTTP configuration file handling. 
I came up with a new layout which, as well as making the App Store actually <em>work</em>, should make it easier to 'fire and forget' a typical ownCloud deployment's HTTP configuration; after you do the initial setup, you can run <code>ln -s /etc/httpd/conf.d/owncloud-access.conf.avail /etc/httpd/conf.d/z-owncloud-access.conf</code> to open up access from anywhere, and you won't have to manually check config files on future upgrades, because I'll keep <code>owncloud-access.conf.avail</code> up to date with any directory/path changes that crop up in future releases. If you have an existing ownCloud install and you'd like to 'migrate', I recommend:</p> <ul> <li>Move any customizations you have to the HTTP configuration into an override file, e.g. <code>/etc/httpd/conf.d/z-owncloud-local.conf</code></li> <li>Restore the clean packaged copy of <code>/etc/httpd/conf.d/owncloud.conf</code></li> <li>Run <code>ln -s /etc/httpd/conf.d/owncloud-access.conf.avail /etc/httpd/conf.d/z-owncloud-access.conf</code> (assuming you want to allow access from any host!)</li> <li>Never edit any of the packaged files in future (including the <code>.inc</code> and <code>.avail</code> file(s)), always do customizations in override files</li> </ul> <p>Happy Clouding!</p> <p></p>
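If you'd rather script the migration steps above, they can be sketched as a small shell function. This is just a sketch, not something shipped in the packages: the function name is made up, and the <code>owncloud.conf.rpmnew</code> handling assumes you let RPM set the clean packaged copy aside during the upgrade. Taking the conf.d directory as an argument means you can point it at a scratch directory first to see what it does before touching the live <code>/etc/httpd/conf.d</code>:

```shell
#!/bin/sh
# Sketch of the ownCloud httpd config migration steps described above.
# Takes the Apache conf.d directory as an argument so it can be dry-run
# against a scratch directory instead of the live /etc/httpd/conf.d.
migrate_owncloud_httpd_conf() {
    confd="$1"

    # 1. Park local customizations in an override file that sorts last.
    touch "$confd/z-owncloud-local.conf"

    # 2. Restore the clean packaged owncloud.conf if the upgrade left
    #    one aside (assumption: RPM saved it as owncloud.conf.rpmnew).
    if [ -f "$confd/owncloud.conf.rpmnew" ]; then
        cp "$confd/owncloud.conf.rpmnew" "$confd/owncloud.conf"
    fi

    # 3. Symlink the packaged 'allow from anywhere' snippet into place
    #    (skip this if you don't want access from any host).
    ln -sf "$confd/owncloud-access.conf.avail" "$confd/z-owncloud-access.conf"
}

# Dry run against a scratch directory rather than the live config:
scratch=$(mktemp -d)
touch "$scratch/owncloud-access.conf.avail"
migrate_owncloud_httpd_conf "$scratch"
readlink "$scratch/z-owncloud-access.conf"
```

After running it for real, reload Apache (<code>service httpd reload</code> or <code>systemctl reload httpd</code>, depending on your release), and from then on only ever touch the <code>z-owncloud-local.conf</code> override.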