ownCloud news (7.0.3 coming)

I should have some good Fedlet news soon (later today or tomorrow), but in the meantime, I also spent some time working on ownCloud today.

ownCloud 7.0.3 has been tagged and tarballed upstream, and I've bumped the Fedora and EPEL 7 builds in Koji already. I have it running on my 'test' (that is, er, production) server and it looks fine, so I'll probably edit 7.0.3 into the EPEL 7 update and submit new updates for F20 and F21 tomorrow. Rawhide is already on 7.0.3.

I also sent a chunk of work upstream. I've been trying to get a pull request merged which updates ownCloud's bundled copy of Google's PHP API client library to the current upstream 1.0 series - they're using an old, unsupported and kinda buggy release. I really just wanted to do this to make the library easier to unbundle for Fedora, but somehow it turned into a bit of work: fixing some bugs in the ownCloud code that uses the library - OC can use Google Drive as an 'external storage' source, i.e. you can map a Google Drive account to a directory in an OC install - and dealing with some issues in the PHP library itself.

So aside from the big(gish) PR which updates the library, I submitted a few other miscellaneous fixes based on unit testing, which should actually make the thing work a lot better than it did before. Not that I'm actually at all interested in Google Drive, but hey! I'm hoping this will provide some motivation to get the 1.x update PR merged, and then I can drop one more bundled dep from the OC packages.

Fedora Server Test Day today, 2014-11-07

It's Test Day time again! Today, 2014-11-07, is Fedora Server Test Day. We'll be running a range of tests against the Fedora 21 Server Product, trying to test out all the major features beyond what we already tested in Alpha and Beta release validation.

This is a great way to learn about Server's major innovations, Rolekit and Cockpit, so if you want to find out about these new features, come along - if it all goes south, there'll be experts around to guide you.

For a Test Day like this it'll work out best if you have a disposable test system to work with; a virtual machine would do the job just fine. The wiki page has full instructions, and Server working group members and QA folks will be around all day in the Test Day IRC channel - #fedora-test-day on Freenode - to help out testers and debug problems. If you don't know how to use IRC, you can read these instructions, or just use WebIRC.

Fedora 21 Beta, wikitcms/relval improvements, more stupid mediawiki tricks, working on laptops...

I really should blog more, shouldn't I?

Today's big news, of course, is that Fedora 21 Beta is out!

Fedora 21 Beta

It was an interesting release to test, with a lot of the practical consequences of the Fedora.next 'Product-ization' effort getting shaken out. There are still some fudges and rough bits here and there in the practical implementation of that vision, but it's not that bad, for a first cut. It's definitely great to see the Cloud images showing up and the awesome Roshi co-ordinating test efforts there - I think the Product approach is really paying off in terms of highlighting the Fedora Cloud story.

The Server bits are also coming together - there's still further to go, but in Beta you can use the current Rolekit code successfully to set up a Server system as a 'domain controller' (a FreeIPA server) pretty painlessly, and Cockpit is pretty great. It's neat to do a Server installation and go straight to its Cockpit interface to manage it.

I'm pretty happy with the quality of 21 Beta as well, especially given all the moving parts we had to keep track of. The big emergency - there's always one - turned out to be a somewhat nasty upgrade issue we discovered after we'd already signed off on the Beta images. We've been able to come up with a pretty good strategy for dealing with it, though, and the only practical consequence for most folks should be that trying to upgrade to 21 with fedup will fail (safely, and intentionally) until we've got images we're happy are safe from this bug lined up later today. No-one should be able to mess up their system due to it. Of course, upgrading is inherently dangerous and upgrading to a pre-release is more so, and you should never do it without solid backups and a recovery plan.

Relval / wikitcms

Since I last blogged about wikitcms/relval I've cut a few more releases. 1.2 added a 'post-processor' to the testcase-stats sub-command which basically cleans up its output in a couple of ways; it handles things better when tables are renamed or test cases moved around or added or removed between composes. I'm hosting testcase_stats output for all Fedora releases from 12 onwards - the output for the current release is updated daily or hourly, depending on how hot testing for the release is at present (right now I'm on daily updates for 21).

relval 1.3 added a report-results sub-command: you can now report release validation results using a rather 1980s-style TUI rather than by editing wiki pages. It's a bit silly, but I found at least for me with 21 Beta validation it was rather more efficient than editing the wiki pages directly. It's pretty easy to try out, so give it a shot.

Currently I'm working on making the whole project more generic. wikitcms 1.4 added some preliminary support for QATracker, Ubuntu's manual test management system - not because I'm specifically interested in QATracker, but because it was a handy real-world implementation of something similar to Fedora release validation using a different system.

After 1.4 I decided my initial approach - adding more generic object types within wikitcms - wasn't quite the right one. Instead I'm planning to revert wikitcms to being strictly a client for the Fedora 'TCMS', and instead add a third component, perhaps called python-libtcms, whose job will be to provide a consistent interface to multiple TCMSes. libtcms will have concepts like 'unique test instances' and 'results' implemented in such a way that libtcms consumers just pick a particular TCMS to interface with, and can then use the same methods and so on regardless of what TCMS they picked.
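
To make the planned split concrete, here's a rough Python sketch of the kind of interface libtcms might present. All the names here are my own illustration of the design described above, not a real (or final) API:

```python
from abc import ABC, abstractmethod

# Hypothetical sketch of the planned libtcms design: a consistent
# interface over multiple test case management systems. Class and
# method names are illustrative only.
class TCMS(ABC):
    @abstractmethod
    def unique_tests(self, event):
        """Return the unique test instances for a test event."""

    @abstractmethod
    def results(self, test):
        """Return the results recorded for a unique test."""

class WikiBackend(TCMS):
    """Fedora wiki backend (would wrap wikitcms/mwclient)."""
    def unique_tests(self, event):
        return []  # stub: would parse the event's wiki pages

    def results(self, test):
        return []  # stub: would parse {{result}} templates

class QATrackerBackend(TCMS):
    """Ubuntu QATracker backend (would wrap python-qatracker)."""
    def unique_tests(self, event):
        return []  # stub: would list the build's test cases

    def results(self, test):
        return []  # stub: would fetch results via the XML-RPC API
```

A consumer like relval would just instantiate whichever backend it wants and call the same methods either way.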

I've already done a proof of concept where a minimally-modified version of relval was able to generate testcase-stats results for Ubuntu instead of Fedora; I'm hoping to get the production implementation done and probably released as 1.5 or 2.0 some time soon. Then I'll look at adding additional backends, perhaps for Moztrap next (I picked QATracker as the first non-wiki system that got implemented because it's a real system with real-world data and it already has a nice simple Python module for talking to its API).

Stupid Mediawiki Tricks

Along the way with Fedora 21 validation and relval work, I added a couple of refinements to the wiki magic bits of the Fedora release validation complex. I'm still trying to decide if the extent to which we've managed to abuse mediawiki for these purposes is cool, terrifying, or both. Anyhow, there's now a new CurrentFedoraCompose template which simply identifies the 'current' TC or RC. This is updated automatically by relval compose when invoked with --current in recent versions. I initially wanted to edit the Test Results:Current (testtype) Test redirect pages to use that template, so they didn't each have to be edited each time a new compose is run, but it turns out you can't use templates in redirects, unfortunately.

Still, there are a couple of other places so far where the template has come in handy. When you run relval report-results without specifying the compose you want to file results for, it checks the contents of the template and defaults to filing results against that compose (so it usually defaults to the right one).
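
The defaulting logic amounts to fetching the template's content and parsing the compose identity out of it. A minimal sketch, assuming the template body looks like "21 Beta TC3" (the exact format is an assumption here, and relval's real code may differ):

```python
# Sketch: default to the current compose by parsing the
# CurrentFedoraCompose template's body. The "release milestone compose"
# body format is an assumption for illustration.
def default_compose(template_text):
    release, milestone, compose = template_text.strip().split()[:3]
    return {"release": release, "milestone": milestone, "compose": compose}

# Fetching the template text with mwclient would look something like:
#   import mwclient
#   site = mwclient.Site('fedoraproject.org', path='/w/')
#   text = site.pages['Template:CurrentFedoraCompose'].text()
```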

The stupid mediawiki trick I was thinking of, though, is on the validation instructions template. This is transcluded in every results page, with variables specifying the compose version and the test type. I took advantage of that and added a warning that's only displayed conditionally. If the page's compose version matches CurrentFedoraCompose, it isn't displayed. Otherwise, it is. The warning says that the page is not for the current compose, and links to the correct page for the current compose.

This uses a mediawiki extension called ParserFunctions, which lets you do clever stuff like that - it has a few special templates which let you do conditionals and evaluate expressions and stuff. This particular trick uses the {{#ifeq:}} function, which tests whether some string (in this case, the combination of the page's {{{release}}}, {{{milestone}}} and {{{compose}}} variables) matches some other string (in this case, the current contents of the {{CurrentFedoraCompose}} template), and produces different content depending on the result. Here, the content if they match is nothing at all, and the content if they don't is the "Warning! Not the current compose" note.
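
In Python terms, the conditional boils down to something like this - an illustrative analogue of the wikitext logic, not real template code:

```python
def compose_warning(release, milestone, compose, current_compose):
    """Rough Python analogue of the {{#ifeq:}} trick described above:
    produce nothing when the page matches the current compose, and a
    warning otherwise. Names and wording here are illustrative."""
    page_compose = f"{release} {milestone} {compose}"
    if page_compose == current_compose:
        return ""
    return (f"Warning! This page is for {page_compose}, "
            f"not the current compose ({current_compose}).")
```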

This saves poor Andre Robatino going through all the old compose pages adding the same warning manually!

Working on laptops

Kevin Fenzi's on a mission to blog daily at present, and today he explained why he uses a laptop as his primary work machine. I found it interesting because I have more or less the opposite experience - I use a desktop full-time. I usually go to the UK for a few weeks a year and have to use a laptop as my main system there, and I find it terrible and can't wait to get back to my desk.

I post pictures of my desk occasionally; the latest is still pretty much accurate. I think the main thing that makes me more productive at the desktop is the two large displays, followed by a real mouse and a real mechanical keyboard. I think the display issue varies a lot depending on your work; my work tends to involve a lot of context switching and writing one thing with reference to another thing, which is probably why I find the dual displays so valuable. I can imagine it'd matter much less if you focused on one task at a time and didn't need reference material so much, or if a lot of your work was done through remote console sessions in any case.

My typical dual display workflows are having mail open on one head and a browser on the other (often writing an email with reference to a web page), having a text editor on one head and a browser on the other (lately the text editor usually has some Python that doesn't work, and the browser has sixteen tabs open on either StackOverflow or the Python docs...), having test VMs open on one head and IRC running on the other, stuff like that. On a laptop I just never feel like I have enough room to keep an eye on everything at once.

I've never found any kind of laptop-y pointing device which could get anywhere close to a real mouse. You can carry a mouse around with a laptop, of course, but then you always have to find the damn thing and set it up. My desktop mouse sits on my desk and is always plugged into my desktop, it's an attractively simple scenario. :)

And more!

Well, I guess there's not a lot more? I spent some time polishing the ownCloud updates for Fedora 19 and EPEL 6 after I first blogged about them - I think they should both be good to go, though I'm hoping for at least some Bodhi feedback before I push them. I'm nearly in a position to do an ownCloud 7 build for EPEL 7, as well - I'm just waiting on an EPEL 7 branch for php-phpseclib-crypt-rijndael, then I can do a few builds / tags and a bit of testing and send a build out. I'm pretty sure all the necessary major bits are available for the build to work. I need to find a few hours to clean up some upstream submissions and ideas I have lying around for OC, as well.

Fedlet is sort of sitting third on my passion project list behind wikitcms and ownCloud ATM - apologies to Fedlet folks waiting impatiently for new bits :) I will try and do a Fedora 21 Betaish build with a 3.18 kernel some time soon, though.

PSA: Don't fedup to Fedora 21 right now (EDIT: you can now!)

EDIT 2014-11-05: It's fine to fedup to F21 now - at least so far as I know, and so far as the bug described in this post is concerned. We've made sure in several different ways that you should not possibly be able to hit the 15 minute timeout bug.

It's probably not a good idea to try and upgrade to Fedora 21 with fedup right now.

Currently Fedora 21 has a build of systemd that includes a new feature, added upstream after the release of 216, which is intended to time out system startup if it's not complete after 15 minutes - the idea being to avoid things like your laptop melting or starting a fire in your bag if it gets accidentally powered on.

Unfortunately, it turns out that having a timeout that hard powers down the system if boot hasn't completed after 15 minutes doesn't work very well with fedup, because while fedup's actual 'install the updated packages' step is running, systemd considers that boot has not 'completed'. So if you try and fedup to Fedora 21 using a fedup environment that has the affected systemd build (like the one in the Beta tree, and also in the current 21 'stable' tree), and your 'install updated packages' boot takes more than 15 minutes, it'll just suddenly cut off and shut down. Obviously, there's quite a high chance that'll leave the system in a broken state.

So: don't do it. Really, don't.

We're currently investigating the best way to deal with this problem, and we'll certainly try to have it all straightened out by Beta release date (Tuesday). But of course, it's never a good idea to upgrade a production system to a pre-release, especially if you don't have good backups!

ownCloud updates for Fedora 19 and EPEL 6

Hi, folks. Instead of relval (for a change) I spent some of my non-work time today working on ownCloud packaging (I'm the owner/'primary contact'/whatever for the ownCloud package, these days).

I've been in touch with ownCloud's awesome security folk, Lukas Reschke, recently, and he confirmed that the ownCloud version currently in Fedora 19 and EPEL 6 - 4.5.13 - is known to have some security vulnerabilities. It's also unmaintained and is very unlikely to be upgradable directly to ownCloud 7, so I really needed to Do Something for folks on those releases.

So I did! There's now an ownCloud 5.0.17 update candidate for Fedora 19 and an ownCloud 6.0.5 update candidate for EPEL 6. Well, the ownCloud 6 update for EPEL 6 has been around for a while, but it never actually installed before - it had all sorts of dep issues.

There is also an ownCloud 5 build for EPEL 6 in my oc5 side repository - you can grab https://www.happyassassin.net/temp/oc5_repo/oc5.repo and put it in /etc/yum.repos.d to enable that. This is intended for upgrading: ownCloud only officially supports upgrading one major version at a time, so if you have an existing EPEL 6 ownCloud 4.5.13 deployment it is probably best to upgrade from 4.5.13 to 5.0.17 via the side repo, then from 5.0.17 to 6.0.5. A direct 4.5-to-6.x upgrade may work - I haven't tested it - but AFAIK it's not supported and can't be relied upon.

I've actually done some testing on these: I've tested upgrade of an F19 4.5.13 MySQL deployment to 5.0.17 and clean 5.0.17 install, and both worked fine. On EPEL 6 (on a CentOS 6.6 install) I tested upgrade from 4.5.13 to 5.0.17 via the side repo, then upgrade to 6.0.4-3, then upgrade to 6.0.5-1; it survived the whole process without obvious problems. I didn't test fresh deployment of OC 5 or OC 6 on EPEL 6, yet.

ownCloud 5 is still in maintenance upstream for a few months, and ownCloud 6 should be maintained for a while. I plan to send ownCloud 6 to Fedora 19 just before it goes EOL, the hope being that people have a few weeks to upgrade from 4.5 to 5 now, then they can upgrade from 5 to 6 before upgrading to Fedora 20 or 21, where they'll get 7. I wanted to take roughly the same approach for EPEL 6, but Remi sent a 6.0.2 update to testing some time ago and I don't think I can 'rewind' to 5.x now that that's happened.

I intend to keep the oc5 and oc6 repos available with both F19 and EPEL 6 builds of the latest 5 and 6 builds as long as I can (notwithstanding the /temp in the path - I really shouldn't have put 'em in that directory...), so there's at least some way to do a staged upgrade of old installations if you miss any boats.

I'll try and look at EPEL 7 later this week; I'll probably aim to start it out on 6.x then see if we can get it to 7.x. I think some dependent packages do not yet have EPEL 7 branches, though, so I may have to wait on other maintainers before I can possibly do an EPEL 7 build.

Being a Sporadic Overview Of Linux Distribution Release Validation Processes

Yup, that's what this is. It's kind of in-progress - I'll probably add to it later; I haven't looked into what Arch or Debian or a few other likely suspects do.

Fedora

Manual testing

Our glorious Fedora uses Mediawiki to manage both test cases and test results for manual release validation. This is clearly ludicrous, but works much better than it has any right to.

'Dress rehearsal' composes of the entire release media set are built and denoted as Test Composes or Release Candidates, which can be treated interchangeably as 'composes' for our purposes here. Each compose represents a test event. In the 'TCMS' a test event is represented as a set of wiki pages; each wiki page can be referred to as a test type. Each wiki page must contain at least one wiki table with the rows representing a concept I refer to as a unique test or a test instance. There may be multiple tables on a page; usually they will be in separate wiki page sections.

The unique, identifying attributes of a unique test are:

  1. The wiki page and page section it is in
  2. The test case
  3. The user-visible text of the link to the test case, which I refer to as the 'test name'

Unique tests may share up to two of those attributes - two tests may use the same test case and have the same test name but be in different page sections or pages, or they may be in the same page section and use the same test case but have a different test name, for instance.

The other attributes and properties of a unique test are:

  1. A milestone - Alpha, Beta or Final - indicating the test must be run for that release and later releases
  2. The environments for the test, which are the column titles appearing after the test case / test name in the table in which it appears; the environments for a given test can be reduced from the set for the table in which it appears by greying out table cells, but not extended beyond the columns that appear in the table
  3. The results that appear in the environment cells

Basically, Fedora uses mediawiki concepts - sections and tables - to structure storage of test results.
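
To put that structure in code terms, here's an illustrative Python model of a unique test and its properties. This is just the shape of the data, not wikitcms' actual classes:

```python
from dataclasses import dataclass, field

# Illustrative model of the structures described above -- not
# wikitcms' real classes, just the shape of the data.
@dataclass(frozen=True)
class UniqueTest:
    # The identifying attributes: tests may share any two of
    # (page + section, test case, test name), but not all three.
    page: str        # results wiki page, e.g. an 'Installation' page
    section: str     # page section the table lives in
    testcase: str    # the test case the row links to
    name: str        # user-visible link text, the 'test name'

@dataclass
class TestInstance:
    test: UniqueTest
    milestone: str                 # 'Alpha', 'Beta' or 'Final'
    environments: tuple = ()       # column titles after the test name
    results: dict = field(default_factory=dict)  # env -> list of results
```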

The Summary page displays an overview of results for a given compose, by transcluding the individual result pages for that compose.

Results themselves are represented by a template, with the general format {{result|status|username|bugs}}.
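
As a hedged sketch of what consuming those templates involves, here's a minimal parser for that result format - the real parsing lives in wikitcms and is certainly more robust than this:

```python
import re

# Minimal sketch: pull {{result|status|username|bugs}} templates out
# of a chunk of wiki table text. Illustrative only; wikitcms' real
# parser handles far more edge cases.
RESULT_RE = re.compile(r"\{\{result\|(\w+)\|([^|}]*)\|?([^}]*)\}\}")

def parse_results(wikitext):
    results = []
    for status, user, bugs in RESULT_RE.findall(wikitext):
        results.append({
            "status": status,
            "user": user,
            "bugs": [b for b in bugs.split("|") if b],
        })
    return results
```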

Fedora also stores test cases in Mediawiki, for which it works rather well. The category system provides a fairly good capability to organize test cases, and templating allows various useful capabilities: it's trivial to keep boilerplate text that appears in many test cases unified and updated by using templates, and they can also be used for things like {{FedoraVersion}} to keep text and links that refer to version numbers up to date.

Obvious limitations of the system include:

  • The result entry is awkward, involving entering a somewhat opaque syntax (the result template) into another complex syntax (a mediawiki table). The opportunity for user error here is high.

  • Result storage and representation are tightly coupled: the display format is the storage format, more or less. Alternative views of the data require complex parsing of the wiki text.

  • The nature of mediawiki is such that there is little enforcement of the data structures; it's easy for someone to invent a complex table or enter data 'wrongly' such that any attempt to parse the data may break or require complex logic to cope.

  • A mediawiki instance is certainly not a very efficient form of data storage.

Programmatic access

My own wikitcms/relval provides a Python library for accessing this 'TCMS'. It treats the conventions/assumptions about how pages are named and laid out, the format of the result template etc. as an 'API' (and uses the Mediawiki API to actually interact with the wiki instance itself, via mwclient). This allows relval to handle the creation of result pages (which sort of 'enforces' the API, as it obviously obeys its own rules/assumptions about page naming and so forth) and also to provide a TUI for reporting results. As with the overall system itself this is prima facie ridiculous, but actually seems to work fairly well.

relval can produce a longitudinal view of results for a given set of composes with its testcase-stats sub-command. I provide this view here for most Fedora releases, with the results for the current pre-release updated hourly or daily. This view provides information for each test type on when each of its unique tests was last run, and a detailed page for each unique test showing its results across all of the release's composes.

Automated testing

Fedora does not currently perform any significant automated release validation testing. Taskotron currently only runs a couple of tests that catch packaging errors.

Examples

  • Fedora result page
  • Fedora test case: the page source demonstrates the use of templates for boilerplate text

Ubuntu

Manual testing

The puppy killers over at Ubuntu use a system called QATracker for manual testing. Here is the front end for manual release validation.

QATracker stores test cases and products (like Kubuntu Desktop amd64, Ubuntu Core i386). These are kind of 'static' data. Test events are grouped as builds of products for milestones, which form part of a series. A series is something like an Ubuntu release - say, Utopic. A milestone roughly corresponds to a Fedora milestone - say, Utopic Final - though there are also nightly milestones which seem to fuzz the concept a bit. Within each milestone is a bunch of builds, of any number of products. There may be (and often is) more than one build for any given product within a single milestone.

So, for instance, in the Utopic Final milestone we can click See removed and superseded builds too and see that there were many builds of each product for that milestone.

Products and test cases are defined for each series. That is, for the whole Utopic series, the set of products and the set of test cases for each product is a property of the series, and cannot be varied between milestones or between builds. Every build of a given product within a given series will have the same test cases.

Test cases don't seem to have any capability to be instantiated (as in Moztrap) - it's more like Fedora, where a single test case is a single test case. I haven't seen any capacity for 'templating', but I may just have missed it.

Results are stored per build (as we've seen, a build is a member of a milestone, which is a member of a series). There is no concept of environments (which is why Ubuntu encodes the environments into the products) - all the results for a single test case within a single build are pooled together.
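
The hierarchy described above can be sketched as plain Python data structures. Class and field names here are my own illustration, not python-qatracker's:

```python
from dataclasses import dataclass, field

# Rough sketch of the QATracker data model as described above;
# names are illustrative, not python-qatracker's real API.
@dataclass
class Result:
    status: str                    # 'pass' or 'fail' -- no 'warn' analog
    bugs: list = field(default_factory=list)

@dataclass
class Build:
    product: str                   # e.g. 'Ubuntu Desktop amd64' -- arch baked in
    results: dict = field(default_factory=dict)  # test case name -> [Result]

@dataclass
class Milestone:
    name: str                      # e.g. 'Utopic Final'
    builds: list = field(default_factory=list)

@dataclass
class Series:
    name: str                      # e.g. 'Utopic'
    milestones: list = field(default_factory=list)
```

Note that results hang directly off builds per test case, with no environment dimension - which is exactly why the environments end up encoded into the products.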

The web UI provides a fairly nice interface for result reporting, much nicer than Fedora's 'edit some wikitext and hope you got it right'. Results have a status of pass or fail - there does not appear to be any warn analog. Bug reports can be associated with results, as in Fedora, as can free text notes, and hardware information if desired.

QATracker provides some basic reporting capabilities, but doesn't have much in the way of flexible data representation - it presumably stores the data fairly sensibly and separately from its representation, but doesn't really provide different ways to view the data beyond the default web UI and the limited reporting capabilities.

The web UI works by drilling down through the layers. The front page shows a list of the most recent series with the milestones for each series within them, you can click directly into a milestone. The milestone page lists only active builds by default (but can be made to show superseded ones, as seen above). You can click into a build, and from the build page you see a table-ish representation of the test cases for that build, with the results (including bug links) listed alongside the test cases. You have to click on a test case to report a result for it. The current results for that test case are shown by default; the test case text is hidden behind an expander.

Limitations of the system seem to include:

  • There's no alternative/subsidiary/superior grouping of tests besides grouping by product, and no concept of environments. This seems to have resulted in the creation of a lot of products - each real Ubuntu product has multiple QATracker products, one per arch, for instance. It also seems to lead to duplication of test cases to cover things like UEFI vs. BIOS, which in Fedora's system or Moztrap can simply be environments.

  • Test case representation seems inferior to Mediawiki - as noted, template functionality seems to be lacking.

  • There seems to be a lack of options in terms of data representation - particularly the system is lacking in overviews, forcing you to drill all the way down to a specific build to see its results. There appears to be no 'overview' of results for a group of associated builds, or longitudinal view across a series of builds for a given product.

Examples

Programmatic access

QATracker provides an XML-RPC API, and python-qatracker is a Python library for it. It provides access to milestone series, milestones, products, builds, results and various properties of each. I was able to re-implement relval's testcase-stats for QATracker in a few hours.

Automated testing

Ubuntu has what appears to be a Jenkins instance for automated testing. This runs an apparently fairly small set of release validation tests.

OpenSUSE

Manual testing

Well...they've got a spreadsheet.

Automated testing

This is where OpenSUSE really shines - clearly most of their work goes into the OpenQA system.

The main front end to OpenQA provides a straightforward, fairly dense flat view of its results. It seems that test suites can be run against builds of distributions on machines (more or less), and the standard view can filter based on any of these.

The test suites cover a fairly extensive range of installation scenarios and basic functionality checks, comparable to the extent of Fedora's and Ubuntu's manual validation processes (though perhaps not quite so comprehensive).

An obvious potential drawback of automated QA is that the tests may go 'stale' as the software changes its expected behaviour, but at a superficial evaluation SUSE folks seem to be staying on top of this - there are no obvious absurd 'failure' results from cases where a test has gone stale for years, and the test suites seem to be actively maintained and added to regularly.

The process by which OpenQA 'failures' are turned into bug reports with enough useful detail for developers to fix them is difficult to trace, at least from a quick scan of the documentation on the SUSE wiki.

I wrote a thing: relval / wikitcms 1.1, Fedora QA wiki test management tool

I think I might finally have to hand in my 'I don't code' card.

As I buried in another post last week, I wrote a thing - Relval (and wikitcms). relval is a tool for interacting with the Fedora wiki, as it is used by the Fedora QA team for managing test results. It's an interface to our test case management system...which is a wiki. :)

relval was originally written to make it easy to create the release validation testing result pages - you know, Current Installation Test and all the rest - which it still does (the Fedora 21 Beta TC2 and TC3 validation pages were created with relval). With 1.1, though, it grew two new capabilities, user-stats and testcase-stats.

These are re-implementations of (respectively) stats-wiki.py and testcase-stats, written by Kamil Paral and Josef Skladanka. They generate statistics about Fedora validation testing. user-stats generates the statistics about which users contributed results to a given set of pages, which are used in the "Heroes of Fedora testing" posts, like the Fedora 20 one on Mike's blog. testcase-stats generates statistics about test coverage - usually across an entire release series, like 20 or 21. See its current output for the last few Fedora releases. From the wiki pages, it's hard to know when a test that needs to be run for a release has not yet been done for any TC / RC, because you can only look at one at a time. testcase-stats makes it easier to see at a glance and in detail which tests have been run against which releases, so you can pick out tests that have been neglected and need to be run ASAP.

wikitcms is a Python library which does the actual work of interfacing with the wiki, using the mwclient Mediawiki API library. relval is backed by wikitcms.

If you want to play with relval, or use wikitcms for your own (nefarious?) purposes, you can easily get it from my repository. Grab wikitcms.repo and place it in /etc/yum.repos.d, then run yum install relval, and you should get relval and future updates for it. Please use --test if playing with compose - it operates against the staging wiki instead of the production one, for test purposes. The stats sub-commands only consume data from the wiki, they don't modify it, so they should be safe to play with (they do implement --test, just in case you want to generate test data for some tricky corner case or something).

Fedlet 2014-09-29 released

Hi, folks - announcing a new Fedlet image. sha256sum is aa2f1150e40965471fc2888db6aad7da52d98f36ce1224b630ba5ed99b28fd5e . This has been rigorously tested by me booting it and pressing a few buttons.

It's built on a 3.17 rc6 kernel and Fedora 21 userland. It includes a couple of patches from Jan-Michael Brummer: one to make the 'Home' button on the Venue 8 Pro act as a Super (Start) key - I tested that, it works, thanks Jan! - and one which should enable microphone input. I haven't tried that one myself yet.

I think this one should fix the problem which caused the last one not to install, but I haven't tested, so give it a shot and let me know.

Fedora release validation, through the ages

Fedora 21 Alpha release, common bugs, relval, and wiki work

I've had quite the packed week, somehow. I started off on Monday with the regular QA meeting, then wrote up the initial Common Bugs page for Fedora 21 - many thanks to bitlord for adding a note on the ARM issue I didn't really understand!

As part of that, I did the usual thing where I copied the page header from an early version of the previous page, but it kind of irritated me - I felt like there should be a better way to do it. As it happens, there is: I was reminded of the existence of substitution, a use of Mediawiki templates different from the transclusion typically used on the Fedora wiki, where you enter {{somethingorother}} and the contents are 'dynamically' replaced from another page each time the page is rendered.

With substitution, the replacement is done just once, at the time the page is first created. This is handy for something like the Common Bugs. We want to use the FedoraVersion and FedoraVersionNumber templates, but we don't want the page for Fedora 21 to suddenly start referring to Fedora 22 the next time the 'current' release changes. So substitution's just the thing: when you create the page from the template the correct value will be included, and it won't change afterwards. Now there are templates designed to be substituted to create the pre-release and stable release headers for Common Bugs pages.

Once I'd been reminded of the system, I had the idea of using it to improve the process of creating the release validation results pages. You know the ones - pages like Fedora_21_Alpha_RC1_Install, which we use for recording the results of validation testing. I've been looking into using Mozilla's Moztrap to replace our Wiki test 'system', but it's not turning out to be such a straightforward swap, and in the meantime I wound up getting stuck into this.

Creating the pages the way we've done it up till now is a bit of a pain. We had pages called 'templates' which weren't actually Mediawiki templates, but just base pages that you would copy and paste, then go through by hand, somewhat painstakingly changing version numbers and updating link locations and so on. Now that we have five 'mandatory' pages and one optional one, this was getting to take a lot of time.

So instead, I decided to look into creating the pages via templates. It got a bit tricky, because we also want to use transclusion to share some content dynamically between the different results pages. We want the test instructions to be transcluded so we can improve them at any time, but we want the result tables to be substituted, of course, because it'd hardly work to share their content between results pages!

But it's not impossible! The system is quite capable, and has some neat tricks. The design I wound up with works like this:

The main template page itself is designed to be substituted to create the actual result pages - it handles substitution of the release identification information (release, milestone, compose).

Hold on, though! If you're paying attention, you're wondering how one template can substitute another - wouldn't the content just wind up in the main template at the time it's created?

To avoid this we use a trick that takes advantage of the <includeonly> tag, which marks content that should only appear when the page is substituted or transcluded, not when it is viewed directly. The trick is to break up the {{subst:}} call itself with <includeonly> tags. When the template page is saved, the markup doesn't parse as a substitution at all, so the template itself remains unaltered; but each time that template is itself substituted, the <includeonly> content is included, reassembling a valid {{subst:}}, which is then rendered and takes effect. Cunning!
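In wikitext the trick looks roughly like this - FedoraVersion is one of the real templates mentioned earlier, but treat the exact layout as an assumption rather than the actual contents of the validation template:

```wikitext
<!-- On the main template page: the <includeonly> tags interposed
     inside the call stop it parsing as a substitution when this
     page itself is saved, so the template is stored unaltered... -->
{{<includeonly>subst:</includeonly>FedoraVersion}}

<!-- ...but when this template is substituted into a new page, the
     <includeonly> content is included, reassembling a plain
     {{subst:FedoraVersion}}, which then expands in the new page. -->
```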

The upshot of all this jiggery-pokery is that if you create a Wiki page and pass this magic incantation as its only content:

{{subst:Validation_results|testtype=Base|release=21|milestone=Alpha|compose=RC1}}

your page will become a fully-formed Fedora 21 Alpha RC1 Base release validation results page. If you check it in the editor after creating it, you'll see the contents are very different from what you typed.

I was pretty happy with that trick. In my announcement email I rather recklessly offered a bit of a hostage to fortune:

"I have vague plans to write a little script using one or other of the zillion python mediawiki interfaces out there that would make creating the pages and handling the categories for a new TC/RC build a one-line operation."

So on Tuesday I had to go ahead and make good on that promise. And rather to my surprise, I seem at some point to have osmosed the ability to write some sort of passable imitation of object-oriented Python. I'm really not at all sure how that happened. Anyhow, I hereby present Relval.

Relval spits out release validation pages. Basically it's a command line tool for generating those magic {{subst:Validation_results}} lines, in appropriately named Wiki pages. It has to do a few other little things along the way - discover the available test types, validate user input, and solve captchas (not as impressive as it sounds; the wiki uses an extremely weak captcha system) - and it's slightly over-engineered for its task at present, but that's because I plan to make it do more stuff, like generating Rawhide monthly result pages, handling the categories, and maybe generating announcement emails. It uses the mwclient Mediawiki API library, which seems to work fine - it's a bit under-documented, but pretty easy to follow from the source.
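The core operation is simple enough to sketch in a few lines of Python. This is not relval's actual code - the test-type list, page naming scheme, wiki hostname and login details here are all illustrative assumptions - but it shows the shape of the mwclient-based approach:

```python
try:
    import mwclient  # third-party: pip install mwclient
except ImportError:  # the pure helpers below still work without it
    mwclient = None

# The real tool discovers test types dynamically; this list is a guess.
TEST_TYPES = ["Installation", "Base", "Server", "Desktop"]


def result_page(release, milestone, compose, testtype):
    """Return the (title, content) pair for one validation results page."""
    title = "Fedora_{0}_{1}_{2}_{3}".format(release, milestone, compose, testtype)
    # Doubled braces produce literal {{ }} in the saved wikitext.
    content = ("{{{{subst:Validation_results|testtype={3}|release={0}|"
               "milestone={1}|compose={2}}}}}".format(
                   release, milestone, compose, testtype))
    return title, content


def create_pages(site, release, milestone, compose):
    """Save one page per test type; the wiki expands the subst: on save."""
    for testtype in TEST_TYPES:
        title, content = result_page(release, milestone, compose, testtype)
        site.pages[title].save(content, summary="Validation page creation")


if __name__ == "__main__":
    # Hostname and credentials are placeholders; relval's --test switch
    # points at the staging wiki rather than production.
    site = mwclient.Site("stg.fedoraproject.org", path="/w/")
    site.login("someuser", "somepassword")
    create_pages(site, "21", "Beta", "TC1")
```

The nice part is that the script never needs to know what a results page looks like: it just saves the one-line incantation, and the wiki's substitution machinery does the rest.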

It's already pretty capable at generating TC/RC validation pages. If you want to try it out, pass the --test parameter which makes it work against the staging wiki rather than the production one. So when we hit the Beta test composes next week (next week? oh God), we can fire off:

./relval compose --release 21 --milestone Beta --compose TC1 --username adamwill

and all the release validation pages will appear. I think I'll be on the winning side of that ol' automation graph somewhere in the middle of F22, if we haven't moved on to a different TCMS by then...

I should gratefully acknowledge Mike Ruckman's advice and suggestions on relval, particularly on the captcha issue - all errors are mine, of course.

I sent relval out into the world on Tuesday, and was all set to do some cleaning up and extending of it on Wednesday. But after the release blocker meeting in the morning, I was making the freeze exception and blocker process pages share more content using templates (TEMPLATE ALL THE THINGS) - and actually doing a modicum of Fedora testing (it's a crazy idea, I know, but it might just work) - when I fell into a fateful conversation with David Shea and Samantha Bueno in #anaconda:

<davidshea> adamw: re: blocker review, if you want us to remember what the
commit/freeze/no-freeze policy is at any given time, I dunno maybe write it down?
<adamw> davidshea: it is written down...
<davidshea> where we can find it?
<davidshea> like, on a schedule?
<adamw> https://fedorapeople.org/groups/schedule/f-21/f-21-devel-tasks.html
<adamw> the 'change deadline' dates
<adamw> are the milestone freezes
<adamw> https://fedoraproject.org/wiki/Updates_Policy#Branched_release has some other bits
<davidshea> ok so I just need this big chart that's kind of giving me a
headache, plus the special fedora word translation ring
<sbueno> see, when i read "change deadline" i read that as "accepted changes"
 e.g. things for which a feature page has been written

and then...I don't know, I might have gone a little crazy. It all started sensibly enough, with a modest plan to maybe clarify a few pages, and a proposal to rename the 'change deadlines' and the 'branch freeze'.

Then I started poking a few 'what links here' pages, and one thing led to another, and it's now 36 hours later and I seem to have rewritten half the Wiki.

So, allow me to present:

  1. The Milestone freezes: Alpha freeze, Beta freeze, Final freeze (yes, people with long memories, everything old is new again)
  2. The revived and updated Fedora Release Life Cycle
  3. The completely new Repositories page - yes, now you can actually point somewhere to explain Fedora repositories.
  4. The extensively overhauled package maintainer update HOWTO
  5. The melded-from-three-sources-and-heavily-rewritten Package maintenance guide
  6. The overhauled Release Engineering Overview
  7. The carefully tweaked Updates Policy (no changes to the actual Policy, just updates to the links and some adjustment of concepts like the milestone freezes, Change freezes, and the 'Bodhi enabling point')
  8. The somewhat revised Branched page (I was running out of steam by that point...it should probably be redone with templates to share a lot of content with Rawhide, but I don't have the heart.)

There was a whole bunch of detail work that went along with the above - I kind of subsumed the concept of the "branch freeze" policy into the Updates Policy (and the new concept of the "Bodhi enabling point"), and clarified the relationship between it, the milestone freezes, and the Change freezes. I did small pokes to various pages as I went around, and updated just a metric assload of outdated links and redirects and things.

There's some more detail in my announcement mail, and you can look at my contributions page for the full list. I think I need to go and watch the Ryder Cup with a nice glass of Scotch around about now...