Looking for new maintainer for Fedora / EPEL ownCloud packages

So I've been maintaining ownCloud for the last little while. Unfortunately, I sat down today to try once again to update the package to the latest upstream release (8.1.1), and somewhere in the second hour of wading through insanely stupid PHP autoloader code, I just snapped. I can't take this crap any more.

Personally I only really needed OC for calendar and contact sync anyway, so I've set up Radicale instead: it's written in Python, and it doesn't have a ridiculous forest of bundled crap.

Given that there are dozens of other things I could be spending my time on that I'd find more rewarding, I'm just not willing to do any further major updates of the Fedora / EPEL ownCloud package, I'm sorry. I'm willing to keep the current major versions (8.x in everything but EPEL 6, 7.x in EPEL 6) updated until they go EOL, at which point if no-one else is interested, I will orphan the package.

If anyone would like to take on the work of doing the 8.1 upgrade and maintaining the package in future, please do let me know and I'll happily transfer it over. To do a decent job, though, you are going to need to know or be willing to learn quite a lot of intimate and incredibly annoying details about things like PHP class loading and how Composer works. If you don't, for instance, know what it means for unbundling purposes when a PHP library specifies 'classmap' as the autoload mechanism in its composer.json file, and you're not willing to spend your time learning, you probably don't want to own this package. :)
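
To give a flavour of the kind of thing I mean, here's a quick sketch (my own illustration, nothing from ownCloud's sources) that reports which autoload mechanisms a library's composer.json declares. With 'classmap', Composer scans the listed paths at install time and generates a hardcoded class-to-file map, so you can't just symlink in a system-wide copy of the library when unbundling - that map has to be regenerated to match:

```python
#!/usr/bin/env python
# Minimal sketch: report the autoload mechanisms a PHP library's
# composer.json declares, and flag 'classmap', which makes unbundling
# painful (the generated class-to-file map hardcodes the bundled paths).
import json
import sys

def autoload_mechanisms(path):
    """Return the autoload mechanisms declared in a composer.json file."""
    with open(path) as fh:
        data = json.load(fh)
    return sorted(data.get("autoload", {}).keys())

if __name__ == "__main__":
    mechs = autoload_mechanisms(sys.argv[1])
    print("autoload mechanisms: %s" % (", ".join(mechs) or "(none)"))
    if "classmap" in mechs:
        print("warning: classmap autoload - unbundling needs extra care")
```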

I'm very sorry to folks who are using it, but I really can't deal with the crap any more. If all you need is calendar/contact sync, there are easier ways. Check out Radicale or something like it.

Upstream does of course provide ownCloud packages in an OBS repo. They do not follow Fedora web app packaging policies or unbundling rules, and probably don't work very well with SELinux. Switching from the Fedora/EPEL packages to the OBS ones is likely to require moving various things around and config file editing and stuff. I'm not going to document that, sorry. If anyone else does, though, that'd be great.

Trained professional at work

Ever wondered what HIGHLY PROFESSIONAL SYSTEM we use in Fedora QA to test different SATA controllers? Well, wonder no more!

Highly professional SATA switch (a bunch of cables coming through a hacked-up hole in a cheap case)

Why yes, that is a whole pile of cheap SATA cables poking through a hole I hacked out of my test box's extremely cheap case with pliers, thanks for asking. Four extension cables - two hooked up to the motherboard's controller, two to a hardware RAID controller - and two hooked up to the disks. This is highly advanced stuff here, folks. Best not stand too close.

Programming note: no Cloud Test Day tomorrow

Hi folks! Eager Test Day-ers may have seen the trac ticket and/or Test Day calendar entry for a Cloud Test Day tomorrow (2015-08-27), but please note that it's no longer going to happen, as not all the bits are in place yet. It will be run some time in September instead.

The Cloud team will still be getting together to discuss some plans and work on some documentation, but it won't be in #fedora-test-day and it won't be a Test Day. Come by #fedora-cloud if you want to check it out, though!

Flock 2015 report, and Fedora nightly compose testing

Hi, folks! I've been waiting to write my post-Flock report until I had some fun stuff to show off, because that's more exciting than just a bunch of 'I went to this talk and then talked to this person', right?

Fedora nightly compose testing

So let me get to the shiny first! Without further ado:

Cool, right? That's what I've been working on this whole week since Flock. All the bits are now basically in place such that, each night, openQA will run on the Branched and Rawhide nightly composes when they're done, and when openQA is done, the compose reports will be mailed out.

Flock report

The details behind that get quite long, so before I hit that, here's a quick round-up of other stuff I did at Flock! I'm not going to cover the talks and sessions many others have already blogged about (the keynotes, etc.) as it seems redundant, but I'll mention some stuff that hasn't really been covered yet.

Josef ran a workshop on getting started with openQA. It was a bit tricky, though, due to poor networking on site; the people trying to follow along and deploy their own Docker-based openQA instances couldn't quite get all the way. So we turned the last bit of the talk into a live demo using my openQA instance instead, and created a new test case LIVE ON STAGE. We didn't quite get it all the way done before getting kicked out by a wedding party, but I finished it up shortly after the session. Josef did a great job of explaining the basics of setting up openQA and creating tests, and I hope we'll have a few more people following the openQA stuff now.

Mike McLean did a great talk on Koji 2.0, which has been kinda under the radar (at least for me) compared to Bodhi 2, but sounds like it'll come with a lot of really significant improvements and a better design. As someone who's spent a lot of time staring at kojihub.py lately, I can only say it'd be welcome...

Denise Dumas gave the now-traditional What Red Hat Wants talk, which I'm really glad is happening now. I'm totally behind the idea of being up-front about the relationship between Red Hat and Fedora, instead of some silly arrangement where Red Hat pretends Fedora just 'happens' and is a totally community-based distro; it's much better for RH to say a couple of years in advance 'hey, this is where we'd like to see things going' than for a bunch of Features/Changes to 'mysteriously' appear four months out from a release, with lots of people suddenly caring a lot about them (but it all being a BIG COINCIDENCE, of course!)

Paul Frields did a nice talk on working remotely, which had a lot of great ideas that I don't do at all (hi Paul, it's 4:30pm and I'm writing this in my dressing gown...) - but it was great to compare notes with a bunch of other folks and think about other ways of doing things.

I did a lightning talk on Fedlet, showing it off running and talking a bit about what Fedlet involves and how well (or not) it runs. Folks seemed interested, and a few people came by to play with my fedlet afterwards.

Stephen Gallagher ran a rolekit hackfest. I was hoping to use it to come up with an openQA role, but failed for a couple of reasons: Stephen doesn't recommend creating new roles right now as the format is likely to change a lot quite soon, and since I last worked on the package openQA has added a few more dependencies which need packaging. But I did manage to move forward with work on the package a bit, which was useful. In the session Stephen explained the rolekit design and current state to people, and talked about various work that needs doing on it; hopefully he'll get some more help with it soon!

Of course, as always, there was lots of hallway track and social stuff. We had a couple of excellent poker games - good to see the FUDCon/Flock poker tradition continues strong - and played some Exploding Kittens, which is a lot of fun. My favourite bit is the NOPE cards. As many others have said, the Strong Museum was awesome - got to play a bunch of pinball, and see Will Wright's notebooks(!) and John Romero's Apple ][(!!!!).

Fedora compose testing: development details and The Future

So, back to the 'compose CI' stuff I spent a lot of time talking about/working on!

A lot of what I did at Flock centred around the big topic you can call 'CI for Fedora'. We still have lots of plans afoot for big, serious test and task automation based on Taskotron, which is now getting really close to the point where you'll see a lot more cool stuff using it. But in the meantime, the 'skunkworks' openQA project we spun up during the Fedora 22 cycle has grown quite a bit, and the fedfind project I mostly built to back openQA has grown quite a lot of interesting capabilities.

So while we were talking about properly engineered plans for the future, I realized I could probably hack together some stupidly-engineered stuff that would work right now! In Kevin Fenzi's Rawhide session I threw out a few ideas and then figured that, hell, I should just do them.

So I started out by teaching fedfind some new tricks. It can now 'diff' two releases: that is, it can tell you which images are in one but not the other. It can also check a release for 'expected' images - basically, it has some knowledge of which images we'd most want to be present all the time, and it can tell you if any are missing. (I don't know which of the Cloud images are the most important, so right now it has no 'expected' Cloud images; if some Cloud-y people want to tell me which matter most, I can add them.)
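
To give a feel for the idea, here's an illustrative sketch with made-up data - this is not fedfind's actual API - treating each compose as a set of (payload, type, arch) tuples:

```python
# Illustrative sketch only - not fedfind's real API or data format.
EXPECTED = {
    ("Workstation", "live", "x86_64"),
    ("Server", "boot", "x86_64"),
    ("KDE", "live", "i386"),
}

def diff_composes(old_images, new_images):
    """Return (images only in old, images only in new)."""
    old, new = set(old_images), set(new_images)
    return old - new, new - old

def missing_expected(images, expected=EXPECTED):
    """Return the expected images absent from this compose."""
    return expected - set(images)

yesterday = {("Workstation", "live", "x86_64"), ("Server", "boot", "x86_64")}
today = {("Server", "boot", "x86_64")}
dropped, added = diff_composes(yesterday, today)
print("dropped: %s, added: %s" % (dropped, added))
print("missing expected: %s" % missing_expected(today))
```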

Then I wrote a little script called check-compose which produces a handy report from that information. It also looks for openQA tests for the compose it's checking, and includes a list of failures if it finds any. It can email the report and also write the results in JSON format (which seemed like a good idea in case we want to look back at them in some programmatic way later on). The 'compose check reports' that have been showing up this week (and that I linked above) are the output of the script.
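
In outline it's not much more than this - a stripped-down sketch, not the real script; the addresses, subject line and file names are placeholders, and the real thing gets its data from fedfind and openQA rather than hardcoded sets:

```python
# Stripped-down sketch of a check-compose-style report.
import json
import smtplib
from email.mime.text import MIMEText

def build_report(missing, failures):
    lines = ["Compose check report", "", "Missing expected images:"]
    lines += ["  %s" % (img,) for img in sorted(missing)] or ["  (none)"]
    lines += ["", "openQA test failures:"]
    lines += ["  %s" % (job,) for job in sorted(failures)] or ["  (none)"]
    return "\n".join(lines)

def mail_report(text, sender, recipient):
    msg = MIMEText(text)
    msg["Subject"] = "Compose check report"
    msg["From"], msg["To"] = sender, recipient
    server = smtplib.SMTP("localhost")
    server.sendmail(sender, [recipient], msg.as_string())
    server.quit()

missing = {"Workstation live x86_64"}
failures = {"default_install@64bit"}
# JSON copy, so we can crunch old reports programmatically later
with open("report.json", "w") as fh:
    json.dump({"missing": sorted(missing), "failures": sorted(failures)}, fh)
mail_report(build_report(missing, failures),
            "compose-check@example.com", "test@lists.example.com")
```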

I had all of that basically done by Tuesday, so what have I been wasting the rest of my week on? Read on!

What was missing was the 'C' part of 'CI'. There was nothing that would actually run the compose report at appropriate times, and we weren't actually running openQA tests nightly. Up to now I've been faking it by manually kicking off openQA jobs and firing off the compose report when they're done, but this kind of Mechanical Turk CI doesn't really work in the long run! So that's what I've spent the last few days fixing.

We were not actually scheduling nightly openQA runs at all. The openQA trigger script has an 'all' mode which is intended to do that, but we weren't running it. I suggested we turn it back on, but I also wanted to fix one big problem it had: it didn't know whether the composes were actually done. It just took today's date and tried to run against that day's nightlies; if they weren't actually done whenever the script ran, you got no tests - roughly like the sketch below.
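
Something like this, in spirit (an illustrative sketch, not the actual trigger code; the compose ID format and helper are made up):

```python
# Illustrative sketch of the naive date-based scheduling.
import datetime

def schedule_openqa_jobs(compose_id):
    # stand-in for the real job-scheduling call
    print("scheduling openQA jobs for %s" % compose_id)

def nightly_compose_id(release):
    # e.g. 'Rawhide-20150829'; the real ID format may differ
    return "%s-%s" % (release, datetime.date.today().strftime("%Y%m%d"))

# If the compose isn't actually finished (or failed entirely) when cron
# fires this, openQA has nothing to test and the night is simply lost.
for release in ("Rawhide", "Branched"):
    schedule_openqa_jobs(nightly_compose_id(release))
```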

This definitely hooks in with one of the big topics at Flock: Pungi 4, the pending major revision of Pungi, the tool which runs Fedora composes. Well, that's not quite right: there are actually a couple of releng scripts which produce the composes (one for nightlies, one for TCs/RCs). They run pungi and do lots of other stuff too, because currently pungi only does some of the work involved in a compose (a lot of the images are just built by the trigger scripts firing off Koji tasks and other...stuff). The current compose process is something of a mess - it's grown chaotically as we added ARM images and Cloud images and Docker images and Atomic images and flavors and all the rest of it. With the Pungi 4 revision and associated changes to the releng process, it should be trivial to follow the compose process.

Right now, though, it isn't. Nightly composes and TC/RC composes are very different. TCs/RCs emit essentially no information on their progress at all. Nightlies emit some fedmsg signals, but crucially, there's no signal when the Koji image builds complete: you get a signal when they start, but not when they're done.

So it was time to teach fedfind some new tricks! I decided not to go the fedmsg route yet, since it's not sufficient at present. Instead I taught it to tell whether composes are complete in lower-tech ways. For the Pungi part of the process, it looks for a file the script creates when it's done. For Koji tasks, it finds all the Koji tasks that look like they're part of this nightly, and only considers the nightly 'done' when there are at least some tasks (so it doesn't report 'done' before the process has started at all) and none of the tasks is 'open' (meaning running or not yet started).
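
Roughly, the two checks look like this - an illustrative sketch in the Python 2 of the era, not fedfind's actual code; the marker file name, Koji method names and filters are guesses, and the real code also scopes the task query to the particular nightly:

```python
# Illustrative sketch of the two low-tech 'is the compose done?' checks.
import urllib2
import xmlrpclib

KOJI_HUB = "https://koji.fedoraproject.org/kojihub"
# Koji task states meaning 'not finished yet': FREE, OPEN, ASSIGNED
NOT_DONE_STATES = (0, 1, 4)

def pungi_done(compose_url):
    """The compose script drops a marker file when pungi finishes."""
    try:
        urllib2.urlopen(compose_url + "/finish")  # marker name is a guess
        return True
    except urllib2.URLError:
        return False

def koji_done(methods=("livecd", "appliance")):
    """Done only when some matching tasks exist and none is still open."""
    koji = xmlrpclib.ServerProxy(KOJI_HUB)
    tasks = []
    for method in methods:
        # the real query is also narrowed to this nightly's tasks
        tasks.extend(koji.listTasks({"method": method}, {}))
    if not tasks:
        return False  # image builds haven't even been scheduled yet
    return all(task["state"] not in NOT_DONE_STATES for task in tasks)
```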

So now we could make the openQA trigger script or the compose-check script wait for a compose to actually exist before running against it! Great. Only now I had a different problem: the openQA trigger script was set up to run for both nightlies. This is fine if it's not waiting - it just goes ahead and fires one, then the other. But how to make it work with waiting?

This one had to go through a couple of revisions. My first thought was "I have a problem. I know! I'll use threads", and we all know how that joke goes. Sure enough, all three revisions of this approach (using threading, multiprocessing and multiprocessing.dummy) turned out to have problems. I eventually decided it wasn't worth fighting with any longer, and came up with some different approaches. One is a low-tech round-robin waiting approach, where the trigger script alternates between checking for Branched and Rawhide. The other is even simpler: by adding a few capabilities to the mode where the trigger runs on a single compose, we can simply schedule two separate runs of that mode each night, one for Rawhide and one for Branched. That keeps the code simple, and means either one can get all the way through the 'find compose, schedule jobs, run jobs, run compose report' process without waiting for the other.
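
The second approach boils down to something like this (a minimal sketch with placeholder function names, not the trigger script's real code):

```python
# Minimal sketch of the single-compose mode: each invocation waits for one
# release's compose, then drives the whole pipeline for it. Run it twice a
# night (once for Rawhide, once for Branched) from cron or a systemd timer,
# and neither run ever blocks the other.
import sys
import time

def wait_for_compose(release, is_done, timeout=6 * 3600, interval=300):
    """Poll until the compose is complete, or give up after 'timeout'."""
    waited = 0
    while waited < timeout:
        if is_done(release):
            return True
        time.sleep(interval)
        waited += interval
    return False

def run_nightly(release, is_done, schedule_jobs, run_report):
    if not wait_for_compose(release, is_done):
        sys.exit("%s compose never completed" % release)
    jobs = schedule_jobs(release)   # fire the openQA jobs...
    run_report(release, jobs)       # ...then send the compose report
```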

And that, finally, is about where we're at right now! I'm hoping one or the other openQA change will be approved on Monday, and then we can have this whole process running unattended each night - which will, more or less, finally implement some more of the near-legendary 'is Rawhide broken?' proposal. Until then I'll keep running the compose reports by hand.

Along the way I did some other messing around in fedfind, mostly to do with optimizing how it does Koji queries (and fixing some bugs). For all of a day or so, it used multiprocessing to run queries in parallel; I decided the parallelism just wasn't worth it for the moderate performance increase, though, so I switched to the batched query mode xmlrpclib provides (multicall), which speeds things up a little less but keeps the code simpler. I also implemented a query cache, and spent an entire goddamn afternoon coming up with a reasonable way to make it handle fuzzy matches (when e.g. we run a query for 'all open or successful tasks', then a query for 'all successful live CD tasks', we can derive the results for the latter from the former and not waste time talking to the server again). But I got there in the end, I think.
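
Both optimizations are easier to show than describe. Here's a sketch - illustrative only, as the filter keys and cache format aren't fedfind's real ones:

```python
# Sketch of batched Koji queries plus a subset-deriving query cache.
import xmlrpclib

koji = xmlrpclib.ServerProxy("https://koji.fedoraproject.org/kojihub")

# Batched mode: several listTasks calls in a single HTTP round trip.
multi = xmlrpclib.MultiCall(koji)
multi.listTasks({"method": "livecd"}, {})
multi.listTasks({"method": "appliance"}, {})
livecd_tasks, appliance_tasks = tuple(multi())

# Fuzzy-matching cache: if we already fetched 'all open or successful
# tasks', answer 'all successful livecd tasks' by filtering that cached
# result client-side instead of asking the server again.
CACHE = {}

def query_tasks(states, method=None):
    for (c_states, c_method), cached in CACHE.items():
        covers = set(states) <= set(c_states)
        if covers and c_method in (None, method):
            return [t for t in cached
                    if t["state"] in states
                    and (method is None or t["method"] == method)]
    opts = {"state": list(states)}
    if method is not None:
        opts["method"] = method
    tasks = koji.listTasks(opts, {})
    CACHE[(tuple(states), method)] = tasks
    return tasks
```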

It was quite a lot of work, in the end, but I'm pretty happy with the result. I'm really, really looking forward to the releng improvements, though. fedfind is more or less just the things releng is aiming to do, only implemented (unavoidably) stupidly and from the wrong end. As I understand it, releng's medium-term goals are:

  • all composes to contain sufficient metadata on what's actually in them
  • the compose process for nightlies to be the same as the one for TCs/RCs
  • compose process to notify properly at all stages via fedmsg
  • ComposeDB to track what composes actually exist and where they are

Right now we don't really have any of those things, so fedfind exists to reconstruct all that information, painfully, from the other end. It will definitely be a relief when we can get all that information out of sane systems, and I don't have to maintain a crazy ball of magic knowledge, Koji queries and rsync scrapes any longer. For now, though, the whole crazy ball of wax does seem to actually work. I'm really glad that folks like Kevin, Dennis, Peter, Ralph, Adam and others are all thinking along the same general lines: I'm hopeful that with Pungi 4, ComposeDB (when it happens), and further work on Taskotron and openQA and even my stupid little scripts, we'll have continuously (see what I did there?!) better stories to tell as we move through the next few releases.

Flock 2015 pre-post-wrap-up

The oddest thing about events like Flock, for me, is how you can go to them for a week and manage not to see someone else who was there at all - apparently I was more or less in the same hotel as mclasen and kk4ewt for a week without once saying hi. Strange!

I'll try to post a proper wrap-up soon. This was a quieter Flock for me than usual - no talk to give - but I managed to have some good discussions with various folks and sit down and get some actual work done, which may be a first!

Fedora 23 l10n Test Day: 2015-08-18 and NetworkManager Test Day: 2015-08-20

Hi folks! So the Fedora 23 Test Day schedule is swinging into gear next week, with the l10n Test Day. l10n is localization: for the test day, the goal is to check the 'availability and accuracy' of translations for several of Fedora's major desktop applications. If you're proficient in any language besides English, it would be really helpful if you could drop by and help us check that folks can use Fedora in your language.

There will be a Test Day live image available, so all you'll need to do is download that, boot it up, run the specified applications and provide some feedback on the translations. There are full instructions on the wiki page and it doesn't require any special knowledge or too much time, so it's a great way to get started helping out!

After the l10n event will come the NetworkManager Test Day on 2015-08-20. This is one of the events that has run regularly for many cycles, and this time around we'll be doing a general run-through of all NetworkManager functionality to try and catch any lurking bugs.

Related to the l10n event will be the i18n Test Day that will come on 2015-09-01, which looks at underlying mechanisms like input methods, font rendering, and printing, so look out for that one too!

As always for Test Days, the live action is in #fedora-test-day on Freenode IRC. If you don’t know how to use IRC, you can read these instructions, or just use WebIRC.

Fedora 23 Alpha and more

Hi folks! So what's new?

Fedora 23 Alpha

Well, the big news is that Fedora 23 Alpha is released today. This was definitely a bit of a 'don't look in the sausage factory!' release during the validation / approval process, but in the end it's pretty much a standard Alpha. It mostly works fine. As always, we strongly recommend not installing it on any kind of production system (though my desktop's been running 23 ever since 22 came out: do as I say, not as I do!)

We weren't able to do as many TCs and RCs as I'd usually like due to issues with the compose process and some unfortunately timed ABI/API breaks, but in the end we managed to get composes done when we absolutely needed them, and completed the validation testing without too much craziness.

Revamping the compose / test process

One of the things I'm very much looking forward to working on with other teams - particularly release engineering - at the upcoming Flock is how we revamp the whole compose/test/milestone release cycle, which is just sooo 2004, for the brave new world of...(INSERT BUZZWORD HERE). What I'd like us to have is a model where we run a full release-style compose every day (or two composes a day after branching: one for Rawhide, one for Branched) and then run it through whatever automated tests we have to throw at it; we'd publish the test results for each day's compose in whatever ways are most useful to people, and composes which passed whatever subset of the tests is considered 'gating' would be preserved for download.

If we did that I'm not sure we'd actually need Alpha or Beta releases any more; they're already somewhat artificial and about half their usefulness is for PR. Many people already deploy Branched from nightly live images or boot.iso builds instead of the previous milestone build.

Final releases would ideally be built with as similar a process as possible, and tested the same way (at present there are a few switches that need to be flipped to mark a release as 'final'; that would likely still have to be the case).

We can look at various models for adding on manual testing steps along the way and perhaps stepping up the 'gating' requirements, as we currently increase the requirements of the release criteria through the Alpha, Beta and Final milestones.

The good thing is there's lots of work to get our teeth into here and it can all be done while maintaining the TC/RC and milestone build process for as long as it seems to make sense. For instance, I've been meaning for a while to work on fedmsg integration for openQA.

It would be really, really helpful in this model to reduce the major bottlenecks we have in the compose repos / compose images pipeline, so I'm also hoping to talk to releng folks about what the latest thoughts are in that area.

openQA progress

I've been working quite a lot on openQA lately. We got all the tests running properly on Fedora 23; you can see results from the two openQA deployments in the F23 validation result pages ('coconut' is the more-or-less official one running behind the RH firewall; 'colada' is my deployment, which is publicly visible, tends to run a slightly newer openQA, and gets my changes before they get merged upstream). Aside from the rather subtle X dpi issue I blogged about before, I ran into some similarly obscure issues with console fonts - there are various interesting details/bugs involved in the question of exactly which console font is used in an installed system, in a traditional installer image, and in the live environment - so I wound up doing more bug digging and screenshot updating for that.

After that I added some new storage tests, and along the way somewhat redesigned the way they're dispatched and run to keep things cleaner and improve code re-use.

Next I'm planning to do some more storage tests, and also if possible get tests running on i686 and UEFI; that'd help with test coverage and help avoid problems like the one we had for F23 where the i686 installer images didn't boot for a month and no-one noticed. Of course, we might decide to stop blocking on i686 releases instead...

Upcoming stuff

Of course, next on the timetable is the Fedora 23 Beta release; we'll start cutting TCs after Flock, I guess. We also have Fedora 23 Test Days starting up soon. The l10n Test Day is pencilled in for 2015-08-18, so mark your calendar - the Test Day page and more information should be coming soon!

Fedlet 2015-08-10 available

So here's a lil' pre-Flock present for all you crazy Baytrail-ers - a new Fedlet image. Get your 2015-08-10 while it's hot, over at the fedlet page.

This is pretty much just a rebase of the kernel to 4.2rc6 in an F23 Alpha userland. I built a GTK+ package with a patch from Jan-Michael to use a popover for URL history, which should improve OSK interaction; but Firefox seems to have some issue in this image where it never pops up an OSK at all, and the GTK+ I built is actually older than the current F23 build, so it's not included in the image. You can 'downgrade' to it after installing, though, and it seems to work in Epiphany.

Otherwise it seems to be working pretty well on the V8P - sound and wifi are both good, and even survive a suspend/resume. Installing worked for me (my old install got a bit messed up, so I reinstalled), though I hit some anaconda bugs when deleting the old install, so I wound up doing that manually with fdisk. There is one notable bug: the top bar doesn't render properly, so you can't see the 'Done' buttons. They're there, though, so just click blindly in the right place (bottom left corner of the blue top bar). You can boot up a regular Fedora image in a VM or something to see what you're missing.

Fedora repository for Doom stuff: Odamex, Zandronum, Doomseeker, CnDoom

I had a bit of free time over the last few days, and looked at the current state of the art for Doom on Linux. The awesome Rahul Sundaram has been looking after several Doom-related packages for a while - including the Chocolate Doom package - but there are some things that seem to be commonly used these days that we didn't have packaged. So I packaged them up, and put them in a new repository!

First up I packaged Competition Doom, which is a sort of special-purpose variant of Chocolate intended for competitive speedrunning; it's the port used for the new tables at COMPET-N. It should have no trouble getting into the official Fedora repos, and I've filed a review request, but until that gets approved, it's living in this new repository.

Then I worked on Zandronum. Zandronum is a sort of continuation of Skulltag, which was a multiplayer-focused fork of ZDoom with the GZDoom OpenGL renderer subsequently added in (I think). It seems to be the most popular port for multiplayer Dooming at present.

Its source is also...ahem...something of a mess. The licensing would give a license lawyer migraines for weeks (Zandronum's is even more of a disaster than ZDoom's, as it pulls in even more stuff). The codebase bundles an ancient copy of the LZMA SDK and links against all sorts of private symbols in it, so there's no possibility of using any kind of system-wide copy. It does the same thing with the DUMB music library, and compiles some subset of TiMidity++ straight into the executable. It uses the Mercurial metadata for the source to version stamp the network code, so you have to include all the metadata in your source tarball or else your build won't connect to any servers. There's more ickiness. If you don't understand any of the above, the executive summary is it's a mess.

But hey, I got an okay package build done, so it's living in this repository. It likely won't be able to go anywhere else, really, due to the crazy licensing (until that gets cleaned up upstream) and library bundling. It's fine for personal use, but if you have any intent to redistribute it, do check the spec file comments and references and the license files in the package.

Along with Zandronum I packaged Doomseeker, which is a server browser with a modular design that, out of the box, can find Zandronum, Chocolate, Vavoom and Odamex servers. I hadn't really looked into Odamex before, but now that I have, it seems quite nice, so I guess I'll package that next.

EDIT: I've added Odamex packages now: odamex for the client (and the launcher / server browser), odamex-server for the server.

So basically, you can enable the repo (stick the repo file in /etc/yum.repos.d) and install odamex, cndoom, zandronum and/or doomseeker, should it take your fancy. There's a sort-of convention between source ports that they look for WADs (including IWADs) in the path exported as $DOOMWADDIR, so you can set that, or point each port to your wads separately (they have their own config files and will print info on how to do it if you try and run them).

As always with Doom ports, you need an IWAD file to play anything. You can use the official IWADs from Doom (doom.wad), Doom 2 (doom2.wad) or Final Doom (tnt.wad and plutonia.wad), or the F/OSS-licensed IWAD Freedoom, which is packaged as freedoom; the package places its wad file in /usr/share/doom, so if you use Freedoom, point the launcher and/or port at that directory. The official IWADs are still copyrighted: if you want to use them, you must own a copy (you might have an old one lying around, or you can buy them on Steam, I believe; there are instructions around the internets).

To play on an online server you need a matching IWAD, I believe; Doomseeker and Odamex will automatically download any PWADs (mods, basically) that the server uses when you connect.

Happy Dooming! I'll try and keep the packages up to date, but no promises. They're not signed yet, but I'll try and change that and publish the key sometime.

Downtime 3

Hey folks - just in case anyone was wondering, all of happyassassin.net was down for the last ~23 hours because of a power outage here. Sorry for any inconvenience this might have caused (not that I can think of much...:>)