Fedora 23 call for Test Days

Bit late blogging about this, but roshi sent out the Fedora 23 Call for Test Days at the end of June. We've been a bit light on Test Days for the last few cycles, focusing on release validation testing, but we'd love to put that right this time around! If you're interested in having a Test Day for a package, feature, component, flavor or spin (lots of choices!) of Fedora you're involved with or interested in, please read the call and file a ticket on the QA trac. We can give you as much or as little help as you like in running the event! If you don't know much about Test Days, you can read up on the concept, read the guide to running one, and look at the Fedora 22 Test Days to see how a completed event looks.

Also, just a small note: several people mentioned not being able to post comments on the site recently, and I finally got around to fixing that - sorry. I'm using a fairly old theme, and something in its comment form handling has been broken by newer WordPress versions; I've hacked around the problem for now, I hope. Let me know if you have any issues. I also replaced reCAPTCHA with a simpler system that doesn't involve foreign servers and free labor for Google's image matching system, or whatever it is.

Of DPIs, desktops, and toolkits

Fair warning up front: this is going to be one of my long, rambling, deep dives. It doesn't even have a conclusion yet, just lots of messy details about what I'm trying to deal with.

So this week I was aiming to do some work on our openQA stuff - ideally, add some extra tests for Server things. But first, since we finally had working Rawhide images again, I figured I'd check that the existing openQA tests were still working.

Unfortunately...nope. There are various problems, but I'm focusing on one particular case here. The 'default install of a live image' test reached the 'Root Password' screen and then failed.

With a bit of careful image comparison, I figured out why openQA wasn't happy. With the Workstation live image, the capital letters in the text - 'R', 'P', 'D' - were all one pixel shorter than on the reference screenshot, which had been taken from a boot.iso install. openQA works on screenshot comparisons; this difference was sufficient for it to decide the screen of the test didn't match the reference screen, and so the test failed. Now I needed to figure out why the fonts rendered ever so slightly differently between the two cases.

I happen to know a bit about font rendering and GNOME internals, so I had an initial theory: the difference was in the font rendering configuration. Somehow some element of this was set differently in the Workstation live environment vs. the traditional installer environment. On the Workstation live, obviously, GNOME is running. On 'traditional install' images - boot.iso (network install) and the Server DVD image - GNOME is not running; you just have anaconda running on metacity on X.

I happen to know that GNOME - via gnome-settings-daemon - has some of its own default settings for font rendering. It defaults to 'medium' hinting, 'greyscale' antialiasing, and a resolution of exactly 96dpi (ignoring hidpi cases for now). You can see the hinting values in this gsettings schema, and the DPI comes in via gsd-xsettings-manager.c.
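
If you want to double-check those defaults on a running GNOME system, you can query the schema with gsettings, or from Python via Gio. A quick sketch, assuming pygobject and the gnome-settings-daemon schemas are installed:

from gi.repository import Gio

# read g-s-d's font rendering settings from dconf/gsettings
xs = Gio.Settings.new('org.gnome.settings-daemon.plugins.xsettings')
print(xs.get_string('hinting'))       # 'medium' on a stock install
print(xs.get_string('antialiasing'))  # 'grayscale' on a stock install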

I also knew that there's a file /etc/X11/Xresources, which can be used to set the same configuration at the X level in some way. I checked a clean Fedora install and found that the file looks like this (minus comments):

Xft.dpi: 96
Xft.hintstyle: hintmedium
Xft.hinting: true

so...it appeared to be identical to the GNOME defaults. At this point I was a bit stuck, and was very grateful to Matthias Clasen for helping with the next bit.

He told me how to find out what settings a given anaconda instance was actually using. You can press ctrl+shift+d (or ctrl+shift+i) in any running GTK+ 3 app to launch GtkInspector, a rather handy debugging tool. By clicking the GtkSettings 'object' and then going to 'Properties', we can see the relevant settings (down at the bottom): gtk-xft-antialias, gtk-xft-dpi, gtk-xft-hinting and gtk-xft-hintstyle. In my test case - a KVM at 1024x768 resolution - I found that in the Workstation live case, gtk-xft-dpi was 98304 and gtk-xft-hintstyle was 'medium', while in the boot.iso case (and also on KDE live images), gtk-xft-dpi was 98401 and gtk-xft-hintstyle was 'full'.
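
If you'd rather script that than click around in GtkInspector, a few lines of PyGObject will dump the same values - a sketch, assuming pygobject is installed and you run it inside the session you want to inspect:

import gi
gi.require_version('Gtk', '3.0')
from gi.repository import Gtk

# print the same GtkSettings properties GtkInspector shows
settings = Gtk.Settings.get_default()
for prop in ('gtk-xft-antialias', 'gtk-xft-dpi',
             'gtk-xft-hinting', 'gtk-xft-hintstyle'):
    print(prop, '=', settings.get_property(prop))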

So then we had to figure out why. gtk-xft-dpi has a multiplication factor of 1024. So on the Workstation image the actual DPI being used was 98304 / 1024 = 96 (precisely), while on boot.iso it was 98401 / 1024 = 96.094726563. I confirmed that changing the gtk-xft-dpi value to 98304 and gtk-xft-hintstyle to medium made the affected glyphs the same in boot.iso as in Workstation. But why was boot.iso getting different values from Workstation? And where did the rather odd 98401 gtk-xft-dpi value come from?
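
Just to show that factor-of-1024 decoding concretely:

# gtk-xft-dpi stores the DPI multiplied by 1024
print(98304 / 1024.0)  # Workstation live: exactly 96.0
print(98401 / 1024.0)  # boot.iso / KDE: 96.0947265625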

Matthias again pointed out what actually happens with the /etc/X11/Xresources values, which I never knew. They get loaded via a command xrdb, which affects the contents of something called an X resource database. This is reasonably well documented on Wikipedia and in the xrdb man page. Basically, the resource database is used as a store of configuration data. The /etc/X11/Xresources values (usually...) get loaded into the RESOURCE_MANAGER property of the root window of screen 0 at X startup, and act as defaults, more or less.

There are various paths on which the /etc/X11/Xresources values actually get loaded into the RESOURCE_MANAGER. There's a file /etc/X11/xinitrc-common which does it: it calls xrdb -nocpp -merge /etc/X11/Xresources if that file exists. (It also loads in ~/.Xresources afterwards if that exists, allowing a user to override the system-wide config.) xinitrc-common is sourced by /etc/X11/xinit/xinitrc and /etc/X11/xinit/Xsession, meaning any way you start X that runs through one of those files will get the /etc/X11/Xresources values applied. The startx command runs through xinit which runs through xinitrc, so that's covered. Display managers either run through /etc/X11/xinit/Xsession or implement something rather similar themselves; either way, they all ultimately load in the Xresources settings via xrdb.
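
Incidentally, if you want to see what actually wound up in the resource database for a running session, xrdb -query will dump it; here's a little sketch that picks out the font-related entries (it just assumes the xrdb binary is installed and DISPLAY is set):

import subprocess

# print the Xft.* entries currently loaded in the X resource database
out = subprocess.check_output(['xrdb', '-query']).decode()
for line in out.splitlines():
    if line.startswith('Xft.'):
        print(line)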

Sidebar: the way gnome-settings-daemon actually implements the configuration that's specified in dconf is to load it into the X resource database and xsettings. So what happens when you boot to GNOME is that gdm loads in the values from /etc/X11/Xresources, then gnome-settings-daemon overwrites them with whatever it gets from dconf.

In the anaconda traditional installer image case, though, nothing does actually load the settings. X gets started by anaconda itself, which does it simply by calling /usr/libexec/Xorg directly; it doesn't do anything to load in the Xresources settings, either then or at any other time.

GTK+ is written to use the values from xsettings or the X resource database, if there are any. But what does it do when nothing has written the settings?

Well, as Matthias and I finally found out, it has its own internal fallbacks. As of Monday, it would default to hintstyle 'full', and for the DPI, it would calculate it from the resolution and the display size reported by X. I found that, on the test VM, with a resolution of 1024x768, xdpyinfo reported the display size as 270x203mm. The calculation uses the vertical values, so we get: 768 / (203 / 25.4) * 1024 = 98400.851231527 , which rounds up to 98401 - the gtk-xft-dpi value we found that boot.iso was using! Eureka.
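
To make the arithmetic easy to reproduce, here's that fallback calculation in a few lines of Python:

# GTK+'s old fallback: derive the DPI from the vertical resolution and the
# vertical 'physical size' X reports, then scale by 1024 for gtk-xft-dpi
height_px = 768     # vertical resolution of the test VM
height_mm = 203     # vertical display size reported by xdpyinfo
dpi = height_px / (height_mm / 25.4)  # ~96.0946
print(int(round(dpi * 1024)))         # 98401, the value boot.iso was using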

So finally we understood what was actually going on: nothing was loading in the Xresources or dconf settings (DPI exactly 96, medium hinting) so GTK+ was falling back on its internal defaults, and getting a DPI of just over 96 and 'full' hinting, and that was causing the discrepancy in rendering.

The next thing I did for extra credit was figure out what's going on with KDE, which had the same rendering as traditional installer images. You'd think the KDE path would load in the Xresources values, though. And - although at first I guessed the problem was that it was failing to - it does. SDDM, the login manager we use for KDE, certainly does load them in via xrdb. No, the problem turned out to be a bit subtler.

KDE does something broadly similar to GNOME. It has its own font rendering configuration options stored in its own settings database, and a little widget called 'krdb' which kicks in on session start and loads them into the X resource database. Qt then (I believe) respects the xrdb and xsettings values like GTK+ does.

Like GNOME, KDE overrides any defaults loaded in from Xresources (or anywhere else) with its own settings. In the KDE control center you can pick hinting and antialiasing settings and a DPI value, and they all get written out via xrdb. However, KDE has one important difference in how it handles the DPI setting. As I said above, GNOME wires it to 96 by default. KDE, instead, tries to entirely unset the Xft.dpi setting. What KDE expects to happen in that case is that apps/toolkits will calculate a 'correct' DPI for the screen and use that. As we saw earlier, this is indeed what GTK+ attempts to do, and Qt attempts much the same.

As another sidebar, while figuring all this out, Kevin Kofler - who was as helpful on the KDE side of things as Matthias was on the GNOME side - and I worked out that the krdb code is actually quite badly broken here: instead of just unsetting Xft.dpi, it throws away all existing X resource settings. But even if that bug gets fixed, the fact that it throws out any DPI setting that came from /etc/X11/Xresources is actually intentional. So in the KDE path, the values from /etc/X11/Xresources do get loaded (by SDDM), but krdb then throws them away. And so we fall into the same path we did on boot.iso, for anaconda: GTK+ has no X resource or xsetting value for the DPI, so it calculates it, and gets the same 98401 result.

For today's extra credit, I started looking at the DPI calculation aspect of this whole shebang from the other end: where was the 270x203mm 'display size', that results in GTK+ calculating a DPI of just over 96, coming from in the first place?

The (rather funny, to me at least...) answer is that it comes from X!

When you start X, it sets an initial physical screen size - and it doesn't do it the way you might, perhaps, expect. It doesn't look at the results of an EDID monitor probe, or try in some way to combine the results of multiple probes for multiple monitors. Nope, it simply specifies a physical size with the intent of producing a particular DPI value!

Here's the code.

As you can see there are basically three paths: one if monitorResolution is set, one if output->conf_monitor->mon_width and output->conf_monitor->mon_height are set, and a fallback if neither of those are true. monitorResolution is set if the Xorg binary was called with the -dpi parameter. mon_width and mon_height are set if some X config file or another contains the DisplaySize parameter.

If you specify a DPI on the command line, X will figure out whatever 'physical size' would result in that DPI for the resolution that's in use, and claim that the screen is that size. If you specify a size with the DisplaySize parameter, X just uses that. And if neither of those is true, X will pick a physical size based on a built-in default DPI, which is...96.

One important wrinkle is that X uses ints for this calculation, so the ultimate result (the display size in millimetres) gets rounded off to a whole number - which is why 1024 pixels at a target of 96dpi comes out as 270mm rather than the 'true' 270.93mm.
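
Here's that fallback in miniature, whole-number cut-off included; the same numbers we saw on the test VM drop out of it:

# mimic X's fallback: invent a 'physical size' that yields DEFAULT_DPI
DEFAULT_DPI = 96

def fake_mm(pixels, dpi=DEFAULT_DPI):
    # int() cuts the result down to a whole number, as X does
    return int(pixels * 25.4 / dpi)

print(fake_mm(1024), fake_mm(768))  # 270 203 - the size xdpyinfo reported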

You know those bits where GTK+ and Qt try to calculate a 'correct' DPI based on the monitor size? Well (I bet you can guess where this is going), so far as I can tell, the value they use is the value X made up to match some arbitrary DPI in the first place!

Let's have a look at the full series of magnificent inanity that goes on when you boot up a clean Fedora KDE and run an app:

  • X is called without -dpi and there is no DisplaySize config option, so X calculates a completely artificial 'physical size' for the screen in order to get a result of DEFAULT_DPI, which is 96. Let's say we get 1024x768 resolution - X figures out that a display of size 270x203mm has a resolution of 96dpi, and sets that as the 'physical size' of the display. (Well...almost. Note that bit about rounding above.)
  • sddm loads in the /etc/X11/Xresources DPI value - precisely 96 - to the X resource database: Xft.dpi: 96
  • krdb says 'no, we don't want some silly hardcoded default DPI! We want to be clever and calculate the 'correct' DPI for the display!' and throws away the Xft.dpi value from the X resource database. (Because it's broken it throws away all the other settings too, but for now let's pretend it just does what it actually means to do.)
  • When the app starts up, Qt (or GTK+, they do the same calculation) takes the resolution and the physical display size reported by X and calculates the DPI...which, not surprisingly, comes out to about 96. Only not exactly 96, because of the rounding thing: X rounded off its calculation, so in most cases, the DPI the toolkit arrives at won't be quite exactly 96.

Seems a mite silly, doesn't it? :) There is something of a 'heated debate' about whether it's better to try and calculate a DPI based on screen size or simply assume 96 (with some kind of integer scaling for hidpi displays) - the GNOME code which assumes 96dpi has a long email justification from ajax as a comment block - but regardless of which side of that debate you pick, it's fairly absurd that the toolkits have this code to perform the 'correct' DPI calculation based on a screen size which is almost always going to be a lie intended to produce a specific DPI result!

Someone proposed a patch for X.org back in 2011 which would've used the EDID-reported display size (so KDE would actually get the result it wants), but it seems to have died on the vine. There's some more colorful, er, 'context' in this bug report.

I haven't figured out absolutely every wrinkle of this stuff, yet. There's clearly some code in at least Qt and possibly in X which does...stuff...to at least some 'display size' values when the monitor mode is changed (via RandR) in any way. One rather funny thing is that if you go into the KDE Control Center and change the display resolution, the physical display size reported by xdpyinfo changes - but it doesn't change to match the correct physical size of the screen, it looks like something else does the same 'report a physical screen size that would be 96dpi at this resolution' calculation and changes the physical size to that. It doesn't quite seem to do the calculation identically to how X does it, though, or something - it comes up with slightly different answers.

So on my 1920x1080 laptop, X initially sets the physical size to 507x285mm (which gives a DPI of 96.252631579). If I use KDE control center to change the resolution to 1400x1050, the physical size reported by xdpyinfo changes to 369x277mm (which gives a DPI of 96.281588448). If I then change it back to 1920x1080, the physical size changes to 506x284mm - nearly the same value X initially set it to, but not quite. That gives a DPI of 96.591549296. Why is that? I'm damned if I know. None of those sizes, of course, resembles the actual size of the laptop's display, which is 294x165mm (as reported by xrandr) - it's a 13" laptop.
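
Those DPI figures are just the same vertical-size arithmetic as before, if you want to check my working:

# the DPI implied by each physical size xdpyinfo reported for the laptop
for px, mm in ((1080, 285), (1050, 277), (1080, 284)):
    print(px / (mm / 25.4))  # 96.2526..., 96.2815..., 96.5915...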

None of this happens, though, if I change the resolution via xrandr instead of the KDE control panel: the reported physical size stays at 507x285mm throughout. (So if you change resolutions in KDE with xrandr then launch a new app, it'll be using some different DPI value from your other apps, most likely).

So what the hell do I do to make anaconda look consistent in all cases, for openQA purposes? Well, mclasen made a change to GTK+ which may be controversial, but would certainly solve my problem: he changed its 'fallback' behaviour. If no hinting style is set in the X resource database or xsettings, now it'll default to 'medium' instead of 'full' - that's fairly uncontroversial. But if no specific DPI is configured in xrdb or xsettings, instead of doing the calculation, it now simply assumes 96dpi (pace hidpi scaling). In theory, this could be a significant change - if things all worked as KDE thought they did, there'd now be a significant discrepancy between Qt and GTK+ apps in KDE on higher resolution displays. But in point of fact, since the 'calculated' DPI is going to wind up being ~96dpi in most cases anyway, it's not going to make such a huge difference...

Without the GTK+ change (or if it gets reverted), we'd probably have to make anaconda load the Xresources settings in when it starts X, so it gets the precise 96dpi value rather than the calculation which ends up giving not-quite-exactly 96dpi. For the KDE case, we'd probably have to futz up the tests a bit to somehow force the 96dpi value into xrdb or xsettings before running anaconda. We're not yet running the openQA tests on the KDE live image, though, so it's not urgent yet.
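
If we do end up having to futz up the KDE tests, it would probably look something like this - purely a hypothetical sketch, not anything we actually ship: feed the known-good values back into the resource database (xrdb reads standard input when no file is given) before anaconda starts.

import subprocess

# push the /etc/X11/Xresources defaults back in after krdb throws them away
settings = b'Xft.dpi: 96\nXft.hintstyle: hintmedium\nXft.hinting: true\n'
proc = subprocess.Popen(['xrdb', '-merge'], stdin=subprocess.PIPE)
proc.communicate(settings)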

Testdays: an app for dealing with Test Day wiki pages

Just wanted to quickly announce another app in the ever-growing wikitcms/relval conglomerate: testdays. testdays is to Test Day pages as relval is to release validation pages: it's a CLI app for interacting with Test Day pages, using the python-wikitcms module.

Right now it only does one thing, really - generates statistics. You can't (yet?) use it to create Test Day pages or report results. But you can generate some fairly simple statistics about a single Test Day page or some group of them, with various ways of specifying which pages you want to operate on.

For each page it will give you a count of testers, tests, and bug references, a ratio of bugs referred per tester, and an attempt to count what percentage of valid, unique bugs referred to in the page has been fixed. If you use it on a set of pages, it'll also give you overall statistics for all the pages combined, and a list of the top testers by number of results and bug references.

I'd like to thank Chris Ward for prodding me into making this - he asked for some statistics about the Fedora 22 Test Day cycle, and I figured I may as well write a tool to produce them...

testdays is available from the wikitcms/relval repository - if you have it set up, you can do (yum|dnf) install testdays.

Post-F22 plans

Hi, folks! I've been away on vacation for a few weeks, but I'm back now - if you'd been holding off bugging me about something, please commence bugging now. Of course, I have a metric assload of email backlog to dig out from under.

I'll probably have lots more on the list fairly soon, but just thought I'd kick off with a quick list of stuff I'm intending to do in the post-F22 timeframe:

  • Replace python-wikitcms' homegrown, regex-based MediaWiki syntax parsing with mwparserfromhell (which will imply packaging it) - there's a taste of what that buys us just after this list
  • Improve python-wikitcms' handling of Test Days and add a new tool (a relval analog for Test Days) - initial focus on getting useful stats out, but I may set up a similar wiki template-based Test Day page creation system so you can create the initial page for a Test Day with a single command
  • Write openQA tests for more of the release validation test cases
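
For the curious, here's the kind of thing mwparserfromhell gives you more or less for free - a sketch with made-up wikitext, not a real Test Day page:

import mwparserfromhell

# parse some wikitext and pull out the templates - no regexes required
text = "{{result|pass|adamw}} tested on [[Fedora 22]]"
code = mwparserfromhell.parse(text)
for template in code.filter_templates():
    print(template.name, template.params)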

I should also add better logging and unit tests to python-wikitcms, fedfind and relval, but let's be honest, I'll probably wind up doing shiny exciting things instead...

I'm also going to update the ownCloud packages to 8.0.4 (everything except EPEL 6) and 7.0.6 (EPEL 6) soon.

LinuxFest NorthWest 2015, ownCloud 8 for stable Fedora / EPEL

LinuxFest NorthWest 2015

As I have for most of the last few years, I attended LinuxFest NorthWest this year. It's been fun to watch this conference grow from a couple hundred people to nearly 2,000 while retaining its community-based and casual atmosphere - I'd like to congratulate and thank the organizers and volunteers who work tirelessly on it each year (and a certain few of them for being kind enough to drive me around and entertain me on Sunday evenings!)

The Fedora booth was extra fun this year. As well as the OLPC XO systems we usually have there (which always do a great job of attracting attention), Brian Monroe set up a whole music recording system running out of a Fedora laptop, with a couple of guitars, bass, keyboard, and even a little all-in-one electronic drum...thing. He had multitrack recording via Ardour and guitar effects from Guitarix. This was a great way to show off the capabilities of Fedora Jam, and was very popular all weekend - sometimes it seemed like every third person who came by was ready to crank out a few guitar chords, and we had several bass players and drummers too. I spent a lot of time away from the booth, but even when I was there we had pretty much a full band going quite often.

It was good to meet Brian and also Pete Travis, who does fantastic things for Fedora Docs, as well as Jeff Fitzmaurice. Jeff Sandys was there as usual as well, so we got to catch up over lunch.

I didn't have a talk this year (I proposed one but it didn't make it through the voting process), so I was able to take it nice and easy and just meet up with folks and watch talks. In between all of that I also got myself 3D scanned by Diane Mueller, who had herself set up in a trailer with a big lazy Susan and a Kinect and software I know nothing at all about, which managed to produce a scarily accurate model of me and my terrible posture. She's promised I'll get a tiny plastic bust of myself in the mail sometime soon, though I'm not sure exactly what to do with it...so thanks, Diane!

Hard to mention everyone else I ran into or met, but of course there was the (thankfully) inimitable Bryan Lunduke and James Mason of openSUSE fame, who took up their traditional spot opposite us and cried all weekend as they watched the huge crowds flock to our booth...

There were a lot of really good presentations. I particularly enjoyed Frances Hocutt's A developer's-eye view of API client libraries, which sounds a little dry but was very well presented and full of good notes for API client library producers and consumers. Frances wrangles API client libraries for the Wikimedia Foundation, so it was good to get to thank her for her work on the list of MediaWiki client libraries and the Gold standard set of guidelines for MediaWiki client library designers and all the other things she's done to improve API client libraries for MediaWiki - obviously this has been invaluable to wikitcms development.

It was also great to meet Frank Karlitschek at his Crushing data silos with ownCloud talk. I've been packaging ownCloud and making small upstream contributions for a while now so I've chatted with several of the devs on IRC and GitHub - I didn't even know Frank was going to LFNW, so it was an unexpected bonus to be able to say 'hi' in real life, and inspired me to do some work on OC, of which more later!

Diane's presentation on the latest bleeding-edge bits of the OpenShift stack went partly over my head - my todo list includes items like 'learn what the hell is going on with all this cloudockershifty stuff already' - but she always presents effectively and it was interesting to learn that OpenShift 3.0 is a real production thing built on Project Atomic, which is a bit astonishing since in my head Atomic is still 'this weird experimental thing Colin Walters keeps bugging me about'. These cloud people sure do move fast. Kids these days, I don't know.

Finally it was great to see Seth Schoen present Let's Encrypt. I'd heard a little about this awesome project but it was good to get some details on exactly how it's being implemented and how it will work. Their goal is, pretty simply, to make it possible to get and install a TLS certificate that will be trusted by all clients for any web server by running a single command. They're basically automating domain validation: the server comes up with a set of actions that will demonstrate control of the domain, the script running on your web server box (the 'client' in this transaction) demonstrates the ability to make those changes, the server checks they were done and issues a certificate, the script installs it. None of it is rocket science, but it's so immeasurably superior to doing it all one awkward step at a time with openssl-generated CSRs and janky web interfaces that the only wish I have is for it to be in production already. The real goal is to enable a web where unencrypted traffic simply doesn't happen - make it sufficiently easy to get a trusted certificate that, simply, everyone does it. It was pretty cool that at the end of the talk Seth was mobbed by Red Hat / Fedora folks offering help with integration - I'm guessing you'll be able to use this stuff on RHEL and Fedora servers from day 1.

All that and we had the now-traditional Friday night games night (beer, pizza, M&Ms, and Cards Against Humanity - really all you need on a Friday night!) too. It was a very enjoyable event as always.

ownCloud 8 for Fedora 21, Fedora 20, and EPEL 7

The other big news I have: ownCloud 8.0.3 came out recently, and it seemed like an appropriate time to kick most of the still-maintained stable Fedora and EPEL releases over to it. So there are now Fedora 21, Fedora 20 and EPEL 7 testing updates providing ownCloud 8.0.3 for those releases. Please do read the update notes carefully, and back everything up before trying the update!

EPEL 6 is still on ownCloud 7.0.5 for now; I'd have to bump even more OC dependencies to new versions in order to have ownCloud 8 on EPEL 6. That might still happen, but I decided to get it done for the more up-to-date releases first.

This build also includes some fixes to the packaged nginx configuration from Igor Gnatenko, so I'd like to thank him very much for those! I still haven't got around to testing OC on nginx, but Igor has been running it, so hopefully with these changes it'll now work out of the box.

GNOME Software / Apper Test Day on Thursday

The last of the scheduled Fedora 22 Test Days will happen this Thursday: GNOME Software / Apper Test Day. We'll be testing out the graphical software managers for the two main Fedora desktops, GNOME and KDE. Note the wiki page is incomplete right now - I'll be filling it out with test cases and so on tomorrow.

This should be a fairly light Test Day, but as always all help is welcome! You do need an installed copy of Fedora 22, so if you don't want to sacrifice a 'real' system, a virtual machine is a good idea. As always, we'll be meeting in #fedora-test-day on Freenode IRC. If you don’t know how to use IRC, you can read these instructions, or just use WebIRC.

Please come along and help out if you can! Thanks.

Fedora 22 Beta, and FedUp Test Day

It's yet another busy week in the world of Fedora QA!

Tomorrow the Fedora 22 Beta will be released. Despite the apparently ever-expanding set of deliverables (images and such), we've only had one week of delays in the 22 schedule so far, this Beta release coming a week late. The bug we didn't manage to fix in time for the initial planned release date was one that prevented Cloud instances from being rebooted, in case you're interested! I learned some stuff (and got my name in the commit log for util-linux) while helping to get that fixed...

The Beta's in pretty good shape overall, with the usual crop of known bugs but probably pretty solidly installable and usable for most folks. I've been running 22 here for some time, of course, and have been really enjoying all the GNOME 3.16 improvements as well as all the Fedora changes.

Tomorrow (2015-04-21) is also a Test Day in another two-Test-Day week! It will be FedUp Test Day. We've already done quite some upgrade testing as part of release validation, but it's always good to have more testers throw more configurations at fedup and make sure it's behaving properly. Obviously you need to have an installation of Fedora 20 or 21 you don't mind using for test purposes, but you can test with a virtual machine if necessary!

All the information is on the Test Day page, and QA and development folks will be available in #fedora-test-day on Freenode IRC (no, you darn kids, that's not a hashtag) to help with any questions or feedback you have. If you don’t know how to use IRC, you can read these instructions, or just use WebIRC. Please do come along and help us test Fedora 22 upgrades if you can!

ABRT and virtualization Test Days this week!

This week in Fedora QA we have two Test Days! Today (yes, right now!) is ABRT Test Day. There are lots of tests to be run, but don't let it overwhelm you - no-one has to do all of them! If you can help us run just one or two it'll be great. A virtual machine running Fedora 22 is the ideal test environment - you can help us with Fedora 22 Beta RC2 validation testing too. All the information is on the Test Day page, and the abrt crew is available in #fedora-test-day on Freenode IRC (no, you darn kids, that's not a hashtag) right now to help with any questions or feedback you have. If you don’t know how to use IRC, you can read these instructions, or just use WebIRC.

Thursday 2015-04-16 will be Virtualization Test Day. As with the ABRT event there's lots of testing you can do, but just doing a little helps us out! Once again, we'll be meeting in #fedora-test-day, and all the test instructions and other necessary information are on the wiki page.

Many thanks to everyone who can find some time to help with either or both Test Days!

SSL certificate issues with reverse proxy subdomains when using python-requests with Python < 2.7.9

So I had an interesting issue today which I couldn't find many Google results for, so I'll create one!

Many thanks to sigmavirus24 from #python-requests on Freenode for this - I'm just passing it on.

I use reverse proxying quite a bit on happyassassin.net, because I've only got one public IP address, but I don't want to run all web apps from the same host. Reverse proxying means there's one server which gets all requests to that IP on port 80, but it then passes them on to different servers inside my network depending on what the requested URL was. So when you request a page under www.happyassassin.net (like this one), the request is handled by a different machine than when you request a page under openqa.happyassassin.net.

I also use different SSL certificates for the various subdomains. If you check www.happyassassin.net and openqa.happyassassin.net, you'll see you get different SSL certs for each, and neither is valid for the other subdomain.

When you're using reverse proxying with this sort of config, the reverse proxy machine has to have all the certificates, and present the correct server certificate when the SSL/TLS connection is established - the establishment of the connection can't be handed off for $SECURITY_REASONS. This actually depends on an SSL/TLS extension called SNI (Server Name Indication), which was fairly new stuff six or seven years ago. These days though it mostly Just Works with all browsers and other client-y things you really care about (no, you don't care about IE 6. You really, really don't) and once you figure out the initial configuration you kinda forget about it...until it comes back to bite you in the ass!
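
If you want to watch SNI in action, the Python standard library can show you which certificate a server presents for a given name - a quick sketch that needs Python 2.7.9+ or Python 3, where the stdlib does SNI itself:

import socket, ssl

# connect with SNI (server_hostname) and print the subject of the cert we get
ctx = ssl.create_default_context()
conn = ctx.wrap_socket(socket.create_connection(('www.happyassassin.net', 443)),
                       server_hostname='www.happyassassin.net')
print(conn.getpeercert()['subject'])
conn.close()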

I was working on our openQA integration stuff, porting it to use my openQA python client library which is based on the awesome python-requests. I was trying to test it with openqa.happyassassin.net, but it kept failing certificate validation: it was clearly getting the cert for www.happyassassin.net, not the one for openqa.happyassassin.net.

As Mr. Virus24 explained, the problem is pretty simple: python-requests does not always support SNI. It's kind of an optional feature. If you have Python 2.7.9 or Python 3 (possibly it requires some specific Python 3 minor release, but I'm not sure which) SNI support can be relied upon, but for older Python 2 releases, requests will only do SNI if some other Python modules are installed. If you're using pip, you can do pip install requests[security]. I use distro packages, so I went and looked at requests' setup.py file and figured that, on Fedora 21, I'd need the pyOpenSSL, python-pyasn1, and python-ndg_httpsclient packages installed. I had the first two, but the last was missing. Sure enough, as soon as I installed it, my test script magically started working, with no other changes - requests is now doing SNI.
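
A quick way to check whether the extra bits are in place on an older Python 2 is just to try importing them (these are the module names behind the packages mentioned above):

# the three extras requests needs for SNI on Python 2 < 2.7.9
try:
    import OpenSSL            # pyOpenSSL
    import pyasn1             # python-pyasn1
    import ndg.httpsclient    # python-ndg_httpsclient
    print('all SNI dependencies present')
except ImportError as err:
    print('missing SNI dependency:', err)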

So, if you're using python-requests (directly or indirectly) and having certificate issues that look like this (the server sends the cert for the wrong domain), check you have all the right bits installed for requests to do SNI!

Hardware refresh, March 2015

I don't know if anyone else gets anything out of these posts, but I find myself referring back to them surprisingly frequently when I try to remember what the hell I have in my boxes, so you're getting another one. :)

I've been doing some hardware overhauling here at HA Towers again. The last major refresh I did was almost exactly two years ago; last year I moved stuff around quite a bit, but that didn't really involve much in the way of hardware changes (except buying a dedicated MythTV box).

The first thing I did was upgrade the MythTV box with a Core i7-4790, max its RAM out to 16GB, get rid of its big hard disks and give it an SSD instead, and take out the Firewire card.

If all those things sound odd for a MythTV box, congratulations for paying attention; it's not one any more. It was fun playing with MythTV, but it turned out (as the last time I played with it) to still be just a bit too buggy to rely on (playback/recording has a habit of suddenly dying, often at the top of the hour or on switchover from one program to the next), and we really don't record that much TV anyway.

So now it's my openQA box instead! It can run four workers simultaneously pretty well, which is good enough for me.

I also pretty much entirely replaced the 'virtual machine host' box which runs various server VMs - this site, my mail server, my IRC proxy, and my FreeIPA domain controller. The old one was still working fine, but I kinda wanted to add a few more machines and split up the web server into one for public stuff and one for private, and it was maxed out at 16GB of RAM which was getting tight for that many VMs.

So I put together another i7-4790-based box, with an Asus H97M-E/CSM motherboard (wanted a good mATX board which could support 32GB of RAM). I also bumped up the storage space (was running a bit low on that too): the old box had 128GB (2x Samsung 840 Pro in RAID-1), the new box has 240GB (2x Intel 730 in RAID-1).

The old virt host box is now my dedicated bare metal test box - yes, after a mere six years or so in QA, I finally have one.

After all the shuffling was done I wound up with enough disks lying around that I could migrate my desktop from a single 128GB SSD to three of them as a RAID-5 set, so my desktop now has twice the storage too.

I also replaced our HTPC box; the old one was a Zotac AD02 Plus, which still worked just fine, but hi10p is becoming more and more common, and that system can't play 1080p hi10p smoothly (nothing does hardware decoding of hi10p yet, so it's reliant on CPU power, and CPU power was never that box's strong suit).

The new one is an Intel NUC5i3RYK - one of their lil' NUC boxes. I managed to get my hands on one of the fairly-brand-new ones with an i3-5010U CPU. I stole half the RAM from the old-vmhost-now-test-box, and just got a 32GB USB-3 stick for storage. Just like the old box, it runs OpenELEC, which is still a fantastic project. A small box running OpenELEC is pretty unbeatable as an HTPC, for my needs anyhow.

The only slight fly in the ointment is that the handy onboard infrared is broken in Linux, apparently due to Intel messing up the firmware; that should get resolved sooner or later, for now I'm still using the USB IR receiver I had for the old one. It also uses the slightly unusual mini HDMI port, so make sure you get the right cable, if you buy one...

Finally, I've also replaced my phone recently, and today I got a new tablet. I'm still playing with Fedlet when I can, but it's not really usable and likely won't be soon. I don't really use a tablet that much, but I do like to have one when I go travelling, it's a lot nicer to carry around and use a small travel bag with a tablet than a big bag with a laptop.

My new phone is an LG G3, and the tablet is a Samsung Galaxy Tab S 8.4. I bought both of them for two specific reasons: i) they're good, light, thin hardware with sufficient RAM and nice screens, and ii) they both have decent official CyanogenMod 12 builds. I got pretty sick of the firmware on my Xperia ZL after a while; the phone wasn't bad, but it was stuffed with Sony and Google apps I didn't need and couldn't get rid of, and it wouldn't shut up about updates for the Sony ones. CM12 with modular GApps is a hell of a lot nicer.