WIP: Fedora Finder

Fedora Finder finds Fedoras.

Funny what you wind up working on for QA. I'm currently working on a Thing which finds Fedoras. Feed it a Fedora pre-release, release, TC/RC, or nightly version and it'll find the images from that version.

At least, that's the Grand Vision. So far I'm mostly focusing on TCs/RCs and nightlies, as that's what we actually need for our immediate purposes (it'd be useful for python-wikitcms and also for the OpenQA stuff we're currently working on - more on that later).

It's still heavily WIP, but I've thrown it in a repository so I don't lose it and so anyone else who wants to can take a look. At present it's a single file with a few functions and some test code, I'll turn it into a proper module when it gets further along.

The example code shows how to use the functions for finding nightly images, and find_compose_images() is pretty easy to play with - it'll give you a list of URLs to images for the specified TC/RC compose (only ones that are actually there).
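
The description above boils down to "build candidate image URLs for a compose, then keep only the ones that respond". Purely as a hedged sketch of that idea — the URL layout, function signature, and `exists()` helper here are my own illustrative assumptions, not the actual code from the repository:

```python
from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError

def find_compose_images(release, compose, candidates, exists=None):
    """Return the candidate image URLs that are actually present.

    Sketch only: the real function and its URL layout may differ.
    """
    # Assumed staging layout for TC/RC composes (illustrative)
    base = f"https://dl.fedoraproject.org/pub/alt/stage/{release}_{compose}"
    if exists is None:
        def exists(url):
            try:
                # HEAD request: we only care whether the file is there
                req = Request(url, method="HEAD")
                return urlopen(req, timeout=10).status == 200
            except (HTTPError, URLError):
                return False
    return [f"{base}/{name}" for name in candidates
            if exists(f"{base}/{name}")]
```

Passing `exists` makes the network check injectable, which keeps the sketch testable without hitting the mirrors.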

Documenting Wikitcms

A few folks had asked me to document all the various things I did to the wiki during the Fedora 21 cycle, related to the release validation test result pages - the system for creating them from templates, and the interaction with python-wikitcms and relval.

I already added template documentation for most of the template pages that are relevant, but that just kind of gives you an idea of what each template does specifically; it doesn't give you an overview of how the whole system works.

As I've been working on this stuff for the last few months, I've settled on a perhaps slightly wacky interpretation of it which nevertheless makes the most sense to me. I like to consider that what we have effectively done, over time, is implement (part of) a test management system as a set of conventions about wiki content.

So I decided to document the system in just this way: the Wikitcms page on the wiki documents the Wikitcms 'test management system'. It explains all (well, most) of the conventions about page naming and contents and categorization and so on that python-wikitcms expects when reading existing results, and that are honored by the templates and python-wikitcms/relval when creating new result pages and entering results.

In time-honored F/OSS tradition, this concept introduces some fun naming confusion with the python module. You'll note I've taken to calling it python-wikitcms in this post; from now on, as far as practical, the Python module is python-wikitcms, the wiki-based 'test management system' is Wikitcms (with a big W). But I'm not going to bend over backwards to rename git repos and things, that's just unnecessarily awkward.

Fedora 21: problems with offline updates, other PackageKit stuff

Hey, you! Yes, you! Are you having problems with software updates in Fedora 21? Mysterious errors from GNOME Software or Apper? Well, ask your pharmacist today about new Updatrex™...

no, wait, that's not it. Ahem.

Since the middle of last week we've been aware of some bugs with the PackageKit stack. The initial bug report was for offline updates failing, but during testing of the fix for that, various other bugs were identified which could potentially cause problems with many PackageKit transactions - that's mostly documented in this report. For the most part, though, folks only seem to have been noticing issues since libhif 0.1.7 came out as an update in late December.

Richard Hughes has been working hard to fix the problems, and we now have an update in updates-testing which we're fairly confident should fix all the known bugs in this area. Note: if you're seeing the error SearchGroups not supported by backend from Apper, that's a different issue, not covered by this post.

Until the update gets an advisory ID, you can install it like this:

# mkdir /tmp/updtmp
# cd /tmp/updtmp
# yum install bodhi-client
# bodhi -D PackageKit-1.0.4-1.fc21
# yum update *.rpm
# systemctl restart packagekit.service

For most folks simply installing the update should resolve any problems, but if you were unfortunate, you may also need to reboot your system, run:

# pkcon repair

and reboot again. That should resolve any problems with offline updates and other PackageKit operations.

And of course, if the effects of Updatrex™ persist for more than four hours, consult your physician immediately.

OpenSSL: trust and purpose

Those following me on various Intarweb Media may have noticed I've spent half the week staring at openssl source code and weeping. Here's one of the results of that.

OpenSSL has two somewhat different mechanisms for deciding what uses a certificate is good for: trust and purpose. This is quite subtle and not terribly well documented, so I thought I'd write it up here.

Here's a tl;dr approximation: purpose is about what the entity that issued the certificate has to say about what it should be used for. trust is about the degree to which the entity validating the certificate trusts the entity that issued it.

purpose checking is more commonly encountered and understood. The reason we have it should be fairly obvious. Say I'm a CA, and you ask me to issue you a certificate for your web server, awesomesauce.com. What I, as the CA, want to say to the world at large is "I affirm that the entity that holds the other half of this here key pair is the legitimate operator of awesomesauce.com". Notably, I do not necessarily want to say to the world at large "I affirm that the holder of the other half of this here key pair can tell you who else to trust, on my behalf" (i.e. to act as an intermediate CA), or even "I affirm that the holder of the other half of this here key pair is the legitimate owner of the email address bob@awesomesauce.com" (S/MIME email stuff).

So we don't just need a mechanism for entities to sign each other's keys; we need to indicate what purpose they're affirming trust in the other entity for.

If you have a certificate for your web server or whatever, take a look at it, with openssl x509 -in certfile -noout -text. If you don't, here's an excerpt from the output you get by running that command on the happyassassin.net certificate:

    Subject: C=CA, ST=British Columbia, L=Vancouver, O=Adam Williamson, CN=www.happyassassin.net/emailAddress=postmaster AT happyassassin.net
        X509v3 Subject Alternative Name: 
            DNS:www.happyassassin.net, DNS:happyassassin.net, DNS:mail.happyassassin.net
        X509v3 Key Usage: 
            Digital Signature, Key Encipherment, Key Agreement
        X509v3 Extended Key Usage: 
            TLS Web Client Authentication, TLS Web Server Authentication

That information is a part of the certificate as issued by the CA. What it means is that the CA trusts me (as the holder of the other half of the key pair associated with the certificate) for the purposes of "Digital Signature, Key Encipherment, Key Agreement" and "TLS Web Client Authentication, TLS Web Server Authentication", as they relate to the subject and subject alternative name that also form a part of the certificate metadata. The various (extended) key usages are defined in RFC 5280 and RFC 3280, mostly (I think a few crop up in other RFCs).

When openssl does purpose checking, what it does is check all the certificates in the chain being verified which don't come from the store of trusted certificates to see if their key usage extensions (and a few similar bits of metadata) match the purpose for which the chain's validity is being tested. The intermediate certificates are required to have the appropriate key usage extensions for a CA issuing certificates for the purpose, and the leaf certificate (the server certificate or whatever) is required to have the appropriate key usage extensions for an entity performing the purpose.
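
Python's ssl module exposes a loose analogue of this: you create a context for a given purpose, and the underlying OpenSSL verification then checks the chain's key usage extensions against it. A minimal illustration (the `Purpose` names are Python's, not OpenSSL's internal purpose identifiers):

```python
import ssl

# Validating a *server* means our side is the client: SERVER_AUTH contexts
# require a server certificate and verify it for the server purpose.
client_side = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)

# A server checking client certificates would use CLIENT_AUTH; by default
# such a context doesn't demand a certificate from the peer at all.
server_side = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)

print(client_side.verify_mode == ssl.CERT_REQUIRED)  # True
print(server_side.verify_mode == ssl.CERT_NONE)      # True
```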

trust checking is a bit less commonly encountered and, probably, understood.

We mentioned the trusted certificate store just now. This is the list of certificates considered 'trusted'. Commonly, this is kind of a binary operation. If you have a regular certificate in OpenSSL's trust store it is considered that you trust it for all purposes (corrections welcome on this, but my reading of the source is OpenSSL does not run purpose checks on certificates from the trust store, even though they can express key usages - OpenSSL's trust store is pretty much explicit a store of trusted CA certificates, you shouldn't ever put server certificates in it). The trust store usually contains root CA certificates - the basis of the CA trust system, your Verisigns and StartComs and GoDaddys and so on. When you verify a typical site certificate, it provides its own certificate and all the intermediate certificates between itself and the root CA; you have the root CA certificate in your trust store; and OpenSSL does purpose checking on the server certificate and all the intermediates (as these are untrusted certificates, not from the trust store) as well as validating them in other ways. If you're checking a web server's certificate, it will make sure that certificate and all intermediate certificates have a key usage consistent with the web server purpose (or have no key usage extensions at all, for compatibility with very old certificates).

OpenSSL does, however, provide a mechanism by which you can limit the extent to which you (the entity doing the validation) trust a certificate in the trusted store. This is what the trust mechanism is all about.

OpenSSL understands an extended certificate format which I call BEGIN TRUSTED, after the text that identifies it when written to PEM format. It's sometimes referred to in the documentation as the 'trusted certificate' format, but I don't like that term because they sometimes use the same term to refer to any certificate in the trust store, which just gets confusing.

It's a bad name, because what it actually does is limit the trust placed in the certificate. A regular certificate in the trust store, as I said, is trusted for all purposes. The BEGIN TRUSTED certificate format lets you say 'I trust this certificate, but only for certain purposes'.

You can take an existing certificate and run it through openssl x509 with the -addtrust and/or -addreject parameters to explicitly state that you trust or distrust it for certain purposes. So if I take a CA cert and run this on it:

openssl x509 -in ca.pem -addtrust serverAuth -addreject clientAuth -out ca-partial-trust.pem

I get out a BEGIN TRUSTED style certificate file which says "this CA is explicitly trusted to issue certificates for SSL servers and explicitly distrusted to issue certificates for SSL clients", and doesn't express an opinion about its trustworthiness for other purposes. Again, this is actually a less trusted certificate, because OpenSSL treats certificates in the trust store which have absolutely no trust extensions as trusted for all purposes.

When you validate a certificate chain, OpenSSL 1.0.1k runs trust checks against the first certificate in the chain (the root certificate). OpenSSL git master runs trust checks against all certificates in the chain that came from the trusted certificate store. The way the trust check is written is that if there are any trust extensions in the certificate, then it must be trusted for the trust type for which the chain is being validated (that is, so long as any trust extensions are present, a certificate which isn't explicitly trusted for the trust type being checked will be rejected). The trust type can be specified by the code running the validation process, and they differ from but are similar to the 'purpose' types. For our web server validation case, the trust type is likely to be "SSL Server" (X509_TRUST_SSL_SERVER in the code). If the certificates that have trust checks run on them aren't trusted for that purpose, the validation fails.

This mechanism has been around for a long time (since 1999), but isn't terribly commonly used with OpenSSL. The only distribution I've found that uses the extended BEGIN TRUSTED format for its system certificate store is Arch, and I rather suspect they're actually doing it by accident (see here and here for some boring details on that). Most distributions keep their OpenSSL trust stores in basic format, with no trust extensions, and hence the certificates in them are trusted for all purposes. But if you do encounter it, that's what it's all about - remember the difference between trust and purpose.

There is a decent reason to have such a mechanism. Interested parties like browser and OS vendors are making something of a concerted effort to improve overall internet security by dropping old and potentially insecure CA certificates from their trust stores, but this can be difficult when certificates they issued are still being used in the real world. Sometimes we run into a situation where we know, say, that certificates signed with a bad old CA certificate are still being used for S/MIME purposes, but no servers should be using certificates issued by them any more. In this case, it is good to have a mechanism by which distributions can say 'this CA should be trusted only for mail certificates, not for server certificates'. This is the function served by the TRUSTED CERTIFICATE feature. NSS has its own implementation of this, and Firefox and many vendor NSS distributions already include certain CA certificates in their trust stores with this kind of partial trust; as I mentioned above, its use with OpenSSL is less common, but is at least possible.

When using openssl verify to check certificates and chains manually, you can pass -purpose to specify the purpose to be checked. If you don't pass -purpose, no purpose checks will be run.

openssl verify, at present, cannot be made to run trust checks, which is a notable difference from the behaviour of most OpenSSL-based applications. Applications which use OpenSSL's SSL functions will, unless they explicitly set a trust type, use the SSL Server trust type; applications which identify themselves as server applications use the SSL Client trust type instead (they're the ones validating clients). I sent a patch for this, which just hooks up a few bits that already exist so that when you pass -purpose, a trust type is chosen accordingly and trust checks are run against that trust type.

Bash history with multiple sessions

Today I spent a bit of time investigating a rather annoying buglet that seems to have shown up since Fedora 21. I'd been noticing, vaguely, that things I was sure should be in my bash history weren't there.

So far we're guessing the bug happens when you reboot with a terminal app running; it seems sometimes the active sessions don't get time to write the history out to the bash history file before they're killed. It may be related to this bug, though we're waiting to hear back from the systemd devs on that.

Anyway, while looking into bash's behaviour in this area, I came across a nice snippet from Bradley Kuhn, and I've taken the liberty of adopting that (it's GPL, right Bradley? :>) and tweaking it a bit:


function prompt_command {
    if [ $(($SECONDS - $LAST_HISTORY_WRITE)) -gt 60 ]; then
        history -a && history -c && history -r
        LAST_HISTORY_WRITE=$SECONDS
    fi
}

LAST_HISTORY_WRITE=$SECONDS
PROMPT_COMMAND=prompt_command


I saved this as /etc/profile.d/zz-happyassassin-history-safe.sh. There's a /etc/profile.d/vte.sh which overwrites any previous PROMPT_COMMAND, so you need to make sure it's alphabetically after that, and I have a convention of including happyassassin in the names of config files that are my own customizations, so I can conveniently identify them.

Basically, every time I run a command, if it's been more than 60 seconds since the last time this happened, it saves the current session's history to the history file (history -a), clears the current session's history (history -c) and reads the history back in from the file (history -r).

This both protects against the problem where you lose commands from active sessions at shutdown, and means when you have lots of shells open at once (in tabs or windows or whatever), every 60 seconds each will pick up history from all the others - I get a bit annoyed when I have six terminals open and I realize I need something from the history of a different terminal but I can't remember which one...

You can tweak the frequency by changing 60 to something else - Bradley's snippet had 300 so it'd happen every 5 minutes, for testing I set it to 5. You could even drop the timer entirely and have it happen every time you run a command, for maximum safety (and to keep the order of the commands as accurate as possible, I guess). On an SSD at least, this happens so fast you don't notice it - it might take a bit longer if you have the 11MiB history file discussed in the initial thread, so tweak to your desires.

I've been running this for a whole twenty minutes so far, so I'll update this post if it turns out to cause terrible casualties or anything, but just thought I'd note it quickly first :)

EDIT: I've been using this since and generally like it, though it does have drawbacks. Reading the history from other consoles back into the current console regularly changes the order of the history - you can't be sure that pressing 'up' three times will give you the third-last command from that console. Which is sometimes annoying. But overall, I'm happy with it.

Trusting additional CAs in Fedora / RHEL / CentOS: an alternative to editing /etc/pki/tls/certs/ca-bundle.crt (or /etc/pki/tls/cert.pem)

Around the internet, you can find various pages advising appending CA certificates to /etc/pki/tls/certs/ca-bundle.crt or /etc/pki/tls/cert.pem (they're the same file, one's a symlink to the other) as a good way to trust them.

This may be necessary, but it has drawbacks (the main one being that once you've edited the file, it will no longer be automatically updated; updates will appear as ca-bundle.crt.rpmnew and you'll have to remember to manually move it to ca-bundle.crt and re-append your custom certs). There's an alternative approach that may be better for you, even on really old RHEL / CentOS.

On Fedora since 19, RHEL / CentOS 7, and RHEL / CentOS 6 since this update, the Shared System Certificates feature is available. With that system, the correct method is to place the certificate to be trusted (in PEM format) in /etc/pki/ca-trust/source/anchors/ and run sudo update-ca-trust. (If the certificate is in OpenSSL's extended BEGIN TRUSTED CERTIFICATE format, place it in /etc/pki/ca-trust/source). On RHEL 6, you have to activate the system with update-ca-trust enable after installing the update; if you don't want to use it, you can try the approach below.

On RHEL / CentOS 5, that system isn't available. But it's still not a good idea to modify the distribution's bundle file unless you really have to, as explained above.

Instead of appending to the bundle file, you can try placing the certificate to be trusted (in PEM format with the extension .pem) in /etc/pki/tls/certs and run sudo c_rehash (you may need a yum install /usr/bin/c_rehash). The .pem extension is important, c_rehash will only process files with this extension.

Readers of my last post may grok what's going on there: /etc/pki/tls/certs is OpenSSL's 'default CApath' on RHEL/CentOS, and OpenSSL will trust both certificates found in the 'default CAfile' - /etc/pki/tls/cert.pem - and in the 'default CApath' - /etc/pki/tls/certs.

The caveat is that this only works for things that use OpenSSL and use its default trust store locations. It won't work for apps that use OpenSSL but directly use the bundle file instead of using OpenSSL's 'default trust store' function, and it won't work for anything based on GnuTLS (whereas editing the bundle file often will, as we often have those patched to load the bundle file directly).

So sometimes you just have to edit the bundle file - but in some cases you might be able to avoid it.

A note about SSL/TLS trusted certificate stores, and platforms (OpenSSL and GnuTLS)

Pop quiz: where is OpenSSL's default store of trusted CA certificate files?

  1. /etc/ssl/certs
  2. /etc/pki/tls/certs/ca-bundle.crt
  3. /etc/ssl/certs/ca-bundle.crt
  4. /etc/pki/tls/certs/ca-bundle.trust.crt
  5. /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem
  6. /System/Library/OpenSSL
  7. Some other goddamn place

If your answer was "8. It's a trick question", well done, take this gold star for knowledge and/or test-taking skills. Depending on the platform you're on, any of the above could exist, and sensible choices are 1, 2, 4, 6 (OS X, at least according to this page, which gives some other fun if rather old choices), or 7, depending mostly on what OS / distribution you're running on.


If you're writing software that does SSL/TLS certificate validation using OpenSSL - well, commiserations. But also, please don't assume that any of the above locations exists, and certainly don't hard code one as the default. Usually, what you should do is try and use your SSL library's default store; if that fails, you can fall back on trying a few default locations. You should also provide a configuration option for the user to specify a store location, and it should handle two different types of location.

For OpenSSL (and derivatives like LibreSSL), a store of trusted CA certificates can be a single file containing one or more concatenated certificates in PEM format, or a directory containing individual certificate files in PEM format, where each file is named in a specific format according to its hash value (these directories are usually produced by running the c_rehash command on a directory full of certificate files with more human-readable names, which produces symlinks in the expected format; p11-kit's trust extract / p11-kit extract command also has some support for doing this).

Debian, Red Hat / Fedora, and OpenSUSE have systems which produce a canonical trust store from the certificates found in various locations - the idea being to allow for flexible packaging of the distribution's own default trusted certificates, and for the system administrator to add to or override that list in such a way that software will respect their choices.

In Debian, the system ultimately populates the /etc/ssl/certs directory with certificate files and runs c_rehash on it. It also produces a bundle file, /etc/ssl/certs/ca-certificates.crt. I'm not sure which is considered 'preferred' for Debian purposes, if either is - but on Debian and derivatives, you can rely on /etc/ssl/certs being usable as a hashed directory, and /etc/ssl/certs/ca-certificates.crt as a bundle file.

In RHEL 5 and higher and all versions of Fedora you care about, RHEL/Fedora-derived distributions, and some others which are not derived from RHEL or Fedora but use its system (e.g. Arch, since 2014-09 or so) or follow its default locations, the system creates a bundle file. A newer and more capable system is used since Fedora 19 and RHEL 7 than was used in earlier releases, but both systems ultimately provide the bundle in OpenSSL's expected format at /etc/pki/tls/certs/ca-bundle.crt, so that is a safe default location for all Fedora releases and, I believe, RHEL 5+.

In OpenSUSE, I believe - see their system - the canonical locations are under /var/lib/ca-certificates (the canonical bundle file is produced at /var/lib/ca-certificates/ca-bundle.pem), and a hashed /etc/ssl/certs directory exists for compatibility with things that expect that (Debian) layout. As this post does, OpenSUSE explicitly recommends relying on the SSL library's default paths: "Your system openSSL knows how to read that, don't hardcode the path! Call SSL_CTX_set_default_verify_paths() instead."

Note that Fedora and RHEL provide a /etc/ssl/certs directory as an attempt at Debian compatibility, but it actually doesn't work very well for that at all - it just provides a bundle file /etc/ssl/certs/ca-bundle.crt (which doesn't match Debian's bundle name), and does not make any attempt to make the directory usable as a hashed directory at all. Basically, don't use RHEL/Fedora's /etc/ssl/certs, it's a trap. Don't use /etc/ssl/certs/ca-bundle.crt, even though it does probably exist on most current Fedora/RHEL installs; it's not a canonical location for Fedora/RHEL and it's unlikely to exist on anything else, and it may possibly go away on Fedora in future. Just don't use it.

So, if you wanted to handle trust store locations yourself, you could probably cover all or most Linux distros by checking for /etc/pki/tls/certs/ca-bundle.crt and /etc/ssl/certs in that order and using the first found, in the appropriate way (the first as a bundle file, the second as a directory). But you still wouldn't be covering non-platform builds of OpenSSL, or OS X, or Windows.

So really, what you should do - like I said - is first of all, try letting OpenSSL handle it. If you're using OpenSSL directly, the function you want is SSL_CTX_set_default_verify_paths(). If a default trust store was specified at the time the OpenSSL build your app winds up using was done, it'll get used. Distributions should always make sure this is properly handled, and it ought to work on OS X and Windows as well. Hilariously enough, the function isn't officially documented; this thread has some discussion of it. If you're using some wrapper it probably ought to either do this for you somehow, or expose that function for you to use in some way (see e.g. SSLContext.load_default_certs() in Python, though you really ought to use something like Requests for Python if at all possible).
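
In Python, for instance, the function is exposed directly on SSLContext. A quick sketch of "let the library find its own store":

```python
import ssl

# This is SSL_CTX_set_default_verify_paths() under the hood: load whatever
# default CAfile/CApath this OpenSSL build was configured with.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.set_default_verify_paths()

# On a sanely packaged distro the store should now contain CA certs. Note
# the count can be 0 even on a working system if the build only has a
# hashed CApath, since those directories are read lazily at verify time.
stats = ctx.cert_store_stats()
print(stats)  # a dict with 'x509', 'crl' and 'x509_ca' counts
```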

If you're worried about cases where it somehow doesn't work, you can fall back to checking the Fedora/RHEL and Debian default locations. Knowing when to fall back might be a bit tricky; I forget the details, but I believe SSL_CTX_set_default_verify_paths() can fail, but it can also 'succeed' but result in an empty set of trusted certificates. You can probably check for that case. Exactly how hard you want to look for a default trust store, and what your app should do if it can't find one, is to an extent dependent on what your app does; it may make sense to bail out and warn the user, or go ahead without certificate validation (i.e. insecurely) and warn the user, or do something else, it's pretty context-dependent. Just think about it carefully.

Whatever the case, the other thing you should probably do (as I mentioned above) is allow for user configuration. Even with a distro that allows for modification of the default trust store, there may be a case where a user wants/needs to use a different trust store for your app. Maybe, given the nature of your app and how it's deployed, the user wants it to trust only their own site CA, for instance.

If you're allowing user configuration, you really ought to allow the trust store to be in either format. This is very easy to handle. If you're using OpenSSL directly, you use the SSL_CTX_load_verify_locations() function. The first argument to this function is always the SSL context. If you're loading a bundle file as the trust store, it goes as the second argument, and the third is NULL. If you're loading a hashed directory as the trust store, it goes as the third argument, and the second is NULL. OpenSSL wrappers for other languages usually expose this fairly directly. All you need to do is test whether the provided location is a file or a directory, and load it appropriately (and handle the case where it's neither appropriately - usually you're going to want to throw some kind of error; I wouldn't recommend falling back to the system default trust store, as if the user is going to the trouble of specifying a location, they're quite strongly implying that's not what they want).
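
In Python, where load_verify_locations() wraps the OpenSSL function (cafile and capath correspond to the second and third C arguments), handling both formats is a few lines; a sketch:

```python
import os
import ssl

def load_trust_store(ctx: ssl.SSLContext, location: str) -> None:
    """Load a user-specified trust store, accepting either format."""
    if os.path.isfile(location):
        # A bundle file: one or more concatenated PEM certificates.
        ctx.load_verify_locations(cafile=location)
    elif os.path.isdir(location):
        # A c_rehash-style hashed directory of individual PEM files.
        ctx.load_verify_locations(capath=location)
    else:
        # Don't silently fall back to the system default store here: the
        # user explicitly asked for this location, so failing is kinder.
        raise ValueError(f"trust store not found: {location}")
```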

An aside on OPENSSLDIR

If you poke around enough random forum / mailing list / Stack Overflow discussions of this stuff, you'll find the occasional assertion that someone (usually Red Hat) isn't following the 'OpenSSL defaults', and making everyone's life harder - e.g. here. So far as I can tell, this isn't accurate.

For the sake of thoroughness I looked into how SSL_CTX_set_default_verify_paths() actually works. It goes through a fairly complex code path, but basically what it winds up doing is trying to load certificates from 'the default file' and then from 'the default directory'. The default locations are defined in cryptlib.h as OPENSSLDIR "/certs" (default directory) and OPENSSLDIR "/cert.pem" (default file).

So, what's OPENSSLDIR? Well, you configure it at build time with the --openssldir parameter to the Configure script. If neither that parameter nor --prefix is passed - which would be the closest thing OpenSSL has to a 'default' - it becomes /usr/local/ssl. AFAIK no distro actually uses /usr/local/ssl as its OPENSSLDIR. As far as I can see, no standard location has ever been defined by a commonly-adopted specification: there's nothing in any version of the FHS or LSB about OpenSSL locations. Therefore there really is no 'default' OPENSSLDIR location; distributions have been pretty much picking it out of a hat since the year dot.
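
Python will happily tell you what OPENSSLDIR-derived defaults your own build was compiled with, which is a handy way to see the above on your system (the example paths in the comments are what I'd expect on Fedora/RHEL; yours may differ):

```python
import ssl

paths = ssl.get_default_verify_paths()
print(paths.openssl_cafile)  # e.g. /etc/pki/tls/cert.pem on Fedora/RHEL
print(paths.openssl_capath)  # e.g. /etc/pki/tls/certs
# The environment variables that override the compiled-in defaults
# have fixed names, regardless of platform:
print(paths.openssl_cafile_env, paths.openssl_capath_env)
```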

What's that, reader? You want more specific historical trivia? Fine! I can but oblige.

From Debian's openssl changelog, it looks like /etc/ssl came into existence one snowy night (indulge me, here) in 1999:

ssleay (0.9.0b-2) unstable; urgency=low

  * Include message about the (not)usage of RSAREF (#24409)
  * Move configfiles from /usr/lib/ssl to /etc/ssl (#26406)

 -- Christoph Martin <christoph.martin@uni-mainz.de>  Wed, 31 Mar 1999 15:54:26 +0200

We can also track the invention of Red Hat's /etc/pki/tls precisely, to this bug report, from 2004 - I'm indebted to Ben Kahn for the reference.

Both references also indicate the previous location: /usr/share/ssl on Red Hat, /usr/lib/ssl on Debian. Indeed, the very first entry in the Fedora openssl changelog confirms this:

* Tue Oct 26 1999 Bernhard Rosenkränzer <bero@redhat.de>
- inital packaging
- changes from base:
  - Move /usr/local/ssl to /usr/share/ssl for FHS compliance
  - handle RPM_OPT_FLAGS

So Debian and Red Hat disagreed about where OPENSSLDIR should be at least since 1999 - from the first day Red Hat had a package for it. Debian's openssl package dates back to 1997 (as ssleay), so you could make an argument that Red Hat should've matched Debian's location (they'd already moved to /etc/ssl at the time RH's openssl package showed up), and our 16-years-later lives would have been a lot simpler. But they didn't and so here I am, picking through stone age spec files. BAD BAD Red Hat of 1999. You displease the monkey.

Bonus random Google reference: this book, published in 2003, indicates that SUSE was using /usr/share/ssl/certs as its trust directory at the time - so presumably using /usr/share/ssl, the same OPENSSLDIR as Red Hat, but shipping its system trusted certs in /usr/share/ssl/certs rather than as a /usr/share/ssl/cert.pem bundle.

Even more historical trivia: what the hell is the deal with Red Hat's certs/ dir anyway?

So doing all this digging made me start wondering...what the hell is the deal with OPENSSLDIR/certs on Red Hat anyway?

To recap - it's existed at least since openssl-0.9.5a-14, which is the oldest NEVR you can distinguish in the Fedora git repo. That's from before the ca-bundle.crt file was added to the package. It's not entirely clear why the directory was created in the first place; you can't see if it was there before Makefile.certificate or if it was created to put the Makefile in, and there's no comment or any context indicating whether it was created with regard to OpenSSL's expectation for such a directory as the 'default CA trust directory'. (I think there's an offline copy of the old CVS repos somewhere which may still let someone with access distinguish the first few commits to the package, but I'm not sure).

At the time, the only thing installed to the new /usr/share/ssl/certs was Makefile.certificate, installed as /usr/share/ssl/certs/Makefile, which still exists today. At the time it could create key pairs, CSRs, and self-signed certificates.

The ssl.conf file that was shipped in the mod_ssl package did not, at the time, expect to find the server certificate and key in /usr/share/ssl; it was configured to look in mod_ssl's own config dirs, under /etc/httpd/conf/.

I took a troll through the openssl 0.9.5a source, and near as I can tell, even back then, all it ever expected to find in OPENSSLDIR/certs was c_rehash-style individual certificate files. In fact, the whole SSL_CTX_set_default_verify_paths() complex was pretty much the same in 0.9.5a as it is today.

So far as I can tell, the first thing RH ever put in OPENSSLDIR/certs - Makefile.certificate - couldn't generate anything OpenSSL would expect to find there; it doesn't create files with the hash-style names, or call c_rehash. In fact, now I look at it closely, it was originally set up in its most simple invocations to output to the locations expected by mod_ssl: make genkey would create /etc/httpd/conf/ssl.key/server.key. So why the file was placed in OPENSSLDIR/certs in the first place seems obscure.
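For context, here's roughly what OpenSSL *does* expect a trust directory to look like - hash-named files or symlinks, which is what c_rehash automates. A minimal sketch using the openssl CLI and a throwaway self-signed certificate (all paths here are illustrative, not anything RH ever shipped):

```shell
# Build a c_rehash-style trust directory by hand. Assumes the openssl
# command-line tool; the cert and directory are throwaway illustrations.
dir=$(mktemp -d)

# generate a self-signed cert to play with
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=example" \
    -keyout "$dir/example.key" -out "$dir/example.pem" 2>/dev/null

# OpenSSL looks trusted certs up by a hash of the subject name, with a
# numeric suffix to handle collisions - this is what c_rehash automates
hash=$(openssl x509 -noout -subject_hash -in "$dir/example.pem")
ln -s example.pem "$dir/$hash.0"

ls -l "$dir"
```

A plain example.pem sitting in that directory without the hash-named link would simply be ignored by SSL_CTX_set_default_verify_paths() and friends - which is why a Makefile that doesn't produce hash-named output is an odd resident for OPENSSLDIR/certs.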

The Makefile seems to have been placed there in early 2000:

* Wed Mar  1 2000 Florian La Roche <Florian.LaRoche@redhat.de>
- Bero told me to move the Makefile into this package

I'd guess that before that, the directory existed but was empty.

So, my tentative conclusion is that no-one at RH ever really thought about what the directory was supposed to be for; they saw a directory called certs and thought, hey, that looks like as good a place as any to stick this Makefile. If Bero or Florian are still around and want to disagree with me, I'm right here :)

Finally, where the hell does ca-bundle.crt come from?

It turns up in the openssl package, in OPENSSLDIR/certs, in late 2000:

* Wed Oct 25 2000 Nalin Dahyabhai <nalin@redhat.com>
- add a ca-bundle file for packages like Samba to reference for CA certificates

and for a while I thought RH had simply invented the file out of whole cloth, but that turns out not to be the case. We nicked it from mod_ssl (official site is down as of 2018-09, linking to Wayback Machine), the oldest (I think?) SSL module for Apache (which is now part of the Apache codebase). According to the changelog in old mod_ssl tarballs, it invented the ca-bundle.crt file in 1998:

  Changes with mod_ssl 2.1b4 (08-Sep-1998 to 17-Sep-1998)

      3. A ssl.crt/ca-bundle.crt is now installed (but not enabled!) which
         contains all 33 CA root certificates of known public CAs.  They were
         extracted from Netscape Communicator 4.06 with my certbundle stuff.

I've had ever so much fun poking through old Red Hat Linux packages. In 7.0, we only shipped the mod_ssl copy, at /etc/httpd/conf/ssl.crt/ca-bundle.crt, but in 7.1, we added the file to openssl but did not drop it from mod_ssl, so both packages had copies. OpenSSL's copy had a Red Hat certificate appended to it; mod_ssl's did not. We didn't drop the mod_ssl copy until 8.0, by the looks of things.

It seems reasonable to infer that the file was copied (then moved) into the openssl package to make it available to things other than Apache without the need to install Apache (indeed there's an ancient samba bug indicating this). The filename was preserved from mod_ssl. But so far as the path goes, my guess is again, it was probably just put into OPENSSLDIR/certs because it looked like the right kinda name at the time. (Later, of course, the ca-certificates package was separated from the openssl package, and OPENSSLDIR changed from /usr/share/ssl to /etc/pki/tls, but OPENSSLDIR/certs/ca-bundle.crt has been there all along, just following these changes).

Well, maybe no-one else cares, but I feel weirdly satisfied at having tracked that down!

So far as other SSL libraries go...

NSS

NSS handles things fairly differently - it stores certs in a database, not directly as files. NSS stuff seems to usually work properly more or less 'out of the box', so I've less experience with messing with it.

GnuTLS

edit: I just spent like two hours researching GnuTLS. I demand gratitude.

Current (very current) GnuTLS can use a PEM bundle, a PKCS #11 module via p11-kit, or a directory as a trust store. It does not handle directories in the same way as OpenSSL; it simply throws every file it finds in the directory at its 'load from file' code, so it'll presumably read in both individual certificates and bundle files in the case of a hybrid directory like Debian's /etc/ssl/certs.

You use gnutls_x509_trust_list_add_trust_file() to load either a cert file/bundle or (a bit confusingly) a PKCS #11 URL into the trusted cert list. gnutls_x509_trust_list_add_trust_dir() is for directories.

GnuTLS introduced a 'system default trust store' concept on 2012-05-08 - specifically, with this commit. The first release with the feature was 3.0.20. At that point it could only be a PKCS #11 URL or a file; no directories allowed. The ability to use a directory was added with this commit on 2014-07-21, the first release with directory support was 3.3.6.

Since the default trust store feature was added, if a specific default trust store location was not passed at compile time, it has checked a few locations and used the first one that matched if any. At first it only checked for /etc/ssl/certs/ca-certificates.crt and /etc/pki/tls/cert.pem; on 2012-07-19 it was updated to also check for /etc/ssl/cert.pem and /usr/local/share/certs/ca-root-nss.crt (I'll have to look into that one...); and on 2013-06-01 it was updated to also look for /etc/ssl/ca-bundle.pem.

If you have a sufficiently new GnuTLS that was compiled with an appropriate parameter (or whose compile-time location guessing worked), the function gnutls_certificate_set_x509_system_trust() tells GnuTLS to load the certificates from the default trust store. gnutls-cli will try this by default, which is a useful way to check if it's working on a given platform; run gnutls-cli google.com and look at the top of the output to see whether there's a count of loaded certificates or a warning message. When loading a directory it sets the GNUTLS_TL_NO_DUPLICATES flag, so presumably it won't actually end up wasting effort storing duplicates.

If you parse all that crap out and look at when it landed in various distros...well, you can be pretty sure that there's no default trust store in absolutely any GnuTLS build older than 3.0.20, for a start. That's a freebie. (It doesn't look like any of the LTS / enterprise distros backports the feature to 2.x). Debian and Ubuntu are still shipping gnutls 2.x in current releases, alongside 3.x; I have not been able to figure out if they're shipping any packages built against it yet (still trying to figure out the magic commands). RHEL 6, nope. SLES 11, nope. The first Debian which included 3.x at all is Jessie; the first Ubuntu I'm not totally sure, but it was at least in 14.04. The first OpenSUSE with a sufficiently new build was 12.2, I believe.

But even if the build is new enough it either needs to have had a correct parameter passed at build time, or the location guessing needs to have worked (that's not only a question of whether the distro provides one of the guessed locations; the package that provides it must have been installed when the package was built).

Fedora 21, OpenSUSE 13.2, Ubuntu 14.10, and Debian since 3.2.4-2 (which I think is in sid and wheezy at least) build with an explicit trust store location passed to configure, so it seems pretty safe to assume that the default trust store function will work on those versions and newer of those distributions.

For some, it will work with earlier releases thanks to the 'guessing' functionality, but you really have to check individually to make sure whether this is the case. I've tested Fedora 20, and it does work there (probably using the guessed location /etc/pki/tls/cert.pem). I haven't tested older Ubuntu or OpenSUSE builds yet, or RHEL 7.

So for GnuTLS, I'd say the advice is broadly the same as for OpenSSL, but it's rather less likely that you can rely on the 'system default store' stuff working. It doesn't handle directories the same way as OpenSSL, but most directories that work as an OpenSSL store ought to work as a GnuTLS store, so you should be able to try loading the same 'common' locations with the appropriate functions. You'll probably want to set the GNUTLS_TL_NO_DUPLICATES flag to avoid unnecessary duplication, especially when using directories. Note that GnuTLS cannot handle OpenSSL's 'trusted certificate' format - the one with BEGIN TRUSTED CERTIFICATE - so neither bundles nor directories containing certificates of this type will work.

So, tl;dr: please don't assume any given location of trusted certs can be relied upon - just because it exists on your dev platform doesn't mean it exists for all your users. Go with the library's defaults, if possible. Otherwise, at least check the canonical locations for Fedora/RHEL (/etc/pki/tls/certs/ca-bundle.crt as a bundle file) and Debian (/etc/ssl/certs as a hashed directory, which will also work on OpenSUSE), not just one or the other. And allow a user-specified location, and handle it being either a bundle file or a hashed directory.
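To make that last bit concrete, here's a minimal sketch of the fallback logic - probe a user-specified location first, then the common distro paths, and classify whatever turns up. The function name and environment variable are made up for illustration; the paths are just the conventional ones discussed above:

```shell
# Sketch: find a usable trust store, preferring a user-specified
# location. SSL_CERT_LOCATION and probe_trust_store are hypothetical
# names, not anything any library actually defines.
probe_trust_store() {
    for loc in "$@"; do
        if [ -f "$loc" ]; then
            # a single file: treat it as a PEM bundle
            echo "bundle file: $loc"
            return 0
        elif [ -d "$loc" ]; then
            # a directory: treat it as a hashed cert directory
            echo "hashed directory: $loc"
            return 0
        fi
    done
    echo "no trust store found" >&2
    return 1
}

# user-specified location (if any) first, then the common defaults
probe_trust_store "$SSL_CERT_LOCATION" \
    /etc/pki/tls/certs/ca-bundle.crt /etc/ssl/certs
```

An application would then hand the winning location to the 'load from file' or 'load from directory' call of whichever SSL library it's using.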

(note: I'm not a security professional, just a QA monkey who keeps running into issues with this stuff; I think the above is all broadly correct, but corrections from distro and/or upstream SSL specialists are of course welcome!)

ownCloud 8.x preview packages for Fedora Rawhide

Oh hey there, good lookin' - are you on the hunt for some instability in your life? Well look no further, I've got just the thing for you.

I've got a very experimental ownCloud 8 git snapshot repo up, mainly for me to use in working on the OC 8 packages. At present it's only for Fedora Rawhide (22). There's some hackiness to it - ownCloud doesn't (so far as I can tell) post its build scripts anywhere, and it splits the stuff that's usually shipped in the tarballs across a bunch of git repos, so the source is actually a bunch of git checkouts sloppily piled into one tarball.

The repo has builds for versions of some dependencies I don't want to push to Rawhide at least until official OC 8 pre-releases are out, as well.

As of right now, OC 8 contains some technically non-free code (see previous post for lots more gory details on that). The package currently in the repo still uses the technically non-free JSMin, but it certainly won't go into official Fedora that way; I'm working with upstream to switch to a minifier with a non-problematic license, and they've said they'll take care of it for the 8.0 release.

The package doesn't have a changelog and I didn't bother changing the License: field to reflect the JSMin license.

I've put up an OC8 variant of one of my test deployment kickstarts. If you install from a Rawhide boot.iso with inst.ks=https://www.happyassassin.net/ks/oc/oc8-httpd-sqlite.ks you should wind up with a working (and hilariously insecure, so don't attach it to the public internet!) OC 8 deployment - just browse to http://(host)/owncloud and you'll be at the main interface, logged in as 'admin' with password 'admin' (system root password is '111111' - I told you it was insecure).

I was able to successfully upgrade a bare stock OC 7 + sqlite install to OC 8, but haven't tested with a populated instance or any other databases yet. I really, really, really, really, really heartily recommend you don't let this within fifty yards of production data, production systems, or a publicly accessible host.

The asset pipelining stuff - which concatenates and minifies OC's JS and CSS - isn't actually enabled by default, but if you want to try it out, you can edit /etc/owncloud/config.php and change the asset-pipeline.enabled setting to true. You will also need to configure Apache to allow access to the directory /var/lib/owncloud/assets in the usual way.

Adventures in PHP web asset minimization

When I've had time for 'side projects' lately I've mostly been working on preparing the ground for ownCloud 8 in Fedora, trying to get out ahead of upstream changes and package new library dependencies.

ownCloud's web asset minification

Today I wound up looking at OC's new web asset minification stuff. A few releases back OC delivered CSS and JavaScript itself and minified them in real time, using minifiers ripped out of the Mediawiki source code. That was pretty ugly.

In ownCloud 7 they switched to using Assetic for asset management, which is a lot better. It didn't do any minification at that point, though.

With the PR linked above, OC 8 does minification using Assetic filters. This means it grows some new dependencies on the minimization libraries that back the Assetic filters.

Not surprisingly, this being the world of PHP, there's some fun craziness here. Craziness follows - but if you're interested in a performance comparison of available PHP web asset minifiers, skip it and look down a bit further.

The filters ownCloud currently uses are Assetic's CssMinFilter and JSMinFilter. To back those, ownCloud's composer.json now pulls in mrclay/minify and natxet/CssMin.

natxet/CssMin

I started out by taking a look at natxet/CssMin, which isn't too horrible. It loses some points for not being sanely laid out (there's one source file with a whole bunch of classes in it and it just uses classmap autoloading...), but basically it minifies CSS and that's it. It was fairly simple to build a package. It claims to be just a github mirror of this 'cssmin' project, but in fact it's clearly being actively maintained and has developed on from that project; I'm treating it as its own project, now. It doesn't appear to be a port or rewrite of any other minifier.

mrclay/minify and its exciting assortment of minifiers

Then I looked at the JS minifier, though, and started to encounter the crazy. mrclay/minify looks, to my monkey eyes at least, like a fairly hairy ball of olde-worlde PHP craziness. It's not just a minification library, or even several minification libraries. In its own words, it's "an HTTP content server", i.e. it's sort of trying to do the same thing as Assetic, only it's a rather older and smaller implementation. But because choice is oh so tasty, it includes at least three CSS minifiers and two JS minifiers, and lets you pick whichever you like!

  • It has its own CSS minifier, Minify_CSS_Compressor.
  • It includes the github project YUI-CSS-compressor-PHP-port - which is, as the name suggests, a PHP port of the CSS bit of Yahoo's YUI Compressor, which is written in Java. This is named CSSmin.php.
  • It also, for no goddamn reason I can see, ships a different but far less complete port of the YUI Compressor, as lib/Minify/YUI/CssCompressor.php.

For JavaScript:

  • It includes JSMin.php. This (so far as I can figure out) originated as a port of Douglas Crockford's JSMin (which is written in C), by Ryan Grove. It lived at Google Code until it was moved for license reasons (see long bit below!) to GitHub, where it is now very definitely unmaintained. mrclay has continued to work on it in his tree since it was abandoned by Mr. Grove, so the mrclay/minify version is now a fork. Again see below for details, but briefly, this code is clearly non-free by Debian and Fedora standards, as it imposes a 'field-of-use' restriction.
  • It includes JSMinPlus.php. This is a copy of JSMin+, which is by all appearances abandoned by its author (though there were two years between the releases of 1.3 and 1.4, so who knows). Despite the name, JSMin+ is a completely different project from (anything else at all called) JSMin, it is not an unofficial successor nor is it based on JSMin (again, whatever you consider to be 'JSMin' exactly - the original code, or any of its various ports) in any way.
  • It includes lib/Minify/ClosureCompiler.php, which is a wrapper around the offline Java version of Google's Closure Compiler - you're expected to provide it with a copy of the CC .jar file.
  • Not at all confusingly it also provides lib/Minify/JS/ClosureCompiler.php, which is an entirely different way to use the Closure Compiler - this time via Google's public REST API for it.

When I say it 'includes' or 'has' something, of course, I don't mean it sensibly expresses a dependency on a copy of it, or anything. In classic fashion for the PHP 'Wild West' interregnum between the reigns of PEAR and Packagist/Composer, copies of the libraries are just dumped willy-nilly into minify's tree, not in any particularly obvious layout (in fact most of them appear to be PSR-0 compliant relative to lib/, but you have to poke about a bit to figure that out), without any kind of manifest to explain clearly what they all are, never mind what they're for.

Interlude: Adam is sad

So I was, obviously, rather reluctant to just package up the mrclay 'minify' for Fedora simply in order to satisfy ownCloud's need for a JavaScript minimizer. This kind of haphazardly bundled source tree is a goddamn nightmare to package in a guideline-compliant way just in general, and the redundancy was too absurd for me to swallow - if you're not keeping count, so far we've found that minify has three CSS minimizers, two JavaScript minimizers, and two different ways you can use yet a third JavaScript minifier it doesn't directly include. And don't forget that natxet\CssMin is not the same thing as minify's CSSmin.php (which, if you've forgotten, is tubalmartin's port of the YUI compressor). If I'd just gone ahead and packaged minify, I'd have been packaging six minifier libraries and eight minifier approaches in order to provide ownCloud with two minifiers. No. Not happening.

My next thought was to just package the actual minifier ownCloud uses, via Assetic's JSMinFilter, which is JSMin.php - not any of the other bits of minify. It looks like it'd pretty much lift right out (not surprisingly, as it was initially a third party lib that minify dumped into its tree, see above). At this point, though, I run into another rather larger problem which I mentioned briefly above.

JSMin, freedom, and the JSON License

JSMin.php is not free. This is because the original, Douglas Crockford-written JSMin is not free; no port or derivative of it is free either. It's licensed under the MIT license, but with an added clause:

"The Software shall be used for Good, not Evil."

There's a page where the story of JSMin being kicked off Google Code for this is related, which includes an excerpt from a talk the author gave where he plays the whole situation for laughs and talks about how ha ha funny it is that Google and IBM want him to allow them to use his software for evil, and that's all nice and cute, but the best legal advice available to Debian, Red Hat and (to my knowledge) all other authorities on software licensing is that such restrictions clearly make the software non-free. License wonks refer to this license as 'the JSON license' and it is explicitly listed as non-free by the FSF, Fedora, and Debian (it's hard to find a good reference for DFSG determinations, that's the best I could get).

So, I can't possibly have Fedora's OC package use that code regardless of how or where it comes from. Darn.

Yet more minifiers (no, of course we're not done yet)

So I started looking into alternatives, which (me being me) turned into a bit of a review of PHP web asset minifiers. Here are the ones I found beyond those listed above:

  • JSqueeze, a native PHP JS minifier
  • JShrink, a different native PHP JS minifier
  • matthiasmullie/minify, a native PHP JS and CSS minifier (not at all the same thing as mrclay/minify)

There may have been others, but I'm sort of losing the will to live at this point. As I was planning to suggest ownCloud switch away from JSMin, it seemed prudent to check out all the alternatives, so I hacked up a script for doing some quick and dirty benchmarking on minifiers. (If you want to try it, you'll have to download all the minifier libs and dump them in the directory with the script, dump some .css and .js files in the same directory, and edit CSSmin.php to rename the class to CSSmin1 so it doesn't conflict with CssMin.php - I told you it was dirty).

Benchmarking minifiers

I dumped all ownCloud's CSS and JS assets (minus translations) into a directory and wrote a thing which just calls each minifier on all the files for its asset class, saves the output with a unique extension, and tells me how long it took. Then I compared the sizes of the minified files. The contenders are:

  • CSS: mrclay/minify's own Minify_CSS_Compressor ('CSS Compressor'), tubalmartin's CSSmin port, natxet/CssMin, and Matthias Mullie's minify
  • JS: JSMin.php, JSMinPlus.php, JSqueeze (tested in both 'strong' and 'safe' modes), JShrink, and Matthias Mullie's minify

My CSS results are probably a bit questionable as OC has fairly little CSS, but here they are:

[adamw@adam jstest]$ php ./mintest.php 
CSSmin (tubal) time: 0.15763902664185
CSS Compressor time: 0.099493980407715
natxet time: 0.51680493354797
MM minifiy (CSS) time: 0.090332984924316
[adamw@adam jstest]$ for i in css compressor cssmin natxet mmmcss; do echo $i; du -c *.$i | grep -i total; done
css (original)
360 total
compressor (CSS Compressor)
344 total
cssmin (CSSmin tubal)
340 total
natxet (CssMin natxet)
344 total
mmmcss (MM minify)
344 total

Compressor and Matthias Mullie's minify are the fastest, tubalmartin's CSSmin is a bit slower, and natxet/CssMin is quite a lot slower...but in practice, for OC's purposes, they'd all run pretty much instantly - the 'slowest' takes half a second. They're all almost the same in file size - tubalmartin's CSSmin 'wins' by 4KB, but that's probably in the margin of error (which I don't know how to calculate cos I'm not a scientist), and they only save about 5% on the original size, not really worth crowing about.

The JS results are rather more interesting, as OC has a lot of JS.

JSMin time: 13.471320867538
JSMinPlus time: 15.368160009384
JSqueeze (strong) time: 6.8297760486603
JSqueeze (safe) time: 6.8371610641479
JShrink time: 11.497622013092
MM minifiy (JS) time: 19.345649003983

3364 total
2468 total
2408 total
2192 total
2220 total
2468 total
2464 total

I noticed that JSqueeze's global renaming feature seems to cause quite a lot of issues - half the bug reports in its github repo seem to be related to it - so it seemed prudent to test it both ways: the 'safe' version is simply $jsqueeze->squeeze(file_get_contents($filename), $specialVarRx=false) instead of $jsqueeze->squeeze(file_get_contents($filename)) (the 'strong' version).

JSqueeze rather beats the pants off everything else there - it's nearly twice as fast as the closest competitor, and does substantially better on file size, even in its 'safe' version.

edit 2014-12-30: After initially writing this post I managed to get a test instance of ownCloud git master up and running and tried plunking JSqueeze in in place of JSMin and, lo and behold...it had a bug.

edit 2015-01-01: p@tchwork fixed the bug mentioned in the previous edit; JSqueeze 2.0.1 works well with ownCloud in my testing. 2.0.1 also namespaces the class (which makes the layout a PSR-4 one), and disables the global renaming stuff by default (i.e. 'safe' not 'strong' is now the default). Also, Matthias Mullie substantially sped up his minifier: after that commit his is the very fastest for me, at 5.8328921794891 seconds. Its compression ratio still puts it in the group at the back of the pack, though.

What did we learn on the show tonight, Adam?

PHP developers, for the love of all that is freaking holy, can you all please just goddamn well sit down together and decide on one implementation of trivial things like asset minifiers, and just work on that? Yeeeeeeeeeeeeeeesh, people. And please don't dump your code's third party dependencies into its tree by hand, without an apparent plan or a manifest. At least use Composer, that gives us a fighting chance at figuring out what the hell you've got in there and splitting it out. And, you know, sit down and really think about whether your minifier blob needs seven different goddamned minifier implementations in different places, half of which are variants of each other...

All things considered for JS I'd recommend using JSqueeze, JShrink or Matthias' 'minify', in about that order - those three all seem to be actively maintained and following good current practices. Avoid JSMin due to the licensing issue and because it's not very actively maintained, and JSMinPlus as it doesn't seem to be maintained at all.

Fedora 22 (and later) nightly testing

Right around the release of Fedora 21, I talked to the anaconda team a bit about the idea of doing more organized early validation testing for Fedora 22. We later discussed and approved the idea at a QA meeting, and since then I've been working on the bits to make it all run smoothly.

We had an early prototype with the Fedora 21 nightly testing, which used pages created every month and asked folks to enter the date of the image they tested along with their report. But since then, I'd set up the template system for generating result pages, and written relval, reviving the testcase-stats output, and I thought it would probably make sense to change the approach a bit. It was impractical to produce new result pages very often with the old manual-cut-n-paste method - it was just too much of a pain - but it's very easy with relval, and having new pages regularly (but not too regularly) results in better testcase-stats output and an easier experience for testers.

So after quite a bit of work on relval and the wiki templates, as of last night, we should have a completely automated process in place for 'nominating' nightly composes for testing - creating result pages for them, and sending an announcement mail to test-announce. Possibly starting tomorrow (it depends on some checks), you might see the first fully-automated nightly compose nomination, with a subject something like Fedora 22 20141225 nightly compose nominated for testing.

Every few days - never more often than every three days, and rarely if ever less often than every 14 - a new nightly compose will be nominated, the full set of result pages will be created, and the announcement mail will go out. The results pages should provide accurate instructions and download links for testing the nightly composes (right now there are direct links to network install images, and Koji searches for lives and disk images - providing direct links to those would be a bit messy, though not impossible; some improvements to Koji's web UI might help). The Fedora 22 testcase-stats pages are being updated every hour, to provide an up-to-date overview of the current test coverage. Remember, you can report results with relval report-results these days, and you may find it a bit more convenient than editing the wiki (although it doesn't yet allow you to add comments - I'll add that in a later version). Please use the latest relval, 1.8.3 as I write this, for reporting results against nightly composes.

Of course, there's no strict requirement for the amount of nightly testing that gets done. The only strict requirement in the validation test plan is that we get full test coverage for the release candidates. The idea of the nightly testing is we try our best to cover as much testing as we can, which will help identify blocker bugs earlier and help the developers - particularly the installer developers - space out the work on fixing them, and reduce the amount of churn we experience trying to fix blocker bugs during the TC/RC windows. The same pay-off applies for testers - we can spread out the work of identifying bugs across the cycle instead of starting at Alpha TC1.

As mentioned, this idea started in discussion with the anaconda folks, but it may well be of interest to all the WGs. I hope the WGs will take advantage of the opportunity to help test their bits as early as possible and get things into shape early!