Good morning: bugfixing and thinking about .next

Well, this has been one of the best mornings I've had for a while.

I spent a couple of hours tracking down a rather subtle issue between OwnCloud, Google's PHP client library, and Google itself - required me to bump up my PHP skills further and learn/remember some stuff about HTML entities, so yay self-improvement!

But far cooler than that, some discussion on the desktop@ list led me to a bit of a personal revelation about .next.

I'm really hoping I'm right about all this, as I've hated all along being something of a .next sceptic. I like doing big cool new things; I like getting behind them and pushing. I just couldn't really feel the shape of this big new thing at all, and was worried about where it was going to wind up. I think there are probably a lot of other Fedorans (Fedorites?) and interested onlookers in the same position, so - with the caveat that this is just my own personal interpretation I cooked up this morning, though I really hope it lines up with how the Relevant Authorities see things - here's the way of looking at .next that I think might make it look much less worrying and much more exciting.

.next is about providing some particular new spaces within the whole Fedora project where we can add on some interesting and exciting new capabilities. It's not really about reducing the 'possibility space' within the whole Fedora project or distribution at all.

Let's take the Fedora Workstation "product" as an example. Let's say that what the "Workstation product" really is, is a way of marking off one particular space within the larger space of the Fedora project as a whole. You can choose to "paint within the lines" and be running "the Fedora Workstation product", and in exchange for you doing that, we can offer you some exciting new capabilities and possibilities that weren't really viable before.

Take, for instance, the question of third-party software, a popular bugbear for Linux. One of the problems for third-party software distributors is that it's quite hard for them to choose a target to write to. If they decide they want their software to run on Fedora, what does that mean? Let's say Alice has every one of the ten thousand packages in our repositories installed, and Bob has just done a minimal install. Both Alice and Bob are "running Fedora".

So the only way for a third-party distributor to have any certainty about the platform is to use the distribution's packaging system, but then they have to understand the distribution's packaging system, and that's something that a) they might not want to spend the time doing, and b) they frequently get wrong.

If we define "Fedora Workstation" as a particular space within the Fedora project, though, it goes a long way to helping with this problem. We can say that, to be considered as running "Fedora Workstation", you have to have a particular set of packages installed - and maybe there are other requirements to do with configuration, etc. This is all stuff we're still sorting out, but the concept is the key here. So a third party can write to that platform, and put their software out and say it "works on Fedora Workstation". The Workstation product has provided an extra possibility that just wasn't feasible before.
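To make that a bit more concrete, here's a purely hypothetical sketch of what "checking you're inside the lines" could look like. The package names and the idea of a single fixed required set are entirely invented for illustration - as the text says, the real requirements were still being sorted out:

```python
# Hypothetical: a "Workstation" definition as a required package set.
# The contents of this set are invented for illustration only.
WORKSTATION_SET = {"gnome-shell", "gnome-software", "firefox"}

def is_workstation(installed_packages):
    """True if every package in the (hypothetical) Workstation set is present."""
    return WORKSTATION_SET <= set(installed_packages)

# A third party could then report exactly what's missing, rather than
# guessing what "running Fedora" means on this particular machine.
missing = WORKSTATION_SET - {"gnome-shell", "firefox"}
```

The point of the sketch is just that a named, checkable platform definition turns "does my software run on Fedora?" from an unanswerable question into a set-membership test.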

This is only one example, and there are many others. It's even more obvious for, say, the Fedora Server product. We can say that to be running Fedora Server you have to have installed a set of packages that provides the kind of "role configuration" mechanism we've been discussing, and use it to configure your server roles. In exchange for that requirement, we can provide you with some cool new guarantees that just aren't viable without this Product mechanism - we'll handle migrations to a new, non-backward-compatible version of some component of your server role cleanly, for instance.

So the Products provide new spaces within the existing Fedora project and, precisely by restricting your 'choice' in certain regards - but only so long as you want to be considered to be running a Product - open up new possibilities in return.

The second key realization, for me anyway, is that none of this needs to make things worse for people who aren't running Products. We can keep the concept of "just running Fedora". Nothing about the Products requires us to throw that away, or compromise it significantly. We may find issues here and there where we have to resolve some kind of tension between the Product space and the non-Product space, and have a big argument about it on devel@, and throw rotten fruit at each other. It wouldn't be Fedora if that kind of thing didn't happen. But that's not really new, we already have that kind of tension between the requirements of different use cases - we just don't even have any good mechanism for codifying particular use cases at present, so it's very difficult to do a good and consistent job of reconciling the differing requirements. The Product concept actually gives us the potential to do this better.

So, say you want to use Fedora, but you find some of the requirements of the "Fedora Workstation" product are not things you want - say it requires GTK+ or GNOME Software to be installed, or something, and you don't want that.

You know what? That's fine. So far as I can see, no-one is really suggesting that we need to take away the whole Fedora 'possibility space' as it currently exists. We can still provide generic installation media that let you install any package set you like. We can discuss Spins and whether we want to change that system at all, but we don't really have to change it, and nothing about the Products plan requires us to do anything horrible to spins - obviously we might choose to feature Products more heavily on the download page and in PR stuff than spins, but that's really just marketing. The current devel@ discussion about Spins was kicked off in a slightly unfortunate way, with a bit too heavy an emphasis on some radical scenarios, but that really doesn't seem to have been intentional. We've been talking about the Spins system not being perfect for years, long before the Products idea ever came along - it's not really that worrying that we're talking about it again.

It's just the same for servers. You want to run a server box on Fedora, but the "Fedora Server" product's approach doesn't work for you? That's fine. You can still go ahead and install from a generic medium and build your server configuration out from the Fedora packages, just like you do now, and there's no reason we can't continue to make that experience work as well (or, y'know, badly :>) as it currently does.

We can still provide all the expectations we currently provide for the non-Product space. I don't see anything in the Product space that has to conflict with it. We can ship generic media, and ensure they work, and pledge that we'll do our best to provide a coherent, up-to-date and secure 'Fedora distribution' space, just as we do right now. But we can also mark off particular areas within that space, call them Products, and say "if you stay inside these lines, we'll give you these extra capabilities in exchange". It's optional, and we don't have to make your experience any worse if you choose not to do that.

Sure, there's still lots of stuff to discuss and decide about what media will be shipped by what groups, how they'll be promoted, what QA guarantees the project as a whole is going to provide for what situations, and so on and so forth. Those are all significant issues that I'm sure I'll have an opinion on! But if my interpretation of all this lines up with the folks working hard to make the Products plan a reality, I'm feeling way more optimistic now about the fundamental concept. I think it gives us a big big chance to make things better, and only a small small chance to make things worse.

Coming in Fedora 21: more installer partitioning improvements

So I've been running some installs with Rawhide images lately, and the anaconda team has already landed some really nice changes to the partitioning workflow which I think will make some of those unhappy with newUI...less unhappy. So here, take a look at them!

Modified disk selection screen, with partitioning choices

Here's the first shiny new change: this is the 'disk selection' screen, the first one you see after clicking Installation Destination. You can see that we've merged in most of the choices that used to be on the Installation Options pop-up after you finished this screen.

So what happens?

If you pick Automatically configure partitioning, don't check the I would like to make additional space available box, and you have sufficient space on your chosen disks, then when you click Done you're really done: we'll go ahead and configure automatic partitioning to the free space on your selected disks.

If you do the same but you do check I would like to make additional space available, you get the Reclaim Space dialog as usual, where you can delete or shrink existing partitions. If you don't check I would like to make additional space available, but you don't have sufficient space for an install, you'll see a simplified version of Installation Options which just advises you of the problem and offers to let you go to Reclaim Space, or cancel back out and maybe change your software selection or pick a different disk.

If you pick I will configure partitioning, you're now on the highway straight to custom partitioning: I would like to make additional space available gets greyed out as it's a no-op (you have complete control in the custom partitioning screen), and when you click Done, you'll be taken straight to the custom partitioning screen.
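If it helps to see the whole workflow at once, here's the branching logic as I understand it, sketched as a little Python function. This is my own paraphrase for illustration - it is not anaconda's actual code, and the return strings are made up:

```python
def next_step(auto_part, reclaim_checked, enough_space):
    """Where clicking Done on the disk selection screen takes you.

    auto_part:       True for "Automatically configure partitioning"
    reclaim_checked: True if "I would like to make additional space
                     available" is checked
    enough_space:    True if the chosen disks have room for the install
    """
    if not auto_part:
        # "I will configure partitioning": the reclaim checkbox is greyed
        # out (it's a no-op), and Done goes straight to custom partitioning.
        return "custom partitioning"
    if reclaim_checked:
        # Reclaim was explicitly requested, so you always get the dialog.
        return "reclaim space dialog"
    if enough_space:
        # Done really means done: automatic partitioning proceeds.
        return "done - automatic partitioning proceeds"
    # Auto partitioning, no reclaim requested, but not enough space:
    # a simplified Installation Options screen warns you and offers
    # Reclaim Space or cancelling back out.
    return "simplified installation options"
```

The nice property this makes visible is that every path either finishes immediately or lands you in exactly one follow-up screen - there's no longer an intermediate pop-up on the happy path.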

I'm a big fan of this change - it keeps the basic workflow and all the choices you previously had, while making the whole thing more efficient, and solving the problem we had where people were looking at this screen and wondering where the partitioning choices are. I know some people hate that the button is labelled Done, and we're still considering whether to do something about that, but hopefully the fact that the partitioning strategy choices are right here on this screen now makes it obvious what's going on.

But wait! There's more!

Error checking in custom partitioning screen

This one I'm a really big fan of. Previously, if you did something wrong in custom partitioning and managed to come up with a layout we thought was invalid - which happens rather more than it used to, now that people are grappling with the ins and outs of UEFI - what happened was a bit baroque. The custom partitioning spoke would let you out without a complaint, but when you landed back at the hub, you'd see that the Installation Destination spoke was in an error condition, saying "Error checking storage configuration" - but it still didn't tell you what the error was. You had to click to go back into the spoke, then click on an orange bar at the bottom of the screen, and then it told you what the error was. It all seemed a bit like Where's Waldo. In theoretical terms it all made sense - the idea was that error checking was done async and always indicated through the spoke status - but it felt rather weird in this particular case.

Now, the storage validity check is run when you click Done on the custom partitioning screen, and if it hits a warning or an error, it pops up an orange bar right away - just as you see in the screenshot. You can click on the orange bar to see the errors right there, without all the running around in circles.

Now, ten points to anyone who can guess what the intentional mistake here was without looking at the next screenshot...

...yep, that "test_uefi" in the window title is the giveaway. This is a UEFI virtual machine, and if you didn't fall asleep halfway through that last blog post, you'll no doubt have noticed that the layout I created is missing an EFI system partition.

Now to toot my own horn a bit...if you've ever walked into this bear trap yourself, you may remember that the error message Fedora gave you (once you found it) was a bit...unhelpful. It said "you have not created a bootloader stage1 target device". This was technically correct, but pretty difficult to interpret.

Let's take a look at what it says now! Here's what you see if you fail the custom partitioning test and click on the error bar:

Rather more helpful missing ESP error

That's a bit better, right? Of course, this error is UEFI-specific - you only see it if you're doing a UEFI install. We can write specific error messages for each of the other 'platforms' anaconda recognizes too, explaining the most likely cause of stage1 bootloader failure on those platforms.

The fact that anaconda isn't visible in the background is some kind of Rawhide bug I need to get around to investigating - whenever some kind of dialog pops up over anaconda in current Rawhide images, anaconda itself disappears until you close the dialog.

There's also some cool work going on with the keyboard spoke right now, and all sorts of other things. I'm getting excited for Fedora 21 already!

UEFI boot: how does that actually work, then?

It's AdamW Essay Time again! If you're looking for something short and snappy, look elsewhere.

Kamil Paral kindly informs me I'm a chronic sufferer of Graphomania. Always nice to know what's wrong with you.

IMPORTANT NOTE TO INDUSTRY FOLKS: This blog post is aimed at regular everyday folks; it's intended to dispel a few common myths and help regular people understand UEFI a bit better. It is not a low-level fully detailed and 100% technically accurate explanation, and I'm not a professional firmware engineer or anything like that. If you're actually building an operating system or hardware or something, please don't rely on my simplified explanations or ask me for help; I'm just an idiot on the internet. If you're doing that kind of thing and you have money, join the UEFI Forum or ask your suppliers or check your reference implementation or whatever. If you don't have money, try asking your peers with more experience, nicely. END IMPORTANT NOTE

You've probably read a lot of stuff on the internet about UEFI. Here is something important you should understand: 95% of it was probably garbage. If you think you know about UEFI, and you derived your knowledge anywhere other than the UEFI specifications, mjg59's blog or one of a few other vaguely reliable locations/people - Rod Smith, or Peter Jones, or Chris Murphy, or the documentation of the relatively few OSes whose developers actually know what the hell they're doing with UEFI - what you think you know is likely a toxic mix of misunderstandings, misconceptions, half-truths, propaganda and downright lies. So you should probably forget it all.

Good, now we've got that out of the way. What I mostly want to talk about is bootloading, because that's the bit of firmware that matters most to most people, and the bit news sites are always banging on about and wildly misunderstanding.


First, let's get some terminology out of the way. Both BIOS and UEFI are types of firmware for computers. BIOS-style firmware is (mostly) only ever found on IBM PC compatible computers. UEFI is meant to be more generic, and can be found on systems which are not in the 'IBM PC compatible' class.

You do not have a 'UEFI BIOS'. No-one has a 'UEFI BIOS'. Please don't ever say 'UEFI BIOS'. BIOS is not a generic term for all PC firmware, it is a particular type of PC firmware. Your computer has a firmware. If it's an IBM PC compatible computer, it's almost certainly either a BIOS or a UEFI firmware. If you're running Coreboot, congratulations, Mr./Ms. Exception. You may be proud of yourself.

Secure Boot is not the same thing as UEFI. Do not ever use those terms interchangeably. Secure Boot is a single effectively optional element of the UEFI specification, which was added in version 2.2 of the UEFI specification. We will talk about precisely what it is later, but for now, just remember it is not the same thing as UEFI. You need to understand what Secure Boot is, and what UEFI is, and which of the two you are actually talking about at any given time. We'll talk about UEFI first, and then we'll talk about Secure Boot as an 'extension' to UEFI, because that's basically what it is.

Bonus Historical Note: UEFI was not invented by, is not controlled by, and has never been controlled by Microsoft. Its predecessor and basis, EFI, was developed and published by Intel. UEFI is managed by the UEFI Forum. Microsoft is a member of the UEFI Forum. So is Red Hat, and so is Apple, and so is just about every major PC manufacturer, Intel (obviously), AMD, and a laundry list of other major and minor hardware, software and firmware companies and organizations. It is a broad consensus specification, with all the messiness that entails, some of which we'll talk about specifically later. It is no one company's Evil Vehicle Of Evilness.


If you really want to understand UEFI, it's a really good idea to go and read the UEFI specification. You can do this. It's very easy. You don't have to pay anyone any money. I am not going to tell you that reading it will be the most fun you've ever had, because it won't. But it won't be a waste of your time. You can find it right here on the official UEFI site. You have to check a couple of boxes, but you are not signing your soul away to Satan, or anything. It's fine. As I write this, the current version of the spec is 2.4 Errata A, and that's the version this post is written with regard to.

There is no BIOS specification. BIOS is a de facto standard - it works the way it worked on actual IBM PCs, in the 1980s. That's kind of one of the reasons UEFI exists.

Now, to keep things simple, let's consider two worlds. One is the world of IBM PC compatible computers - hereafter referred to just as PCs - before UEFI and GPT (we'll come to GPT) existed. This is the world a lot of you are probably familiar with and may understand quite well. Let's talk about how booting works on PCs with BIOS firmware.

BIOS booting

It works, in fact, in a very, very simple way. On your bog-standard old-skool BIOS PC, you have one or more disks which have an MBR. The MBR is another de facto standard; basically, the very start of the disk describes the partitions on the disk in a particular format, and contains a 'boot loader', a very small piece of code that a BIOS firmware knows how to execute, whose job it is to boot the operating system(s). (Modern bootloaders frequently are much bigger than can be contained in the MBR space and have to use a multi-stage design where the bit in the MBR just knows how to load the next stage from somewhere else, but that's not important to us right now).
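If you're curious, the MBR layout is simple enough to poke at in a few lines of Python. This toy builds a fake 512-byte MBR in memory and parses it back - the partition table lives at offset 446, each of the four entries is 16 bytes, and the sector ends with the 0x55AA boot signature. (This models the classic layout as commonly documented; CHS fields are left zeroed for simplicity.)

```python
import struct

# Build a fake MBR: 446 bytes of (empty) bootloader code, four 16-byte
# partition entries, and the 2-byte boot signature at offset 510.
mbr = bytearray(512)
mbr[510:512] = b"\x55\xaa"                          # boot signature
mbr[446] = 0x80                                     # entry 0: bootable flag
mbr[446 + 4] = 0x83                                 # type 0x83 = Linux
mbr[446 + 8:446 + 12] = struct.pack("<I", 2048)     # starting LBA
mbr[446 + 12:446 + 16] = struct.pack("<I", 409600)  # sector count

def parse_mbr(data):
    """Return the non-empty partition entries from a 512-byte MBR sector."""
    assert data[510:512] == b"\x55\xaa", "missing MBR boot signature"
    parts = []
    for i in range(4):
        entry = data[446 + 16 * i: 446 + 16 * (i + 1)]
        if entry[4] == 0:       # type 0 means an empty slot
            continue
        start, count = struct.unpack("<II", entry[8:16])
        parts.append({"bootable": entry[0] == 0x80, "type": entry[4],
                      "start_lba": start, "sectors": count})
    return parts

parts = parse_mbr(bytes(mbr))
```

Note how little room there is: everything the firmware knows about booting this disk has to fit in those first 446 bytes.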

All a BIOS firmware knows, in the context of booting the system, is what disks the system contains. You, the owner of this BIOS-based computer, can tell the BIOS firmware which disk you want it to boot the system from. The firmware has no knowledge of anything beyond that. It executes the bootloader it finds in the MBR of the specified disk, and that's it. The firmware is no longer involved in booting.

In the BIOS world, absolutely all forms of multi-booting are handled above the firmware layer. The firmware layer doesn't really know what a bootloader is, or what an operating system is. Hell, it doesn't know what a partition is. All it can do is run the boot loader from a disk's MBR. You also cannot configure the boot process from outside of the firmware.

UEFI booting: background

OK, so we have our background, the BIOS world. Now let's look at how booting works on a UEFI system. Even if you don't grasp the details of this post, grasp this: it is completely different. Completely and utterly different from how BIOS booting works. You cannot apply any of your understanding of BIOS booting to native UEFI booting. You cannot make a little tweak to a system designed for the world of BIOS booting and apply it to native UEFI booting. You need to understand that it is a completely different world.

Here's another important thing to understand: many UEFI firmwares implement some kind of BIOS compatibility mode, sometimes referred to as a CSM. Many UEFI firmwares can boot a system just like a BIOS firmware would - they can look for an MBR on a disk, and execute the boot loader from that MBR, and leave everything subsequently up to that bootloader. People sometimes incorrectly refer to using this feature as 'disabling UEFI', which is linguistically nonsensical. You cannot 'disable' your system's firmware. It's just a stupid term. Don't use it, but understand what people really mean when they say it. They are talking about using a UEFI firmware's ability to boot the system 'BIOS-style' rather than native UEFI style.

What I'm going to describe is native UEFI booting. If you have a UEFI-based system whose firmware has the BIOS compatibility feature, and you decide to use it, and you apply this decision consistently, then as far as booting is concerned, you can pretend your system is BIOS-based, and just do everything the way you did with BIOS-style booting. If you're going to do this, though, just make sure you do apply it consistently. I really can't recommend strongly enough that you do not attempt to mix UEFI-native and BIOS-compatible booting of permanently-installed operating systems on the same computer, and especially not on the same disk. It is a terrible terrible idea and will cause you heartache and pain. If you decide to do it, don't come crying to me.

For the sake of sanity, I am going to assume the use of disks with a GPT partition table, and EFI FAT32 EFI system partitions. Depending on how deep you're going to dive into this stuff you may find out that it's not strictly speaking the case that you can always assume you'll be dealing with GPT disks and EFI FAT32 ESPs when dealing with UEFI native boot, but the UEFI specification is quite strongly tied to GPT disks and EFI FAT32 ESPs, and this is what you'll be dealing with in 99% of cases. Unless you're dealing with Macs, and quite frankly, screw Macs.

Edit note: the following sections (up to Implications and Complications) were heavily revised on 2014-01-26, a few hours after the initial version of this post went up, based on feedback from Peter Jones. Consider this to be v2.0 of the post. An earlier version was written in a somewhat less accurate and more confusing way.

UEFI native booting: how it actually works - background

OK, with that out of the way, let's get to the meat. This is how native UEFI booting actually works. It's probably helpful to go into this with a bit of high-level background.

UEFI provides much more infrastructure at the firmware level for handling system boot. It's nowhere near as simple as BIOS. Unlike BIOS, UEFI certainly does understand, to varying degrees, the concepts of 'disk partitions' and 'bootloaders' and 'operating systems'.

You can sort of look at the BIOS boot process, and look at the UEFI process, and see how the UEFI process extends various bits to address specific problems.

The BIOS/MBR approach to finding the bootloader is pretty janky, when you think about it. It's very 'special sauce': this particular tiny space at the front of the disk contains magic code that only really makes much sense to the system firmware and special utilities for writing it. There are several problems with this approach.

  • It's inconvenient to deal with - you need special utilities to write the MBR, and just about the only way to find out what's in one is to dd the contents out and examine them.
  • As noted above, the MBR itself is not big enough for many modern bootloaders. What they do is install a small part of themselves to the MBR proper, and the rest to the empty space on the disk between where the conventional MBR ends and the first partition begins. There's a rather big problem with this (well, the whole design is a big problem, but never mind), which is that there's no reliable convention for where the first partition should begin, so it's difficult to be sure there'll be enough space. One thing you usually can rely on is that there won't be enough space for some bootloader configurations.
  • The design doesn't provide any standardized layer or mechanism for selecting boot targets other than disks...but people want to select boot targets other than disks. i.e. they want to have multiple bootable 'things' - usually operating systems - per disk. The only way to do this, in the BIOS/MBR world, is for the bootloaders to handle it; but there's no widely accepted convention for the right way to do this. There are many many different approaches, none of which is particularly interoperable with any of the others, none of which is a widely accepted standard or convention, and it's very difficult to write tooling at the OS / OS installation layer that handles multiboot cleanly. It's just a very messy design.
  • The design doesn't provide a standard way of booting from anything except disks. We're not going to really talk about that in this article, but just be aware it's another advantage of UEFI booting: it provides a standard way for booting from, for instance, a remote server.
  • There's no mechanism for levels above the firmware to configure the firmware's boot behaviour.

So you can imagine the UEFI Elves sitting around and considering this problem, and coming up with a solution. Instead of the firmware only knowing about disks and one 'magic' location per disk where bootloader code might reside, UEFI has much more infrastructure at the firmware layer for handling boot loading. Let's look at all the things it defines that are relevant here.

EFI executables

The UEFI spec defines an executable format and requires all UEFI firmwares be capable of executing code in this format. When you write a bootloader for native UEFI, you write in this format. This is pretty simple and straightforward, and doesn't need any further explanation: it's just a Good Thing that we now have a firmware specification which actually defines a common format for code the firmware can execute.

The GPT (GUID partition table) format

The GUID Partition Table format is very much tied in with the UEFI specification, and again, this isn't something particularly complex or in need of much explanation, it's just a good bit of groundwork the spec provides. GPT is just a standard for doing partition tables - the information at the start of a disk that defines what partitions that disk contains. It's a better standard for doing this than MBR/'MS-DOS' partition tables were in many ways, and the UEFI spec requires that UEFI-compliant firmwares be capable of interpreting GPT (it also requires them to be capable of interpreting MBR, for backwards compatibility). All of this is useful groundwork: what's going on here is the spec is establishing certain capabilities that everything above the firmware layer can rely on the firmware to have.
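One concrete detail, if you ever want to recognise a GPT disk yourself: the GPT header, stored at LBA 1, begins with the 8-byte ASCII signature "EFI PART". A toy sketch, fabricating a minimal header in memory rather than reading a real disk:

```python
import struct

# Minimal fake GPT header: the standard header is 92 bytes and starts with
# the ASCII signature "EFI PART", followed by the revision (1.0 is encoded
# as the 32-bit little-endian value 0x00010000).
header = bytearray(92)
header[0:8] = b"EFI PART"
header[8:12] = struct.pack("<I", 0x00010000)

def looks_like_gpt(data):
    """True if this block of data starts with the GPT header signature."""
    return data[0:8] == b"EFI PART"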

EFI system partitions

I only really wrapped my head around the EFI system partition concept while revising this post, and it was a great 'aha!' moment. Really, the concept of 'EFI system partitions' is just an answer to the problem of the 'special sauce' MBR space. The concept of some undefined amount of empty space at the start of a disk being 'where bootloader code lives' is a pretty crappy design, as we saw above. EFI system partitions are just UEFI's solution to that.

The solution is this: we require the firmware layer to be capable of reading some specific types of filesystem. The UEFI spec requires that compliant firmwares be capable of reading the FAT12, FAT16 and FAT32 variants of the FAT format, in essence. In fact what it does is codify a particular interpretation of those formats as they existed at the point UEFI was accepted, and say that UEFI compliant firmwares must be capable of reading those formats. As the spec puts it:

"The file system supported by the Extensible Firmware Interface is based on the FAT file system. EFI defines a specific version of FAT that is explicitly documented and testable. Conformance to the EFI specification and its associate reference documents is the only definition of FAT that needs to be implemented to support EFI. To differentiate the EFI file system from pure FAT, a new partition file system type has been defined."

An 'EFI system partition' is really just any partition formatted with one of the UEFI spec-defined variants of FAT and given a specific GPT partition type to help the firmware find it. And the purpose of this is just as described above: allow everyone to rely on the fact that the firmware layer will definitely be able to read data from a pretty 'normal' disk partition. Hopefully it's clear why this is a better design: instead of having to write bootloader code to the 'magic' space at the start of an MBR disk, operating systems and so on can just create, format and mount partitions in a widely understood format and put bootloader code and anything else that they might want the firmware to read there.

The whole ESP thing seemed a bit bizarre and confusing to me at first, so I hope this section explains why it's actually a very sensible idea and a good design - the bizarre and confusing thing is really the BIOS/MBR design, where the only way for you to write something from the OS layer that you knew the firmware layer could consume was to write it into some (but you didn't know how much) Magic Space at the start of a disk, a convention which isn't actually codified anywhere. That really isn't a very sensible or understandable design, if you step back and take a look at it.

As we'll note later, the UEFI spec tends to take a 'you must at least do these things' approach - it rarely prohibits firmwares from doing anything else. It's not against the spec to write a firmware that can execute code in other formats, read other types of partition table, and read partitions formatted with filesystems other than the UEFI variants of FAT. But a UEFI compliant firmware must at least do all these things, so if you are writing an OS or something else that you want to run on any UEFI compliant firmware, this is why the EFI system partition concept is so important: it gives you (at least in theory) 100% confidence that you can put an EFI executable on a partition formatted with the UEFI FAT implementation and the correct GPT partition type, and the system firmware will be able to read it. This is the thing you can take to the bank, like 'the firmware will be able to execute some bootloader code I put in the MBR space' was in the BIOS world.
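As a concrete aside: the "correct GPT partition type" that marks an ESP is the GUID c12a7328-f81f-11d2-ba4b-00a0c93ec93b. GPT stores the first three fields of a GUID little-endian on disk, a conversion Python's uuid module happens to handle directly - a toy sketch:

```python
import uuid

# The GPT partition type GUID for an EFI system partition.
ESP_TYPE = uuid.UUID("c12a7328-f81f-11d2-ba4b-00a0c93ec93b")

# The same GUID as it appears in a GPT partition entry on disk: the first
# three fields are little-endian, the remaining 8 bytes are as-is.
on_disk = bytes([0x28, 0x73, 0x2A, 0xC1, 0x1F, 0xF8, 0xD2, 0x11,
                 0xBA, 0x4B, 0x00, 0xA0, 0xC9, 0x3E, 0xC9, 0x3B])

def is_esp(type_guid_bytes):
    """True if a GPT entry's 16 type-GUID bytes mark an EFI system partition."""
    return uuid.UUID(bytes_le=type_guid_bytes) == ESP_TYPE
```

So "create an ESP" really just means: make a partition, format it with the UEFI FAT variant, and stamp it with this type GUID so the firmware can find it.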

So now we have three important bits of groundwork the UEFI spec provides: thanks to these requirements, any other layer can confidently rely on the fact that the firmware:

  • Can read a partition table
  • Can access files in some specific filesystems
  • Can execute code in a particular format

This is much more than you can rely on a BIOS firmware being capable of. However, in order to complete the vision of a firmware layer that can handle booting multiple targets - not just disks - we need one more bit of groundwork: there needs to be a mechanism by which the firmware finds the various possible boot targets, and a way to configure it.

The UEFI boot manager

The UEFI spec defines something called the UEFI boot manager. (Linux distributions contain a tool called efibootmgr which is used to manipulate the configuration of the UEFI boot manager). As a sample of what you can expect to find if you do read the UEFI spec, it defines the UEFI boot manager thusly:

"The UEFI boot manager is a firmware policy engine that can be configured by modifying architecturally defined global NVRAM variables. The boot manager will attempt to load UEFI drivers and UEFI applications (including UEFI OS boot loaders) in an order defined by the global NVRAM variables."

Well, that's that cleared up, let's move on. ;) No, not really. Let's translate that to Human. With only a reasonable degree of simplification, you can think of the UEFI boot manager as being a boot menu. With a BIOS firmware, your firmware level 'boot menu' is, necessarily, the disks connected to the system at boot time - no more, no less. This is not true with a UEFI firmware.

The UEFI boot manager can be configured - simply put, you can add and remove entries from the 'boot menu'. The firmware can also (in fact the spec requires it to, in various cases) effectively 'generate' entries in this boot menu, according to the disks attached to the system and possibly some firmware configuration settings. It can also be examined - you can look at what's in it.

One rather great thing UEFI provides is a mechanism for doing this from other layers: you can configure the system boot behaviour from a booted operating system. You can do all this by using the efibootmgr tool, once you have Linux booted via UEFI somehow. There are Windows tools for it too, but I'm not terribly familiar with them. Let's have a look at some typical efibootmgr output:

[root@system directory]# efibootmgr -v
BootCurrent: 0002
Timeout: 3 seconds
BootOrder: 0003,0002,0000,0004
Boot0000* CD/DVD Drive  BIOS(3,0,00)
Boot0001* Hard Drive    HD(2,0,00)
Boot0002* Fedora        HD(1,800,61800,6d98f360-cb3e-4727-8fed-5ce0c040365d)File(\EFI\fedora\grubx64.efi)
Boot0003* opensuse      HD(1,800,61800,6d98f360-cb3e-4727-8fed-5ce0c040365d)File(\EFI\opensuse\grubx64.efi)
Boot0004* Hard Drive    BIOS(2,0,00)P0: ST1500DM003-9YN16G        .
[root@system directory]#

This is a nice clean example I stole and slightly tweaked from the Fedora forums. We can see a few things going on here.

The first line tells you which of the 'boot menu' entries you are currently booted from. The second is pretty obvious (if the firmware presents a boot menu-like interface to the UEFI boot manager, that's the timeout before it goes ahead and boots the default entry). The BootOrder is the order in which the entries in the list will be tried. The rest of the output shows the actual boot entries. We'll describe what they actually do later.

If you boot a UEFI firmware entirely normally, without doing any of the tweaks we'll discuss later, what it ought to do is try to boot from each of the 'entries' in the 'boot menu', in the order listed in BootOrder. So on this system it would try to boot the entry called 'opensuse', then if that failed, the one called 'Fedora', then 'CD/DVD Drive', and then the second 'Hard Drive'.
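If you prefer to see the structure of that output in code, here's a minimal sketch that pulls the interesting fields out of text like the sample above. The parsing is illustrative only - it handles the lines shown here, not every format efibootmgr can emit:

```python
# Minimal sketch: parse the interesting fields out of `efibootmgr -v`-style
# output. The sample text is a trimmed version of the output shown above.
import re

SAMPLE = """\
BootCurrent: 0002
Timeout: 3 seconds
BootOrder: 0003,0002,0000,0004
Boot0000* CD/DVD Drive  BIOS(3,0,00)
Boot0002* Fedora        HD(1,800,61800,6d98f360-cb3e-4727-8fed-5ce0c040365d)File(\\EFI\\fedora\\grubx64.efi)
"""

def parse_efibootmgr(text):
    info = {"entries": {}}
    for line in text.splitlines():
        if line.startswith("BootCurrent:"):
            info["current"] = line.split(":", 1)[1].strip()
        elif line.startswith("BootOrder:"):
            info["order"] = line.split(":", 1)[1].strip().split(",")
        else:
            # BootNNNN, optional '*' for active, a label, then the target
            m = re.match(r"Boot([0-9A-Fa-f]{4})(\*?)\s+(.+?)\s{2,}(.*)", line)
            if m:
                num, active, label, target = m.groups()
                info["entries"][num] = {"active": active == "*",
                                        "label": label, "target": target}
    return info

info = parse_efibootmgr(SAMPLE)
print(info["current"])                    # 0002
print(info["order"][0])                   # 0003 - the first entry tried
print(info["entries"]["0002"]["label"])   # Fedora
```

So 'BootCurrent' is what you booted from, 'BootOrder' is the order the entries will be tried in, and each BootNNNN line pairs a label with a target.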

UEFI native booting: how it actually works - boot manager entries

What do these entries actually mean, though? There's actually a huge range of possibilities that makes up rather a large part of the complexity of the UEFI spec all by itself. If you're reading the spec, pour yourself an extremely large shot of gin and turn to the EFI_DEVICE_PATH_PROTOCOL section, but note that this is a generic protocol that's used for other things than booting - it's UEFI's Official Way Of Identifying Devices For All Purposes, of which boot manager entries are just one. Not every possible EFI device path makes sense as a UEFI boot manager entry, for obvious reasons (you're probably not going to get too far trying to boot from your video adapter). But you can certainly have an entry that points to, say, a PXE server, not a disk partition. The spec has lots of bits defining valid non-disk boot targets that can be added to the UEFI boot manager configuration.

For our purposes, though, let's just consider fairly normal disks connected to the system. In this case we can consider three types of entry you're likely to come across.

BIOS compatibility boot entries

Boot0000 and Boot0004 in this example are actually BIOS compatibility mode entries, not UEFI native entries. They have not been added to the UEFI boot manager configuration by any external agency, but generated by the firmware itself - this is a common way for a UEFI firmware to implement BIOS compatibility booting, by generating UEFI boot manager entries that trigger a BIOS-compatible boot of a given device. How they present this to the user is a different question, as we'll see later. Whether you see any of these entries or not will depend on your particular firmware, and its configuration. Each of these entries just gives a name - 'CD/DVD Drive', 'Hard Drive' - and says "if this entry is selected, boot this disk (where 'this disk' is 3,0,00 for Boot0000 and 2,0,00 for Boot0004) in BIOS compatibility mode".

'Fallback path' UEFI native boot entries

Boot0001 is an entry (fictional, and somewhat unlikely, but it's for illustrative purposes) that tells the firmware to try and boot from a particular disk, and in UEFI mode not BIOS compatibility mode, but doesn't tell it anything more. It doesn't specify a particular boot target on the disk - it just says to boot the disk.

The UEFI spec defines a sort of 'fallback' path for booting this kind of boot manager entry, which works in principle somewhat like BIOS drive booting: it looks in a standard location for some boot loader code. The details are different, though.

What the firmware will actually do when trying to boot in this way is reasonably simple. The firmware will look through each EFI system partition on the disk in the order they exist on the disk. Within the ESP, it will look for a file with a specific name and location. On an x86-64 PC, it will look for the file \EFI\BOOT\BOOTx64.EFI. What it actually looks for is \EFI\BOOT\BOOT{machine type short-name}.EFI - 'x64' is the "machine type short-name" for x86-64 PCs. The other possibilities are BOOTIA32.EFI (x86-32), BOOTIA64.EFI (Itanium), BOOTARM.EFI (AArch32 - that is, 32-bit ARM) and BOOTAA64.EFI (AArch64 - that is, 64-bit ARM). It will then execute the first qualifying file it finds (obviously, the file needs to be in the executable format defined in the UEFI specification).
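The naming rule is mechanical enough to write down directly. This little sketch just encodes the 'machine type short-name' table from the spec as described above:

```python
# The 'fallback path' naming rule: on each EFI system partition, the firmware
# looks for \EFI\BOOT\BOOT{machine type short-name}.EFI. Short-names are as
# listed above (per the UEFI spec).
FALLBACK_NAMES = {
    "x86-64":  "BOOTx64.EFI",
    "x86-32":  "BOOTIA32.EFI",
    "itanium": "BOOTIA64.EFI",
    "aarch32": "BOOTARM.EFI",
    "aarch64": "BOOTAA64.EFI",
}

def fallback_path(arch):
    """Return the path (relative to the ESP root) the firmware searches."""
    return "\\EFI\\BOOT\\" + FALLBACK_NAMES[arch]

print(fallback_path("x86-64"))   # \EFI\BOOT\BOOTx64.EFI
print(fallback_path("aarch64"))  # \EFI\BOOT\BOOTAA64.EFI
```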

This mechanism is not designed for booting permanently-installed OSes. It's designed more for booting hotpluggable, device-agnostic media, like live images and OS install media. And this is indeed what it's usually used for. If you look at a UEFI-capable live or install medium for a Linux distribution or other OS, you'll find it has a GPT partition table and contains a FAT-formatted partition at or near the start of the device, with the GPT partition type that identifies it as an EFI system partition. Within that partition there will be a \EFI\BOOT directory with at least one of the specially-named files above. When you boot a Fedora live or install medium in UEFI-native mode, this is the mechanism that is used. The BOOTx64.EFI (or whatever) file handles the rest of the boot process from there, booting the actual operating system contained on the medium.

Full UEFI native boot entries

Boot0002 and Boot0003 are 'typical' entries for operating systems permanently installed to permanent storage devices. These entries show us the full power of the UEFI boot mechanism, by not just saying "boot from this disk", but "boot this specific bootloader in this specific location on this specific disk", using all the 'groundwork' we talked about above.

Boot0002 is a boot entry produced by a UEFI-native Fedora installation. Boot0003 is a boot entry produced by a UEFI-native openSUSE installation. As you may be able to tell, all they're saying is "load this file from this partition". The partition is the HD(1,800,61800,6d98f360-cb3e-4727-8fed-5ce0c040365d) bit: that's referring to a specific partition (using the EFI_DEVICE_PATH_PROTOCOL, which I'm really not going to attempt to explain in any detail - you don't necessarily need to know it, if you interact with the boot manager via the firmware interface and efibootmgr). The file is the File(\EFI\opensuse\grubx64.efi) bit: that just means "load the file in this location on the partition we just described". The partition in question will almost always be one that qualifies as an EFI system partition, because of the considerations above: that's the type of partition we can trust the firmware to be able to access.

This is the mechanism the UEFI spec provides for operating systems to make themselves available for booting: the operating system is intended to install a bootloader which loads the OS kernel and so on to an EFI system partition, and add an entry to the UEFI boot manager configuration with a name - obviously, this will usually be derived from the operating system's name - and the location of the bootloader (in EFI executable format) that is intended for loading that operating system.

Linux distributions use the efibootmgr tool to deal with the UEFI boot manager. What a Linux distribution actually does, so far as bootloading is concerned, when you do a UEFI native install is really pretty simple: it creates an EFI system partition if one does not already exist, installs an EFI boot loader with an appropriate configuration - often grub2-efi, but there are others - into a correct path in the EFI system partition, and calls efibootmgr to add an appropriately-named UEFI boot manager entry pointing to its boot loader. Most distros will use an existing EFI system partition if there is one, though it's perfectly valid to create a new one and use that instead: as we've noted, UEFI is a permissive spec, and if you follow the design logically, there's really no problem with having just as many EFI system partitions as you want.
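For the curious, the efibootmgr call an installer winds up making looks roughly like the sketch below. This just builds the argument list (flags per the efibootmgr man page: -c create an entry, -d the disk, -p the ESP's partition number, -L the label, -l the loader path on the ESP) - actually running it needs a UEFI-native boot and root privileges, so don't take this as a complete installer:

```python
# Roughly the efibootmgr invocation a distro installer performs to register
# its bootloader. We only build the argument list here; running it requires
# a UEFI-native boot and root privileges.
def make_boot_entry_cmd(disk, esp_partnum, label, loader):
    return ["efibootmgr", "-c",          # create a new boot manager entry
            "-d", disk,                  # disk containing the ESP
            "-p", str(esp_partnum),      # partition number of the ESP
            "-L", label,                 # human-readable entry name
            "-l", loader]                # loader path, relative to ESP root

cmd = make_boot_entry_cmd("/dev/sda", 1, "Fedora", "\\EFI\\fedora\\grubx64.efi")
print(" ".join(cmd))
```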

Configuring the boot process (the firmware UI)

The above describes the basic mechanism the UEFI spec defines that manages the UEFI boot process. It's important to realize that your firmware user interface may well not represent this mechanism very clearly. Unfortunately, the spec intentionally refrains from defining how the boot process should be represented to the user or how the user should be allowed to configure it, and what that means - since we're dealing with firmware engineers - is that every firmware does it differently, and some do it insanely.

Many firmwares do have fairly reasonable interfaces for boot configuration. A good firmware design will at least show you the boot order, with a reasonable representation of the entries on it, and let you add or remove entries, change the order, or override the order for a specific boot (by changing it just for that boot, or directly instructing the firmware to boot a particular menu entry, or even giving you the option to simply say "boot this disk", either in BIOS compatibility mode or UEFI 'fallback' mode - my firmware does this). Such an interface will often show 'full' UEFI native boot entries (like the Fedora and openSUSE examples we saw earlier) only by their name; you have to examine the efibootmgr -v output to know precisely what these entries will actually try and do when invoked.

Some firmwares try to abstract and simplify the configuration, and may do a good or a bad job of it. For instance, if you have an option to 'enable or disable' BIOS compatibility mode, what it'll really likely do is configure whether the firmware adds BIOS compatibility entries for attached drives to the UEFI boot manager configuration or not. If you have an option to 'enable or disable' UEFI native booting, what likely really happens when you 'disable' it is that the firmware changes the UEFI boot manager configuration to leave all UEFI-native entries out of the BootOrder.

The key point to remember is that any configuration option inside your firmware interface which is to do with booting is really, behind the scenes, configuring the behaviour of the UEFI boot manager. If you understand all the stuff we've discussed above, you may well find it easier to figure out what's really happening when you twiddle the knobs your firmware interface exposes.

In the BIOS world, you'll remember, you don't always find that systems are configured to try and boot from removable drives - CD, USB - before booting from permanent drives. Some are, and some aren't. Some will try CD before the hard disks, but not USB. People have got fairly used to having to check the BIOS configuration to ensure the boot order is 'correct' when trying to install a new operating system.

This applies to the UEFI world too, but because of the added flexibility/complexity of the UEFI boot manager mechanism, it can look unfamiliar and scary.

If you want to ensure that your system tries to boot from removable devices using the 'fallback' mechanism before it tries to boot 'permanent' boot entries - as you will want to do if you want to, say, install Fedora - you need this to be the default for your firmware, or you need to be able to tell the firmware this. Depending on your firmware's interface, you may find there is a 'menu entry' for each attached removable device and you just have to adjust the boot order to put it at the top of the list, or you may find that there is a mechanism to directly request a 'UEFI fallback boot of this particular disk', or you may find that the firmware tries to abstract the configuration somehow. We just don't know, and that makes writing instructions for this quite hard. But now that you broadly understand how things work behind the scenes, you may find it easier to understand your firmware user interface's representation of that.

Configuring the boot process (from an operating system)

As we've noted above, unlike in the BIOS world, you can actually configure the UEFI boot process from the operating system level. If you have an insane firmware, you may have to do this in order to achieve what you want.

You can use the efibootmgr tool mentioned earlier to add, delete and modify entries in the UEFI boot manager configuration, and actually do quite a lot of other stuff with it too. You can change the boot order. You can tell it to boot some particular entry in the list on the next boot, instead of using the BootOrder list (if you or some other tool has configured this to happen, your efibootmgr -v output will include a BootNext item stating which menu entry will be loaded on the next boot). There are tools for Windows that can do this stuff from Windows, too. So if you're really struggling to manage to do whatever it is you want to do with UEFI boot configuration from your firmware interface, but you can boot a UEFI native operating system of some kind, you may want to consider doing your boot configuration from that operating system rather than from the firmware UI.
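The two tweaks mentioned there map onto two efibootmgr flags (-n sets BootNext, -o sets BootOrder - see the man page). Again, a sketch that just builds the argument lists rather than running anything:

```python
# Sketches of the boot-manager tweaks described above, as efibootmgr
# argument lists (per the man page: -n BootNext, -o BootOrder).
def set_boot_next(entry):
    """Boot this entry on the next boot only, ignoring BootOrder."""
    return ["efibootmgr", "-n", entry]

def set_boot_order(entries):
    """Set the persistent order in which entries are tried."""
    return ["efibootmgr", "-o", ",".join(entries)]

print(" ".join(set_boot_next("0003")))
print(" ".join(set_boot_order(["0002", "0003", "0000"])))
```

The BootNext trick is the basis of things like a 'Reboot to Windows' menu item, which we'll come back to later.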

So to recap:

  • Your UEFI firmware contains something very like what you think of as a boot menu.
  • You can query its configuration with efibootmgr -v, from any UEFI-native boot of a Linux OS, and also change its configuration with efibootmgr (see the man page for details).
  • This 'boot menu' can contain entries that say 'boot this disk in BIOS compatibility mode', 'boot this disk in UEFI native mode via the fallback path' (which will use the 'look for BOOT(something).EFI' method described above), or 'boot the specific EFI format executable at this specific location (almost always on an EFI system partition)'.
  • The nice, clean design that the UEFI spec is trying to imply is that all operating systems should install a bootloader of their own to an EFI system partition, add entries to this 'boot menu' that point to themselves, and butt out from trying to take control of booting anything else.
  • Your firmware UI has free rein to represent this mechanism to you in whatever way it wants, and it may do this well, or it may do this poorly.

Installing operating systems to UEFI-based computers

Let's have a quick look at some specific consequences of the above that relate to installing operating systems on UEFI computers.

UEFI native and BIOS compatibility booting

Here's a very very simple one which people sometimes miss:

  • If you boot the installation medium in 'UEFI native' mode, it will do a UEFI native install of the operating system: it will try to write an EFI-format bootloader to an EFI system partition, and attempt to add an entry to the UEFI boot manager 'boot menu' which loads that bootloader.
  • If you boot the installation medium in 'BIOS compatibility' mode, it will do a BIOS compatible install of the operating system: it will try to write an MBR-type bootloader to the magic MBR space on a disk.

This applies (with one minor caveat I'm going to paper over for now) to all OSes of which I'm aware. So you probably want to make sure you understand how, in your firmware, you can choose to boot a removable device in UEFI native mode and how you can choose to boot it in BIOS compatibility mode, and make sure you pick whichever one you actually want to use for your installation.

You really cannot do a completely successful UEFI-native installation of an OS if you boot its installation medium in BIOS compatibility mode, because the installer will not be able to configure the UEFI boot manager (this is only possible when booted UEFI-native).

It is theoretically possible for an OS installer to install the OS in the BIOS style - that is, write a bootloader to a disk's MBR - after being booted in UEFI native mode, but most of them won't do this, and that's probably sensible.

Finding out which mode you're booted in

It is possible that you might find yourself with your operating system installer booted, and not sure whether it's actually booted in UEFI native mode or BIOS compatibility mode. Don't panic! It's pretty easy to find out which, in a few different ways. One of the easiest is just to try and talk to the UEFI boot manager. If what you have booted is a Linux installer or environment, and you can get to a shell (ctrl-alt-f2 in the Fedora installer, for instance), run efibootmgr -v. If you're booted in UEFI native mode, you'll get your UEFI boot manager configuration, as shown above. If you're booted in BIOS compatibility mode, you'll get something like this:

Fatal: Couldn't open either sysfs or procfs directories for accessing EFI variables.
Try 'modprobe efivars' as root.

If you've booted some other operating system, you can try running a utility native to that OS which tries to talk to the UEFI boot manager, and see if you get sensible output or a similar kind of error. Or you can examine the system logs and search for 'efi' and/or 'uefi', and you'll probably find some kind of indication.
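Another quick check from a Linux environment: the kernel exposes the /sys/firmware/efi directory only when it was booted in UEFI native mode, so its mere existence answers the question. Here's a tiny sketch (the directory is a parameter purely so the check can be pointed somewhere else for testing):

```python
# The kernel creates /sys/firmware/efi only on a UEFI-native boot, so
# checking for that directory tells you which mode you're in.
import os

def booted_uefi_native(sysfs_dir="/sys/firmware/efi"):
    return os.path.isdir(sysfs_dir)

if booted_uefi_native():
    print("UEFI native boot")
else:
    print("BIOS / BIOS-compatibility boot")
```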

Enabling UEFI native boot

To be bootable in UEFI native mode, your OS installation medium must obviously actually comply with all this stuff we've just described: it's got to have a GPT partition table, and an EFI system partition with a bootloader in the correct 'fallback' path - \EFI\BOOT\BOOTx64.EFI (or the other names for the other platforms). If you're having trouble doing a UEFI native boot of your installation medium and can't figure out why, check that this is actually the case. Notably, when using the livecd-iso-to-disk tool to write a Fedora image to a USB stick, you must pass the --efi parameter to configure the stick to be UEFI bootable.

Forcing BIOS compatibility boot

If your firmware seems to make it very difficult to boot from a removable medium in BIOS compatibility mode, but you really want to do that, there's a handy trick you can use: just make the medium not UEFI native bootable at all. You can do this pretty easily by just wiping all the EFI system partitions. (Alternatively, if using livecd-iso-to-disk to create a USB stick from a Fedora image, you can just leave out the --efi parameter and it won't be UEFI bootable). If at that point your firmware refuses to boot it in BIOS compatibility mode, commence swearing at your firmware vendor (if you didn't already).

Disk formats (MBR vs. GPT)

Here's another very important consideration:

  • If you want to do a 'BIOS compatibility' type installation, you probably want to install to an MBR formatted disk.
  • If you want to do a UEFI native installation, you probably want to install to a GPT formatted disk.

Of course, to make life complicated, many firmwares can boot BIOS-style from a GPT formatted disk. UEFI firmwares are in fact technically required to be able to boot UEFI-style from an MBR formatted disk (though we are not particularly confident that they all really can). But you really should avoid this if at all possible. This consideration is quite important, as it's one that trips up quite a few people. For instance, it's a bad idea to boot an OS installer in UEFI native mode and then attempt to install to an MBR formatted disk without reformatting it. This is very likely to fail. Most modern OS installers will automatically reformat the disk in the correct format if you allow them to completely wipe it, but if you try and tell the installer 'do a UEFI native installation to this MBR formatted disk and don't reformat it because it has data on it that I care about', it's very likely to fail, even though this configuration is technically covered in the UEFI specification. Specifically, Windows and Fedora at least explicitly disallow this configuration.

Checking the disk format

You can use the parted utility to check the format of a given disk:

[adamw@adam Downloads]$ sudo parted /dev/sda
GNU Parted 3.1
Using /dev/sda
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p                                                                
Model: ATA C300-CTFDDAC128M (scsi)
Disk /dev/sda: 128GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags: 

Number  Start   End    Size   Type     File system  Flags
 1      1049kB  525MB  524MB  primary  ext4         boot
 2      525MB   128GB  128GB  primary               lvm


See that Partition Table: msdos? This is an MBR/MS-DOS formatted disk. If it was GPT-formatted, that would say gpt. You can reformat the disk with the other type of partition table by doing mklabel gpt or mklabel msdos from within parted. This will destroy the contents of the disk.

With most OS installers, if you pick a disk configuration that blows away the entire contents of the target disk, the installer will automatically reformat it using the most appropriate configuration for the type of installation you're doing, but if you want to use an existing disk without reformatting it, you're going to have to check how it's formatted and take this into account.
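If you're doing this check from a script rather than interactively, pulling the table type out of parted's output is trivial. A sketch, using the sample output above ('msdos' means MBR, 'gpt' means GPT):

```python
# Sketch: extract the partition-table type from `parted ... print` output.
SAMPLE = """\
Model: ATA C300-CTFDDAC128M (scsi)
Disk /dev/sda: 128GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
"""

def table_type(parted_output):
    for line in parted_output.splitlines():
        if line.startswith("Partition Table:"):
            return line.split(":", 1)[1].strip()
    return None

print(table_type(SAMPLE))   # msdos
```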

Handling EFI system partition if doing manual partitioning

I can only give authoritative advice for Fedora here, but the gist may be useful for other distros / OSes.

If you allow Fedora to handle partitioning for you when doing a UEFI native installation - and you use a GPT-formatted disk, or allow it to reformat the disk (by deleting all existing partitions) - it will handle the EFI system partition stuff for you.

If you use custom partitioning, though, it will expect you to provide an EFI system partition for the installer to use. If you don't do this, the installer will complain (with a somewhat confusing error message) and refuse to let you start the installation.

So if you're doing a UEFI native install and using custom partitioning, you need to ensure that a partition of the 'EFI system partition' type is mounted at /boot/efi - this is where Fedora expects to find the EFI system partition it's using. If there is an existing EFI system partition on the system, just set its mount point to /boot/efi. If there is not an EFI system partition yet, create a partition, set its type to EFI system partition, make it at least 200MB big (500MB is good), and set its mount point to /boot/efi.
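Those requirements boil down to a three-way check, which this toy function mirrors (the names and the string for the partition type are illustrative, not anything the installer literally uses):

```python
# Toy check mirroring the Fedora custom-partitioning requirements described
# above: an EFI system partition, at least 200MB, mounted at /boot/efi.
def esp_ok(part_type, size_mb, mountpoint):
    return (part_type == "EFI system partition"
            and size_mb >= 200
            and mountpoint == "/boot/efi")

print(esp_ok("EFI system partition", 500, "/boot/efi"))  # True - fine
print(esp_ok("ext4", 500, "/boot/efi"))                  # False - wrong type
```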

A specific example

To boil down the above: if you bought a Windows 8 or later system, you almost certainly have a UEFI native install of Windows to a GPT-formatted disk. This means that if you want to install another OS alongside that Windows install, you almost certainly want to do a UEFI-native installation of your other OS. If you don't like all this UEFI nonsense and want to go back to the good old world you're familiar with, you will, I'm afraid, have to blow away the UEFI-native Windows installation, and it would be a good idea to reformat the disk to MBR.

Implications and Complications

So, that's how UEFI booting works - or at least, a reasonable approximation of it. When I describe it like that, it almost all makes sense, right?

However, all is not sweetness and light. There are problems. There always are.

Attentive readers may have noticed that I've talked about the UEFI spec providing a mechanism. This is accurate, and important. As the UEFI spec is a 'broad consensus' sort of thing, one of its major shortcomings (looked at from a particular perspective) is that it's nowhere near prescriptive enough.

If you read the UEFI spec critically, its basic approach is to define a set of functions that UEFI compliant firmwares must support. What it doesn't do a lot of at all is strictly requiring things to be done in any particular way, or not done in any particular way.

So: the spec says that a system firmware must do all the stuff I've described above, in order to be considered a UEFI-compliant firmware. The spec, however, doesn't talk about what operating systems 'should' or 'must' do at all, and it doesn't say that firmwares must not support (or no-one may expect them to support, or whatever)...anything at all. If you're making a UEFI firmware, in other words, you have to support GPT formatted disks, and FAT-formatted EFI system partitions, and you must read UEFI boot manager entries in the standard format, and you must do this and that and the other - but you can also do any other crap you like.

It's pretty easy to read certain implications from the spec - it carefully sets up this nice mechanism for handling OS (or other 'bootable thing') selection at the firmware level, for instance, with the clear implication "hey, it'd be great if all OSes were written to this mechanism". But the UEFI spec doesn't require that, and neither does any other widely-respected specification.

So, what happens in the real world is that we wind up with really dumb crap. Apple, for instance, ships at least some Macs with their bootloaders in an HFS+ partition. The spec says a UEFI-compliant firmware must support UEFI FAT partitions with the specific GPT partition type that identifies them as an "EFI system partition", but it doesn't say the firmware can't also recognize some other filesystem type and load a bootloader from that. (Whether you consider such a partition to be an "EFI system partition" or not is an interesting philosophical conundrum, but let's skate right over that for now).

The world would pretty clearly be a better place if everyone just damn well used the EFI system partition format the spec goes to such great pains to define, but Apple is Apple and we can't have nice things, so Apple went right ahead and wrote firmwares that also can read and load code from HFS+ partitions, and now everyone else has to deal with that or tell Macs to go and get boned. Apple also goes quite a long way beyond the spec in its boot process design, and if you want your alternative OS to show up on its graphical boot menu with a nice icon and things, you have to do more than what the UEFI spec would suggest.

There are various similar incredibly annoying corner cases we've come across, but let's not go into them all right now. This post is long enough.

Also, as we noted earlier, the spec makes no requirements as to how the mechanism should be represented to the user. So if a couple of software companies write OSes to behave 'nicely' according to the conventions the spec is clearly designed to back, and install EFI boot loaders and define EFI boot manager entries with nice clear names - like, oh say, "Fedora" and "Windows" - they are implicitly relying on the firmware to then give the user some kind of sane interface somewhere relatively discoverable that lets them choose to boot "Windows" or "Fedora". The more firmwares don't do a good job of this, the less willing OS engineers will be to rely on the 'proper' conventions, and the more likely they'll be to start rebuilding ugly hacks above the firmware level.

To be fair, we could do somewhat more at the OS level. We could present all those neat efibootmgr capabilities rather more obviously - we can use that 'don't respect BootOrder on the next boot, but instead boot this' capability, for instance, and have 'Reboot to Windows' as an option. It'd be kinda nice if someone looked at exposing all this functionality somewhere more obvious than efibootmgr. Windows 8 systems do use this, to some extent - you can reboot your system to the firmware UI from the Windows 8 settings menus, for instance. But still.

All this is really incredibly frustrating, because UEFI is so close to making things really a lot better. The BIOS approach doesn't provide any kind of convention or standard for multibooting at all - it has to be handled entirely above the firmware level. We (the industry) could have come up with some sort of convention for handling multiboot, but we never did, so it just became a multiple-decade epic fail, where each operating system came up with its own approach and lots of people wrote their own bootloaders which tried to subsume all the operating systems and all the operating systems and independent bootloaders merrily fought like cats in a sack. I mean, pre-UEFI multibooting is such a clusterf**k it's not even worth going into, it's broken sixteen ways from Sunday by definition.

If UEFI - or a spec built on top of it - had just mandated that everybody follow the conventions UEFI carefully establishes, and mandated that firmwares provide a sensible user interface, the win would have been epic. But it doesn't, so it's entirely possible that in a UEFI world things will be even worse than they were in the BIOS world. If many more firmwares show up that don't present a good UI for the UEFI boot manager mechanism, what could happen is that OS vendors give up on the UEFI boot manager mechanism (or decide to support it and alternatives, because choice!) and just reinvent the entire goddamn nightmare of BIOS multibooting on top of UEFI - and we'll all have to deal with all of that, plus the added complication of the UEFI boot manager layer. You'll have multiple bootloaders fighting to load multiple operating systems all on top of the whole UEFI boot manager mechanism which is just throwing a whole bunch of other variables into the equation.

This is not a prospect filling the mind of anyone who's had to think about it with joy.

Still, it's important to recognize that the sins of UEFI in this area are sins of omission - they are not sins of commission, and they're not really the result of evil intent on anyone's part. The entity you should really be angry with if you have an idiotic system firmware that doesn't give you good access to the UEFI boot manager mechanism is not the UEFI forum, or Microsoft, and it certainly isn't Fedora and even more certainly isn't me ;). The entity you should be angry at is your system/motherboard manufacturer and the goddamn incompetents they hired to write the firmware, because the UEFI spec makes it really damn clear to anyone with two brain cells to rub together that it would be a very good idea to provide some kind of useful user interface to the UEFI boot manager, and any firmware which doesn't do so is crap code by definition. Yes, the UEFI forum should've realized that firmware engineers couldn't code their way out of a goddamned paper bag and just ordered them to do so, but still, it's ultimately the firmware engineers who should be lined up against the nearest wall.

Wait, we can simplify that. "Any firmware is crap code". Usually pretty accurate.

Secure Boot

So now we come, finally, to Secure Boot.

Secure Boot is not magic. It's not complicated. OK, that's a lie, it's incredibly complicated, but the theory isn't very complicated. And no, Secure Boot itself is not evil. I am entirely comfortable stating this as a fact, and you should be too, unless you think GPG is evil.

Secure Boot is defined in chapter 28 of the UEFI spec (2.4a, anyway). It's actually a pretty clever mechanism. But what it does can be described very, very simply. It says that the firmware can contain a set of signatures, and refuse to run any EFI executable which is not signed with one of those signatures.

That's it. Well, no, it really isn't, but that's a reasonably acceptable simplification. Security is hard, so there are all kinds of wibbly bits to implementing a really secure bootchain using Secure Boot, and mjg59 can tell you all about them, or you can pour another large shot of gin and read the whole of chapter 28. But that's the basic idea.

Using public key cryptography to verify the integrity of something is hardly a radical or evil concept. Pretty much all Linux distributions depend on it - we sign our packages and have our package managers go AWOOGA AWOOGA if you try to install a package which isn't signed with one of our keys. This isn't us being evil, and I don't think anyone's ever accused an OS of being evil for using public key cryptographic signing to establish trust in this way. Secure Boot is literally this exact same perfectly widely accepted mechanism, applied to the boot chain. Yet because a bunch of journalists wildly grasped the wrong end of the stick, it's widely considered to be slightly more evil than Hitler.

Secure Boot, as defined in the UEFI spec, says nothing at all about what the keys the firmware trusts should be, or where they should come from. I'm not going to go into all the goddamn details, because it gets stultifyingly boring and this post is too long already. But the executive summary is that the spec is utterly and entirely about defining a mechanism for doing cryptographic verification of a boot chain. It does not really even consider any kind of icky questions about the policy for doing so. It does nothing evil. It is as flexible as it could reasonably be, and takes care to allow for all the mechanisms involved to be configurable at multiple levels. The word 'Microsoft' is not mentioned. It is not in any way, shape, or form a secret agenda for Microsoft's domination of the world. If you doubt this, at the very bloody least, go and read it. I've given you all the necessary pointers. There is literally not a single legitimate reason I can think of for anyone to be angry with the idea "hey, it'd be neat if there was a mechanism for optional cryptographic verification of bootloader code in this firmware specification". None. Not one.

Secure Boot in the real world

Most of the unhappiness about Secure Boot is not really about Secure Boot the mechanism - whether the people expressing that unhappiness think it is or not - but about specific implementations of Secure Boot in the real world.

The only one we really care about is Secure Boot as it's implemented on PCs shipped with Microsoft Windows 8 or higher pre-installed.

Microsoft has these things called the Windows Hardware Certification Requirements. There they are. They are not Top Secret, Eyes Only, You Will Be Fed To Bill Gates' Sharks After Reading - they're right there on the Internet for anyone to read.

If you want to get cheap volume licenses of Windows from Microsoft to pre-install on your computers and have a nice "reassuring" 'Microsoft Approved!' sticker or whatever on the case, you have to comply with these requirements. That's all the force they have: they are not actually a part of the law of the United States or any other country, whatever some people seem to believe. Bill Gates cannot feed you to his sharks if you sell a PC that doesn't comply with these requirements, so long as you don't want a cheap copy of Windows to pre-install and a nice sticker. There is literally no requirement for a PC sold outside the Microsoft licensing program to configure Secure Boot in any particular way, or include Secure Boot at all. A PC that claims to have a UEFI 2.2 or later compliant firmware must implement Secure Boot, but can ship with it configured in literally absolutely any way it pleases (including turned off).

If you're going to have very loud opinions about Secure Boot, you have zero excuse for not going and reading the Microsoft certification requirements. Right now. I'll wait. You can search for "Secure Boot" to get to the relevant bit. It starts at "System.Fundamentals.Firmware.UEFISecureBoot".

You should read it. But here is a summary of what it says.

Computers complying with the requirements must:

  • Ship with Secure Boot turned on (except for servers)
  • Have Microsoft's key in the list of keys they trust
  • Disable BIOS compatibility mode when Secure Boot is enabled (actually the UEFI spec requires this too, if I read it correctly)
  • Support signature blacklisting

x86 computers complying with the requirements must additionally:

  • Allow a physically present person to disable Secure Boot
  • Allow a physically present person to enable Custom Mode, and modify the list of keys the firmware trusts

ARM computers complying with the requirements must additionally:

  • NOT allow a physically present person to disable Secure Boot
  • NOT allow a physically present person to enable Custom Mode, and modify the list of keys the firmware trusts

Yes. You read that correctly. The Microsoft certification requirements, for x86 machines, explicitly require implementers to give a physically present user complete control over Secure Boot - turn it off, or completely control the list of keys it trusts. Another important note here is that while the certification requirements state that the out-of-the-box list of trusted keys must include Microsoft's key, they don't say, for example, that it must not include any other keys. The requirements explicitly and intentionally allow for the system to ship with any number of other trusted keys, too.

These requirements aren't present entirely out of the goodness of Microsoft's heart, or anything - they're present in large part because other people explained to Microsoft that if they weren't present, it'd have a hell of a lawsuit on its hands[2] - but they are present. Anyone who actually understands UEFI and Secure Boot cannot possibly read the requirements any other way; they are extremely clear and unambiguous. They both clearly intend to and succeed in ensuring the owner of a certified system has complete control over Secure Boot.

If you have an x86 system that claims to be Windows certified but does not allow you to disable Secure Boot, it is in direct violation of the certification requirements, and you should certainly complain very loudly to someone. If a lot of these systems exist then we clearly have a problem and it might be time for that giant lawsuit, but so far I'm not aware of this being the case. All the x86-based, Windows-certified systems I've seen have had the 'disable Secure Boot' option in their firmwares.

Now, for ARM machines, the requirements are significantly more evil: they state exactly the opposite, that it must not be possible to disable Secure Boot and it must not be possible for the system owner to change the trusted keys. This is bad and wrong. It makes Microsoft-certified ARM systems into a closed shop. But it's worth noting it's no more bad or wrong than most other major ARM platforms. Apple locks down the bootloader on all iDevices, and most Android devices also ship with locked bootloaders.

If you're planning to buy a Microsoft-certified ARM device, be aware of this, and be aware that you will not be in control of what you can boot on it. If you don't like this, don't buy one. But also don't buy an iDevice, or an Android device with a locked bootloader (you can buy Android devices with unlocked or unlockable bootloaders, still, but you have to do your research).

As far as x86 devices go, though, right now, Microsoft's certification requirements actually explicitly protect your right to determine what can boot on your system. This is good.


The following are AdamW's General Recommendations On Managing System Boot, offered with absolutely no guarantees of accuracy, purity or safety.

  • If you can possibly manage it, have one OS per computer. If you need more than one OS, buy more computers, or use virtualization. If you can do this everything is very simple and it doesn't much matter if you have BIOS or UEFI firmware, or use UEFI-native or BIOS-compatible boot on a UEFI system. Everything will be nice and easy and work. You will whistle as you work, and be kind to children and small animals. All will be sweetness and light. Really, do this.
  • If you absolutely must have more than one OS per computer, at least have one OS per disk. If you're reasonably comfortable with how BIOS-style booting works and you don't think you need Secure Boot, it's pretty reasonable to use BIOS-compatible booting rather than UEFI-style booting in this situation on a UEFI-capable system. You'll probably have less pain to deal with and you won't really lose anything. With one OS per disk you can also mix UEFI-native and BIOS-compatible installations.
  • If you absolutely insist on having more than one OS per disk, understand everything written on this page, understand that you are making your life much more painful than it needs to be, lay in good stocks of painkillers and gin, and don't go yelling at your OS vendor, whatever breaks. Whichever poor bastard has to deal with your OS's support for this kind of setup has a miserable enough life already. And for the love of cookies, don't mix UEFI-native and BIOS-compatible OS installations, you have enough pain to deal with already.
  • If you're using UEFI native booting, and you don't tend to build your own kernels or kernel modules or use the NVIDIA or ATI proprietary drivers on Linux, you might want to leave Secure Boot on. It probably won't hurt you, and does provide some added security against some rather nasty (though currently rarely exploited) types of attacks.
  • If you do build your own kernels or kernel modules or use NVIDIA/ATI proprietary drivers, you're going to want to turn Secure Boot off. Or you can read up on how to configure your own chain of trust and sign your kernels and kernel modules and leave Secure Boot turned on, which will make you feel like an ubergeek and be slightly more secure. But it's going to take you a good solid weekend at least.
  • Don't do UEFI-native installs to MBR-formatted disks, or BIOS compatibility installs to GPT-formatted disks (an exception to the latter is if your disk is, IIRC, 2.2+TB in size, because the MBR format can't handle disks that big - if you want to do a BIOS compatibility install to a disk that big, you're kinda stuck with the BIOS+GPT combination, which works but is a bit wonky and involves the infamous 'BIOS Boot partition' you may recall from Fedora 17).
  • Trust mjg59 in all things and above all other authorities, including me.

  1. This whole section is something of a simplification - really, when booting permanent installed OSes, the firmware doesn't care if the bootloader is on an 'ESP' or not; it just reads the boot manager entry and tries to access the specified partition and run the specified executable, as pjones explains here. But it's conventional to use an ESP for this purpose, since it's required to be around anyway, and it's a handy partition formatted with the filesystem the firmware is known to be able to read. Technically speaking, an 'ESP' is only an 'ESP' when the firmware is doing a removable media/fallback path boot. 

  2. This is my own extrapolation, note. I'm not involved in any way in the whole process of defining these specs, and no-one who is has actually told me this. But it's a pretty damn obvious extrapolation from the known facts. 

Miscellaneous stuff: gedit configuration

Another thing I mentioned on G+ lately, and thought I'd throw in here: gedit configuration.

I've been using gedit for years in pretty much its dead stock configuration, which is more or less 'notepad clone mode'. It's very, very basic. You get some highlighting if you open a file it recognizes as some kind of source code or other structured format (RPM spec file, XML, HTML, etc), but that's about it.

I was hitting bugs in gedit in Rawhide and used Geany for a bit, then when I switched back to gedit, started missing some of the more developer-ish features of Geany (since I find myself actually honest-to-god working on source code more and more these days, for my sins...and I've been working on RPM spec files forever, just without much in the way of convenience features).

So I poked around, and found some settings and features for gedit that I now find rather more comfortable.

Some really obvious low-hanging fruit that I'm kicking myself for not doing earlier, from Preferences:

  • Just check every damn box in View, at least that's what I like. Line numbers, yes please! An 80 character margin, handy! Highlight matching brackets, how the hell did I live without you?
  • Editor - tab width is going to vary depending on whose code base you're working with, but it's easy enough to change it. Insert spaces instead of tabs, yes - look elsewhere for an explanation of why tabs are evil, but they are. Enable automatic indentation is kind of a personal call - sometimes it helps, sometimes it hurts. Turn on autosave if you like.
  • Plugins - there's some interesting stuff here, like Python Console and Modelines and stuff, but the main one that's extremely powerful but sorta hidden away is Snippets.


Snippets I came across in geany - Christoph Wickert has a great geany configuration in his github space - and had no idea gedit had them tucked away until I went looking. Turn that plugin on! You want it!

Snippets are basically just a way of inserting pre-canned text - say you're always typing your name, you can define a snippet that contains your name, and then configure a trigger for inserting it (you can use a shortcut key combo to 'trigger' a snippet, or you can use a form of tab completion - e.g. typing 'name' then hitting tab would insert your name). Just this function is neat enough, but you can get much cleverer, because of course, since we're talking about geeks, you get quite a powerful syntax (that page is pretty crappy, it gives you two lines of introduction then dives into head-spinny mode, but you get the idea) for doing Clever Stuff in snippets.

You can set up snippets by going to Tools / Manage Snippets..., and for a simple one, just go to Global / Add a new snippet..., type some text in the Edit: box, assign a Tab trigger and/or a Shortcut key - the Tab trigger thing is described above - optionally give your snippet a name, and close the dialog. Now use your trigger and your snippet will show up.

Pretty obviously, you can make snippets global (like the one above) or only active for particular types of file (there's an extremely long list of types of structured files gedit distinguishes between on the left, and you can create type-specific snippets for any of them). I recreated a few of the RPM spec snippets from cwickert's geany config that I found most useful. A lot of these are just simple text strings - mainly RPM macros like %{_bindir}, %{_libdir} and the like. I did set up one slightly clever one, though, which serves to illustrate some of the 'clever' features of snippets. It looks like this:

* $(date +"%a %b %d %Y") Adam Williamson <> - ${1}
- ${2}

...and as you might be able to guess, it's just a snippet for inserting an RPM changelog entry. The gedit doc page I linked to earlier makes the 'placeholder' concept sound crazy complex, but really a simple placeholder is just 'a series of places you might want to manually insert text into the pre-canned snippet', and all the other types of placeholder are different ways of inserting some kind of dynamic content - i.e., something that will be different depending on when or where or in what context you invoke the snippet.

You could try and write something very snazzy to figure out the EVR (Epoch, Version, Release) for the package from the appropriate fields - effectively re-implementing what RPM does - but I just stick a simple placeholder at the appropriate point in the snippet (that's the ${1}) so when I insert the snippet, the next thing I do is type in the correct EVR manually.

Then obviously I can't include the actual changelog entry in the snippet, so I have a second simple placeholder - the ${2} - for typing that. So, when I insert this snippet, the cursor shows up at the end of the first line (with a very faint grey box to indicate that it's at a placeholder location) and I type the EVR, then I hit tab, and the cursor moves to the second line where the changelog entry goes, and I type the changelog entry, and we're all done.

That only leaves the other clever bit: that $(date +"%a %b %d %Y"). That's one of the simplest forms of a 'complex' placeholder. What that does is actually pretty simple: it's just the way you call out to an external command and stick the result into the snippet. It runs the command date +"%a %b %d %Y" when you invoke the snippet, and stuffs the result into the snippet at that location. And what that command does is give you a date string in RPM changelog format - "Tue Jan 21 2014" or whatever.
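You can run that command straight from a shell to see what it produces (the exact string depends on your locale):

```shell
# The date format used in the changelog snippet, run standalone.
# In an English locale this prints something like "Tue Jan 21 2014".
date +"%a %b %d %Y"
```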

I gave this snippet the Tab trigger clog, so when I type 'clog' and hit tab, I get a properly-formatted RPM changelog entry into which I just have to type the EVR and the actual changelog entry. Veeeery handy.

And that's some gedit configuration stuff for ya! Hope it was useful. And yes, I'm extremely lame for not being some kind of emacs/vim magician by now, I know.

SELinux RPM bug, QA tasks during pre-F21 quiet time

So, as you may well have read by now, over the weekend a rather unfortunate bug in Fedora 20 cropped up. A stray change in an selinux-policy update, intended for Rawhide, mistakenly wound up in F20, and caused package transactions to fail once it had been installed, if SELinux was in enforcing mode.

This was particularly unfortunate as SELinux has been vastly improved in the last half dozen or so releases compared to many people's memories of it, and I hope this doesn't lead to a revival of the old 'everyone disable SELinux!' meme. It really was an unfortunate accident - the SELinux maintainers have a strict policy that stable release updates should only loosen policy, never tighten it, and this was purely an erroneous change.

From a QA perspective, obviously it's bad that we didn't catch this. Unfortunately this kind of 'delayed action' bug is rather tricky to catch in testing, because everything appears to be fine when you install the update, and it's not at all 'obvious' to anyone that an selinux-policy update might break package installs. So a few people install the update, everything appears to work fine, it fixes their bugs, they +1 the update, it gets queued for release...and then we realize we're boned.

If you're affected by the bug, the commonbugs note will help you out in fixing it up, and we're really very sorry for the inconvenience.

Longer term, automated testing should help us resolve this. We already have the proposed test case filed in Taskotron's tracker. But right now we're still full steam ahead on getting Taskotron itself up and running with the old AutoQA tests re-implemented in a sufficiently complete state to be useful for the F21 cycle, which has been our goal for a while. Once we're there, this will be one of the highest priority new test cases to write, as it will catch serious bugs that are difficult to catch with manual testing.

Aside from that...

I wrote up a post to test@ suggesting some things that QA folks can be working on during the current 'quiet' period, while Fedora 20 is released but the plans for Fedora 21 are not yet finalized (aside from the Taskotron work, of course). There are a lot of things that can usefully be done, from updates-testing work on F20 and F19 through Rawhide testing to creating new test cases and improving the release criteria and validation test cases ahead of F21. So if you're looking for something to do, dive in!

I also transferred a bunch of AutoQA test case suggestions from autoqa trac over to Taskotron phab, so they're ready once the team has time to work on new tests.

Also had a bit of time to play with my new hobby - as I mentioned on G+, I've been getting into OpenStreetMap mapping. I'm working on putting together a mass import of the City of Vancouver's open data street address dataset, which would give OSM street addresses for the whole of the city. It's been made rather easier by the previous work done by a local OSM contributor, Paul Norman, who's done some previous mass imports in the area and provided some useful tooling and figured out some of the legal issues (and has been kindly helping me out, so big thanks to him). I'm using his fork of the ogr2osm tool and basing my translation script loosely on the one he wrote for Surrey's street name data, and I've contacted the city to ask for a clarification that it's OK to use the dataset for OSM purposes - I'm hoping to be able to put the project together quite quickly after that. It's been a fun little side project, and I managed to do some stuff with regexes in Python, so...yay me?

Fonts on the web: how's that work, then?

I wrote an extremely long post about this, then decided to cut it down to the bare essentials, heavily based on Firefox. So here ya go.

The Basics

There is a sort of 'tug of war' between you (and your browser) and any given website, when you visit it, about what fonts will be used and what they'll look like.

If you're running Firefox, open Preferences and go to Content. See that 'Default font'? That's the font that will be used to display any text on the web unless the site specifies a different one. See that 'Size'? That's what size all text on the web will be, in pixels not points, unless the site specifies a different size. It's also your 'default size' - remember that, we'll come back to it later.

Now click Advanced...

Here you can pick your preferred Serif font, your preferred Sans-serif font. The line marked Proportional is basically the same thing as the settings we saw before we clicked Advanced..., shown differently - here, you specify whether your Serif or Sans-serif font is the 'default' one. (If you never touch the Advanced screen, Proportional is always set to Serif, and the Default font setting specifies the Serif font). The Size setting next to Proportional is just the same thing as the non-advanced Size setting.

If a website just says "here's some text, render it" and nothing else, it'll be rendered in your Proportional / Default font at the Size you specified. If the site says "this text is sans-serif" but nothing else, it'll be rendered in your chosen Sans-serif font at the Size you specified. Ditto "this text is serif" and "this text is monospace", except there's a separate Size setting for monospace, because sometimes people want proportional and monospace text to be different sizes, apparently.

However, websites can specify particular font faces and sizes. If you only set the settings mentioned so far, the website can override anything you say. It can say "render this text as Arial at 11px", and no matter what settings you have set, you'll see 11px Arial (or the closest font fontconfig can find to Arial - see my last post).

Now, see the setting marked Allow pages to choose their own fonts, instead of my selections above? That turns your 'polite requests' for font faces into 'demands'. If you uncheck that box, then whatever any site says about what specific font face to use, Firefox will ignore it and use your settings. All non-monospaced text will be in your Proportional / Default font, and all monospaced text will be in your Monospace font.

See the setting marked Minimum font size? That's pretty much exactly what it sounds like. Like all the other sizes on this dialog, it's specified in pixels not points. If you specify a minimum font size, Firefox will flatly refuse to render any text smaller than that size, no matter what the website asks for. Any text that would otherwise be smaller than this size will be rendered at exactly this size.

A bear trap

For a long time, I unchecked that "Allow pages to choose their own fonts" box. There is now a rather large bear trap if you do this, though. In recent years a mechanism called 'webfonts' has been invented to solve the problem where a page wants to render text in a particular font, but the user's computer doesn't have it: the page can embed the desired font, and the browser will download it and use it to render the text.

As you'd expect, the checkbox overrides webfonts: if you uncheck it, webfonts won't be downloaded or used. Some sites, however, have taken to abusing webfonts. They realized that font rendering engines are actually really good at making certain types of shapes - not just fonts - look nice on all sorts of screens. Including the 'symbolic' - monochrome / greyscale, and quite abstract - icons that are currently in vogue. If you look at the admin panel on a WordPress site, or the top and side bars on Github, or in an increasing number of other places, those icons are not images: they are, technically, text characters. The sites put their icons into their webfonts in an area not used by any other characters, and use the web browser's font rendering capabilities to make them look nice. But if you override fonts, this breaks, and all you see where those icons should be is garbage. It's easy to try this: just uncheck that box, and go look at a Github site.

Smaller bear traps

Some sites render pretty weirdly if you override their font choices with something really radically different to what they're expecting in appearance or size. So unchecking that 'Allow pages to choose their own fonts' box, or setting a minimum size bigger than 11 or 12, can cause problems aside from the 'big bear trap' on some sites, even though you may well like its effects on most sites.

Avoiding the bear traps


In my last post, I explained how to override given font faces using fontconfig. Why did I go to all that trouble? It helps avoid these bear traps. By overriding several font faces that are specified by lots and lots of websites to Cantarell, I get much the same effect as unchecking that "Allow pages to choose their own fonts" box, only with much more fine-grained control. I can choose not to override particular fonts that cause problems when they're overridden. It's a bit more work, but you get more control.


How about sizes, though? I set ''Minimum font size'' to 12, but 12px is still pretty obnoxiously small for large chunks of text you might want to read on most monitors. Setting it any bigger, though, can cause text that really needs to be small - say, labels on menus which only have a small amount of screen area available - to render too large, and mess up sites quite badly.

What you can do, if you have sites you commonly visit that are affected by tiny-font-itis - like all of fedoraproject.org (damnit, our CSS sucks) - is override them via custom CSS on a site by site basis.

...please, stop screaming, it's really not that hard. How font size setting works on websites is screamingly complex - you get into all sorts of crap about sizes specified in px, pt, em, or %age - but you don't really need to know all that stuff in most cases. What you need to know is this.

First, open Firefox and go to a Fedora wiki page - ideally, do this with a very stock font configuration, don't set Minimum font size. You may well notice the text is pretty small. How do we fix that?

Your Firefox profile is at ~/.mozilla/firefox/(garbagestring).default/ - that (garbagestring) is different for everyone. Go to that directory and create a directory called chrome (if there isn't one already). So I have a directory ~/.mozilla/firefox/sh6llx3y.default/chrome. Create a file in it called userContent.css, and put this content in it:

@-moz-document domain(fedoraproject.org) {
    * { font-size: 90% !important; }
}

Now close your browser and open it again. You should notice the text on the Fedora wiki is now rather bigger.

What we did there, very approximately, is to say to Firefox "for all pages in the fedoraproject.org domain, I really want you to render most text at 90% of the browser's default font size".

This snippet is an extremely blunt hammer, but it actually works surprisingly well with most domains affected badly by tiny-text-itis. You can copy and paste the entire thing multiple times and change the domain name, and fiddle with the size:

@-moz-document domain(fedoraproject.org) {
    * { font-size: 90% !important; }
}

@-moz-document domain(example.com) {
    * { font-size: 90% !important; }
}

@-moz-document domain(example.org) {
    * { font-size: 100% !important; }
}

In practice, most of the stuff you want to make bigger will be covered by it, and probably not too much stuff that you wouldn't want to be bigger will be covered. It's not perfect, though, and if you find it gives odd results on a given domain, you'll have to take the advanced class, where you learn a bit about how CSS works, and use Firefox's Style Editor to inspect the site's CSS, experiment with changes to it, and then implement those changes in your userContent.css. But this is not that course.


Font scaling

This is a PSYCHOPATHICALLY complex topic, and I am not going to go into all the details (I tried that in the long version, and it was a nightmare). Here's Adam's Short Cut Approach To Font Scaling:

You probably have a fairly 'typical' computer monitor somewhere. That is, a monitor with a DPI of 90-110: if you measure the physical width of your monitor's visible area - the bit where you actually see pixels - in inches and divide the number of pixels in your horizontal resolution by that number, you get a result of 90-110. E.g. I'm typing this on a monitor whose visible area is about 18.5" wide and whose horizontal resolution is 1920 pixels, and 1920/18.5 is 103dpi. Pretty much any desktop monitor you have, unless it cost you several thousand dollars or you're very rich and very nerdy and you just bought a 4k monitor, will be one of these.
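That arithmetic, as a quick shell check, using 1920 pixels across about 18.5 inches of visible screen as in the example above:

```shell
# DPI = pixels / inches, truncated to a whole number
awk 'BEGIN { printf "%d\n", 1920 / 18.5 }'   # → 103
```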

Fiddle with your desktop and browser font settings - and your CSS overrides, if you read the stuff above and got fancy - until everything looks right. Take a note of all those settings.

If all your monitors are 'typical' ones like this, you're now done - apply those settings to all your systems and be happy.

If you have any other monitors, though - you're rich and nerdy and bought a 4k monitor, or, much more likely, you own a relatively new but fairly normal laptop, with, say, a 1920x1080, 13" or 15" screen - you will find that if you apply all these settings to it, everything looks tiny.

Here's what you should do: apply all your normal settings to it anyway. Now, figure out the actual DPI of your screen, using the calculation mentioned above (or you can just cheat and use this page - it has a lot of common monitor types listed with their DPIs). Divide the DPI by 96, and you should probably get a number in the range of 1-1.6 or so. Note both these numbers down.
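As a sketch, for a hypothetical 13.3" 1920x1080 laptop - the ~11.6" of visible width is my own assumption for a 16:9 panel of that size:

```shell
# Hypothetical example: 1920 horizontal pixels over ~11.6" of width,
# then divide the DPI by 96 to get the scaling factor.
awk 'BEGIN { dpi = 1920 / 11.6; printf "dpi=%d scale=%.1f\n", dpi, dpi / 96 }'
# → dpi=165 scale=1.7
```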

If you're running GNOME, install gnome-tweak-tool at this point, and go to its Fonts page. Set the Scaling Factor to be the 1.something number you just figured out.

Now, this is theoretically the 'correct' setting for your screen. Fun fact: if you print something and hold it up to the screen, fonts denoted in point sizes should be precisely the same size on the page as they are on the screen, at this point. But in practice, you'll probably find everything looks too big: people sit close to laptop screens, so things that are the 'right' size tend to look too big. So, tweak the number down a bit. Just tweak it down in 0.1 or 0.05 increments till you find a setting where everything outside your browser looks about right.

Stuff in your browser will still look wrong at this point, but no problem! Enter this URL in your browser: about:config, and hit enter. If you've never been to about:config before it'll pop up a warning; click through it. This is the secret config interface to Firefox - you can do all sorts of stuff here. Happily, Firefox has a setting which works precisely like GNOME's scaling factor, so we're on velvet. In the search bar at the top, type 'pixelsperpx'. This should find just one setting, layout.css.devPixelsPerPx. Double-click it, and enter the same value you set for GNOME. Aaand you're done. You will want to have a relatively recent version of Firefox, though, as this setting was kind of badly broken for a while.

If you're running anything other than GNOME, there may be a setting in your desktop's Font configuration stuff which works like GNOME's Scaling Factor setting, or just a straight DPI setting somewhere. If not, you can specify a DPI at the X level still, somehow, I think. If your display is a 'normal' one - in that 90-110 range - it's not a good idea to get cute and try to set a 'precisely correct' DPI or scaling factor, though. The 96dpi convention is strong, and unless your display is seriously out of whack with it, it's a good idea to stick to it. Setting a DPI that's nearly-but-not-quite 96 tends to lead to fairly squiffy font rendering.

Can I Make Fonts In Firefox Look Exactly Like Fonts On My Desktop?

No. No, you can't. And in fact, the closest you'll get on a lot of sites is actually to leave everything at the defaults.

Let's think about how we'd do this in theory. You can set your 'default' browser font size to a size in pixels which is as close as you can manage to your desktop font size in points.

11 is a fairly typical desktop font size in points, on a 'standard' display. With the 96dpi convention, 11 points is 14 2/3 pixels. OK, let's round it up to 15.
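The conversion, for the record, is pixels = points × (DPI ÷ 72), so as a quick sanity check:

```shell
# points to pixels: px = pt * dpi / 72
awk 'BEGIN { printf "11pt at 96dpi = %.2fpx\n", 11 * 96 / 72 }'
```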

For some reason, though, the 'universal de facto standard' for the default font size in a browser is 16px. That's Firefox's default, and I think it's the default the other browsers use too.

Web designers have noticed this - that the default browser size is 'a bit big'. On most systems it's probably slightly bigger than what the user is using on the desktop.

So the 'good' web designers - who have abandoned specifying font sizes in pixels and started specifying them relative to the browser's default font size, as is advised by the W3C and other august bodies these days - often tend to set a universal scaling factor of something like 91% on their sites. If you poke through the CSS on a few sites, you'll see this coming up again and again, on the ones that don't just say 12px or something similarly insane.

What's 91% of 16px? 14.56px...pretty darn close to the 14.6-recurring px we calculated as the pixel equivalent of 11pt under the 96dpi standard.

So no, don't set your browser default size to 15px, as it'll make 'good' sites look a bit too small and do nothing at all to 'bad' sites. Just leave your browser default at 16px, get your scaling factor 'right' for your eyes, and fix the crappy sites with the 'minimum font size' setting and CSS overrides.

Computers. Who'd use 'em.

Fonts and font sizes in Fedora on the internet: hold onto your hats...

So I threw a somewhat disorganized note on some fiddling I'd been doing with my systems' font configuration on G+ earlier today. Harish Pillay asked for some more detailed notes, so I'm throwing together another gigantic blog post. Hold on to your hats, folks...

Fonts in Fedora: fontconfig

There's a rather important library for font handling in Fedora (and all other modern distros) called fontconfig. I remember the time years ago when various things didn't go through fontconfig, but these days, almost everything does. You can set quite a lot of font configuration at the fontconfig level and it'll be respected - to some extent...see later - by just about every graphical app you run.

fontconfig configures...well, really, everything about fonts at a systemwide level. If the question is 'where can I configure X about fonts?', the answer is almost always 'fontconfig'. This makes things nice and simple.

Fontconfig configuration layout

You can find Fedora's (or most any distro's) default fontconfig settings in /etc/fonts/fonts.conf and /etc/fonts/conf.d . Looking at the files in /etc/fonts/conf.d can teach you quite a lot about how fontconfig settings work. man fonts.conf has a lot of useful information too. It's important to know that fontconfig configuration files loaded earlier take precedence over ones loaded later, when they conflict.

Note the files /etc/fonts/conf.d/50-user.conf and /etc/fonts/conf.d/51-local.conf. For any user account, where ~ is the user's home directory as usual, 50-user.conf will load any files in ~/.config/fontconfig/conf.d, then the file ~/.config/fontconfig/fonts.conf (and then files in ~/.fonts.conf.d and then ~/.fonts.conf, but those last two are deprecated and should no longer be used).

/etc/fonts/conf.d/51-local.conf will load the file /etc/fonts/local.conf.

So for most local configuration changes, you can change systemwide settings in /etc/fonts/local.conf and per-user settings in ~/.config/fontconfig/fonts.conf. If you need to override any systemwide settings from files in /etc/fonts/conf.d whose filename starts with a number lower than 50 you'll need to drop a lower-numbered file in that directory, but this is pretty unusual.
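For reference, a minimal skeleton for a per-user ~/.config/fontconfig/fonts.conf might look like this (the comment marks where your own rules go):

```xml
<?xml version="1.0"?>
<!DOCTYPE fontconfig SYSTEM "fonts.dtd">
<fontconfig>
    <!-- per-user font settings go here -->
</fontconfig>
```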

Doin' Stuff with fontconfig

What did I actually do with fontconfig this morning? I created a file /etc/fonts/local.conf with this in it:

<?xml version="1.0"?>
<!DOCTYPE fontconfig SYSTEM "fonts.dtd">
<fontconfig>

<!-- Preferred serif, sans and mono fonts -->

    <alias>
        <family>serif</family>
        <prefer><family>Droid Serif</family></prefer>
    </alias>
    <alias>
        <family>sans-serif</family>
        <prefer><family>Cantarell</family></prefer>
    </alias>
    <alias>
        <family>monospace</family>
        <prefer><family>Droid Sans Mono</family></prefer>
    </alias>

<!-- Replace some crappy fonts with ones I like -->

    <match target="pattern">
        <test qual="any" name="family"><string>verdana</string></test>
        <edit name="family" mode="assign" binding="same"><string>Cantarell</string></edit>
    </match>
    <match target="pattern">
        <test qual="any" name="family"><string>arial</string></test>
        <edit name="family" mode="assign" binding="same"><string>Cantarell</string></edit>
    </match>
    <match target="pattern">
        <test qual="any" name="family"><string>times</string></test>
        <edit name="family" mode="assign" binding="same"><string>Droid Serif</string></edit>
    </match>
    <match target="pattern">
        <test qual="any" name="family"><string>helvetica</string></test>
        <edit name="family" mode="assign" binding="same"><string>Cantarell</string></edit>
    </match>

</fontconfig>

Mostly, I set my preferred fonts for the standard 'generic' families, and I changed the aliases for some common font names. What does this mean? Read on!

fontconfig (and the font world in general, really) has a concept of 'font families' which I won't really go into in detail. All we really need to know right now is this: as most people are probably familiar with, there are three broad categories of type into which almost all of the text we deal with on a daily basis falls - serif, sans-serif and monospace. Serif fonts have those little curly bits, sans-serif fonts don't, and monospace fonts are, well, monospaced - that is, all their characters have the same width, so monospaced text always lines up nicely in columns, which you want, sometimes. There's an obvious layering violation here, in that monospaced fonts can be either serif or sans-serif, yet we're treating those three attributes as if they were alternatives to each other - but ah well, it's the world we live in.

If a program (or other thing) doesn't try to specify a particular font to render its text in, it will either not specify anything at all and leave it to the system to pick a font, or it might say 'this text should be sans-serif' or 'this text should be serif' or 'this text should be monospace'. When this happens, if nothing at a higher level overrides it, fontconfig will decide what font is used.

The stock configuration for Latin text in Fedora can be seen in /etc/fonts/conf.d/60-latin.conf. It specifies several particular 'families' - which you can really just think of as individual fonts for this purpose - for each of the three major generic families. The first will be used if it exists, the second if the first doesn't exist, and so on. I think this fallback order is also used if a given character is missing in a font - so if a character is missing in the first font in the list, fontconfig will check if it's in the second font in the list and get it from there if so, and so on.

If you take a look, you can see that Fedora's system-wide stock 'serif' font is Bitstream Vera Serif, our system-wide stock 'sans-serif' font is Bitstream Vera Sans, and our system-wide stock 'monospace' font is Bitstream Vera Sans Mono, with first fallbacks to DejaVu, which are derived from Bitstream but add a lot more character coverage. This is a pretty common stock config for Linux distros these days. Let me tell you, the day the heroes at Bitstream (now sadly deceased - the company, not the heroes, AFAIK...) open sourced those fonts they did the F/OSS community a huge favour. We had some pretty poor fonts before then. Hands up if you remember Luxi.

But Bitstream fonts aren't actually my favourites. I'm very, very partial to the GNOME 3 stock font, which is a sans-serif font called Cantarell. I think it's awesome. I don't like serif fonts in general but if I have to look at one I prefer Droid Serif to Bitstream, and ditto for monospace - I quite like Droid Sans Mono. The Droid fonts were created for Android (though Android has now switched to a font called Roboto) and are available in the packages google-droid-sans-mono-fonts, google-droid-serif-fonts and google-droid-sans-fonts in Fedora.

So what the first chunk of that local.conf does is enforce my preferences: because of the order in which the config files are read, the effect is that my preferred fonts are put at the top of the lists from /etc/fonts/conf.d/60-latin.conf. I copied the skeleton from that file just to make it easy to edit - it's always a pain writing XML from scratch.

The second half of the file does something different. Here's the question: what happens if a program wants to render some text in a font you don't have?

Of course, the answer is determined by fontconfig. If you check the stock rules, various files have directives that 'alias' various commonly-used fonts that aren't always (or often) present on Linux systems to other fonts. The main one is /etc/fonts/conf.d/30-metric-aliases.conf , which makes an attempt to map a lot of commonly used fonts to their closest metric equivalents.

What's a metric equivalent? Well, it's pretty simple: it's a font whose characters are all the same size as another. Some F/OSS fonts exist explicitly to be metric equivalents of non-F/OSS fonts. The Liberation family, for instance, is designed to provide metric equivalents of the stock Windows fonts.
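To give an idea of what such an alias looks like - this is an illustrative sketch in the style of 30-metric-aliases.conf, not copied verbatim from the file - mapping Arial to its Liberation metric equivalent goes something like this:

```xml
<!-- illustrative sketch of a metric alias, 30-metric-aliases.conf style -->
<alias binding="same">
    <family>Arial</family>
    <accept>
        <family>Liberation Sans</family>
    </accept>
</alias>
```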

If you think about it for a second, it's obvious that the actual size of the characters in a given font is going to be very important in some contexts. The classic example is a word processed document. Say you write a document on one computer, carefully laying it out such that it fits perfectly on a single page...then you open it on another computer and it doesn't any more! Why? One of the common reasons is that the second computer doesn't have the font you used to write the document, and the font it substitutes is not a perfect metric equivalent. If, say, the characters in the 'replacement' font are 10% wider than those in the original, the text will wind up taking nearly 10% more space than it did on the other computer, and suddenly your document doesn't fit precisely on one page any more.

Hence: metric equivalents. Metric equivalents also usually try to use the same broad 'style' as the fonts they're replacing, too - obviously they can't look the same, as that'd be copyright infringement, but if the original is a very conservatively-styled font, the replacement likely will be too, for instance.

So, yeah, stock fontconfig generally tries quite hard to replace commonly-used fonts with metric equivalents. But I don't actually run into cases where this is important to me very often. And web sites often specify fonts like Arial and Verdana and Helvetica that I don't have, and don't much like, so I wouldn't install them anyway. I don't usually like their metric equivalents either.

So what the second part of my config does is override fontconfig's aliases for some fonts. Attentive readers may be wondering how this works given that metric-aliases.conf is numbered 30 - well done, a cookie for you! The answer is that the mechanism I use is a 'stronger' one than the one 30-metric-aliases.conf uses, so it wins out even though it gets loaded later. 30-metric-aliases.conf uses aliases, whereas my directives actually effectively 'rename' the fonts wherever they're encountered - they say 'when you run across verdana, just pretend it's Cantarell', and so forth. That means the aliases don't kick in, because the font basically isn't considered to be verdana any more.

You can do all sorts of stuff with fontconfig. Really, I'm not joking - it's incredibly powerful, and I only just scratched the surface. I actually have another bit in my config, which does this:

<!-- Use medium hinting for Droid Sans Mono only -->

    <match target="pattern">
        <test qual="any" name="family"><string>Droid Sans Mono</string></test>
        <edit name="hintstyle" mode="assign"><const>hintmedium</const></edit>
    </match>

What that does is use medium strength hinting just for the font Droid Sans Mono. Hinting is another insanely complex area of fonts, but at a very high level it more or less means 'when rendering this font, how much should the font rendering engine mess with it to make it the precise size and shape requested'. Really, don't go into any more detail unless you don't value your sanity or want to become a font designer, but it's somewhat useful to know it exists and it can affect how your fonts look rather drastically. In GNOME tweak tool, you can set a default hinting strength for all fonts, and see the changes on the fly. I find 'slight' hinting is best for almost everything on my desktop, but Droid Sans Mono looks a bit wonky with 'slight', and this config snippet overrides just that font to use 'medium' and lets everything else use what GNOME says. (I'm actually surprised GNOME's setting doesn't override fontconfig's in this case, but layering of font configuration is another rather complex area, as we'll see to some degree later).

That's just an example, anyway, you can do all sorts of crazy stuff.

Next post: how fonts work on the Web, and how to control them!

Please build this for me: GNOME sync

So, this is really just a random feature request being thrown out to the cold unfeeling lazyweb. But hey, it's easy to type.

GNOME folks, you have GNOME Online Accounts, which is one of the best features of GNOME 3 - a really nice and robust desktop-wide generic mechanism for pulling stuff from and pushing stuff to remote accounts. Awesome.

Now how much more awesome would it be if one of the account types was "Firefox Sync server", and GNOME could sync its configuration to one?! Oh god, I'm drooling already.

Firefox is one of my primary environments anyway, and Firefox Sync is so damn awesome for that: any time I get any new device or something that runs Firefox, all I have to do is hook up Firefox Sync and it starts behaving just like all my other devices that run Firefox. My extensions are there. My history is there. My configuration is there.

Wouldn't it be awesome if GNOME was the same? All I'd have to do is enter my Sync account details to GOA and it'd pull down all my GNOME configuration. Heck, it could pull all my other online accounts from it. Evo would have my mail server and identities configured. gedit would have my snippets and config all ready to go. That would make me so damn happy. Hell, I'd put up a damn bounty on it. Anyone? Anyone?

How to do manual multi-boot configuration with Fedora

One thing that always seems to be a pain point with Linux distributions and installers is handling multi-boot. There are some fairly notorious cases, and a lot more less commonly-considered ones which people making distributions and installers nevertheless have to consider.

Prior to Fedora 18, you could install Fedora's bootloader to a partition header, which was useful for a multi-booting strategy known as chainloading, where a 'master' bootloader in the MBR could 'chainload' slave bootloaders installed to partition headers. From Fedora 18 onwards this is no longer possible through the Fedora installer UI. However, there are other ways you can configure a manual multi-boot setup, and I thought it might be useful to write some of them down.

Setting up chainloading manually

Yes, you can actually still achieve a chainloading setup when doing an interactive installation of Fedora, it just takes a bit more work. It's crucial to know that, during Fedora installation, you can always access a root console on tty2 with 'ctrl-alt-f2'. You have a fairly complete environment, and if the installation has reached a point where the filesystem for the installed system has been created and mounted, you can access it at /mnt/sysimage. So our strategy here is simply to tell Fedora not to install a bootloader, complete the installation, and then do the bootloader installation ourself.

  1. Begin Fedora installation as usual.
  2. Enter the Installation Destination spoke.
  3. Click on Full disk summary and bootloader... at the bottom left
  4. Select the disk with a check and click Do not install bootloader
  5. Click Close, and proceed with installation as desired
  6. When installation is complete and Fedora tells you to reboot, don't! Instead hit ctrl-alt-f2, and run the following commands:
  7. chroot /mnt/sysimage
  8. echo "GRUB_DISABLE_OS_PROBER=true" >> /etc/default/grub
  9. grub2-install --no-floppy --force /dev/sda1
  10. grub2-mkconfig -o /boot/grub2/grub.cfg

Now we're done with the Fedora side of things. It's safe to reboot from the installer. Boot to the OS that owns the 'master' bootloader, on the MBR. Now we just need to add an entry to the configuration of the 'master' bootloader, assuming it's grub2. You may also want to run echo "GRUB_DISABLE_OS_PROBER=true" >> /etc/default/grub on the 'master' OS: what this command does is prevent grub2-mkconfig using its automatic multi-boot smarts, where it tries to add entries for other installed OSes to the bootloader configuration. If the OS that controls the configuration for your master bootloader uses grub2-mkconfig to update its configuration file, you should add the new entry to /etc/grub.d/40_custom (or similar - its location may differ between distros) then re-generate the config file.

menuentry "Fedora (chainload)" {
    insmod ext2
    insmod chain
    set root=(hd0,msdos1)
    chainloader +1
}
(hd0,msdos1) is the first partition on the first hard disk (unless it's GPT-labelled, in which case it would be (hd0,gpt1)) in grub2 lingo. If you installed Fedora somewhere else and wrote the Fedora bootloader somewhere else, of course, adjust accordingly. You can use a directive like search --set=root --fs-uuid=[UUID] --hint hd0,msdos1 instead of set root to identify the partition by UUID, if you like.
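Putting that together, a UUID-based version of the entry might look something like this ([UUID] is a placeholder - you can find the real value with blkid /dev/sda1):

```
menuentry "Fedora (chainload, by UUID)" {
    insmod part_msdos
    insmod ext2
    insmod chain
    # [UUID] is a placeholder for the partition's filesystem UUID
    search --no-floppy --fs-uuid --set=root [UUID]
    chainloader +1
}
```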

That should be all! Now, booting the system should show you a Fedora (chainload) entry which will load Fedora's bootloader, from which you can of course boot Fedora.

Setting up configfile inclusion manually

If your 'master' bootloader is grub2, you can use an alternative approach known as configfile inclusion. It works very similarly to chainloading, only instead of actually loading the 'slave' bootloader, the master copy of grub2 just reads a different configuration file. So instead of actually installing a 'slave' OS's grub2 to a partition header, we just make sure each slave OS has a grub2 config file, and adjust the 'master' grub2's configuration accordingly.

  1. Begin Fedora installation as usual.
  2. Enter the Installation Destination spoke.
  3. Click on Full disk summary and bootloader... at the bottom left
  4. Select the disk with a check and click Do not install bootloader
  5. Click Close, and proceed with installation as desired
  6. When installation is complete and Fedora tells you to reboot, don't! Instead hit ctrl-alt-f2, and run the following commands:
  7. chroot /mnt/sysimage
  8. echo "GRUB_DISABLE_OS_PROBER=true" >> /etc/default/grub
  9. grub2-install --no-floppy /dev/sda1 --grub-setup=/bin/true
  10. grub2-mkconfig -o /boot/grub2/grub.cfg

Now we're done, and we can reboot and edit the 'master' bootloader's configuration file. You will notice these steps are almost identical to the chainloading guide. The change is small, but crucial: passing --grub-setup=/bin/true to grub2-install causes it not to actually write the bootloader to the target location, but do everything else it would usually do, including set up some fonts and modules and things that we need. The grub2-mkconfig step, of course, creates the configuration file we will later chainload.

As per the chainloading guide, you may want to run echo "GRUB_DISABLE_OS_PROBER=true" >> /etc/default/grub on the 'master' OS, and you will probably want to add the new entry to /etc/grub.d/40_custom (or similar - its location may differ between distros) then use grub2-mkconfig to re-generate the config file.

menuentry "Fedora (configfile)" {
    insmod ext2
    set root=(hd0,msdos1)
    configfile /grub2/grub.cfg
}
Again, this is very similar to the chainloading example, except we're using the configfile directive instead of chainloader. That should be all! Now, booting the system should show you a Fedora (configfile) entry which will load Fedora's bootloader configuration file from the 'master' copy of grub2, from where you can of course boot Fedora.

The Future

Well, I hope that was useful. I'm hoping for Fedora 21 we can set it up so that choosing not to install a bootloader still generates a configuration file, rather than skipping all bootloader setup entirely. That would make this easier - all you'd have to do is edit the configuration of the 'master' bootloader. But we have to consider whether there are use cases for completely skipping bootloader configuration - not even generating a configuration file - first. Knowing this topic...there probably are. (If you know of any, please shout.)

Doings (including a not-too-short and possibly wildly inaccurate history of PHP class loading, for some reason), and OpenID is back (for now)


Edit: As you can probably see, I also decided to switch up my blog theme, finally. Wanted something minimalist and single-column, and this looks decent. I also put a bit of effort into getting half-decent rendering of preformatted/code blocks, and installed an extension to let me write blog entries in Markdown, yay. Sorry for any weirdness you noticed while I was hurriedly rewriting this post and editing the theme.

So I took a look at the bug that seemed to be breaking the OpenID Wordpress plugin for me - php-openid bug here, Wordpress plugin bug here - and found what looks like a simple fix for it, and it seems to work for me, so I've turned OpenID back on. You should be able to post comments without recaptcha by logging in with a valid OpenID now. (Also, I can use as my OpenID again...)

I've been posting miniblogs to G+ lately, which is bad of me, but it's hard to resist with the F/OSS community there. You can find me on G+ here (separate browser profile recommended when using evil social networks!)

On that topic - I wrote a G+ entry on running things like Facebook, G+ and other sites that do privacy-invading things in a separate profile, conveniently.

I spent about half of the Red Hat holiday shutdown running around fighting fires in Fedora 20 and Rawhide. Mostly I made things better, I think. Well, I broke Rhythmbox, but then I made it better again. I also fixed OwnCloud, fixed some GNOME crashing, fixed gtkpod crashing on start, and fixed the problem with disabling SELinux via the config file not working any more.

The second half of the holiday shutdown I spent, for some completely inexplicable reason, working on OwnCloud - I started out by building OC 6 for Fedora 20 so I could deploy it on my own OC instance and test it, but things sort of branched out crazily from there...

OC is a PHP web app. If you've never had cause to look into that sausage factory - you lucky, lucky person - PHPland is one of those development communities where it's become standard practice to bundle libraries you depend on. In fact, PHP is possibly worse than Java in this regard: you should count yourself lucky if they even bother bundling the entire library, or indeed telling you where the code came from at all. The standard approach of a PHP developer who needs a bit of code to do something and doesn't want to write it appears to be to Google around a bit, find a random PHP file containing a function that does what they want, dump a copy of it into their source tree, and start using it.

I exaggerate only slightly. I think if you show up at a PHP conference and start talking about things like library naming conventions and interface stability and conventions concerning code re-use they'd look at you like you'd sprouted an extra head and run you out of town on a rail.

So, anyway, the major challenge in packaging PHP web apps (and PHP stuff in general) is unbundling. Fedora, being a fairly traditional Linux distribution, is staffed by people who very keenly understand the importance of standardized use of shared resources, and has lots of strict and very sensible policies about same. Mapping PHP development practices onto the policies of Fedora (and most other distributions) is...a fun exercise.

Fedora packages are usually not allowed to bundle shared libraries. If the thing you're packaging wants to use an external library, the policy is that that external library should be packaged as an independent Fedora package, and the dependent package should use the packaged copy of its dependency. Bundling is endemic in PHP packages, and often done very poorly (see above). Usually 90% of the work of packaging anything substantial written in PHP is unpicking the external dependencies.

So it is with OwnCloud. Its use of external libraries is as chaotic as most PHP projects, though to OC's credit, at least its developers recognize that this is a problem and are responsive to submissions to improve the situation. If you take a look at the OwnCloud 6.0.0a tarball, you'll find directories named '3rdparty' tucked away here and there:


These contain OC's external dependencies (some PHP, some Javascript, some CSS, and some stuff like fonts). There are...a lot of them. We care more about unbundling the PHP than the Javascript or the CSS1, but that still leaves a lot. As with most PHP projects, there is some fairly gross insanity in there, often related to nested bundling: PHP projects often bundle things which bundle other things...sometimes the same thing as is bundled by some other thing.

OwnCloud 6.0.0a, for instance, contains two copies of dompdf and one of tcpdf. It contains two copies of Doctrine's cache component - one in 3rdparty/doctrine/common/lib/Doctrine/Common/Cache , one in apps/files_external/3rdparty/aws-sdk-php/Doctrine/Common/Cache (which as you can see is bundled by aws-sdk-php which in turn is bundled by an OwnCloud app) - which are nearly, but not quite, the same. It contains version 2.2 of Symfony's routing component in 3rdparty/symfony/routing , version 2.3 of Symfony's console component in 3rdparty/symfony/console, and God-knows-what version of Symfony's classloader and event dispatcher in apps/files_external/3rdparty/aws-sdk-php/Symfony (again, nested via aws-sdk-php). It contains an entire pure PHP Unicode framework - which it uses unconditionally (despite the fact that standard PHP extensions like intl are likely to be available and perfectly capable of doing the job in the majority of cases) to set a UTF-8 locale and do Unicode string normalization. And there's probably more I haven't found yet.

The main OwnCloud package maintainer, Gregor Tätzner, has done yeoman's work on unpicking all this stuff over the years, and with 5.x it was pretty close to being fully unbundled, but there's some new stuff to deal with in 6.x, and some things that still need working on. So I spent some time helping out with that.

My instinctive response to a lot of PHP bundling is "oh god can we kill that with fire?", so rather than just methodically working through bundled stuff and trying to package it for Fedora (the common unbundling approach), I tend to wind up trying to figure out why the code is bundled in the first place. This tends to lead me to poking at various bits of the upstream code, and from there with OwnCloud I wound up sidetracked into submitting various improvements to upstream.

Most of these were fairly trivial - this one looks like a big pull request, for instance, but actually it just tweaks the way several OC apps load external libraries so that they can be more easily unbundled without patching by downstream distributions2.

But there was one where I kind of surprised myself. Here's where it all started: PHP is a class based language. If you want to use something (a function, usually) that's in a class that's not actually a part of the file your code is in, you're going to need to load the file that contains that class somehow.

This, in PHPland, is...a topic with a history. Disclaimer: the following is a likely-deeply-flawed explanation of class loading in general and in PHP in particular which is the result of my four day self-taught crash course in PHP class loading. It incorporates a not-so-brief history of quite a lot of PHP development. If you know much about class loading and/or the history of PHP, you can happily either skim it, skip it, or (preferably) read it and explain to me what I got wrong. If you're as dumb as me, feel free to read the following, feel like you learned something, and go break someone's system. You're welcome!

Class loading background

The 'primitive' way of doing class loading is just to do it manually. In PHP, you can include some other source file with the include, include_once, require and require_once statements. So, the dumb way to do class loading is just to make sure your hierarchy of includes works manually. You still see this in many projects, it's not at all unusual.

It does kind of suck, though, and is often the source of silly bugs and unnecessary handiwork. So over time, there have been various attempts to do something better.

Classes are usually used, whether as the result of conscious planning or just organic development, to define some sort of namespace.

In layman's terms, imagine you've got ten pieces of code (could be files within a project, could be different projects, doesn't really matter). They all need a function that strips input in some way or another. Maybe they all call it strip_text().

If you want these pieces to share code in any way, you've now got a bit of a problem, because it's almost certainly the case that not all of those strip_text() functions actually do the same thing. But strip_text is a perfectly decent name for all of them!

When we talk about namespacing we're really just talking about this kind of problem. Say one of the pieces is called foo, and one is called bar. We need a way to say 'this is foo's strip_text() function, and this is bar's strip_text() function'. And that's the Idiot Monkey Guide To Namespacing.
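As a hypothetical PHP sketch (the class and function names are invented for illustration), classes-as-namespaces looks like this:

```php
<?php
// Two classes acting as namespaces: each project gets its own strip_text()
class Foo {
    public static function strip_text($text) {
        return strip_tags($text);                  // foo's idea of 'stripping'
    }
}

class Bar {
    public static function strip_text($text) {
        return preg_replace('/\s+/', ' ', $text);  // bar's idea of 'stripping'
    }
}

// No collision: the class name disambiguates the two functions
echo Foo::strip_text('<b>hi</b>');   // "hi"
echo Bar::strip_text("hi   there");  // "hi there"
```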

Like anything else people do, there are inevitably five thousand different ways you can do this, with passionate arguments about which is the right one and why all the others are awful and probably cause cancer.

PHP: The Early Years, and PEAR

PHP, per se, doesn't enforce one. For some time, it didn't even endorse one. You put objects in classes, and you named the classes. How you named the classes, and how they related to the layout of your source tree, was your own business. So, of course, everyone named them differently, and laid out their source trees differently.

This is kind of a problem, especially with reusable components. If you're writing code and you want to use several external libraries, it's kind of a pain in the ass to keep track of how each one names its classes and lays out its source tree, to make sure you include the right files and don't ever wind up with namespace collisions or anything like that. So long as there is no standard or widely-accepted convention or, generally, coherence at all in naming classes or laying out source trees, it's also almost impossible to come up with anything cleverer than manual inclusion of required files.

Still a long way back, but after PHP had been around for a while, PEAR happened. PEAR was one of the first attempts to impose some kind of order on the chaos of code re-use in PHPland, and was really the only game in town for a long time. PEAR provided a repository, hosting system, and a set of conventions for PHP libraries. It's the conventions we're interested in here, because (AFAIK) PEAR provided the first widely-observed standard or convention for PHP namespacing.

As PHP still didn't have any actual namespacing features at the time PEAR came about, it defined a standard for namespacing via class names. Basically it said classes should be named in the style Project_Name_Subclass, and stated that "The PEAR class hierarchy is also reflected in the class name", without AFAICS quite defining what the "PEAR class hierarchy" is (in practice I believe this was referring to PEAR's standard top-level class prefixes like Crypt_, Auth_ and Net_).

In PEAR, it came to be standard practice - and was later made a policy - to reflect the class hierarchy in the layout of the source tree as delivered by PEAR. So putting together PEAR's package naming, class naming and source layout conventions, you get the quite nice structure you see in /usr/share/pear on a Fedora system. For instance, /usr/share/pear/MDB2.php is part of the "MDB2" package and defines the class MDB2. /usr/share/pear/MDB2/Extended.php is also part of the "MDB2" package and defines the class MDB2_Extended. /usr/share/pear/Net/Curl.php is part of the "Net_Curl" package and defines the class Net_Curl. And so on.
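The PEAR convention boils down to one mechanical rule, sketched here as a hypothetical helper (this isn't a real PEAR API, just an illustration of the mapping): replace underscores in the class name with directory separators and append .php.

```php
<?php
// Sketch of the PEAR convention: underscores in the class name
// map to directory separators in the source tree.
function pear_class_to_path($class) {
    return str_replace('_', '/', $class) . '.php';
}

echo pear_class_to_path('MDB2'), "\n";           // MDB2.php
echo pear_class_to_path('MDB2_Extended'), "\n";  // MDB2/Extended.php
echo pear_class_to_path('Net_Curl'), "\n";       // Net/Curl.php
```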

The Chaotic Interregnum

Ah, but that's just the first part of our story! Over time, a few things happened. More and more PHP development started happening outside PEAR, for a few reasons, which can be summarized as it erring too far on the side of maintaining central control over things (making it hard to get stuff added or make major changes), not keeping up with the times (they moved to SVN in 2010...they're still moving to git) and not sufficiently accounting for the trend towards dependency bundling (in which regard I'm 100% on their damn side).

This post illustrates what, I think, the 'typical PHP developer' saw as 'the problems with PEAR' (though right around the line "The nail in the coffin for me with PEAR is one of the biggest bug bears of the Gem system: system-wide installation" I develop an irresistible urge to stab the author in the head with a rusty fork).

Also, PHP itself got more mature. Very significantly, PHP 5 introduced frameworks for namespacing and class autoloading. Naturally, as we're dealing with humans here, the now-official way of doing namespacing in the PHP language was not the same as the one existing standard way of doing namespacing in the most widely-used external repository of reusable PHP code.

Since PHP 5.3, you can define namespaces (which can contain classes, interfaces, functions and constants) in PHP by using the namespace keyword. A backslash - \ - is used as a hierarchy separator.
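For illustration, here's what the earlier hypothetical foo/bar collision looks like with real PHP namespaces (the bracketed syntax, since both namespaces share one file here):

```php
<?php
// Sketch: two namespaces, each with its own strip_text(), using
// PHP 5.3+ syntax. The backslash separates hierarchy levels.
namespace Foo\Text {
    function strip_text($text) {
        return trim($text);
    }
}

namespace Bar\Text {
    function strip_text($text) {
        return strip_tags($text);
    }
}

namespace {
    // Global code: call each one by its fully qualified name.
    echo \Foo\Text\strip_text("  hello  "), "\n";   // hello
    echo \Bar\Text\strip_text("<i>hello</i>"), "\n"; // hello
}
```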

So now we had a sort of ersatz namespacing convention originating in PEAR, based around class names and using underscores as a separator, and a formal namespacing feature of PHP itself, encapsulating class names and using backslashes as a separator.

This was dealt with in PHPland in just about precisely as careful, considered and organized a fashion as you would expect.

PHP 5 also introduced a framework for doing automatic class loading, which is where this post has been going - slowly, oh so very slowly - for a long time.

If you remember a few thousand words back, we noted that the manual approach to including the appropriate files for the external classes you want to use kind of sucks. Class autoloading is The Solution to this.

Basically, if you have a reliable relationship between class names and the layout of the source files, you don't need to do it manually. If you know that the class Foo_Bar is in the file /some/where/Foo/Bar.php and the class Monkeys_Chickens_Scrabble is in /some/where/Monkeys/Chickens/Scrabble.php...well, there's a pretty obvious possibility to add some convenience.

PHP 5 introduced a standard mechanism for implementing autoloading. If you defined a function called __autoload and then tried to invoke a class which was not already present in the files that had been included so far, instead of just dying with an error like it used to, PHP would call __autoload(classname), then try again. If your autoloader function successfully included the correct file for the class, you won. In PHP 5.1.2, they refined this design to allow multiple autoloaders to be stacked (yes, I know, I'm cringing too) with the spl_autoload_register function.
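A minimal sketch of an spl_autoload_register() autoloader using the PEAR-style underscore convention - the lib/ base directory is an assumption for illustration:

```php
<?php
// Minimal sketch: map a class name like Foo_Bar to lib/Foo/Bar.php
// and include it on first use. The lib/ base directory is made up.
spl_autoload_register(function ($class) {
    $path = __DIR__ . '/lib/' . str_replace('_', '/', $class) . '.php';
    if (is_file($path)) {
        require $path;
    }
});

// class_exists() triggers registered autoloaders; since lib/Foo/Bar.php
// doesn't actually exist in this sketch, the lookup just falls through:
var_dump(class_exists('Foo_Bar'));  // bool(false)
```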

Anyhow, point is, there were now autoloading and namespacing features of PHP itself, and some large, influential and forward-thinking PHP projects started using them. This starts to stretch my search fu - it helps to have been around at the time - but Symfony, for instance, appears to have grown a basic autoloader in late 2006. Zend and Doctrine have had autoloaders for a long time, too. Smaller PHP projects started to grow their own.

Unfortunately, there was, at this time, no standard in the space, and the different implementations were often incompatible with each other. Not all projects used namespacing the same way, and not all projects mapped namespace and class names to file paths in the same way. PEAR would have been the most likely centre for some sort of standardization to develop, but PEAR was already on its way out at this point. Major frameworks were not part of PEAR, not delivered using PEAR's tools, and didn't comply with PEAR's standards. PEAR never even defined a namespacing standard.

Fast forward a few years and you have the typical PHP chaotic mess. This is, I think, why PHP 5.1.2 let you stack autoloaders. Multiple projects were using different conventions for namespacing, class naming and filesystem layout, and implementing their own autoloaders. If you wanted to depend on two projects which didn't use the same conventions, you needed to be able to use both their autoloaders.

OwnCloud, to come all the way back to where we started for a minute, implements its own autoloading for its own classes using different conventions to any other project I've seen, implements autoloading for one or two of its bundled dependencies with conditionals in its own autoloader implementation, and uses the different autoloaders of some of its other bundled dependencies as an entry point to those dependencies (e.g. php-opencloud).

For instance, remember I noted earlier that OwnCloud core includes some bits of symfony, while one of the official OwnCloud apps bundles php-aws-sdk which itself bundles other bits of symfony? Well, OwnCloud uses its own autoloader to autoload the bits of symfony it bundles directly, but the copy of php-aws-sdk bundled by one of the OC apps uses its bundled copy of symfony's autoloader both to load the bits of symfony it bundles and as an autoloader for php-aws-sdk itself (confused yet?)

A New Standard Emerges: PSR-0

Finally someone tried to do something about this kind of mess, and lo, PSR-0 was born!

(Boy, I recall the halcyon days of last Thursday or so when I first laid eyes on the PSR-0 page; I was still piecing together a lot of this background. It's a whole lot easier to understand in context.)

PSR-0 is presented as an autoloading standard, but in effect it's really a combined standard for doing namespacing, class naming and filesystem layout in order to enable simple cross-compatible autoloading. It's defined by a group called PHP-FIG, the PHP Framework Interoperability Group, which is a consortium of major PHP frameworks and libraries, including Symfony, Doctrine, Zend and other big hitters. The main point of PHP-FIG is to define standards for these big frameworks to interoperate with each other, but given its size, anything it does tends to be watched pretty closely by - and sometimes adopted by - PHPland as a whole, it seems.

What PSR-0 really does in practice - as it looks to this monkey from the outside, anyway - is take the approach Symfony was already taking in practice and boil it up with a bit of backwards compatibility with PEAR. What I think they really wanted PSR-0 to say was this:

Define namespaces and put classes in the namespaces. The path to the class monkeys in the namespace Foo\Bar is Foo/Bar/monkeys.php .

But they wanted PSR-0 compliant autoloaders to be backwards compatible with projects that still use a layout based on the PEAR standards (remember I explained those?), so they baked PEAR compatibility into the standard. This adds a fair bit of complexity, mainly because of the different separators, \ and _. It can't simply unconditionally treat _ as a directory separator because there could be underscores in namespaces. If you define the namespace \Foo_Bar\Monkeys_Love_Peanuts and put the class Really inside it, the filesystem path should be /Foo_Bar/Monkeys_Love_Peanuts/Really.php , not /Foo/Bar/Monkeys/Love/Peanuts/Really.php. So without ever entirely clearly explaining why (which made figuring this out fun...) the PSR-0 standard ensures PEAR compatibility by treating an underscore as a directory separator in class names but not in namespaces.

For a given tree of PHP files to be "PSR-0 compliant", it must define at least one level of namespacing, and be laid out on the filesystem such that the file implementing a class maps to the namespace in which the class resides and the name of the class, with \ taken as a directory separator in the namespace and _ taken as a directory separator in the class name. PEAR-compliant trees aren't technically "PSR-0 compliant" as they have no namespacing, but PSR-0 is inherently backwards-compatible with the PEAR standards: any PSR-0 compliant autoloader will be able to autoload classes from a PEAR-compliant tree.
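Those translation rules can be sketched as a hypothetical helper - this isn't code from the PSR-0 reference implementation, just an illustration of the mapping:

```php
<?php
// Sketch of the PSR-0 translation rules: backslash is a directory
// separator everywhere, but underscore is one only in the final
// class name, never in the namespace part.
function psr0_class_to_path($class) {
    $class = ltrim($class, '\\');
    $namespace = '';
    $lastSlash = strrpos($class, '\\');
    if ($lastSlash !== false) {
        // Namespace part: only backslashes become separators.
        $namespace = str_replace('\\', '/', substr($class, 0, $lastSlash + 1));
        $class = substr($class, $lastSlash + 1);
    }
    // Class name part: underscores become separators too.
    return $namespace . str_replace('_', '/', $class) . '.php';
}

echo psr0_class_to_path('Foo\\Bar\\monkeys'), "\n";
// Foo/Bar/monkeys.php
echo psr0_class_to_path('Foo_Bar\\Monkeys_Love_Peanuts\\Really'), "\n";
// Foo_Bar/Monkeys_Love_Peanuts/Really.php
echo psr0_class_to_path('Net_Curl'), "\n";
// Net/Curl.php - plain PEAR-style names still work
```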

Are We There Yet?

So now we have two conventions for class naming and source tree layout that make it practical to write an efficient and not overly complex autoloader implementation, and a standard which encapsulates them both. It took a while, and a lot of background, but we've arrived somewhere sane, haven't we? Oh god, please say we have.

Well, no, sadly, not entirely. No. That'd (still) be too easy. For a start, PSR-0 adoption seems to have been solid but nowhere near universal. Some people really don't like it. That was kind of a random Google hit, but in practice, there's still a lot of code out there in the wild that isn't either PEAR or PSR-0 compliant. So, that sucks.

But less randomly and more structurally, there's one more big upheaval in PHPland which affects this whole shebang, and we haven't covered it yet. If he's still reading at this point, my good friend Remi Collet is about to reach for his revolver, because it's called Composer (which comes in a package deal with its sidekick, Packagist).

Is It A Bird? Is It A Plane? No, It's Composer!

Taken together, Packagist and Composer are basically an attempt to replace PEAR, while being much much more friendly to library bundling. In fact, Remi (while waving his revolver) describes them as a bundling machine.

Composer is more or less explicitly a framework you stick into your PHP project to handle your external dependencies. It is designed such that you put Composer in your project, tell it what versions of what external dependencies you want to use, and Composer pulls copies of those into your source tree (from Packagist, which is its actual repository) and provides you with a bunch of convenience functions for updating them and autoloading them.

The metadata for a Composer-delivered project includes a spot where you can define how your project should be autoloaded - you tell it whether you're PSR-0/PEAR compliant, PSR-4 compliant (I'm coming to that, don't worry), or use some other wacky scheme, and Composer takes care of the rest. If you're writing a PHP project and you consistently use Composer to handle your external dependencies, you can pretty much just go ahead and invoke whatever classes you want from those deps and rely on Composer to handle the details for you.
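For illustration, a hypothetical composer.json along those lines - the project name, dependency version and src/ layout are all made up:

```json
{
    "name": "example/my-awesome-php-project",
    "require": {
        "symfony/routing": "~2.4"
    },
    "autoload": {
        "psr-0": { "MyProject\\": "src/" }
    }
}
```

With that in place, Composer fetches the dependency into vendor/ and generates an autoloader covering both it and the project's own src/ tree.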

If you're writing a PHP library or framework or whatever, you define some metadata in a json file and bung it into Packagist, and you're now delivered by Composer. It's all set up to be very low-bar-to-entry stuff.

Composer is a pretty good fit for today's favourite buzzwords - it's very decentralized and works well with a git-style workflow, for instance. My impression is that it's rather taken off like wildfire. Composer has become the primary upstream distribution channel for quite a lot of PHP libraries and frameworks (sometimes to the point where if you download a tarball, you just get a frozen copy of a Composer distribution of that project).

This comes with benefits and drawbacks, from a distribution point of view. On the one hand, PEAR was a lot more distribution-friendly, with its implicit assumption that the things distributed by PEAR would be installed system-wide and shared, not copied into each dependent project separately.

Composer - and the widespread adoption of Composer - effectively establishes library bundling as The Way PHP Does Things. If you think library bundling is a terrible long-term idea, like most people involved in Linux distributions tend to, this is obviously not good. And if you're a poor distro packager trying to reconcile PHP's 'let's throw sixteen copies of every library in there, so long as the code runs!' approach with your distro's probably strict policies about bundled code, Composer sure doesn't look like it's making anything easier.

On the other hand, you can look at the chicken and egg as being the other way around. If you take the view that PHPland has just about always had a tendency to bundle libraries, and it really took hold in the Chaotic Interregnum - a view which I think has a lot of truth to it - you could say that Composer isn't really the villain of the piece. This is closer to my view. This 'glass half full' view is that, while it enshrines bundling as accepted practice, Composer at least brings some order to it.

As noted with several examples from OwnCloud above, PHP projects are really bad at bundling things. They're not just addicted to doing it, they tend to do it in a really chaotic way. They throw random bits of other projects into their source trees in random places, often without ever indicating in any reliable way that they've included bits of other projects at all (a ritual of packaging some new PHP thing is to poke through the entire source tree finding the bits of code that have been ripped off from other places). They'll happily strip down external dependencies massively, and fiddle with their filesystem layouts. They'll happily patch their external dependencies, often without any easily locatable explanation of how they've patched them or even that they've patched them at all. It's a horrible, untenable mess.

Big projects can be even worse than small projects in this - some big projects are relatively organized, and arrange and document their external dependencies carefully, but some have accumulated whole piles of external dependencies in different locations in their source trees, each being laid out, documented (if at all) and loaded differently from the others.

If everyone adopts Composer, at least it should bring some order to this chaos. Distributions will still have lots of unbundling messes to unpick, but looking at the Composer metadata of any Composer-using project should at least give us an immediate indication of what external dependencies it actually has, what versions it's using, where it's keeping them, and so on. This makes unbundling somewhat easier, not harder.

Like PEAR, Composer also gives us a widely-observed standard that we can use to do efficiency-improving abstraction at the distribution level. Fedora and other distributions have a whole little infrastructure for packaging PEAR projects efficiently and consistently; packaging something delivered via PEAR is heavily automated. If Composer keeps up its current momentum, we'll get value out of doing the same thing with it - setting up a packaging infrastructure which lets you very easily generate a package for a Composer-delivered component. My other good friend Shawn Iwinski has done some experimental work down this avenue, and it looks pretty promising.

Widespread Composer adoption does somewhat lessen the chances of PHPland ever seeing the light and getting a lot less addicted to bundled dependencies, but frankly, that looked like a pretty long shot before Composer showed up.

PSR-4, Bringing It All Back Home

So, Composer is more or less directly responsible for the final wrinkle (so far...) in PHP class loading: PSR-4. PSR-4 - which was very recently adopted by PHP-FIG, and even more recently merged into Composer - is the second autoloader standard among PHP-FIG's five accepted standards so far, meaning they're batting a solid .400 in the 'produce nothing but autoloader specifications' stakes.

PSR-4 is explicitly designed to solve a fairly superficial problem that shows up when you mix a little PSR-0 and a little Composer. Dependency management frameworks like Composer, you see, have to do namespacing too. It would hardly be practical for Composer to stick every library it downloads into a single big directory. So, Composer came up with a perfectly 'common sense' form of namespacing for its problem space. If you're a package being delivered by Composer, you have a vendor name and a package name, and you're installed in the sub-directory vendorname/packagename/.

Unfortunately, when a Composer-delivered project is PSR-0 compliant, because packages that implement PSR-0 tend to use something very similar to "vendor name" and "package name" in their namespace layouts, you wind up with this kind of thing: my-awesome-php-project/vendor/symfony/routing/Symfony/Component/Routing/Router.php - which would be the file containing the class Router, in the Symfony\Component\Routing namespace. vendor/ is the top-level directory for all Composer-delivered components, and the Composer vendor name is 'symfony' and the Composer package name 'routing'. This is perfectly PSR-0 compliant from the base level my-awesome-php-project/vendor/symfony/routing (Composer's autoloader keeps track of where to start looking for what namespaces; PHP autoloaders tend to implement a 'prefix' or 'base directory' concept for this), but looks pretty absurd.

PSR-4 aims to address this by introducing the concept of mapping subsets of namespaces to arbitrary 'base directories'. To take the example above, with PSR-4, Symfony could declare that the 'base directory' for the 'namespace prefix' \Symfony\Component\Routing is /symfony/routing/ , and then set itself up such that when delivered by Composer, it is laid out as my-awesome-php-project/vendor/symfony/routing/Router.php.

Bits of the namespace beyond the 'prefix' are treated as under PSR-0, with \ as a directory separator, so the class \Symfony\Component\Routing\Matcher\UrlMatcher would be found at my-awesome-php-project/vendor/symfony/routing/Matcher/UrlMatcher.php.
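The PSR-4 lookup can be sketched as a hypothetical helper - the prefix-to-directory mapping below is just the example from the text, not any real project's configuration:

```php
<?php
// Sketch of PSR-4 lookup: a namespace prefix maps to an arbitrary
// base directory, and only the remainder of the namespace becomes
// a path (with \ as the directory separator, as under PSR-0).
function psr4_class_to_path(array $prefixes, $class) {
    $class = ltrim($class, '\\');
    foreach ($prefixes as $prefix => $baseDir) {
        if (strpos($class, $prefix) === 0) {
            $relative = substr($class, strlen($prefix));
            return $baseDir . str_replace('\\', '/', $relative) . '.php';
        }
    }
    return null; // no registered prefix matched
}

// Hypothetical mapping, per the example in the text:
$prefixes = array(
    'Symfony\\Component\\Routing\\' => 'vendor/symfony/routing/',
);

echo psr4_class_to_path($prefixes, 'Symfony\\Component\\Routing\\Router'), "\n";
// vendor/symfony/routing/Router.php
echo psr4_class_to_path($prefixes, 'Symfony\\Component\\Routing\\Matcher\\UrlMatcher'), "\n";
// vendor/symfony/routing/Matcher/UrlMatcher.php
```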

It's arguable whether this 'problem' is significant enough to need a new alternative autoloader standard to solve it, but PHP-FIG decided it was (PSR-4 also drops the PEAR-compatibility stuff, so it doesn't have the added complexity of underscores to deal with, and this also solves a class of cases where PSR-0 compliant classes could be validly mapped to more than one directory path), and now we have one.

So now both PSR-0 and PSR-4 are active PHP-FIG standards and you can choose to lay out your project according to either (or, of course, neither). Composer, as mentioned above, now implements both PSR-4 and PSR-0 and you can mark a Composer project as being compliant with either. Composer's autoloader is a large (and, to be honest, quite impressively written) beast, which I've found invaluable as a reference in figuring all this stuff out. It implements classmap-based, PSR-4, PSR-0 and PEAR autoloading, falling back in that order.

classmap-based autoloading, which I didn't bother mentioning above, basically involves generating a static map of what filenames contain what classes and feeding that to the autoloader; some projects use this instead of implementing PSR-0 or PSR-4 or their own deterministic layout.
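A classmap autoloader is about as simple as autoloading gets - here's a sketch with entirely made-up class names and paths:

```php
<?php
// Sketch of classmap autoloading: a pre-generated static map from
// class name to file, so no naming convention is needed at all.
// All class names and paths here are hypothetical.
$classmap = array(
    'Some_Legacy_Thing'  => __DIR__ . '/weird/place/legacy.php',
    'AnotherRandomClass' => __DIR__ . '/src/arc.class.php',
);

spl_autoload_register(function ($class) use ($classmap) {
    if (isset($classmap[$class]) && is_file($classmap[$class])) {
        require $classmap[$class];
    }
});

// Nothing maps 'Example', so the lookup just falls through:
var_dump(class_exists('Example'));  // bool(false)
```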

And....whew. That was Adam's Not-Very-Potted History Of PHP Class Loading. Finally, how does this tie back to my 'vacation' time? Well, I got sucked into this vortex by trying to clean up how OwnCloud loads its external dependencies, as the examples given above may have suggested. OwnCloud currently loads its external dependencies...very messily, and in ways which aren't at all conducive to downstream unbundling. Eventually, in trying to get a handle on how it currently works and how (I think) it ought to work, it turned out to be necessary to figure out absolutely all of the above. After several false starts and bad ideas, what I was able to do with all that learnin' so far is two things:

  1. The above-linked pull request to clean up how several things that load external dependencies manually do so - what I actually did was have them, wherever possible, require_once the PSR-0 compliant relative class path, after adding the appropriate top-level directory to the PHP include path. This means that if a distribution packages the external dependency such that it's PSR-0 compliant with regard to the distribution's PHP include path - as both Fedora and Debian aim to do with PSR-0 compliant PHP packages - and then drops OwnCloud's bundled copy of the dependency, the require_once statement will find the system copy, and the distribution does not need to patch the require_once statement to give the right location.
  2. This pull request which (in my opinion, anyway...) improves OwnCloud's own autoloader quite a bit. It separates out the handling of OwnCloud classes and external classes more clearly, and makes the autoloader path for external classes PSR-0 compliant, and very similar to the PSR-0 reference implementation, with a prefixing mechanism (so OwnCloud can declare that a given set of classes is PSR-0 compliant with a root of 3rdparty/ or 3rdparty/symfony/routing or whatever) and a fallback to simply looking for the PSR-0 classpath relative to the include path, if the prefixed search doesn't work. This, again, allows for transparent unbundling, so long as the requisite dependency is present in a PSR-0 compliant location relative to the system PHP include path. In other words, Fedora can drop OwnCloud's bundled copy of, say, Symfony, and trust that OwnCloud will successfully autoload the needed classes from the system copy in /usr/share/php/Symfony , which is laid out in a PSR-0 compliant fashion (with /usr/share/php on Fedora's PHP include path).
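The include-path trick from the first item can be sketched like this - the 3rdparty directory name matches OwnCloud's layout, but the specific dependency path is just an illustration:

```php
<?php
// Sketch of the include-path trick: put the bundled 3rdparty tree on
// the PHP include path, then require dependencies by their PSR-0
// relative classpath rather than a hardcoded bundled location.
set_include_path(get_include_path() . PATH_SEPARATOR . __DIR__ . '/3rdparty');

// require_once 'Symfony/Component/Routing/Router.php' would now find
// either the bundled copy or a system copy (e.g. under /usr/share/php),
// whichever is present on the include path, with no patching needed.
// stream_resolve_include_path() shows which file such a require would
// pick, without fatally erroring when nothing is found:
var_dump(stream_resolve_include_path('Symfony/Component/Routing/Router.php'));
// bool(false) in this sketch, since neither copy actually exists
```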

The second patch, in particular, is 45 lines of code that took me (the monkey) a mere couple of hours to write (it would take someone who can actually code about ten minutes), but required about three days of research to be relatively sure I was getting it right. I haven't actually done anything with PSR-4 yet, but I sure needed to make sure I understood why it existed and what it was about. All this has merely confirmed to me that people who deal with this kind of stuff for a living are nuts, and PHP developers are - without exception - evil. But I had fun!

  1. Although under the Web Assets change we are supposed to be unbundling those now, which...yeah, ETA 3014. 

  2. It also adds lists of the libraries bundled in several of the 3rdparty directories with lots of details about what versions they are, where they came from, and how they've been modified, which is something I wish all upstreams would do as a matter of course.