FUDCon: days 1.5 to 3, and the aftermath

Finishing off my FUDCon Tempe write-up. So the second half of day 2, and all of day 3, were hackfests. I was a bit disappointed that we weren't given an opportunity to pitch the hackfests; I remember this being the setup for Toronto and Zurich, and it worked well. You could put a face to anyone proposing a hackfest you were interested in, and there was at least a vague plan of which hackfest would take place where. For Tempe, there was no organization of hackfests - people were just expected to magically find the other people who might want to work on the same things, and find a space to get together. It was about the only sub-optimal aspect of the whole event's organization, so that's not much to complain about! I'd just like to go back to the pitching/scheduling method in future.

Things worked out, though. In the afternoon of day 2 I tried to help Susmit and Christoph with Fedora Medical packaging. It involved some icky source trees with licensing issues, bizarre build systems and bundled copies of system libraries, so we didn't get much finished, but we did sort out a few things and develop a plan. I think I still owe Christoph an email with my unfinished .src.rpm though - sorry!

On day 3 I worked on a few different things; I spent a lot of time trying out the Fedora packager plugin for Eclipse, which had so impressed me at Andrew Overholt's lightning talk. It's definitely a great system, but I hit a few issues trying it out, which Andrew and the rest of the team have been awesomely responsive to, and which should be sorted soon. It was great having Andrew right down the hall to wave issues at; 'I have feedback!' became my catchphrase for the afternoon (Andrew having bemoaned the lack of feedback in his lightning talk).

What I was trying the plugin out on was some trivial rebuilds of Sugar packages for Peter Robinson, which of course wound up taking far longer than they would have done the old way, but such is the price of progress!

I also continued my cunning plan of taking my hackfest to the masses. I had planned a hackfest to create package-specific test cases, where I would get some developers to come along and create test cases for their packages, but as we didn't get to pitch hackfests, I never got to explain the idea. So instead I spent late night #2 and some of the day #3 hackfest session wandering around and buttonholing people, starting in with 'so, what packages do YOU maintain?' and refusing to leave until I'd gotten a test case out of them. Mucho thanks to all those who contributed; we got test cases for Firefox, gammu, and smolt. Each one took less than five minutes to create. If you maintain a package and/or know how to use one, you can do it too! Just follow the SOP, and I'm here to help if you get stuck.

I can't mention that, of course, without mentioning the incredibly awesome Luke Macken, who implemented Bodhi support for package-specific test cases during the day #3 hackfest session; the staging Bodhi now displays all test cases associated with a package update right on the package update page, which is huge. Seeing this kind of thing come together is one of the main awesome things about FUDCon, for me.

The biggest concrete achievement of the day was a meeting with John 'J5' Palmieri and Christopher Aillon to discuss the GNOME 3 Test Day (which is happening RIGHT NOW in #fedora-test-day!) I was worried we wouldn't be able to do any significant co-ordination with the desktop team, as many of them were not able to make it to FUDCon, but J5 and Chris took the trouble to sit with James Laska and me and spend an hour or so providing very useful ideas for test cases, without which I would really have struggled to provide enough test case meat to make the event worthwhile. David Malcolm was also kind enough to sit in with us and provide a ton of useful suggestions, so thanks to him too! Later in the day I took some time to find a quiet room and get announcements of the test day out to several news outlets and mailing lists, and Larry Cafiero was good enough to mention it in his blog (and on microblogs too!).

The other highlight of the day #3 hackfest for me was Clint Savage being kind enough to give me the absolute idiot's guide to git, a tool which I've so far not so much used as bludgeoned into submission! He gave me some really useful basic advice in a very easy-to-understand fashion, and also did a great impromptu talk on git flow, a common and awesome git workflow, which I recorded with someone's Flip camera and need to upload soon.

As usual, though, the official sessions were only half - or even less - of the FUDCon story. Discussions and work go on well into the night back at the hotel, and I had some of the most interesting conversations of FUDCon there, particularly with Jeroen, who had some really interesting ideas for QA which I'm hoping to do something with soon. Then there are all the little corner conversations you don't necessarily remember; the one that sticks out in my mind was letting Harish know about Andre Robatino's delta ISOs of Fedora releases and pre-releases - a simple five-minute conversation that (I hope!) will provide a significant benefit.

Of course, there's also the social side. Some F/OSS conferences can be a little awkward for me, as I wouldn't always necessarily choose everyone at the conference as drinking buddies, but this turned out to be a great event from that perspective. FUDPub on Saturday was at the student union rec hall, with free bowling and pool, which was awesome; I'm both insanely competitive and utterly convinced I'm awesome at everything, so if you get me in a big group with free bar games I'm there till someone drags me away. I used to bowl a lot (Jeroen, I'll put that scan up tomorrow, I promise you on ONE AMERICAN DOLLAR!) but hadn't for a couple of years; still, I managed a 115, a 118 and a 106, which I was pretty happy with. It was good to meet a couple of the guys from Rackspace, who sponsored FUDPub and were presenting at the conference - we bowled and shot pool all night, and took probably far less advantage of the free bar than we ought to have done (and others clearly seem to have). After FUDPub ended (criminally early, thanks to odd Arizona laws or something) I wound up at Gordon Biersch with Peter until 1:30am, which was good times as always, and managed to get to bed around 2am - not too bad for a FUDCon.

Sunday was games night, though, and someone had brought a poker set...so that was it. I've never played live before, but it was a lot of fun...even though Sandro did clean up, and talked more shit than Phil Hellmuth while doing it. After a hard night's gambling (for chips) and valiantly battling the FUDCon beer mountain, I finally made it to bed around 4am (with a couple of large glasses of water and some strategic Tylenol) and got up at 8 for the next day's sessions.

Monday night was impromptu hotel lobby games night, and by far the most fun of the entire event - we ran another poker game and played till 4:30, finishing off the beer mountain, buying Robyn appletinis, and calling each other jackwagons. Much fun was had by all; it was somehow the perfect group of people to sit around a table and talk complete nonsense with. Also, we replaced the kernel with the Hurd and switched to a rolling release cycle - just so you know.

After waking up at 8:30 again to catch my flight out, and making it home just barely conscious, I've spent the subsequent two days working on the GNOME Test Day. It's now 6:46am on Thursday, I've been working on this thing for 22 hours straight, and I'm quite unreasonably tired. After the Test Day's done, I think I'm taking Friday off. :) My internet connection's currently saturated with the live image upload for the Test Day, so still no photos; I've been working on them on and off (damn Mo and Tatica for teaching me GIMP stuff so I don't just upload untouched JPEGs any more!) and I'll aim to get most of the good ones up tomorrow while the Test Day's going on.

And that was FUDCon! See you at the next one...

GNOME 3 Test Day #1: Come try the new hotness

Of course, GNOME 3 is one of the big new features of Fedora 15 - and the free software desktop in general. Fedora will be running three Test Days to aid in the final polishing and stabilization of the GNOME 3 release, and to make sure Fedora 15 provides a good desktop experience. This is a great opportunity to help both GNOME and Fedora development, and to make sure you can work effectively in GNOME 3 when it lands on your desktop. Even though these are Fedora events, you don't have to run Fedora to join in, and since GNOME 3 will land in all the distributions soon, the testing will be just as valuable to yours: all the feedback will go to the GNOME developers, for the benefit of all distributions. The first Test Day is this Thursday, 2011-02-03. You can participate just by visiting the wiki page and following the instructions you find there - it's really easy! There will be other testers, Fedora QA team members and GNOME developers in the IRC channel - #fedora-test-day on Freenode IRC - all day long to help out and discuss issues with. If you don't know how to use IRC, no problem - you can use WebIRC: if you click that link it will open the IRC channel (which is like a chat room) in a web page in any good browser.

The testing is broken down into small chunks and you don't have to do every test - you can help out in ten minutes (plus the time it takes to download the image). Almost all the tests can be run entirely from a live image linked right from the Test Day page - there's no need to have an installed copy of Fedora at all. You enter your results into the wiki page, and you don't need a Fedora account to do it - the page is open for editing without one. Even if you're busy on Thursday, you can do the tests earlier or later and your results will still be just as useful - all you're missing out on is the on-the-day live interaction with other testers and developers in #fedora-test-day on Freenode IRC. Please note that the tests and results table on the page aren't complete yet, and probably won't be until Wednesday!

The testing will mostly see you booting a live image and doing normal things with it - running applications, configuration tools, interacting with desktop utilities: there's nothing complicated or difficult in most of the testing; anyone can do it! You get to be one of the first to try GNOME 3 and help make sure it works well when it comes out, and the warm fuzzy feeling inside comes free. The more testers we get, the better, so please come along and help make sure GNOME 3 really is made of easy.

FUDCon, day 1.5

I'm at FUDCon, and as always, don't have enough time to blog. I also neglected to a) charge my spare camera battery and b) bring a mini-USB cable, so no pictures uploaded yet!

The event's been as useful as it always is, though. The Fedora on ARM talk by Paul Whalen was a great status update on the progress of bringing Fedora's ARM build more into the mainstream. Mel, Mo, Seb Dziallas and Chris Tyler ran a combined talk on various education-related Fedora projects, which was really interesting to hear about - especially Chris' work with Seneca College students, and the care he takes to make sure they work on real-world projects with direct relevance to upstream, and send their work upstream: it was really exciting to hear that he grades students only on work they succeed in putting into real-world use (or, at least, make the right effort to).

Mike McGrath's talk on the cloud and why it means we're all screwed blew my mind for the rest of the day - it's definitely worth hearing his talk if you can get to it anywhere, because it's worth an hour of anyone's time to sit down and really think about how different a lot of things are likely to be In A World Where...nearly everything runs in a setup where concerns about hardware and storage and processing are largely abstracted away and you just deal with them as per-use costs. Even if the answer is 'very scary for my line of work', it's still probably something you should do...

We also had a very crowded session on the future of spins, which probably didn't have enough time to really deal with the many issues it raised, but it was at least good to have most of the concerned people in a room together, coming to some broad agreement on where the pain points are and at least roughly what goals we want to accomplish with spins. We're hoping there will be a hackfest where we can work in more detail on actually resolving at least some of the issues.

Finally for the first day I saw Dan Walsh's talk on Sandbox X, a way of using SELinux (and a few other tools and kernel features) to run individual X applications, or even an entire desktop session, in a highly-restricted environment, as a way to access untrusted content and so on. I understood about a fifth of the technical background, but the examples of using it to read PDFs or untrusted websites were easy to understand and extremely cool.

FUDPub was at the student union rec center, so we got to bowl and play pool for free all night - awesome. First time I've managed to go bowling for years. (Jeroen, I will post the printout when I get home - I swear on one American dollar).

So far today I've been to Mo's awesome talk on using Inkscape - I think Mo's the only person who could possibly teach me to do anything good in graphics tools. I even made up a logo which you may or may not see pop up on this site when I'm on a network where I can actually get out to my webserver. Followed that up with Maria 'tatica' Leandro's talk on photo editing with GIMP and other tools - really interesting to see her workflow and compare it with Mo's similar talk, and with the infinitely worse methods I use :)

Spot led a session where infrastructure team members pitched their ideas for the next big Fedora project and got feedback from the audience. All the ideas were pretty good and I wound up voting for all of them but one, which probably didn't help the team much, but hey. I was particularly keen on John Palmieri's idea to focus on developing the package section of Fedora Community into a comprehensive one-stop portal for package maintainers, which would be awesome if it worked out.

Finally (up to now) we had lightning talks, including a fantastic presentation on the Fedora packager plugin for Eclipse by Andrew Overholt which led me to install the plugin about 30 seconds into his talk. And now I'm going to find a hackfest, and plug in my laptop before the battery dies! More to come...

FUDCon Tempe

Yup, I'll be at FUDCon Tempe, putting away a few (dozen) beers with Peter and trying to make other people do work for me as per usual! I'm planning to run a workshop on creating package-specific test cases on one of the workshop days - do come along. No presentation this time; I think I've run the 'introduction to QA' talk enough times at FUDCon now!

I'll be hoping to work on some Sugar and MeeGo stuff with Peter and any other interested parties, some general spin stuff, and maybe even a bit of Unity if I get time. Looking forward to it...and I'll remember my camera this time.

Roundcube 0.5: really good

I don't know what they did to it, but Roundcube 0.5 is incredibly fast. I run it for my private webmail; it's actually been broken for me for a while, logging me out a few seconds after I logged in. With 0.5 I re-configured it from scratch, the bug is gone, and it runs like a native mail client - loads even huge folders in less than a second, loads mails almost instantly...it's awesome. Nice to have working webmail again...

First Fedora 15 Test Day this Thursday: network device re-naming

The first Test Day for Fedora 15 is coming up on Thursday - 2011-01-27 - and it's a big one! The subject is network device naming, but before you doze off, it's more significant than you might be thinking. On some systems, good ol' familiar eth0 will be gone in Fedora 15, replaced by...em1? Get used to it! PCI network adapter naming will also change on supported systems.

The theory behind this is interesting and worth reading up on, but for QA, our job is simply to test it and make sure it does what it says on the tin. Our friends at Dell, particularly Narendra K., have worked really hard to make sure it's easy to help out with testing; there's a simple script to run to see if you have affected hardware, and very detailed instructions on testing. This change will land in most Linux distributions in future, so even if you're not a Fedora user, it's probably a good idea to have a look at the test cases and try it out on your system, so you know what's coming and so you can help us get any bugs fixed before they land in your favourite distribution! You've no excuse - go and visit the Test Day page now.
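By the way, if you're curious what names your system is using right now, here's a quick Python sketch of my own. To be clear, this is not the Dell detection script from the Test Day page - just a rough illustration, assuming a Linux /sys filesystem and the em1 / p1p1 style of names as I understand the new scheme:

    #!/usr/bin/env python
    # Rough sketch, NOT the official detection script: list the network
    # interfaces the kernel currently exposes, and guess which naming
    # scheme each one uses. Assumes a Linux /sys filesystem.
    import os
    import re

    SYS_NET = "/sys/class/net"

    for name in sorted(os.listdir(SYS_NET)):
        if name == "lo":
            continue  # skip the loopback device
        if re.match(r"eth\d+$", name):
            style = "old-style kernel name (candidate for renaming)"
        elif re.match(r"(em\d+|p\d+p\d+)$", name):
            style = "new biosdevname-style name"
        else:
            style = "something else (wireless, virtual, etc.)"
        print("%-10s %s" % (name, style))

Run it before and after updating and you'll see whether anything on your system got renamed.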

Some patent reflections on WebM

See what I did there?!

I thought I'd expand a bit on some thoughts I had while commenting on a Guardian story about the WebM controversy. I haven't seen anyone look at this angle, yet. (This is, of course, entirely my own personal opinion, and not that of my employer or any other organization I'm involved with. I am not a lawyer, a paralegal, or in any way involved in or an expert on any form of law or, for that matter, video encoding. I'm just an asshole with an opinion. This is not legal advice. If you want that, ask a lawyer.)

So, one of the concerns cited about WebM is its possible vulnerability to patents from entities other than Google. Yes, we have rock-solid patent grants from Google covering the WebM spec, but it's possible other people have patents covering functions you would need in order to implement the WebM specification. So, the argument goes, you can't rely on the royalty-free-ness of WebM.

Well, that argument's true, as far as it goes. The trouble is that it's also true of absolutely any software specification ever. There could always be some patent we don't know about. If that were going to stop us writing code, no-one would ever write any code again. This of course is one of the major, major problems with software patents, but I don't want to go into that here. We've got the system we've got, and the practical upshot of it is that people still write code, do the best analysis they can of the ridiculously over-large patent pool, join things like the Open Invention Network or the proprietary / secret equivalents, and cross their fingers.

To put it in practical terms - it's also possible that someone somewhere has a patent on a function you need to implement the H.264 spec. There's no legal requirement for them to have exercised it by now. The only guarantee we have that the MPEG-LA patent pool contains all the patents involved in H.264 functionality is the MPEG-LA's word for it. Hell, given the parlous state of the U.S. patent system, someone could go out tomorrow and retroactively patent some bit of the H.264 spec that no-one else got around to patenting yet, then sue the world. They'd eventually get slapped down in court, probably, but not before causing a huge amount of trouble and expense for everybody. It's perfectly true that some jackass waving a patent could cause trouble for WebM, but that's also true of just about every other piece of software.

Now, if we accept the premise that PWJs (Patent-Waving Jackasses) are a general danger to civilized society, and think for a bit, it should become apparent that, relatively speaking, WebM is actually a pretty good bet. This is purely and simply because Google has now tied up its reputation and a degree of its interest in WebM. Google has gone out very publicly and said 'WebM is a royalty-free specification and we're going to back that up'. This is really very similar to MPEG-LA going out publicly and saying 'H.264 is a format covered by this exact set of patents and we're going to back that up'. Now, think about what happens if some PWJ comes forward claiming to have a patent that covers something in WebM.

What happens is, most likely, that Google makes the problem go away. This doesn't rely on Google's inherent altruism or anything ridiculous like that, but simply on Google behaving in its own best interests. Google has gone out and said people can implement WebM and not worry about patents; if people go and implement WebM and then get ambushed by a PWJ, and Google doesn't back them up, they're going to be really seriously narked at Google, and Google doesn't want that. Also, when Google says that it's backing WebM because having a royalty-free video format for the Web is in its own best interests, it actually happens to be telling the truth. So both of those concerns will tend to cause Google to actually want to defend the royalty-free-ness of WebM.

So, if a PWJ does emerge, we can be reasonably sure that Google is going to go to bat for WebM. Google is a very good entity to have going to bat for WebM, for all the standard reasons in patent lawsuits: it's got deep pockets and good lawyers. Broadly, what will happen is that Google will make the problem go away with money, and Google's legal resources will help it make sure the amount of money involved is not too outrageous. First Google will assess the patent being waved. Possibly, if it's clearly trash, Google will call the PWJ's bluff and say 'sue us if you dare'; more likely, the patent will have at least some degree of apparent merit. In this case Google's probably not going to actually go to law; instead it'll either try to buy the company represented by the PWJ, or pay the company to make a patent grant in the same style as Google's own. In most cases, Google will easily have the resources to do this. If anyone tries to screw too much money out of it, Google will say 'okay, never mind, sue us; maybe you'll win, maybe you won't, but it's going to take years and cost you a damn fortune'. A company as big as Google can throw enough confusion into even the most apparently open-and-shut patent case to drag it on for years. Most owners of private companies, or shareholders of public ones, are going to take an immediate, definite payday over one that might be bigger but is uncertain and will definitely take a long time to arrive.

It's also worth considering that it's in Google's interests to actively seek out any submarine patents and deal with them as soon as possible, because it's easier to sell the 'legal uncertainty vs. potential return' angle now than it will be after WebM gets serious traction, when the potential value starts to look bigger. So overall, Google's probably going to have to spend less to deal with potential problem patents now than it would in future, assuming WebM is successful.

Summary? Well, software patents are evil and PWJs are a menace to society. We knew that already. But when it comes to WebM, there is an extremely powerful entity in whose own interest it is to make sure PWJs are not a problem for WebM. So when it comes to worrying about patent threats, WebM is actually a pretty solid bet.

Of course, the ultimate PWJ for WebM is MPEG-LA. MPEG-LA would like to have us believe that it owns patents (or rather, that it represents people who own patents) that cover WebM, and it damn well will not sell them to Google or make a public grant of them. It has rattled its saber as loudly as it possibly can to this effect, without publicly making specific claims about specific patents. (Some people will probably suggest this puts it in a very similar position to the one SCO was in, regarding copyright claims.) It is interesting, though, that it's been 'considering' this since April of last year, and has apparently done nothing.

There are conflicting opinions among those who actually know the icky details of video encoding techniques on whether WebM is likely to be subject to any of the patents in the H.264 pool. Dark Shikari - the author of x264, an extremely good F/OSS H.264 encoder (which is hence, of course, a patent infringer of comic proportions, and not officially distributed in the U.S.) - thinks it might be, while Carlo Daffara, who reviewed Shikari's analysis, suspects it isn't. I wouldn't want to venture a guess either way, but I would venture a small guess that it eventually winds up in court between MPEG-LA and Google, though I can also see that not happening, and MPEG-LA trying to pursue an exclusively FUD-based strategy without ever actually pulling the legal trigger. If it does go to court, it'll be a very interesting case, with potentially huge implications for all sorts of things.

A few people have suggested MPEG-LA will eventually make a very liberal patent grant on H.264 to make it effectively royalty-free for all web use, but I don't really see that happening. I don't see a way they can do that and keep any revenue stream on H.264, and MPEG-LA has absolutely no incentive to do anything without a patent-based royalty stream. It's an organization which exists entirely and exclusively for the purpose of generating patent-based revenue streams. All the individual holders of patents in the MPEG-LA pool could, I suppose, suddenly discover compelling business reasons to make such a grant, but I don't think it's especially likely (and it's even possible the terms of the MPEG-LA pooling agreement prevent them from doing so).

Potential project for someone: graphics card generations in Smolt

Here's something that just came up in the QA meeting which someone might find interesting to do. It's a small project related to Smolt which would be helpful for QA and graphics development purposes.

As most people know, there are thousands and thousands of graphics card models out there. Many of them, however, differ only slightly from each other. To manufacturers and graphics driver developers, it's usually more helpful to look at cards in terms of generations. As far as the graphics driver developer is concerned, when it comes to actual software graphics operations, all cards in a given generation work pretty much the same (this is not the case for other stuff, like talking to monitors, but that's not what we're concerned with here).

What we're immediately concerned with is the status of GNOME Shell; it'd be really useful to QA and the driver developers to know how good our hardware coverage is when it comes to Shell support. This is something that goes by generation of card. It's unlikely that Shell would work on a GeForce 9400 but fail on a 9600, for instance; those cards are part of the same generation, and at the levels where support for Shell is implemented, all cards in a generation behave very much alike. So we don't need to test on every graphics card in existence and track the status of each separately; we can simplify it down to the generations.

What we want to be able to do is to use Smolt to find out approximately how common each generation is out there in the real world of Linux users. It's relatively easy to look up one specific graphics card model in Smolt, but what we need is to group all the cards that are listed in Smolt into generations (for Intel, Radeon and Nouveau), so we can then see how many systems in total there are for each generation. Then we can prioritize the driver development work for the most popular generations (we already have a reasonable idea what these will be, but it always helps to have solid data), and we can use the results of the graphics card and GNOME 3 Test Days to determine which generations are working, which will help us with documentation and help us put a rough estimate on overall support for GNOME Shell (e.g. 'okay, roughly 70% of Radeon adapters will work with GNOME Shell at Fedora 15 release time', or whatever it turns out to be). Without the generation numbers, it gets much harder to do that.

What will you need to do this? Well, really you just need to generate the lists of which cards are members of which generations, and then figure out a way to do the grouping in Smolt. This may well require DB access to Smolt rather than just web front-end access, but that can likely be arranged if necessary. You can get a start on the generations from that old standby, the Mandriva ldetect-lst table, which I used to maintain and which it looks like someone has been keeping up to date since I left. It doesn't separate out every generation, as it only categorizes generations whose support in the various open and proprietary drivers differs (so if two generations are supported exactly the same in all drivers, they're categorized together), but it'll get you started. Consumer card naming is mostly in line with generations - so almost all NVIDIA GeForce 9xxx cards will be members of a single generation, 8xxx cards of a different generation, and so on. But there are lots of exception cases to look out for, usually the result of marketing shenanigans; for instance, ATI's Radeon 9100 was, if memory serves, just a rebadged 8xxx series part, not really part of the same generation as other 9xxx cards. There are also often differences between consumer, professional and mobile naming schemes, so mobile 5xxx parts may not be part of the same generation as desktop 5xxx parts. You'll get into it. =) Wikipedia and hardware review sites like HardOCP, Ars Technica and Tom's are helpful resources for sorting out these cases. Absolute accuracy isn't critical, as we just need a rough idea of the numbers, not a super-accurate headcount, but of course it's best to be as correct as possible.
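To make the grouping idea concrete, here's a minimal Python sketch. The generation table and the per-card counts here are made-up placeholders, not real data - in reality the table would come from ldetect-lst plus your own research, and the counts from Smolt itself (via scraping or DB access):

    # Minimal sketch of the grouping idea. The GENERATIONS table and the
    # smolt_counts data below are hypothetical placeholders, not real data.
    import re
    from collections import defaultdict

    # Regexes on marketing names -> generation labels. Exception cases
    # (like the rebadged Radeon 9100) must come before the general rules,
    # since the first match wins.
    GENERATIONS = [
        (re.compile(r"Radeon 9100"), "ATI R200"),
        (re.compile(r"Radeon 9[5-8]\d\d"), "ATI R300"),
        (re.compile(r"GeForce [89]\d{3}"), "NVIDIA Tesla"),
    ]

    def generation_for(card_name):
        """Return the generation label for a card name, or None if unknown."""
        for pattern, generation in GENERATIONS:
            if pattern.search(card_name):
                return generation
        return None

    # Hypothetical per-card system counts, as you might pull them from Smolt.
    smolt_counts = {
        "NVIDIA GeForce 9400 GT": 1200,
        "NVIDIA GeForce 9600 GT": 950,
        "ATI Radeon 9100": 40,
    }

    # Aggregate the per-card counts up to per-generation totals.
    totals = defaultdict(int)
    for card, count in smolt_counts.items():
        totals[generation_for(card) or "unclassified"] += count

    total_systems = sum(totals.values())
    for generation, count in sorted(totals.items(), key=lambda kv: -kv[1]):
        print("%-15s %6d systems (%.1f%%)"
              % (generation, count, 100.0 * count / total_systems))

Obviously the real table would be far bigger and messier, but the ordering trick - listing the marketing exceptions before the general patterns - is the main thing to get right.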

This would probably make a great student project for someone; we will look into the various programs for those. But if it appeals to you and you'd like to have a go, please do get in touch with me or the QA group via the mailing list, and we can take it from there!

Compiz 0.9 for Rawhide (Fedora 15): test repo up

This morning I thought I'd start on building Compiz 0.9 for Rawhide (Fedora 15) as a nice little weekend project to spend an hour or two on. Sixteen hours later...I've kind of finished.

Want to try Compiz 0.9 (actually, a bit past 0.9.2.1, and using the glib-mainloop branch of compiz itself, which is needed for Unity and will be merged into master at some point soonish) on Fedora Rawhide (15)? Stick this repo config file in /etc/yum.repos.d/, install (or update) compiz-gnome, and run 'compiz-gtk --replace', and Bob ought to be your uncle. It should work from desktop-effects too, but there seems to be some kind of bug: you run desktop-effects, click on Compiz, it starts up perfectly happily, then desktop-effects claims it didn't actually start up so it's going to revert to the previous configuration, and promptly muffs everything up. It also seems to use some fallback Metacity theme; I'm not quite sure why. Aside from that, it seems to work pretty well for me. There's a ccsm package you can install and run to configure compiz (and changes you make in ccsm should actually apply and save...unlike how it's been in Fedora's compiz for ages).

We (myself and Adel, the main compiz maintainer) would quite like to put this into Rawhide itself if it turns out to work well enough and stably enough, so do let me know how you get on, via email or in the comments. The packages are signed with my redhat.com GPG key - key ID C1365CF0. Good luck! And of course, if it breaks, you get to keep both pieces.

I could build this for F14 quite easily, I just haven't bothered yet. I might get around to that tomorrow. Or not.

The reality distortion brigade

I continue to be frankly baffled by the vitriol being heaped upon Google for one of its good moves - removing support for H.264-encoded content in the video tag in Chrome. As was hashed out at painful length in the initial debates, a patent-encumbered, pay-to-play codec has absolutely no business being in the HTML definition, and H.264 should never have been accepted as an acceptable codec for the tag in the first place. Any move to discourage its use should be widely applauded.

The reasons why have been rehearsed too many times to be worth going over, so I won't. What really baffles me are the two major objections to this, which seem to be:

1) Google is just doing it to prop up Flash!

To which the obvious answer is...why? Why would Google do that? How does it benefit Google? Google doesn't make any more money out of Flash video than any other form of content. What would be the interest, for Google, in 'propping up Flash'?

2) Google is just doing it because it 'owns' WebM!

Well, yeah, Google owns WebM - in the sense that it bought out the company that developed the codec and open-sourced it. So it really doesn't own it very much at all: the format that's been accepted for the video tag is already open source, Google can't take it back, and anyone can develop an alternative implementation (and this has in fact already happened). But...so what? Even if we accept this is true, what's the problem? I think people assume Google can do something Teh Evil with WebM in the long run, once they've suckered us all into using it. I'm as big a fan of Google conspiracy theories as anyone (check my history!) but that just really doesn't wash. Google can't. They can develop some new refinement of WebM, take it proprietary, and try to be evil all they like, but they'd quickly find it wouldn't work, because the 'follow-ups' or 'enhancements' to popular codecs have no built-in market. No-one uses MP3 Pro, fr'instance. If you take a popular codec and develop an 'enhancement' of it which is evil, what happens is that no-one uses it. So what does Google's 'ownership' of WebM mean, in practical terms? Nothing. It has no stick to beat anyone with. This is in contrast to H.264, which is certainly owned by somebody - the MPEG-LA - in a much stricter sense of 'ownership' (the sense of 'we have a ton of patents and no, you CAN'T develop alternative implementations; just ask the x264 folks'). Notwithstanding the frankly impressive chutzpah of those attempting to paint the MPEG-LA as some sort of enlightened, guitar-strumming co-operative of artists and video wizards, what it is is a license fee extortion body which exists to try and screw as much money as possible out of people for the use of fairly ordinary codecs. As mentioned, I'm no particular fan of Google, but if it comes down to choosing between them and MPEG-LA, I'll bend over for Sergey any time. At least he'd whisper sweet nothings. The MPEG-LA would probably smack you around the head with a baseball bat.

As well as baffling, it's almost annoying: people seem perfectly happy to get angry at Google over something, just as long as it's what Steve Jobs has told them to get angry about - which is usually exactly the wrong thing to be getting angry at Google about.