Here, children, gather round and let Uncle Adam tell you a story. Are you ready? Here we go!
And lo, Adam and Stephen were sad, because running these test cases manually sucked big-time.
"I know!", thought Adam. "Let's automate them!"
But Adam had doubts. "Hmm," he thought, "I'm pretty sure that company with the funky hats already has some automated tests for this. Maybe I should wait until Beaker is finally deployed properly for Fedora and use those tests instead, to save duplicating work?"
Then Stephen found himself moving on to different work, and Adam realized he would have to do the tests himself all the damn time. And Adam ran the tests manually one more time and halfway through remembering how to do static IP configuration via dracut on the kernel command line he didst think to himself, "screw this for a game of soldiers".
And so Adam didst go and read to himself the openQA advanced networking documentation. "Bloody hell," he thought, "this is tough going".
His travails were not yet ended, however. Next didst he, with much wailing and gnashing of teeth, manage to get the entire Open vSwitch configuration working via Ansible for the official openQA instances in Fedora Infrastructure.
Finally didst he, with much trial and error, refine and extend the tests until they covered several of the validation tests, and actually worked. And lo, didst he dump two huge pull requests on the tracker, in the manner of the famed "mic drop", and go for a beer.
Seriously, though, this is pretty cool and I'm happy I got it working. By default, openQA uses qemu user-mode networking for the VMs that run the tests, which is fine for a lot of stuff - each VM can see the outside world just fine - but it means the VMs can't talk to each other. If you want to test something like FreeIPA, where you need two or more tests (here, the server and a client) running simultaneously and talking to each other, you need to do some custom networking config.
openQA has some integration with Open vSwitch, and it's what the SUSE folks use, so I went with that. You basically have to create a tap device for each worker instance and use something like OVS to connect those devices together with a virtual bridge so the test VMs can communicate. The VMs also need to access the per-job web server that os-autoinst runs, which they upload logs to and (in some cases) download scripts to run from; in the reference setup you have that bind to the bridge interface and ensure the firewalling is set up so the VMs can reach it. And if you need the VMs to have access to the external network, as we do for FreeIPA testing (dnf and rolekit just do not want to work without access to the repositories), you basically have to set up NAT routing for the traffic from the VMs. It's lots of network configuration fun!
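The host-side plumbing comes down to a handful of commands. Here's a rough sketch (this is not the actual Ansible config from Fedora Infrastructure - the bridge, tap and interface names and the subnet are all assumptions for illustration):

```shell
# Assumptions: bridge br0, one tap device per worker instance
# (tap0, tap1), external interface eth0, VM subnet 172.16.2.0/24

# Create the virtual bridge and plug the workers' tap devices into it
ovs-vsctl add-br br0
ovs-vsctl add-port br0 tap0
ovs-vsctl add-port br0 tap1

# Give the bridge an address; os-autoinst's per-job web server binds
# here so the test VMs can reach it
ip addr add 172.16.2.2/24 dev br0
ip link set br0 up

# NAT the VMs' traffic out of the external interface so they can
# reach the package repositories
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -s 172.16.2.0/24 -o eth0 -j MASQUERADE
```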
But I worked it all out. Both tests are configured as 'children' of the install_default test for the Server DVD ISO, which just runs a default installation and then uploads the hard disk image of the installed system.
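In openQA's job settings, that kind of dependency is expressed with a couple of variables. Roughly like this (START_AFTER_TEST and PARALLEL_WITH are real openQA settings, but the suite names here are just illustrative):

```
# server test suite: chained after install_default, so it starts from
# the disk image that test uploads
START_AFTER_TEST=install_default

# client test suite: also chained after install_default, and scheduled
# in parallel with (as a child of) the server suite
START_AFTER_TEST=install_default
PARALLEL_WITH=server_role_deploy_domain_controller
```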
Once that test is done, both the server and client tests boot up from that hard disk image. The client test recognizes it has to wait for the server, and just sits at the boot menu till the server signals that it's ready (this is all part of openQA's parallel test support). The server logs in, sets its hostname to an appropriate one for a FreeIPA server (FreeIPA does not like
localhost.localdomain, it requires an FQDN), and reboots.
Then it re-configures the network - since we're effectively on a private network now, with no DHCP server or anything - to use a static IP. Then it copies
/etc/resolv.conf from the worker host, which sounds like a terrible idea at first but is actually a pretty good way to ensure it can resolve names properly (I first thought about doing this as a joke, then found it was how SUSE do it and realized it's actually probably the best option). Now it deploys the 'domain controller' role (i.e. sets itself up as a FreeIPA server) and runs the first set of required rolectl sanity checks from QA:Testcase_Server_role_deploy. Then it sets an enrolment password for the client, creates a couple of user accounts, tweaks the login policy a bit, and then it sends the signal (actually a mutex lock) to the client that it's ready and the client can go ahead. Then it waits for the client to complete.
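From the installed system's point of view, the server-side steps above look something like this (a hand-wavy sketch: the hostname, addresses, and the exact rolectl and ipa invocations are assumptions from memory, not copied from the test code):

```shell
# FreeIPA insists on a proper FQDN, so no localhost.localdomain
hostnamectl set-hostname ipa001.test.example.com

# Static IP - there's no DHCP server on the virtual network
# (connection name and addresses are assumptions)
nmcli con mod ens3 ipv4.method manual \
    ipv4.addresses 172.16.2.100/24 ipv4.gateway 172.16.2.2
nmcli con up ens3

# resolv.conf gets copied over from the worker host at this point

# Deploy the FreeIPA server via rolekit, then sanity-check it
# (deployment settings such as the admin password go in via stdin)
rolectl deploy domaincontroller --settings-stdin < /root/dc-settings.json
rolectl list instances

# Set a one-time enrolment password for the client, add test users
kinit admin
ipa host-add client003.test.example.com --password=monkeys --force
ipa user-add test1 --first=Test --last=One
ipa user-add test2 --first=Test --last=Two
# ...then tweak the HBAC rules so test1 is allowed to log in and
# test2 is denied, and signal the client via the parallel-test mutex
```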
The client wakes up, does a kickstart install with client enrolment using the password the server set up and an appropriate static IP networking configuration. The FreeIPA server acts as a DNS server, so the client doesn't need to fiddle about with that. The installed system boots, and we log in as root, then we check that we did indeed become a member of the domain properly (as per QA:Testcase_realmd_join_kickstart). Then we kinit as each test user and set a permanent password, then we try logging in as each user, expecting success with the first and permission denied with the second (as was the policy we set on the server earlier).
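The client-side checks translate to roughly this (again a sketch - the user names match the story above, everything else is assumed):

```shell
# Did the kickstart enrolment actually join us to the domain?
realm list

# First kinit with the server-set password forces each user to set a
# permanent one
kinit test1
kinit test2

# Now try actually logging in as each user; the policy set on the
# server allows test1 and denies test2
ssh test1@localhost   # should succeed
ssh test2@localhost   # should fail with permission denied
# (the real test logs in at a console rather than over ssh)
```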
At this point the client has fulfilled the requirements of the kickstart joining test case, QA:Testcase_domain_client_authenticate and QA:Testcase_FreeIPA_realmd_login, so it finishes up. The server sees that the client has finished, wakes up, does the rest of the rolectl sanity tests, and completes.
And Adam says "thank God I don't have to do all that crap manually again", and goes for a beer. :)