Wednesday, July 24, 2019

openQA: create a custom image to use for validation tests

openQA has a job that creates images of the system under test (SUT). You can find this job on openqa.opensuse.org by following Job Groups > openSUSE Tumbleweed > Buildxxxxxx and searching for create_hdd_gnome.

create_hdd_gnome creates a qcow2 image on which other validation and verification tests then run. You will usually find it as a dependency of other test jobs: open create_hdd_gnome and switch to the Dependencies tab to see them. If you then visit any of the jobs listed on the left, you will see that their first module is boot_to_desktop.

This lets us create one common image for many cases and saves us from running a full installation before every test of interest. As the Dependencies tab indicates, the job dependencies are created with the START_AFTER_TEST setting, so in every such job's settings you will find START_AFTER_TEST=create_hdd_gnome. You should also see HDD_1 set to the qcow2 image published by create_hdd_gnome (see the sketch below).
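For example, a chained job's settings might look roughly like this. This is only a sketch: the image name is illustrative, the real one is whatever create_hdd_gnome sets in PUBLISH_HDD_1.

 # illustrative settings of a job chained to create_hdd_gnome
 START_AFTER_TEST=create_hdd_gnome
 HDD_1=opensuse-Tumbleweed-x86_64-<build>-gnome@64bit.qcow2
 BOOT_HDD_IMAGE=1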

Other jobs, such as lvm+RAID1, do not have such a dependency. Suppose you want to quickly run a custom test against lvm+RAID1: you would have to run the full installation first before reaching the point where you can test anything else. This is what we will try to address with this howto.

The lvm+RAID1 job configures a system with four disks. We can check exactly what it does in autoinst-log.txt; a sample is shown below:

[2019-07-15T23:20:56.012 CEST] [debug] running /usr/bin/qemu-img create -f qcow2 /var/lib/openqa/pool/9/raid/hd0 20G
[2019-07-15T23:20:56.131 CEST] [debug] Formatting '/var/lib/openqa/pool/9/raid/hd0', fmt=qcow2 size=21474836480 cluster_size=65536 lazy_refcounts=off refcount_bits=16
[2019-07-15T23:20:56.132 CEST] [debug] running /usr/bin/qemu-img create -f qcow2 /var/lib/openqa/pool/9/raid/hd1 20G
[2019-07-15T23:20:56.338 CEST] [debug] Formatting '/var/lib/openqa/pool/9/raid/hd1', fmt=qcow2 size=21474836480 cluster_size=65536 lazy_refcounts=off refcount_bits=16
[2019-07-15T23:20:56.338 CEST] [debug] running /usr/bin/qemu-img create -f qcow2 /var/lib/openqa/pool/9/raid/hd2 20G
[2019-07-15T23:20:56.403 CEST] [debug] Formatting '/var/lib/openqa/pool/9/raid/hd2', fmt=qcow2 size=21474836480 cluster_size=65536 lazy_refcounts=off refcount_bits=16
[2019-07-15T23:20:56.403 CEST] [debug] running /usr/bin/qemu-img create -f qcow2 /var/lib/openqa/pool/9/raid/hd3 20G
[2019-07-15T23:20:56.507 CEST] [debug] Formatting '/var/lib/openqa/pool/9/raid/hd3', fmt=qcow2 size=21474836480 cluster_size=65536 lazy_refcounts=off refcount_bits=16

Then it starts the virtual machine, passing those four disks:

 [2019-07-15T23:20:56.651 CEST] [debug] starting: /usr/bin/qemu-system-x86_64 -only-migratable -chardev ringbuf,id=serial0,logfile=serial0,logappend=on -serial chardev:serial0 -soundhw ac97 -global isa-fdc.driveA= -m 1536 -cpu qemu64 -netdev user,id=qanet0 -device virtio-net,netdev=qanet0,mac=52:54:00:12:34:56 -boot once=d,menu=on,splash-time=5000 -device usb-ehci -device usb-tablet -smp 1 -enable-kvm -no-shutdown -vnc :99,share=force-shared -device virtio-serial -chardev socket,path=virtio_console,server,nowait,id=virtio_console,logfile=virtio_console.log,logappend=on -device virtconsole,chardev=virtio_console,name=org.openqa.console.virtio_console -chardev socket,path=qmp_socket,server,nowait,id=qmp_socket,logfile=qmp_socket.log,logappend=on -qmp chardev:qmp_socket -S -device virtio-scsi-pci,id=scsi0 -blockdev driver=file,node-name=hd0-file,filename=/var/lib/openqa/pool/9/raid/hd0,cache.no-flush=on -blockdev driver=qcow2,node-name=hd0,file=hd0-file,cache.no-flush=on -device virtio-blk,id=hd0-device,drive=hd0,serial=hd0 -blockdev driver=file,node-name=hd1-file,filename=/var/lib/openqa/pool/9/raid/hd1,cache.no-flush=on -blockdev driver=qcow2,node-name=hd1,file=hd1-file,cache.no-flush=on -device virtio-blk,id=hd1-device,drive=hd1,serial=hd1 -blockdev driver=file,node-name=hd2-file,filename=/var/lib/openqa/pool/9/raid/hd2,cache.no-flush=on -blockdev driver=qcow2,node-name=hd2,file=hd2-file,cache.no-flush=on -device virtio-blk,id=hd2-device,drive=hd2,serial=hd2 -blockdev driver=file,node-name=hd3-file,filename=/var/lib/openqa/pool/9/raid/hd3,cache.no-flush=on -blockdev driver=qcow2,node-name=hd3,file=hd3-file,cache.no-flush=on -device virtio-blk,id=hd3-device,drive=hd3,serial=hd3 -blockdev driver=file,node-name=cd0-overlay0-file,filename=/var/lib/openqa/pool/9/raid/cd0-overlay0,cache.no-flush=on -blockdev driver=qcow2,node-name=cd0-overlay0,file=cd0-overlay0-file,cache.no-flush=on -device scsi-cd,id=cd0-device,drive=cd0-overlay0,serial=cd0

All this means that we have to create four images representing those disks and make them available for openQA to use. To achieve that, let's look at what create_hdd_gnome does: it runs an installation and publishes the resulting image using PUBLISH_HDD_1. PUBLISH_HDD_1 holds the name of the published qcow2, which you can find and verify under /var/lib/openqa/factory/hdd of your local openQA once the job is done. Because we need a system with LVM and RAID1, we can instead use the lvm+RAID1 job, which sets up exactly that, and ask it to publish its images the same way create_hdd_gnome does, using PUBLISH_HDD_x once for each of the four disks the job creates. We can accomplish this with the clone-job script (openqa-clone-job) that openQA provides, as sketched below.

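A minimal sketch of such a clone, assuming your local openQA instance runs it (add --host if you want a different target); the job ID and the qcow2 names are placeholders, use whatever fits your setup:

 # clone the lvm+RAID1 job and ask it to publish all four disks
 openqa-clone-job --from https://openqa.opensuse.org <lvm_raid1_job_id> \
     PUBLISH_HDD_1=raid1-hd0.qcow2 \
     PUBLISH_HDD_2=raid1-hd1.qcow2 \
     PUBLISH_HDD_3=raid1-hd2.qcow2 \
     PUBLISH_HDD_4=raid1-hd3.qcow2
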
Once the cloned job finishes, if we check /var/lib/openqa/factory/hdd we should be able to see our four disks.
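For example (the file names here simply match the placeholder PUBLISH_HDD_x values used above):

 ls /var/lib/openqa/factory/hdd
 raid1-hd0.qcow2  raid1-hd1.qcow2  raid1-hd2.qcow2  raid1-hd3.qcow2
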
The next step is to find a job to clone that we can boot with those disks. For the scheduling we also pass YAML_SCHEDULE, which points to the YAML file with the schedule that we want. Of course the first module should now be boot_to_desktop, which boots from the first disk it finds via HDD_1, followed by a validation module, which can be whatever you want. The schedule will look something like the following:
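A minimal sketch of such a schedule, using the declarative scheduling of os-autoinst-distri-opensuse; the file name and the last module below are only examples, point the final entry to whichever test module you actually want to run:

 # schedule/raid1_validation.yaml  (file name is an example)
 name: raid1_validation
 description: >
   Boot the published lvm+RAID1 disks and run a custom validation module.
 schedule:
   - boot/boot_to_desktop
   - console/validate_lvm    # replace with your own validation module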

Create and save this file under /var/lib/openqa/share/tests/os-autoinst-distri-opensuse/schedule and then run a clone job along these lines:
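Again only a sketch, reusing the placeholder names from above; pick as clone source any job that boots from a hard disk image, and adjust the job ID and file names to your setup:

 openqa-clone-job --from https://openqa.opensuse.org <job_id_to_clone> \
     YAML_SCHEDULE=schedule/raid1_validation.yaml \
     BOOT_HDD_IMAGE=1 NUMDISKS=4 \
     HDD_1=raid1-hd0.qcow2 HDD_2=raid1-hd1.qcow2 \
     HDD_3=raid1-hd2.qcow2 HDD_4=raid1-hd3.qcow2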

Go to your openQA web UI and you should see the job running.
