For context ... I just built a new qcow image with Packer based on Ubuntu 22.04. Most of our custom Ansible roles are simply reused from Ubuntu 20.04, and we are finding a few discrepancies.
As part of the customisations we install a service that formats a drive with the label "ephemeral0". The problem is that on first boot, when this service runs, the device is apparently "busy":
```
Process '/usr/bin/unshare -m /usr/bin/snap auto-import --mount=/dev/vdb' failed with exit code 1.
14:32:46,509 - DataSourceConfigDrive.py[DEBUG]: devices=['/dev/sr0', '/dev/vdb'] dslist=['ConfigDrive', 'None']
Device /dev/vdb is in use. Cannot proceed with format operation.
WARNING: Device /dev/vdb already contains a 'vfat' superblock signature.
Device /dev/vdb is not a valid LUKS device.
```
Once I access the instance, the device is still in this state, and if I try to run any mount or format command (e.g. `cryptsetup luksFormat`) I just get something like:

```
WARNING: Device /dev/vdb already contains a 'vfat' superblock signature.
Device /dev/vdb is in use. Cannot proceed with format operation.
```
I've tried `lsof`, `mount`, and all the other tools suggested out there, but I cannot figure out what is holding on to my `/dev/vdb` device.
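Concretely, these are roughly the checks I ran from a root shell (device name taken from the log above); all of them come back empty:

```shell
# Probe the usual suspects in the default (visible) namespace.
lsof /dev/vdb      || echo "lsof: nothing has it open"
fuser -vm /dev/vdb || echo "fuser: no users found"
findmnt /dev/vdb   || echo "findmnt: not mounted"
grep vdb /proc/mounts || echo "not in /proc/mounts"
```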
Now, if I simply reboot the instance, the formatting service runs fine when it comes back up and I have my newly formatted LV; however, due to the nature of our environment, rebooting is not a valid workaround.
How can I figure out what is really accessing this device? Could it be held in a non-default namespace?
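To check the namespace theory, this is the kind of scan I have in mind (a sketch, assuming the culprit is a process in a different mount namespace, such as the `unshare -m ... snap auto-import` child from the boot log; `vdb` is the device name from above):

```shell
#!/bin/sh
# Compare every PID's mount namespace to this shell's. A process living in
# another mount namespace can keep /dev/vdb mounted there while lsof/fuser
# run from the default namespace see nothing.
dev=vdb                                    # device name from the log above
self_ns=$(readlink /proc/self/ns/mnt)
for p in /proc/[0-9]*; do
    ns=$(readlink "$p/ns/mnt" 2>/dev/null) || continue   # skip unreadable PIDs
    [ "$ns" = "$self_ns" ] && continue                   # same namespace as us
    pid=${p#/proc/}
    printf 'pid %s (%s) in %s\n' "$pid" \
        "$(tr '\0' ' ' < "$p/cmdline" 2>/dev/null)" "$ns"
    grep "$dev" "$p/mountinfo" 2>/dev/null               # device mounts in that ns
done
true   # the final grep may legitimately match nothing
```

If that turned up a PID, I assume `nsenter --mount --target <PID>` would let me look around (or unmount) inside that namespace, but I'd like to confirm this is the right approach.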