First Boot Automation in Image Builder

I often get asked about building an image using Image Builder that runs a command on its first boot. Today, I want to show you how you can achieve this by embedding a custom service into the image.

Embedding custom configuration files #

Before we dive into first boot automation, let’s start with something simpler. This year, Image Builder introduced a new feature that allows you to put custom files under the /etc directory. This enables you to easily configure packages included in the image. For example, if you want to enable password-less sudo for users in the wheel group, you can simply add extra sudoers configuration to /etc/sudoers.d using the files customization in your blueprint:

[[customizations.files]]
path = "/etc/sudoers.d/wheel-passwordless-sudo"
mode = "0400"
data = """
%wheel ALL=(ALL) NOPASSWD: ALL
"""

Embedding a custom first-boot service #

Image Builder applies the files customization before the services customization. This enables a very useful feature: a way to inject custom systemd services into the image. Moreover, with a bit of systemd magic, you can ensure that the service is only run during the first boot. This gives you a convenient way to perform any necessary initialization tasks during the initial boot of your machine.

Let’s take a look at a simple example: Imagine that you want to create a virtual machine with two disks: one for the system and another for data (e.g. mounted under /mnt/data). Image Builder can build the system disk, and while configuring a new VM, you can attach an additional disk drive to it. However, you still need to format the data disk and ensure it gets mounted on every boot.

To achieve this, you can define the following blueprint and save it as second-disk.toml:

name = "second-disk"

[[customizations.files]]
path = "/etc/systemd/system/prepare-data-disk.service"
data = """
[Unit]
Description=Prepare the data disk during the first boot
ConditionPathExists=!/var/lib/prepare-data-disk-first-boot

[Service]
Type=oneshot
ExecStart=mkfs.ext4 /dev/sdb
ExecStart=mkdir /mnt/data
ExecStart=mount /dev/sdb /mnt/data
ExecStart=bash -c "echo '/dev/sdb /mnt/data ext4 defaults 0 2' >>/etc/fstab"
ExecStartPost=touch /var/lib/prepare-data-disk-first-boot

[Install]
WantedBy=default.target
"""

[customizations.services]
enabled = ["prepare-data-disk"]

[[customizations.user]]
name = "user"
groups = ["wheel"]
key = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPB1jFl4p6FTBixHT6wOk6X8nj/Z7eoPNQE/M0wK485K ondrej@budai.cz"

Let’s have a closer look at the blueprint:

Defining the custom service #

[[customizations.files]] is used to create a custom systemd service under /etc/systemd/system.

The ExecStart options within the service handle the actual commands run during the first boot. In this case, the following steps are performed:

  1. A new ext4 filesystem is created on the /dev/sdb disk.
  2. The /mnt/data directory is created and the new filesystem is mounted there.
  3. To ensure that the filesystem is mounted on every boot, a persistent record is added to /etc/fstab.

This example assumes that the data disk is visible to the guest as /dev/sdb. However, it’s important to note that the device name can vary depending on the hypervisor or system configuration. I strongly encourage you to use a more stable method for referencing the disk, but that’s out of scope for this post.
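
One option, sketched here under the assumption that you attach the data disk via virtio, is to give it a serial number and reference the stable symlink that udev creates under /dev/disk/by-id. The serial name datadisk is an arbitrary example I made up for illustration; the service would then reference /dev/disk/by-id/virtio-datadisk instead of /dev/sdb:

# Attach the data disk via virtio-blk with a serial number
# (an alternative to the -drive line used later in this post)
-drive file=/var/tmp/data-disk,if=none,id=data,format=raw \
-device virtio-blk-pci,drive=data,serial=datadisk

# Inside the guest, udev then exposes the disk under a stable path:
#   /dev/disk/by-id/virtio-datadisk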

The ConditionPathExists=!/var/lib/prepare-data-disk-first-boot and ExecStartPost=touch /var/lib/prepare-data-disk-first-boot options ensure that the service is only started on the first boot. The semantics are quite simple: on the first boot, the /var/lib/prepare-data-disk-first-boot file doesn’t exist, so the ConditionPathExists condition (note the ! operator) allows the service to start. To prevent subsequent boots from triggering the service again, the file is created using the ExecStartPost=touch ... option after all commands have been executed. Quite elegant, right?

There are other methods to ensure that a systemd service is started only once. I picked this one from the Fedora CoreOS documentation.
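
For instance, systemd itself offers ConditionFirstBoot, which (roughly speaking) passes only when /etc/machine-id is missing or empty at boot. Whether that applies depends on how your image is built, so treat the following unit skeleton as a sketch rather than a drop-in replacement for the blueprint above:

[Unit]
Description=Run once on the very first boot
ConditionFirstBoot=yes
# systemd recommends ordering first-boot units before this target
Before=first-boot-complete.target
Wants=first-boot-complete.target

[Service]
Type=oneshot
ExecStart=/usr/bin/echo "Running first-boot tasks"

[Install]
WantedBy=default.target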

Enabling the custom service #

To enable the service, simply list it in the enabled array of the [customizations.services] section.

Adding a custom user #

To verify that the image does what you need, you can define a user in the blueprint using the [[customizations.user]] option. In this case, it also adds an SSH key to the user, so you can SSH into the machine and inspect it.

You can also use cloud-init for creating custom users if you are building a cloud image, or Ignition in the case of ostree images. I chose Image Builder for this task because it felt like the easiest method for the purposes of this blog post.
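
For comparison, a roughly equivalent cloud-init user-data snippet could look like the one below. It is only a sketch; the exact keys you need depend on your cloud environment:

#cloud-config
users:
  - name: user
    groups: wheel
    ssh_authorized_keys:
      - ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPB1jFl4p6FTBixHT6wOk6X8nj/Z7eoPNQE/M0wK485K ondrej@budai.cz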

Verifying #

Now, let’s build the image. In this example, we will build a simple qcow2 image intended for booting with qemu, or tools built upon it such as libvirt, kubevirt, and OpenStack.

# Push the blueprint
$ composer-cli blueprints push second-disk.toml

# Start a qcow2 build
$ composer-cli compose start second-disk qcow2
Compose 14107a91-edbd-419b-820a-cb813f8063d6 added to the queue

# Wait for the build to finish
$ composer-cli compose list | grep 14107a91-edbd-419b-820a-cb813f8063d6
14107a91-edbd-419b-820a-cb813f8063d6   RUNNING    second-disk   0.0.1     qcow2

# ...
$ composer-cli compose list | grep 14107a91-edbd-419b-820a-cb813f8063d6
14107a91-edbd-419b-820a-cb813f8063d6   FINISHED   second-disk   0.0.1     qcow2

# Download the image
$ composer-cli compose image 14107a91-edbd-419b-820a-cb813f8063d6 --filename image.qcow2

After the image build has finished, you can create an empty 1 GiB file using the truncate command and then boot the image using qemu:

$ truncate -s 1G /var/tmp/data-disk
$ qemu-system-x86_64 \
  -M accel=kvm \
  -m 2048 \
  -cpu host \
  -net nic,model=virtio \
  -net user,hostfwd=tcp::2222-:22 \
  -drive file=/var/tmp/data-disk,index=1,media=disk,format=raw \
  image.qcow2

Let me briefly explain some of the qemu options I used:

  • The -net arguments set up basic networking for the VM. They include forwarding the VM’s port 22 to the host’s port 2222, which allows easy SSH connection to the machine.
  • The -drive argument adds the secondary disk to the machine, backed by the empty file I’ve created using truncate.

Now, I can ssh into the machine using the key that I injected into the image and verify that my second drive got successfully formatted and mounted:

$ ssh user@localhost -p 2222
vm $ lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sda      8:0    0    5G  0 disk
|-sda1   8:1    0    1M  0 part
|-sda2   8:2    0  200M  0 part /boot/efi
|-sda3   8:3    0  500M  0 part /boot
`-sda4   8:4    0  4.3G  0 part /
sdb      8:16   0    1G  0 disk /mnt/data
sr0     11:0    1 1024M  0 rom
zram0  252:0    0  1.9G  0 disk [SWAP]
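
You can also confirm that the persistent record was appended to /etc/fstab; the grep below should print the exact line written by the service:

vm $ grep /mnt/data /etc/fstab
/dev/sdb /mnt/data ext4 defaults 0 2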

Let’s make sure that the changes survive a reboot:

vm $ systemctl reboot
$ ssh user@localhost -p 2222
vm $ lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sda      8:0    0    5G  0 disk
|-sda1   8:1    0    1M  0 part
|-sda2   8:2    0  200M  0 part /boot/efi
|-sda3   8:3    0  500M  0 part /boot
`-sda4   8:4    0  4.3G  0 part /
sdb      8:16   0    1G  0 disk /mnt/data
sr0     11:0    1 1024M  0 rom
zram0  252:0    0  1.9G  0 disk [SWAP]
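
If you are curious why the service no longer ran, the marker file created by ExecStartPost should now be present, which keeps the ConditionPathExists check failing on every subsequent boot:

vm $ ls /var/lib/prepare-data-disk-first-boot
/var/lib/prepare-data-disk-first-boot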

It seems like our first-boot service worked! 🎉

With this pattern, you should be able to run arbitrary code on the first boot, allowing you to further customize your instances. I’m planning to write a follow-up to this post to show you more tricks, so stay tuned. 📻