OpenStack Linux image requirements

For a Linux-based image to have full functionality in an OpenStack Compute cloud, there are a few requirements. For some of these, the requirement can be fulfilled by installing the cloud-init package. Read this section before creating your own image to be sure that the image supports the OpenStack features you plan to use.

- Disk partitions and resize root partition on boot (cloud-init)
- No hard-coded MAC address information
- SSH server running
- Disable firewall
- Access instance using ssh public key (cloud-init)
- Process user data and other metadata (cloud-init)
- Paravirtualized Xen support in the Linux kernel (Xen hypervisor only, with Linux kernel version < 3.0)
Disk partitions and resize root partition on boot (cloud-init)

When you create a new Linux image, the first decision you need to make is how to partition the disks. The choice of partition method can affect the resizing functionality, as described below.

The size of the disk in a virtual machine image is fixed when you initially create the image. However, OpenStack lets you launch instances with different-sized drives by specifying different flavors. For example, if your image was created with a 5 GB disk and you launch an instance with the m1.small flavor, the resulting virtual machine instance has (by default) a primary disk of 10 GB. When an instance's disk is resized up, zeros are simply appended to the end.

Your image needs to be able to resize its partitions on boot to match the size requested by the user. Otherwise, whenever the disk size associated with the flavor exceeds the disk size your image was created with, you must manually resize the partitions after the instance boots in order to use the additional storage.

Xen: 1 ext3/ext4 partition (no LVM, no /boot, no swap)

If you are using the OpenStack XenAPI driver, the Compute service automatically adjusts the partition and filesystem for your instance on boot. Automatic resize occurs if all of the following are true:

- auto_disk_config=True is set as a property on the image in the Image Registry.
- The disk on the image has only one partition.
- The file system on that one partition is ext3 or ext4.

Therefore, if you are using Xen, we recommend that when you create your images, you create a single ext3 or ext4 partition (not managed by LVM). Otherwise, read on.
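For the XenAPI automatic resize described above, the auto_disk_config property can be set on an existing image with the glance client, along these lines (IMAGE_ID is a placeholder for your image's UUID, and the exact subcommand name may vary with your python-glanceclient version):

```shell
# Mark the image so the XenAPI driver resizes its single ext3/ext4
# partition on boot (IMAGE_ID is a placeholder).
glance image-update --property auto_disk_config=True IMAGE_ID

# Confirm the property was recorded on the image:
glance image-show IMAGE_ID | grep auto_disk_config
```

These commands talk to a running Image service, so run them from a host with your OpenStack credentials loaded.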
Non-Xen with cloud-init/cloud-tools: 1 ext3/ext4 partition (no LVM, no /boot, no swap)

Your image must be configured to deal with two issues:

- The image's partition table describes the original size of the image.
- The image's filesystem fills the original size of the image.

Then, during the boot process:

- The partition table must be modified to make it aware of the additional space:
  - If you are not using LVM, you must modify the table to extend the existing root partition to encompass this additional space.
  - If you are using LVM, you can add a new LVM entry to the partition table, create a new LVM physical volume, add it to the volume group, and extend the logical partition with the root volume.
- The root volume filesystem must be resized.

The simplest way to support this in your image is to install the cloud-utils package (which contains the growpart tool for extending partitions), the cloud-initramfs-tools package (which supports resizing the root partition on the first boot), and the cloud-init package into your image. With these installed, the image performs the root partition resize on boot (for example, in /etc/rc.local). These packages are in the Ubuntu and Debian package repositories, as well as the EPEL repository (for Fedora/RHEL/CentOS/Scientific Linux guests).

If you are not able to install cloud-initramfs-tools, Robert Plestenjak has a GitHub project called centos-image-resize that contains scripts that update a ramdisk by using growpart so that the image resizes properly on boot.

If you are able to install the cloud-utils and cloud-init packages, we recommend that when you create your images, you create a single ext3 or ext4 partition (not managed by LVM).

Non-Xen without cloud-init/cloud-tools: LVM

If you cannot install cloud-init and cloud-tools inside of your guest, and you want to support resize, you must write a script that your image runs on boot to modify the partition table.
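Installing the packages named above might look like the following inside the guest. Package names are those in the distribution archives at the time of writing and may differ for your release; on Ubuntu the boot-time resize piece ships as the cloud-initramfs-growroot binary package built from cloud-initramfs-tools:

```shell
# Debian/Ubuntu guest:
apt-get install cloud-init cloud-utils cloud-initramfs-growroot

# Fedora/RHEL/CentOS/Scientific Linux guest (enable the EPEL repository first):
yum install cloud-init cloud-utils cloud-initramfs-tools
```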
In this case, we recommend using LVM to manage your partitions. Due to a limitation in the Linux kernel (as of this writing), you cannot modify the partition table of a raw disk that has a partition currently mounted, but you can do this for LVM. Your script needs to do something like the following:

1. Detect whether there is any additional space on the disk (for example, by parsing the output of parted /dev/sda --script "print free").
2. Create a new LVM partition with the additional space (for example, parted /dev/sda --script "mkpart lvm ...").
3. Create a new physical volume (for example, pvcreate /dev/sda6).
4. Extend the volume group with this physical volume (for example, vgextend vg00 /dev/sda6).
5. Extend the logical volume containing the root partition by the amount of space (for example, lvextend /dev/mapper/node-root /dev/sda6).
6. Resize the root file system (for example, resize2fs /dev/mapper/node-root).

You do not need to have a /boot partition unless your image is an older Linux distribution that requires that /boot not be managed by LVM. You may elect to use a swap partition.
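The steps above can be sketched as a boot script. This is an illustrative outline, not a drop-in implementation: the device names (/dev/sda, /dev/sda6), the volume group name (vg00), and the logical volume (/dev/mapper/node-root) are the examples from the text and must match your image, and the partition START/END values are placeholders you must compute from the "print free" output.

```shell
#!/bin/sh
# Illustrative boot-time resize script for an LVM-based image.
# Adjust device and volume names for your own image.

DISK=/dev/sda
NEW_PART=/dev/sda6
VG=vg00
ROOT_LV=/dev/mapper/node-root

# 1. Detect additional space on the disk.
parted "$DISK" --script "print free"

# 2. Create a new LVM partition in that free space
#    (START and END are placeholders derived from the output above).
parted "$DISK" --script "mkpart lvm START END"
partprobe "$DISK"

# 3. Turn the new partition into an LVM physical volume.
pvcreate "$NEW_PART"

# 4. Extend the volume group with the new physical volume.
vgextend "$VG" "$NEW_PART"

# 5. Extend the logical volume that contains the root filesystem.
lvextend "$ROOT_LV" "$NEW_PART"

# 6. Grow the root filesystem to fill the enlarged logical volume.
resize2fs "$ROOT_LV"
```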
No hard-coded MAC address information

You must remove the network persistence rules in the image, as their presence causes the network interface in the instance to come up as an interface other than eth0. This is because your image has a record of the MAC address of the network interface card when it was first installed, and the MAC address is different each time the instance boots. You should alter the following files:

- Replace /etc/udev/rules.d/70-persistent-net.rules with an empty file (it contains the network persistence rules, including the MAC address).
- Replace /lib/udev/rules.d/75-persistent-net-generator.rules with an empty file (this generates the file above).
- Remove the HWADDR line from /etc/sysconfig/network-scripts/ifcfg-eth0 on Fedora-based images.

If you delete the network persistent rules files, you may get a udev kernel warning at boot time, which is why we recommend replacing them with empty files instead.
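The three changes above might be applied while preparing the image with commands along these lines, run inside the guest (the ifcfg-eth0 edit applies only to Fedora-based images):

```shell
# Blank the rules files rather than deleting them, to avoid the
# udev warning at boot mentioned above.
echo -n > /etc/udev/rules.d/70-persistent-net.rules
echo -n > /lib/udev/rules.d/75-persistent-net-generator.rules

# Fedora-based images only: drop the hard-coded MAC address line.
sed -i '/^HWADDR=/d' /etc/sysconfig/network-scripts/ifcfg-eth0
```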
Ensure ssh server runs You must install an ssh server into the image and ensure that it starts up on boot, or you will not be able to connect to your instance using ssh when it boots inside of OpenStack. This package is typically called openssh-server.
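Installing and enabling the ssh server might look like the following inside the guest. Note that the service name varies: it is ssh on Debian/Ubuntu and sshd on Fedora-based systems.

```shell
# Debian/Ubuntu guest: install the server and enable it at boot.
apt-get install openssh-server
update-rc.d ssh defaults

# Fedora/RHEL/CentOS guest:
yum install openssh-server
chkconfig sshd on
```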
Disable firewall In general, we recommend that you disable any firewalls inside of your image and use OpenStack security groups to restrict access to instances. The reason is that having a firewall installed on your instance can make it more difficult to troubleshoot networking issues if you cannot connect to your instance.
Access instance using ssh public key (cloud-init)

The typical way that users access virtual machines running on OpenStack is to ssh using public key authentication. For this to work, your virtual machine image must be configured to download the ssh public key from the OpenStack metadata service or config drive at boot time.

Using cloud-init to fetch the public key

The cloud-init package automatically fetches the public key from the metadata server and places the key in an account. The account varies by distribution. On Ubuntu-based virtual machines, the account is called "ubuntu". On Fedora-based virtual machines, the account is called "ec2-user".

You can change the name of the account used by cloud-init by editing the /etc/cloud/cloud.cfg file and adding a line with a different user. For example, to configure cloud-init to put the key in an account named "admin", edit the config file so it has the line:

user: admin

Writing a custom script to fetch the public key

If you are unable or unwilling to install cloud-init inside the guest, you can write a custom script to fetch the public key and add it to a user account. To fetch the ssh public key and add it to the root account, edit the /etc/rc.local file and add the following lines before the line "touch /var/lock/subsys/local". This code fragment is taken from the rackerjoe oz-image-build CentOS 6 template.

if [ ! -d /root/.ssh ]; then
  mkdir -p /root/.ssh
  chmod 700 /root/.ssh
fi

# Fetch public key using HTTP
ATTEMPTS=30
FAILED=0
while [ ! -f /root/.ssh/authorized_keys ]; do
  curl -f http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key > /tmp/metadata-key 2>/dev/null
  if [ $? -eq 0 ]; then
    cat /tmp/metadata-key >> /root/.ssh/authorized_keys
    chmod 0600 /root/.ssh/authorized_keys
    restorecon /root/.ssh/authorized_keys
    rm -f /tmp/metadata-key
    echo "Successfully retrieved public key from instance metadata"
    echo "*****************"
    echo "AUTHORIZED KEYS"
    echo "*****************"
    cat /root/.ssh/authorized_keys
    echo "*****************"
  else
    FAILED=$((FAILED + 1))
    if [ $FAILED -ge $ATTEMPTS ]; then
      echo "Failed to retrieve public key from instance metadata after $ATTEMPTS attempts, quitting"
      break
    fi
    echo "Could not retrieve public key from instance metadata (attempt #$FAILED/$ATTEMPTS), retrying in 5 seconds..."
    sleep 5
  fi
done

Note: Some VNC clients replace : (colon) with ; (semicolon) and _ (underscore) with - (hyphen). If editing a file over a VNC session, make sure it's http: not http; and authorized_keys not authorized-keys.
Process user data and other metadata (cloud-init)

In addition to the ssh public key, an image may need to retrieve additional information from OpenStack, such as user data that the user submitted when requesting the image. For example, you may wish to set the host name of the instance to the name given to the instance when it is booted. Or, you may wish to configure your image so that it executes user data content as a script on boot. This information is accessible via the metadata service or the config drive. Because the OpenStack metadata service is compatible with version 2009-04-04 of the Amazon EC2 metadata service, consult the Amazon EC2 documentation on Using Instance Metadata for details on how to retrieve user data. The easiest way to support this type of functionality is to install the cloud-init package into your image, which is configured by default to treat user data as an executable script and to set the host name.
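As a sketch, a script running inside an instance could fetch its user data and host name from the metadata service like this (the 169.254.169.254 address and the 2009-04-04 paths come from the EC2-compatible metadata API mentioned above):

```shell
# Retrieve the user data supplied when the instance was requested:
curl http://169.254.169.254/2009-04-04/user-data

# Retrieve the host name assigned to the instance:
curl http://169.254.169.254/2009-04-04/meta-data/local-hostname
```

These requests only succeed from inside a running instance, where the metadata address is reachable.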
Ensure image writes boot log to console

You must configure the image so that the kernel writes the boot log to the ttyS0 device. In particular, the console=ttyS0 argument must be passed to the kernel on boot.

If your image uses grub2 as the boot loader, there should be a line in the grub configuration file (for example, /boot/grub/grub.cfg) that looks something like this:

linux /boot/vmlinuz-3.2.0-49-virtual root=UUID=6d2231e4-0975-4f35-a94f-56738c1a8150 ro console=ttyS0

If console=ttyS0 does not appear, you need to modify your grub configuration. In general, you should not update grub.cfg directly, since it is automatically generated. Instead, edit /etc/default/grub and modify the value of the GRUB_CMDLINE_LINUX_DEFAULT variable:

GRUB_CMDLINE_LINUX_DEFAULT="console=ttyS0"

Next, update the grub configuration. On Debian-based operating systems such as Ubuntu, run:

$ sudo update-grub

On Fedora-based systems such as RHEL and CentOS, run:

$ sudo grub2-mkconfig -o /boot/grub2/grub.cfg
Paravirtualized Xen support in the kernel (Xen hypervisor only)

Prior to Linux kernel version 3.0, the mainline branch of the Linux kernel did not have support for paravirtualized Xen virtual machine instances (what Xen calls DomU guests). If you are running the Xen hypervisor with paravirtualization, and you want to create an image for an older Linux distribution that has a pre-3.0 kernel, you must ensure that the image boots a kernel that has been compiled with Xen support.
Managing the image cache Use options in nova.conf to control whether, and for how long, unused base images are stored in /var/lib/nova/instances/_base/. If you have configured live migration of instances, all your compute nodes share one common /var/lib/nova/instances/ directory. For information about libvirt images in OpenStack, refer to The life of an OpenStack libvirt image from Pádraig Brady.
Image cache management configuration options
Configuration option=Default value (Type) Description
preallocate_images=none (StrOpt) VM image preallocation mode: "none" => no storage provisioning is done up front; "space" => storage is fully allocated at instance start. If this is set to "space", the $instance_dir/ images will be fallocated to immediately determine if enough space is available, and to possibly improve VM I/O performance due to ongoing allocation avoidance and better locality of block allocations.
remove_unused_base_images=True (BoolOpt) Should unused base images be removed? When set to True, the intervals at which base images are removed are set with the following two settings. If set to False, base images are never removed by Compute.
remove_unused_original_minimum_age_seconds=86400 (IntOpt) Unused unresized base images younger than this will not be removed. Default is 86400 seconds, or 24 hours.
remove_unused_resized_minimum_age_seconds=3600 (IntOpt) Unused resized base images younger than this will not be removed. Default is 3600 seconds, or one hour.
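Put together, these options belong in the [DEFAULT] section of nova.conf. The fragment below simply restates the defaults from the table above, written out explicitly so each value is easy to override:

```ini
[DEFAULT]
# Image cache management (values shown are the defaults)
preallocate_images = none
remove_unused_base_images = True
remove_unused_original_minimum_age_seconds = 86400
remove_unused_resized_minimum_age_seconds = 3600
```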
To see how the settings affect the deletion of a running instance, check the directory where the images are stored:

$ sudo ls -lash /var/lib/nova/instances/_base/

Then look for the identifier in /var/log/compute/compute.log:

2012-02-18 04:24:17 41389 WARNING nova.virt.libvirt.imagecache [-] Unknown base file: /var/lib/nova/instances/_base/06a057b9c7b0b27e3b496f53d1e88810a0d1d5d3_20
2012-02-18 04:24:17 41389 INFO nova.virt.libvirt.imagecache [-] Removable base files: /var/lib/nova/instances/_base/06a057b9c7b0b27e3b496f53d1e88810a0d1d5d3 /var/lib/nova/instances/_base/06a057b9c7b0b27e3b496f53d1e88810a0d1d5d3_20
2012-02-18 04:24:17 41389 INFO nova.virt.libvirt.imagecache [-] Removing base file: /var/lib/nova/instances/_base/06a057b9c7b0b27e3b496f53d1e88810a0d1d5d3

Since 86400 seconds (24 hours) is the default for remove_unused_original_minimum_age_seconds, you can either wait for that time interval to see the base image removed, or set the value to a shorter time period in nova.conf. Restart all nova services after changing a setting in nova.conf.