Clean up of Compute Admin guide

* Rearranges configuration, moving "post install config" to config chapter.
* Removes cactus-to-diablo section.
* Adds basic image management chapter.
* Needs more info about configuring ec2, configuring nova-api, understanding policy.json.
* Fixes date formatting problem that prevented builds from working.

Change-Id: I7af2b426140a262f7a9b4ec62e7307925b1b0101
annegentle 2012-03-20 12:27:54 -05:00
parent 56d7670082
commit 81aebb337c
9 changed files with 856 additions and 3242 deletions

@ -0,0 +1,379 @@
<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="ch_image_mgmt">
<title>Image Management</title>
<para>You can use the OpenStack Image Service to discover, register, and retrieve virtual machine images.
The service includes a RESTful API that lets you query VM image metadata and retrieve the actual image with
HTTP requests, or you can use a client class in your Python code to accomplish the same tasks.
</para>
<para>
VM images made available through the OpenStack Image Service can be stored in a variety of locations, from simple
file systems to object-storage systems such as OpenStack Object Storage. The service can also use S3 storage, either
directly or through an OpenStack Object Storage S3 interface.</para>
<para>The backend stores that OpenStack Image Service can work with are as follows:</para>
<itemizedlist><listitem><para>OpenStack Object Storage - OpenStack Object Storage is the highly-available object storage project in OpenStack.</para></listitem>
<listitem><para>Filesystem - The default backend that OpenStack Image Service uses to store virtual machine images is the filesystem backend. This simple backend writes image files to the local filesystem.</para></listitem>
<listitem><para>S3 - This backend allows OpenStack Image Service to store virtual machine images in Amazon's S3 service.</para></listitem>
<listitem><para>HTTP - OpenStack Image Service can read virtual machine images that are available via HTTP somewhere on the Internet. This store is read-only.</para></listitem></itemizedlist>
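The backend is selected in the Glance API configuration. As an illustration only, the following is a minimal sketch of a glance-api.conf fragment; the option names are assumptions based on Glance configuration of this era, so check your installed glance-api.conf for the exact spelling:

```ini
# Hypothetical glance-api.conf fragment -- verify option names against
# your installed configuration before relying on them.
default_store = file
# Where the filesystem store writes image files:
filesystem_store_datadir = /var/lib/glance/images/
# For the Object Storage or S3 stores, set default_store = swift or s3
# and fill in the corresponding store credential options.
```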
<section xml:id="deleting-instances">
<title>Deleting Instances</title>
<para>When you are done with an instance, you can tear it down
using the following command (replace $server-id with the instance ID, which you can
look up with nova list or euca-describe-instances):</para>
<para><literallayout class="monospaced">nova delete $server-id</literallayout></para></section>
<section xml:id="pausing-and-suspending-instances">
<title>Pausing and Suspending Instances</title>
<para>Since version 1.1 of the API, it is possible to pause and suspend
instances.</para>
<warning>
<para>
Pausing and suspending instances are supported only on KVM-based hypervisors and XenServer/XCP hypervisors.
</para>
</warning>
<para>Pause/Unpause: stores the state of the VM in memory (RAM).</para>
<para>Suspend/Resume: stores the state of the VM on disk.</para>
<para>It can be useful for an administrator to suspend instances when maintenance is
planned, or when the instances are not frequently used. Suspending an instance frees up
memory and vCPUs, while pausing keeps the instance in memory in a "frozen" state.
Suspension can be compared to a "hibernation" mode.</para>
<section xml:id="pausing-instance">
<title>Pausing instance</title>
<para>To pause an instance:</para>
<literallayout class="monospaced">nova pause $server-id</literallayout>
<para>To resume a paused instance:</para>
<literallayout class="monospaced">nova unpause $server-id</literallayout>
</section>
<section xml:id="suspending-instance">
<title>Suspending instance</title>
<para>To suspend an instance:</para>
<literallayout class="monospaced">nova suspend $server-id</literallayout>
<para>To resume a suspended instance:</para>
<literallayout class="monospaced">nova resume $server-id</literallayout>
</section>
</section>
<section xml:id="creating-custom-images">
<info><author>
<orgname>CSS Corp- Open Source Services</orgname>
</author><title>Image management</title></info>
<para>by <link xlink:href="http://www.csscorp.com/">CSS Corp Open Source Services</link> </para>
<para>There are several pre-built images for OpenStack available from various sources. You can download such images and use them to get familiar with OpenStack. You can refer to <link xlink:href="http://docs.openstack.org/cactus/openstack-compute/admin/content/starting-images.html">http://docs.openstack.org/cactus/openstack-compute/admin/content/starting-images.html</link> for details on using such images.</para>
<para>For any production deployment, you will likely want the ability to bundle custom images, with a custom set of applications or configuration. This chapter guides you through the process of creating Linux images of Debian- and Red Hat-based distributions from scratch. We have also covered an approach to bundling Windows images.</para>
<para>There are some minor differences in the way you bundle a Linux image, depending on the distribution. Ubuntu makes it very easy by providing the cloud-init package, which takes care of instance configuration at launch time. cloud-init handles importing SSH keys for password-less login, setting the hostname, and so on. The instance acquires instance-specific configuration from nova-compute by connecting to the metadata service running on 169.254.169.254.</para>
<para>When creating the image of a distribution that does not have cloud-init or an equivalent package, you need to take care of importing the SSH keys and similar tasks by running a set of commands at boot time from rc.local.</para>
<para>The process used for Ubuntu and Fedora is largely the same with a few minor differences, which are explained below.</para>
<para>In both cases, the documentation below assumes that you have a working KVM installation to use for creating the images. We are using the machine called &#8216;client1&#8217; as explained in the chapter on &#8220;Installation and Configuration&#8221; for this purpose.</para>
<para>The approach explained below gives you disk images that represent a disk without any partitions. Nova-compute can resize such disks (including resizing the file system) based on the instance type chosen at launch time. These images cannot have a &#8216;bootable&#8217; flag, so it is mandatory to have associated kernel and ramdisk images; nova-compute uses them when launching the instance.</para>
<para>However, we have also added a small section towards the end of the chapter about creating bootable images with multiple partitions that can be used by nova to launch an instance without the need for kernel and ramdisk images. The caveat is that while nova-compute can resize such disks at launch time, the file system size is not altered, so for all practical purposes such disks are not resizable.</para>
<section xml:id="creating-a-linux-image"><title>Creating a Linux Image &#8211; Ubuntu &amp; Fedora</title>
<para>The first step is to create a raw image on client1. This will represent the main hard disk of the virtual machine, so make sure to give it as much space as you will need.</para>
<literallayout class="monospaced">
kvm-img create -f raw server.img 5G
</literallayout>
<simplesect><title>OS Installation</title>
<para>Download the ISO file of the Linux distribution you want to install in the image. The instructions below were tested on Ubuntu 11.04 Natty Narwhal 64-bit server and Fedora 14 64-bit. Most of the instructions refer to Ubuntu; the points of difference between Ubuntu and Fedora are mentioned wherever required.</para>
<literallayout class="monospaced">
wget http://releases.ubuntu.com/natty/ubuntu-11.04-server-amd64.iso
</literallayout>
<para>Boot a KVM instance with the OS installer ISO in the virtual CD-ROM. This starts the installation process. The command below also sets up a VNC server on display :0.</para>
<literallayout class="monospaced">
sudo kvm -m 256 -cdrom ubuntu-11.04-server-amd64.iso -drive file=server.img,if=scsi,index=0 -boot d -net nic -net user -nographic -vnc :0
</literallayout>
<para>Connect to the VM through VNC (use display number :0) and finish the installation.</para>
<para>For example, where 10.10.10.4 is the IP address of client1:</para>
<literallayout class="monospaced">
vncviewer 10.10.10.4:0
</literallayout>
<para>During the installation of Ubuntu, create a single ext4 partition mounted on &#8216;/&#8217;. Do not create a swap partition.</para>
<para>In the case of Fedora 14, the installation does not progress unless you create a swap partition, so go ahead and create one.</para>
<para>After finishing the installation, relaunch the VM by executing the following command.</para>
<literallayout class="monospaced">
sudo kvm -m 256 -drive file=server.img,if=scsi,index=0,boot=on -boot c -net nic -net user -nographic -vnc :0
</literallayout>
<para>At this point, you can add all the packages you want to have installed, update the installation, add users and make any configuration changes you want in your image.</para>
<para>At a minimum, for Ubuntu you should run the following commands:</para>
<literallayout class="monospaced">
sudo apt-get update
sudo apt-get upgrade
sudo apt-get install openssh-server cloud-init
</literallayout>
<para>For Fedora, run the following commands as root:</para>
<literallayout class="monospaced">
yum update
yum install openssh-server
chkconfig sshd on
</literallayout>
<para>Also remove the network persistence rules from /etc/udev/rules.d, because their presence causes the network interface in the instance to come up as an interface other than eth0.</para>
<literallayout class="monospaced">
sudo rm -f /etc/udev/rules.d/70-persistent-net.rules
</literallayout>
<para>Shut down the virtual machine and proceed with the next steps.</para>
</simplesect>
<simplesect><title>Extracting the EXT4 partition</title>
<para>The image that is uploaded to OpenStack needs to be an ext4 filesystem image. Here are the steps to create an ext4 filesystem image from the raw image, i.e., server.img:</para>
<literallayout class="monospaced">
sudo losetup -f server.img
sudo losetup -a
</literallayout>
<para>You should see an output like this:</para>
<literallayout class="monospaced">
/dev/loop0: [0801]:16908388 ($filepath)
</literallayout>
<para>Observe the name of the loop device (/dev/loop0 in our setup), where $filepath is the path to server.img.</para>
<para>Now we need to find out the starting sector of the partition. Run:</para>
<literallayout class="monospaced">
sudo fdisk -cul /dev/loop0
</literallayout>
<para>You should see an output like this:</para>
<literallayout class="monospaced">
Disk /dev/loop0: 5368 MB, 5368709120 bytes
149 heads, 8 sectors/track, 8796 cylinders, total 10485760 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00072bd4
Device Boot Start End Blocks Id System
/dev/loop0p1 * 2048 10483711 5240832 83 Linux
</literallayout>
<para>Make a note of the starting sector of the /dev/loop0p1 partition, i.e., the partition whose Id is 83. Multiply this number by 512 to obtain the byte offset. In this case: 2048 x 512 = 1048576.</para>
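The arithmetic above can be scripted. The following shell sketch (an illustration, not part of the original procedure) extracts the start sector from a line of fdisk output and computes the byte offset to pass to losetup -o:

```shell
# Sample fdisk output line, matching the listing above; with a live disk
# you would capture it via: sudo fdisk -cul /dev/loop0 | grep '^/dev/loop0p1'
fdisk_line='/dev/loop0p1   *        2048    10483711     5240832   83  Linux'
# Field 3 holds the start sector here because the boot flag "*" occupies
# field 2; if the partition has no boot flag, it would be field 2 instead.
start=$(echo "$fdisk_line" | awk '{print $3}')
offset=$((start * 512))
echo "$offset"    # 2048 * 512 = 1048576
```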
<para>Detach the loop device:</para>
<literallayout class="monospaced">
sudo losetup -d /dev/loop0
</literallayout>
<para>Now attach only the partition (/dev/loop0p1) of server.img, by passing the previously calculated value to the -o option:</para>
<literallayout class="monospaced">
sudo losetup -f -o 1048576 server.img
sudo losetup -a
</literallayout>
<para>You&#8217;ll see a message like this:</para>
<literallayout class="monospaced">
/dev/loop0: [0801]:16908388 ($filepath) offset 1048576
</literallayout>
<para>Make a note of the loop device (/dev/loop0 in our setup), where $filepath is the path to server.img.</para>
<para>Copy the entire partition to a new .raw file</para>
<literallayout class="monospaced">
sudo dd if=/dev/loop0 of=serverfinal.img
</literallayout>
<para>Now we have our ext4 filesystem image, i.e., serverfinal.img.</para>
<para>Detach the loop device:</para>
<literallayout class="monospaced">
sudo losetup -d /dev/loop0
</literallayout>
</simplesect>
<simplesect><title>Tweaking /etc/fstab</title>
<para>You need to tweak /etc/fstab to make it suitable for a cloud instance. Nova-compute may resize the disk when launching an instance, based on the instance type chosen, and this can invalidate the UUID of the disk. Hence we use the file system label as the identifier for the partition instead of the UUID.</para>
<para>Loop-mount serverfinal.img by running:</para>
<literallayout class="monospaced">
sudo mount -o loop serverfinal.img /mnt
</literallayout>
<para>Edit /mnt/etc/fstab and modify the line that mounts the root partition (which may look like the following)</para>
<programlisting>
UUID=e7f5af8d-5d96-45cc-a0fc-d0d1bde8f31c / ext4 errors=remount-ro 0 1
</programlisting>
<para>to</para>
<programlisting>
LABEL=uec-rootfs / ext4 defaults 0 0
</programlisting>
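If you script your image preparation, the same edit can be made non-interactively. This helper is a sketch (not from the original guide) that assumes the root entry is the only UUID-based ext4 line in the file; run it against /mnt/etc/fstab while the image is loop-mounted:

```shell
# Hypothetical helper: rewrite the root entry of an fstab file to use the
# filesystem label instead of the UUID.
# Usage while the image is loop-mounted: relabel_root_fstab /mnt/etc/fstab
relabel_root_fstab() {
    sed -i 's#^UUID=[^ ]* */ *ext4.*#LABEL=uec-rootfs    /    ext4    defaults    0 0#' "$1"
}
```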
</simplesect>
<simplesect><title>Fetching Metadata in Fedora</title>
<para>Since Fedora does not ship with cloud-init or an equivalent, you need to take a few steps to have the instance fetch metadata such as SSH keys.</para>
<para>Edit the /etc/rc.local file and add the following lines before the line &#8220;touch /var/lock/subsys/local&#8221;:</para>
<programlisting>
depmod -a
modprobe acpiphp
# simple attempt to get the user ssh key using the meta-data service
mkdir -p /root/.ssh
echo &gt;&gt; /root/.ssh/authorized_keys
curl -m 10 -s http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key | grep 'ssh-rsa' &gt;&gt; /root/.ssh/authorized_keys
echo &quot;AUTHORIZED_KEYS:&quot;
echo &quot;************************&quot;
cat /root/.ssh/authorized_keys
echo &quot;************************&quot;
</programlisting>
</simplesect></section>
<simplesect><title>Kernel and Initrd for OpenStack</title>
<para>Copy the kernel and the initrd image from /mnt/boot to your home directory. These are used later for creating and uploading a complete virtual machine image to OpenStack.</para>
<literallayout class="monospaced">
sudo cp /mnt/boot/vmlinuz-2.6.38-7-server /home/localadmin
sudo cp /mnt/boot/initrd.img-2.6.38-7-server /home/localadmin
</literallayout>
<para>Unmount the loop-mounted partition:</para>
<literallayout class="monospaced">
sudo umount /mnt
</literallayout>
<para>Change the filesystem label of serverfinal.img to &#8216;uec-rootfs&#8217;</para>
<literallayout class="monospaced">
sudo tune2fs -L uec-rootfs serverfinal.img
</literallayout>
<para>Now we have all the components of the image ready to be uploaded to the OpenStack image service.</para>
</simplesect>
<simplesect><title>Registering with OpenStack</title>
<para>The last step is to upload the images to the OpenStack image service, Glance. The files that need to be uploaded for the above sample setup of Ubuntu are: vmlinuz-2.6.38-7-server, initrd.img-2.6.38-7-server, and serverfinal.img.</para>
<para>Run the following command:</para>
<literallayout class="monospaced">
uec-publish-image -t image --kernel-file vmlinuz-2.6.38-7-server --ramdisk-file initrd.img-2.6.38-7-server amd64 serverfinal.img bucket1
</literallayout>
<para>For Fedora, the process is similar. Make sure that you use the right kernel and initrd files extracted above.</para>
<para>uec-publish-image, like several other commands from euca2ools, returns the prompt immediately. However, the upload process takes some time, and the images are usable only after the process completes. You can keep checking the status using the &#8216;euca-describe-images&#8217; command, as mentioned below.</para>
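Because the upload runs in the background, a small polling loop can save repeated manual checks. This helper is a sketch (not part of euca2ools) that reruns a status command until its output contains the expected word:

```shell
# Hypothetical helper: run a status command repeatedly until its output
# contains the wanted word, e.g.:
#   wait_for_state available euca-describe-images
wait_for_state() {
    wanted=$1; shift
    until "$@" | grep -q "$wanted"; do
        sleep 5
    done
}
```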
</simplesect>
<simplesect><title>Bootable Images</title>
<para>You can register bootable disk images without associating kernel and ramdisk images. When you do not need the flexibility of using the same disk image with different kernel/ramdisk images, bootable disk images greatly simplify the process of bundling and registering images. However, the caveats mentioned in the introduction to this chapter apply. Note that the instructions below use server.img, and you can skip all the cumbersome steps related to extracting the single ext4 partition.</para>
<literallayout class="monospaced">
nova-manage image image_register server.img --public=T --arch=amd64
</literallayout>
</simplesect>
<simplesect><title>Image Listing</title>
<para>The status of the images that have been uploaded can be viewed with the nova image-list command. The output should look like this:</para>
<literallayout class="monospaced">nova image-list</literallayout>
<programlisting>
+----+---------------------------------------------+--------+
| ID | Name | Status |
+----+---------------------------------------------+--------+
| 6 | ttylinux-uec-amd64-12.1_2.6.35-22_1-vmlinuz | ACTIVE |
| 7 | ttylinux-uec-amd64-12.1_2.6.35-22_1-initrd | ACTIVE |
| 8 | ttylinux-uec-amd64-12.1_2.6.35-22_1.img | ACTIVE |
+----+---------------------------------------------+--------+
</programlisting>
</simplesect></section>
<section xml:id="creating-a-windows-image"><title>Creating a Windows Image</title>
<para>The first step is to create a raw image on client1. This will represent the main hard disk of the virtual machine, so make sure to give it as much space as you will need.</para>
<literallayout class="monospaced">
kvm-img create -f raw windowsserver.img 20G
</literallayout>
<para>OpenStack presents the disk through a VIRTIO interface when launching the instance, so the OS needs to have VIRTIO drivers. By default, the Windows Server 2008 ISO does not include them, so download a virtual floppy image containing VIRTIO drivers from the following location</para>
<para><link xlink:href="http://alt.fedoraproject.org/pub/alt/virtio-win/latest/images/bin/">http://alt.fedoraproject.org/pub/alt/virtio-win/latest/images/bin/</link></para>
<para>and attach it during the installation.</para>
<para>Start the installation by running</para>
<literallayout class="monospaced">
sudo kvm -m 1024 -cdrom win2k8_dvd.iso -drive file=windowsserver.img,if=virtio,boot=on -fda virtio-win-1.1.16.vfd -boot d -nographic -vnc :0
</literallayout>
<para>When the installation prompts you to choose a hard disk device, you won't see any devices available. Click &#8220;Load drivers&#8221; at the bottom left and load the drivers from A:\i386\Win2008.</para>
<para>After the installation is over, boot into it once, install any additional applications you need, and make any configuration changes you require. Also ensure that RDP is enabled, as that is the only way you can connect to a running Windows instance. The Windows firewall needs to be configured to allow incoming ICMP and RDP connections.</para>
<para>For OpenStack to allow incoming RDP connections, use the euca-authorize command to open up port 3389, as described in the chapter on &#8220;Security&#8221;.</para>
<para>Shut down the VM and upload the image to OpenStack:</para>
<literallayout class="monospaced">
nova-manage image image_register windowsserver.img --public=T --arch=x86
</literallayout>
</section>
<section xml:id="creating-images-from-running-instances">
<title>Creating images from running instances with KVM and Xen</title>
<para>
It is possible to create an image from a running instance on KVM and Xen. This is a convenient way to spawn pre-configured instances, update them according to your needs, and re-image them.
The process to create an image from a running instance is quite simple:
<itemizedlist>
<listitem>
<para>
<emphasis role="bold">Pre-requisites</emphasis>
</para>
<para>In order to use the feature properly, you need qemu-img version 0.14
or later. The imaging feature creates the image by copying from a snapshot
(e.g., qemu-img convert -f qcow2 -O qcow2 -s $snapshot_name
$instance-disk).</para>
<para>On Debian-like distros, you can check the version by running:
<literallayout class="monospaced">dpkg -l | grep qemu</literallayout></para>
<programlisting>
ii qemu 0.14.0~rc1+noroms-0ubuntu4~ppalucid1 dummy transitional package from qemu to qemu
ii qemu-common 0.14.0~rc1+noroms-0ubuntu4~ppalucid1 qemu common functionality (bios, documentati
ii qemu-kvm 0.14.0~rc1+noroms-0ubuntu4~ppalucid1 Full virtualization on i386 and amd64 hardwa
</programlisting>
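Rather than eyeballing the dpkg output, the version requirement can be checked in a script. This helper is a sketch (not from the original guide) using sort -V, under the assumption that qemu-img --version prints a dotted version number:

```shell
# Hypothetical guard: true when version $1 is at least version $2.
version_at_least() {
    [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}
# e.g.:
#   version_at_least "$(qemu-img --version | grep -oE '[0-9]+(\.[0-9]+)+' | head -n1)" 0.14
```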
</listitem>
<listitem>
<para>
<emphasis role="bold">Write data to disk</emphasis></para>
<para>
Before creating the image, we need to make sure we are not missing any
buffered content that has not yet been written to the instance's disk. To
flush it, connect to the instance, run
<command>sync</command>, then exit.
</para>
</listitem>
<listitem>
<para>
<emphasis role="bold">Create the image</emphasis>
</para>
<para>In order to create the image, we first need to obtain the server ID:
<literallayout class="monospaced">nova list</literallayout><programlisting>
+-----+------------+--------+--------------------+
| ID | Name | Status | Networks |
+-----+------------+--------+--------------------+
| 116 | Server 116 | ACTIVE | private=20.10.0.14 |
+-----+------------+--------+--------------------+
</programlisting>
Based on the output, we run:
<literallayout class="monospaced">nova image-create 116 Image-116</literallayout>
The command then performs the image creation (by creating a qemu snapshot) and automatically uploads the image to your repository.
<note>
<para>
The image that is created is flagged as "Private" (for Glance: is_public=False). Thus, the image is available only to the tenant.
</para>
</note>
</para>
</listitem>
<listitem>
<para>
<emphasis role="bold">Check image status</emphasis>
</para>
<para>After a while, the image turns from the "SAVING" state to the
"ACTIVE" state. You can check the progress with:
<literallayout class="monospaced">nova image-list</literallayout><programlisting>
+----+---------------------------------------------+--------+
| ID | Name | Status |
+----+---------------------------------------------+--------+
| 20 | Image-116 | ACTIVE |
| 6 | ttylinux-uec-amd64-12.1_2.6.35-22_1-vmlinuz | ACTIVE |
| 7 | ttylinux-uec-amd64-12.1_2.6.35-22_1-initrd | ACTIVE |
| 8 | ttylinux-uec-amd64-12.1_2.6.35-22_1.img | ACTIVE |
+----+---------------------------------------------+--------+
</programlisting>
</para>
</listitem>
<listitem>
<para>
<emphasis role="bold">Create an instance from the image</emphasis>
</para>
<para>You can now create an instance based on this image as you normally do for other images:<literallayout class="monospaced">nova boot --flavor 1 --image 20 New_server</literallayout>
</para>
</listitem>
<listitem>
<para>
<emphasis role="bold">
Troubleshooting
</emphasis>
</para>
<para>Normally, it takes no more than five minutes to go from the
"SAVING" state to the "ACTIVE" state. If it takes longer, here are several hints:</para>
<para>- The feature does not work while a volume (via
nova-volume) is attached to the instance. Thus, you should detach the volume first,
create the image, and then re-attach the volume.</para>
<para>- Make sure the version of qemu you are using is not older than 0.14.
Older versions produce an "unknown option -s" error in nova-compute.log.</para>
<para>- Look into nova-api.log and nova-compute.log for extra
information.</para>
</listitem>
</itemizedlist>
</para>
</section>
</chapter>

@ -26,7 +26,7 @@
</copyright>
<releaseinfo>trunk</releaseinfo>
<productname>OpenStack Compute</productname>
<pubdate>2012-3-11</pubdate>
<pubdate>2012-03-11</pubdate>
<legalnotice role="apache2">
<annotation>
<remark>Copyright details are filled in by the template.</remark>
@ -44,7 +44,7 @@
</abstract>
<revhistory>
<revision>
<date>2012-03-11</date>
<date>2012-03-20</date>
<revdescription>
<itemizedlist spacing="compact">
<listitem>
@ -80,10 +80,11 @@
<xi:include href="computeinstall.xml"/>
<xi:include href="computeconfigure.xml"/>
<xi:include href="../common/ch_identity_mgmt.xml"/>
<xi:include href="../common/ch_image_mgmt.xml"/>
<xi:include href="computehypervisors.xml"/>
<xi:include href="computenetworking.xml"/>
<xi:include href="computeadmin.xml"/>
<xi:include href="interfaces.xml"/>
<xi:include href="computeinterfaces.xml"/>
<xi:include href="computeautomation.xml"/>
<xi:include href="computetutorials.xml"/>
<xi:include href="../common/support.xml"/>

@ -173,360 +173,6 @@ chmod 600 test.pem
</literallayout>
</para>
</section>
<section xml:id="deleting-instances">
<title>Deleting Instances</title>
<para>When you are done playing with an instance, you can tear the instance down
using the following command (replace $instanceid with the instance IDs from above or
look it up with euca-describe-instances):</para>
<para><literallayout class="monospaced">nova delete $server-id</literallayout></para></section>
<section xml:id="pausing-and-suspending-instances">
<title>Pausing and Suspending Instances</title>
<para>Since the release of the API in its 1.1 version, it is possible to pause and suspend
instances.</para>
<warning>
<para>
Pausing and Suspending instances only apply to KVM-based hypervisors and XenServer/XCP Hypervisors.
</para>
</warning>
<para> Pause/ Unpause : Stores the content of the VM in memory (RAM).</para>
<para>Suspend/ Resume : Stores the content of the VM on disk.</para>
<para>It can be interesting for an administrator to suspend instances, if a maintenance is
planned; or if the instance are not frequently used. Suspending an instance frees up
memory and vCPUS, while pausing keeps the instance running, in a "frozen" state.
Suspension could be compared to an "hibernation" mode.</para>
<section xml:id="pausing-instance">
<title>Pausing instance</title>
<para>To pause an instance :</para>
<literallayout class="monospaced">nova pause $server-id </literallayout>
<para>To resume a paused instance :</para>
<literallayout class="monospaced">nova unpause $server-id </literallayout>
</section>
<section xml:id="suspending-instance">
<title>Suspending instance</title>
<para> To suspend an instance :</para>
<literallayout class="monospaced">nova suspend $server-id </literallayout>
<para>To resume a suspended instance :</para>
<literallayout class="monospaced">nova resume $server-id </literallayout>
</section>
</section>
<section xml:id="creating-custom-images">
<info><author>
<orgname>CSS Corp- Open Source Services</orgname>
</author><title>Image management</title></info>
<para>by <link xlink:href="http://www.csscorp.com/">CSS Corp Open Source Services</link> </para>
<para>There are several pre-built images for OpenStack available from various sources. You can download such images and use them to get familiar with OpenStack. You can refer to <link xlink:href="http://docs.openstack.org/cactus/openstack-compute/admin/content/starting-images.html">http://docs.openstack.org/cactus/openstack-compute/admin/content/starting-images.html</link> for details on using such images.</para>
<para>For any production deployment, you may like to have the ability to bundle custom images, with a custom set of applications or configuration. This chapter will guide you through the process of creating Linux images of Debian and Redhat based distributions from scratch. We have also covered an approach to bundling Windows images.</para>
<para>There are some minor differences in the way you would bundle a Linux image, based on the distribution. Ubuntu makes it very easy by providing cloud-init package, which can be used to take care of the instance configuration at the time of launch. cloud-init handles importing ssh keys for password-less login, setting hostname etc. The instance acquires the instance specific configuration from Nova-compute by connecting to a meta data interface running on 169.254.169.254.</para>
<para>While creating the image of a distro that does not have cloud-init or an equivalent package, you may need to take care of importing the keys etc. by running a set of commands at boot time from rc.local.</para>
<para>The process used for Ubuntu and Fedora is largely the same with a few minor differences, which are explained below.</para>
<para>In both cases, the documentation below assumes that you have a working KVM installation to use for creating the images. We are using the machine called &#8216;client1&#8242; as explained in the chapter on &#8220;Installation and Configuration&#8221; for this purpose.</para>
<para>The approach explained below will give you disk images that represent a disk without any partitions. Nova-compute can resize such disks ( including resizing the file system) based on the instance type chosen at the time of launching the instance. These images cannot have &#8216;bootable&#8217; flag and hence it is mandatory to have associated kernel and ramdisk images. These kernel and ramdisk images need to be used by nova-compute at the time of launching the instance.</para>
<para>However, we have also added a small section towards the end of the chapter about creating bootable images with multiple partitions that can be be used by nova to launch an instance without the need for kernel and ramdisk images. The caveat is that while nova-compute can re-size such disks at the time of launching the instance, the file system size is not altered and hence, for all practical purposes, such disks are not re-sizable.</para>
<section xml:id="creating-a-linux-image"><title>Creating a Linux Image &#8211; Ubuntu &amp; Fedora</title>
<para>The first step would be to create a raw image on Client1. This will represent the main HDD of the virtual machine, so make sure to give it as much space as you will need.</para>
<literallayout class="monospaced">
kvm-img create -f raw server.img 5G
</literallayout>
<simplesect><title>OS Installation</title>
<para>Download the iso file of the Linux distribution you want installed in the image. The instructions below are tested on Ubuntu 11.04 Natty Narwhal 64-bit server and Fedora 14 64-bit. Most of the instructions refer to Ubuntu. The points of difference between Ubuntu and Fedora are mentioned wherever required.</para>
<literallayout class="monospaced">
wget http://releases.ubuntu.com/natty/ubuntu-11.04-server-amd64.iso
</literallayout>
<para>Boot a KVM Instance with the OS installer ISO in the virtual CD-ROM. This will start the installation process. The command below also sets up a VNC display at port 0</para>
<literallayout class="monospaced">
sudo kvm -m 256 -cdrom ubuntu-11.04-server-amd64.iso -drive file=server.img,if=scsi,index=0 -boot d -net nic -net user -nographic -vnc :0
</literallayout>
<para>Connect to the VM through VNC (use display number :0) and finish the installation.</para>
<para>For Example, where 10.10.10.4 is the IP address of client1:</para>
<literallayout class="monospaced">
vncviewer 10.10.10.4 :0
</literallayout>
<para>During the installation of Ubuntu, create a single ext4 partition mounted on &#8216;/&#8217;. Do not create a swap partition.</para>
<para>In the case of Fedora 14, the installation will not progress unless you create a swap partition. Please go ahead and create a swap partition.</para>
<para>After finishing the installation, relaunch the VM by executing the following command.</para>
<literallayout class="monospaced">
sudo kvm -m 256 -drive file=server.img,if=scsi,index=0,boot=on -boot c -net nic -net user -nographic -vnc :0
</literallayout>
<para>At this point, you can add all the packages you want to have installed, update the installation, add users and make any configuration changes you want in your image.</para>
<para>At a minimum, for Ubuntu you should run the following commands:</para>
<literallayout class="monospaced">
sudo apt-get update
sudo apt-get upgrade
sudo apt-get install openssh-server cloud-init
</literallayout>
<para>For Fedora, run the following commands as root:</para>
<literallayout class="monospaced">
yum update
yum install openssh-server
chkconfig sshd on
</literallayout>
<para>Also remove the network persistence rules from /etc/udev/rules.d, because their presence causes the network interface in the instance to come up as an interface other than eth0.</para>
<literallayout class="monospaced">
sudo rm -f /etc/udev/rules.d/70-persistent-net.rules
</literallayout>
<para>Shut down the virtual machine and proceed with the next steps.</para>
</simplesect>
<simplesect><title>Extracting the EXT4 partition</title>
<para>The image that is uploaded to OpenStack needs to be an ext4 filesystem image. Here are the steps to create an ext4 filesystem image from the raw image, server.img.</para>
<literallayout class="monospaced">
sudo losetup -f server.img
sudo losetup -a
</literallayout>
<para>You should see an output like this:</para>
<literallayout class="monospaced">
/dev/loop0: [0801]:16908388 ($filepath)
</literallayout>
<para>Note the name of the loop device (/dev/loop0 in our setup), where $filepath is the path to the mounted .raw file.</para>
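<para>If you are scripting these steps, the loop device name can be parsed out of the losetup -a output instead of being read by eye. A minimal sketch (the sample output line below is illustrative; on a real system, pipe losetup -a itself through the same filter):</para>

```shell
# Extract the loop device backing server.img from "losetup -a"-style output.
# Field 1 (before the first colon) is the device name.
sample='/dev/loop0: [0801]:16908388 (/home/localadmin/server.img)'
loopdev=$(printf '%s\n' "$sample" | awk -F: '/server\.img/ {print $1}')
echo "$loopdev"    # prints /dev/loop0
```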
<para>Now we need to find out the starting sector of the partition. Run:</para>
<literallayout class="monospaced">
sudo fdisk -cul /dev/loop0
</literallayout>
<para>You should see an output like this:</para>
<literallayout class="monospaced">
Disk /dev/loop0: 5368 MB, 5368709120 bytes
149 heads, 8 sectors/track, 8796 cylinders, total 10485760 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00072bd4
Device Boot Start End Blocks Id System
/dev/loop0p1 * 2048 10483711 5240832 83 Linux
</literallayout>
<para>Make a note of the starting sector of the /dev/loop0p1 partition, that is, the partition whose Id is 83. Multiply this number by 512 to obtain the byte offset of the partition. In this case: 2048 x 512 = 1048576.</para>
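<para>The same calculation can be done in the shell so that a script does not hard-code the offset; the values here match the fdisk output above:</para>

```shell
# Byte offset of the partition = start sector * sector size.
start_sector=2048    # "Start" column for /dev/loop0p1 in the fdisk output
sector_size=512      # logical sector size reported by fdisk
offset=$((start_sector * sector_size))
echo "$offset"       # prints 1048576; used below as the -o value for losetup
```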
<para>Detach the loop0 device:</para>
<literallayout class="monospaced">
sudo losetup -d /dev/loop0
</literallayout>
<para>Now attach only the partition (/dev/loop0p1) of server.img whose starting sector we noted down, by passing the -o option with the offset calculated above:</para>
<literallayout class="monospaced">
sudo losetup -f -o 1048576 server.img
sudo losetup -a
</literallayout>
<para>You&#8217;ll see a message like this:</para>
<literallayout class="monospaced">
/dev/loop0: [0801]:16908388 ($filepath) offset 1048576
</literallayout>
<para>Make a note of the loop device (/dev/loop0 in our setup), where $filepath is the path to the mounted .raw file.</para>
<para>Copy the entire partition to a new .raw file:</para>
<literallayout class="monospaced">
sudo dd if=/dev/loop0 of=serverfinal.img
</literallayout>
<para>Now we have our ext4 filesystem image, serverfinal.img.</para>
<para>Detach the loop0 device:</para>
<literallayout class="monospaced">
sudo losetup -d /dev/loop0
</literallayout>
</simplesect>
<simplesect><title>Tweaking /etc/fstab</title>
<para>You will need to tweak /etc/fstab to make it suitable for a cloud instance. Nova-compute may resize the disk when launching an instance, depending on the instance type chosen, and resizing can invalidate the UUID of the disk. Hence, use the filesystem label instead of the UUID as the identifier for the partition.</para>
<para>Loop-mount serverfinal.img by running:</para>
<literallayout class="monospaced">
sudo mount -o loop serverfinal.img /mnt
</literallayout>
<para>Edit /mnt/etc/fstab and modify the line that mounts the root partition (which may look like the following)</para>
<programlisting>
UUID=e7f5af8d-5d96-45cc-a0fc-d0d1bde8f31c / ext4 errors=remount-ro 0 1
</programlisting>
<para>to</para>
<programlisting>
LABEL=uec-rootfs / ext4 defaults 0 0
</programlisting>
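<para>The fstab change can also be made non-interactively with sed. A hedged sketch, demonstrated on a scratch copy of the file; point FSTAB at /mnt/etc/fstab (and keep a backup) to apply it to the real image:</para>

```shell
# Replace the UUID-based root entry with a label-based one.
FSTAB=$(mktemp)    # scratch copy for demonstration; use /mnt/etc/fstab on the image
printf 'UUID=e7f5af8d-5d96-45cc-a0fc-d0d1bde8f31c / ext4 errors=remount-ro 0 1\n' > "$FSTAB"
sed -i 's|^UUID=[^ \t]*[ \t]*/[ \t].*|LABEL=uec-rootfs    /    ext4    defaults    0 0|' "$FSTAB"
cat "$FSTAB"    # prints: LABEL=uec-rootfs    /    ext4    defaults    0 0
```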
</simplesect>
<simplesect><title>Fetching Metadata in Fedora</title>
<para>Because Fedora does not ship with cloud-init or an equivalent, you need to take a few steps to have the instance fetch metadata, such as SSH keys.</para>
<para>Edit the /etc/rc.local file and add the following lines before the line “touch /var/lock/subsys/local”:</para>
<programlisting>
depmod -a
modprobe acpiphp
# simple attempt to get the user ssh key using the meta-data service
mkdir -p /root/.ssh
echo &gt;&gt; /root/.ssh/authorized_keys
curl -m 10 -s http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key | grep 'ssh-rsa' &gt;&gt; /root/.ssh/authorized_keys
echo &quot;AUTHORIZED_KEYS:&quot;
echo &quot;************************&quot;
cat /root/.ssh/authorized_keys
echo &quot;************************&quot;
</programlisting>
</simplesect></section>
<simplesect><title>Kernel and Initrd for OpenStack</title>
<para>Copy the kernel and the initrd image from /mnt/boot to your home directory. These will be used later for creating and uploading a complete virtual image to OpenStack.</para>
<literallayout class="monospaced">
sudo cp /mnt/boot/vmlinuz-2.6.38-7-server /home/localadmin
sudo cp /mnt/boot/initrd.img-2.6.38-7-server /home/localadmin
</literallayout>
<para>Unmount the loop-mounted partition:</para>
<literallayout class="monospaced">
sudo umount /mnt
</literallayout>
<para>Change the filesystem label of serverfinal.img to &#8216;uec-rootfs&#8217;</para>
<literallayout class="monospaced">
sudo tune2fs -L uec-rootfs serverfinal.img
</literallayout>
<para>Now, we have all the components of the image ready to be uploaded to the OpenStack Image Service.</para>
</simplesect>
<simplesect><title>Registering with OpenStack</title>
<para>The last step is to upload the images to the OpenStack Image Service, Glance. The files that need to be uploaded for the above sample setup of Ubuntu are: vmlinuz-2.6.38-7-server, initrd.img-2.6.38-7-server, and serverfinal.img.</para>
<para>Run the following command:</para>
<literallayout class="monospaced">
uec-publish-image -t image --kernel-file vmlinuz-2.6.38-7-server --ramdisk-file initrd.img-2.6.38-7-server amd64 serverfinal.img bucket1
</literallayout>
<para>For Fedora, the process is similar. Make sure that you use the right kernel and initrd files extracted above.</para>
<para>uec-publish-image, like several other commands from euca2ools, returns immediately. However, the upload process takes some time, and the images are usable only after the process completes. You can check the status with the &#8216;euca-describe-images&#8217; command, as mentioned below.</para>
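<para>Because the upload is asynchronous, a script should poll until the image reaches the usable state rather than assume it is ready. A generic hedged sketch; wait_for_status is an illustrative helper, and the echo stub stands in for a real pipeline that extracts the state from euca-describe-images or nova image-list:</para>

```shell
# Poll a status-printing command until it reports the wanted state or times out.
wait_for_status() {
    want=$1; tries=$2; shift 2
    i=0
    while [ "$i" -lt "$tries" ]; do
        status=$("$@")                      # e.g. a pipeline printing SAVING/ACTIVE
        [ "$status" = "$want" ] && return 0
        i=$((i + 1))
        sleep 1
    done
    return 1
}

# Stub for illustration; a real call would parse the image listing instead.
wait_for_status ACTIVE 5 echo ACTIVE && echo "image is ready"
```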
</simplesect>
<simplesect><title>Bootable Images</title>
<para>You can register bootable disk images without associating kernel and ramdisk images. When you do not need the flexibility of combining the same disk image with different kernel/ramdisk images, bootable disk images greatly simplify the process of bundling and registering images. However, the caveats mentioned in the introduction to this chapter apply. Note that the instructions below use server.img, so you can skip the steps related to extracting the single ext4 partition.</para>
<literallayout class="monospaced">
nova-manage image image_register server.img --public=T --arch=amd64
</literallayout>
</simplesect>
<simplesect><title>Image Listing</title>
<para>You can view the status of the uploaded images with the nova image-list command. The output should look like this:</para>
<literallayout class="monospaced">nova image-list</literallayout>
<programlisting>
+----+---------------------------------------------+--------+
| ID | Name | Status |
+----+---------------------------------------------+--------+
| 6 | ttylinux-uec-amd64-12.1_2.6.35-22_1-vmlinuz | ACTIVE |
| 7 | ttylinux-uec-amd64-12.1_2.6.35-22_1-initrd | ACTIVE |
| 8 | ttylinux-uec-amd64-12.1_2.6.35-22_1.img | ACTIVE |
+----+---------------------------------------------+--------+
</programlisting>
</simplesect></section>
<section xml:id="creating-a-windows-image"><title>Creating a Windows Image</title>
<para>The first step is to create a raw image on Client1. This image will serve as the main hard disk of the virtual machine, so make sure to give it as much space as you will need.</para>
<literallayout class="monospaced">
kvm-img create -f raw windowsserver.img 20G
</literallayout>
<para>OpenStack presents the disk to the instance through a VIRTIO interface, so the OS needs drivers for VIRTIO. By default, the Windows Server 2008 ISO does not include VIRTIO drivers, so download a virtual floppy image containing VIRTIO drivers from the following location:</para>
<para><link xlink:href="http://alt.fedoraproject.org/pub/alt/virtio-win/latest/images/bin/">http://alt.fedoraproject.org/pub/alt/virtio-win/latest/images/bin/</link></para>
<para>and attach it during the installation.</para>
<para>Start the installation by running</para>
<literallayout class="monospaced">
sudo kvm -m 1024 -cdrom win2k8_dvd.iso -drive file=windowsserver.img,if=virtio,boot=on -fda virtio-win-1.1.16.vfd -boot d -nographic -vnc :0
</literallayout>
<para>When the installation prompts you to choose a hard disk device, you won't see any devices available. Click “Load drivers” at the bottom left and load the drivers from A:\i386\Win2008.</para>
<para>After the installation is over, boot into it once, install any additional applications you need, and make any configuration changes you want. Also ensure that RDP is enabled, because that is the only way you can connect to a running Windows instance. The Windows firewall needs to be configured to allow incoming ICMP and RDP connections.</para>
<para>For OpenStack to allow incoming RDP connections, use the euca-authorize command to open port 3389, as described in the &#8220;Security&#8221; chapter.</para>
<para>Shut down the VM and upload the image to OpenStack:</para>
<literallayout class="monospaced">
nova-manage image image_register windowsserver.img --public=T --arch=x86
</literallayout>
</section>
<section xml:id="creating-images-from-running-instances">
<title>Creating images from running instances with KVM and Xen</title>
<para>
It is possible to create an image from a running instance on KVM and Xen. This is a convenient way to spawn pre-configured instances, update them according to your needs, and re-image the instances.
The process to create an image from a running instance is quite simple:
<itemizedlist>
<listitem>
<para>
<emphasis role="bold">Pre-requisites</emphasis>
</para>
<para>To use the feature properly, you will need qemu-img version 0.14
or later. The imaging feature creates the image file by copying from a
snapshot (e.g. qemu-img convert -f qcow2 -O qcow2 -s $snapshot_name
$instance-disk).</para>
<para>On Debian-like distros, you can check the version by running:
<literallayout class="monospaced">dpkg -l | grep qemu</literallayout></para>
<programlisting>
ii qemu 0.14.0~rc1+noroms-0ubuntu4~ppalucid1 dummy transitional pacakge from qemu to qemu
ii qemu-common 0.14.0~rc1+noroms-0ubuntu4~ppalucid1 qemu common functionality (bios, documentati
ii qemu-kvm 0.14.0~rc1+noroms-0ubuntu4~ppalucid1 Full virtualization on i386 and amd64 hardwa
</programlisting>
</listitem>
<listitem>
<para>
<emphasis role="bold">Write data to disk</emphasis></para>
<para>
Before creating the image, we need to make sure we are not missing any
buffered content that has not yet been written to the instance's disk. To
flush the buffers, connect to the instance, run
<command>sync</command>, and then exit.
</para>
</listitem>
<listitem>
<para>
<emphasis role="bold">Create the image</emphasis>
</para>
<para>To create the image, we first need to obtain the server ID:
<literallayout class="monospaced">nova list</literallayout><programlisting>
+-----+------------+--------+--------------------+
| ID | Name | Status | Networks |
+-----+------------+--------+--------------------+
| 116 | Server 116 | ACTIVE | private=20.10.0.14 |
+-----+------------+--------+--------------------+
</programlisting>
Based on the output, we run:
<literallayout class="monospaced">nova image-create 116 Image-116</literallayout>
The command then performs the image creation (by taking a qemu snapshot) and automatically uploads the image to your repository.
<note>
<para>
The image that is created is flagged as "Private" (for Glance: is_public=False). Thus, the image is available only to the tenant.
</para>
</note>
</para>
</listitem>
<listitem>
<para>
<emphasis role="bold">Check image status</emphasis>
</para>
<para> After a while, the image turns from a "SAVING" state to an "ACTIVE"
one. You can check the progress with:
<literallayout class="monospaced">nova image-list</literallayout><programlisting>
+----+---------------------------------------------+--------+
| ID | Name | Status |
+----+---------------------------------------------+--------+
| 20 | Image-116 | ACTIVE |
| 6 | ttylinux-uec-amd64-12.1_2.6.35-22_1-vmlinuz | ACTIVE |
| 7 | ttylinux-uec-amd64-12.1_2.6.35-22_1-initrd | ACTIVE |
| 8 | ttylinux-uec-amd64-12.1_2.6.35-22_1.img | ACTIVE |
+----+---------------------------------------------+--------+
</programlisting>
</para>
</listitem>
<listitem>
<para>
<emphasis role="bold">Create an instance from the image</emphasis>
</para>
<para>You can now create an instance based on this image as you normally do for other images:<literallayout class="monospaced">nova boot --flavor 1 --image 20 New_server</literallayout>
</para>
</listitem>
<listitem>
<para>
<emphasis role="bold">
Troubleshooting
</emphasis>
</para>
<para>Typically, going from the "SAVING" to the "ACTIVE" state takes no
more than five minutes. If it takes longer, here are several hints: </para>
<para>- The feature does not work while a volume (via
nova-volume) is attached to the instance. Detach the volume first,
create the image, and then re-attach the volume.</para>
<para>- Make sure the version of qemu you are using is not older than
0.14. An older version produces an "unknown option -s" error in nova-compute.log.</para>
<para>- Look into nova-api.log and nova-compute.log for extra
information.</para>
</listitem>
</itemizedlist>
</para>
</section>
<section xml:id="understanding-the-compute-service-architecture">
<title>Understanding the Compute Service Architecture</title>
<para>The OpenStack system has several key projects that are separate
installations but can work together depending on your cloud needs: OpenStack
Compute, OpenStack Object Storage, and OpenStack Image Store. There are basic configuration
decisions to make, and the <link xlink:href="http://docs.openstack.org/trunk/openstack-compute/install/content/">OpenStack Install Guide</link>
covers a basic walkthrough.</para>
<section xml:id="configuring-openstack-compute-basics">
<?dbhtml stop-chunking?>
<title>Post-Installation Configuration for OpenStack Compute</title>
<para>Configuring your Compute installation involves many
configuration files - the <filename>nova.conf</filename> file,
the api-paste.ini file, and related Image and Identity
management configuration files. This section contains the basics
for a simple multi-node installation, but Compute can be
configured many ways. You can find networking options and
hypervisor options described in separate chapters.</para>
<section xml:id="setting-flags-in-nova-conf-file">
<title>Setting Configuration Options in the
<filename>nova.conf</filename> File</title>
<para>The configuration file <filename>nova.conf</filename> is
installed in <filename>/etc/nova</filename> by default. A
default set of options are already configured in
<filename>nova.conf</filename> when you install manually. </para>
<para>Starting with the default file, you must define the
following required items in
<filename>/etc/nova/nova.conf</filename>. The flag variables
are described below. You can place comments in the
<filename>nova.conf</filename> file by entering a new line
with a <literal>#</literal> sign at the beginning of the line.
To see a listing of all possible flag settings, refer to
<link xlink:href="http://wiki.openstack.org/NovaConfigOptions">http://wiki.openstack.org/NovaConfigOptions</link>.</para>
<table rules="all">
<caption>Description of <filename>nova.conf</filename> flags (not
comprehensive)</caption>
<thead>
<tr>
<td>Flag</td>
<td>Description</td>
</tr>
</thead>
<tbody>
<tr>
<td><literal>sql_connection</literal></td>
<td>SQL Alchemy connect string (reference); Location of OpenStack
Compute SQL database</td>
</tr>
<tr>
<td><literal>s3_host</literal></td>
<td>IP address; Location where OpenStack Compute is hosting the
objectstore service, which will contain the virtual machine images
and buckets</td>
</tr>
<tr>
<td><literal>rabbit_host</literal></td>
<td>IP address; Location of RabbitMQ server</td>
</tr>
<tr>
<td><literal>verbose</literal></td>
<td>Set to <literal>1</literal> to turn on; Optional but helpful
during initial setup</td>
</tr>
<tr>
<td><literal>network_manager</literal></td>
<td><para>Configures how your controller will communicate with
additional OpenStack Compute nodes and virtual machines.
Options:</para><itemizedlist>
<listitem>
<para>
<literal>nova.network.manager.FlatManager</literal>
</para>
<para>Simple, non-VLAN networking</para>
</listitem>
<listitem>
<para>
<literal>nova.network.manager.FlatDHCPManager</literal>
</para>
<para>Flat networking with DHCP</para>
</listitem>
<listitem>
<para>
<literal>nova.network.manager.VlanManager</literal>
</para>
<para>VLAN networking with DHCP; this is the default if no
network manager is defined in nova.conf.</para>
</listitem>
</itemizedlist></td>
</tr>
<tr>
<td><literal>fixed_range</literal></td>
<td>IP address/range; Network prefix for the IP network that all
the projects for future VM guests reside on. Example:
<literal>192.168.0.0/16</literal></td>
</tr>
<tr>
<td><literal>ec2_host</literal></td>
<td>IP address; Indicates where the <command>nova-api</command>
service is installed.</td>
</tr>
<tr>
<td><literal>ec2_url</literal></td>
<td>Url; Indicates the service for EC2 requests.</td>
</tr>
<tr>
<td><literal>osapi_host</literal></td>
<td>IP address; Indicates where the <command>nova-api</command>
service is installed.</td>
</tr>
<tr>
<td><literal>network_size</literal></td>
<td>Number value; Number of addresses in each private subnet.</td>
</tr>
<tr>
<td><literal>glance_api_servers</literal></td>
<td>IP and port; Address for Image Service.</td>
</tr>
<tr>
<td><literal>use_deprecated_auth</literal></td>
<td>If this flag is present, the Cactus method of authentication
is used with the novarc file containing credentials.</td>
</tr>
</tbody>
</table>
<para>Here is a simple example <filename>nova.conf</filename> file for a
small private cloud, with all the cloud controller services, database
server, and messaging server on the same server.</para>
<programlisting>
--dhcpbridge_flagfile=/etc/nova/nova.conf
--dhcpbridge=/bin/nova-dhcpbridge
--logdir=/var/log/nova
--state_path=/var/lib/nova
--verbose
--use_deprecated_auth
--ec2_host=184.106.239.134
--ec2_url=http://184.106.239.134:8773/services/Cloud
--osapi_host=http://184.106.239.134
--s3_host=184.106.239.134
--rabbit_host=184.106.239.134
--fixed_range=192.168.0.0/16
--network_size=8
--glance_api_servers=184.106.239.134:9292
--routing_source_ip=184.106.239.134
--sql_connection=mysql://nova:notnova@184.106.239.134/nova
</programlisting>
<para>Create a “nova” group, so you can set permissions on the
configuration file:</para>
<screen><prompt>$</prompt> <userinput>sudo addgroup nova</userinput></screen>
<para>The <filename>nova.conf</filename> file should have its owner
set to <literal>root:nova</literal>, and mode set to
<literal>0640</literal>, since the file contains your MySQL server's
username and password. You also want to ensure that the
<literal>nova</literal> user belongs to the <literal>nova</literal>
group.</para>
<screen><prompt>$</prompt> <userinput>sudo usermod -g nova nova</userinput>
<prompt>$</prompt> <userinput>chown -R root:nova /etc/nova</userinput>
<prompt>$</prompt> <userinput>chmod 640 /etc/nova/nova.conf</userinput>
</screen>
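<para>You can verify the result afterwards with stat. A quick check, shown here against a scratch file; run the same stat command against /etc/nova/nova.conf on the real system:</para>

```shell
# Confirm the mode is 0640 using stat's octal output.
f=$(mktemp)
chmod 640 "$f"
mode=$(stat -c '%a' "$f")
echo "$mode"    # prints 640
```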
</section>
<section xml:id="setting-up-openstack-compute-environment-on-the-compute-node">
<title>Setting Up OpenStack Compute Environment on the Compute
Node</title>
<para>These are the commands you run to ensure the database schema is
current, and then set up a user and project, if you are using built-in
auth with the <literal>--use_deprecated_auth</literal> flag rather than
the Identity Service:</para>
<screen><prompt>$</prompt> <userinput>nova-manage db sync</userinput>
<prompt>$</prompt> <userinput>nova-manage user admin <replaceable>&lt;user_name&gt;</replaceable></userinput>
<prompt>$</prompt> <userinput>nova-manage project create <replaceable>&lt;project_name&gt; &lt;user_name&gt;</replaceable></userinput>
<prompt>$</prompt> <userinput>nova-manage network create <replaceable>&lt;network-label&gt; &lt;project-network&gt; &lt;number-of-networks-in-project&gt; &lt;addresses-in-each-network&gt;</replaceable></userinput>
</screen>
<para>Here is an example of what this looks like with real values
entered:</para>
<screen><prompt>$</prompt> <userinput>nova-manage db sync</userinput>
<prompt>$</prompt> <userinput>nova-manage user admin dub</userinput>
<prompt>$</prompt> <userinput>nova-manage project create dubproject dub</userinput>
<prompt>$</prompt> <userinput>nova-manage network create novanet 192.168.0.0/24 1 256</userinput></screen>
<para>For this example, the network is a <literal>/24</literal>
because that falls inside the <literal>/16</literal> range that was set in
<literal>fixed_range</literal> in <filename>nova.conf</filename>.
Currently, there can only be one network, and this setup uses all the
IP addresses available in a <literal>/24</literal>. You can choose any
valid network size that you would like.</para>
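<para>As a quick sanity check (shell arithmetic, not an OpenStack command), the address count implied by a prefix length is 2 to the power of (32 minus the prefix):</para>

```shell
# Number of addresses in a subnet = 2^(32 - prefix_length).
prefix=24
addresses=$((1 << (32 - prefix)))
echo "$addresses"    # prints 256, matching the 256 passed to nova-manage network create
```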
<para>The nova-manage service assumes that the first IP address in your
range is the network address (such as 192.168.0.0), that the 2nd IP is your gateway
(192.168.0.1), and that the broadcast address is the very last IP in the range
you defined (192.168.0.255). If this is not the case, you will need to
manually edit the <literal>networks</literal> table in the SQL database.</para>
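<para>For a /24 like the example above, the addresses that nova-manage assumes can be derived mechanically. A small sketch using plain shell string handling (valid for /24 networks only):</para>

```shell
# Derive the addresses nova-manage assumes for 192.168.0.0/24.
net=192.168.0.0
prefix=${net%.*}            # first three octets: 192.168.0
gateway=${prefix}.1         # assumed gateway (second IP)
broadcast=${prefix}.255     # assumed broadcast (last IP in a /24)
echo "$net $gateway $broadcast"    # prints 192.168.0.0 192.168.0.1 192.168.0.255
```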
<para>When you run the <command>nova-manage network create</command>
command, entries are made in the <literal>networks</literal> and
<literal>fixed_ips</literal> tables. However, one of the networks listed
in the <literal>networks</literal> table needs to be marked as bridge in
order for the code to know that a bridge exists. The network in the Nova
networks table is marked as bridged automatically for Flat
Manager.</para>
</section>
<section xml:id="creating-credentials"><title>Creating Credentials</title>
<para>The credentials you will use to launch
instances, bundle images, and all the other assorted
API functions can be sourced in a single file, such as
creating one called /creds/openrc. </para>
<para>Here's an example openrc file you can download from
the Dashboard in Settings > Project Settings >
Download RC File. </para>
<para>
<programlisting>#!/bin/bash
# *NOTE*: Using the 2.0 *auth api* does not mean that compute api is 2.0. We
# will use the 1.1 *compute api*
export OS_AUTH_URL=http://50.56.12.206:5000/v2.0
export OS_TENANT_ID=27755fd279ce43f9b17ad2d65d45b75c
export OS_USERNAME=vish
export OS_PASSWORD=$OS_PASSWORD_INPUT
export OS_AUTH_USER=norm
export OS_AUTH_KEY=$OS_PASSWORD_INPUT
export OS_AUTH_TENANT=27755fd279ce43f9b17ad2d65d45b75c
export OS_AUTH_STRATEGY=keystone
</programlisting>
</para>
<para>You also may want to enable EC2 access for the
euca2ools. Here is an example ec2rc file for enabling
EC2 access with the required credentials.</para>
<para>
<programlisting>export NOVA_KEY_DIR=/root/creds/
export EC2_ACCESS_KEY="EC2KEY:USER"
export EC2_SECRET_KEY="SECRET_KEY"
export EC2_URL="http://$NOVA_API_IP:8773/services/Cloud"
export S3_URL="http://$NOVA_API_IP:3333"
export EC2_USER_ID=42 # nova does not use user id, but bundling requires it
export EC2_PRIVATE_KEY=${NOVA_KEY_DIR}/pk.pem
export EC2_CERT=${NOVA_KEY_DIR}/cert.pem
export NOVA_CERT=${NOVA_KEY_DIR}/cacert.pem
export EUCALYPTUS_CERT=${NOVA_CERT} # euca-bundle-image seems to require this set
alias ec2-bundle-image="ec2-bundle-image --cert ${EC2_CERT} --privatekey ${EC2_PRIVATE_KEY} --user 42 --ec2cert ${NOVA_CERT}"
alias ec2-upload-bundle="ec2-upload-bundle -a ${EC2_ACCESS_KEY} -s ${EC2_SECRET_KEY} --url ${S3_URL} --ec2cert ${NOVA_CERT}"</programlisting>
</para>
<para>Lastly, here is an example openrc file that works
with nova client and ec2
tools.</para><programlisting>export OS_PASSWORD=${ADMIN_PASSWORD:-secrete}
export OS_AUTH_URL=${OS_AUTH_URL:-http://$SERVICE_HOST:5000/v2.0}
export NOVA_VERSION=${NOVA_VERSION:-1.1}
export OS_REGION_NAME=${OS_REGION_NAME:-RegionOne}
export EC2_URL=${EC2_URL:-http://$SERVICE_HOST:8773/services/Cloud}
export EC2_ACCESS_KEY=${DEMO_ACCESS}
export EC2_SECRET_KEY=${DEMO_SECRET}
export S3_URL=http://$SERVICE_HOST:3333
export EC2_USER_ID=42 # nova does not use user id, but bundling requires it
export EC2_PRIVATE_KEY=${NOVA_KEY_DIR}/pk.pem
export EC2_CERT=${NOVA_KEY_DIR}/cert.pem
export NOVA_CERT=${NOVA_KEY_DIR}/cacert.pem
export EUCALYPTUS_CERT=${NOVA_CERT} # euca-bundle-image seems to require this set</programlisting>
<para>Next, add these credentials to your environment
prior to running any nova client or nova-manage
commands.</para>
<literallayout class="monospaced">cat /root/creds/openrc >> ~/.bashrc
source ~/.bashrc </literallayout>
</section>
<section xml:id="creating-certifications">
<title>Creating Certificates</title>
<para>You can create certificates contained within pem
files using these nova client
commands, ensuring you have set up your environment variables for the nova client:
<screen><prompt>#</prompt> <userinput>nova x509-get-root-cert</userinput>
<prompt>#</prompt> <userinput>nova x509-create-cert </userinput></screen>
</para>
</section>
<section xml:id="enabling-access-to-vms-on-the-compute-node">
<title>Enabling Access to VMs on the Compute Node</title>
<para>One of the most commonly missed configuration areas is not
allowing the proper access to VMs. Use the
<command>nova secgroup-add-rule</command> command to enable access. Below, you
will find the commands to allow <command>ping</command> and
<command>ssh</command> to your VMs:</para>
<note>
<para>These commands need to be run as root only if the credentials
used to interact with <command>nova-api</command> have been put under
<filename>/root/.bashrc</filename>. If the EC2 credentials have been
put into another user's <filename>.bashrc</filename> file, then, it is
necessary to run these commands as the user.</para>
</note>
<screen><prompt>$</prompt> <userinput>nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0</userinput>
<prompt>$</prompt> <userinput>nova secgroup-add-rule default tcp 22 22 0.0.0.0/0</userinput></screen>
<para>Another common issue is not being able to ping or SSH to your instances
after issuing the <command>nova secgroup-add-rule</command> commands. Something
to look at is the number of <command>dnsmasq</command> processes that
are running. If you have a running instance, check that TWO
<command>dnsmasq</command> processes are running. If not, perform the
following:</para>
<screen><prompt>$</prompt> <userinput>sudo killall dnsmasq</userinput>
<prompt>$</prompt> <userinput>sudo service nova-network restart</userinput></screen>
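<para>The dnsmasq check can be scripted as well. A hedged sketch that counts dnsmasq entries in ps-style output; it is demonstrated here on canned sample lines, so on a real node pipe the output of ps -ef through the same filter:</para>

```shell
# Count dnsmasq processes in ps-style output; expect 2 for a running network.
count_dnsmasq() {
    grep -c '[d]nsmasq'    # the [d] trick keeps a live "ps | grep" from matching itself
}
sample='root  1052  dnsmasq --strict-order --bind-interfaces
root  1053  dnsmasq --strict-order --bind-interfaces'
printf '%s\n' "$sample" | count_dnsmasq    # prints 2
```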
<para>If you get the <literal>instance not found</literal> message while
performing the restart, the service was not previously
running. You simply need to start it instead of restarting it:</para>
<screen><prompt>$</prompt> <userinput>sudo service nova-network start</userinput></screen>
</section>
<section xml:id="configuring-multiple-compute-nodes">
<title>Configuring Multiple Compute Nodes</title>
<para>If your goal is to split your VM load across more than one server,
you can connect an additional <command>nova-compute</command> node to a
cloud controller node. This configuration can be reproduced on multiple
compute servers to start building a true multi-node OpenStack Compute
cluster.</para>
<para>To build out and scale the Compute platform, you spread out
services amongst many servers. While there are additional ways to
accomplish the build-out, this section describes adding compute nodes,
and the service we are scaling out is called
<command>nova-compute</command>.</para>
<para>For a multi-node install you only make changes to
<filename>nova.conf</filename> and copy it to additional compute nodes.
Ensure each <filename>nova.conf</filename> file points to the correct IP
addresses for the respective services. Customize the
<filename>nova.conf</filename> example below to match your environment.
The <literal><replaceable>CC_ADDR</replaceable></literal> is the Cloud
Controller IP Address.</para>
<programlisting>--dhcpbridge_flagfile=/etc/nova/nova.conf
--dhcpbridge=/usr/bin/nova-dhcpbridge
--flat_network_bridge=br100
--logdir=/var/log/nova
--state_path=/var/lib/nova
--verbose
--sql_connection=mysql://root:nova@<replaceable>CC_ADDR</replaceable>/nova
--s3_host=<replaceable>CC_ADDR</replaceable>
--rabbit_host=<replaceable>CC_ADDR</replaceable>
--ec2_api=<replaceable>CC_ADDR</replaceable>
--ec2_url=http://<replaceable>CC_ADDR</replaceable>:8773/services/Cloud
--network_manager=nova.network.manager.FlatManager
--fixed_range=<replaceable>network/CIDR</replaceable>
--network_size=<replaceable>number of addresses</replaceable></programlisting>
<para>By default, Nova sets the bridge device based on the setting in
<literal>--flat_network_bridge</literal>. Now you can edit
<filename>/etc/network/interfaces</filename> with the following
template, updated with your IP information.</para>
<programlisting># The loopback network interface
auto lo
iface lo inet loopback
# The primary network interface
auto br100
iface br100 inet static
bridge_ports eth0
bridge_stp off
bridge_maxwait 0
bridge_fd 0
address <replaceable>xxx.xxx.xxx.xxx</replaceable>
netmask <replaceable>xxx.xxx.xxx.xxx</replaceable>
network <replaceable>xxx.xxx.xxx.xxx</replaceable>
broadcast <replaceable>xxx.xxx.xxx.xxx</replaceable>
gateway <replaceable>xxx.xxx.xxx.xxx</replaceable>
# dns-* options are implemented by the resolvconf package, if installed
dns-nameservers <replaceable>xxx.xxx.xxx.xxx</replaceable>
</programlisting>
<para>Restart networking:</para>
<screen><prompt>$</prompt> <userinput>sudo service networking restart</userinput></screen>
<para>With <filename>nova.conf</filename> updated and networking set,
configuration is nearly complete. First, bounce the relevant services to
take the latest updates:</para>
<screen><prompt>$</prompt> <userinput>sudo service libvirtd restart</userinput>
<prompt>$</prompt> <userinput>sudo service nova-compute restart</userinput></screen>
<para>To avoid issues with KVM and permissions with Nova, run the
following commands to ensure your VMs can run
optimally:</para>
<screen><prompt>#</prompt> <userinput>chgrp kvm /dev/kvm</userinput>
<prompt>#</prompt> <userinput>chmod g+rwx /dev/kvm</userinput></screen>
<para>If you want to use the 10.04 Ubuntu Enterprise Cloud images that
are readily available at
<link xlink:href="http://uec-images.ubuntu.com/releases/10.04/release/">http://uec-images.ubuntu.com/releases/10.04/release/</link>, you may run into
delays with booting. Any server that does not have
<command>nova-api</command> running on it needs this iptables entry so
that UEC images can get metadata. On compute nodes, configure
iptables with this next step:</para>
<screen><prompt>#</prompt> <userinput>iptables -t nat -A PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j DNAT --to-destination <replaceable>$NOVA_API_IP</replaceable>:8773</userinput></screen>
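The rule above lives only in the running kernel and is lost at reboot. A sketch of one way to verify it and persist it, assuming the standard iptables-save/iptables-restore utilities (the /etc/iptables.rules path is illustrative):

```shell
# Confirm the metadata DNAT rule is loaded in the nat table (illustrative).
sudo iptables -t nat -S PREROUTING | grep 169.254.169.254

# Save the current rules; reload them at boot, for example with a
# "pre-up iptables-restore < /etc/iptables.rules" line in
# /etc/network/interfaces.
sudo iptables-save | sudo tee /etc/iptables.rules
```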
<para>Lastly, confirm that your compute node is talking to your cloud
controller. From the cloud controller, run this database query:</para>
<screen><prompt>$</prompt> <userinput>mysql -u<replaceable>$MYSQL_USER</replaceable> -p<replaceable>$MYSQL_PASS</replaceable> nova -e 'select * from services;'</userinput></screen>
<para>In return, you should see something similar to this:</para>
<screen><computeroutput>+---------------------+---------------------+------------+---------+----+----------+----------------+-----------+--------------+----------+-------------------+
| created_at | updated_at | deleted_at | deleted | id | host | binary | topic | report_count | disabled | availability_zone |
+---------------------+---------------------+------------+---------+----+----------+----------------+-----------+--------------+----------+-------------------+
| 2011-01-28 22:52:46 | 2011-02-03 06:55:48 | NULL | 0 | 1 | osdemo02 | nova-network | network | 46064 | 0 | nova |
| 2011-01-28 22:52:48 | 2011-02-03 06:55:57 | NULL | 0 | 2 | osdemo02 | nova-compute | compute | 46056 | 0 | nova |
| 2011-01-28 22:52:52 | 2011-02-03 06:55:50 | NULL | 0 | 3 | osdemo02 | nova-scheduler | scheduler | 46065 | 0 | nova |
| 2011-01-29 23:49:29 | 2011-02-03 06:54:26 | NULL | 0 | 4 | osdemo01 | nova-compute | compute | 37050 | 0 | nova |
| 2011-01-30 23:42:24 | 2011-02-03 06:55:44 | NULL | 0 | 9 | osdemo04 | nova-compute | compute | 28484 | 0 | nova |
| 2011-01-30 21:27:28 | 2011-02-03 06:54:23 | NULL | 0 | 8 | osdemo05 | nova-compute | compute | 29284 | 0 | nova |
+---------------------+---------------------+------------+---------+----+----------+----------------+-----------+--------------+----------+-------------------+</computeroutput> </screen>
<para>You can see that <literal>osdemo0{1,2,4,5}</literal> are all
running <command>nova-compute</command>. When you start spinning up
instances, they are scheduled on any node in this list that is
running <command>nova-compute</command>.</para>
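If you prefer not to query MySQL directly, the same service registry can be inspected through nova-manage (assuming it is installed on the cloud controller; the exact output format varies by release):

```shell
# List registered Nova services, their hosts, and their liveness
# as the cloud controller sees them.
nova-manage service list
```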
</section>
<section xml:id="determining-version-of-compute">
<title>Determining the Version of Compute</title>
<para>You can find the version of the installation by using the
<command>nova-manage</command> command:</para>
<screen><prompt>$</prompt> <userinput>nova-manage version list</userinput></screen>
</section>
</section>
<section xml:id="general-compute-configuration-overview">
<title>General Compute Configuration Overview</title>

File diff suppressed because it is too large

@ -1,389 +0,0 @@
<?xml version="1.0"?>
<!-- Converted by db4-upgrade version 1.0 -->
<chapter xmlns="http://docbook.org/ns/docbook" version="5.0-extension RaxBook-1.0" xml:id="quick-guide-to-getting-started-with-keystone"><info><title>Quick Guide to Getting Started with the Identity
Service</title></info>
<para>The OpenStack Identity Service, Keystone, provides services for
authenticating and managing user, account, and role information
for OpenStack clouds running on OpenStack Compute and as an
authorization service for OpenStack Object Storage.</para>
<section xml:id="what-is">
<title>What is this Keystone anyway?</title>
<para>from <link xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="http://mirantis.blogspot.com/2011/09/what-is-this-keystone-anyway.html">Yuriy Taraday</link></para>
<para>The simplest way to authenticate a user is to ask for credentials
(login+password, login+keys, etc.) and check them over some database.
But when it comes to lots of separate services as it is in the
<link xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="http://openstack.org/">OpenStack</link>
world, we have to rethink that. The main problem is an inability to use
one user entity to be authorized everywhere. For example, a user expects
<link xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="http://nova.openstack.org/">Nova</link>
to get one's credentials and create or fetch some images in
<link xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="https://launchpad.net/glance">Glance</link>
or set up networks in
<link xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="http://wiki.openstack.org/Quantum">Quantum</link>.
This cannot be done without a central authentication and authorization system.</para>
<para>So now we have one more OpenStack project -
<link xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="http://wiki.openstack.org/keystone">Keystone</link>.
It is intended to incorporate all common information about users and
their capabilities across other services, along with a list of these
services themselves. We have spent some time explaining to our friends
what, why, and how it is and now we decided to blog about it. What
follows is an explanation of every entity that drives Keystone's life.
Of course, this explanation can become outdated in no time since the
Keystone project is very young and it has developed very fast.</para>
<para>The first basis is the user. Users are users: they represent someone
or something that can gain access through Keystone. Users come with
credentials that can be checked like passwords or API keys.</para>
<para>The second is the tenant. It represents what is called the project in
Nova, meaning something that aggregates a number of resources in each
service. For example, a tenant can have some machines in Nova, a number
of images in Swift/Glance, and a couple of networks in Quantum. Users are
bound to a tenant by assigning them a role on that tenant.</para>
<para>The third and last authorization-related kind of object is the role.
A role represents a number of privileges or rights a user has, or actions
they are allowed to perform. For example, a user who has an 'Admin' role
can take admin actions like viewing all tenants. Users
can be added to any role either globally or in a tenant. In the first
case, the user gains access implied by the role to the resources in all
tenants; in the second case, one's access is limited to resources of the
corresponding tenant. For example, the user can be an operator of all
tenants and an admin of his own playground.</para>
<para>Now let&apos;s talk about service discovery capabilities. With the
first three primitives, any service (Nova, Glance, Swift) can check
whether or not the user has access to resources. But to try to access
some service in the tenant, the user has to know that the service exists
and to find a way to access it. So the basic objects here are services.
They are actually just some distinguished names. The roles described
earlier can be not only general but also bound to a service. For
example, when Swift requires administrator access to create some object,
it should not require the user to have administrator access to Nova too.
To achieve that, we should create two separate Admin roles - one bound to
Swift and another bound to Nova. After that, admin access to Swift can be
given to a user with no impact on Nova, and vice versa.</para>
<para>To access a service, we have to know its endpoint. So there are
endpoint templates in Keystone that provide information about all
existing endpoints of all existing services. One endpoint template
provides a list of URLs to access an instance of service. These URLs are
public, private and admin ones. The public one is intended to be
accessible from the global world (like http://compute.example.com),
the private one can be used to access from a local network (like
http://compute.example.local), and the admin one is used in case admin
access to service is separated from the common access (like it is in
Keystone).</para>
<para>Now we have the global list of services that exist in our farm and
we can bind tenants to them. Every tenant can have its own list of
service instances and this binding entity is named the endpoint, which
&quot;plugs&quot; the tenant to one service instance. It makes it
possible, for example, to have two tenants that share a common image
store but use distinct compute servers.</para>
<para>This is a long list of entities that are involved in the process but
how does it actually work?</para>
<orderedlist>
<listitem>
<para>To access some service, users provide their credentials to
Keystone and receive a token. The token is just a string that is
connected to the user and tenant internally by Keystone. This token
travels between services with every user request or requests
generated by a service to another service to process the
user&apos;s request.</para>
</listitem>
<listitem>
<para>The users find a URL of a service that they need. If the user,
for example, wants to spawn a new VM instance in Nova, one can find
a URL to Nova in the list of endpoints provided by Keystone and
send an appropriate request.</para>
</listitem>
<listitem>
<para>After that, Nova verifies the validity of the token in Keystone
and should create an instance from some image by the provided image
ID and plug it into some network.</para>
<itemizedlist>
<listitem><para>At first Nova passes this token to Glance to get the
image stored somewhere in there.</para></listitem>
<listitem><para>After that, it asks Quantum to plug this new instance
into a network; Quantum verifies whether the user has access to
the network in its own database and to the interface of VM by
requesting info in Nova.</para></listitem>
</itemizedlist>
</listitem>
<listitem><para>All along the way, this token travels between services so that
they can ask Keystone or each other for additional information or for
some actions.</para></listitem></orderedlist>
<para>Here is a rough diagram of this process:</para>
<figure><title>Keystone flowchart</title><mediaobject><imageobject>
<imagedata fileref="figures/keystone-flowchart.png" />
</imageobject></mediaobject></figure>
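Step 1 of the process above can be sketched with curl against the Identity Service's v2.0 API. The host, port, and credentials below are illustrative, and some builds expect tenantId rather than tenantName in the request body:

```shell
# Exchange a username and password for a token; the JSON response
# contains the token ID plus the service catalog of endpoints.
curl -s -H "Content-Type: application/json" \
     -d '{"auth": {"passwordCredentials":
           {"username": "demo", "password": "p455w0rd"},
           "tenantName": "demo"}}' \
     http://localhost:5000/v2.0/tokens
```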
</section>
<section xml:id="Identity-Service-Concepts-e1362">
<title>Identity Service Concepts</title>
<para> The OpenStack Identity Service, Keystone, has several key concepts
which are important to understand: </para>
<variablelist>
<varlistentry>
<term>User</term>
<listitem><para>A digital representation of a person, system, or service who
uses OpenStack cloud services. The authentication services
will validate that incoming requests are being made by the
user who claims to be making the call. Users have a login
and may be assigned tokens to access resources. Users may
be directly assigned to a particular tenant and behave as
if they are contained in that tenant.</para></listitem>
</varlistentry>
<varlistentry>
<term>Credentials</term>
<listitem><para>
Data that belongs to, is owned by, and generally only known by a user that the user can present
to prove they are who they are (since nobody else should know that data).
</para><para>Examples are:
<itemizedlist>
<listitem><para>a matching username and password</para></listitem>
<listitem><para>a matching username and API key</para></listitem>
<listitem><para>a token that was issued to you that nobody else knows of</para></listitem>
<listitem><para>A real life example, for illustration only, would be you showing up
and presenting a driver's license with a picture of you. The person
behind the desk can then 'authenticate' you and verify you are who you
say you are. Keystone performs effectively the same operation.</para></listitem>
</itemizedlist>
</para></listitem>
</varlistentry>
<varlistentry>
<term>Authentication</term>
<listitem><para> In the context of the Identity Service (Keystone),
authentication is the act of confirming the identity of a
user or the truth of a claim. The Identity Service will
confirm that incoming requests are being made by the user
who claims to be making the call by validating a set of
claims that the user is making. These claims are initially
in the form of a set of credentials (username &amp;
password, or username and API key). After initial
confirmation, the Identity Service will issue the user a
token which the user can then provide to demonstrate that
their identity has been authenticated when making
subsequent requests. </para></listitem>
</varlistentry>
<varlistentry>
<term>Token</term>
<listitem><para>
A token is an arbitrary bit of text that is used to access
resources. Each token has a scope which describes which
resources are accessible with it. A token may be
revoked at any time and is valid for a finite duration.
</para>
<para> While the Identity Service supports token-based
authentication in this release, the intention is for it to
support additional protocols in the future. The intent is
for it to be an integration service foremost, and not to
aspire to be a full-fledged identity store and management
solution. </para></listitem>
</varlistentry>
<varlistentry>
<term>Tenant</term>
<listitem><para> A container used to group or isolate resources and/or identity objects. Depending on the
service operator, a tenant may map to a customer, account, organization, or project. For
Compute, a tenant is a project. For Object Storage, a tenant is an account.</para></listitem>
</varlistentry>
<varlistentry>
<term>Service</term>
<listitem><para>
An OpenStack service, such as Compute (Nova), Object Storage (Swift), or Image Service (Glance). A service provides
one or more endpoints through which users can access resources and perform
(presumably useful) operations.
</para></listitem>
</varlistentry>
<varlistentry>
<term>Endpoint</term>
<listitem> <para>
A network-accessible address, usually described by a URL, where a service may be accessed. If using an extension for templates, you can create an endpoint template, which represents the templates of all the consumable services that are available across the regions.
</para></listitem>
</varlistentry>
<varlistentry>
<term>Role</term>
<listitem><para> A personality that a user assumes when performing a specific set of operations.
A role includes a set of rights and privileges. A user assuming that role inherits
those rights and privileges.
</para><para> In the Identity Service, a token that is issued to a user
includes the list of roles that user can assume. Services
that are being called by that user determine how they
interpret the set of roles a user has and which operations
or resources each role grants access to. </para></listitem>
</varlistentry>
</variablelist>
<figure xml:id="KeystoneIdentityFigure">
<title>The Keystone Identity Manager flow</title>
<mediaobject>
<imageobject role="fo">
<imagedata scale="60" fileref="figures/SCH_5002_V00_NUAC-Keystone.png"/>
</imageobject>
</mediaobject>
</figure>
</section>
<section xml:id="installing-openstack-identity-service"><title>Installing the OpenStack Identity Service</title>
<para>You can install the Identity service from packages or from source. Refer to http://keystone.openstack.org for more information.</para>
</section>
<section xml:id="configuring-the-identity-service"><title>Configuring the Identity Service</title>
<para>Here are the steps to get started with authentication using Keystone, the project name for
the OpenStack Identity Service. </para>
<para>Typically a project that uses the OpenStack Identity Service
has settings in a configuration file:</para>
<para>
<itemizedlist>
<listitem>
<para>In Compute, the settings are in
etc/nova/api-paste.ini, but the Identity Service also
provides an example file in
keystone/examples/paste/nova-api-paste.ini. Restart the
nova-api service for these settings to take
effect.</para>
</listitem>
<listitem>
<para>In Image Service, the settings are in glance-api.conf and glance-registry.conf
configuration files in the examples/paste directory. Restart the glance-api service and
also ensure your environment contains OS_AUTH credentials which you can set up with tools/nova_to_os_env.sh provided by the Glance project.</para>
</listitem>
<listitem>
<para>In Object Storage, the settings are held in /etc/swift/proxy-server.conf in a
[filter:keystone] section. Use <code>swift-init main start</code> to restart Object
Storage with the new configuration. Here's an example
/etc/swift/proxy-server.conf:</para>
<literallayout class="monospaced">
[DEFAULT]
bind_port = 8888
user = &lt;user&gt;
[pipeline:main]
pipeline = catch_errors cache keystone proxy-server
[app:proxy-server]
use = egg:swift#proxy
account_autocreate = true
[filter:keystone]
use = egg:keystone#swiftauth
keystone_admin_token = 999888777666
keystone_url = http://localhost:35357/v2.0
[filter:cache]
use = egg:swift#memcache
set log_name = cache
[filter:catch_errors]
use = egg:swift#catch_errors
</literallayout>
</listitem>
</itemizedlist>
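For reference, the Compute-side settings mentioned above typically add an auth_token filter to etc/nova/api-paste.ini. The fragment below is a sketch modeled on the example file shipped in keystone/examples/paste/nova-api-paste.ini; the hosts, ports, and admin token are illustrative and must match your own deployment:

```ini
[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
service_host = 127.0.0.1
service_port = 5000
auth_host = 127.0.0.1
auth_port = 35357
auth_protocol = http
admin_token = 999888777666
```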
</para></section>
<section xml:id="starting-identity-service"><title>Starting the Identity Service</title>
<para>By default, configuration parameters (such as the IP and port binding
for each service) are parsed from etc/keystone.conf, so ensure it is up-to-date
prior to starting the service.</para>
<para>To start up the Identity Service (Keystone), enter the
following:</para>
<literallayout class="monospaced">$ cd ~/keystone/bin &amp;&amp; ./keystone </literallayout>
<para>In return you should see something like this:</para>
<literallayout class="monospaced">Starting the Legacy Authentication component
Service API listening on 0.0.0.0:5000
Admin API listening on 0.0.0.0:35357</literallayout>
<para>Use this command for starting the auth server only which exposes the Service API:</para>
<literallayout class="monospaced">$ ./bin/keystone-auth</literallayout>
<para>Use this command for starting the admin server only which exposes the Admin API:</para>
<literallayout class="monospaced">$ ./bin/keystone-admin</literallayout>
<para>After starting the Identity Service or running
keystone-manage, a keystone.db SQLite database is created
in the keystone folder.</para>
</section>
<section xml:id="dependencies"><info><title>Dependencies</title></info>
<para>Once the Identity Service is installed you need to
initialize the database. You can do so with the keystone-manage
command line utility. The keystone-manage utility helps with
managing and configuring an Identity Service installation. You
configure the keystone-manage utility itself with a SQLAlchemy
connection configuration via a parameter passed to the
utility:</para>
<para>--sql_connection=CONN_STRING</para>
<para>Where the CONN_STRING is a proper SQLAlchemy connection string as described in
http://www.sqlalchemy.org/docs/05/reference/sqlalchemy/connections.html?highlight=engine#sqlalchemy.create_engine.</para>
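For example, to run keystone-manage against a MySQL database rather than the default SQLite file (the user, password, and database name below are placeholders):

```shell
# Any SQLAlchemy connection URL works here; this one assumes a local
# MySQL server with a "keystone" database and user already created.
bin/keystone-manage --sql_connection=mysql://keystone:secret@localhost/keystone \
    endpointTemplates list
```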
</section>
<section xml:id="creating-tenants-users-roles-tokens-and-endpoints"><title>Creating Tenants, Users, Roles, Tokens and Endpoints</title>
<para>Sample data entries are available in keystone/bin/sampledata.sh. The following are just
examples for a walk-through.</para>
<note><para>
Some reserved roles are defined (and can be modified) through the keystone.conf in the /etc folder.</para>
</note>
<para>Add two tenants: an administrative tenant and a tenant named demo. Tenants are equivalent to projects in the previous auth system in Compute. In Object Storage, tenants are similar to accounts in the swauth system.</para>
<literallayout class="monospaced"> bin/keystone-manage tenant add admin
bin/keystone-manage tenant add demo</literallayout>
<para>Next add two users to the Identity Service and assign their passwords. The last value in the
list is the tenant name.</para>
<literallayout class="monospaced"> bin/keystone-manage user add admin p4ssw0rd admin
bin/keystone-manage user add demo p455w0rd demo</literallayout>
<para>Now you can assign roles, which includes a set of rights and privileges that are double-checked with the token that the user is issued.</para>
<literallayout class="monospaced"> bin/keystone-manage role add Admin
bin/keystone-manage role add Member
bin/keystone-manage role grant Admin admin</literallayout>
<para>Now define the endpointTemplates, which are URLs plus port values that indicate where a
service may be accessed. This example shows many services available to Compute including the
Image Service, the Object Storage service, as well as Identity itself. Since there is just one
zone in this example, it represents all the services across the single region (but could also
represent all the regions). The last two values are flags which indicate the template is
enabled and global. If an endpoint template is global, all tenants automatically have access
to the endpoint. Note that the URLs contain a %tenant_id% string which Keystone populates
at runtime. </para>
<literallayout class="monospaced"> HOST_IP=127.0.0.1
bin/keystone-manage endpointTemplates add RegionOne swift http://$HOST_IP:8080/v1/AUTH_%tenant_id% http://$HOST_IP:8080/ http://$HOST_IP:8080/v1/AUTH_%tenant_id% 1 1
bin/keystone-manage endpointTemplates add RegionOne nova_compat http://$HOST_IP:8774/v1.0/ http://$HOST_IP:8774/v1.0 http://$HOST_IP:8774/v1.0 1 1
bin/keystone-manage endpointTemplates add RegionOne nova http://$HOST_IP:8774/v1.1/%tenant_id% http://$HOST_IP:8774/v1.1/%tenant_id% http://$HOST_IP:8774/v1.1/%tenant_id% 1 1
bin/keystone-manage endpointTemplates add RegionOne glance http://$HOST_IP:9292/v1.1/%tenant_id% http://$HOST_IP:9292/v1.1/%tenant_id% http://$HOST_IP:9292/v1.1/%tenant_id% 1 1
bin/keystone-manage endpointTemplates add RegionOne identity http://$HOST_IP:5000/v2.0 http://$HOST_IP:35357/v2.0 http://$HOST_IP:5000/v2.0 1 1</literallayout>
<para>Now add a default token that the admin user receives when requesting a token.</para>
<literallayout class="monospaced">bin/keystone-manage token add 999888777666 admin admin 2015-02-05T00:00</literallayout>
<para> If an endpoint template is not global, endpoints must be manually added using the
tenant name and endpoint template ID. You can retrieve the endpoint template
id by doing:</para>
<literallayout class="monospaced"> bin/keystone-manage endpointTemplates list</literallayout>
<para>You can then add endpoints manually by doing:</para>
<literallayout class="monospaced"> bin/keystone-manage endpoint add $TENANT $ENDPOINT_TEMPLATE_ID</literallayout>
<para>For example (assuming the new endpoint template has an ID of 6):</para>
<literallayout class="monospaced"> bin/keystone-manage endpointTemplates add RegionTwo nova http://$HOST_IP:8774/v1.1/%tenant_id% http://$HOST_IP:8774/v1.1/%tenant_id% http://$HOST_IP:8774/v1.1/%tenant_id% 1 0
bin/keystone-manage endpoint add admin 6
bin/keystone-manage endpoint add demo 6
</literallayout>
<para>You can configure Identity and Compute with a single region or multiple regions using
zones. You need to add a label for the endpoint for each region. Having a single region
doesn't require any work other than adding a label.</para>
<para>
<literallayout class="monospaced">keystone-manage endpointTemplates add SWRegion identity http://%HOST_IP%:5000/v2.0 http://%HOST_IP%:35357/v2.0 http://%HOST_IP%:5000/v2.0 1 1</literallayout>
</para> </section>
</chapter>

@ -1,122 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
<?dbhtml filename="ch_installing-openstack-imaging-service.html" ?>
<title>Installing and Configuring OpenStack Image Service</title>
<para>The OpenStack system has several key projects that are separate installations but can work
together depending on your cloud needs: OpenStack Compute, OpenStack Object Storage, and an
OpenStack Image Service with a project name of Glance. You can install any of these
projects separately and then configure them either as standalone or connected
entities.</para>
<section>
<?dbhtml filename="glance-system-requirements.html" ?>
<title>System Requirements for OpenStack Image Service (Glance)</title>
<para><emphasis role="bold">Hardware</emphasis>: OpenStack components are intended to run on
standard hardware.</para>
<para><emphasis role="bold">Operating System</emphasis>: The OpenStack Image Service
itself currently runs on Ubuntu but the images it stores may contain different operating
systems.</para>
<para><emphasis role="bold">Networking</emphasis>: 1000 Mbps is suggested. </para>
<para><emphasis role="bold">Database</emphasis>: Any SQLAlchemy-compatible database, such as
MySQL, Oracle, PostgreSQL, or SQLite. The reference registry server implementation that
ships with OpenStack Image Service uses a SQL database to store information about an
image, and publishes this information via an HTTP/REST-like interface.</para>
<para><emphasis role="bold">Permissions</emphasis>: You can install the OpenStack Image
Service either as root or as a user with sudo permissions if you configure the sudoers
file to enable all the permissions. </para>
</section>
<section>
<?dbhtml filename="installing-openstack-imaging-service-on-ubuntu.html" ?>
<title>Installing OpenStack Image Service on Ubuntu </title><para>The installation of the Image Service itself is separate from the storage of the virtual images to be retrieved. </para>
<section>
<title>Example Installation Architecture</title>
<para>These installation instructions have you set up the services on a single node, so the API server and registry services are on the same server. The images themselves can be stored either in OpenStack Object Storage, Amazon's S3 infrastructure, in a filesystem, or if you want read-only access, on a web server to be served via HTTP.</para></section>
<section>
<?dbhtml filename="installing-glance.html" ?>
<title>Installing OpenStack Image Service (Glance) </title>
<para>First, add the Glance PPA to your sources.list. </para>
<para>
<literallayout class="monospaced">sudo add-apt-repository ppa:glance-core/trunk </literallayout></para>
<para>Run update. </para>
<para><literallayout class="monospaced">sudo apt-get update</literallayout></para>
<para>Now, install the Glance server. </para>
<para>
<literallayout class="monospaced">sudo apt-get install glance </literallayout></para>
<para>All dependencies should be automatically installed.</para>
<para>Refer to the <link xlink:href="http://glance.openstack.org/installing.html">Glance
developer documentation site to install from a Bazaar branch</link>. </para>
</section>
</section><section>
<?dbhtml filename="configuring-and-controlling-openstack-imaging-servers.html" ?>
<title>Configuring and Controlling Glance Servers</title>
<para>You start Glance either by calling the server program, glance-api, or using the server daemon wrapper program named glance-control.</para> <para>Glance ships with an etc/ directory that contains sample paste.deploy configuration files that you can copy to a standard configuration directory and adapt for your own uses.</para>
<para>If you do not specify a configuration file on the command line when starting the glance-api server, Glance attempts to locate a glance.conf configuration file in one of the following directories, and uses the first config file it finds in this order:</para>
<orderedlist>
<listitem><para>.</para></listitem>
<listitem><para>~/.glance</para></listitem>
<listitem><para>~/</para></listitem>
<listitem><para>/etc/glance/</para></listitem>
<listitem><para>/etc</para></listitem></orderedlist>
<para>If Glance doesn't find a configuration file in one of these locations, you see an error: <code>ERROR: Unable to locate any configuration file. Cannot load application glance-api</code>.</para>
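The lookup order above can be illustrated with a small shell function. This is a sketch for illustration only; Glance's actual lookup is implemented in Python inside the server:

```shell
# Walk the directories in the same order glance-api does and report the
# first glance.conf found, or fail with the error message quoted above.
find_glance_conf() {
  for dir in . "$HOME/.glance" "$HOME" /etc/glance /etc; do
    if [ -f "$dir/glance.conf" ]; then
      printf '%s\n' "$dir/glance.conf"
      return 0
    fi
  done
  echo "ERROR: Unable to locate any configuration file. Cannot load application glance-api" >&2
  return 1
}
```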
<simplesect><title>Manually starting the server</title>
<para>To manually start the glance-api server, use a command like the following: </para>
<literallayout class="monospaced">sudo glance-api etc/glance.conf.sample --debug</literallayout>
<para>Supply the configuration file as the first argument (etc/glance.conf.sample in the above example) and then any common options you want to use. In the above example, the --debug option shows some of the debugging output that the server shows when starting up. Call the server program with --help to see all available options you can specify on the command line.</para>
<para>Note that the server does not daemonize itself when run manually from the terminal. You can force the server to daemonize using the standard shell backgrounding indicator (&amp;). However, for most use cases, we recommend using the glance-control server daemon wrapper for daemonizing. See below for more details on daemonization with glance-control.</para></simplesect>
<simplesect><title>Starting the server with the glance-control wrapper script</title>
<para>The second way to start up a Glance server is to use the glance-control program. glance-control is a wrapper script that allows the user to start, stop, restart, and reload the other Glance server programs in a fashion that is more conducive to automation and scripting.</para>
<para>Servers started via the glance-control program are always daemonized, meaning that the server program process runs in the background.</para>
<para>To start a Glance server with glance-control, simply call glance-control with a server and the word “start”, followed by any command-line options you wish to provide. Start the server with glance-control in the following way:</para>
<literallayout class="monospaced"> sudo glance-control {SERVER} start [CONFPATH]</literallayout>
<para> Here is an example that shows how to start the glance-registry server with the glance-control wrapper script.</para>
<literallayout class="monospaced">sudo glance-control registry start etc/glance.conf.sample
Starting glance-registry with /home/jpipes/repos/glance/trunk/etc/glance.conf.sample</literallayout>
<para>To start all the Glance servers (currently the glance-api and glance-registry programs) at once, you can specify “all” for the {SERVER}.</para>
</simplesect>
<simplesect><title>Stopping a Glance server</title><para>You can use Ctrl-C to stop a Glance server if it was started manually. </para>
<para>If you started the Glance server using the glance-control program, you can use the glance-control program to stop it. Simply do the following:</para>
<literallayout class="monospaced">sudo glance-control {SERVER} stop</literallayout>
<para> as this example shows:
</para>
<literallayout class="monospaced">sudo glance-control registry stop
Stopping glance-registry pid: 17602 signal: 15
</literallayout>
</simplesect>
<simplesect><title>Restarting a Glance server</title>
<para>
You can restart a server with the glance-control program, as demonstrated here:
</para>
<literallayout class ="monospaced">
sudo glance-control registry restart etc/glance.conf.sample
Stopping glance-registry pid: 17611 signal: 15
Starting glance-registry with /home/jpipes/repos/glance/trunk/etc/glance.conf.sample</literallayout>
</simplesect>
</section>
</chapter>

@ -1,805 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
<?dbhtml filename="ch_introduction-to-openstack-imaging-service.html" ?>
<title>OpenStack Image Service</title>
<para>You can use OpenStack Image Service for discovering, registering, and retrieving virtual machine images. The service includes a RESTful API that allows users to query VM image metadata and retrieve the actual image with HTTP requests, or you can use a client class in your Python code to accomplish the same tasks.
</para><para>
VM images made available through OpenStack Image Service can be stored in a variety of locations from simple file systems to object-storage systems like the OpenStack Object Storage project, or even use S3 storage either on its own or through an OpenStack Object Storage S3 interface.</para>
<section>
<?dbhtml filename="overview-of-architecture.html" ?>
<title>Overview of Architecture</title>
<para>There are two main parts to the Image Service's architecture:</para>
<itemizedlist><listitem><para>API server</para></listitem>
<listitem><para>Registry server(s)</para>
</listitem>
</itemizedlist>
<para>OpenStack Image Service is designed to be as adaptable as possible for various back-end storage and registry database solutions. There is a main API server (the <code>glance-api</code> program) that serves as the communications hub between various client programs, the registry of image metadata, and the storage systems that actually contain the virtual machine image data.</para>
</section>
<section>
<?dbhtml filename="openstack-imaging-service-api-server.html" ?>
<title>OpenStack Image Service API Server</title>
<para>The API server is the main interface for OpenStack Image Service. It routes requests from clients to registries of image metadata and to its backend stores, which are the mechanisms by which OpenStack Image Service actually saves incoming virtual machine images.</para>
<para>The backend stores that OpenStack Image Service can work with are as follows:</para>
<itemizedlist><listitem><para>OpenStack Object Storage - OpenStack Object Storage is the highly-available object storage project in OpenStack.</para></listitem>
<listitem><para>Filesystem - The default backend that OpenStack Image Service uses to store virtual machine images is the filesystem backend. This simple backend writes image files to the local filesystem.</para></listitem>
<listitem><para>S3 - This backend allows OpenStack Image Service to store virtual machine images in Amazon's S3 service.</para></listitem>
<listitem><para>HTTP - OpenStack Image Service can read virtual machine images that are available via HTTP somewhere on the Internet. This store is read-only.</para></listitem></itemizedlist>
</section>
<section>
<?dbhtml filename="openstack-imaging-service-registry-servers.html" ?>
<title>OpenStack Image Service Registry Servers</title>
<para>OpenStack Image Service registry servers are servers that conform to the OpenStack Image Service Registry API. OpenStack Image Service ships with a reference implementation of a registry server that complies with this API (<literal>bin/glance-registry</literal>).</para></section>
<section>
<?dbhtml filename="installing-openstack-imaging-service.html" ?>
<title>Installing and Configuring OpenStack Image Service</title>
<para>The OpenStack system has several key projects that are separate installations but can work
together depending on your cloud needs: OpenStack Compute, OpenStack Object Storage, and an
OpenStack Image Service with a project name of Glance. You can install any of these
projects separately and then configure them either as standalone or connected
entities.</para>
<section>
<?dbhtml filename="glance-system-requirements.html" ?>
<title>System Requirements for OpenStack Image Service (Glance)</title>
<para><emphasis role="bold">Hardware</emphasis>: OpenStack components are intended to run on
standard hardware.</para>
<para><emphasis role="bold">Operating System</emphasis>: The OpenStack Image Service
itself currently runs on Ubuntu but the images it stores may contain different operating
systems.</para>
<para><emphasis role="bold">Networking</emphasis>: A 1000 Mbps network connection is suggested. </para>
<para><emphasis role="bold">Database</emphasis>: Any SQLAlchemy-compatible database, such as
MySQL, Oracle, PostgreSQL, or SQLite. The reference registry server implementation that
ships with OpenStack Image Service uses a SQL database to store information about an
image, and publishes this information via an HTTP/REST-like interface.</para>
<para><emphasis role="bold">Permissions</emphasis>: You can install OpenStack Image Service
                either as root or as a user with sudo permissions, provided the sudoers file
                is configured to grant the required privileges. </para>
</section>
<section>
<?dbhtml filename="installing-openstack-imaging-service-on-ubuntu.html" ?>
<title>Installing OpenStack Image Service on Ubuntu </title><para>The installation of the Image Service itself is separate from the storage of the virtual machine images to be retrieved. </para>
<section><?dbhtml filename="example-installation-architecture-glance.html" ?>
<title>Example Installation Architecture</title>
<para>These installation instructions have you set up the services on a single node, so the API server and registry services are on the same server. The images themselves can be stored either in OpenStack Object Storage, Amazon's S3 infrastructure, in a filesystem, or if you want read-only access, on a web server to be served via HTTP.</para></section>
<section>
<?dbhtml filename="installing-glance.html" ?>
<title>Installing OpenStack Image Service (Glance) </title>
<para>First, add the Glance PPA to your sources.list. </para>
<para>
<literallayout class="monospaced">sudo add-apt-repository ppa:glance-core/trunk </literallayout></para>
<para>Run update. </para>
<para><literallayout class="monospaced">sudo apt-get update</literallayout></para>
<para>Now, install the Glance server. </para>
<para>
<literallayout class="monospaced">sudo apt-get install glance </literallayout></para>
<para>All dependencies should be automatically installed.</para>
<para>Refer to the <link xlink:href="http://glance.openstack.org/installing.html">Glance
developer documentation site to install from a Bazaar branch</link>. </para>
</section>
</section><section>
<?dbhtml filename="configuring-and-controlling-openstack-imaging-servers.html" ?>
<title>Configuring and Controlling Glance Servers</title>
<para>You start Glance either by calling the server program, glance-api, or using the server daemon wrapper program named glance-control.</para> <para>Glance ships with an etc/ directory that contains sample paste.deploy configuration files that you can copy to a standard configuration directory and adapt for your own uses.</para>
<para>If you do not specify a configuration file on the command line when starting the glance-api server, Glance attempts to locate a glance.conf configuration file in one of the following directories, and uses the first config file it finds in this order:</para>
<orderedlist>
<listitem><para>. (the current working directory)</para></listitem>
<listitem><para>~/.glance</para></listitem>
<listitem><para>~/</para></listitem>
<listitem><para>/etc/glance/</para></listitem>
<listitem><para>/etc</para></listitem></orderedlist>
<para>If Glance doesn't find a configuration file in one of these locations, you see an error: <code>ERROR: Unable to locate any configuration file. Cannot load application glance-api</code>.</para>
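<para>The lookup order above can be sketched as a small helper. This is an illustrative reconstruction, not Glance's actual code; the function name <literal>find_config_file</literal> and the injectable <literal>exists</literal> predicate are assumptions made for the example.</para>

```python
import os

# Candidate locations for glance.conf, in the order described above:
# current directory, ~/.glance, ~/, /etc/glance/, /etc.
CONFIG_DIRS = [".", "~/.glance", "~", "/etc/glance", "/etc"]

def find_config_file(filename="glance.conf", dirs=CONFIG_DIRS,
                     exists=os.path.exists):
    """Return the first config path that exists, or None if nothing matches."""
    for d in dirs:
        path = os.path.join(os.path.expanduser(d), filename)
        if exists(path):
            return path
    # Caller would report: "Unable to locate any configuration file."
    return None
```

<para>The <literal>exists</literal> parameter is injected only so the lookup can be exercised without touching the real filesystem.</para>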
<simplesect><title>Manually starting the server</title>
<para>To manually start the glance-api server, use a command like the following: </para>
<literallayout class="monospaced">sudo glance-api etc/glance.conf.sample --debug</literallayout>
<para>Supply the configuration file as the first argument (etc/glance.conf.sample in the above example) and then any common options you want to use. In the above example, the --debug option shows some of the debugging output that the server shows when starting up. Call the server program with --help to see all available options you can specify on the command line.</para>
<para>Note that the server does not daemonize itself when run manually from the terminal. You can force the server to daemonize using the standard shell backgrounding indicator (<literal>&amp;</literal>). However, for most use cases, we recommend using the glance-control server daemon wrapper for daemonizing. See below for more details on daemonization with glance-control.</para></simplesect>
<simplesect><title>Starting the server with the glance-control wrapper script</title>
<para>The second way to start up a Glance server is to use the glance-control program. glance-control is a wrapper script that allows the user to start, stop, restart, and reload the other Glance server programs in a fashion that is more conducive to automation and scripting.</para>
<para>Servers started via the glance-control program are always daemonized, meaning that the server program process runs in the background.</para>
<para>To start a Glance server with glance-control, simply call glance-control with a server and the word “start”, followed by any command-line options you wish to provide. Start the server with glance-control in the following way:</para>
<literallayout class="monospaced"> sudo glance-control {SERVER} start [CONFPATH]</literallayout>
<para> Here is an example that shows how to start the glance-registry server with the glance-control wrapper script.</para>
<literallayout class="monospaced">sudo glance-control registry start etc/glance.conf.sample
Starting glance-registry with /home/jpipes/repos/glance/trunk/etc/glance.conf.sample</literallayout>
<para>To start all the Glance servers (currently the glance-api and glance-registry programs) at once, you can specify “all” for the {SERVER}.</para>
</simplesect>
<simplesect><title>Stopping a Glance server</title><para>You can use Ctrl-C to stop a Glance server if it was started manually. </para>
<para>If you started the Glance server using the glance-control program, you can use the glance-control program to stop it. Simply do the following:</para>
<literallayout class="monospaced">sudo glance-control {SERVER} stop</literallayout>
<para> as this example shows:
</para>
<literallayout class="monospaced">sudo glance-control registry stop
Stopping glance-registry pid: 17602 signal: 15
</literallayout>
</simplesect>
<simplesect><title>Restarting a Glance server</title>
<para>
You can restart a server with the glance-control program, as demonstrated here:
</para>
<literallayout class="monospaced">
sudo glance-control registry restart etc/glance.conf.sample
Stopping glance-registry pid: 17611 signal: 15
Starting glance-registry with /home/jpipes/repos/glance/trunk/etc/glance.conf.sample</literallayout>
</simplesect>
</section>
<section><?dbhtml filename="configuring-compute-to-use-glance.html" ?><title>Configuring Compute to use Glance</title>
<para>Once Glance is installed and the server is running, you should edit your nova.conf file to add or edit the following flags:</para>
<literallayout class="monospaced">
--glance_api_servers=GLANCE_SERVER_IP
--image_service=nova.image.glance.GlanceImageService</literallayout>
<para>Where the GLANCE_SERVER_IP is the IP address of the server running the glance-api service.</para></section>
</section>
<section><?dbhtml filename="configuring-logging-for-glance.html" ?><title>Configuring Logging for Glance</title>
<para>There are a number of configuration options in Glance that control how Glance servers log messages. The configuration options are specified in the glance.conf configuration file.</para>
<table rules="all">
<caption>Description of glance.conf flags for Glance logging</caption>
<thead>
<tr>
<td>Flag</td>
<td>Default</td>
<td>Description</td>
</tr>
</thead>
<tbody>
<tr>
<td>--log-config=PATH </td>
<td>default: none</td>
<td>Path name to a configuration file to use for configuring logging: Specified on the command line only. </td>
</tr>
<tr>
<td>--log-format </td>
<td>default: %(asctime)s %(levelname)8s [%(name)s] %(message)s</td>
<td>Format of log records: Because of a bug in the PasteDeploy package, this
option is only available on the command line. See the <link
xlink:href="http://docs.python.org/library/logging.html">Python logging
module documentation</link> for more information about the options in
the format string.</td>
</tr>
<tr>
<td>--log_file </td>
<td>default: none</td>
<td>Path name: The filepath of the file to use for logging messages from Glance's servers. Without this setting, the default is to output messages to stdout, so if you are running Glance servers in a daemon mode (using glance-control) you should make sure that the log_file option is set appropriately.</td>
</tr>
<tr>
<td>--log_dir </td>
<td>default: none</td>
<td>Path name: The filepath of the directory to use for log files. If not specified (the default) the log_file is used as an absolute filepath.</td>
</tr>
<tr>
<td>--log_date_format </td>
<td>default: %Y-%m-%d %H:%M:%S</td>
<td>Python logging module formats: The format string for timestamps in the log
output. See the <link
xlink:href="http://docs.python.org/library/logging.html">Python logging
module documentation</link> for more information on setting this format
string.</td>
</tr>
</tbody>
</table>
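<para>The <literal>--log-format</literal> and <literal>--log_date_format</literal> defaults in the table are standard Python logging format strings. The sketch below formats one record with those defaults so you can see the resulting layout; the record contents (logger name, message) are made up for illustration.</para>

```python
import logging

# Defaults from the table above.
LOG_FORMAT = "%(asctime)s %(levelname)8s [%(name)s] %(message)s"
DATE_FORMAT = "%Y-%m-%d %H:%M:%S"

formatter = logging.Formatter(fmt=LOG_FORMAT, datefmt=DATE_FORMAT)
record = logging.LogRecord(
    name="glance-api", level=logging.INFO, pathname="", lineno=0,
    msg="Starting server", args=(), exc_info=None)
line = formatter.format(record)
print(line)  # e.g. "2012-03-20 12:27:54     INFO [glance-api] Starting server"
```

<para>Passing a different <literal>datefmt</literal> here is exactly what setting <literal>--log_date_format</literal> does for the Glance servers.</para>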
</section>
<section><?dbhtml filename="openstack-imaging-service-glance-rest-api.html" ?>
<info><title>The Glance REST API</title></info>
<para>
Glance has a RESTful API that exposes both metadata about registered
virtual machine images and the image data itself.
</para>
<para>
A host that runs the <literal>bin/glance-api</literal> service is
said to be a <emphasis>Glance API Server</emphasis>.
</para>
<para>
Assume there is a Glance API server running at the URL
<literal>http://glance.example.com</literal>.
</para>
<para>
Let's walk through how a user might request information from this
server.
</para>
<section xml:id="requesting-a-list-of-public-vm-images"><?dbhtml filename="requesting-vm-list.html" ?><info><title>Requesting a List of Public VM Images</title></info>
<para>
We want to see a list of available virtual machine images that the
Glance server knows about.
</para>
<para>
We issue a <literal>GET</literal> request to
<literal>http://glance.example.com/images/</literal> to retrieve
this list of available <emphasis>public</emphasis> images. The
data is returned as a JSON-encoded mapping in the following
format:
</para>
<screen>
{'images': [
{'uri': 'http://glance.example.com/images/1',
'name': 'Ubuntu 10.04 Plain',
'disk_format': 'vhd',
'container_format': 'ovf',
'size': '5368709120'}
...]}
</screen>
      <note><para>
        All images returned from the above <literal>GET</literal> request are <emphasis>public</emphasis> images.
      </para></note>
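<para>A client can consume this listing with any JSON library. The snippet below parses a response shaped like the example above; the payload is hard-coded here for illustration rather than fetched from a live server.</para>

```python
import json

# A response body in the documented shape (valid JSON uses double quotes).
body = '''
{"images": [
    {"uri": "http://glance.example.com/images/1",
     "name": "Ubuntu 10.04 Plain",
     "disk_format": "vhd",
     "container_format": "ovf",
     "size": "5368709120"}
]}
'''

images = json.loads(body)["images"]
for image in images:
    print(image["name"], image["uri"])
```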
</section>
<section xml:id="requesting-detailed-metadata-on-public-vm-images"><?dbhtml filename="requesting-vm-metadata.html" ?><info><title>Requesting Detailed Metadata on Public VM Images</title></info>
<para>
We want to see more detailed information on available virtual
machine images that the Glance server knows about.
</para>
<para>
We issue a <literal>GET</literal> request to
<literal>http://glance.example.com/images/detail</literal> to
retrieve this list of available <emphasis>public</emphasis>
images. The data is returned as a JSON-encoded mapping in the
following format:
</para>
<screen>
{'images': [
{'uri': 'http://glance.example.com/images/1',
'name': 'Ubuntu 10.04 Plain 5GB',
'disk_format': 'vhd',
'container_format': 'ovf',
'size': '5368709120',
'checksum': 'c2e5db72bd7fd153f53ede5da5a06de3',
'location': 'swift://account:key/container/image.tar.gz.0',
'created_at': '2010-02-03 09:34:01',
'updated_at': '2010-02-03 09:34:01',
'deleted_at': '',
'status': 'active',
'is_public': True,
'properties': {'distro': 'Ubuntu 10.04 LTS'}},
...]}
</screen>
      <note><para>
        All images returned from the above <literal>GET</literal> request are <emphasis>public</emphasis> images.
      </para><para>
       All timestamps returned are in UTC.</para>
        <para>The <literal>updated_at</literal> timestamp is the timestamp when an image's metadata
        was last updated, not its image data, as all image data is immutable
        once stored in Glance.</para>
        <para>The <literal>properties</literal> field is a mapping of free-form key/value pairs that
        have been saved with the image metadata.</para>
        <para>The <literal>checksum</literal> field is an MD5 checksum of the image file data.
      </para></note>
</section>
<section xml:id="filtering-images-returned-via-get-images-and-get-imagesdetail"><info><title>Filtering Images Returned via <literal>GET /images</literal>
and <literal>GET /images/detail</literal></title></info>
<para>
Both the <literal>GET /images</literal> and
<literal>GET /images/detail</literal> requests take query
parameters that serve to filter the returned list of images. The
following list details these query parameters.
</para>
<itemizedlist>
<listitem>
<para>
<literal>name=NAME</literal>
</para>
<para>
Filters images having a <literal>name</literal> attribute
matching <literal>NAME</literal>.
</para>
</listitem>
<listitem>
<para>
<literal>container_format=FORMAT</literal>
</para>
<para>
Filters images having a <literal>container_format</literal>
attribute matching <literal>FORMAT</literal>
</para>
<para>
For more information, see About Disk and Container
Formats.
</para>
</listitem>
<listitem>
<para>
<literal>disk_format=FORMAT</literal>
</para>
<para>
Filters images having a <literal>disk_format</literal>
attribute matching <literal>FORMAT</literal>
</para>
<para>
For more information, see About Disk and Container
Formats.
</para>
</listitem>
<listitem>
<para>
<literal>status=STATUS</literal>
</para>
<para>
Filters images having a <literal>status</literal> attribute
matching <literal>STATUS</literal>
</para>
<para>
            For more information, see About Image Statuses.
</para>
</listitem>
<listitem>
<para>
<literal>size_min=BYTES</literal>
</para>
<para>
Filters images having a <literal>size</literal> attribute
greater than or equal to <literal>BYTES</literal>
</para>
</listitem>
<listitem>
<para>
<literal>size_max=BYTES</literal>
</para>
<para>
Filters images having a <literal>size</literal> attribute less
than or equal to <literal>BYTES</literal>
</para>
</listitem>
</itemizedlist>
<para>
These two resources also accept sort parameters:
</para>
<itemizedlist>
<listitem>
<para>
<literal>sort_key=KEY</literal>
</para>
<para>
Results will be ordered by the specified image attribute
<literal>KEY</literal>. Accepted values include
<literal>id</literal>, <literal>name</literal>,
<literal>status</literal>, <literal>disk_format</literal>,
<literal>container_format</literal>, <literal>size</literal>,
<literal>created_at</literal> (default) and
<literal>updated_at</literal>.
</para>
</listitem>
<listitem>
<para>
<literal>sort_dir=DIR</literal>
</para>
<para>
Results will be sorted in the direction
<literal>DIR</literal>. Accepted values are
<literal>asc</literal> for ascending or
<literal>desc</literal> (default) for descending.
</para>
</listitem>
</itemizedlist>
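<para>The filter and sort parameters above combine as ordinary query-string arguments. For example, urllib can build a filtered, sorted listing URL; the endpoint host is the hypothetical one used throughout this chapter, and the parameter values are illustrative.</para>

```python
from urllib.parse import urlencode

params = {
    "disk_format": "vhd",      # filter: only vhd images
    "size_min": 4 * 1024**3,   # filter: at least 4 GB
    "sort_key": "name",        # sort by image name...
    "sort_dir": "asc",         # ...in ascending order
}
url = "http://glance.example.com/images/detail?" + urlencode(params)
print(url)
```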
</section>
<section xml:id="requesting-detailed-metadata-on-a-specific-image"><?dbhtml filename="requesting-metadata-specific-image.html" ?><info><title>Requesting Detailed Metadata on a Specific Image</title></info>
<para>
We want to see detailed information for a specific virtual machine
image that the Glance server knows about.
</para>
<para>
We have queried the Glance server for a list of public images and
the data returned includes the `uri` field for each available
image. This `uri` field value contains the exact location needed
to get the metadata for a specific image.
</para>
<para>
Continuing the example from above, in order to get metadata about
the first public image returned, we can issue a
<literal>HEAD</literal> request to the Glance server for the
image's URI.
</para>
<para>
We issue a <literal>HEAD</literal> request to
<literal>http://glance.example.com/images/1</literal> to retrieve
complete metadata for that image. The metadata is returned as a
set of HTTP headers that begin with the prefix
<literal>x-image-meta-</literal>. The following shows an example
of the HTTP headers returned from the above
<literal>HEAD</literal> request:
</para>
<screen>
x-image-meta-uri http://glance.example.com/images/1
x-image-meta-name Ubuntu 10.04 Plain 5GB
x-image-meta-disk-format vhd
x-image-meta-container-format ovf
x-image-meta-size 5368709120
x-image-meta-checksum c2e5db72bd7fd153f53ede5da5a06de3
x-image-meta-location swift://account:key/container/image.tar.gz.0
x-image-meta-created_at 2010-02-03 09:34:01
x-image-meta-updated_at 2010-02-03 09:34:01
x-image-meta-deleted_at
x-image-meta-status available
x-image-meta-is-public True
x-image-meta-property-distro Ubuntu 10.04 LTS
</screen>
      <note><para>
        All timestamps returned are in UTC.
      </para>
      <para>The <literal>x-image-meta-updated_at</literal> timestamp is the timestamp when an
        image's metadata was last updated, not its image data, as all
        image data is immutable once stored in Glance.</para>
        <para>There may be multiple headers that begin with the prefix
        <literal>x-image-meta-property-</literal>. These headers are free-form key/value pairs
        that have been saved with the image metadata. The key is the string
        after <literal>x-image-meta-property-</literal> and the value is the value of the header.</para>
        <para>The response's <literal>ETag</literal> header will always be equal to the
        <literal>x-image-meta-checksum</literal> value.</para>
</note>
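<para>Because the metadata arrives as <literal>x-image-meta-</literal> headers, a client typically folds them back into a dictionary. The sketch below does that for a header set like the example above; the headers are hard-coded here, where a real client would take them from the <literal>HEAD</literal> response.</para>

```python
# Example headers in the documented shape.
headers = {
    "x-image-meta-uri": "http://glance.example.com/images/1",
    "x-image-meta-name": "Ubuntu 10.04 Plain 5GB",
    "x-image-meta-size": "5368709120",
    "x-image-meta-property-distro": "Ubuntu 10.04 LTS",
}

metadata, properties = {}, {}
for key, value in headers.items():
    key = key.lower()
    if key.startswith("x-image-meta-property-"):
        # Free-form key/value pairs saved with the image.
        properties[key[len("x-image-meta-property-"):]] = value
    elif key.startswith("x-image-meta-"):
        metadata[key[len("x-image-meta-"):]] = value
metadata["properties"] = properties
```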
</section>
<section xml:id="retrieving-a-virtual-machine-image"><?dbhtml filename="retrieving-vm-image.html" ?><info><title>Retrieving a Virtual Machine Image</title></info>
<para>
        We want to retrieve the actual raw data for a specific virtual
machine image that the Glance server knows about.
</para>
<para>
We have queried the Glance server for a list of public images and
the data returned includes the `uri` field for each available
image. This `uri` field value contains the exact location needed
to get the metadata for a specific image.
</para>
      <para>
        Continuing the example from above, in order to retrieve the
        first public image returned, we can issue a
        <literal>GET</literal> request to the Glance server for the
        image's URI.
      </para>
<para>
We issue a <literal>GET</literal> request to
<literal>http://glance.example.com/images/1</literal> to retrieve
metadata for that image as well as the image itself encoded into
the response body.
</para>
<para>
The metadata is returned as a set of HTTP headers that begin with
the prefix <literal>x-image-meta-</literal>. The following shows
an example of the HTTP headers returned from the above
<literal>GET</literal> request:
</para>
<screen>
x-image-meta-uri http://glance.example.com/images/1
x-image-meta-name Ubuntu 10.04 Plain 5GB
x-image-meta-disk-format vhd
x-image-meta-container-format ovf
x-image-meta-size 5368709120
x-image-meta-checksum c2e5db72bd7fd153f53ede5da5a06de3
x-image-meta-location swift://account:key/container/image.tar.gz.0
x-image-meta-created_at 2010-02-03 09:34:01
x-image-meta-updated_at 2010-02-03 09:34:01
x-image-meta-deleted_at
x-image-meta-status available
x-image-meta-is-public True
x-image-meta-property-distro Ubuntu 10.04 LTS
</screen>
      <note><para>
        All timestamps returned are in UTC.</para>
       <para> The <literal>x-image-meta-updated_at</literal> timestamp is the timestamp when an
        image's metadata was last updated, not its image data, as all
        image data is immutable once stored in Glance.</para>
        <para>There may be multiple headers that begin with the prefix
        <literal>x-image-meta-property-</literal>. These headers are free-form key/value pairs
        that have been saved with the image metadata. The key is the string
        after <literal>x-image-meta-property-</literal> and the value is the value of the header.</para>
        <para>The response's <literal>Content-Length</literal> header shall be equal to the value of
        the <literal>x-image-meta-size</literal> header.</para>
        <para>The response's <literal>ETag</literal> header will always be equal to the
        <literal>x-image-meta-checksum</literal> value.</para>
        <para>The image data itself will be the body of the HTTP response returned
        from the request, which will have content-type of
        <literal>application/octet-stream</literal>.</para>
     </note>
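<para>Since the checksum is an MD5 of the image data and the <literal>ETag</literal> always equals it, a client can verify a download before trusting it. Below is a minimal sketch operating on an in-memory byte string instead of a real download; the function name <literal>verify_image</literal> is an illustrative assumption.</para>

```python
import hashlib

def verify_image(data, expected_checksum):
    """Compare the MD5 of downloaded image bytes with x-image-meta-checksum."""
    actual = hashlib.md5(data).hexdigest()
    if actual != expected_checksum:
        raise ValueError("corrupt download: %s != %s" % (actual, expected_checksum))
    return True

# Stand-in for a response body and its x-image-meta-checksum header.
image_bytes = b"pretend this is the image body"
checksum = hashlib.md5(image_bytes).hexdigest()
```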
</section>
<section xml:id="adding-a-new-virtual-machine-image"><?dbhtml filename="adding-vm-image.html" ?><info><title>Adding a New Virtual Machine Image</title></info>
<para>
We have created a new virtual machine image in some way (created a
"golden image" or snapshotted/backed up an existing
image) and we wish to do two things:
</para>
<itemizedlist>
<listitem>
<para>
Store the disk image data in Glance
</para>
</listitem>
<listitem>
<para>
Store metadata about this image in Glance
</para>
</listitem>
</itemizedlist>
<para>
We can do the above two activities in a single call to the Glance
API. Assuming, like in the examples above, that a Glance API
server is running at <literal>glance.example.com</literal>, we
issue a <literal>POST</literal> request to add an image to Glance:
</para>
<screen>
POST http://glance.example.com/images/
</screen>
<para>
The metadata about the image is sent to Glance in HTTP headers.
The body of the HTTP request to the Glance API will be the
MIME-encoded disk image data.
</para>
<section xml:id="adding-image-metadata-in-http-headers"><?dbhtml filename="adding-image-metadata-http-headers.html" ?><info><title>Adding Image Metadata in HTTP Headers</title></info>
        <para>
          Glance will view as image metadata any HTTP header that it
          receives in a <literal>POST</literal> request where the header key
          is prefixed with the string <literal>x-image-meta-</literal> or
          <literal>x-image-meta-property-</literal>.
        </para>
<para>
The list of metadata headers that Glance accepts are listed
below.
</para>
<itemizedlist>
<listitem>
<para>
<literal>x-image-meta-name</literal>
</para>
<para>
This header is required. Its value should be the name of the
image.
</para>
<para>
Note that the name of an image <emphasis>is not unique to a
Glance node</emphasis>. It would be an unrealistic
expectation of users to know all the unique names of all
              other users' images.
</para>
</listitem>
<listitem>
<para>
<literal>x-image-meta-id</literal>
</para>
<para>
This header is optional.
</para>
<para>
When present, Glance will use the supplied identifier for
the image. If the identifier already exists in that Glance
node, then a <emphasis role="strong">409 Conflict</emphasis>
will be returned by Glance.
</para>
<para>
When this header is <emphasis>not</emphasis> present, Glance
will generate an identifier for the image and return this
              identifier in the response (see below).
</para>
</listitem>
<listitem>
<para>
<literal>x-image-meta-store</literal>
</para>
<para>
This header is optional. Valid values are one of
<literal>file</literal>, <literal>s3</literal>, or
<literal>swift</literal>
</para>
<para>
When present, Glance will attempt to store the disk image
data in the backing store indicated by the value of the
header. If the Glance node does not support the backing
store, Glance will return a <emphasis role="strong">400 Bad
Request</emphasis>.
</para>
<para>
When not present, Glance will store the disk image data in
the backing store that is marked default. See the
configuration option <literal>default_store</literal> for
more information.
</para>
</listitem>
<listitem>
<para>
<literal>x-image-meta-disk-format</literal>
</para>
<para>
This header is optional. Valid values are one of
<literal>aki</literal>, <literal>ari</literal>,
<literal>ami</literal>, <literal>raw</literal>,
<literal>iso</literal>, <literal>vhd</literal>,
<literal>vdi</literal>, <literal>qcow2</literal>, or
<literal>vmdk</literal>.
</para>
<para>
              For more information, see About Disk and Container Formats.
</para>
</listitem>
<listitem>
<para>
<literal>x-image-meta-container-format</literal>
</para>
<para>
This header is optional. Valid values are one of
<literal>aki</literal>, <literal>ari</literal>,
<literal>ami</literal>, <literal>bare</literal>, or
<literal>ovf</literal>.
</para>
<para>
              For more information, see About Disk and Container Formats.
</para>
</listitem>
<listitem>
<para>
<literal>x-image-meta-size</literal>
</para>
<para>
This header is optional.
</para>
<para>
When present, Glance assumes that the expected size of the
request body will be the value of this header. If the length
in bytes of the request body <emphasis>does not
match</emphasis> the value of this header, Glance will
return a <emphasis role="strong">400 Bad Request</emphasis>.
</para>
<para>
When not present, Glance will calculate the image's size
based on the size of the request body.
</para>
</listitem>
<listitem>
<para>
<literal>x-image-meta-checksum</literal>
</para>
<para>
This header is optional. When present it shall be the
expected <emphasis role="strong">MD5</emphasis> checksum of
the image file data.
</para>
<para>
When present, Glance will verify the checksum generated from
the backend store when storing your image against this value
and return a <emphasis role="strong">400 Bad
Request</emphasis> if the values do not match.
</para>
</listitem>
<listitem>
<para>
<literal>x-image-meta-is-public</literal>
</para>
<para>
This header is optional.
</para>
<para>
When Glance finds the string "true"
(case-insensitive), the image is marked as a public image,
meaning that any user may view its metadata and may read the
disk image from Glance.
</para>
<para>
When not present, the image is assumed to be <emphasis>not
public</emphasis> and specific to a user.
</para>
</listitem>
<listitem>
<para>
<literal>x-image-meta-property-*</literal>
</para>
<para>
When Glance receives any HTTP header whose key begins with
the string prefix <literal>x-image-meta-property-</literal>,
Glance adds the key and value to a set of custom, free-form
image properties stored with the image. The key is the
lower-cased string following the prefix
<literal>x-image-meta-property-</literal> with dashes and
punctuation replaced with underscores.
</para>
<para>
For example, if the following HTTP header were sent:
</para>
<screen>
x-image-meta-property-distro Ubuntu 10.10
</screen>
<para>
Then a key/value pair of "distro"/"Ubuntu
10.10" will be stored with the image in Glance.
</para>
<para>
There is no limit on the number of free-form key/value
attributes that can be attached to the image. However, keep
in mind that the 8K limit on the size of all HTTP headers
sent in a request will effectively limit the number of image
properties.
</para>
</listitem>
</itemizedlist>
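<para>The <literal>x-image-meta-property-</literal> normalization described above (the key is lower-cased, with dashes and punctuation replaced by underscores) can be sketched as follows. This mirrors the documented behavior and is not Glance's actual implementation.</para>

```python
import re

def normalize_property_key(header_key):
    """Turn an x-image-meta-property-* header key into a stored property name."""
    prefix = "x-image-meta-property-"
    key = header_key.lower()
    assert key.startswith(prefix), "not a property header"
    # Replace dashes and other punctuation with underscores, per the text above.
    return re.sub(r"[^a-z0-9]", "_", key[len(prefix):])
```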
</section>
<section xml:id="updating-an-image"><?dbhtml filename="updating-vm-image.html" ?><info><title>Updating an Image</title></info>
        <para>
          Glance will view as image metadata any HTTP header that it
          receives in a <literal>PUT</literal> request where the header key
          is prefixed with the string <literal>x-image-meta-</literal> or
          <literal>x-image-meta-property-</literal>.
        </para>
<para>
If an image was previously reserved, and thus is in the
<literal>queued</literal> state, then image data can be added by
          including it as the request body. If the image already has data
          associated with it (that is, it is not in the <literal>queued</literal>
state), then including a request body will result in a
<emphasis role="strong">409 Conflict</emphasis> exception.
</para>
<para>
On success, the <literal>PUT</literal> request will return the
image metadata encoded as HTTP headers.
</para>
<para>
          For more information about image statuses, see About Image Statuses.
</para>
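<para>The queued-versus-stored rule above amounts to a simple state check. Below is a sketch of the server-side decision; names like <literal>ConflictError</literal> are illustrative stand-ins, not Glance's actual classes.</para>

```python
class ConflictError(Exception):
    """Stands in for an HTTP 409 Conflict response."""

def accept_image_data(image_status, request_has_body):
    # Image data may only be uploaded while the image is still
    # reserved ("queued"); stored image data is immutable.
    if request_has_body and image_status != "queued":
        raise ConflictError("image data is immutable once stored")
    return "uploading" if request_has_body else "metadata-only update"
```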
</section>
</section>
</section>
</chapter>