Clean up tutorial and some markup

* Change tutorial to use TryStack
* Update interfaces chapter, knowing VNC updates are in another patch
* Update <literallayout... to <screen> or <programlisting> where needed

Change-Id: I41da39a81682d17cf5b83e400931a96afc1742bb
annegentle 2012-03-21 16:11:55 -05:00
parent 08f9edf593
commit 4a821e95d4
5 changed files with 633 additions and 726 deletions

@ -1,59 +1,184 @@
<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="ch_image_mgmt">
<title>Image Management</title>
    <para>You can use the OpenStack Image Service for discovering,
        registering, and retrieving virtual machine images. The
        service includes a RESTful API that allows users to query VM
        image metadata and retrieve the actual image with HTTP
        requests, or you can use a client class in your Python code to
        accomplish the same tasks. </para>
    <para>VM images made available through OpenStack Image Service
        can be stored in a variety of locations, from simple file
        systems to object-storage systems like the OpenStack Object
        Storage project; you can even use S3 storage, either on its
        own or through an OpenStack Object Storage S3 interface.</para>
<para>The backend stores that OpenStack Image Service can work
with are as follows:</para>
<itemizedlist>
<listitem>
<para>OpenStack Object Storage - OpenStack Object Storage
is the highly-available object storage project in
OpenStack.</para>
</listitem>
<listitem>
<para>Filesystem - The default backend that OpenStack
Image Service uses to store virtual machine images is
the filesystem backend. This simple backend writes
image files to the local filesystem.</para>
</listitem>
<listitem>
            <para>S3 - This backend allows OpenStack Image Service to
                store virtual machine images in Amazon's S3
                service.</para>
</listitem>
<listitem>
            <para>HTTP - OpenStack Image Service can read virtual
                machine images that are available via HTTP somewhere
                on the Internet. This store is read-only.</para>
</listitem>
</itemizedlist>
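    <para>Which backend store a deployment uses is controlled by the
        Image Service configuration. As a minimal sketch (the option
        names follow the glance-api.conf format; the values shown are
        illustrative, not a recommended configuration):</para>
    <programlisting># glance-api.conf excerpt: choose one backend store
default_store = file
# Used when default_store = file
filesystem_store_datadir = /var/lib/glance/images/</programlisting>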
<para>This chapter assumes you have a working installation of the
Image Service, with a working endpoint and users created in
the Identity service, plus you have sourced the environment
variables required by the nova client and glance
client.</para>
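    <para>For example, if your credentials are collected in an
        <filename>openrc</filename> file (the file name and the exact
        set of variables are illustrative):</para>
    <screen><prompt>$</prompt> <userinput>source openrc</userinput>
<prompt>$</prompt> <userinput>env | grep OS_</userinput></screen>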
<section xml:id="starting-images">
<title>Starting Images</title><para>Once you have an installation, you want to get images that you can use in your Compute cloud.
We've created a basic Ubuntu image for testing your installation. First you'll download
the image, then use "uec-publish-tarball" to publish it:</para>
<para><literallayout class="monospaced">
image="ubuntu1010-UEC-localuser-image.tar.gz"
wget http://c0179148.cdn1.cloudfiles.rackspacecloud.com/ubuntu1010-UEC-localuser-image.tar.gz
uec-publish-tarball $image [bucket-name] [hardware-arch]
</literallayout>
<itemizedlist>
<listitem>
<para>
<emphasis role="bold">Image</emphasis> : a tar.gz file that contains the
system, its kernel and ramdisk. </para>
</listitem>
<listitem>
<para>
<emphasis role="bold">Bucket</emphasis> : a local repository that contains
images. </para>
</listitem>
<listitem>
                <para>
                    <emphasis role="bold">Hardware architecture</emphasis> : the
                    image architecture, specified as "amd64" (64-bit) or
                    "i386" (32-bit). </para>
</listitem>
</itemizedlist>
</para>
<para>Here's an example of what this command looks like with data:</para>
<para><literallayout class="monospaced">uec-publish-tarball ubuntu1010-UEC-localuser-image.tar.gz dub-bucket amd64</literallayout></para>
        <para>In return, the command should output three references: <emphasis role="italic">emi</emphasis>,
            <emphasis role="italic">eri</emphasis>, and <emphasis role="italic">eki</emphasis>.
            Next, run nova image-list to obtain the ID of the
            image you just uploaded.</para>
        <para>Now you can schedule, launch, and connect to the instance, which you do with the
            <command>nova</command> command line. The ID of the image is used with the
            <command>nova boot</command> command.</para>
        <para>One thing to note here: once you publish the tarball, it must be untarred before
            you can launch an image from it. Use the <command>nova image-list</command> command
            to make sure the image status is "ACTIVE".</para>
<para><literallayout class="monospaced">nova image-list</literallayout></para>
        <para>Depending on the image that you're using, you may need a public key to connect to
            it. Some images have built-in accounts already created. Images can be shared by many
            users, so it is dangerous to put passwords into the images. Nova therefore supports
            injecting SSH keys into instances before they are booted, which allows users to log
            in securely to the instances that they create. Generally the first thing a user does
            when using the system is create a keypair. </para>
        <para>Keypairs provide secure authentication to your instances. As part of the first boot of
            a virtual image, the public key of your keypair is added to root's authorized_keys
            file. Nova generates a public and private key pair, and sends the private key to the
            user. The public key is stored so that it can be injected into instances. </para>
        <para>Keypairs are created through the API, and you pass the keypair name as a parameter
            when launching an instance. You can create one on the command line with
            <command>nova keypair-add</command>; to list all the available options, run
            <command>nova help</command>.
            Example usage:</para>
<literallayout class="monospaced">
nova keypair-add test > test.pem
chmod 600 test.pem
</literallayout>
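        <para>You can confirm that the keypair was registered, and see its
            fingerprint, with:</para>
        <screen><prompt>$</prompt> <userinput>nova keypair-list</userinput></screen>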
<para>Now, you can run the instances:</para>
<literallayout class="monospaced">nova boot --image 1 --flavor 1 --key_name test my-first-server</literallayout>
<para>Here's a description of the parameters used above:</para>
<itemizedlist>
<listitem>
<para><emphasis role="bold">--flavor</emphasis> what type of image to create. You
can get all the flavors you have by running
<literallayout class="monospaced">nova flavor-list</literallayout></para>
</listitem>
<listitem>
                <para>
                    <emphasis role="bold">--key_name</emphasis> name of the key to inject into the
                    image at launch. </para>
</listitem>
</itemizedlist>
        <para> The instance will go from “BUILD” to “ACTIVE” in a short time, and you should
            be able to connect via SSH using the 'ubuntu' account, with the password 'ubuntu'
            (replace $ipaddress with the address you got from nova list): </para>
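        <para>For example, a hypothetical listing (your ID, name, and
            address will differ):</para>
        <screen><prompt>$</prompt> <userinput>nova list</userinput>
+----+-----------------+--------+-------------------+
| ID | Name            | Status | Networks          |
+----+-----------------+--------+-------------------+
| 1  | my-first-server | ACTIVE | private=10.0.0.3  |
+----+-----------------+--------+-------------------+</screen>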
<para>
<literallayout class="monospaced">ssh ubuntu@$ipaddress</literallayout></para>
<para>The 'ubuntu' user is part of the sudoers group, so you can escalate to 'root'
via the following command:</para>
<para>
<literallayout class="monospaced">sudo -i</literallayout>
</para>
</section>
<section xml:id="deleting-instances">
<title>Deleting Instances</title>
        <para>When you are done with an instance, you can tear
            the instance down using the following command (replace
            $server-id with the server ID from above, or look it up
            with nova list):</para>
<para><literallayout class="monospaced">nova delete $server-id</literallayout></para>
</section>
<section xml:id="pausing-and-suspending-instances">
<title>Pausing and Suspending Instances</title>
        <para>Since version 1.1 of the API, it is possible to pause
            and suspend instances.</para>
<warning>
            <para>Pausing and suspending instances only applies to
                KVM-based hypervisors and XenServer/XCP hypervisors.
            </para>
</warning>
        <para>Pause / Unpause: stores the content of the VM in memory
            (RAM).</para>
        <para>Suspend / Resume: stores the content of the VM on
            disk.</para>
        <para>It can be useful for an administrator to suspend
            instances if maintenance is planned or if the instances
            are not frequently used. Suspending an instance frees up
            memory and vCPUs, while pausing keeps the instance in
            memory, in a "frozen" state. Suspension can be compared
            to a "hibernation" mode.</para>
<section xml:id="pausing-instance">
            <title>Pausing an instance</title>
            <para>To pause an instance:</para>
<literallayout class="monospaced">nova pause $server-id </literallayout>
            <para>To resume a paused instance:</para>
<literallayout class="monospaced">nova unpause $server-id </literallayout>
</section>
<section xml:id="suspending-instance">
            <title>Suspending an instance</title>
            <para>To suspend an instance:</para>
@ -62,70 +187,155 @@
<literallayout class="monospaced">nova resume $server-id </literallayout>
</section>
</section>
<section xml:id="creating-custom-images">
<info><author>
<orgname>CSS Corp- Open Source Services</orgname>
</author><title>Image management</title></info>
<para>by <link xlink:href="http://www.csscorp.com/">CSS Corp Open Source Services</link> </para>
<para>There are several pre-built images for OpenStack available from various sources. You can download such images and use them to get familiar with OpenStack. You can refer to <link xlink:href="http://docs.openstack.org/cactus/openstack-compute/admin/content/starting-images.html">http://docs.openstack.org/cactus/openstack-compute/admin/content/starting-images.html</link> for details on using such images.</para>
<para>For any production deployment, you may like to have the ability to bundle custom images, with a custom set of applications or configuration. This chapter will guide you through the process of creating Linux images of Debian and Redhat based distributions from scratch. We have also covered an approach to bundling Windows images.</para>
<para>There are some minor differences in the way you would bundle a Linux image, based on the distribution. Ubuntu makes it very easy by providing cloud-init package, which can be used to take care of the instance configuration at the time of launch. cloud-init handles importing ssh keys for password-less login, setting hostname etc. The instance acquires the instance specific configuration from Nova-compute by connecting to a meta data interface running on 169.254.169.254.</para>
<para>While creating the image of a distro that does not have cloud-init or an equivalent package, you may need to take care of importing the keys etc. by running a set of commands at boot time from rc.local.</para>
<para>The process used for Ubuntu and Fedora is largely the same with a few minor differences, which are explained below.</para>
<section xml:id="creating-custom-images">
<info>
<author>
<orgname>CSS Corp- Open Source Services</orgname>
</author>
<title>Image management</title>
</info>
<para>by <link xlink:href="http://www.csscorp.com/">CSS Corp
Open Source Services</link>
</para>
<para>There are several pre-built images for OpenStack
available from various sources. You can download such
images and use them to get familiar with OpenStack. You
can refer to <link
xlink:href="http://docs.openstack.org/cactus/openstack-compute/admin/content/starting-images.html"
>http://docs.openstack.org/cactus/openstack-compute/admin/content/starting-images.html</link>
for details on using such images.</para>
            <para>For any production deployment, you will likely want
                the ability to bundle custom images, with a custom set of
                applications or configuration. This chapter will guide you
                through the process of creating Linux images of Debian and
                Red Hat based distributions from scratch. We have also
                covered an approach to bundling Windows images.</para>
            <para>There are some minor differences in the way you would
                bundle a Linux image, based on the distribution. Ubuntu
                makes it very easy by providing the cloud-init package,
                which can be used to take care of the instance
                configuration at the time of launch. cloud-init handles
                importing SSH keys for password-less login, setting the
                hostname, and so on. The instance acquires its
                instance-specific configuration from nova-compute by
                connecting to a metadata interface
                running on 169.254.169.254.</para>
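            <para>For example, from inside a running instance you can
                query this metadata service directly (the paths follow
                the EC2-style metadata API):</para>
            <screen><prompt>$</prompt> <userinput>curl http://169.254.169.254/latest/meta-data/</userinput>
<prompt>$</prompt> <userinput>curl http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key</userinput></screen>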
            <para>While creating the image of a distro that does not have
                cloud-init or an equivalent package, you may need to take
                care of importing the keys and so on by running a set of
                commands at boot time from rc.local.</para>
<para>The process used for Ubuntu and Fedora is largely the
same with a few minor differences, which are explained
below.</para>
            <para>In both cases, the documentation below assumes that you
                have a working KVM installation to use for creating the
                images. We are using the machine called
                &#8216;client1&#8217; as explained in the chapter on
                &#8220;Installation and Configuration&#8221; for this
                purpose.</para>
            <para>The approach explained below will give you disk images
                that represent a disk without any partitions. Nova-compute
                can resize such disks (including resizing the file
                system) based on the instance type chosen at the time of
                launching the instance. These images cannot have the
                &#8216;bootable&#8217; flag, and hence it is mandatory to
                have associated kernel and ramdisk images. These kernel
                and ramdisk images need to be used by nova-compute at the
                time of launching the instance.</para>
            <para>However, we have also added a small section towards the
                end of the chapter about creating bootable images with
                multiple partitions that can be used by nova to launch
                an instance without the need for kernel and ramdisk
                images. The caveat is that while nova-compute can re-size
                such disks at the time of launching the instance, the file
                system size is not altered and hence, for all practical
                purposes, such disks are not re-sizable.</para>
<section xml:id="creating-a-linux-image">
<title>Creating a Linux Image &#8211; Ubuntu &amp;
Fedora</title>
<para>The first step would be to create a raw image on
Client1. This will represent the main HDD of the
virtual machine, so make sure to give it as much space
as you will need.</para>
<literallayout class="monospaced">
kvm-img create -f raw server.img 5G
</literallayout>
<simplesect>
<title>OS Installation</title>
<para>Download the iso file of the Linux distribution
you want installed in the image. The instructions
below are tested on Ubuntu 11.04 Natty Narwhal
64-bit server and Fedora 14 64-bit. Most of the
instructions refer to Ubuntu. The points of
difference between Ubuntu and Fedora are mentioned
wherever required.</para>
<literallayout class="monospaced">
wget http://releases.ubuntu.com/natty/ubuntu-11.04-server-amd64.iso
</literallayout>
                <para>Boot a KVM instance with the OS installer ISO in
                    the virtual CD-ROM. This will start the
                    installation process. The command below also sets
                    up a VNC server on display :0.</para>
<literallayout class="monospaced">
sudo kvm -m 256 -cdrom ubuntu-11.04-server-amd64.iso -drive file=server.img,if=scsi,index=0 -boot d -net nic -net user -nographic -vnc :0
</literallayout>
<para>Connect to the VM through VNC (use display
number :0) and finish the installation.</para>
                <para>For example, where 10.10.10.4 is the IP address
                    of client1:</para>
                <literallayout class="monospaced">
vncviewer 10.10.10.4:0
</literallayout>
<para>During the installation of Ubuntu, create a
single ext4 partition mounted on &#8216;/&#8217;.
Do not create a swap partition.</para>
<para>In the case of Fedora 14, the installation will
not progress unless you create a swap partition.
Please go ahead and create a swap
partition.</para>
<para>After finishing the installation, relaunch the
VM by executing the following command.</para>
<literallayout class="monospaced">
sudo kvm -m 256 -drive file=server.img,if=scsi,index=0,boot=on -boot c -net nic -net user -nographic -vnc :0
</literallayout>
                <para>At this point, you can add all the packages you
                    want to have installed, update the installation,
                    add users, and make any configuration changes you
                    want in your image.</para>
                <para>At a minimum, for Ubuntu you may run the
                    following commands:</para>
<literallayout class="monospaced">
sudo apt-get update
sudo apt-get upgrade
sudo apt-get install openssh-server cloud-init
</literallayout>
                <para>For Fedora, run the following commands as
                    root:</para>
<literallayout class="monospaced">
yum update
yum install openssh-server
chkconfig sshd on
</literallayout>
<para>Also remove the network persistence rules from
/etc/udev/rules.d as their presence will result in
the network interface in the instance coming up as
an interface other than eth0.</para>
<literallayout class="monospaced">
sudo rm -rf /etc/udev/rules.d/70-persistent-net.rules
</literallayout>
                <para>Shut down the virtual machine and proceed with
                    the next steps.</para>
</simplesect>
<simplesect>
<title>Extracting the EXT4 partition</title>
                <para>The image that needs to be uploaded to OpenStack
                    must be an ext4 filesystem image. Here are the
                    steps to create an ext4 filesystem image from the
                    raw image, i.e., server.img:</para>
<literallayout class="monospaced">
sudo losetup -f server.img
sudo losetup -a
@ -134,8 +344,11 @@ sudo losetup -a
<literallayout class="monospaced">
/dev/loop0: [0801]:16908388 ($filepath)
</literallayout>
                <para>Observe the name of the loop device (/dev/loop0
                    in our setup), where $filepath is the path to the
                    mounted .raw file.</para>
<para>Now we need to find out the starting sector of
the partition. Run:</para>
<literallayout class="monospaced">
sudo fdisk -cul /dev/loop0
</literallayout>
@ -151,12 +364,19 @@ Disk identifier: 0x00072bd4
Device Boot Start End Blocks Id System
/dev/loop0p1 * 2048 10483711 5240832 83 Linux
</literallayout>
                <para>Make a note of the starting sector of the
                    /dev/loop0p1 partition, i.e., the partition whose ID
                    is 83. This number should be multiplied by 512 to
                    obtain the correct byte offset. In this case: 2048 x 512
                    = 1048576</para>
<para>Unmount the loop0 device:</para>
<literallayout class="monospaced">
sudo losetup -d /dev/loop0
</literallayout>
                <para>Now mount only the partition (/dev/loop0p1) of
                    server.img that we noted previously, by passing the
                    -o option with the offset value calculated
                    above:</para>
<literallayout class="monospaced">
sudo losetup -f -o 1048576 server.img
sudo losetup -a
@ -165,25 +385,39 @@ sudo losetup -a
<literallayout class="monospaced">
/dev/loop0: [0801]:16908388 ($filepath) offset 1048576
</literallayout>
                <para>Make a note of the name of our loop device
                    (/dev/loop0 in our setup), where $filepath is
                    the path to the mounted .raw file.</para>
<para>Copy the entire partition to a new .raw
file</para>
<literallayout class="monospaced">
sudo dd if=/dev/loop0 of=serverfinal.img
</literallayout>
                <para>Now we have our ext4 filesystem image, i.e.,
                    serverfinal.img.</para>
<para>Unmount the loop0 device</para>
<literallayout class="monospaced">
sudo losetup -d /dev/loop0
</literallayout>
</simplesect>
<simplesect>
<title>Tweaking /etc/fstab</title>
                <para>You will need to tweak /etc/fstab to make it
                    suitable for a cloud instance. Nova-compute may
                    resize the disk at the time of launching instances,
                    based on the instance type chosen. This can make
                    the UUID of the disk invalid. Hence we have to use
                    the file system label as the identifier for the
                    partition instead of the UUID.</para>
<para>Loop mount the serverfinal.img, by
running</para>
<literallayout class="monospaced">
sudo mount -o loop serverfinal.img /mnt
</literallayout>
                <para>Edit /mnt/etc/fstab and modify the line for
                    mounting the root partition (which may look like the
                    following)</para>
<programlisting>
UUID=e7f5af8d-5d96-45cc-a0fc-d0d1bde8f31c / ext4 errors=remount-ro 0 1
@ -194,9 +428,15 @@ UUID=e7f5af8d-5d96-45cc-a0fc-d0d1bde8f31c / ext4 errors=remoun
LABEL=uec-rootfs / ext4 defaults 0 0
</programlisting>
</simplesect>
<simplesect>
<title>Fetching Metadata in Fedora</title>
                <para>Since Fedora does not ship with cloud-init or
                    an equivalent, you will need to take a few steps
                    to have the instance fetch metadata, like SSH
                    keys, at boot.</para>
                <para>Edit the /etc/rc.local file and add the
                    following lines before the line “touch
                    /var/lock/subsys/local”:</para>
<programlisting>
depmod -a
@ -211,10 +451,15 @@ echo &quot;************************&quot;
cat /root/.ssh/authorized_keys
echo &quot;************************&quot;
</programlisting>
</simplesect>
</section>
<simplesect>
<title>Kernel and Initrd for OpenStack</title>
            <para>Copy the kernel and the initrd image from /mnt/boot
                to your home directory. These will be used later for
                creating and uploading a complete virtual image to
                OpenStack.</para>
<literallayout class="monospaced">
sudo cp /mnt/boot/vmlinuz-2.6.38-7-server /home/localadmin
sudo cp /mnt/boot/initrd.img-2.6.38-7-server /home/localadmin
@ -223,30 +468,59 @@ sudo cp /mnt/boot/initrd.img-2.6.38-7-server /home/localadmin
<literallayout class="monospaced">
sudo umount /mnt
</literallayout>
<para>Change the filesystem label of serverfinal.img to
&#8216;uec-rootfs&#8217;</para>
<literallayout class="monospaced">
sudo tune2fs -L uec-rootfs serverfinal.img
</literallayout>
            <para>Now we have all the components of the image ready
                to be uploaded to the OpenStack imaging server.</para>
</simplesect>
<simplesect>
<title>Registering with OpenStack</title>
                <para>The last step is to upload the images to the
                    OpenStack Imaging Server, glance. The files that need
                    to be uploaded for the above sample setup of Ubuntu
                    are: vmlinuz-2.6.38-7-server,
                    initrd.img-2.6.38-7-server, and serverfinal.img.</para>
<para>Run the following command</para>
<literallayout class="monospaced">
uec-publish-image -t image --kernel-file vmlinuz-2.6.38-7-server --ramdisk-file initrd.img-2.6.38-7-server amd64 serverfinal.img bucket1
</literallayout>
<para>For Fedora, the process will be similar. Make sure
that you use the right kernel and initrd files
extracted above.</para>
                <para>uec-publish-image, like several other commands from
                    euca2ools, returns the prompt immediately.
                    However, the upload process takes some time, and the
                    images will be usable only after the process is
                    complete. You can keep checking the status using the
                    command &#8216;euca-describe-images&#8217;, as
                    mentioned below.</para>
</simplesect>
<simplesect>
<title>Bootable Images</title>
<para>You can register bootable disk images without
associating kernel and ramdisk images. When you do not
want the flexibility of using the same disk image with
different kernel/ramdisk images, you can go for
bootable disk images. This greatly simplifies the
process of bundling and registering the images.
However, the caveats mentioned in the introduction to
this chapter apply. Please note that the instructions
below use server.img and you can skip all the
cumbersome steps related to extracting the single ext4
partition.</para>
<literallayout class="monospaced">
nova-manage image image_register server.img --public=T --arch=amd64
</literallayout>
</simplesect>
<simplesect>
<title>Image Listing</title>
                <para>The status of the images that have been uploaded can
                    be viewed by using the euca-describe-images command. The
                    output should look like this:</para>
<literallayout class="monospaced">nova image-list</literallayout>
<programlisting>
+----+---------------------------------------------+--------+
| ID | Name | Status |
@ -256,42 +530,72 @@ nova-manage image image_register server.img --public=T --arch=amd64
| 8 | ttylinux-uec-amd64-12.1_2.6.35-22_1.img | ACTIVE |
+----+---------------------------------------------+--------+
</programlisting>
</simplesect>
</section>
<section xml:id="creating-a-windows-image">
<title>Creating a Windows Image</title>
        <para>The first step would be to create a raw image on
            Client1. This will represent the main HDD of the virtual
            machine, so make sure to give it as much space as you will
            need.</para>
<literallayout class="monospaced">
kvm-img create -f raw windowsserver.img 20G
</literallayout>
        <para>OpenStack presents the disk using a VIRTIO interface
            while launching the instance. Hence the OS needs to have
            drivers for VIRTIO. By default, the Windows Server 2008
            ISO does not have the drivers for VIRTIO, so download a
            virtual floppy drive containing VIRTIO drivers from the
            following location</para>
<para><link
xlink:href="http://alt.fedoraproject.org/pub/alt/virtio-win/latest/images/bin/"
>http://alt.fedoraproject.org/pub/alt/virtio-win/latest/images/bin/</link></para>
<para>and attach it during the installation</para>
<para>Start the installation by running</para>
<literallayout class="monospaced">
sudo kvm -m 1024 -cdrom win2k8_dvd.iso -drive file=windowsserver.img,if=virtio,boot=on -fda virtio-win-1.1.16.vfd -boot d -nographic -vnc :0
</literallayout>
        <para>When the installation prompts you to choose a hard disk
            device, you won't see any devices available. Click on “Load
            drivers” at the bottom left and load the drivers from
            A:\i386\Win2008</para>
        <para>After the installation is over, boot into it once and
            install any additional applications you need and make any
            configuration changes you need. Also ensure that RDP is
            enabled, as that is the only way you can connect to a
            running instance of Windows. The Windows firewall needs to
            be configured to allow incoming ICMP and RDP
            connections.</para>
        <para>For OpenStack to allow incoming RDP connections, use the
            euca-authorize command to open up port 3389 as described
            in the chapter on &#8220;Security&#8221;.</para>
        <para>Shut down the VM and upload the image to
            OpenStack:</para>
<literallayout class="monospaced">
nova-manage image image_register windowsserver.img --public=T --arch=x86
</literallayout>
</section>
<section xml:id="creating-images-from-running-instances">
<title>Creating images from running instances with KVM and
Xen</title>
        <para> It is possible to create an image from a running
            instance on KVM and Xen. This is a convenient way to spawn
            pre-configured instances, update them according to your
            needs, and re-image the instances. The process to create
            an image from a running instance is quite simple: <itemizedlist>
<listitem>
<para>
<emphasis role="bold">Pre-requisites</emphasis>
<emphasis role="bold"
>Pre-requisites</emphasis>
</para>
                    <para> In order to use the feature properly, you
                        will need qemu-img at version 0.14 or
                        later. The imaging feature uses the copy from
                        a snapshot for image files (e.g. qemu-img
                        convert -f qcow2 -O qcow2 -s $snapshot_name
                        $instance-disk).</para>
                    <para>On Debian-like distros, you can check the
                        version by running:
                        <literallayout class="monospaced">dpkg -l | grep qemu</literallayout></para>
<programlisting>
ii qemu 0.14.0~rc1+noroms-0ubuntu4~ppalucid1 dummy transitional pacakge from qemu to qemu
@ -301,19 +605,22 @@ ii qemu-kvm 0.14.0~rc1+noroms-0ubuntu4~ppalucid1
</listitem>
<listitem>
<para>
<emphasis role="bold">Write data to disk</emphasis></para>
<para>
Before creating the image, we need to make sure we are not missing any
buffered content that wouldn't have been written to the instance's disk. In
order to resolve that ; connect to the instance and run
<command>sync</command> then exit.
</para>
<emphasis role="bold">Write data to
disk</emphasis></para>
                    <para> Before creating the image, we need to make
                        sure we are not missing any buffered content
                        that has not yet been written to the
                        instance's disk. To resolve that, connect to
                        the instance and run
                        <command>sync</command>, then exit. </para>
</listitem>
<listitem>
<para>
<emphasis role="bold">Create the image</emphasis>
<emphasis role="bold">Create the
image</emphasis>
</para>
                    <para> In order to create the image, we first need
                        to obtain the server ID:
                        <literallayout class="monospaced">nova list</literallayout><programlisting>
+-----+------------+--------+--------------------+
| ID | Name | Status | Networks |
@ -323,21 +630,27 @@ ii qemu-kvm 0.14.0~rc1+noroms-0ubuntu4~ppalucid1
</programlisting>
                        Based on the output, we run:
<literallayout class="monospaced">nova image-create 116 Image-116</literallayout>
                        The command will then perform the image
                        creation (by creating a qemu snapshot) and will
                        automatically upload the image to your
                        repository. <note>
                            <para> The image that will be created will
                                be flagged as "Private" (for glance:
                                is_public=False). Thus, the image will
                                be available only to the tenant.
</para>
</note>
</para>
</listitem>
<listitem>
<para>
<emphasis role="bold">Check image status</emphasis>
<emphasis role="bold">Check image
status</emphasis>
</para>
                    <para> After a while the image will turn from a
                        "SAVING" state to an "ACTIVE" one. Running
                        <command>nova image-list</command> will allow
                        you to check the progress:
                        <literallayout class="monospaced">nova image-list </literallayout><programlisting>
+----+---------------------------------------------+--------+
| ID | Name | Status |
@ -352,28 +665,36 @@ ii qemu-kvm 0.14.0~rc1+noroms-0ubuntu4~ppalucid1
</listitem>
<listitem>
<para>
<emphasis role="bold">Create an instance from the image</emphasis>
<emphasis role="bold">Create an instance from
the image</emphasis>
</para>
                    <para>You can now create an instance based on this
                        image as you normally do for other images:
                        <literallayout class="monospaced">nova boot --flavor 1 --image 20 New_server</literallayout>
</para>
</listitem>
<listitem>
<para>
<emphasis role="bold">
Troubleshooting
<emphasis role="bold"> Troubleshooting
</emphasis>
</para>
                    <para> Normally, it should not take more than 5
                        minutes to go from a "SAVING" to the "ACTIVE"
                        state. If this takes longer than five
                        minutes, here are several hints: </para>
                    <para>- The feature doesn't work while you have
                        attached a volume (via nova-volume) to the
                        instance. Thus, you should detach the volume
                        first, create the image, and re-attach the
                        volume.</para>
                    <para>- Make sure the version of qemu you are
                        using is not older than the 0.14 version. An
                        older version would produce an "unknown
                        option -s" error in nova-compute.log.</para>
                    <para>- Look into nova-api.log and
                        nova-compute.log for extra information.</para>
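                    <para>For example, assuming the default log
                        location under /var/log/nova, a quick way to
                        search for recent errors is:</para>
                    <screen><prompt>$</prompt> <userinput>grep -i error /var/log/nova/nova-compute.log | tail -20</userinput></screen>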
</listitem>
</itemizedlist>
</para>
</section>
</chapter>

@ -78,101 +78,7 @@ format="SVG" scale="60"/>
</listitem>
</itemizedlist>
<section xml:id="starting-images">
<title>Starting Images</title><para>Once you have an installation, you want to get images that you can use in your Compute cloud.
We've created a basic Ubuntu image for testing your installation. First you'll download
the image, then use "uec-publish-tarball" to publish it:</para>
<para><literallayout class="monospaced">
image="ubuntu1010-UEC-localuser-image.tar.gz"
wget http://c0179148.cdn1.cloudfiles.rackspacecloud.com/ubuntu1010-UEC-localuser-image.tar.gz
uec-publish-tarball $image [bucket-name] [hardware-arch]
</literallayout>
<itemizedlist>
<listitem>
<para>
<emphasis role="bold">Image</emphasis> : a tar.gz file that contains the
system, its kernel and ramdisk. </para>
</listitem>
<listitem>
<para>
<emphasis role="bold">Bucket</emphasis> : a local repository that contains
images. </para>
</listitem>
<listitem>
<para>
<emphasis role="bold">Hardware architecture</emphasis> : specify via "amd64"
or "i386" the image's architecture (32 or 64 bits). </para>
</listitem>
</itemizedlist>
</para>
<para>Here's an example of what this command looks like with data:</para>
<para><literallayout class="monospaced">uec-publish-tarball ubuntu1010-UEC-localuser-image.tar.gz dub-bucket amd64</literallayout></para>
<para>The command in return should output three references:<emphasis role="italic">
emi</emphasis>, <emphasis role="italic">eri</emphasis> and <emphasis role="italic"
>eki</emphasis>. You will next run nova image-list in order to obtain the ID of the
image you just uploaded.</para>
<para>Now you can schedule, launch and connect to the instance, which you do with the "nova"
command line. The ID of the image will be used with the <literallayout class="monospaced">nova boot</literallayout>command.</para>
<para>One thing to note here, once you publish the tarball, it has to untar before
you can launch an image from it. Using the 'nova list' command, and make sure the image
has it's status as "ACTIVE".</para>
<para><literallayout class="monospaced">nova image-list</literallayout></para>
<para>Depending on the image that you're using, you need a public key to connect to it. Some
images have built-in accounts already created. Images can be shared by many users, so it
is dangerous to put passwords into the images. Nova therefore supports injecting ssh
keys into instances before they are booted. This allows a user to login to the instances
that he or she creates securely. Generally the first thing that a user does when using
the system is create a keypair. </para>
<para>Keypairs provide secure authentication to your instances. As part of the first boot of
a virtual image, the private key of your keypair is added to roots authorized_keys
file. Nova generates a public and private key pair, and sends the private key to the
user. The public key is stored so that it can be injected into instances. </para>
<para>Keypairs are created through the api and you use them as a parameter when launching an
instance. They can be created on the command line using the following command :
<literallayout class="monospaced">nova keypair-add</literallayout>In order to list all the available options, you would run :<literallayout class="monospaced">nova help </literallayout>
Example usage:</para>
<literallayout class="monospaced">
nova keypair-add test > test.pem
chmod 600 test.pem
</literallayout>
<para>Now, you can run the instances:</para>
<literallayout class="monospaced">nova boot --image 1 --flavor 1 --key_name test my-first-server</literallayout>
<para>Here's a description of the parameters used above:</para>
<itemizedlist>
<listitem>
<para><emphasis role="bold">--flavor</emphasis> what type of image to create. You
can get all the flavors you have by running
<literallayout class="monospaced">nova flavor-list</literallayout></para>
</listitem>
<listitem>
<para>
<emphasis role="bold">-key_ name</emphasis> name of the key to inject in to the
image at launch. </para>
</listitem>
</itemizedlist>
<para> The instance will go from “BUILD” to “ACTIVE” in a short time, and you should
be able to connect via SSH using the 'ubuntu' account, with the password 'ubuntu':
(replace $ipaddress with the one you got from nova list): </para>
<para>
<literallayout class="monospaced">ssh ubuntu@$ipaddress</literallayout></para>
<para>The 'ubuntu' user is part of the sudoers group, so you can escalate to 'root'
via the following command:</para>
<para>
<literallayout class="monospaced">
sudo -i
</literallayout>
</para>
</section>
<section xml:id="understanding-the-compute-service-architecture">
<title>Understanding the Compute Service Architecture</title>
@ -226,88 +132,8 @@ chmod 600 test.pem
user's access key needs to be included in the request, and the request must be signed
with the secret key. Upon receipt of API requests, Compute will verify the signature and
execute commands on behalf of the user. </para>
<para>In order to begin using nova, you will need to create a
user with the Identity Service. </para>
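<para>For example, with the keystone command-line client (the name
    and password shown are placeholders; flag spellings have varied
    between releases, so check <command>keystone help user-create</command>
    first):</para>
<screen><prompt>$</prompt> <userinput>keystone user-create --name john --pass secretword</userinput></screen>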
</section>
<section xml:id="managing-the-cloud">
@ -315,7 +141,7 @@ chmod 600 test.pem
the nova-manage command, and the novaclient or the Euca2ools commands. </para>
<para>The nova-manage command may only be run by users with admin privileges. Both
novaclient and euca2ools can be used by all users, though specific commands may be
restricted by Role Based Access Control in the deprecated nova auth system or in the Identity Management service. </para>
<simplesect><title>Using the nova-manage command</title>
<para>The nova-manage command may be used to perform many essential functions for
administration and ongoing maintenance of nova, such as network creation.</para>
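<para>For example, a quick health check of the running nova services
    (the exact output format varies by release):</para>
<screen><prompt>$</prompt> <userinput>nova-manage service list</userinput></screen>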
@ -326,31 +152,6 @@ chmod 600 test.pem
<para>For example, to obtain a list of all projects: nova-manage project list</para>
<para>Run without arguments to see a list of available command categories: nova-manage</para>
<para>You can also run with a category argument such as user to see a list of all commands in that category: nova-manage user</para>
</simplesect><simplesect><title>Using the nova command-line tool</title>
<para>Installing the python-novaclient gives you a <code>nova</code> shell command that enables
@ -387,7 +188,15 @@ export OS_TENANT_NAME=coolu
export OS_AUTH_URL=http://hostname:5000/v2.0
export NOVA_VERSION=1.1
</programlisting>
</para></simplesect>
<simplesect><title>Using the euca2ools commands</title>
<para>For a command-line interface to EC2 API calls, use
the euca2ools command line tool. It is documented at
<link
xlink:href="http://open.eucalyptus.com/wiki/Euca2oolsGuide_v1.3"
>http://open.eucalyptus.com/wiki/Euca2oolsGuide_v1.3</link></para>
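<para>For example, the EC2-style equivalents of listing your
    servers and images (assuming EC2 credentials have been
    sourced):</para>
<screen><prompt>$</prompt> <userinput>euca-describe-instances</userinput>
<prompt>$</prompt> <userinput>euca-describe-images</userinput></screen>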
</simplesect>
</section>
<section xml:id="managing-volumes">
<title>Managing Volumes</title>
<para>Nova-volume is the service that allows you to give extra block level storage to your

@ -340,8 +340,8 @@ source ~/.bashrc </literallayout>
necessary to run these commands as the user.</para>
</note>
<screen><prompt>$</prompt> <userinput>nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0</userinput>
<prompt>$</prompt> <userinput>nova secgroup-add-rule default tcp 22 22 0.0.0.0/0</userinput></screen>
<para>Another common issue is you cannot ping or SSH to your instances
after issuing the <command>euca-authorize</command> commands. Something
@ -376,32 +376,15 @@ source ~/.bashrc </literallayout>
<command>nova-compute</command>.</para>
<para>For a multi-node install you only make changes to
<filename>nova.conf</filename> and copy it to additional
compute nodes. Ensure each <filename>nova.conf</filename> file
points to the correct IP addresses for the respective
services. </para>
<programlisting>--dhcpbridge_flagfile=/etc/nova/nova.conf
--dhcpbridge=/usr/bin/nova-dhcpbridge
--flat_network_bridge=br100
--logdir=/var/log/nova
--state_path=/var/lib/nova
--verbose
--sql_connection=mysql://root:nova@<replaceable>CC_ADDR</replaceable>/nova
--s3_host=<replaceable>CC_ADDR</replaceable>
--rabbit_host=<replaceable>CC_ADDR</replaceable>
--ec2_api=<replaceable>CC_ADDR</replaceable>
--ec2_url=http://<replaceable>CC_ADDR</replaceable>:8773/services/Cloud
--network_manager=nova.network.manager.FlatManager
--fixed_range=<replaceable>network/CIDR</replaceable>
--network_size=<replaceable>number of addresses</replaceable></programlisting>
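<para>As a sketch of that step, assuming two compute nodes
named <literal>compute-01</literal> and
<literal>compute-02</literal> (placeholder host names) and root
access on each, you might distribute the file and restart the
compute service like this:</para>
<screen><prompt>$</prompt> <userinput>scp /etc/nova/nova.conf compute-01:/etc/nova/nova.conf</userinput>
<prompt>$</prompt> <userinput>scp /etc/nova/nova.conf compute-02:/etc/nova/nova.conf</userinput>
<prompt>$</prompt> <userinput>ssh compute-01 sudo service nova-compute restart</userinput>
<prompt>$</prompt> <userinput>ssh compute-02 sudo service nova-compute restart</userinput></screen>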
<para>By default, Nova sets the bridge device based on the
setting in <literal>flat_network_bridge</literal>. Now you can
edit <filename>/etc/network/interfaces</filename> with the
following template, updated with your IP information.</para>
<programlisting># The loopback network interface
auto lo
iface lo inet loopback</programlisting>
<note>
<para>
The instructions below are for Ubuntu; however, setuptools can be installed on a wide variety of platforms: <link xlink:href="http://pypi.python.org/pypi/setuptools">http://pypi.python.org/pypi/setuptools</link>
</para>
</note>
<literallayout class="monospaced">
<screen language="bash">
apt-get install -y python-setuptools
sudo easy_install virtualenv
python tools/install_venv.py
</literallayout>
</screen>
<para>On RedHat systems (e.g. CentOS, Fedora), you will also need to install
python-devel:
<screen>yum install python-devel</screen></para>
</section></section></section>
<section xml:id="getting-started-with-the-vnc-proxy"><info><title>Getting Started with the VNC Proxy</title></info>
<para>
The VNC Proxy is an OpenStack component that allows users of Nova to
access their instances through a websocket-enabled browser (such as
Google Chrome 4.0). See <link xlink:href="http://caniuse.com/#search=websocket">http://caniuse.com/#search=websocket</link> for a reference list of supported web browsers.</para>
<para>
A VNC connection works as follows:
</para>
<itemizedlist>
<listitem>
<para>
User connects over an API and gets a URL like
http://ip:port/?token=xyz
</para>
</listitem>
<listitem>
<para>
User pastes URL in browser
</para>
</listitem>
<listitem>
<para>
Browser connects to the VNC Proxy through a websocket-enabled client
like noVNC
</para>
</listitem>
<listitem>
<para>
VNC Proxy authorizes the user's token, and maps the token to the host
and port of an instance's VNC server
</para>
</listitem>
<listitem>
<para>
VNC Proxy initiates a connection to the VNC server, and continues
proxying until the session ends
</para>
</listitem>
</itemizedlist>
<section xml:id="configuring-the-vnc-proxy"><info><title>Configuring the VNC Proxy</title></info>
<para>The nova-vncproxy requires a websocket-enabled HTML client to work properly. At this time,
the only tested client is a slightly modified fork of noVNC, which you can find at <link
xmlns:xlink="http://www.w3.org/1999/xlink"
xlink:href="https://github.com/openstack/noVNC"
>https://github.com/openstack/noVNC</link>
</para>
<para>The noVNC tool must be in the location specified by <literal>--vncproxy_wwwroot</literal>, which defaults to
<filename>/var/lib/nova/noVNC</filename>. nova-vncproxy will fail to launch until this code is properly installed. </para>
<para>
By default, nova-vncproxy binds to 0.0.0.0:6080. This can be
configured with the following flags (a combined example follows
the list):
</para>
<itemizedlist>
<listitem>
<para>
--vncproxy_port=[port]
</para>
</listitem>
<listitem>
<para>
--vncproxy_host=[host]
</para>
</listitem>
</itemizedlist>
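<para>For example, to keep the default bind address but move
the proxy to port 6081 (an arbitrary example port), the
corresponding lines in <filename>nova.conf</filename> would
be:</para>
<programlisting>--vncproxy_host=0.0.0.0
--vncproxy_port=6081</programlisting>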
</section>
<section xml:id="enabling-vnc-consoles-in-nova"><info><title>Enabling VNC Consoles in Nova</title></info>
<para>
At the moment, VNC is supported only when using libvirt.
To enable the VNC console, configure the following flags in the
nova.conf file (a sample snippet follows the list):
</para>
<itemizedlist>
<listitem>
<para>
--vnc_console_proxy_url=http://[proxy_host]:[proxy_port] -
proxy_port defaults to 6080. This URL must point to
nova-vncproxy
</para>
</listitem>
<listitem>
<para>
--[no]vnc_enabled - defaults to enabled. If this flag is
disabled, your instances will launch without VNC support.
</para>
</listitem>
</itemizedlist>
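<para>As a sample snippet, assuming the proxy runs on a host
reachable at 192.168.1.1 (a placeholder address) on the default
port, the <filename>nova.conf</filename> entries would
be:</para>
<programlisting>--vnc_enabled
--vnc_console_proxy_url=http://192.168.1.1:6080</programlisting>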
</section>
<section xml:id="getting-an-instances-vnc-console"><info><title>Getting an Instance's VNC Console</title></info>
<para>
You can access an instance's VNC console URL using the following
methods:
</para>
<itemizedlist>
<listitem>
<para>
Using the direct API, e.g.: 'stack --user=admin --project=admin
compute get_vnc_console instance_id=1'
</para>
</listitem>
<listitem>
<para>
Support for the Dashboard and the OpenStack API will be
forthcoming
</para>
</listitem>
</itemizedlist><para>
At the moment, VNC consoles are supported only through the web
browser, but more general VNC support is in the works.
</para>
</section>
</section>
</chapter>

<para>In this OpenStack Compute tutorial, we'll walk through the creation of an elastic,
scalable cloud running a WordPress installation on a few virtual machines.</para>
<para>The tutorial assumes you have obtained a TryStack
account at <link xlink:href="http://trystack.org"
>http://trystack.org</link>. TryStack provides a working
installation of OpenStack Compute, or you can install your
own using the installation guides. </para>
<para>We'll go through this tutorial in parts:</para>
<itemizedlist>
<listitem><para>Setting up a user on the TryStack cloud.</para></listitem>
<listitem><para>Getting images for your application servers.</para></listitem>
</itemizedlist>
<section xml:id="part-i-setting-up-cloud-infrastructure">
<title>Part I: Setting Up as a TryStack User</title>
<para>In this part, we'll get a TryStack account using our
Facebook login. Onward, brave cloud pioneers! </para>
<para>Go to the TryStack Facebook account at <link
xlink:href="https://www.facebook.com/groups/269238013145112/"
>https://www.facebook.com/groups/269238013145112/</link>
and request to join the group. </para>
<para>Once you've joined the group, go to the TryStack
dashboard and click <guilabel>Login using
Facebook</guilabel>. </para>
<para>Enter your Facebook login information to receive
your username and password that you can use with the
Compute API.</para>
<para>Next, install the python-novaclient and set up your
environment variables so you can use the client with
your username and password already entered. Here's
what works well on Mac
OS.<screen> <prompt>$</prompt> pip install -e git+https://github.com/openstack/python-novaclient.git#egg=python-novaclient </screen>
Next, create a file named openrc to contain your
TryStack credentials, such
as:<programlisting>export OS_USERNAME=joecool
export OS_PASSWORD=coolword
export OS_TENANT_NAME=coolu
export OS_AUTH_URL=http://trystack.org:5000/v2.0
export NOVA_VERSION=1.1</programlisting>
Lastly, source this file to load your credentials.
<screen>$ source openrc</screen></para>
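<para>To confirm that the credentials work, try any read-only
command, such as listing the available images:
<screen>$ nova image-list</screen></para>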
<para>You can always retrieve your username and password
from <link
xlink:href="https://trystack.org/dash/api_info/"
>https://trystack.org/dash/api_info/</link> after
logging in with Facebook. </para>
<para>Okay, you've created the basic scaffolding for
your cloud user so that you can get some images
and run instances on TryStack with your starter
set of StackDollars. You're rich, man! Now to Part
II!</para>
</section>
<section xml:id="part-ii-getting-virtual-machines">
<title>Part II: Starting Virtual Machines</title>
<para>Understanding what you can do with cloud computing
means you should have a grasp on the concept of
virtualization. With virtualization, you can run
operating systems and applications on virtual machines
instead of physical computers. To use a virtual
machine, you must have an image that contains all the
information about which operating system to run, the
user login and password, files stored on the system,
and so on. Fortunately, TryStack provides images for
your use. </para>
<para>Basically, run:</para>
<para>
<screen>$ nova image-list</screen>
</para>
<para>and look at the list of images that
returns, noting the ID
value.<programlisting>+----+--------------------------------------+--------+--------+
| ID | Name | Status | Server |
+----+--------------------------------------+--------+--------+
| 12 | natty-server-cloudimg-amd64-kernel | ACTIVE | |
| 13 | natty-server-cloudimg-amd64 | ACTIVE | |
| 14 | oneiric-server-cloudimg-amd64-kernel | ACTIVE | |
| 15 | oneiric-server-cloudimg-amd64 | ACTIVE | |
+----+--------------------------------------+--------+--------+</programlisting></para>
<para>Now get a list of the flavors you can launch:</para>
<para>
<screen>$ nova flavor-list</screen>
</para>
<para><programlisting>+----+-----------+-----------+------+-----------+------+-------+-------------+
| ID | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor |
+----+-----------+-----------+------+-----------+------+-------+-------------+
| 1 | m1.tiny | 512 | 0 | N/A | 0 | 1 | |
| 2 | m1.small | 2048 | 20 | N/A | 0 | 1 | |
| 3 | m1.medium | 4096 | 40 | N/A | 0 | 2 | |
| 4 | m1.large | 8192 | 80 | N/A | 0 | 4 | |
| 5 | m1.xlarge | 16384 | 160 | N/A | 0 | 8 | |
+----+-----------+-----------+------+-----------+------+-------+-------------+</programlisting>Create
a keypair so that you can launch the image. Run this command in
the directory where you will run the <command>nova
boot</command> command later, because the key is saved to the
current directory.
<screen>$ nova keypair-add mykeypair > mykeypair.pem</screen></para>
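<para>SSH refuses a private key file that other users can read,
so restrict the permissions on the new file before you use
it:<screen>$ chmod 600 mykeypair.pem</screen></para>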
<para>Create a security group that enables public IP access
for the web server that will run WordPress for you. You
can also enable port 22 for
SSH.<screen>$ nova secgroup-create openpub "Open for public"
$ nova secgroup-add-rule openpub icmp -1 -1 0.0.0.0/0
$ nova secgroup-add-rule openpub tcp 22 22 0.0.0.0/0 </screen></para>
<para>Next, with the ID of the image you want to
launch and the ID of the flavor you want to use,
use your credentials to start up the instance with the
identifier you got by looking at the image
list.</para>
<screen>$ nova boot --image 15 --flavor 2 --key_name mykeypair --security_groups openpub testtutorial</screen>
<para><programlisting>+--------------+--------------------------------------+
| Property | Value |
+--------------+--------------------------------------+
| accessIPv4 | |
| accessIPv6 | |
| adminPass | StuacCpAr7evnz5Q |
| config_drive | |
| created | 2012-03-21T20:31:40Z |
| flavor | m1.small |
| hostId | |
| id | 1426 |
| image | oneiric-server-cloudimg-amd64 |
| key_name | testkey2 |
| metadata | {} |
| name | testtut |
| progress | 0 |
| status | BUILD |
| tenant_id | 296 |
| updated | 2012-03-21T20:31:40Z |
| user_id | facebook521113267 |
| uuid | be9f80e8-7b20-49e8-83cf-fa059a36c9f8 |
+--------------+--------------------------------------+</programlisting>Now
you can look at the state of the running instances by
using nova list.
<screen>$ nova list</screen><programlisting>+------+----------------+--------+----------------------+
| ID | Name | Status | Networks |
+------+----------------+--------+----------------------+
| 1426 | testtut | ACTIVE | internet=8.22.27.251 |
+------+----------------+--------+----------------------+</programlisting></para>
<para>The instance status goes from BUILD to ACTIVE in a
short time, after which you should be able to connect via SSH.
Note the IP address reported by <command>nova list</command> so
that you can connect to the instance once it is running.</para>
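<para>With the ICMP rule from your security group in place, a
quick reachability check against the address shown by
<command>nova list</command> might look like:</para>
<screen>$ ping -c 4 8.22.27.251</screen>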
</section>
<section xml:id="installing-needed-software-for-web-scale">
<title>Part III: Installing the Needed Software for the Web-Scale Scenario</title>
<para>Basically, launch a terminal window from any computer, and enter: </para>
<literallayout class="monospaced">ssh -i mykey ubuntu@10.127.35.119</literallayout>
<screen>ssh -i mykeypair ubuntu@10.127.35.119</screen>
<para>On this particular image, the 'ubuntu' user has been set up as part of the sudoers
group, so you can escalate to 'root' via the following command:</para>
<literallayout class="monospaced">sudo -i</literallayout>
<literallayout/>
<screen>sudo -i</screen>
<simplesect>
<title>On the first VM, install WordPress</title>
<para>Now, you can install WordPress. Create and then switch to a blog
directory:</para>
<literallayout class="monospaced">mkdir blog
cd blog</literallayout>
<screen>mkdir blog
cd blog</screen>
<para>Download WordPress directly by using wget:</para>
<screen>wget http://wordpress.org/latest.tar.gz</screen>
<para>Then extract the package using: </para>
<screen>tar -xzvf latest.tar.gz</screen>
<para>The WordPress package extracts into a folder called wordpress in the same
directory where you downloaded latest.tar.gz. </para>
<para>Next, enter "exit" and disconnect from this SSH session.</para>
<para/>
</simplesect>
<simplesect>
<title>On a second VM, install MySQL</title>
<para>SSH to a second virtual machine and install MySQL. Then follow these
instructions to install the WordPress database using the MySQL Client from a
command line: <link xlink:href="http://codex.wordpress.org/Installing_WordPress#Using_the_MySQL_Client">Using the MySQL Client - Wordpress Codex.</link>
</para>
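<para>As a minimal sketch of those instructions, the database
setup comes down to statements like the following, where the
database name, user name, and password are examples you should
change:</para>
<programlisting>mysql> CREATE DATABASE wordpress;
mysql> GRANT ALL PRIVILEGES ON wordpress.* TO "wordpressuser"@"localhost" IDENTIFIED BY "password";
mysql> FLUSH PRIVILEGES;</programlisting>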
<para/>
</simplesect>
<simplesect><title>On a third VM, install Memcache</title><para>Memcache makes Wordpress database reads and writes more efficient, so your virtual servers
can go to work for you in a scalable manner. SSH to a third virtual machine and
install Memcache:</para>
<para>
<literallayout class="monospaced">apt-get install memcached
</literallayout>
<screen>apt-get install memcached</screen>
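By default, memcached listens on port 11211. You can verify
that it is running with:
<screen>netstat -lnt | grep 11211</screen>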
</para></simplesect><simplesect><title>Configure the Wordpress Memcache plugin</title><para>From a web browser, point to the IP address of your Wordpress server. Download and install the Memcache Plugin. Enter the IP address of your Memcache server.</para></simplesect>
</section><section xml:id="running-a-blog-in-the-cloud">
<title>Running a Blog in the Cloud</title><para>That's it! You're now running your blog on a cloud server in OpenStack Compute, and you've scaled it horizontally using additional virtual images to run the database and Memcache. Now if your blog gets a big boost of comments, you'll be ready for the extra reads-and-writes to the database. </para></section>