diff --git a/doc/admin-guide-cloud/ch_compute.xml b/doc/admin-guide-cloud/ch_compute.xml
index 87237a9ce3..a3443d5d12 100644
--- a/doc/admin-guide-cloud/ch_compute.xml
+++ b/doc/admin-guide-cloud/ch_compute.xml
@@ -59,8 +59,10 @@
Xen, Citrix XenServer and Xen Cloud Platform (XCP)
-
- Bare Metal - Provisions physical hardware through pluggable sub-drivers.
+ Bare Metal - Provisions physical hardware through pluggable
+ sub-drivers.
diff --git a/doc/common/section_baremetal.xml b/doc/common/section_baremetal.xml
index eb2a1c3711..1895f401fa 100644
--- a/doc/common/section_baremetal.xml
+++ b/doc/common/section_baremetal.xml
@@ -6,21 +6,18 @@ xml:id="baremetal">
Bare Metal Driver
-
- The baremetal driver is a hypervisor driver for OpenStack Nova
- Compute. Within the OpenStack framework, it has the same role
- as the drivers for other hypervisors (libvirt, xen, etc), and
- yet it is presently unique in that the hardware is not
- virtualized - there is no hypervisor between the tenants and
- the physical hardware. It exposes hardware via OpenStack's
- API, using pluggable sub-drivers to deliver machine imaging
- (PXE) and power control (IPMI). With this, provisioning and
- management of physical hardware is accomplished using common
- cloud APIs and tools, such as Heat or salt-cloud. However, due
- to this unique situation, using the baremetal driver requires
- some additional preparation of its environment, the details of
- which are beyond the scope of this guide.
-
+ The baremetal driver is a hypervisor driver for OpenStack Nova
+ Compute. Within the OpenStack framework, it has the same role as the
+ drivers for other hypervisors (libvirt, xen, etc.), and yet it is
+ presently unique in that the hardware is not virtualized - there is no
+ hypervisor between the tenants and the physical hardware. It exposes
+ hardware via OpenStack's API, using pluggable sub-drivers to deliver
+ machine imaging (PXE) and power control (IPMI). With this, provisioning
+ and management of physical hardware are accomplished using common cloud
+ APIs and tools, such as Heat or salt-cloud. However, due to this unique
+ situation, using the baremetal driver requires some additional
+ preparation of its environment, the details of which are beyond the
+ scope of this guide.
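+ For reference, a minimal nova.conf sketch might select the driver and
+ its PXE and IPMI sub-drivers; the option names below assume the
+ Grizzly-era baremetal driver, so verify them against your release:
+ [DEFAULT]
+ compute_driver = nova.virt.baremetal.driver.BareMetalDriver
+ scheduler_host_manager = nova.scheduler.baremetal_host_manager.BaremetalHostManager
+ [baremetal]
+ # PXE sub-driver for machine imaging
+ driver = nova.virt.baremetal.pxe.PXE
+ # IPMI sub-driver for power control
+ power_manager = nova.virt.baremetal.ipmi.IPMIPowerManager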
Some OpenStack Compute features are not implemented by
the baremetal hypervisor driver. See the
Baremetal driver. Also, some additional steps will be
required, such as building the baremetal deploy ramdisk. See
the
+ xlink:href="https://wiki.openstack.org/wiki/Baremetal">
main wiki page for details and implementation suggestions.
diff --git a/doc/common/section_cli_install.xml b/doc/common/section_cli_install.xml
index b1c6fee267..5450b3d140 100644
--- a/doc/common/section_cli_install.xml
+++ b/doc/common/section_cli_install.xml
@@ -15,6 +15,13 @@
Install the OpenStack command-line clients
Install the prerequisite software and the Python package for
each OpenStack client.
+
+ For each command, replace
+ PROJECT
+ with the lowercase name of the client to
+ install, such as nova.
+ Repeat for each client.
+
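+ For example, with pip the substitution looks like this for the nova
+ client (a minimal illustration; the package follows the
+ python-PROJECTclient naming pattern):
+ $ pip install python-novaclient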
Prerequisite software
@@ -117,13 +124,6 @@
the clients:
# zypper install python-PROJECTclient
-
- For each command, replace
- PROJECT
- with the lower case name of the client to
- install, such as nova.
- Repeat for each client.
-
diff --git a/doc/common/section_nova_cli_baremetal.xml b/doc/common/section_nova_cli_baremetal.xml
index 3937574067..6992d7b7ad 100644
--- a/doc/common/section_nova_cli_baremetal.xml
+++ b/doc/common/section_nova_cli_baremetal.xml
@@ -1,30 +1,33 @@
- Manage bare metal nodes
- If you use the bare metal driver, you must create and add a
- network interface to a bare metal node. Then, you can launch an
- instance from a bare metal image.
- You can list and delete bare metal nodes. When you delete a
- node, any associated network interfaces are removed. You can list
- and remove network interfaces that are associated with a bare
- metal node.
-
- Commands
+ xmlns:xi="http://www.w3.org/2001/XInclude"
+ xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
+ Manage bare metal nodes
+ The bare metal driver for OpenStack Compute manages provisioning of
+ physical hardware using common cloud APIs and tools such as Orchestration
+ (Heat). Use cases for this driver include single-tenant clouds, such as a
+ high-performance computing cluster, and deploying OpenStack itself.
+ Development efforts are focused on moving the driver out of the Compute code
+ base in the Icehouse release. If you use the bare metal driver, you must
+ create and add a network interface to a bare metal node. Then, you can
+ launch an instance from a bare metal image.
+ You can list and delete bare metal nodes. When you delete a node, any
+ associated network interfaces are removed. You can list and remove network
+ interfaces that are associated with a bare metal node.
+
+ Commands
+ baremetal-interface-add
+ Adds a network interface to a bare metal node.
+ baremetal-interface-list
- Lists network interfaces associated with a bare metal
- node.
+ Lists network interfaces associated with a bare metal node.
+ baremetal-interface-remove
- Removes a network interface from a bare metal
- node.
+ Removes a network interface from a bare metal node.
@@ -34,8 +37,7 @@
baremetal-node-delete
- Removes a bare metal node and any associated
- interfaces.
+ Removes a bare metal node and any associated interfaces.
+ baremetal-node-list
@@ -44,10 +46,14 @@
baremetal-node-show
Shows information about a bare metal node.
-
- To manage bare metal nodes
- Create a bare metal node:
- $ nova baremetal-node-create --pm_address=1.2.3.4 --pm_user=ipmi --pm_password=ipmi $(hostname -f) 1 512 10 aa:bb:cc:dd:ee:ff
- +------------------+-------------------+
+
+
+
+ To manage bare metal nodes
+
+ Create a bare metal node:
+ $ nova baremetal-node-create --pm_address=1.2.3.4 --pm_user=ipmi --pm_password=ipmi $(hostname -f) 1 512 10 aa:bb:cc:dd:ee:ff
+ +------------------+-------------------+
| Property | Value |
+------------------+-------------------+
| instance_uuid | None |
@@ -64,9 +70,10 @@
| terminal_port | None |
+------------------+-------------------+
- Add a network interface to the node:
- $ nova baremetal-interface-add 1 aa:bb:cc:dd:ee:ff
-+-------------+-------------------+
+
+ Add a network interface to the node:
+ $ nova baremetal-interface-add 1 aa:bb:cc:dd:ee:ff
+ +-------------+-------------------+
| Property | Value |
+-------------+-------------------+
| datapath_id | 0 |
@@ -74,8 +81,9 @@
| port_no | 0 |
| address | aa:bb:cc:dd:ee:ff |
+-------------+-------------------+
-
- Launch an instance from a bare metal image:
+
+
+ Launch an instance from a bare metal image:
+ $ nova boot --image my-baremetal-image --flavor my-baremetal-flavor test
+ +-----------------------------+--------------------------------------+
| Property | Value |
@@ -85,10 +93,11 @@
... wait for instance to become active ...
- You can list bare metal nodes and interfaces, as follows:
+
+ You can list bare metal nodes and interfaces, as follows:
+ $ nova baremetal-node-list
-When a node is in use, its status includes the UUID of the
- instance that runs on it:
+ When a node is in use, its status includes the UUID of the instance
+ that runs on it:
+ +----+--------+------+-----------+---------+-------------------
+------+------------+-------------+-------------+---------------+
| ID | Host | CPUs | Memory_MB | Disk_GB | MAC Address
@@ -99,10 +108,10 @@
| None | 1.2.3.4 | ipmi | | None |
+----+--------+------+-----------+---------+-------------------
+------+------------+-------------+-------------+---------------+
-
-
-Show details for a bare metal node:
- $ nova baremetal-node-show 1
+
+
+ Show details for a bare metal node:
+ $ nova baremetal-node-show 1
+ +------------------+--------------------------------------+
| Property | Value |
+------------------+--------------------------------------+
@@ -119,5 +128,14 @@
| id | 1 |
| pm_user | ipmi |
| terminal_port | None |
-+------------------+--------------------------------------+
++------------------+--------------------------------------+
+
+
+
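+ When a node is no longer needed, remove it by ID; any network
+ interfaces associated with it are removed as well (a minimal
+ illustration using the node created above):
+ $ nova baremetal-node-delete 1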
+ Set the --availability_zone parameter to
+ specify the zone or node on which to start the server. Separate the
+ zone from the host name with a colon, and the host name from the
+ node with a comma. For example:
+ $ nova boot --availability_zone=zone:host,node
+ Specifying "host" is optional for the --availability_zone
+ parameter; "zone:,node" also works.
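+ For example, to skip the host and place the server on a specific
+ node (a sketch with hypothetical zone and node names):
+ $ nova boot --image my-baremetal-image --flavor my-baremetal-flavor --availability_zone=nova:,node-1 test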
+
diff --git a/doc/config-reference/compute/section_compute-hypervisors.xml b/doc/config-reference/compute/section_compute-hypervisors.xml
index 0e1df48e59..4121914b3b 100644
--- a/doc/config-reference/compute/section_compute-hypervisors.xml
+++ b/doc/config-reference/compute/section_compute-hypervisors.xml
@@ -80,7 +80,7 @@
Bare Metal - Not a hypervisor in the
traditional sense, this driver provisions physical
hardware through pluggable sub-drivers (for example, PXE for image
diff --git a/doc/user-guide-admin/ch_cli.xml b/doc/user-guide-admin/ch_cli.xml
index 9979d8eb69..b350375620 100644
--- a/doc/user-guide-admin/ch_cli.xml
+++ b/doc/user-guide-admin/ch_cli.xml
@@ -34,6 +34,8 @@
+
+
diff --git a/doc/user-guide-admin/section_nova_specify_host.xml b/doc/user-guide-admin/section_nova_specify_host.xml
new file mode 100644
index 0000000000..ed31344219
--- /dev/null
+++ b/doc/user-guide-admin/section_nova_specify_host.xml
@@ -0,0 +1,34 @@
+
+
+ Select a specific host to boot instances on
+ If you have the appropriate permissions, you can select the specific host where the
+ instance will be launched. This is done using the --availability_zone
+ zone:host
+ argument to the nova boot command. For example:
+
+ $ nova boot --image <uuid> --flavor m1.tiny --key_name test --availability-zone nova:server2
+
+ Starting with the Grizzly release, you can control which roles are permitted to boot
+ an instance on a specific host through the create:forced_host setting
+ in policy.json. By default, only the admin
+ role has this setting enabled.
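+ As a sketch, the corresponding rule in Nova's
+ policy.json appears under the
+ compute:create:forced_host key; granting it to a
+ hypothetical "ops" role in addition to admin could look like:
+ "compute:create:forced_host": "is_admin:True or role:ops",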
+ You can view the list of valid compute hosts by using the nova
+ hypervisor-list command, for
+ example:
+ $ nova hypervisor-list
++----+---------------------+
+| ID | Hypervisor hostname |
++----+---------------------+
+| 1 | server2 |
+| 2 | server3 |
+| 3 | server4 |
++----+---------------------+
+
+ The --availability_zone
+ zone:host
+ flag replaced the force_hosts scheduler hint for selecting a
+ specific host, starting with the Folsom release.
+
+