Merge "Fixes to common section docs"
commit dbed7abe79
@ -192,8 +192,7 @@ provides a service catalog within a particular OpenStack cloud.</td>
</listitem>
<listitem>
<para>Identity Service functions. Each has a pluggable backend that allows different ways to use
the particular service. Most support standard backends like LDAP or SQL.</para>
</listitem>
</itemizedlist>
<para>The Identity Service is mostly used to customize authentication
@ -4,8 +4,10 @@
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0" xml:id="troubleshooting-openstack-compute">
<title>Troubleshooting OpenStack Compute</title>
<para>Common problems for Compute typically involve misconfigured networking or credentials that are not sourced properly in the environment. Also, most flat networking configurations do not enable ping or ssh from a compute node to the instances running on that node. Another common problem is trying to run 32-bit images on a 64-bit compute node. This section offers more information about how to troubleshoot Compute.</para>
<section xml:id="log-files-for-openstack-compute"><title>Log files for OpenStack Compute</title>
<para>Log files are stored in <filename>/var/log/nova</filename> and
there is a log file for each service, for example
<filename>nova-compute.log</filename>. You can format the log
strings using options for the nova.log module. The options used to
set format strings are: <literal>logging_context_format_string</literal> and
<literal>logging_default_format_string</literal>. If the log level is set to debug, you
@ -14,8 +16,8 @@
the formatter see:
<link xlink:href="http://docs.python.org/library/logging.html#formatter">http://docs.python.org/library/logging.html#formatter</link>.</para>
<para>You have two options for logging for OpenStack Compute based on configuration
settings. In <filename>nova.conf</filename>, include the <literal>logfile</literal> option to enable logging. Alternatively
you can set <literal>use_syslog=1</literal>, and then the nova daemon logs to syslog.</para></section>
<section xml:id="common-errors-and-fixes-for-openstack-compute">
<title>Common Errors and Fixes for OpenStack Compute</title>
<para>The ask.openstack.org site offers a place to ask and
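The two logging options can be sketched as a <filename>nova.conf</filename> fragment. This is illustrative only; the file path and values here are placeholders, not recommendations:

```ini
# nova.conf -- illustrative fragment, not a complete configuration
# Option 1: log to a file
logfile = /var/log/nova/nova-compute.log
# Option 2: log to syslog instead of a file
# use_syslog = 1
# Format strings for the nova.log module may be customized via
# logging_context_format_string and logging_default_format_string
```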
@ -28,35 +30,38 @@
<para>Credential errors, 401, 403 forbidden errors</para>
<para>A 403 forbidden error is caused by missing credentials.
Through current installation methods, there are basically
two ways to get the <filename>novarc</filename> file. The manual method
requires getting it from within a project zipfile, and the
scripted method just generates <filename>novarc</filename> out of the project
zip file and sources it for you. If you use the manual
method through a zip file, before sourcing <filename>novarc</filename>
be sure to save any credentials that were created previously, as they
can be overridden.
</para>
<para>When you run <systemitem class="service">nova-api</systemitem> the
first time, it generates the certificate authority information,
including <filename>openssl.cnf</filename>. If the CA components are
started prior to this, you may not be able to create your zip file.
Restart the services, then once your CA information is available,
you should be able to create your zip file.</para>
<para>You may also need to check your HTTP proxy settings to see if
they are causing problems with the <filename>novarc</filename>
creation.</para>
<para>Instance errors</para>
<para>Sometimes a particular instance shows "pending" or you
cannot SSH to it. Sometimes the image itself is the
problem. For example, when using flat manager networking,
you do not have a dhcp server, and certain images
don't support interface injection so you cannot connect
to them. The fix for this type of problem is to use an
image that does support this method, such as Ubuntu,
which should obtain an IP address correctly
with FlatManager network settings. To troubleshoot other
possible problems with an instance, such as one that stays
in a spawning state, first check the directory for the particular
instance under <filename>/var/lib/nova/instances</filename>
on the <systemitem class="service">nova-compute</systemitem>
host and make sure it has the following files:</para>
<itemizedlist>
<listitem>
<para>libvirt.xml</para>
@ -74,23 +79,25 @@
<para>ramdisk</para>
</listitem>
<listitem>
<para>console.log (Once the instance actually starts you should
see a <filename>console.log</filename>.)</para>
</listitem>
</itemizedlist>
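The file check above can be scripted. The sketch below flags files that are missing or zero-length; it runs against a temporary sample directory standing in for a real instance directory under /var/lib/nova/instances, which is an assumption about your layout:

```shell
# Sketch: report required instance files that are missing or empty.
# A temp dir stands in for /var/lib/nova/instances/<instance-id>.
dir=$(mktemp -d)
echo x > "$dir/libvirt.xml"
echo x > "$dir/console.log"   # kernel and ramdisk deliberately absent
missing=""
for f in libvirt.xml kernel ramdisk console.log; do
  # -s is true only for files that exist and have a size greater than zero
  [ -s "$dir/$f" ] || missing="$missing $f"
done
echo "missing or empty:$missing"
rm -r "$dir"
```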
<para>Check the file sizes to see if they are reasonable. If
any are missing/zero/very small then <systemitem class="service">nova-compute</systemitem> has
somehow not completed download of the images from
the Image service.</para>
<para>Also check <filename>nova-compute.log</filename> for exceptions.
Sometimes they don't show up in the console output.</para>
<para>Next, check the log file for the instance in the directory
<filename>/var/log/libvirt/qemu</filename>
to see if it exists and has any useful error messages
in it.</para>

<para>Finally, from the directory for the instance under
<filename>/var/lib/nova/instances</filename>, try
<screen><prompt>#</prompt> <userinput>virsh create libvirt.xml</userinput></screen> and see if you
get an error when running this.</para>
</section>
<section xml:id="reset-state">
<title>Manually reset the state of an instance</title>
@ -4,9 +4,17 @@
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0" xml:id="ch_support-and-troubleshooting">

<title>Support</title>
<para>Online resources aid in supporting OpenStack and there
are many community members willing and able to answer
questions and help track down suspected bugs. We are constantly
improving and adding to the main features of OpenStack,
but if you have any problems, do not hesitate to ask.
Here are some ideas for supporting OpenStack and
troubleshooting your existing installations.</para>
<section xml:id="community-support">
<title>Community Support</title>
<para>Here are some places you can locate others who want to
help.</para>
<simplesect>
<title>ask.openstack.org</title>
<para>During setup or testing, you may have questions
@ -88,6 +96,12 @@
<listitem><para>OpenStack Network Connectivity: <link
xlink:href="https://bugs.launchpad.net/neutron"
>https://bugs.launchpad.net/neutron</link></para></listitem>
<listitem><para>OpenStack Orchestration: <link
xlink:href="https://bugs.launchpad.net/heat"
>https://bugs.launchpad.net/heat</link></para></listitem>
<listitem><para>OpenStack Metering: <link
xlink:href="https://bugs.launchpad.net/ceilometer"
>https://bugs.launchpad.net/ceilometer</link></para></listitem>
</itemizedlist>

</simplesect>
@ -10,4 +10,5 @@
Since OpenStack Object Storage is a different way of thinking when it comes to storage, take a few
moments to review the key concepts in the developer documentation at
<link xlink:href="http://docs.openstack.org/developer/swift/">docs.openstack.org/developer/swift/</link>.</para>
<!-- TODO Is this really the best we can do?-->
</section>
@ -14,7 +14,8 @@ xml:id="baremetal">
hardware via OpenStack's API, using pluggable sub-drivers to deliver
machine imaging (PXE) and power control (IPMI). With this, provisioning
and management of physical hardware is accomplished using common cloud
APIs and tools, such as OpenStack Orchestration or salt-cloud.
However, due to this unique
situation, using the baremetal driver requires some additional
preparation of its environment, the details of which are beyond the
scope of this guide.</para>
@ -14,18 +14,5 @@
Quotas are currently enforced at the tenant (or project) level,
rather than by user.
</para>
<xi:include href="section_nova_cli_quotas.xml"/>
</section>
@ -23,7 +23,4 @@
</note></para>
<para>Options for configuring SPICE as the console for OpenStack Compute can be found below.</para>
<xi:include href="../common/tables/nova-spice.xml"/>
</section>
@ -4,6 +4,11 @@
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="configuring-compute-API">
<title>Configuring the Compute API</title>
<para>The Compute API, run by the
<systemitem class="service">nova-api</systemitem>
daemon, is the component of OpenStack Compute that
receives and responds to user requests, whether they
be direct API calls, or via the CLI tools or dashboard.</para>
<simplesect>
<title>Configuring Compute API password handling</title>
<para>The OpenStack Compute API allows the user to specify an
|
<literal>enable_instance_password</literal> can be used to
disable the return of the admin password for installations
that don't support setting instance passwords.</para>
</simplesect>
<simplesect>
<title>Configuring Compute API Rate Limiting</title>
@ -174,4 +159,8 @@ paste.filter_factory = nova.api.openstack.compute.limits:RateLimitingMiddleware.
limits =(POST, "*", .*, 10, MINUTE);(POST, "*/servers", ^/servers, 50, DAY);(PUT, "*", .*, 10, MINUTE);(GET, "*changes-since*", .*changes-since.*, 3, MINUTE);(DELETE, "*", .*, 100, MINUTE)
</programlisting>
</simplesect>
<simplesect>
<title>List of configuration options for Compute API</title>
<xi:include href="tables/nova-api.xml"/>
</simplesect>
</section>
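Each semicolon-separated rule in the limits string above has five comma-separated fields. As a sketch (the field names below are our reading of the format, not official terminology), one rule can be split like this:

```shell
# One rule from the default limits set; assumed field order:
#   HTTP verb, human-readable URI, URI regex, request count, time interval
rule='POST, "*", .*, 10, MINUTE'
# IFS of comma-plus-space splits the rule into its five fields
IFS=', ' read -r verb uri regex count unit <<EOF
$rule
EOF
echo "verb=$verb limit=$count per $unit"
```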
@ -9,7 +9,7 @@
to Compute nodes for VMs.</para>
</listitem>
<listitem>
<para>In the Grizzly release, Fibre Channel only supports the KVM hypervisor.</para>
</listitem>
<listitem>
<para>There is no automatic zoning support in Nova or Cinder for Fibre Channel.
@ -7,13 +7,12 @@ xml:id="host-aggregates">
<simplesect>
<title>Overview</title>
<para>Host aggregates are a mechanism to further partition an availability zone; while availability
zones are visible to users, host aggregates are only visible to administrators.
Host aggregates provide a mechanism to allow administrators to assign key-value pairs to
groups of machines. Each node can have multiple aggregates, each aggregate can have
multiple key-value pairs, and the same key-value pair can be assigned to multiple
aggregates. This information can be used in the scheduler to enable advanced scheduling,
to set up hypervisor resource pools or to define logical groups for migration.</para>
</simplesect>
<simplesect>
<title>Command-line interface</title>
@ -155,12 +155,7 @@ kvm-amd</programlisting></para>
<para>In libvirt, the CPU is specified by providing a base CPU model name (which is a
shorthand for a set of feature flags), a set of additional feature flags, and the
topology (sockets/cores/threads). The libvirt KVM driver provides a number of standard
CPU model names. These models are defined in the file
<filename>/usr/share/libvirt/cpu_map.xml</filename>. Check this file to determine
which models are supported by your local installation.</para>
<para>There are two Compute configuration options that determine the type of CPU model
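One way to check the file is to extract the model name attributes from it. The sketch below runs against a tiny sample document in the same shape, standing in for the real /usr/share/libvirt/cpu_map.xml (whose full structure is richer than shown here):

```shell
# Sketch: list CPU model names from a cpu_map.xml-style file.
# A small sample file stands in for /usr/share/libvirt/cpu_map.xml.
f=$(mktemp)
cat > "$f" <<'EOF'
<cpus>
  <model name='pentium'/>
  <model name='Nehalem'/>
</cpus>
EOF
# Pull out each name='...' attribute value
models=$(grep -o "model name='[^']*'" "$f" | cut -d"'" -f2)
echo "$models"
rm "$f"
```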
@ -1,271 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="installing-moosefs-as-backend">
<title>Installing MooseFS as shared storage for the instances directory</title>
<para>In the previous section we presented a convenient way to deploy shared storage using
NFS. For better transaction performance, you could deploy MooseFS instead.</para>
<para>MooseFS (Moose File System) is a shared file system; it implements the same rough
concepts as shared storage solutions such as Ceph, Lustre, or even GlusterFS.</para>
<para>
<emphasis role="bold">Main concepts</emphasis>
<itemizedlist>
<listitem>
<para>A metadata server (MDS), also called the master server, which manages file
repartition, access, and the namespace.</para>
</listitem>
<listitem>
<para>A metalogger server (MLS), which backs up the MDS logs, including objects, chunks,
sessions, and object metadata.</para>
</listitem>
<listitem>
<para>A chunk server (CSS), which stores the data as chunks
and replicates them across the chunkservers.</para>
</listitem>
<listitem>
<para>A client, which talks to the MDS and interacts with the CSS. MooseFS clients manage
the MooseFS file system using FUSE.</para>
</listitem>
</itemizedlist> For more information, see the <link
xlink:href="http://www.moosefs.org/">official project website</link>.
</para>
<para>Our setup is laid out the following way:</para>
<para>
<itemizedlist>
<listitem>
<para>Two compute nodes running both the MooseFS chunkserver and client services.</para>
</listitem>
<listitem>
<para>One MooseFS master server, running the metadata service.</para>
</listitem>
<listitem>
<para>One MooseFS slave server, running the metalogger service.</para>
</listitem>
</itemizedlist> For this particular walkthrough, we will use the following network schema:</para>
<para>
<itemizedlist>
<listitem>
<para><literal>10.0.10.15</literal> for the MooseFS metadata server admin IP</para>
</listitem>
<listitem>
<para><literal>10.0.10.16</literal> for the MooseFS metadata server main IP</para>
</listitem>
<listitem>
<para><literal>10.0.10.17</literal> for the MooseFS metalogger server admin IP</para>
</listitem>
<listitem>
<para><literal>10.0.10.18</literal> for the MooseFS metalogger server main IP</para>
</listitem>
<listitem>
<para><literal>10.0.10.19</literal> for the MooseFS first chunkserver IP</para>
</listitem>
<listitem>
<para><literal>10.0.10.20</literal> for the MooseFS second chunkserver IP</para>
</listitem>
</itemizedlist>
<figure xml:id="moose-FS-deployment">
<title>MooseFS deployment for OpenStack</title>
<mediaobject>
<imageobject>
<imagedata fileref="figures/moosefs/SCH_5008_V00_NUAC-MooseFS_OpenStack.png" scale="60"
/>
</imageobject>
</mediaobject>
</figure>
</para>
<section xml:id="installing-moosefs-metadata-metalogger-servers">
<title>Installing the MooseFS metadata and metalogger servers</title>
<para>Both components can run anywhere, as long as the MooseFS chunkservers can reach
the MooseFS master server.</para>
<para>In our deployment, both the MooseFS master and slave run their services inside a virtual
machine; you just need to make sure to allocate enough memory to the MooseFS metadata
server, as all the metadata is stored in RAM while the service runs.</para>
<para>
<orderedlist>
<listitem>
<para><emphasis role="bold">Hosts entry configuration</emphasis></para>
<para>In <filename>/etc/hosts</filename>, add the following entry:
<programlisting>10.0.10.16 mfsmaster</programlisting></para>
</listitem>
<listitem>
<para><emphasis role="bold">Required packages</emphasis></para>
<para>Install the required packages by running the following commands:
<screen os="ubuntu"><prompt>$</prompt> <userinput>apt-get install zlib1g-dev python pkg-config</userinput></screen>
<screen os="rhel;fedora;centos"><prompt>$</prompt> <userinput>yum install make automake gcc gcc-c++ kernel-devel python26 pkg-config</userinput></screen>
</para>
</listitem>
<listitem>
<para><emphasis role="bold">User and group creation</emphasis></para>
<para>Create the adequate user and group:
<screen><prompt>$</prompt> <userinput>groupadd mfs &amp;&amp; useradd -g mfs mfs</userinput></screen>
</para>
</listitem>
<listitem>
<para><emphasis role="bold">Download the sources</emphasis></para>
<para>Go to the <link xlink:href="http://www.moosefs.org/download.html">MooseFS download page</link>
and fill in the download form in order to obtain the URL for the package.
</para>
<para/>
</listitem>
<listitem>
<para><emphasis role="bold">Extract and configure the sources</emphasis></para>
<para>Extract the package and compile it:
<screen><prompt>$</prompt> <userinput>tar -zxvf mfs-1.6.25.tar.gz &amp;&amp; cd mfs-1.6.25</userinput></screen>
For the MooseFS master server installation, we exclude the
mfschunkserver and mfsmount components from the compilation:
<screen><prompt>$</prompt> <userinput>./configure --prefix=/usr --sysconfdir=/etc/moosefs --localstatedir=/var/lib --with-default-user=mfs --with-default-group=mfs --disable-mfschunkserver --disable-mfsmount</userinput></screen><screen><prompt>$</prompt> <userinput>make &amp;&amp; make install</userinput></screen>
</para>
</listitem>
<listitem>
<para><emphasis role="bold">Create configuration files</emphasis></para>
<para>We will keep the default settings; for performance tuning, you can read the <link
xlink:href="http://www.moosefs.org/moosefs-faq.html">MooseFS official FAQ</link>.
</para>
<para><screen><prompt>$</prompt> <userinput>cd /etc/moosefs</userinput></screen>
<screen><prompt>$</prompt> <userinput>cp mfsmaster.cfg.dist mfsmaster.cfg</userinput></screen>
<screen><prompt>$</prompt> <userinput>cp mfsmetalogger.cfg.dist mfsmetalogger.cfg</userinput></screen>
<screen><prompt>$</prompt> <userinput>cp mfsexports.cfg.dist mfsexports.cfg</userinput></screen>
In <filename>/etc/moosefs/mfsexports.cfg</filename>, edit the second line in order to
restrict access to our private network:</para>
<programlisting>10.0.10.0/24 / rw,alldirs,maproot=0</programlisting>
<para>
Create the metadata file:
<screen><prompt>$</prompt> <userinput>cd /var/lib/mfs &amp;&amp; cp metadata.mfs.empty metadata.mfs</userinput></screen>
</para>
</listitem>
<listitem>
<para><emphasis role="bold">Power up the MooseFS mfsmaster service</emphasis></para>
<para>You can now start the <literal>mfsmaster</literal> and <literal>mfscgiserv</literal> daemons on the MooseFS
metadata server (<literal>mfscgiserv</literal> is a web server which lets you see the
MooseFS status in real time via a web interface):
<screen><prompt>$</prompt> <userinput>/usr/sbin/mfsmaster start &amp;&amp; /usr/sbin/mfscgiserv start</userinput></screen>Open
the following URL in your browser: http://10.0.10.16:9425 to see the MooseFS status
page.</para>
<para/>
</listitem>
<listitem>
<para><emphasis role="bold">Power up the MooseFS metalogger service</emphasis></para>
<para>
<screen><prompt>$</prompt> <userinput>/usr/sbin/mfsmetalogger start</userinput></screen>
</para>
</listitem>
</orderedlist>
</para>
<para/>
</section>
<section xml:id="installing-moosefs-chunk-client-services">
|
|
||||||
<title>Installing the MooseFS chunk and client services</title>
|
|
||||||
<para>In the first part, we will install the last version of FUSE, and proceed to the
|
|
||||||
installation of the MooseFS chunk and client in the second part.</para>
|
|
||||||
<para/>
|
|
||||||
<para><emphasis role="bold">Installing FUSE</emphasis></para>
|
|
||||||
<para>
|
|
||||||
<orderedlist>
|
|
||||||
<listitem>
|
|
||||||
<para><emphasis role="bold">Required package</emphasis></para>
|
|
||||||
<para>
|
|
||||||
<screen os="ubuntu"><prompt>$</prompt> <userinput>apt-get install util-linux</userinput> </screen>
|
|
||||||
<screen os="rhel;fedora;centos"><prompt>$</prompt> <userinput>yum install util-linux</userinput></screen></para>
|
|
||||||
</listitem>
|
|
||||||
<listitem>
|
|
||||||
<para><emphasis role="bold">Download the sources and configure them</emphasis></para>
|
|
||||||
<para>For that setup we will retrieve the last version of fuse to make sure every
|
|
||||||
function will be available :
|
|
||||||
<screen><prompt>$</prompt> <userinput>wget http://downloads.sourceforge.net/project/fuse/fuse-2.X/2.9.1/fuse-2.9.1.tar.gz && tar -zxvf fuse-2.9.1.tar.gz && cd fuse-2.9.1</userinput></screen><screen><prompt>$</prompt> <userinput>./configure && make && make install</userinput></screen>
|
|
||||||
</para>
|
|
||||||
</listitem>
|
|
||||||
</orderedlist>
|
|
||||||
</para>
|
|
||||||
<para><emphasis role="bold">Installing the MooseFS chunk and client services</emphasis></para>
|
|
||||||
<para>For installing both services, you can follow the same steps that were presented before
|
|
||||||
(Steps 1 to 4) : <orderedlist>
|
|
||||||
<listitem>
|
|
||||||
<para>Hosts entry configuration</para>
|
|
||||||
</listitem>
|
|
||||||
<listitem>
|
|
||||||
<para>Required packages</para>
|
|
||||||
</listitem>
|
|
||||||
<listitem>
|
|
||||||
<para>User and group creation</para>
|
|
||||||
</listitem>
|
|
||||||
<listitem>
|
|
||||||
<para>Download the sources</para>
|
|
||||||
</listitem>
|
|
||||||
<listitem>
|
|
||||||
<para><emphasis role="bold">Extract and configure the sources</emphasis></para>
|
|
||||||
<para>Extract the package and compile it :
|
|
||||||
<screen><prompt>$</prompt> <userinput>tar -zxvf mfs-1.6.25.tar.gz && cd mfs-1.6.25</userinput></screen>
|
|
||||||
For the MooseFS chunk server installation, we only disable from the compilation the
|
|
||||||
mfsmaster component :
|
|
||||||
<screen><prompt>$</prompt> <userinput>./configure --prefix=/usr --sysconfdir=/etc/moosefs --localstatedir=/var/lib --with-default-user=mfs --with-default-group=mfs --disable-mfsmaster</userinput></screen><screen><prompt>$</prompt> <userinput>make && make install</userinput></screen>
|
|
||||||
</para>
|
|
||||||
</listitem>
|
|
||||||
<listitem>
|
|
||||||
<para><emphasis role="bold">Create configuration files</emphasis></para>
|
|
||||||
<para>The chunk servers configuration is relatively easy to setup. You only need to
|
|
||||||
create on every server directories that will be used for storing the datas of your
|
|
||||||
cluster.</para>
|
|
||||||
<para><screen><prompt>$</prompt> <userinput>cd /etc/moosefs</userinput></screen>
|
|
||||||
<screen><prompt>$</prompt> <userinput>cp mfschunkserver.cfg.dist mfschunkserver.cfg</userinput></screen>
|
|
||||||
<screen><prompt>$</prompt> <userinput>cp mfshdd.cfg.dist mfshdd.cfg</userinput></screen>
|
|
||||||
<screen><prompt>$</prompt> <userinput>mkdir /mnt/mfschunks{1,2} && chown -R mfs:mfs /mnt/mfschunks{1,2}</userinput></screen>
|
|
||||||
Edit <filename>/etc/moosefs/mfhdd.cfg</filename> and add the directories you created
|
|
||||||
to make them part of the cluster:</para>
|
|
||||||
<programlisting># mount points of HDD drives
|
|
||||||
#
|
|
||||||
#/mnt/hd1
|
|
||||||
#/mnt/hd2
|
|
||||||
#etc.
|
|
||||||
|
|
||||||
/mnt/mfschunks1
|
|
||||||
/mnt/mfschunks2</programlisting>
|
|
||||||
</listitem>
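Listing the chunk directories in mfshdd.cfg can be scripted so that re-running your setup does not duplicate entries. This is a minimal sketch against a scratch copy; the idempotency guard is our addition, not part of the MooseFS tooling.

```shell
# Append each chunk directory to a scratch mfshdd.cfg, skipping entries
# that are already present so the script can be re-run safely.
workdir=$(mktemp -d)
: > "$workdir/mfshdd.cfg"   # stand-in for /etc/moosefs/mfshdd.cfg
for d in /mnt/mfschunks1 /mnt/mfschunks2; do
    grep -qx "$d" "$workdir/mfshdd.cfg" || echo "$d" >> "$workdir/mfshdd.cfg"
done
cat "$workdir/mfshdd.cfg"
```

Running the loop a second time changes nothing, because `grep -qx` matches each directory as a whole line before it would be appended.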
<listitem>
<para><emphasis role="bold">Start the MooseFS mfschunkserver service</emphasis></para>
<para>
<screen><prompt>$</prompt> <userinput>/usr/sbin/mfschunkserver start</userinput></screen>
</para>
</listitem>
</orderedlist>
</para>
</section>
<section xml:id="access-to-cluster-storage">
<title>Access to your cluster storage</title>
<para>You can now access your cluster space from the compute nodes (both acting as
chunk servers): <screen><prompt>$</prompt> <userinput>mfsmount /var/lib/nova/instances -H mfsmaster</userinput></screen>
<computeroutput>mfsmaster accepted connection with parameters: read-write,restricted_ip ;
root mapped to root:root</computeroutput>
<screen><prompt>$</prompt> <userinput>mount</userinput></screen><programlisting>/dev/cciss/c0d0p1 on / type ext4 (rw,errors=remount-ro)
proc on /proc type proc (rw,noexec,nosuid,nodev)
none on /sys type sysfs (rw,noexec,nosuid,nodev)
fusectl on /sys/fs/fuse/connections type fusectl (rw)
none on /sys/kernel/debug type debugfs (rw)
none on /sys/kernel/security type securityfs (rw)
none on /dev type devtmpfs (rw,mode=0755)
none on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
none on /dev/shm type tmpfs (rw,nosuid,nodev)
none on /var/run type tmpfs (rw,nosuid,mode=0755)
none on /var/lock type tmpfs (rw,noexec,nosuid,nodev)
none on /var/lib/ureadahead/debugfs type debugfs (rw,relatime)
<emphasis role="bold">mfsmaster:9421 on /var/lib/nova/instances type fuse.mfs (rw,allow_other,default_permissions)</emphasis></programlisting>
You can interact with it the way you would interact with a classical mount, using built-in Linux
commands (cp, rm, and so on).
</para>
<para>The MooseFS client has several tools for managing the objects within the cluster (setting
replication goals, and so on). You can see the list of the available tools by running:
<screen><prompt>$</prompt> <userinput>mfs &lt;TAB&gt; &lt;TAB&gt;</userinput></screen><programlisting>
mfsappendchunks mfschunkserver mfsfileinfo mfsgetgoal mfsmount mfsrsetgoal mfssetgoal mfstools
mfscgiserv mfsdeleattr mfsfilerepair mfsgettrashtime mfsrgetgoal mfsrsettrashtime mfssettrashtime
mfscheckfile mfsdirinfo mfsgeteattr mfsmakesnapshot mfsrgettrashtime mfsseteattr mfssnapshot</programlisting>
You can read the manual page for every command. You can also see the <link xlink:href="http://linux.die.net/man/1/mfsrgetgoal">online help</link>.
</para>
<para><emphasis role="bold">Add an entry into the fstab file</emphasis></para>
<para>
To make sure the storage is mounted at boot, add an entry to <filename>/etc/fstab</filename> on both compute nodes:
<programlisting>mfsmount /var/lib/nova/instances fuse mfsmaster=mfsmaster,_netdev 0 0</programlisting>
</para>
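Adding the fstab line can be guarded so that repeated configuration runs do not duplicate it. A minimal sketch against a scratch copy of /etc/fstab (the entry itself is the one shown above; the guard is our addition):

```shell
# Add the mfsmount line to a scratch fstab only if it is not already
# there, so running the script twice leaves a single entry.
fstab=$(mktemp)
echo '/dev/sda1 / ext4 errors=remount-ro 0 1' > "$fstab"   # pre-existing content
entry='mfsmount /var/lib/nova/instances fuse mfsmaster=mfsmaster,_netdev 0 0'
grep -qxF "$entry" "$fstab" || echo "$entry" >> "$fstab"
grep -qxF "$entry" "$fstab" || echo "$entry" >> "$fstab"   # second run: no-op
cat "$fstab"
```

The `_netdev` option matters here: it tells the init scripts to defer the mount until networking is up, which a FUSE network filesystem requires.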
</section>
</section>
@@ -21,7 +21,8 @@
 compute nodes. Ensure each <filename>nova.conf</filename> file
 points to the correct IP addresses for the respective
 services.</para>
-<para>By default, Nova sets the bridge device based on the
+<para>By default, <systemitem class="service">nova-network</systemitem>
+sets the bridge device based on the
 setting in <literal>flat_network_bridge</literal>. Now you can
 edit <filename>/etc/network/interfaces</filename> with the
 following template, updated with your IP information.</para>
@@ -55,12 +56,9 @@ $ <userinput>sudo service nova-compute restart</userinput></screen>
 optimally:</para>
 <screen><prompt>#</prompt> <userinput>chgrp kvm /dev/kvm</userinput>
 <prompt>#</prompt> <userinput>chmod g+rwx /dev/kvm</userinput></screen>
-<para>If you want to use the 10.04 Ubuntu Enterprise Cloud
-images that are readily available at
-http://uec-images.ubuntu.com/releases/10.04/release/, you may
-run into delays with booting. Any server that does not have
+<para>Any server that does not have
 <command>nova-api</command> running on it needs this
-iptables entry so that UEC images can get metadata info. On
+iptables entry so that images can get metadata info. On
 compute nodes, configure the iptables with this next
 step:</para>
 <screen><prompt>#</prompt> <userinput>iptables -t nat -A PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j DNAT --to-destination <replaceable>$NOVA_API_IP</replaceable>:8773</userinput></screen>
@@ -3,7 +3,7 @@
 xmlns:xi="http://www.w3.org/2001/XInclude"
 xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
 xml:id="boot_from_volume">
-<title wordsize="20">Launch an instance from a volume</title>
+<title>Launch an instance from a volume</title>
 <para>After you <link linkend="create_volume_from_image">create
 a bootable volume</link>, you <link
 linkend="launch_image_from_volume">launch an instance from
@@ -24,11 +24,10 @@
 --image-id 397e713c-b95b-4186-ad46-6126863ea0a9 \
 --display-name my-bootable-vol 8</userinput></screen></para>
 </note>
-<!-- Commenting out because the OpenStack Config Reference is not available yet -->
-<!--<para>Optionally, to configure your volume, see the
+<para>Optionally, to configure your volume, see the
 <citetitle>Configuring Image Service and Storage for
 Compute</citetitle> chapter in the <citetitle>OpenStack
-Configuration Reference</citetitle>.</para>-->
+Configuration Reference</citetitle>.</para>
 </step>
 <step>
 <para>To list volumes, run the following command:</para>
@@ -10,7 +10,7 @@ xml:id="powervm">
 <para>PowerVM compute driver connects to an Integrated Virtualization
 Manager (IVM) to perform PowerVM Logical Partition (LPAR)
 deployment and management. The driver supports file-based deployment
-using images from Glance.</para>
+using images from the OpenStack Image Service.</para>
 <note><para>Hardware Management Console (HMC) is not yet supported.</para></note>
 <para>For more detailed information about PowerVM Virtualization system,
 refer to the IBM Redbook publication:
@@ -34,9 +34,10 @@ powervm_img_local_path=/path/to/local/image/directory/on/compute/host</programlisting>
 <section xml:id="powervm-limits">
 <title>Limitations</title>
 <para>
-PowerVM LPAR names have a limit of 31 characters. Since Nova instance names
-are mapped to LPAR names in Power Systems, make sure instance_name_template
-config option in nova.conf yields names that have 31 or fewer characters.
+PowerVM LPAR names have a limit of 31 characters. Since OpenStack Compute instance names
+are mapped to LPAR names in Power Systems, make sure the
+<literal>instance_name_template</literal>
+config option in <filename>nova.conf</filename> yields names that have 31 or fewer characters.
 </para>
 </section>
 </section>
@@ -22,12 +22,8 @@
 virtualization for guests.</para>
 </listitem>
 </itemizedlist></para>
-<para>KVM requires hardware support for acceleration. If hardware support is
-not available (e.g., if you are running Compute inside of a VM and the
-hypervisor does not expose the required hardware support), you can use
-QEMU instead. KVM and QEMU have the same level of support in OpenStack,
-but KVM will provide better performance. To enable QEMU, add these
-settings to
+<para>
+To enable QEMU, add these settings to
 <filename>nova.conf</filename>:<programlisting language="ini">compute_driver=libvirt.LibvirtDriver
libvirt_type=qemu</programlisting></para>
 <para>
@@ -19,9 +19,6 @@
 in <filename>cinder.conf</filename> when you install manually.</para>
 <para>Here is a simple example <filename>cinder.conf</filename> file.</para>
 <programlisting language="ini"><xi:include parse="text" href="../common/samples/cinder.conf"/></programlisting>
-<para>You can also provide shared storage for the instances
-directory with MooseFS instead of NFS.</para>
-<xi:include href="../common/section_moosefs.xml"/>
 <xi:include href="block-storage/section_volume-drivers.xml"/>
 <xi:include href="block-storage/section_backup-drivers.xml"/>
 </section>
@@ -5,13 +5,11 @@
 xml:id="ch_configuring-openstack-compute">
 <title>OpenStack Compute</title>

-<para>The OpenStack system has several components that are installed separately but which can work
-together depending on your cloud needs. Key components include: OpenStack Compute, OpenStack
-Object Storage, and OpenStack Image Store. There are basic configuration decisions to make, and
-the <link xlink:href="http://docs.openstack.org/trunk/openstack-compute/install/content/"
->OpenStack Install Guide</link> covers a few different architectures for certain use
-cases.</para>
-<!--status: right place-->
+<para>The OpenStack Compute service is a cloud computing
+fabric controller, the main part of an IaaS system. It can
+be used for hosting and managing cloud computing systems.
+This section provides detail on all of the configuration
+options involved in OpenStack Compute.</para>
 <section xml:id="configuring-openstack-compute-basics">
 <?dbhtml stop-chunking?>
 <title>Post-Installation Configuration</title>
@@ -25,7 +23,6 @@
 options and hypervisor options described in separate
 chapters.</para>

-<!--status: right place-->
 <section xml:id="setting-flags-in-nova-conf-file">
 <title>Setting Configuration Options in the
 <filename>nova.conf</filename> File</title>
@@ -50,7 +47,7 @@
 <prompt>$</prompt> <userinput>chown -R <option>username</option>:nova /etc/nova</userinput>
 <prompt>$</prompt> <userinput>chmod 640 /etc/nova/nova.conf</userinput></screen>
 </section>
-<!--status: good, right place-->
 <xi:include href="compute/section_compute-config-overview.xml"/>
 <section xml:id="configuring-logging">
 <title>Configuring Logging</title>
@@ -64,7 +61,6 @@
 <title>Configuring Hypervisors</title>
 <para>See <xref linkend="section_compute-hypervisors"/> for details.</para>
 </section>
-<!--status: good, right place-->
 <section xml:id="configuring-authentication-authorization">
 <title>Configuring Authentication and Authorization</title>
 <para>There are different methods of authentication for the
@@ -109,7 +105,7 @@
 <section xml:id="section_compute-components">
 <title>Components Configuration</title>
 <xi:include href="../common/section_rpc.xml"/>
-<xi:include href="../common/section_compute_config-api.xml"></xi:include>
+<xi:include href="../common/section_compute_config-api.xml"/>
 <xi:include href="../common/section_compute-configure-ec2.xml"/>
 <xi:include href="../common/section_compute-configure-quotas.xml"/>
 <xi:include href="../common/section_compute-configure-console.xml"/>