Add block storage issue docs

Change-Id: I68c76f5e729232584c2119f8f34f2d72e4715f3c
Scott 2013-10-19 17:11:10 -07:00 committed by Andreas Jaeger
parent 486d518ff1
commit 9c0005803b
14 changed files with 253 additions and 0 deletions


@@ -123,5 +123,15 @@
<xi:include href="section_ts_cinder_config.xml"/>
<xi:include href="section_ts_multipath_warn.xml"/>
<xi:include href="section_ts_vol_attach_miss_sg_scan.xml"/>
<xi:include href="section_ts_HTTP_bad_req_in_cinder_vol_log.xml"/>
<xi:include href="section_ts_attach_vol_fail_not_JSON.xml"/>
<xi:include href="section_ts_duplicate_3par_host.xml"/>
<xi:include href="section_ts_failed_attach_vol_after_detach.xml"/>
<xi:include href="section_ts_failed_attach_vol_no_sysfsutils.xml"/>
<xi:include href="section_ts_failed_connect_vol_FC_SAN.xml"/>
<xi:include href="section_ts_failed_sched_create_vol.xml"/>
<xi:include href="section_ts_no_emulator_x86_64.xml"/>
<xi:include href="section_ts_non_existent_host.xml"/>
<xi:include href="section_ts_non_existent_vlun.xml"/>
</section>
</chapter>


@@ -0,0 +1,44 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook" xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0" xml:id="section_ts_HTTP_bad_req_in_cinder_vol_log">
<title>HTTP bad request in cinder volume log</title>
<section xml:id="section_ts_HTTP_bad_req_in_cinder_vol_log_problem">
<title>Problem</title>
<para>The following errors appear in the <filename>cinder-volume.log</filename> file.</para>
<screen>2013-05-03 15:16:33 INFO [cinder.volume.manager] Updating volume status
2013-05-03 15:16:33 DEBUG [hp3parclient.http]
REQ: curl -i https://10.10.22.241:8080/api/v1/cpgs -X GET -H "X-Hp3Par-Wsapi-Sessionkey: 48dc-b69ed2e5
f259c58e26df9a4c85df110c-8d1e8451" -H "Accept: application/json" -H "User-Agent: python-3parclient"
2013-05-03 15:16:33 DEBUG [hp3parclient.http] RESP:{'content-length': 311, 'content-type': 'text/plain',
'status': '400'}
2013-05-03 15:16:33 DEBUG [hp3parclient.http] RESP BODY:Second simultaneous read on fileno 13 detected.
Unless you really know what you're doing, make sure that only one greenthread can read any particular socket.
Consider using a pools.Pool. If you do know what you're doing and want to disable this error,
call eventlet.debug.hub_multiple_reader_prevention(False)
2013-05-03 15:16:33 ERROR [cinder.manager] Error during VolumeManager._report_driver_status: Bad request (HTTP 400)
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/cinder/manager.py", line 167, in periodic_tasks task(self, context)
File "/usr/lib/python2.7/dist-packages/cinder/volume/manager.py", line 690, in _report_driver_status volume_stats =
self.driver.get_volume_stats(refresh=True)
File "/usr/lib/python2.7/dist-packages/cinder/volume/drivers/san/hp/hp_3par_fc.py", line 77, in get_volume_stats stats =
self.common.get_volume_stats(refresh, self.client)
File "/usr/lib/python2.7/dist-packages/cinder/volume/drivers/san/hp/hp_3par_common.py", line 421, in get_volume_stats cpg =
client.getCPG(self.config.hp3par_cpg)
File "/usr/lib/python2.7/dist-packages/hp3parclient/client.py", line 231, in getCPG cpgs = self.getCPGs()
File "/usr/lib/python2.7/dist-packages/hp3parclient/client.py", line 217, in getCPGs response, body = self.http.get('/cpgs')
File "/usr/lib/python2.7/dist-packages/hp3parclient/http.py", line 255, in get return self._cs_request(url, 'GET', **kwargs)
File "/usr/lib/python2.7/dist-packages/hp3parclient/http.py", line 224, in _cs_request **kwargs)
File "/usr/lib/python2.7/dist-packages/hp3parclient/http.py", line 198, in _time_request resp, body = self.request(url, method, **kwargs)
File "/usr/lib/python2.7/dist-packages/hp3parclient/http.py", line 192, in request raise exceptions.from_response(resp, body)
HTTPBadRequest: Bad request (HTTP 400)</screen>
</section>
<section xml:id="section_ts_HTTP_bad_req_in_cinder_vol_log_solution">
<title>Solution</title>
<para>Update your copy of the <filename>hp_3par_fc.py</filename> driver to the latest
version, which contains the synchronization code that prevents these simultaneous
reads.</para>
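<para>As a quick sanity check, you can verify whether the installed driver already
contains synchronization code; the grep pattern and the install path below are
assumptions based on the traceback above.</para>
<screen><prompt>$</prompt> <userinput>grep -n "synchronized" /usr/lib/python2.7/dist-packages/cinder/volume/drivers/san/hp/hp_3par_fc.py</userinput></screen>
<para>If the command returns no matches, the driver predates the fix and needs to be
updated.</para>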
</section>
</section>


@@ -0,0 +1,20 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook" xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0" xml:id="section_ts_attach_vol_fail_not_JSON">
<title>Nova volume attach error, not JSON serializable</title>
<section xml:id="section_ts_attach_vol_fail_not_JSON_problem">
<title>Problem</title>
<para>When you attach a nova volume to a VM, this error appears with a stack trace in the <filename>/var/log/nova/nova-volume.log</filename> file. The <code>not JSON serializable</code> error is caused by an RPC response timeout.</para>
</section>
<section xml:id="section_ts_attach_vol_fail_not_JSON_solution">
<title>Solution</title>
<para>Make sure that your iptables rules allow communication on port 3260 of the iSCSI
controller. Run the following command.</para>
<para>
<screen><prompt>$</prompt> <userinput>sudo iptables -I INPUT &lt;Last Rule No> -p tcp --dport 3260 -j ACCEPT</userinput></screen></para>
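<para>To confirm that the rule took effect and that the iSCSI port is reachable from
the Compute host, you can probe it with netcat; the controller IP address below is a
placeholder.</para>
<screen><prompt>$</prompt> <userinput>nc -zv &lt;controller-IP> 3260</userinput></screen>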
<para>If the problem persists even though the port communication is properly
configured, you can rule out the firewall entirely by stopping
it.<screen><prompt>$</prompt> <userinput>sudo service iptables stop</userinput></screen></para>
<para>If you try these solutions and still get the RPC response timeout, you probably
have an incompatibility between the iSCSI controller and the KVM host. Make sure
that they are compatible.</para>
</section>
</section>

doc/admin-guide-cloud/section_ts_cinder_config.xml Executable file → Normal file

@@ -0,0 +1,21 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook" xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0" xml:id="section_ts_duplicate_3PAR_host">
<title>Duplicate 3PAR host</title>
<section xml:id="section_ts_duplicate_3PAR_host_problem">
<title>Problem</title>
<para>This error can be caused by a volume being exported outside of OpenStack with a
host name that differs from the system name that OpenStack expects. The error is
displayed with the IQN if the host was exported by using iSCSI.</para>
<programlisting>Duplicate3PARHost: 3PAR Host already exists: Host wwn 50014380242B9750 already used by host cld4b5ubuntuW(id = 68. The hostname must be called 'cld4b5ubuntu.</programlisting>
</section>
<section xml:id="section_ts_duplicate_3PAR_host_solution">
<title>Solution</title>
<para>Change the 3PAR host name to match the one that OpenStack expects. The 3PAR host
constructed by the driver uses just the local hostname, not the fully qualified domain
name (FQDN) of the compute host. For example, if the FQDN was
<emphasis>myhost.example.com</emphasis>, just <emphasis>myhost</emphasis> would be
used as the 3PAR hostname. IP addresses are not allowed as host names on the 3PAR
storage server.</para>
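<para>To see which name the driver will use, compare the local host name with the FQDN
on the Compute host; the driver uses only the first value.</para>
<screen><prompt>$</prompt> <userinput>hostname</userinput>
myhost
<prompt>$</prompt> <userinput>hostname -f</userinput>
myhost.example.com</screen>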
</section>
</section>


@@ -0,0 +1,32 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook" xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0" xml:id="section_ts_failed_attach_vol_after_detach">
<title>Failed to attach volume after detaching</title>
<section xml:id="section_ts_failed_attach_vol_after_detach_problem">
<title>Problem</title>
<para>Failed to attach a volume after detaching the same volume.</para>
</section>
<section xml:id="section_ts_failed_attach_vol_after_detach_solution">
<title>Solution</title>
<para>You need to change the device name on the <code>nova volume-attach</code> call.
The VM might not clean up after a <code>nova volume-detach</code> operation. In the
following example from the VM, the attach call fails if the device names
<code>vdb</code>, <code>vdc</code>, or <code>vdd</code> are already
used.<screen><prompt>#</prompt> <userinput>ls -al /dev/disk/by-path/</userinput>
total 0
drwxr-xr-x 2 root root 200 2012-08-29 17:33 .
drwxr-xr-x 5 root root 100 2012-08-29 17:33 ..
lrwxrwxrwx 1 root root 9 2012-08-29 17:33 pci-0000:00:04.0-virtio-pci-virtio0 -> ../../vda
lrwxrwxrwx 1 root root 10 2012-08-29 17:33 pci-0000:00:04.0-virtio-pci-virtio0-part1 -> ../../vda1
lrwxrwxrwx 1 root root 10 2012-08-29 17:33 pci-0000:00:04.0-virtio-pci-virtio0-part2 -> ../../vda2
lrwxrwxrwx 1 root root 10 2012-08-29 17:33 pci-0000:00:04.0-virtio-pci-virtio0-part5 -> ../../vda5
lrwxrwxrwx 1 root root 9 2012-08-29 17:33 pci-0000:00:06.0-virtio-pci-virtio2 -> ../../vdb
lrwxrwxrwx 1 root root 9 2012-08-29 17:33 pci-0000:00:08.0-virtio-pci-virtio3 -> ../../vdc
lrwxrwxrwx 1 root root 9 2012-08-29 17:33 pci-0000:00:09.0-virtio-pci-virtio4 -> ../../vdd
lrwxrwxrwx 1 root root 10 2012-08-29 17:33 pci-0000:00:09.0-virtio-pci-virtio4-part1 -> ../../vdd1</screen></para>
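<para>Based on the listing above, choose a device name that is not already in use, for
example with the <code>nova volume-attach</code> command; the server and volume IDs
below are placeholders.</para>
<screen><prompt>$</prompt> <userinput>nova volume-attach &lt;server-ID> &lt;volume-ID> /dev/vde</userinput></screen>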
<para>You may also have this problem after attaching and detaching the same volume from the
same VM with the same mount point multiple times. In this case, restarting the KVM host
may fix the problem.</para>
</section>
</section>


@@ -0,0 +1,23 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook" xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0" xml:id="section_ts_failed_attach_vol_no_sysfsutils">
<title>Failed to attach volume, systool is not installed</title>
<section xml:id="section_ts_failed_attach_vol_no_sysfsutils_problem">
<title>Problem</title>
<para>This warning and error occur if the required
<filename>sysfsutils</filename> package is not installed on the Compute node.</para>
<programlisting>WARNING nova.virt.libvirt.utils [req-1200f887-c82b-4e7c-a891-fac2e3735dbb admin admin|req-1200f887-c82b-4e7c-a891-fac2e3735dbb admin admin] systool is not installed
ERROR nova.compute.manager [req-1200f887-c82b-4e7c-a891-fac2e3735dbb admin admin|req-1200f887-c82b-4e7c-a891-fac2e3735dbb admin admin]
[instance: df834b5a-8c3f-477a-be9b-47c97626555c|instance: df834b5a-8c3f-477a-be9b-47c97626555c]
Failed to attach volume 13d5c633-903a-4764-a5a0-3336945b1db1 at /dev/vdk.</programlisting>
</section>
<section xml:id="section_ts_failed_attach_vol_no_sysfsutils_solution">
<title>Solution</title>
<para>Run the following command on the Compute node to install the
<filename>sysfsutils</filename> package.</para>
<para>
<screen><prompt>$</prompt> <userinput>sudo apt-get install sysfsutils</userinput></screen>
</para>
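<para>To verify the installation, check that <code>systool</code> now runs; listing
Fibre Channel hosts is one way to exercise it.</para>
<screen><prompt>$</prompt> <userinput>systool -c fc_host -v</userinput></screen>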
</section>
</section>


@@ -0,0 +1,20 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook" xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0" xml:id="section_ts_failed_connect_vol_FC_SAN">
<title>Failed to connect volume in FC SAN</title>
<section xml:id="section_ts_failed_connect_vol_FC_SAN_problem">
<title>Problem</title>
<para>The Compute node failed to connect to a volume in a Fibre Channel (FC) SAN
configuration. The WWN might not be zoned correctly in the FC SAN that links the
Compute host to the storage array.</para>
<programlisting>ERROR nova.compute.manager [req-2ddd5297-e405-44ab-aed3-152cd2cfb8c2 admin demo|req-2ddd5297-e405-44ab-aed3-152cd2cfb8c2 admin demo] [instance: 60ebd6c7-c1e3-4bf0-8ef0-f07aa4c3d5f3|instance: 60ebd6c7-c1e3-4bf0-8ef0-f07aa4c3d5f3]
Failed to connect to volume 6f6a6a9c-dfcf-4c8d-b1a8-4445ff883200 while attaching at /dev/vdjTRACE nova.compute.manager [instance: 60ebd6c7-c1e3-4bf0-8ef0-f07aa4c3d5f3|instance: 60ebd6c7-c1e3-4bf0-8ef0-f07aa4c3d5f3]
Traceback (most recent call last):…f07aa4c3d5f3\] ClientException: The server has either erred or is incapable of performing the requested operation.(HTTP 500)(Request-ID: req-71e5132b-21aa-46ee-b3cc-19b5b4ab2f00)</programlisting>
</section>
<section xml:id="section_ts_failed_connect_vol_FC_SAN_solution">
<title>Solution</title>
<para>The network administrator must configure the FC SAN fabric by correctly zoning
the WWNs (port names) from your Compute node HBAs.</para>
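<para>To find the WWNs (port names) of the Compute node HBAs that must be zoned, you
can read them from sysfs on the Compute node.</para>
<screen><prompt>$</prompt> <userinput>cat /sys/class/fc_host/host*/port_name</userinput></screen>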
</section>
</section>


@@ -0,0 +1,23 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook" xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0" xml:id="section_ts_failed_sched_create_vol">
<title>Failed to schedule and create volume</title>
<section xml:id="section_ts_failed_sched_create_vol_problem">
<title>Problem</title>
<para>The following warning appears in the <filename>cinder-scheduler.log</filename>
file when a volume type and extra specs are defined and the volume is in an error state.</para>
<programlisting>WARNING cinder.scheduler.manager [req-b6ef1628-fdc5-49e9-a40a-79d5fcedcfef c0c1ccd20639448c9deea5fe4c112a42 c8b023257513436f 8b303269988b2e7b|req-b6ef1628-fdc5-49e9-a40a-79d5fcedcfef
c0c1ccd20639448c9deea5fe4c112a42 c8b023257513436f 8b303269988b2e7b]
Failed to schedule_create_volume: No valid host was found.</programlisting>
</section>
<section xml:id="section_ts_failed_sched_create_vol_solution">
<title>Solution</title>
<para>Set the
<code>scheduler_driver=cinder.scheduler.simple.SimpleScheduler</code> option in the
<filename>/etc/cinder/cinder.conf</filename> file and restart the
<code>cinder-scheduler</code> service. The
<code>scheduler_driver</code> option defaults to
<code>cinder.scheduler.filter_scheduler.FilterScheduler</code>.</para>
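<para>For example, add the following line to the <code>[DEFAULT]</code> section of
<filename>/etc/cinder/cinder.conf</filename> and restart the service; the restart
command below assumes a distribution that uses <code>service</code>.</para>
<programlisting>[DEFAULT]
scheduler_driver=cinder.scheduler.simple.SimpleScheduler</programlisting>
<screen><prompt>$</prompt> <userinput>sudo service cinder-scheduler restart</userinput></screen>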
</section>
</section>

doc/admin-guide-cloud/section_ts_multipath_warn.xml Executable file → Normal file

@@ -0,0 +1,20 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook" xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0" xml:id="section_ts_no_emulator_x86_64">
<title>Cannot find suitable emulator for x86_64</title>
<section xml:id="section_ts_no_emulator_x86_64_problem">
<title>Problem</title>
<para>When you attempt to create a VM, it enters the <code>BUILD</code> state and then
goes into the <code>ERROR</code> state.</para>
</section>
<section xml:id="section_ts_no_emulator_x86_64_solution">
<title>Solution</title>
<para>On the KVM host, run <code>cat /proc/cpuinfo</code>. Make sure that the
<code>vmx</code> (Intel) or <code>svm</code> (AMD) flag is set.</para>
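<para>A quick check is to count the virtualization flags directly; a non-zero result
means that the CPU advertises hardware virtualization support.</para>
<screen><prompt>$</prompt> <userinput>egrep -c '(vmx|svm)' /proc/cpuinfo</userinput></screen>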
<para>Follow the instructions in the
<link xlink:href="http://docs.openstack.org/trunk/config-reference/content/kvm.html#section_kvm_enable">
enabling KVM section</link> of the <citetitle>Configuration
Reference</citetitle> to enable hardware virtualization
support in your BIOS.</para>
</section>
</section>


@@ -0,0 +1,22 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook" xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0" xml:id="section_ts_non_existent_host">
<title>Non-existent host</title>
<section xml:id="section_ts_non_existent_host_problem">
<title>Problem</title>
<para>This error can be caused by a volume being exported outside of OpenStack with a
host name that differs from the system name that OpenStack expects. The error is
displayed with the IQN if the host was exported by using iSCSI.</para>
<programlisting>2013-04-19 04:02:02.336 2814 ERROR cinder.openstack.common.rpc.common [-] Returning exception Not found (HTTP 404)
NON_EXISTENT_HOST - HOST '10' was not found to caller.</programlisting>
</section>
<section xml:id="section_ts_non_existent_host_solution">
<title>Solution</title>
<para>Host names constructed by the driver use just the local hostname, not the fully
qualified domain name (FQDN) of the Compute host. For example, if the FQDN was
<emphasis>myhost.example.com</emphasis>, just <emphasis>myhost</emphasis> would be
used as the 3PAR hostname. IP addresses are not allowed as host names on the 3PAR
storage server.</para>
</section>
</section>


@@ -0,0 +1,18 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook" xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0" xml:id="section_ts_non_existent_vlun">
<title>Non-existent VLUN</title>
<section xml:id="section_ts_non_existent_vlun_problem">
<title>Problem</title>
<para>This error occurs if the 3PAR host exists with the correct host name that the
OpenStack cinder drivers expect but the volume was created in a different Domain.</para>
<programlisting>HTTPNotFound: Not found (HTTP 404) NON_EXISTENT_VLUN - VLUN 'osv-DqT7CE3mSrWi4gZJmHAP-Q' was not found.</programlisting>
</section>
<section xml:id="section_ts_non_existent_vlun_solution">
<title>Solution</title>
<para>Either update the <code>hp3par_domain</code> configuration item to use the
domain where the 3PAR host currently resides, or move the 3PAR host to the domain
where the volume was created.</para>
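<para>For example, update the item in <filename>/etc/cinder/cinder.conf</filename> and
restart the <code>cinder-volume</code> service; the domain name below is a
placeholder.</para>
<programlisting>hp3par_domain=&lt;domain-of-3PAR-host></programlisting>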
</section>
</section>
