Install and configure a storage node

This section describes how to install and configure storage nodes for the Block Storage service. For simplicity, this configuration references one storage node with an empty local block storage device /dev/sdb that contains a suitable partition table with one partition /dev/sdb1 occupying the entire device. The service provisions logical volumes on this device using the LVM driver and provides them to instances via iSCSI transport. You can follow these instructions with minor modifications to horizontally scale your environment with additional storage nodes.

To configure prerequisites

You must configure the storage node before you install and configure the volume service on it. Similar to the controller node, the storage node contains one network interface on the management network. The storage node also needs an empty block storage device of suitable size for your environment. For more information, see the basic environment chapter of this guide.

Configure the management interface:

IP address: 10.0.0.41
Network mask: 255.255.255.0 (or /24)
Default gateway: 10.0.0.1
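For example, on an Ubuntu or Debian storage node this addressing can be expressed in /etc/network/interfaces. The following is only a sketch; the interface name eth0 is an assumption and may differ on your hardware:

auto eth0
iface eth0 inet static
    address 10.0.0.41
    netmask 255.255.255.0
    gateway 10.0.0.1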
Set the hostname of the node to block1.
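On distributions that use systemd, one way to do this (a sketch; you can also edit /etc/hostname directly) is:

# hostnamectl set-hostname block1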
Copy the contents of the /etc/hosts file from the controller node to the storage node and add the following to it:

# block1
10.0.0.41 block1

Also add this content to the /etc/hosts file on all other nodes in your environment.
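Optionally, verify name resolution and connectivity before continuing; this check is not part of the original procedure and assumes the management network is already up on both nodes. From the controller node:

# ping -c 4 block1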
Install and configure NTP using the instructions in the Network Time Protocol (NTP) section of this guide.

Install the LVM packages:

On Ubuntu and Debian:
# apt-get install lvm2

On Red Hat Enterprise Linux and CentOS:
# yum install lvm2

Note: Some distributions include LVM by default.
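The next steps assume that /dev/sdb already contains the single partition /dev/sdb1 described at the beginning of this section. If the device is still blank, you can create that partition first; this is only a sketch using parted and assumes any existing data on /dev/sdb can be destroyed:

# parted /dev/sdb mklabel msdos
# parted /dev/sdb mkpart primary 0% 100%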
Start the LVM metadata service and configure it to start when the system boots:

# systemctl enable lvm2-lvmetad.service
# systemctl start lvm2-lvmetad.service

Create the LVM physical volume /dev/sdb1:

# pvcreate /dev/sdb1
  Physical volume "/dev/sdb1" successfully created

Note: If your system uses a different device name, adjust these steps accordingly.
Create the LVM volume group cinder-volumes:

# vgcreate cinder-volumes /dev/sdb1
  Volume group "cinder-volumes" successfully created

The Block Storage service creates logical volumes in this volume group.
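To confirm the volume group exists before moving on (an optional check beyond the original steps), ask LVM to display it:

# vgs cinder-volumes

The output should show the cinder-volumes group with the capacity of /dev/sdb1 and no logical volumes yet.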
Only instances can access Block Storage volumes. However, the underlying operating system manages the devices associated with the volumes. By default, the LVM volume scanning tool scans the /dev directory for block storage devices that contain volumes. If tenants use LVM on their volumes, the scanning tool detects these volumes and attempts to cache them, which can cause a variety of problems with both the underlying operating system and tenant volumes. You must reconfigure LVM to scan only the devices that contain the cinder-volumes volume group. Edit the /etc/lvm/lvm.conf file and complete the following actions:

In the devices section, add a filter that accepts the /dev/sdb device and rejects all other devices:

devices {
...
filter = [ "a/sdb/", "r/.*/"]

Note: Each item in the filter array begins with "a" for accept or "r" for reject and includes a regular expression for the device name. The array must end with r/.*/ to reject any remaining devices. You can use the vgs -vvvv command to test filters.
If your storage nodes use LVM on the operating system disk, you must also add the associated device to the filter. For example, if the /dev/sda device contains the operating system:

filter = [ "a/sda/", "a/sdb/", "r/.*/"]

Similarly, if your compute nodes use LVM on the operating system disk, you must also modify the filter in the /etc/lvm/lvm.conf file on those nodes to include only the operating system disk. For example, if the /dev/sda device contains the operating system:

filter = [ "a/sda/", "r/.*/"]
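As an optional sanity check (not part of the original procedure), rerun an LVM scan after saving /etc/lvm/lvm.conf; physical volumes on devices that the filter rejects should no longer be listed:

# pvs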
Install and configure Block Storage volume components

Install the packages:

On Ubuntu and Debian:
# apt-get install cinder-volume python-mysqldb

On Red Hat Enterprise Linux and CentOS:
# yum install openstack-cinder targetcli python-oslo-db MySQL-python

On openSUSE and SLES:
# zypper install openstack-cinder-volume tgt python-mysql

Edit the /etc/cinder/cinder.conf file and complete the following actions:

In the [database] section, configure database access:

[database]
...
connection = mysql://cinder:CINDER_DBPASS@controller/cinder

Replace CINDER_DBPASS with the password you chose for the Block Storage database.

In the [DEFAULT] section, configure RabbitMQ message broker access:

[DEFAULT]
...
rpc_backend = rabbit
rabbit_host = controller
rabbit_password = RABBIT_PASS

Replace RABBIT_PASS with the password you chose for the guest account in RabbitMQ.

In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access:

[DEFAULT]
...
auth_strategy = keystone
[keystone_authtoken]
...
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = cinder
admin_password = CINDER_PASS

Replace CINDER_PASS with the password you chose for the cinder user in the Identity service.

Note: Comment out any auth_host, auth_port, and auth_protocol options because the identity_uri option replaces them.

In the [DEFAULT] section, configure the my_ip option:

[DEFAULT]
...
my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS

Replace MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network interface on your storage node, typically 10.0.0.41 for the first node in the example architecture.

In the [DEFAULT] section, configure the location of the Image Service:

[DEFAULT]
...
glance_host = controller

In the [DEFAULT] section, configure Block Storage to use the lioadm iSCSI service:

[DEFAULT]
...
iscsi_helper = lioadm

(Optional) To assist with troubleshooting, enable verbose logging in the [DEFAULT] section:

[DEFAULT]
...
verbose = True
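For orientation only, the options that this procedure sets end up looking roughly like the following in /etc/cinder/cinder.conf (placeholder passwords shown; the file contains many other options that remain untouched):

[database]
connection = mysql://cinder:CINDER_DBPASS@controller/cinder

[DEFAULT]
rpc_backend = rabbit
rabbit_host = controller
rabbit_password = RABBIT_PASS
auth_strategy = keystone
my_ip = 10.0.0.41
glance_host = controller
iscsi_helper = lioadm
verbose = True

[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = cinder
admin_password = CINDER_PASS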
Install and configure Block Storage volume components (Debian)

Install the packages:

# apt-get install cinder-volume python-mysqldb

Respond to the prompts for database management, Identity service credentials, service endpoint registration, and message broker credentials.

Respond to the prompts for the volume group to associate with the Block Storage service. The script scans for volume groups and attempts to use the first one. If your system only contains the cinder-volumes volume group, the script should automatically choose it.
To finalize installation

On Ubuntu and Debian, restart the Block Storage volume service including its dependencies:

# service tgt restart
# service cinder-volume restart

On Red Hat Enterprise Linux and CentOS, start the Block Storage volume service including its dependencies and configure them to start when the system boots:

# systemctl enable openstack-cinder-volume.service target.service
# systemctl start openstack-cinder-volume.service target.service

On SLES:

# service tgtd start
# chkconfig tgtd on
# service openstack-cinder-volume start
# chkconfig openstack-cinder-volume on

On openSUSE:

# systemctl enable openstack-cinder-volume.service tgtd.service
# systemctl start openstack-cinder-volume.service tgtd.service

By default, the Ubuntu packages create an SQLite database.
Because this configuration uses a SQL database server, remove the SQLite database file:

# rm -f /var/lib/cinder/cinder.sqlite
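As a final check (a suggested step beyond this procedure), you can confirm from the controller node that the new volume service registered itself. This assumes the Block Storage API components are already installed on the controller and that you have sourced admin credentials there:

# cinder service-list

The output should include a cinder-volume entry for the storage node with its State reported as up.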