Install and configure the storage nodes

This section describes how to install and configure storage nodes that operate the account, container, and object services. For simplicity, this configuration references two storage nodes, each containing two empty local block storage devices. Each of the devices, /dev/sdb and /dev/sdc, must contain a suitable partition table with one partition occupying the entire device. Although the Object Storage service supports any file system with extended attributes (xattr), testing and benchmarking indicate the best performance and reliability on XFS. For more information on horizontally scaling your environment, see the Deployment Guide.

To configure prerequisites

You must configure each storage node before you install and configure the Object Storage service on it. Similar to the controller node, each storage node contains one network interface on the management network. Optionally, each storage node can contain a second network interface on a separate network for replication. For more information, see the networking section of this guide.

1. Configure unique items on the first storage node:

   Configure the management interface:

      IP address: 10.0.0.51
      Network mask: 255.255.255.0 (or /24)
      Default gateway: 10.0.0.1

   Set the hostname of the node to object1.

2. Configure unique items on the second storage node:

   Configure the management interface:

      IP address: 10.0.0.52
      Network mask: 255.255.255.0 (or /24)
      Default gateway: 10.0.0.1

   Set the hostname of the node to object2.

3. Configure shared items on both storage nodes:

   a. Copy the contents of the /etc/hosts file from the controller node and add the following to it:

         # object1
         10.0.0.51       object1

         # object2
         10.0.0.52       object2

      Also add this content to the /etc/hosts file on all other nodes in your environment.

   b. Install and configure NTP using the instructions provided earlier in this guide.

   c. Install the supporting utility packages:

      On Ubuntu and Debian:

         # apt-get install xfsprogs rsync

      On Red Hat Enterprise Linux, CentOS, and Fedora:

         # yum install xfsprogs rsync

      On SUSE Linux Enterprise Server and openSUSE:

         # zypper install xfsprogs rsync

   d. Format the /dev/sdb1 and /dev/sdc1 partitions as XFS:

         # mkfs.xfs /dev/sdb1
         # mkfs.xfs /dev/sdc1

   e. Create the mount point directory structure:

         # mkdir -p /srv/node/sdb1
         # mkdir -p /srv/node/sdc1

   f. Edit the /etc/fstab file and add the following to it:

         /dev/sdb1 /srv/node/sdb1 xfs noatime,nodiratime,nobarrier,logbufs=8 0 2
         /dev/sdc1 /srv/node/sdc1 xfs noatime,nodiratime,nobarrier,logbufs=8 0 2

   g. Mount the devices:

         # mount /srv/node/sdb1
         # mount /srv/node/sdc1

   h. Edit the /etc/rsyncd.conf file and add the following to it:

         uid = swift
         gid = swift
         log file = /var/log/rsyncd.log
         pid file = /var/run/rsyncd.pid
         address = MANAGEMENT_INTERFACE_IP_ADDRESS

         [account]
         max connections = 2
         path = /srv/node/
         read only = false
         lock file = /var/lock/account.lock

         [container]
         max connections = 2
         path = /srv/node/
         read only = false
         lock file = /var/lock/container.lock

         [object]
         max connections = 2
         path = /srv/node/
         read only = false
         lock file = /var/lock/object.lock

      Replace MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node.

      Note: The rsync service requires no authentication, so consider running it on a private network.

   i. On Ubuntu and Debian, edit the /etc/default/rsync file and enable the rsync service:

         RSYNC_ENABLE=true

      Then start the rsync service:

         # service rsync start

      On Red Hat Enterprise Linux, CentOS, Fedora, SUSE Linux Enterprise Server, and openSUSE, start the rsyncd service and configure it to start when the system boots:

         # systemctl enable rsyncd.service
         # systemctl start rsyncd.service
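   As an optional check that is not part of the original procedure, you can confirm that the rsync daemon is reachable and exporting the three modules defined above. The example below queries the first storage node using its example management address (10.0.0.51); substitute the address you configured:

         # rsync rsync://10.0.0.51/

   The command should list the account, container, and object modules. If it fails to connect, verify that the rsync (or rsyncd) service is running and that the address option in /etc/rsyncd.conf matches the management interface.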
Install and configure storage node components

Note: Default configuration files vary by distribution. You might need to add these sections and options rather than modifying existing sections and options. Also, an ellipsis (...) in the configuration snippets indicates potential default configuration options that you should retain.

Perform these steps on each storage node.

1. Install the packages:

   On Ubuntu and Debian:

      # apt-get install swift swift-account swift-container swift-object

   On Red Hat Enterprise Linux, CentOS, and Fedora:

      # yum install openstack-swift-account openstack-swift-container \
        openstack-swift-object

   On SUSE Linux Enterprise Server and openSUSE:

      # zypper install openstack-swift-account openstack-swift-container \
        openstack-swift-object python-xml

2. Obtain the account, container, object, container-reconciler, and object-expirer service configuration files from the Object Storage source repository:

      # curl -o /etc/swift/account-server.conf \
        https://git.openstack.org/cgit/openstack/swift/plain/etc/account-server.conf-sample?h=stable/kilo
      # curl -o /etc/swift/container-server.conf \
        https://git.openstack.org/cgit/openstack/swift/plain/etc/container-server.conf-sample?h=stable/kilo
      # curl -o /etc/swift/object-server.conf \
        https://git.openstack.org/cgit/openstack/swift/plain/etc/object-server.conf-sample?h=stable/kilo
      # curl -o /etc/swift/container-reconciler.conf \
        https://git.openstack.org/cgit/openstack/swift/plain/etc/container-reconciler.conf-sample?h=stable/kilo
      # curl -o /etc/swift/object-expirer.conf \
        https://git.openstack.org/cgit/openstack/swift/plain/etc/object-expirer.conf-sample?h=stable/kilo

3. Edit the /etc/swift/account-server.conf file and complete the following actions:

   a. In the [DEFAULT] section, configure the bind IP address, bind port, user, configuration directory, and mount point directory:

         [DEFAULT]
         ...
         bind_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
         bind_port = 6002
         user = swift
         swift_dir = /etc/swift
         devices = /srv/node

      Replace MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node.

   b. In the [pipeline:main] section, enable the appropriate modules:

         [pipeline:main]
         pipeline = healthcheck recon account-server

      For more information on other modules that enable additional features, see the Deployment Guide.

   c. In the [filter:recon] section, configure the recon (metrics) cache directory:

         [filter:recon]
         ...
         recon_cache_path = /var/cache/swift

4. Edit the /etc/swift/container-server.conf file and complete the following actions:

   a. In the [DEFAULT] section, configure the bind IP address, bind port, user, configuration directory, and mount point directory:

         [DEFAULT]
         ...
         bind_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
         bind_port = 6001
         user = swift
         swift_dir = /etc/swift
         devices = /srv/node

      Replace MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node.

   b. In the [pipeline:main] section, enable the appropriate modules:

         [pipeline:main]
         pipeline = healthcheck recon container-server

      For more information on other modules that enable additional features, see the Deployment Guide.

   c. In the [filter:recon] section, configure the recon (metrics) cache directory:

         [filter:recon]
         ...
         recon_cache_path = /var/cache/swift
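   Before continuing with the object server configuration, you can optionally confirm that the bind settings took effect in the two files edited so far. This check is an addition to the guide and assumes you added the bind_ip and bind_port options as uncommented lines, as shown above:

         # grep -E '^(bind_ip|bind_port)' /etc/swift/account-server.conf /etc/swift/container-server.conf

   The output should show your management IP address for both files, with port 6002 for the account server and port 6001 for the container server.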
5. Edit the /etc/swift/object-server.conf file and complete the following actions:

   a. In the [DEFAULT] section, configure the bind IP address, bind port, user, configuration directory, and mount point directory:

         [DEFAULT]
         ...
         bind_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
         bind_port = 6000
         user = swift
         swift_dir = /etc/swift
         devices = /srv/node

      Replace MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node.

   b. In the [pipeline:main] section, enable the appropriate modules:

         [pipeline:main]
         pipeline = healthcheck recon object-server

      For more information on other modules that enable additional features, see the Deployment Guide.

   c. In the [filter:recon] section, configure the recon (metrics) cache and lock directories:

         [filter:recon]
         ...
         recon_cache_path = /var/cache/swift
         recon_lock_path = /var/lock

6. Ensure proper ownership of the mount point directory structure:

      # chown -R swift:swift /srv/node

7. Create the recon directory and ensure proper ownership of it:

      # mkdir -p /var/cache/swift
      # chown -R swift:swift /var/cache/swift
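   As a final optional check before you create the rings and start the services, you can verify that the mount points and the recon cache directory are owned by the swift user and group. This verification is an addition to the procedure and uses the example device names from this guide:

         # ls -ld /srv/node/sdb1 /srv/node/sdc1 /var/cache/swift

   Each directory should be listed with swift as both the owner and the group; if not, repeat the chown commands above.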