openstack-manuals/doc/install-guide-rst/source/swift-storage-node.rst
Christian Berendt 24395ba8d2 [install-guide] migrate section swift to RST
Implements: blueprint installguide-liberty

Change-Id: I45743e259ae4318a68c8ae64d2757671954ad0b1
2015-08-05 10:21:56 -04:00


Install and configure the storage nodes

This section describes how to install and configure storage nodes that operate the account, container, and object services. For simplicity, this configuration references two storage nodes, each containing two empty local block storage devices. Each of the devices, /dev/sdb and /dev/sdc, must contain a suitable partition table with one partition occupying the entire device. Although the Object Storage service supports any file system with extended attributes (xattr), testing and benchmarking indicate the best performance and reliability on XFS. For more information on horizontally scaling your environment, see the Deployment Guide.

To configure prerequisites

You must configure each storage node before you install and configure the Object Storage service on it. Similar to the controller node, each storage node contains one network interface on the management network. Optionally, each storage node can contain a second network interface on a separate network for replication. For more information, see basic_environment.

  1. Configure unique items on the first storage node:

    Configure the management interface:

    • IP address: 10.0.0.51
    • Network mask: 255.255.255.0 (or /24)
    • Default gateway: 10.0.0.1

    Set the hostname of the node to object1.

  2. Configure unique items on the second storage node:

    Configure the management interface:

    • IP address: 10.0.0.52
    • Network mask: 255.255.255.0 (or /24)
    • Default gateway: 10.0.0.1

    Set the hostname of the node to object2.

  3. Configure shared items on both storage nodes:

    • Copy the contents of the /etc/hosts file from the controller node and add the following to it:

      # object1
      10.0.0.51        object1
      
      # object2
      10.0.0.52        object2

      Also add this content to the /etc/hosts file on all other nodes in your environment.

    • Install and configure the Network Time Protocol (NTP) using the instructions in basics-ntp.

    • Install the supporting utility packages:

      ubuntu or debian

      # apt-get install xfsprogs rsync

      rdo

      # yum install xfsprogs rsync

      obs

      # zypper install xfsprogs rsync
    • Format the /dev/sdb1 and /dev/sdc1 partitions as XFS:

      # mkfs.xfs /dev/sdb1
      # mkfs.xfs /dev/sdc1
    • Create the mount point directory structure:

      # mkdir -p /srv/node/sdb1
      # mkdir -p /srv/node/sdc1
    • Edit the /etc/fstab file and add the following to it:

      /dev/sdb1 /srv/node/sdb1 xfs noatime,nodiratime,nobarrier,logbufs=8 0 2
      /dev/sdc1 /srv/node/sdc1 xfs noatime,nodiratime,nobarrier,logbufs=8 0 2
    • Mount the devices:

      # mount /srv/node/sdb1
      # mount /srv/node/sdc1
  4. Edit the /etc/rsyncd.conf file and add the following to it:

    uid = swift
    gid = swift
    log file = /var/log/rsyncd.log
    pid file = /var/run/rsyncd.pid
    address = MANAGEMENT_INTERFACE_IP_ADDRESS
    
    [account]
    max connections = 2
    path = /srv/node/
    read only = false
    lock file = /var/lock/account.lock
    
    [container]
    max connections = 2
    path = /srv/node/
    read only = false
    lock file = /var/lock/container.lock
    
    [object]
    max connections = 2
    path = /srv/node/
    read only = false
    lock file = /var/lock/object.lock

    Replace MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node.

    Note

    The rsync service requires no authentication, so consider running it on a private network.
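Before starting the service, it can help to eyeball which modules the configuration actually exports. A small self-contained sketch: it writes a trimmed sample of the file above to a temporary path (on a storage node, point CONF at /etc/rsyncd.conf instead) and extracts the bracketed section headers, which name the modules rsync serves.

```shell
# List the module names defined in an rsyncd.conf-style file.
# CONF is a temporary sample here; use /etc/rsyncd.conf on a real node.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
uid = swift
gid = swift

[account]
path = /srv/node/

[container]
path = /srv/node/

[object]
path = /srv/node/
EOF
# Bracketed section headers ([account], [container], [object]) name the
# modules rsync exports; split each such line on the brackets.
modules=$(awk -F'[][]' '/^\[/ { print $2 }' "$CONF")
echo "$modules"
```

Once rsyncd is running, the exported modules can also be listed over the network with `rsync rsync://MANAGEMENT_INTERFACE_IP_ADDRESS/`.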

ubuntu or debian

  1. Edit the /etc/default/rsync file and enable the rsync service:

    RSYNC_ENABLE=true
  2. Start the rsync service:

    # service rsync start

obs or rdo

  1. Start the rsyncd service and configure it to start when the system boots:

    # systemctl enable rsyncd.service
    # systemctl start rsyncd.service
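Before moving on to the package installation, it is worth confirming that the XFS mounts from the prerequisites are in place. The sketch below checks a mount-table snapshot for both data partitions; it uses a sample snapshot so the logic is self-contained, but on a real storage node you would set MOUNTS=/proc/mounts.

```shell
# Verify that both data partitions are mounted with the XFS file system.
# MOUNTS is a sample snapshot here; use /proc/mounts on a real node.
MOUNTS=$(mktemp)
cat > "$MOUNTS" <<'EOF'
/dev/sdb1 /srv/node/sdb1 xfs rw,noatime,nodiratime 0 0
/dev/sdc1 /srv/node/sdc1 xfs rw,noatime,nodiratime 0 0
EOF
ok=0
for dir in /srv/node/sdb1 /srv/node/sdc1; do
    # Field 2 is the mount point, field 3 the file system type.
    if awk -v d="$dir" '$2 == d && $3 == "xfs" { found = 1 } END { exit !found }' "$MOUNTS"; then
        echo "$dir mounted (xfs)"
        ok=$((ok + 1))
    else
        echo "$dir is NOT mounted as xfs" >&2
    fi
done
```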

Install and configure storage node components

Note

Default configuration files vary by distribution. You might need to add these sections and options rather than modifying existing sections and options. Also, an ellipsis (...) in the configuration snippets indicates potential default configuration options that you should retain.

Note

Perform these steps on each storage node.

  1. Install the packages:

    ubuntu or debian

    # apt-get install swift swift-account swift-container swift-object

    rdo

    # yum install openstack-swift-account openstack-swift-container \
      openstack-swift-object

    obs

    # zypper install openstack-swift-account \
      openstack-swift-container openstack-swift-object python-xml

ubuntu or rdo or debian

  1. Obtain the account, container, object, container-reconciler, and object-expirer service configuration files from the Object Storage source repository:

    # curl -o /etc/swift/account-server.conf \
      https://git.openstack.org/cgit/openstack/swift/plain/etc/account-server.conf-sample?h=stable/kilo
    # curl -o /etc/swift/container-server.conf \
      https://git.openstack.org/cgit/openstack/swift/plain/etc/container-server.conf-sample?h=stable/kilo
    # curl -o /etc/swift/object-server.conf \
      https://git.openstack.org/cgit/openstack/swift/plain/etc/object-server.conf-sample?h=stable/kilo
    # curl -o /etc/swift/container-reconciler.conf \
      https://git.openstack.org/cgit/openstack/swift/plain/etc/container-reconciler.conf-sample?h=stable/kilo
    # curl -o /etc/swift/object-expirer.conf \
      https://git.openstack.org/cgit/openstack/swift/plain/etc/object-expirer.conf-sample?h=stable/kilo
  2. Ensure proper ownership of the mount point directory structure:

    # chown -R swift:swift /srv/node
  3. Create the recon directory and ensure proper ownership of it:

    # mkdir -p /var/cache/swift
    # chown -R swift:swift /var/cache/swift
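After fetching the sample configuration files, a quick presence check catches a failed download before it surfaces later as a confusing service error. This sketch simulates the five files in a temporary stand-in for /etc/swift so it can be exercised anywhere; on a storage node, run the second loop against /etc/swift itself.

```shell
# Check that all five fetched configuration files are present and
# non-empty.  SWIFT_ETC is a temporary stand-in for /etc/swift.
SWIFT_ETC=$(mktemp -d)
for f in account-server container-server object-server \
         container-reconciler object-expirer; do
    printf '[DEFAULT]\n' > "$SWIFT_ETC/$f.conf"   # simulated download
done
missing=0
for f in account-server container-server object-server \
         container-reconciler object-expirer; do
    # -s is true only for files that exist and have non-zero size.
    [ -s "$SWIFT_ETC/$f.conf" ] || missing=$((missing + 1))
done
echo "missing configuration files: $missing"
```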

obs

  1. Ensure proper ownership of the mount point directory structure:

    # chown -R swift:swift /srv/node
  2. Create the recon directory and ensure proper ownership of it:

    # mkdir -p /var/cache/swift
    # chown -R swift:swift /var/cache/swift
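As a final sanity check, nothing under the data and cache directories should remain owned by another user after the chown commands above. On a storage node the check is simply `find /srv/node /var/cache/swift ! -user swift`, which prints any stray entries. The self-contained sketch below demonstrates the same shape against a freshly created stand-in directory owned by the current user, so it should find none.

```shell
# Ownership check sketch.  NODE_DIR stands in for /srv/node; on a real
# node compare against the swift user rather than the current one.
NODE_DIR=$(mktemp -d)
mkdir -p "$NODE_DIR/sdb1" "$NODE_DIR/sdc1"
# Count entries not owned by the expected user (here, the current user).
stray=$(find "$NODE_DIR" ! -user "$(id -un)" | wc -l | tr -d ' ')
echo "entries with unexpected owner: $stray"
```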