NFS driver

The Network File System (NFS) is a distributed file system protocol originally developed by Sun Microsystems in 1984. An NFS server exports one or more of its file systems, known as shares. An NFS client can mount these exported shares on its own file system. You can perform file actions on this mounted remote file system as if the file system were local.
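Outside of OpenStack, for example, a client could mount an exported share by hand (the local mount point is arbitrary):

# mount -t nfs 192.168.1.200:/storage /mnt/storage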
How the NFS driver works

The NFS driver, and other drivers based on it, work quite differently than a traditional block storage driver. The NFS driver does not actually allow an instance to
access a storage device at the block level. Instead, files
are created on an NFS share and mapped to instances, which
emulates a block device. This works in a similar way to
QEMU, which stores instances in the
/var/lib/nova/instances
directory.
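The driver provisions each volume as a file on one of the shares; by default the file is sparse, so space is allocated on demand. Conceptually, volume creation is similar to the following command (the path components are placeholders, explained later in this document):

# truncate -s 5G /var/lib/cinder/nfs/<share hash>/volume-<volume id>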
Enable the NFS driver and related options

To use Cinder with the NFS driver, first set the volume_driver option in cinder.conf:

volume_driver=cinder.volume.drivers.nfs.NfsDriver

The following table summarizes the options supported by the NFS driver.
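(Names and defaults are shown as of the Icehouse release; consult the Block Storage configuration reference for the complete list.)

Option                Default                 Description
nfs_shares_config     /etc/cinder/nfs_shares  File listing the NFS shares available to the driver
nfs_mount_point_base  $state_path/mnt         Base directory where cinder-volume mounts the shares
nfs_mount_options     (none)                  Mount options passed to the NFS client (see the nfs man page)
nfs_sparsed_volumes   true                    Create volumes as sparse files that allocate space on demand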
As of the Icehouse release, the NFS driver (and other drivers based on it) will attempt to mount shares using version 4.1 of the NFS protocol (including pNFS). If the mount attempt is unsuccessful due to a lack of client or server support, a subsequent mount attempt that requests the default behavior of the mount.nfs command will be performed. On most distributions, the default behavior is to attempt mounting first with NFS v4.0, then silently fall back to NFS v3.0 if necessary. If the nfs_mount_options configuration option contains a request for a specific version of NFS to be used, or if specific options are specified in the shares configuration file specified by the nfs_shares_config configuration option, the mount will be attempted as requested with no subsequent attempts.
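For example, to skip version negotiation and request NFS v3 for all shares, you could set the following in cinder.conf (one possible choice; vers=3 is a standard mount.nfs option):

nfs_mount_options=vers=3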
How to use the NFS driver

You need access to one or more NFS servers. Creating an NFS server is outside the scope of this document. This example assumes access to the following NFS
servers and mount points:

192.168.1.200:/storage
192.168.1.201:/storage
192.168.1.202:/storage

This example demonstrates the use of this driver with multiple NFS servers. Multiple servers are not required; one is usually enough.

Add your list of NFS servers to the file you
specified with the
nfs_shares_config option.
For example, if the value of this option was set
to /etc/cinder/shares.txt,
then:

# cat /etc/cinder/shares.txt
192.168.1.200:/storage
192.168.1.201:/storage
192.168.1.202:/storage

Comments are allowed in this file. They begin with a #.
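For example, a commented version of the shares file (the comment text is arbitrary):

# NFS servers available to cinder-volume
192.168.1.200:/storage
192.168.1.201:/storage
192.168.1.202:/storage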
Configure the nfs_mount_point_base
option. This is a directory where cinder-volume
mounts all NFS shares stored in
shares.txt. For this
example, /var/lib/cinder/nfs is
used. You can, of course, use the default value of
$state_path/mnt.
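Putting the pieces together, the NFS-related portion of cinder.conf for this example looks like this:

volume_driver=cinder.volume.drivers.nfs.NfsDriver
nfs_shares_config=/etc/cinder/shares.txt
nfs_mount_point_base=/var/lib/cinder/nfs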
Start the cinder-volume service. /var/lib/cinder/nfs should
now contain a directory for each NFS share
specified in shares.txt. The
name of each directory is a hashed name:

# ls /var/lib/cinder/nfs/
...
46c5db75dc3a3a50a10bfd1a456a9f3f
...
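Each directory is named after a hash of its corresponding share entry. As a sketch of the idea, assuming an MD5 digest of the address string (the exact scheme is an implementation detail of the driver):

$ python -c "import hashlib; print(hashlib.md5(b'192.168.1.200:/storage').hexdigest())"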
You can now create volumes as you normally would:

$ nova volume-create --display-name myvol 5
# ls /var/lib/cinder/nfs/46c5db75dc3a3a50a10bfd1a456a9f3f
volume-a8862558-e6d6-4648-b5df-bb84f31c8935

This volume can also be attached and deleted
just like other volumes. However, snapshotting is
not supported.
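For example, attaching, detaching, and deleting the volume (myinstance is a hypothetical server name; the device name is a suggestion that the hypervisor may override):

$ nova volume-attach myinstance a8862558-e6d6-4648-b5df-bb84f31c8935 /dev/vdb
$ nova volume-detach myinstance a8862558-e6d6-4648-b5df-bb84f31c8935
$ nova volume-delete a8862558-e6d6-4648-b5df-bb84f31c8935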
NFS driver notes

cinder-volume manages the mounting of the NFS shares as well as volume
creation on the shares. Keep this in mind when
planning your OpenStack architecture. If you have
one master NFS server, it might make sense to only
have one cinder-volume service to handle
all requests to that NFS server. However, if that
single server is unable to handle all requests,
more than one cinder-volume service is needed
as well as potentially more than one NFS
server.

Because data is stored in a file and not
actually on a block storage device, you might not
see the same IO performance as you would with a
traditional block storage driver. Please test
accordingly.
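For example, a rough sequential-write check from inside an instance, run against a freshly created, empty volume attached as /dev/vdb (an assumption; dd overwrites its target, so never point it at a disk holding data):

# dd if=/dev/zero of=/dev/vdb bs=1M count=1024 oflag=direct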
Despite possible IO performance loss, having volume data stored in a file might be beneficial.
For example, backing up volumes can be as easy as
copying the volume files.
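For example (the /backup destination is hypothetical; detach the volume first, or flush and sync, so the copy is consistent):

# cp /var/lib/cinder/nfs/46c5db75dc3a3a50a10bfd1a456a9f3f/volume-a8862558-e6d6-4648-b5df-bb84f31c8935 /backup/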
The usual need for regular IO flushing and syncing still applies.