diff --git a/doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml b/doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml
new file mode 100644
index 0000000000..ac3e437c21
--- /dev/null
+++ b/doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml
@@ -0,0 +1,153 @@
+
+ IBM GPFS Volume Driver
+ The General Parallel File System (GPFS) is a cluster file system that provides concurrent
+ access to file systems from multiple nodes. The storage provided by these nodes can be
+ direct attached, network attached, SAN attached, or a combination of these methods. GPFS
+ provides many features beyond common data access, including data replication, policy-based
+ storage management, and space-efficient file snapshot and clone operations.
+
+ How the GPFS Driver Works
+ This driver enables the use of GPFS in a fashion similar to that of the NFS driver. With
+ the GPFS driver, instances do not actually access a storage device at the block level.
+ Instead, volume backing files are created in a GPFS file system and mapped to instances,
+ where they emulate a block device.
+
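+ As an illustration of this model, a newly created volume appears simply as a (typically
+ sparse) file under the configured mount point. The path and volume ID below are
+ hypothetical examples, assuming gpfs_mount_point_base is set to
+ /gpfs/openstack/cinder/volumes:
+ # Hypothetical listing: a 50 GB volume backed by a sparse file (no blocks allocated yet)
+ $ ls -lhs /gpfs/openstack/cinder/volumes/
+ 0 -rw-rw-rw- 1 root root 50G Jan 10 12:00 volume-1a2b3c4d-0000-0000-0000-000000000000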
+
+ GPFS software must be installed and running on the nodes where the Cinder volume and
+ Nova compute services run in the OpenStack environment. A GPFS file
+ system must also be created and mounted on these nodes before starting the
+ cinder-volume service. The details of these GPFS-specific
+ steps are covered in the GPFS Administration documentation.
+
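+ As a minimal sketch of these pre-checks, the standard GPFS administration commands can
+ be used to confirm the daemon and mount state on the Cinder and Nova nodes; the file
+ system device name gpfs1 is an assumption chosen for illustration:
+ # Verify that the GPFS daemon is active on all nodes
+ $ mmgetstate -a
+ # Mount the file system cluster-wide, then confirm where it is mounted
+ $ mmmount gpfs1 -a
+ $ mmlsmount gpfs1 -L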
+
+ Optionally, Glance can be configured to store images on a GPFS file system. When
+ Cinder volumes are created from Glance images, if both the image and volume data reside
+ in the same GPFS file system, the data from the image files is moved efficiently to the
+ Cinder volumes using a copy-on-write optimization strategy.
+
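+ A minimal sketch of the Image service side of this setup, assuming the hypothetical
+ shared directory /gpfs/openstack/glance/images (the
+ filesystem_store_datadir option is the standard file system store
+ setting in glance-api.conf):
+ # glance-api.conf: keep image files on the shared GPFS file system
+ filesystem_store_datadir = /gpfs/openstack/glance/images
+ The matching Cinder options, gpfs_images_dir and
+ gpfs_images_share_mode, are covered in the next section.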
+
+ Enabling the GPFS Driver
+ To use Cinder with the GPFS driver, first set the volume_driver option in
+ cinder.conf:
+ volume_driver = cinder.volume.drivers.gpfs.GPFSDriver
+ The following table contains the configuration options supported by the GPFS
+ driver.
+
+
+ The flag gpfs_images_share_mode is only valid if the Image service
+ is configured to use GPFS with the gpfs_images_dir flag. Also note
+ that when the value of this flag is copy_on_write, the paths
+ specified by the gpfs_mount_point_base and
+ gpfs_images_dir flags must both reside in the same GPFS file system
+ and in the same GPFS fileset.
+
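+ Putting these options together, a minimal cinder.conf sketch might
+ look as follows; the two paths are assumptions for illustration and must both reside in
+ the same GPFS file system and fileset, as noted above:
+ # cinder.conf: GPFS driver with copy-on-write image-to-volume transfers
+ volume_driver = cinder.volume.drivers.gpfs.GPFSDriver
+ # Directory in the GPFS file system where volume backing files are created
+ gpfs_mount_point_base = /gpfs/openstack/cinder/volumes
+ # Must match the directory used by the Image service (same file system and fileset)
+ gpfs_images_dir = /gpfs/openstack/glance/images
+ gpfs_images_share_mode = copy_on_write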
+
+
+
+ Volume Creation Options
+ It is possible to specify additional volume configuration options on a per-volume
+ basis by specifying volume metadata. The volume is created using the specified options.
+ Changing the metadata after the volume is created has no effect. The following table
+ lists the volume creation options supported by the GPFS volume driver.
+
+ List of Volume Creation Options for GPFS Volume Driver:
+
+
+
+
+
+ Metadata Item Name
+ Description
+
+
+
+
+ fstype
+ The driver creates a file system or swap area on the new volume.
+ If fstype=swap is specified, the mkswap command is
+ used to create a swap area. Otherwise, the mkfs command is passed the
+ specified file system type, for example ext3 or ext4.
+
+
+ fslabel
+ The driver sets the file system label for the file system
+ specified by the fstype option. This value is only used if fstype is
+ specified.
+
+
+ data_pool_name
+
+ The driver assigns the volume file to the specified GPFS
+ storage pool. Note that the GPFS storage pool must already
+ exist.
+
+
+
+ replicas
+
+ Specify how many copies of the volume file to create. Valid values
+ are 1, 2, and, for GPFS V3.5.0.7 and later, 3. This value cannot be
+ greater than the value of the MaxDataReplicas attribute of the file
+ system.
+
+
+
+ dio
+
+ Enable or disable the Direct I/O caching policy for the volume
+ file. Valid values are "yes" and "no".
+
+
+
+ write_affinity_depth
+
+ Specify the allocation policy to be used for the volume file. Note
+ that this option only works if "allow-write-affinity" is set for the
+ GPFS data pool.
+
+
+
+ block_group_factor
+
+ Specify how many blocks are laid out sequentially in the volume
+ file to behave like a single large block. This option only works if
+ "allow-write-affinity" is set for the GPFS data pool.
+
+
+
+ write_affinity_failure_group
+
+ Specify the range of nodes (in a GPFS shared-nothing architecture)
+ where replicas of blocks in the volume file are to be written. See
+ the GPFS Administration and Programming Reference guide for more
+ details on this option.
+
+
+
+
+
+
+ Example Using Volume Creation Options
+ This example shows the creation of a 50 GB volume with an ext4 file system labeled
+ newfs and Direct I/O enabled:
+ $ cinder create --metadata fstype=ext4 fslabel=newfs dio=yes --display-name volume_1 50
+
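+ As a further example, the storage pool and replication options from the table above
+ can be combined in the same way; the pool name system_pool is an
+ assumption and must name an existing GPFS storage pool:
+ $ cinder create --metadata data_pool_name=system_pool replicas=2 --display-name volume_2 100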
+
+
+ Operational Notes for GPFS Driver
+
+ Snapshots and Clones
+ Volume snapshots are implemented using the GPFS file clone feature. Whenever a new
+ snapshot is created, the snapshot file is efficiently created as a read-only clone
+ parent of the volume, and the volume file uses a copy-on-write optimization strategy
+ to minimize data movement.
+ Similarly, when a new volume is created from a snapshot or from an existing volume,
+ the same approach is taken. The same approach is also used when a new volume is
+ created from a Glance image, if the source image is in raw format and
+ gpfs_images_share_mode is set to
+ copy_on_write.
+
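+ A brief sketch of the operations that exercise these clone paths, using the standard
+ cinder client (the display names and the 50 GB size are illustrative):
+ # Snapshot an existing volume; the snapshot becomes a read-only clone parent
+ $ cinder snapshot-create --display-name snap_1 <volume-uuid>
+ # Create a new 50 GB volume from that snapshot; blocks are shared copy-on-write
+ $ cinder create --snapshot-id <snapshot-uuid> --display-name volume_3 50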
+
+
diff --git a/doc/config-reference/block-storage/section_volume-drivers.xml b/doc/config-reference/block-storage/section_volume-drivers.xml
index 347e67bf49..e636e57450 100644
--- a/doc/config-reference/block-storage/section_volume-drivers.xml
+++ b/doc/config-reference/block-storage/section_volume-drivers.xml
@@ -21,6 +21,7 @@ iscsi_helper=tgtadm
+