diff --git a/doc/admin-guide-cloud/compute/section_compute-networking-nova.xml b/doc/admin-guide-cloud/compute/section_compute-networking-nova.xml
index 85e0c53c84..e270cb3e77 100644
--- a/doc/admin-guide-cloud/compute/section_compute-networking-nova.xml
+++ b/doc/admin-guide-cloud/compute/section_compute-networking-nova.xml
@@ -652,7 +652,7 @@ auto eth1
iface eth1 inet dhcp
If the Virtual Network Service Neutron is installed, you can
specify the networks to attach to the interfaces by using the
- --nic flag with the the nova
+ --nic flag with the nova
command:
$ nova boot --image ed8b2a37-5535-4a5f-a615-443513036d71 --flavor 1 --nic net-id=NETWORK1_ID --nic net-id=NETWORK2_ID test-vm1
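For reference, the NETWORK1_ID and NETWORK2_ID values come from the Networking service and can be looked up first; a minimal sketch, assuming the neutron CLI is installed and credentials are sourced (the v4-fixed-ip option on the first interface is optional, and 10.0.0.10 is a placeholder address):
$ neutron net-list
$ nova boot --image ed8b2a37-5535-4a5f-a615-443513036d71 --flavor 1 \
  --nic net-id=NETWORK1_ID,v4-fixed-ip=10.0.0.10 \
  --nic net-id=NETWORK2_ID test-vm1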
diff --git a/doc/arch-design/specialized/section_multi_hypervisor_specialized.xml b/doc/arch-design/specialized/section_multi_hypervisor_specialized.xml
index 5acf4096b0..d2b4041955 100644
--- a/doc/arch-design/specialized/section_multi_hypervisor_specialized.xml
+++ b/doc/arch-design/specialized/section_multi_hypervisor_specialized.xml
@@ -46,7 +46,7 @@
than spawning directly on a hypervisor by calling
a specific host aggregate using the metadata of the image, the
VMware host aggregate compute nodes communicate with vCenter
- which then requests scheduling for the the instance to run on
+ which then requests scheduling for the instance to run on
an ESXi hypervisor. As of the Icehouse release, this
functionality requires that VMware Distributed Resource
Scheduler (DRS) is enabled on a cluster and set to "Fully
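An illustrative sketch of the image-metadata/host-aggregate pairing described above, assuming the AggregateImagePropertiesIsolation scheduler filter is enabled; the aggregate name, host name, property key, and IMAGE_ID are placeholders:
$ nova aggregate-create vmware-aggregate
$ nova aggregate-add-host vmware-aggregate vcenter-compute1
$ nova aggregate-set-metadata vmware-aggregate hypervisor_type=vmware
$ glance image-update --property hypervisor_type=vmware IMAGE_ID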
diff --git a/doc/install-guide/section_neutron-initial-networks.xml b/doc/install-guide/section_neutron-initial-networks.xml
index c5027b7045..33b35e60c8 100644
--- a/doc/install-guide/section_neutron-initial-networks.xml
+++ b/doc/install-guide/section_neutron-initial-networks.xml
@@ -249,7 +249,7 @@
To verify network connectivity
- From a host on the the external network, ping the tenant router
+ From a host on the external network, ping the tenant router
gateway:
$ ping -c 4 203.0.113.101
PING 203.0.113.101 (203.0.113.101) 56(84) bytes of data.
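If the gateway address is not known in advance, it can be read from the tenant router before pinging; a sketch, with demo-router as a placeholder for the router name (the external_gateway_info field lists the address assigned on the external network):
$ neutron router-show demo-router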
diff --git a/doc/user-guide/source/cli_swift_large_object_creation.rst b/doc/user-guide/source/cli_swift_large_object_creation.rst
index d93327d43c..c98b85bac4 100644
--- a/doc/user-guide/source/cli_swift_large_object_creation.rst
+++ b/doc/user-guide/source/cli_swift_large_object_creation.rst
@@ -260,7 +260,7 @@ describes their differences:
- Dynamic large object
* - End-to-end integrity
- Assured. The list of segments includes the MD5 checksum
- (``ETag``) of each segment. You cannot upload the the manifest
+ (``ETag``) of each segment. You cannot upload the manifest
object if the ``ETag`` in the list differs from the uploaded
segment object. If a segment is somehow lost, an attempt to
download the manifest object results in an error.
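A sketch of the static large object manifest this refers to: each entry names a segment together with its MD5 checksum and size, and the PUT is rejected if an etag does not match the corresponding uploaded segment. The container, object, and checksum values below are placeholders; OS_STORAGE_URL and OS_AUTH_TOKEN are as printed by swift auth:
$ cat > manifest.json <<EOF
[
    {"path": "/segments/file.part1", "etag": "SEGMENT1_MD5", "size_bytes": 1048576},
    {"path": "/segments/file.part2", "etag": "SEGMENT2_MD5", "size_bytes": 1048576}
]
EOF
$ curl -X PUT -T manifest.json -H "X-Auth-Token: $OS_AUTH_TOKEN" \
  "$OS_STORAGE_URL/container/file?multipart-manifest=put"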
diff --git a/doc/user-guide/source/set_up_clustering.rst b/doc/user-guide/source/set_up_clustering.rst
index 2d8b8305b5..2faaebca33 100644
--- a/doc/user-guide/source/set_up_clustering.rst
+++ b/doc/user-guide/source/set_up_clustering.rst
@@ -128,7 +128,7 @@ Set up clustering
- **Instance name. **\ This name consists of the replication set
name followed by the string -*n*, where *n* is 1 for the
- first instance in a replication set, 2 for the the second
+ first instance in a replication set, 2 for the second
instance, and so on. In this example, the instance names are
``cluster1-rs1-1``, ``cluster1-rs1-2``, and ``cluster1-rs1-3``.
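One way to confirm the generated instance names from the command line, assuming the python-troveclient is installed and the cluster was created as cluster1 (a sketch, not the only option); trove list shows the member instances, whose names follow the replication-set naming above:
$ trove cluster-show cluster1
$ trove list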