From eaa9815ad210c8e307a8c45e57b8b0e2bceb8121 Mon Sep 17 00:00:00 2001
From: chenxing <chason.chan@foxmail.com>
Date: Fri, 28 Sep 2018 10:14:29 +0800
Subject: [PATCH] Remove '.. end' comments

Follow-up to https://review.openstack.org/#/c/605097/
These comments were consumed by now-dead tooling; we can remove them.
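
The removal is mechanical. A sketch of the kind of one-liner that could
produce a change of this shape (GNU sed assumed; the sample file below is
illustrative, not taken from the tree, and this is not necessarily the
command actually used):

```shell
# Build a small sample mimicking the docs being patched: a code block
# followed by a ".. end" marker and the blank line after it.
cat > /tmp/sample.rst <<'EOF'
.. code-block:: yaml

   enable_haproxy: "no"

.. end

Note this method is not recommended.
EOF

# Delete every (possibly indented) ".. end" line together with the blank
# line that follows it; the "addr,+1" form and "-i" are GNU sed extensions.
sed -i -e '/^ *\.\. end *$/,+1d' /tmp/sample.rst

cat /tmp/sample.rst
```

Applied tree-wide, something along the lines of
  grep -rlF '.. end' doc/source | xargs sed -i -e '/^ *\.\. end *$/,+1d'
would yield a diff of this shape.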

Change-Id: I0953751044f038a3fdd1acd49b3d2b053ac4bec8
---
 doc/source/admin/advanced-configuration.rst   | 27 -------
 doc/source/admin/deployment-philosophy.rst    |  2 -
 doc/source/contributor/CONTRIBUTING.rst       |  2 -
 .../kolla-for-openstack-development.rst       | 12 ---
 doc/source/contributor/running-tests.rst      | 20 -----
 doc/source/contributor/vagrant-dev-env.rst    | 38 ---------
 doc/source/reference/bifrost.rst              | 50 ------------
 .../reference/central-logging-guide.rst       |  6 --
 doc/source/reference/ceph-guide.rst           | 63 ---------------
 doc/source/reference/cinder-guide-hnas.rst    | 24 ------
 doc/source/reference/cinder-guide.rst         | 28 -------
 doc/source/reference/designate-guide.rst      | 22 -----
 doc/source/reference/external-ceph-guide.rst  | 36 ---------
 .../reference/external-mariadb-guide.rst      | 23 ------
 doc/source/reference/horizon-guide.rst        |  1 -
 doc/source/reference/hyperv-guide.rst         | 22 -----
 doc/source/reference/ironic-guide.rst         | 36 ---------
 doc/source/reference/kuryr-guide.rst          | 12 ---
 doc/source/reference/manila-guide.rst         | 36 ---------
 doc/source/reference/manila-hnas-guide.rst    | 32 --------
 doc/source/reference/networking-guide.rst     | 48 -----------
 doc/source/reference/nova-fake-driver.rst     |  2 -
 doc/source/reference/osprofiler-guide.rst     |  6 --
 doc/source/reference/resource-constraints.rst |  7 --
 doc/source/reference/skydive-guide.rst        |  2 -
 doc/source/reference/swift-guide.rst          | 18 -----
 doc/source/reference/tacker-guide.rst         | 21 -----
 doc/source/reference/vmware-guide.rst         | 28 -------
 doc/source/reference/zun-guide.rst            | 22 -----
 doc/source/user/multi-regions.rst             | 20 -----
 doc/source/user/multinode.rst                 | 19 -----
 doc/source/user/operating-kolla.rst           |  9 ---
 doc/source/user/quickstart.rst                | 81 -------------------
 doc/source/user/troubleshooting.rst           | 14 ----
 34 files changed, 789 deletions(-)

diff --git a/doc/source/admin/advanced-configuration.rst b/doc/source/admin/advanced-configuration.rst
index fd9f9fa78e..501b2d8605 100644
--- a/doc/source/admin/advanced-configuration.rst
+++ b/doc/source/admin/advanced-configuration.rst
@@ -31,8 +31,6 @@ API requests, internal and external, will flow over the same network.
    kolla_internal_vip_address: "10.10.10.254"
    network_interface: "eth0"
 
-.. end
-
 For the separate option, set these four variables. In this configuration
 the internal and external REST API requests can flow over separate
 networks.
@@ -44,8 +42,6 @@ networks.
    kolla_external_vip_address: "10.10.20.254"
    kolla_external_vip_interface: "eth1"
 
-.. end
-
 Fully Qualified Domain Name Configuration
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
@@ -62,8 +58,6 @@ in your kolla deployment use the variables:
    kolla_internal_fqdn: inside.mykolla.example.net
    kolla_external_fqdn: mykolla.example.net
 
-.. end
-
 Provisions must be taken outside of kolla for these names to map to the
 configured IP addresses. Using a DNS server or the ``/etc/hosts`` file
 are two ways to create this mapping.
@@ -100,8 +94,6 @@ The default for TLS is disabled, to enable TLS networking:
    kolla_enable_tls_external: "yes"
    kolla_external_fqdn_cert: "{{ node_config_directory }}/certificates/mycert.pem"
 
-.. end
-
 .. note::
 
    TLS authentication is based on certificates that have been
@@ -138,8 +130,6 @@ have settings similar to this:
    export OS_CACERT=/etc/pki/mykolla-cacert.crt
    export OS_IDENTITY_API_VERSION=3
 
-.. end
-
 Self-Signed Certificates
 ~~~~~~~~~~~~~~~~~~~~~~~~
 
@@ -191,8 +181,6 @@ needs to create ``/etc/kolla/config/nova/nova-scheduler.conf`` with content:
    [DEFAULT]
    scheduler_max_attempts = 100
 
-.. end
-
 If the operator wants to configure compute node cpu and ram allocation ratio
 on host myhost, the operator needs to create file
 ``/etc/kolla/config/nova/myhost/nova.conf`` with content:
@@ -204,8 +192,6 @@ on host myhost, the operator needs to create file
    cpu_allocation_ratio = 16.0
    ram_allocation_ratio = 5.0
 
-.. end
-
 Kolla allows the operator to override configuration globally for all services.
 It will look for a file called ``/etc/kolla/config/global.conf``.
 
@@ -218,8 +204,6 @@ operator needs to create ``/etc/kolla/config/global.conf`` with content:
    [database]
    max_pool_size = 100
 
-.. end
-
 In case operators want to customize the ``policy.json`` file, they should
 create a full policy file for the specific project in the same directory as
 above, and Kolla will overwrite the default policy file with it. Be aware, with some
@@ -242,8 +226,6 @@ using following command:
 
    kolla-ansible reconfigure
 
-.. end
-
 IP Address Constrained Environments
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
@@ -255,8 +237,6 @@ adding:
 
    enable_haproxy: "no"
 
-.. end
-
 Note this method is not recommended and generally not tested by the
 Kolla community, but included since sometimes a free IP is not available
 in a testing environment.
@@ -271,8 +251,6 @@ first disable the deployment of the central logging.
 
    enable_central_logging: "no"
 
-.. end
-
 Now you can use the parameter ``elasticsearch_address`` to configure the
 address of the external Elasticsearch environment.
 
@@ -287,8 +265,6 @@ for service(s) in Kolla. It is possible with setting
 
    database_port: 3307
 
-.. end
-
 As the ``<service>_port`` value is saved in each service's configuration, it is
 advised to make the above change before deploying.
 
@@ -304,8 +280,6 @@ You can set syslog parameters in ``globals.yml`` file. For example:
    syslog_server: "172.29.9.145"
    syslog_udp_port: "514"
 
-.. end
-
 You can also set syslog facility names for Swift and HAProxy logs.
 By default, Swift and HAProxy use ``local0`` and ``local1``, respectively.
 
@@ -314,4 +288,3 @@ By default, Swift and HAProxy use ``local0`` and ``local1``, respectively.
    syslog_swift_facility: "local0"
    syslog_haproxy_facility: "local1"
 
-.. end
diff --git a/doc/source/admin/deployment-philosophy.rst b/doc/source/admin/deployment-philosophy.rst
index 1f82b7e52b..0af5229d25 100644
--- a/doc/source/admin/deployment-philosophy.rst
+++ b/doc/source/admin/deployment-philosophy.rst
@@ -66,8 +66,6 @@ result, simply :command:`mkdir -p /etc/kolla/config` and modify the file
    virt_type=qemu
    cpu_mode = none
 
-.. end
-
 After this change Kolla will use an emulated hypervisor with lower performance.
 Kolla could have templated this commonly modified configuration option. If
 Kolla starts down this path, the Kolla project could end with hundreds of
diff --git a/doc/source/contributor/CONTRIBUTING.rst b/doc/source/contributor/CONTRIBUTING.rst
index 5b21a048a0..8b43d36d75 100644
--- a/doc/source/contributor/CONTRIBUTING.rst
+++ b/doc/source/contributor/CONTRIBUTING.rst
@@ -93,8 +93,6 @@ that Kolla uses throughout that should be followed.
        {
        }
 
-    .. end
-
   - For OpenStack services there should be an entry in the ``services`` list
     in the ``cron.json.j2`` template file in ``ansible/roles/common/templates``.
 
diff --git a/doc/source/contributor/kolla-for-openstack-development.rst b/doc/source/contributor/kolla-for-openstack-development.rst
index db135478fe..f420b49f5c 100644
--- a/doc/source/contributor/kolla-for-openstack-development.rst
+++ b/doc/source/contributor/kolla-for-openstack-development.rst
@@ -30,8 +30,6 @@ To enable dev mode for all supported services, set in
 
    kolla_dev_mode: true
 
-.. end
-
 To enable it just for heat, set:
 
 .. path /etc/kolla/globals.yml
@@ -39,8 +37,6 @@ To enable it just for heat, set:
 
    heat_dev_mode: true
 
-.. end
-
 Usage
 -----
 
@@ -54,8 +50,6 @@ After making code changes, simply restart the container to pick them up:
 
    docker restart heat_api
 
-.. end
-
 Debugging
 ---------
 
@@ -66,8 +60,6 @@ make sure it is installed in the container in question:
 
    docker exec -it -u root heat_api pip install remote_pdb
 
-.. end
-
 Then, set your breakpoint as follows:
 
 .. code-block:: python
@@ -75,8 +67,6 @@ Then, set your breakpoint as follows:
    from remote_pdb import RemotePdb
    RemotePdb('127.0.0.1', 4444).set_trace()
 
-.. end
-
 Once you run the code (restart the container), pdb can be accessed using
 ``socat``:
 
@@ -84,7 +74,5 @@ Once you run the code(restart the container), pdb can be accessed using
 
    socat readline tcp:127.0.0.1:4444
 
-.. end
-
 Learn more information about `remote_pdb
 <https://pypi.org/project/remote-pdb/>`_.
diff --git a/doc/source/contributor/running-tests.rst b/doc/source/contributor/running-tests.rst
index 2f897493c8..e25cd7a0cb 100644
--- a/doc/source/contributor/running-tests.rst
+++ b/doc/source/contributor/running-tests.rst
@@ -25,8 +25,6 @@ so the only package you install is ``tox`` itself:
 
    pip install tox
 
-.. end
-
 For more information, see `the unit testing section of the Testing wiki page
 <https://wiki.openstack.org/wiki/Testing#Unit_Tests>`_. For example:
 
@@ -36,24 +34,18 @@ To run the Python 2.7 tests:
 
    tox -e py27
 
-.. end
-
 To run the style tests:
 
 .. code-block:: console
 
    tox -e pep8
 
-.. end
-
 To run multiple tests separate items by commas:
 
 .. code-block:: console
 
    tox -e py27,py35,pep8
 
-.. end
-
 Running a subset of tests
 -------------------------
 
@@ -68,8 +60,6 @@ directory use:
 
    tox -e py27 kolla-ansible.tests
 
-.. end
-
 To run the tests of a specific file
 ``kolla-ansible/tests/test_kolla_docker.py``:
 
@@ -77,8 +67,6 @@ To run the tests of a specific file
 
    tox -e py27 test_kolla_docker
 
-.. end
-
 To run the tests in the ``ModuleArgsTest`` class in
 the ``kolla-ansible/tests/test_kolla_docker.py`` file:
 
@@ -86,8 +74,6 @@ the ``kolla-ansible/tests/test_kolla_docker.py`` file:
 
    tox -e py27 test_kolla_docker.ModuleArgsTest
 
-.. end
-
 To run the ``ModuleArgsTest.test_module_args`` test method in
 the ``kolla-ansible/tests/test_kolla_docker.py`` file:
 
@@ -95,8 +81,6 @@ the ``kolla-ansible/tests/test_kolla_docker.py`` file:
 
    tox -e py27 test_kolla_docker.ModuleArgsTest.test_module_args
 
-.. end
-
 Debugging unit tests
 --------------------
 
@@ -107,8 +91,6 @@ a breaking point to the code:
 
    import pdb; pdb.set_trace()
 
-.. end
-
 Then run ``tox`` with the debug environment as one of the following:
 
 .. code-block:: console
@@ -116,8 +98,6 @@ Then run ``tox`` with the debug environment as one of the following:
    tox -e debug
    tox -e debug test_file_name.TestClass.test_name
 
-.. end
-
 For more information, see the `oslotest documentation
 <https://docs.openstack.org/oslotest/latest/user/features.html#debugging-with-oslo-debug-helper>`_.
 
diff --git a/doc/source/contributor/vagrant-dev-env.rst b/doc/source/contributor/vagrant-dev-env.rst
index 95f0ee0aca..2d70f6c1f7 100644
--- a/doc/source/contributor/vagrant-dev-env.rst
+++ b/doc/source/contributor/vagrant-dev-env.rst
@@ -50,8 +50,6 @@ For CentOS 7 or later:
    qemu-kvm qemu-img libvirt libvirt-python libvirt-client virt-install \
    bridge-utils git
 
-.. end
-
 For Ubuntu 16.04 or later:
 
 .. code-block:: console
@@ -60,8 +58,6 @@ For Ubuntu 16.04 or later:
    qemu-utils qemu-kvm libvirt-dev nfs-kernel-server zlib1g-dev libpng12-dev \
    gcc git
 
-.. end
-
 .. note::
 
    Many distros ship outdated versions of Vagrant by default. When in
@@ -74,16 +70,12 @@ Next install the hostmanager plugin so all hosts are recorded in ``/etc/hosts``
 
    vagrant plugin install vagrant-hostmanager
 
-.. end
-
 If you are going to use VirtualBox, then install vagrant-vbguest:
 
 .. code-block:: console
 
    vagrant plugin install vagrant-vbguest
 
-.. end
-
 Vagrant supports a wide range of virtualization technologies. If VirtualBox is
 used, the vbguest plugin will be required to install the VirtualBox Guest
 Additions in the virtual machine:
@@ -92,8 +84,6 @@ Additions in the virtual machine:
 
    vagrant plugin install vagrant-vbguest
 
-.. end
-
 This documentation focuses on libvirt specifics. To install vagrant-libvirt
 plugin:
 
@@ -101,8 +91,6 @@ plugin:
 
    vagrant plugin install --plugin-version ">= 0.0.31" vagrant-libvirt
 
-.. end
-
 Some Linux distributions offer vagrant-libvirt packages, but the version they
 provide tends to be too old to run Kolla. A version of >= 0.0.31 is required.
 
@@ -114,8 +102,6 @@ a password, add the user to the libvirt group:
    sudo gpasswd -a ${USER} libvirt
    newgrp libvirt
 
-.. end
-
 .. note::
 
    In Ubuntu 16.04 and later, libvirtd group is used.
@@ -131,8 +117,6 @@ than VirtualBox shared folders. For CentOS:
    sudo firewall-cmd --zone=internal --add-interface=virbr0
    sudo firewall-cmd --zone=internal --add-interface=virbr1
 
-.. end
-
 #. Enable nfs, rpc-bind and mountd services for firewalld:
 
 .. code-block:: console
@@ -146,8 +130,6 @@ than VirtualBox shared folders. For CentOS:
    sudo firewall-cmd --permanent --add-port=111/tcp
    sudo firewall-cmd --reload
 
-.. end
-
 .. note::
 
    You may not have to do this because Ubuntu uses Uncomplicated Firewall (ufw)
@@ -161,8 +143,6 @@ than VirtualBox shared folders. For CentOS:
    sudo systemctl start nfs-server
    sudo systemctl start rpcbind.service
 
-.. end
-
 Ensure your system has libvirt and associated software installed and setup
 correctly. For CentOS:
 
@@ -171,8 +151,6 @@ correctly. For CentOS:
    sudo systemctl start libvirtd
    sudo systemctl enable libvirtd
 
-.. end
-
 Find a location in the system's home directory and checkout Kolla repos:
 
 .. code-block:: console
@@ -181,8 +159,6 @@ Find a location in the system's home directory and checkout Kolla repos:
    git clone https://git.openstack.org/openstack/kolla-ansible
    git clone https://git.openstack.org/openstack/kolla
 
-.. end
-
 All repos must share the same parent directory so the bootstrap code can
 locate them.
 
@@ -193,8 +169,6 @@ CentOS 7-based environment:
 
    cd kolla-ansible/contrib/dev/vagrant && vagrant up
 
-.. end
-
 The command ``vagrant status`` provides a quick overview of the VMs composing
 the environment.
 
@@ -208,8 +182,6 @@ Kolla. First, connect with the **operator** node:
 
    vagrant ssh operator
 
-.. end
-
 To speed things up, there is a local registry running on the operator. All
 nodes are configured so they can use this insecure repo to pull from, and use
 it as a mirror. Ansible may use this registry to pull images from.
@@ -231,8 +203,6 @@ Once logged on the **operator** VM call the ``kolla-build`` utility:
 
    kolla-build
 
-.. end
-
 ``kolla-build`` accepts arguments as documented in `Building Container Images
 <https://docs.openstack.org/kolla/latest/admin/image-building.html>`_.
 It builds Docker images and pushes them to the local registry if the **push**
@@ -247,8 +217,6 @@ To deploy **all-in-one**:
 
    sudo kolla-ansible deploy
 
-.. end
-
 To deploy multinode:
 
 For Centos 7:
@@ -257,16 +225,12 @@ For Centos 7:
 
    sudo kolla-ansible deploy -i /usr/share/kolla-ansible/ansible/inventory/multinode
 
-.. end
-
 For Ubuntu 16.04 or later:
 
 .. code-block:: console
 
    sudo kolla-ansible deploy -i /usr/local/share/kolla-ansible/ansible/inventory/multinode
 
-.. end
-
 Validate OpenStack is operational:
 
 .. code-block:: console
@@ -275,8 +239,6 @@ Validate OpenStack is operational:
    . /etc/kolla/admin-openrc.sh
    openstack user list
 
-.. end
-
 Or navigate to ``http://172.28.128.254/`` with a web browser.
 
 Further Reading
diff --git a/doc/source/reference/bifrost.rst b/doc/source/reference/bifrost.rst
index ea45708563..422eb570f3 100644
--- a/doc/source/reference/bifrost.rst
+++ b/doc/source/reference/bifrost.rst
@@ -87,8 +87,6 @@ resolving the deployment host's hostname to ``127.0.0.1``, for example:
     cat /etc/hosts
     127.0.0.1 bifrost localhost
 
-.. end
-
 The following lines are desirable for IPv6 capable hosts:
 
 .. code-block:: console
@@ -101,8 +99,6 @@ The following lines are desirable for IPv6 capable hosts:
     ff02::3 ip6-allhosts
     192.168.100.15 bifrost
 
-.. end
-
 Build a Bifrost Container Image
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
@@ -123,8 +119,6 @@ bifrost image.
         cd kolla
         tox -e genconfig
 
-     .. end
-
    * Modify ``kolla-build.conf``, setting ``install_type`` to ``source``:
 
      .. path etc/kolla/kolla-build.conf
@@ -132,8 +126,6 @@ bifrost image.
 
         install_type = source
 
-     .. end
-
 Alternatively, instead of using ``kolla-build.conf``, a ``source`` build can
 be enabled by appending ``--type source`` to the :command:`kolla-build` or
 ``tools/build.py`` command.
@@ -145,16 +137,12 @@ be enabled by appending ``--type source`` to the :command:`kolla-build` or
       cd kolla
       tools/build.py bifrost-deploy
 
-   .. end
-
    For Production:
 
    .. code-block:: console
 
       kolla-build bifrost-deploy
 
-   .. end
-
    .. note::
 
       By default :command:`kolla-build` will build all containers using CentOS as
@@ -165,8 +153,6 @@ be enabled by appending ``--type source`` to the :command:`kolla-build` or
 
          --base [ubuntu|centos|oraclelinux]
 
-      .. end
-
 Configure and Deploy a Bifrost Container
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
@@ -204,8 +190,6 @@ different than ``network_interface``.  For example to use ``eth1``:
 
    bifrost_network_interface: eth1
 
-.. end
-
 Note that this interface should typically have L2 network connectivity with the
 bare metal cloud hosts in order to provide DHCP leases with PXE boot options.
 
@@ -216,8 +200,6 @@ reflected in ``globals.yml``
 
    kolla_install_type: source
 
-.. end
-
 Prepare Bifrost Configuration
 -----------------------------
 
@@ -266,8 +248,6 @@ properties and a logical name.
        cpus: "16"
      name: "cloud1"
 
-.. end
-
 The required inventory will be specific to the hardware and environment in use.
 
 Create Bifrost Configuration
@@ -288,8 +268,6 @@ For details on bifrost's variables see the bifrost documentation. For example:
    # dhcp_lease_time: 12h
    # dhcp_static_mask: 255.255.255.0
 
-.. end
-
 Create Disk Image Builder Configuration
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
@@ -305,8 +283,6 @@ For example, to use the ``debian`` Disk Image Builder OS element:
 
    dib_os_element: debian
 
-.. end
-
 See the `diskimage-builder documentation
 <https://docs.openstack.org/diskimage-builder/latest/>`__ for more details.
 
@@ -325,16 +301,12 @@ For development:
    cd kolla-ansible
    tools/kolla-ansible deploy-bifrost
 
-.. end
-
 For Production:
 
 .. code-block:: console
 
    kolla-ansible deploy-bifrost
 
-.. end
-
 Deploy Bifrost manually
 -----------------------
 
@@ -346,8 +318,6 @@ Deploy Bifrost manually
       --privileged --name bifrost_deploy \
       kolla/ubuntu-source-bifrost-deploy:3.0.1
 
-   .. end
-
 #. Copy Configuration Files
 
    .. code-block:: console
@@ -357,24 +327,18 @@ Deploy Bifrost manually
       docker cp /etc/kolla/config/bifrost/bifrost.yml bifrost_deploy:/etc/bifrost/bifrost.yml
       docker cp /etc/kolla/config/bifrost/dib.yml bifrost_deploy:/etc/bifrost/dib.yml
 
-   .. end
-
 #. Bootstrap Bifrost
 
    .. code-block:: console
 
       docker exec -it bifrost_deploy bash
 
-   .. end
-
 #. Generate an SSH Key
 
    .. code-block:: console
 
       ssh-keygen
 
-   .. end
-
 #. Bootstrap and Start Services
 
    .. code-block:: console
@@ -392,8 +356,6 @@ Deploy Bifrost manually
       -e @/etc/bifrost/dib.yml \
       -e skip_package_install=true
 
-   .. end
-
 Validate the Deployed Container
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
@@ -403,8 +365,6 @@ Validate the Deployed Container
    cd /bifrost
    . env-vars
 
-.. end
-
 Running "ironic node-list" should return with no nodes, for example
 
 .. code-block:: console
@@ -415,8 +375,6 @@ Running "ironic node-list" should return with no nodes, for example
    +------+------+---------------+-------------+--------------------+-------------+
    +------+------+---------------+-------------+--------------------+-------------+
 
-.. end
-
 Enroll and Deploy Physical Nodes
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
@@ -434,16 +392,12 @@ For Development:
 
    tools/kolla-ansible deploy-servers
 
-.. end
-
 For Production:
 
 .. code-block:: console
 
    kolla-ansible deploy-servers
 
-.. end
-
 Manually
 --------
 
@@ -469,8 +423,6 @@ Manually
    -e "ansible_python_interpreter=/var/lib/kolla/venv/bin/python" \
    -e @/etc/bifrost/bifrost.yml
 
-.. end
-
 At this point Ironic should clean down the nodes and install the default
 OS image.
 
@@ -503,8 +455,6 @@ done remotely with :command:`ipmitool` and Serial Over LAN. For example
 
    ipmitool -I lanplus -H 192.168.1.30 -U admin -P root sol activate
 
-.. end
-
 References
 ~~~~~~~~~~
 
diff --git a/doc/source/reference/central-logging-guide.rst b/doc/source/reference/central-logging-guide.rst
index 540cd63436..cb494db718 100644
--- a/doc/source/reference/central-logging-guide.rst
+++ b/doc/source/reference/central-logging-guide.rst
@@ -18,8 +18,6 @@ the following:
 
    enable_central_logging: "yes"
 
-.. end
-
 Elasticsearch
 ~~~~~~~~~~~~~
 
@@ -89,8 +87,6 @@ First, re-run the server creation with ``--debug``:
    --key-name mykey --nic net-id=00af016f-dffe-4e3c-a9b8-ec52ccd8ea65 \
    demo1
 
-.. end
-
 In this output, look for the key ``X-Compute-Request-Id``. This is a unique
 identifier that can be used to track the request through the system. An
 example ID looks like this:
@@ -99,8 +95,6 @@ example ID looks like this:
 
    X-Compute-Request-Id: req-c076b50a-6a22-48bf-8810-b9f41176a6d5
 
-.. end
-
 Taking the value of ``X-Compute-Request-Id``, enter the value into the Kibana
 search bar, minus the leading ``req-``. Assuming some basic filters have been
 added as shown in the previous section, Kibana should now show the path this
diff --git a/doc/source/reference/ceph-guide.rst b/doc/source/reference/ceph-guide.rst
index 3f179912ea..34d27a9969 100644
--- a/doc/source/reference/ceph-guide.rst
+++ b/doc/source/reference/ceph-guide.rst
@@ -45,8 +45,6 @@ operations:
 
    parted $DISK -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP 1 -1
 
-.. end
-
 The following shows an example of using parted to configure ``/dev/sdb`` for
 usage with Kolla.
 
@@ -61,8 +59,6 @@ usage with Kolla.
    Number  Start   End     Size    File system  Name                      Flags
         1      1049kB  10.7GB  10.7GB               KOLLA_CEPH_OSD_BOOTSTRAP
 
-.. end
-
 Bluestore
 ~~~~~~~~~
 
@@ -72,8 +68,6 @@ To prepare a bluestore OSD partition, execute the following operations:
 
    parted $DISK -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP_BS 1 -1
 
-.. end
-
 If only one device is offered, Kolla Ceph will create the bluestore OSD on the
 device. Kolla Ceph will create two partitions for OSD and block separately.
 
@@ -87,8 +81,6 @@ To prepare a bluestore OSD block partition, execute the following operations:
 
    parted $DISK -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP_BS_FOO_B 1 -1
 
-.. end
-
 To prepare a bluestore OSD block.wal partition, execute the following
 operations:
 
@@ -96,8 +88,6 @@ operations:
 
    parted $DISK -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP_BS_FOO_W 1 -1
 
-.. end
-
 To prepare a bluestore OSD block.db partition, execute the following
 operations:
 
@@ -105,8 +95,6 @@ operations:
 
    parted $DISK -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP_BS_FOO_D 1 -1
 
-.. end
-
 Kolla Ceph will handle a bluestore OSD using up to the four partition labels
 described above. For a Ceph bluestore OSD, the block.wal and block.db
 partitions are not mandatory.
@@ -127,8 +115,6 @@ Using an external journal drive
 
    The section is only meaningful for Ceph filestore OSD.
 
-.. end
-
 The steps documented above created a journal partition of 5 GByte
 and a data partition with the remaining storage capacity on the same tagged
 drive.
@@ -146,16 +132,12 @@ Prepare the storage drive in the same way as documented above:
 
    parted $DISK -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP_FOO 1 -1
 
-.. end
-
 To prepare the journal external drive execute the following command:
 
 .. code-block:: console
 
    parted $DISK -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP_FOO_J 1 -1
 
-.. end
-
 .. note::
 
    Use different suffixes (``_42``, ``_FOO``, ``_FOO42``, ..) to use different external
@@ -182,24 +164,18 @@ of the hosts that have the block devices you have prepped as shown above.
    controller
    compute1
 
-.. end
-
 Enable Ceph in ``/etc/kolla/globals.yml``:
 
 .. code-block:: yaml
 
    enable_ceph: "yes"
 
-.. end
-
 RadosGW is optional, enable it in ``/etc/kolla/globals.yml``:
 
 .. code-block:: yaml
 
    enable_ceph_rgw: "yes"
 
-.. end
-
 .. note::
 
     By default RadosGW supports both Swift and S3 API, and it is not
@@ -208,8 +184,6 @@ RadosGW is optional, enable it in ``/etc/kolla/globals.yml``:
     compatibility with Swift API completely. After changing the value, run the
     "reconfigure“ command to enable.
 
-.. end
-
 Configure the Ceph store type in ``ansible/group_vars/all.yml``; the default
 value in Rocky is ``bluestore``:
 
@@ -217,8 +191,6 @@ value is ``bluestore`` in Rocky:
 
    ceph_osd_store_type: "bluestore"
 
-.. end
-
 .. note::
 
     Regarding number of placement groups (PGs)
@@ -229,8 +201,6 @@ value is ``bluestore`` in Rocky:
     *highly* recommended to consult the official Ceph documentation regarding
     these values before running Ceph in any kind of production scenario.
 
-.. end
-
 RGW requires a healthy cluster in order to be successfully deployed. On initial
 start up, RGW will create several pools. The first pool should be in an
 operational state to proceed with the second one, and so on. So, in the case of
@@ -245,8 +215,6 @@ copies for the pools before deployment. Modify the file
    osd pool default size = 1
    osd pool default min size = 1
 
-.. end
-
 To build a high performance and secure Ceph Storage Cluster, the Ceph community
 recommends the use of two separate networks: a public network and a cluster network.
 Edit the ``/etc/kolla/globals.yml`` and configure the ``cluster_interface``:
@@ -256,8 +224,6 @@ Edit the ``/etc/kolla/globals.yml`` and configure the ``cluster_interface``:
 
    cluster_interface: "eth2"
 
-.. end
-
 For more details, see `NETWORK CONFIGURATION REFERENCE
 <http://docs.ceph.com/docs/master/rados/configuration/network-config-ref/#ceph-networks>`_
 of Ceph Documentation.
@@ -271,8 +237,6 @@ Finally deploy the Ceph-enabled OpenStack:
 
    kolla-ansible deploy -i path/to/inventory
 
-.. end
-
 Using Cache Tiering
 -------------------
 
@@ -287,16 +251,12 @@ operations:
 
    parted $DISK -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_CACHE_BOOTSTRAP 1 -1
 
-.. end
-
 .. note::
 
    To prepare a bluestore OSD as a cache device, change the partition name in
    the above command to "KOLLA_CEPH_OSD_CACHE_BOOTSTRAP_BS". The deployment of
    bluestore cache OSD is the same as bluestore OSD.
 
-.. end
-
 Enable the Ceph cache tier in ``/etc/kolla/globals.yml``:
 
 .. code-block:: yaml
@@ -306,16 +266,12 @@ Enable the Ceph cache tier in ``/etc/kolla/globals.yml``:
    # Valid options are [ forward, none, writeback ]
    ceph_cache_mode: "writeback"
 
-.. end
-
 After this run the playbooks as you normally would, for example:
 
 .. code-block:: console
 
    kolla-ansible deploy -i path/to/inventory
 
-.. end
-
 Setting up an Erasure Coded Pool
 --------------------------------
 
@@ -338,8 +294,6 @@ To enable erasure coded pools add the following options to your
    # Optionally, you can change the profile
    #ceph_erasure_profile: "k=4 m=2 ruleset-failure-domain=host"
 
-.. end
-
 Managing Ceph
 -------------
 
@@ -374,8 +328,6 @@ the number of copies for the pool to 1:
 
    docker exec ceph_mon ceph osd pool set rbd size 1
 
-.. end
-
 All the pools must be modified if Glance, Nova, and Cinder have been deployed.
 An example of modifying the pools to have 2 copies:
 
@@ -383,24 +335,18 @@ An example of modifying the pools to have 2 copies:
 
    for p in images vms volumes backups; do docker exec ceph_mon ceph osd pool set ${p} size 2; done
 
-.. end
-
 If using a cache tier, these changes must be made as well:
 
 .. code-block:: console
 
    for p in images vms volumes backups; do docker exec ceph_mon ceph osd pool set ${p}-cache size 2; done
 
-.. end
-
 The default pool Ceph creates is named **rbd**. It is safe to remove this pool:
 
 .. code-block:: console
 
    docker exec ceph_mon ceph osd pool delete rbd rbd --yes-i-really-really-mean-it
 
-.. end
-
 Troubleshooting
 ---------------
 
@@ -465,8 +411,6 @@ environment before adding the 3rd node for Ceph:
    kolla1.ducourrier.com
    kolla2.ducourrier.com
 
-.. end
-
 Configuration
 ~~~~~~~~~~~~~
 
@@ -477,8 +421,6 @@ to add a partition label to it as shown below:
 
    parted /dev/sdb -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP 1 -1
 
-.. end
-
 Make sure to run this command on each of the 3 nodes or the deployment will
 fail.
 
@@ -510,8 +452,6 @@ existing inventory file:
    kolla2.ducourrier.com
    kolla3.ducourrier.com
 
-.. end
-
 It is now time to enable Ceph in the environment by editing the
 ``/etc/kolla/globals.yml`` file:
 
@@ -523,8 +463,6 @@ It is now time to enable Ceph in the environment by editing the
    glance_backend_file: "no"
    glance_backend_ceph: "yes"
 
-.. end
-
 Deployment
 ~~~~~~~~~~
 
@@ -534,4 +472,3 @@ Finally deploy the Ceph-enabled configuration:
 
    kolla-ansible deploy -i path/to/inventory-file
 
-.. end
diff --git a/doc/source/reference/cinder-guide-hnas.rst b/doc/source/reference/cinder-guide-hnas.rst
index df6fb3c840..e710c4c1b8 100644
--- a/doc/source/reference/cinder-guide-hnas.rst
+++ b/doc/source/reference/cinder-guide-hnas.rst
@@ -86,16 +86,12 @@ contents:
    hnas_iscsi_svc0_hdp = FS-Baremetal1
    hnas_iscsi_svc0_iscsi_ip = <svc0_ip>
 
-.. end
-
 Then set password for the backend in ``/etc/kolla/passwords.yml``:
 
 .. code-block:: yaml
 
    hnas_iscsi_password: supervisor
 
-.. end
-
 NFS backend
 -----------
 
@@ -105,8 +101,6 @@ Enable cinder hnas backend nfs in ``/etc/kolla/globals.yml``
 
    enable_cinder_backend_hnas_nfs: "yes"
 
-.. end
-
 Create or modify the file ``/etc/kolla/config/cinder.conf`` and
 add the contents:
 
@@ -126,16 +120,12 @@ add the contents:
    hnas_nfs_svc0_volume_type = nfs_gold
    hnas_nfs_svc0_hdp = <svc0_ip>/<export_name>
 
-.. end
-
 Then set password for the backend in ``/etc/kolla/passwords.yml``:
 
 .. code-block:: yaml
 
    hnas_nfs_password: supervisor
 
-.. end
-
 Configuration on Kolla deployment
 ---------------------------------
 
@@ -146,8 +136,6 @@ Enable Shared File Systems service and HNAS driver in
 
    enable_cinder: "yes"
 
-.. end
-
 Configuration on HNAS
 ---------------------
 
@@ -159,8 +147,6 @@ List the available tenants:
 
    openstack project list
 
-.. end
-
 Create a network to the given tenant (service), providing the tenant ID,
 a name for the network, the name of the physical network over which the
 virtual network is implemented, and the type of the physical mechanism by
@@ -171,8 +157,6 @@ which the virtual network is implemented:
    neutron net-create --tenant-id <SERVICE_ID> hnas_network \
    --provider:physical_network=physnet2 --provider:network_type=flat
 
-.. end
-
 Create a subnet to the same tenant (service), the gateway IP of this subnet,
 a name for the subnet, the network ID created before, and the CIDR of
 subnet:
@@ -182,8 +166,6 @@ subnet:
    neutron subnet-create --tenant-id <SERVICE_ID> --gateway <GATEWAY> \
    --name hnas_subnet <NETWORK_ID> <SUBNET_CIDR>
 
-.. end
-
 Add the subnet interface to a router, providing the router ID and subnet
 ID created before:
 
@@ -191,8 +173,6 @@ ID created before:
 
    neutron router-interface-add <ROUTER_ID> <SUBNET_ID>
 
-.. end
-
 Create volume
 ~~~~~~~~~~~~~
 
@@ -202,8 +182,6 @@ Create a non-bootable volume.
 
    openstack volume create --size 1 my-volume
 
-.. end
-
 Verify operation:
 
 .. code-block:: console
@@ -258,8 +236,6 @@ Verify Operation.
    | 4f5b8ae8-9781-411e-8ced-de616ae64cfd | my-volume     | in-use         |    1 | Attached to private-instance on /dev/vdb  |
    +--------------------------------------+---------------+----------------+------+-------------------------------------------+
 
-.. end
-
 For more information about how to manage volumes, see the
 `Manage volumes
 <https://docs.openstack.org/cinder/latest/cli/cli-manage-volumes.html>`__.
diff --git a/doc/source/reference/cinder-guide.rst b/doc/source/reference/cinder-guide.rst
index ecd13417e1..cb9ffeeb9d 100644
--- a/doc/source/reference/cinder-guide.rst
+++ b/doc/source/reference/cinder-guide.rst
@@ -32,8 +32,6 @@ group.  For example with the devices ``/dev/sdb`` and ``/dev/sdc``:
    pvcreate /dev/sdb /dev/sdc
    vgcreate cinder-volumes /dev/sdb /dev/sdc
 
-.. end
-
 During development, it may be desirable to use file-backed block storage. It
 is possible to use a file and mount it as a block device via the loopback
 system.
@@ -46,16 +44,12 @@ system.
    pvcreate $free_device
    vgcreate cinder-volumes $free_device
 
-.. end
-
 Enable the ``lvm`` backend in ``/etc/kolla/globals.yml``:
 
 .. code-block:: yaml
 
    enable_cinder_backend_lvm: "yes"
 
-.. end
-
 .. note::
 
    There are currently issues using the LVM backend in a multi-controller setup,
@@ -71,8 +65,6 @@ where the volumes are to be stored:
 
    /kolla_nfs 192.168.5.0/24(rw,sync,no_root_squash)
 
-.. end
-
 In this example, ``/kolla_nfs`` is the directory on the storage node which will
 be ``nfs`` mounted, ``192.168.5.0/24`` is the storage network, and
 ``rw,sync,no_root_squash`` means make the share read-write, synchronous, and
@@ -84,8 +76,6 @@ Then start ``nfsd``:
 
    systemctl start nfs
 
-.. end
-
 On the deploy node, create ``/etc/kolla/config/nfs_shares`` with an entry for
 each storage node:
 
@@ -94,16 +84,12 @@ each storage node:
    storage01:/kolla_nfs
    storage02:/kolla_nfs
 
-.. end
-
 Finally, enable the ``nfs`` backend in ``/etc/kolla/globals.yml``:
 
 .. code-block:: yaml
 
    enable_cinder_backend_nfs: "yes"
 
-.. end
-
 Validation
 ~~~~~~~~~~
 
@@ -114,8 +100,6 @@ Create a volume as follows:
    openstack volume create --size 1 steak_volume
    <bunch of stuff printed>
 
-.. end
-
 Verify it is available. If it says "error", then something went wrong during
 LVM creation of the volume.
 
@@ -129,24 +113,18 @@ LVM creation of the volume.
    | 0069c17e-8a60-445a-b7f0-383a8b89f87e | steak_volume | available |    1 |             |
    +--------------------------------------+--------------+-----------+------+-------------+
 
-.. end
-
 Attach the volume to a server using:
 
 .. code-block:: console
 
    openstack server add volume steak_server 0069c17e-8a60-445a-b7f0-383a8b89f87e
 
-.. end
-
 Check the console log to verify the disk addition:
 
 .. code-block:: console
 
    openstack console log show steak_server
 
-.. end
-
 A ``/dev/vdb`` should appear in the console log, at least when booting cirros.
 If the disk stays in the available state, something went wrong during the
 iSCSI mounting of the volume to the guest VM.
@@ -168,8 +146,6 @@ exist on the server and following parameter must be specified in
 
    enable_cinder_backend_lvm: "yes"
 
-.. end
-
 For Ubuntu and LVM2/iSCSI
 -------------------------
 
@@ -195,8 +171,6 @@ targeted for nova compute role.
 
      mount -t configfs /etc/rc.local /sys/kernel/config
 
-  .. end
-
 Cinder backend with external iSCSI storage
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
@@ -207,6 +181,4 @@ the following parameter must be specified in ``globals.yml``:
 
    enable_cinder_backend_iscsi: "yes"
 
-.. end
-
 Also ``enable_cinder_backend_lvm`` should be set to ``no`` in this case.
diff --git a/doc/source/reference/designate-guide.rst b/doc/source/reference/designate-guide.rst
index 723a6ff462..eda5476c4e 100644
--- a/doc/source/reference/designate-guide.rst
+++ b/doc/source/reference/designate-guide.rst
@@ -25,8 +25,6 @@ Enable Designate service in ``/etc/kolla/globals.yml``
 
    enable_designate: "yes"
 
-.. end
-
 Configure Designate options in ``/etc/kolla/globals.yml``
 
 .. important::
@@ -40,8 +38,6 @@ Configure Designate options in ``/etc/kolla/globals.yml``
    designate_backend: "bind9"
    designate_ns_record: "sample.openstack.org"
 
-.. end
-
 Neutron and Nova Integration
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
@@ -51,16 +47,12 @@ Create default Designate Zone for Neutron:
 
    openstack zone create --email admin@sample.openstack.org sample.openstack.org.
 
-.. end
-
 Create a designate-sink custom configuration folder:
 
 .. code-block:: console
 
    mkdir -p /etc/kolla/config/designate/
 
-.. end
-
 Append the Designate zone ID to ``/etc/kolla/config/designate/designate-sink.conf``
 
 .. code-block:: console
@@ -70,16 +62,12 @@ Append Designate Zone ID in ``/etc/kolla/config/designate/designate-sink.conf``
    [handler:neutron_floatingip]
    zone_id = <ZONE_ID>
 
-.. end
-
 Reconfigure Designate:
 
 .. code-block:: console
 
    kolla-ansible reconfigure -i <INVENTORY_FILE> --tags designate
 
-.. end
-
 Verify operation
 ~~~~~~~~~~~~~~~~
 
@@ -89,16 +77,12 @@ List available networks:
 
    openstack network list
 
-.. end
-
 Associate a domain to a network:
 
 .. code-block:: console
 
    neutron net-update <NETWORK_ID> --dns_domain sample.openstack.org.
 
-.. end
-
 Start an instance:
 
 .. code-block:: console
@@ -110,8 +94,6 @@ Start an instance:
      --nic net-id=${NETWORK_ID} \
      my-vm
 
-.. end
-
 Check DNS records in Designate:
 
 .. code-block:: console
@@ -130,8 +112,6 @@ Check DNS records in Designate:
    | e5623d73-4f9f-4b54-9045-b148e0c3342d | my-vm.sample.openstack.org.           | A    | 192.168.190.232                             | ACTIVE | NONE   |
    +--------------------------------------+---------------------------------------+------+---------------------------------------------+--------+--------+
 
-.. end
-
 Query instance DNS information to Designate ``dns_interface`` IP address:
 
 .. code-block:: console
@@ -139,8 +119,6 @@ Query instance DNS information to Designate ``dns_interface`` IP address:
    dig +short -p 5354 @<DNS_INTERFACE_IP> my-vm.sample.openstack.org. A
    192.168.190.232
 
-.. end
-
 For more information about how Designate works, see
 `Designate, a DNSaaS component for OpenStack
 <https://docs.openstack.org/designate/latest/>`__.
diff --git a/doc/source/reference/external-ceph-guide.rst b/doc/source/reference/external-ceph-guide.rst
index d6b336775c..0d5a47847d 100644
--- a/doc/source/reference/external-ceph-guide.rst
+++ b/doc/source/reference/external-ceph-guide.rst
@@ -26,8 +26,6 @@ disable Ceph deployment in ``/etc/kolla/globals.yml``
 
    enable_ceph: "no"
 
-.. end
-
 Each service has a flag indicating whether it should use Ceph, which defaults
 to the value of ``enable_ceph``. These flags must be set explicitly in order
 to enable external Ceph integration. This can be done individually per
@@ -41,8 +39,6 @@ service in ``/etc/kolla/globals.yml``:
    gnocchi_backend_storage: "ceph"
    enable_manila_backend_cephfs_native: "yes"
 
-.. end
-
 The combination of ``enable_ceph: "no"`` and ``<service>_backend_ceph: "yes"``
 triggers the activation of the external Ceph mechanism in Kolla.
 
@@ -59,8 +55,6 @@ nodes where ``cinder-volume`` and ``cinder-backup`` will run:
    [storage]
    compute01
 
-.. end
-
 Configuring External Ceph
 ~~~~~~~~~~~~~~~~~~~~~~~~~
 
@@ -85,8 +79,6 @@ Step 1 is done by using Kolla's INI merge mechanism: Create a file in
    rbd_store_user = glance
    rbd_store_ceph_conf = /etc/ceph/ceph.conf
 
-.. end
-
 Now put ceph.conf and the keyring file (name depends on the username created in
 Ceph) into the same directory, for example:
 
@@ -101,8 +93,6 @@ Ceph) into the same directory, for example:
    auth_service_required = cephx
    auth_client_required = cephx
 
-.. end
-
 .. code-block:: console
 
    $ cat /etc/kolla/config/glance/ceph.client.glance.keyring
@@ -110,8 +100,6 @@ Ceph) into the same directory, for example:
    [client.glance]
    key = AQAg5YRXS0qxLRAAXe6a4R1a15AoRx7ft80DhA==
 
-.. end
-
 Kolla will pick up all files named ``ceph.*`` in this directory and copy them
 to the ``/etc/ceph/`` directory of the container.
 
@@ -138,8 +126,6 @@ the following configuration:
    volume_driver=cinder.volume.drivers.rbd.RBDDriver
    rbd_secret_uuid = {{ cinder_rbd_secret_uuid }}
 
-.. end
-
 .. note::
 
    ``cinder_rbd_secret_uuid`` can be found in ``/etc/kolla/passwords.yml`` file.
@@ -159,8 +145,6 @@ the following configuration:
    backup_ceph_stripe_count = 0
    restore_discard_excess_bytes = true
 
-.. end
-
 Next, copy the ``ceph.conf`` file into ``/etc/kolla/config/cinder/``:
 
 .. code-block:: ini
@@ -173,8 +157,6 @@ Next, copy the ``ceph.conf`` file into ``/etc/kolla/config/cinder/``:
    auth_service_required = cephx
    auth_client_required = cephx
 
-.. end
-
 Separate configuration options can be configured for
 cinder-volume and cinder-backup by adding ceph.conf files to
 ``/etc/kolla/config/cinder/cinder-volume`` and
@@ -197,8 +179,6 @@ to these directories, for example:
    [client.cinder]
    key = AQAg5YRXpChaGRAAlTSCleesthCRmCYrfQVX1w==
 
-.. end
-
 .. code-block:: console
 
    $ cat /etc/kolla/config/cinder/cinder-backup/ceph.client.cinder-backup.keyring
@@ -206,8 +186,6 @@ to these directories, for example:
    [client.cinder-backup]
    key = AQC9wNBYrD8MOBAAwUlCdPKxWZlhkrWIDE1J/w==
 
-.. end
-
 .. code-block:: console
 
    $ cat /etc/kolla/config/cinder/cinder-volume/ceph.client.cinder.keyring
@@ -215,8 +193,6 @@ to these directories, for example:
    [client.cinder]
    key = AQAg5YRXpChaGRAAlTSCleesthCRmCYrfQVX1w==
 
-.. end
-
 It is important that the files are named ``ceph.client*``.
 
 Nova
@@ -230,8 +206,6 @@ Put ceph.conf, nova client keyring file and cinder client keyring file into
    $ ls /etc/kolla/config/nova
    ceph.client.cinder.keyring ceph.client.nova.keyring ceph.conf
 
-.. end
-
 Configure nova-compute to use Ceph as the ephemeral back end by creating
 ``/etc/kolla/config/nova/nova-compute.conf`` and adding the following
 configurations:
@@ -244,8 +218,6 @@ configurations:
    images_rbd_ceph_conf=/etc/ceph/ceph.conf
    rbd_user=nova
 
-.. end
-
 .. note::
 
    ``rbd_user`` might vary depending on your environment.
@@ -264,8 +236,6 @@ the following configuration:
    ceph_keyring = /etc/ceph/ceph.client.gnocchi.keyring
    ceph_conffile = /etc/ceph/ceph.conf
 
-.. end
-
 Put ceph.conf and gnocchi client keyring file in
 ``/etc/kolla/config/gnocchi``:
 
@@ -274,8 +244,6 @@ Put ceph.conf and gnocchi client keyring file in
    $ ls /etc/kolla/config/gnocchi
    ceph.client.gnocchi.keyring ceph.conf gnocchi.conf
 
-.. end
-
 Manila
 ------
 
@@ -301,8 +269,6 @@ in Ceph) into the same directory, for example:
    auth_service_required = cephx
    auth_client_required = cephx
 
-.. end
-
 .. code-block:: console
 
    $ cat /etc/kolla/config/manila/ceph.client.manila.keyring
@@ -310,8 +276,6 @@ in Ceph) into the same directory, for example:
    [client.manila]
    key = AQAg5YRXS0qxLRAAXe6a4R1a15AoRx7ft80DhA==
 
-.. end
-
 For more details on the rest of the Manila setup, such as creating the share
 type ``default_share_type``, please see `Manila in Kolla
 <https://docs.openstack.org/kolla-ansible/latest/reference/manila-guide.html>`__.
diff --git a/doc/source/reference/external-mariadb-guide.rst b/doc/source/reference/external-mariadb-guide.rst
index 7d1a2989be..e1d8b899b7 100644
--- a/doc/source/reference/external-mariadb-guide.rst
+++ b/doc/source/reference/external-mariadb-guide.rst
@@ -33,8 +33,6 @@ by ensuring the following line exists within ``/etc/kolla/globals.yml`` :
 
    enable_mariadb: "no"
 
-.. end
-
 There are two ways in which you can use external MariaDB:
 * Using an already load-balanced MariaDB address
 * Using an external MariaDB cluster
@@ -53,8 +51,6 @@ need to do the following:
       [mariadb:children]
       myexternalmariadbloadbalancer.com
 
-   .. end
-
 
 #. Define ``database_address`` in ``/etc/kolla/globals.yml`` file:
 
@@ -62,8 +58,6 @@ need to do the following:
 
       database_address: myexternalloadbalancer.com
 
-   .. end
-
 .. note::
 
    If ``enable_external_mariadb_load_balancer`` is set to ``no``
@@ -82,8 +76,6 @@ Using this way, you need to adjust the inventory file:
    myexternaldbserver2.com
    myexternaldbserver3.com
 
-.. end
-
 If you choose to use haproxy for load balancing between the
 members of the cluster, every node within this group
 needs to be resolvable and reachable from all
@@ -97,8 +89,6 @@ according to the following configuration:
 
    enable_external_mariadb_load_balancer: yes
 
-.. end
-
 Using External MariaDB with a privileged user
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
@@ -111,8 +101,6 @@ and set the ``database_password`` in ``/etc/kolla/passwords.yml`` file:
 
    database_password: mySuperSecurePassword
 
-.. end
-
 If the MariaDB ``username`` is not ``root``, set ``database_username`` in
 ``/etc/kolla/globals.yml`` file:
 
@@ -120,8 +108,6 @@ If the MariaDB ``username`` is not ``root``, set ``database_username`` in
 
    database_username: "privilegeduser"
 
-.. end
-
 Using preconfigured databases / users:
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
@@ -132,8 +118,6 @@ The first step you need to take is to set ``use_preconfigured_databases`` to
 
    use_preconfigured_databases: "yes"
 
-.. end
-
 .. note::
 
    When the ``use_preconfigured_databases`` flag is set to ``"yes"``, you need
@@ -153,8 +137,6 @@ In order to achieve this, you will need to define the user names in the
    keystone_database_user: preconfigureduser1
    nova_database_user: preconfigureduser2
 
-.. end
-
 Also, you will need to set the passwords for all databases in the
 ``/etc/kolla/passwords.yml`` file
 
@@ -172,8 +154,6 @@ all you need to do is the following steps:
 
       use_common_mariadb_user: "yes"
 
-   .. end
-
 #. Set ``database_user`` within ``/etc/kolla/globals.yml`` to
    the one provided to you:
 
@@ -181,8 +161,6 @@ all you need to do is the following steps:
 
       database_user: mycommondatabaseuser
 
-   .. end
-
 #. Set the common password for all components within
    ``/etc/kolla/passwords.yml``. In order to achieve that you
    could use the following command:
@@ -191,4 +169,3 @@ all you need to do is the following steps:
 
       sed -i -r -e 's/([a-z_]{0,}database_password:+)(.*)$/\1 mycommonpass/gi' /etc/kolla/passwords.yml
 
-   .. end
diff --git a/doc/source/reference/horizon-guide.rst b/doc/source/reference/horizon-guide.rst
index 450d408edb..65e9cdeb64 100644
--- a/doc/source/reference/horizon-guide.rst
+++ b/doc/source/reference/horizon-guide.rst
@@ -27,4 +27,3 @@ a file named custom_local_settings should be created under the directory
                 ('material', 'Material', 'themes/material'),
    ]
 
-.. end
diff --git a/doc/source/reference/hyperv-guide.rst b/doc/source/reference/hyperv-guide.rst
index 026f443bb5..d337915931 100644
--- a/doc/source/reference/hyperv-guide.rst
+++ b/doc/source/reference/hyperv-guide.rst
@@ -58,8 +58,6 @@ Virtual Interface the following PowerShell may be used:
    PS C:\> $if = Get-NetIPAddress -IPAddress 192* | Get-NetIPInterface
    PS C:\> New-VMSwitch -NetAdapterName $if.ifAlias -Name YOUR_BRIDGE_NAME -AllowManagementOS $false
 
-.. end
-
 .. note::
 
    It is very important to make sure that when you are using a Hyper-V node
@@ -76,8 +74,6 @@ running and started automatically.
    PS C:\> Set-Service -Name MSiSCSI -StartupType Automatic
    PS C:\> Start-Service MSiSCSI
 
-.. end
-
 Preparation for Kolla deployer node
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
@@ -87,8 +83,6 @@ Hyper-V role is required, enable it in ``/etc/kolla/globals.yml``:
 
    enable_hyperv: "yes"
 
-.. end
-
 Hyper-V options are also required in ``/etc/kolla/globals.yml``:
 
 .. code-block:: yaml
@@ -98,8 +92,6 @@ Hyper-V options are also required in ``/etc/kolla/globals.yml``:
    vswitch_name: <HyperV virtual switch name>
    nova_msi_url: "https://www.cloudbase.it/downloads/HyperVNovaCompute_Beta.msi"
 
-.. end
-
 If tenant networks are to be built using VLAN, add the corresponding type in
 ``/etc/kolla/globals.yml``:
 
@@ -107,8 +99,6 @@ If tenant networks are to be built using VLAN add corresponding type in
 
    neutron_tenant_network_types: 'flat,vlan'
 
-.. end
-
 The virtual switch is the same one created in the Hyper-V setup section.
 For ``nova_msi_url``, different Nova MSI (Mitaka/Newton/Ocata) versions can
 be found on `Cloudbase website
@@ -128,8 +118,6 @@ Add the Hyper-V node in ``ansible/inventory`` file:
    ansible_connection=winrm
    ansible_winrm_server_cert_validation=ignore
 
-.. end
-
 The ``pywinrm`` package needs to be installed in order for Ansible to work
 on the HyperV node:
 
@@ -137,8 +125,6 @@ on the HyperV node:
 
    pip install "pywinrm>=0.2.2"
 
-.. end
-
 .. note::
 
    In case of a test deployment with controller and compute nodes as
@@ -149,8 +135,6 @@ on the HyperV node:
 
    Set-VMNetworkAdapterVlan -Trunk -AllowedVlanIdList <VLAN ID> -NativeVlanId 0 <VM name>
 
-.. end
-
 The networking-hyperv mechanism driver is needed for neutron-server to
 communicate with Hyper-V nova-compute. It is built into source
 images by default. Manually, it can be installed in the neutron-server
 container with pip:
 
    pip install "networking-hyperv>=4.0.0"
 
-.. end
-
 For ``neutron_extension_drivers``, ``port_security`` and ``qos`` are
 currently supported by the networking-hyperv mechanism driver.
 By default only ``port_security`` is set.
@@ -177,15 +159,11 @@ OpenStack HyperV services can be inspected and managed from PowerShell:
    PS C:\> Get-Service nova-compute
    PS C:\> Get-Service neutron-hyperv-agent
 
-.. end
-
 .. code-block:: console
 
    PS C:\> Restart-Service nova-compute
    PS C:\> Restart-Service neutron-hyperv-agent
 
-.. end
-
 For more information on OpenStack HyperV, see
 `Hyper-V virtualization platform
 <https://docs.openstack.org/ocata/config-reference/compute/hypervisor-hyper-v.html>`__.
diff --git a/doc/source/reference/ironic-guide.rst b/doc/source/reference/ironic-guide.rst
index b34987d0c9..a08d0c4e5d 100644
--- a/doc/source/reference/ironic-guide.rst
+++ b/doc/source/reference/ironic-guide.rst
@@ -17,8 +17,6 @@ Enable Ironic in ``/etc/kolla/globals.yml``:
 
    enable_ironic: "yes"
 
-.. end
-
 In the same file, define a range of IP addresses that will be available for use
 by Ironic inspector, as well as a network to be used for the Ironic cleaning
 network:
@@ -28,8 +26,6 @@ network:
    ironic_dnsmasq_dhcp_range: "192.168.5.100,192.168.5.110"
    ironic_cleaning_network: "public1"
 
-.. end
-
 In the same file, optionally specify a default gateway to be used for the
 Ironic Inspector inspection network:
 
@@ -37,8 +33,6 @@ Inspector inspection network:
 
    ironic_dnsmasq_default_gateway: 192.168.5.1
 
-.. end
-
 In the same file, specify the PXE bootloader file for Ironic Inspector. The
 file is relative to the ``/tftpboot`` directory. The default is ``pxelinux.0``,
 and should be correct for x86 systems. Other platforms may require a different
@@ -49,8 +43,6 @@ value, for example aarch64 on Debian requires
 
    ironic_dnsmasq_boot_file: pxelinux.0
 
-.. end
-
 Ironic inspector also requires a deploy kernel and ramdisk to be placed in
 ``/etc/kolla/config/ironic/``. The following example uses CoreOS, which is
 commonly used in Ironic deployments, though any compatible kernel/ramdisk may
@@ -64,16 +56,12 @@ be used:
    $ curl https://tarballs.openstack.org/ironic-python-agent/coreos/files/coreos_production_pxe_image-oem.cpio.gz \
      -o /etc/kolla/config/ironic/ironic-agent.initramfs
 
-.. end
-
 You may optionally pass extra kernel parameters to the inspection kernel using:
 
 .. code-block:: yaml
 
    ironic_inspector_kernel_cmdline_extras: ['ipa-lldp-timeout=90.0', 'ipa-collect-lldp=1']
 
-.. end
-
 in ``/etc/kolla/globals.yml``.
 
 Enable iPXE booting (optional)
@@ -86,8 +74,6 @@ true in ``/etc/kolla/globals.yml``:
 
     enable_ironic_ipxe: "yes"
 
-.. end
-
 This will enable deployment of a docker container, called ironic_ipxe, running
 the web server which iPXE uses to obtain its boot images.
 
@@ -98,8 +84,6 @@ The port used for the iPXE webserver is controlled via ``ironic_ipxe_port`` in
 
     ironic_ipxe_port: "8089"
 
-.. end
-
 The following changes will occur if iPXE booting is enabled:
 
 - Ironic will be configured with the ``ipxe_enabled`` configuration option set
@@ -117,8 +101,6 @@ Run the deploy as usual:
 
   $ kolla-ansible deploy
 
-.. end
-
 
 Post-deployment configuration
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -129,8 +111,6 @@ initialise the cloud with some defaults (only to be used for demo purposes):
 
   tools/init-runonce
 
-.. end
-
 Add the deploy kernel and ramdisk to Glance. Here we're reusing the same images
 that were fetched for the Inspector:
 
@@ -142,8 +122,6 @@ that were fetched for the Inspector:
   openstack image create --disk-format ari --container-format ari --public \
     --file /etc/kolla/config/ironic/ironic-agent.initramfs deploy-initrd
 
-.. end
-
 Create a baremetal flavor:
 
 .. code-block:: console
@@ -152,8 +130,6 @@ Create a baremetal flavor:
   openstack flavor set my-baremetal-flavor --property \
     resources:CUSTOM_BAREMETAL_RESOURCE_CLASS=1
 
-.. end
-
 Create the baremetal node and associate a port. (Be sure to substitute the
 correct values for the kernel, ramdisk, and MAC address of your baremetal node.)
 
@@ -171,8 +147,6 @@ values for the kernel, ramdisk, and MAC address for your baremetal node)
 
   openstack baremetal port create 52:54:00:ff:15:55 --node 57aa574a-5fea-4468-afcf-e2551d464412
 
-.. end
-
 Make the baremetal node available to nova:
 
 .. code-block:: console
@@ -180,8 +154,6 @@ Make the baremetal node available to nova:
   openstack baremetal node manage 57aa574a-5fea-4468-afcf-e2551d464412
   openstack baremetal node provide 57aa574a-5fea-4468-afcf-e2551d464412
 
-.. end
-
 It may take some time for the node to become available for scheduling in nova.
 Use the following commands to wait for the resources to become available:
 
@@ -190,8 +162,6 @@ Use the following commands to wait for the resources to become available:
   openstack hypervisor stats show
   openstack hypervisor show 57aa574a-5fea-4468-afcf-e2551d464412
 
-.. end
-
 Booting the baremetal
 ~~~~~~~~~~~~~~~~~~~~~
 You can now use the following sample command to boot the baremetal instance:
@@ -201,8 +171,6 @@ You can now use the following sample command to boot the baremetal instance:
   openstack server create --image cirros --flavor my-baremetal-flavor \
     --key-name mykey --network public1 demo1
 
-.. end
-
 Notes
 ~~~~~
 
@@ -215,8 +183,6 @@ requests may not be hitting various pieces of the process:
 
   tcpdump -i <interface> port 67 or port 68 or port 69 -e -n
 
-.. end
-
 Configuring the Web Console
 ---------------------------
 Configuration based on the upstream `Node web console
@@ -231,8 +197,6 @@ Set ironic_console_serial_speed in ``/etc/kolla/globals.yml``:
 
    ironic_console_serial_speed: 9600n8
 
-.. end
-
 Deploying using virtual baremetal (vbmc + libvirt)
 --------------------------------------------------
 See https://brk3.github.io/post/kolla-ironic-libvirt/
diff --git a/doc/source/reference/kuryr-guide.rst b/doc/source/reference/kuryr-guide.rst
index 12204f62c2..7071af3d49 100644
--- a/doc/source/reference/kuryr-guide.rst
+++ b/doc/source/reference/kuryr-guide.rst
@@ -22,8 +22,6 @@ To allow Docker daemon connect to the etcd, add the following in the
 
    ExecStart= -H tcp://172.16.1.13:2375 -H unix:///var/run/docker.sock --cluster-store=etcd://172.16.1.13:2379 --cluster-advertise=172.16.1.13:2375
 
-.. end
-
 The IP address is that of the host running the etcd service. ``2375`` is the
 port that allows the Docker daemon to be accessed remotely, and ``2379`` is
 the etcd listening port.
@@ -37,16 +35,12 @@ following variables
    enable_etcd: "yes"
    enable_kuryr: "yes"
 
-.. end
-
 Deploy the OpenStack cloud and kuryr network plugin
 
 .. code-block:: console
 
    kolla-ansible deploy
 
-.. end
-
 Create a Virtual Network
 ~~~~~~~~~~~~~~~~~~~~~~~~
 
@@ -54,24 +48,18 @@ Create a Virtual Network
 
    docker network create -d kuryr --ipam-driver=kuryr --subnet=10.1.0.0/24 --gateway=10.1.0.1 docker-net1
 
-.. end
-
 To list the created network:
 
 .. code-block:: console
 
    docker network ls
 
-.. end
-
 The created network is also available via the OpenStack CLI:
 
 .. code-block:: console
 
    openstack network list
 
-.. end
-
 For more information about how kuryr works, see
 `kuryr (OpenStack Containers Networking)
 <https://docs.openstack.org/kuryr/latest/>`__.
diff --git a/doc/source/reference/manila-guide.rst b/doc/source/reference/manila-guide.rst
index 15d7532adc..fd1742f793 100644
--- a/doc/source/reference/manila-guide.rst
+++ b/doc/source/reference/manila-guide.rst
@@ -42,8 +42,6 @@ Cinder and Ceph are required, enable it in ``/etc/kolla/globals.yml``:
    enable_cinder: "yes"
    enable_ceph: "yes"
 
-.. end
-
 Enable Manila and the generic back end in ``/etc/kolla/globals.yml``:
 
 .. code-block:: console
@@ -51,8 +49,6 @@ Enable Manila and generic back end in ``/etc/kolla/globals.yml``:
    enable_manila: "yes"
    enable_manila_backend_generic: "yes"
 
-.. end
-
 By default Manila uses instance flavor id 100 for its file systems. For Manila
 to work, either create a new nova flavor with id 100 (use *nova flavor-create*)
 or change *service_instance_flavor_id* to use one of the default nova flavor
@@ -67,8 +63,6 @@ contents:
    [generic]
    service_instance_flavor_id = 2
 
-.. end
-
 Verify Operation
 ~~~~~~~~~~~~~~~~
 
@@ -86,8 +80,6 @@ to verify successful launch of each process:
    | manila-share     | share1@generic | nova | enabled |   up  | 2014-10-18T01:30:57.000000 |       None      |
    +------------------+----------------+------+---------+-------+----------------------------+-----------------+
 
-.. end
-
 Launch an Instance
 ~~~~~~~~~~~~~~~~~~
 
@@ -112,8 +104,6 @@ Create a default share type before running manila-share service:
    | 8a35da28-0f74-490d-afff-23664ecd4f01 | default_share_type | public     | -          | driver_handles_share_servers : True | snapshot_support : True |
    +--------------------------------------+--------------------+------------+------------+-------------------------------------+-------------------------+
 
-.. end
-
 Upload a manila share server image to the Image service:
 
 .. code-block:: console
@@ -146,8 +136,6 @@ Create a manila share server image to the Image service:
    | visibility       | public                               |
    +------------------+--------------------------------------+
 
-.. end
-
 List available networks to get the ID and subnets of the private network:
 
 .. code-block:: console
@@ -159,8 +147,6 @@ List available networks to get id and subnets of the private network:
    | 7c6f9b37-76b4-463e-98d8-27e5686ed083 | private | 3482f524-8bff-4871-80d4-5774c2730728 172.16.1.0/24 |
    +--------------------------------------+---------+----------------------------------------------------+
 
-.. end
-
 Create a shared network
 
 .. code-block:: console
@@ -187,8 +173,6 @@ Create a shared network
    | description       | None                                 |
    +-------------------+--------------------------------------+
 
-.. end
-
 Create a flavor (**Required** if you have not defined *manila_instance_flavor_id*
 in the ``/etc/kolla/config/manila-share.conf`` file)
 
@@ -196,8 +180,6 @@ Create a flavor (**Required** if you not defined *manila_instance_flavor_id* in
 
    # nova flavor-create manila-service-flavor 100 128 0 1
 
-.. end
-
 Create a share
 ~~~~~~~~~~~~~~
 
@@ -234,8 +216,6 @@ Create a NFS share using the share network:
    | metadata                    | {}                                   |
    +-----------------------------+--------------------------------------+
 
-.. end
-
 After some time, the share status should change from ``creating``
 to ``available``:
 
@@ -249,8 +229,6 @@ to ``available``:
    | e1e06b14-ba17-48d4-9e0b-ca4d59823166 | demo-share1 | 1    | NFS         | available | False     | default_share_type                   | share1@generic#GENERIC      | nova              |
    +--------------------------------------+-------------+------+-------------+-----------+-----------+--------------------------------------+-----------------------------+-------------------+
 
-.. end
-
 Configure user access to the new share before attempting to mount it via the
 network:
 
@@ -258,8 +236,6 @@ network:
 
    # manila access-allow demo-share1 ip INSTANCE_PRIVATE_NETWORK_IP
 
-.. end
-
 Mount the share from an instance
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
@@ -304,24 +280,18 @@ Get export location from share
    | metadata                    | {}                                                                   |
    +-----------------------------+----------------------------------------------------------------------+
 
-.. end
-
 Create a folder where the mount will be placed:
 
 .. code-block:: console
 
    # mkdir ~/test_folder
 
-.. end
-
 Mount the NFS share in the instance using the export location of the share:
 
 .. code-block:: console
 
    # mount -v 10.254.0.3:/shares/share-422dc546-8f37-472b-ac3c-d23fe410d1b6 ~/test_folder
 
-.. end
-
 Share Migration
 ~~~~~~~~~~~~~~~
 
@@ -340,8 +310,6 @@ Modify the file ``/etc/kolla/config/manila.conf`` and add the contents:
    [DEFAULT]
    data_node_access_ip = 10.10.10.199
 
-.. end
-
 .. note::
 
    Share migration requires more than one back end to be configured.
@@ -356,8 +324,6 @@ Use the manila migration command, as shown in the following example:
      --new_share_type share_type --new_share_network share_network \
      shareID destinationHost
 
-.. end
-
 - ``--force-host-copy``: Forces the generic host-based migration mechanism and
   bypasses any driver optimizations.
 - ``destinationHost``: is in the format ``host#pool``, which includes
@@ -391,8 +357,6 @@ check progress.
    | total_progress | 100                     |
    +----------------+-------------------------+
 
-.. end
-
 Use the :command:`manila migration-complete shareID` command to complete the
 share migration process.
 
diff --git a/doc/source/reference/manila-hnas-guide.rst b/doc/source/reference/manila-hnas-guide.rst
index 3010def640..f4bf452367 100644
--- a/doc/source/reference/manila-hnas-guide.rst
+++ b/doc/source/reference/manila-hnas-guide.rst
@@ -80,8 +80,6 @@ Enable Shared File Systems service and HNAS driver in
    enable_manila: "yes"
    enable_manila_backend_hnas: "yes"
 
-.. end
-
 Configure the OpenStack networking so it can reach the HNAS Management
 interface and the HNAS EVS Data interface.
 
@@ -95,8 +93,6 @@ In ``/etc/kolla/globals.yml`` set:
    neutron_bridge_name: "br-ex,br-ex2"
    neutron_external_interface: "eth1,eth2"
 
-.. end
-
 .. note::
 
    ``eth1`` is used as the Neutron external interface and ``eth2`` is
@@ -127,8 +123,6 @@ List the available tenants:
 
    $ openstack project list
 
-.. end
-
 Create a network for the given tenant (service), providing the tenant ID,
 a name for the network, the name of the physical network over which the
 virtual network is implemented, and the type of the physical mechanism by
@@ -139,16 +133,12 @@ which the virtual network is implemented:
    $ neutron net-create --tenant-id <SERVICE_ID> hnas_network \
      --provider:physical_network=physnet2 --provider:network_type=flat
 
-.. end
-
 *Optional* - List available networks:
 
 .. code-block:: console
 
    $ neutron net-list
 
-.. end
-
 Create a subnet for the same tenant (service), providing the gateway IP of
 this subnet, a name for the subnet, the network ID created before, and the
 CIDR of the subnet:
@@ -158,16 +148,12 @@ subnet:
    $ neutron subnet-create --tenant-id <SERVICE_ID> --gateway <GATEWAY> \
      --name hnas_subnet <NETWORK_ID> <SUBNET_CIDR>
 
-.. end
-
 *Optional* - List available subnets:
 
 .. code-block:: console
 
    $ neutron subnet-list
 
-.. end
-
 Add the subnet interface to a router, providing the router ID and subnet
 ID created before:
 
@@ -175,8 +161,6 @@ ID created before:
 
    $ neutron router-interface-add <ROUTER_ID> <SUBNET_ID>
 
-.. end
-
 Create a file system on HNAS. See the `Hitachi HNAS reference <http://www.hds.com/assets/pdf/hus-file-module-file-services-administration-guide.pdf>`_.
 
 .. important ::
@@ -193,8 +177,6 @@ Create a route in HNAS to the tenant network:
    $ console-context --evs <EVS_ID_IN_USE> route-net-add --gateway <FLAT_NETWORK_GATEWAY> \
      <TENANT_PRIVATE_NETWORK>
 
-.. end
-
 .. important ::
 
    Make sure multi-tenancy is enabled and routes are configured per EVS.
@@ -204,8 +186,6 @@ Create a route in HNAS to the tenant network:
    $ console-context --evs 3 route-net-add --gateway 192.168.1.1 \
      10.0.0.0/24
 
-.. end
-
 Create a share
 ~~~~~~~~~~~~~~
 
@@ -221,8 +201,6 @@ Create a default share type before running manila-share service:
    | 3e54c8a2-1e50-455e-89a0-96bb52876c35 | default_share_hitachi | public     | -          | driver_handles_share_servers : False | snapshot_support : True |
    +--------------------------------------+-----------------------+------------+------------+--------------------------------------+-------------------------+
 
-.. end
-
 Create a NFS share using the HNAS back end:
 
 .. code-block:: console
@@ -232,8 +210,6 @@ Create a NFS share using the HNAS back end:
      --description "My Manila share" \
      --share-type default_share_hitachi
 
-.. end
-
 Verify Operation:
 
 .. code-block:: console
@@ -246,8 +222,6 @@ Verify Operation:
    | 721c0a6d-eea6-41af-8c10-72cd98985203 | mysharehnas    | 1    | NFS         | available | False     | default_share_hitachi | control@hnas1#HNAS1     | nova              |
    +--------------------------------------+----------------+------+-------------+-----------+-----------+-----------------------+-------------------------+-------------------+
 
-.. end
-
 .. code-block:: console
 
    $ manila show mysharehnas
@@ -288,8 +262,6 @@ Verify Operation:
    | metadata                    | {}                                                              |
    +-----------------------------+-----------------------------------------------------------------+
 
-.. end
-
 .. _hnas_configure_multiple_back_ends:
 
 Configure multiple back ends
@@ -314,8 +286,6 @@ Modify the file ``/etc/kolla/config/manila.conf`` and add the contents:
    [DEFAULT]
    enabled_share_backends = generic,hnas1,hnas2
 
-.. end
-
 Modify the file ``/etc/kolla/config/manila-share.conf`` and add the contents:
 
 .. path /etc/kolla/config/manila-share.conf
@@ -352,8 +322,6 @@ Modify the file ``/etc/kolla/config/manila-share.conf`` and add the contents:
    hitachi_hnas_evs_ip = <evs_ip>
    hitachi_hnas_file_system_name = FS-Manila2
 
-.. end
-
 For more information about how to manage shares, see the
 `Manage shares
 <https://docs.openstack.org/manila/latest/user/create-and-manage-shares.html>`__.
diff --git a/doc/source/reference/networking-guide.rst b/doc/source/reference/networking-guide.rst
index 8c722b2c04..67a8063da6 100644
--- a/doc/source/reference/networking-guide.rst
+++ b/doc/source/reference/networking-guide.rst
@@ -27,8 +27,6 @@ as the following example shows:
 
    enable_neutron_provider_networks: "yes"
 
-.. end
-
 Enabling Neutron Extensions
 ===========================
 
@@ -44,8 +42,6 @@ Modify the ``/etc/kolla/globals.yml`` file as the following example shows:
 
    enable_neutron_sfc: "yes"
 
-.. end
-
 Verification
 ------------
 
@@ -65,8 +61,6 @@ Modify the ``/etc/kolla/globals.yml`` file as the following example shows:
 
    enable_neutron_vpnaas: "yes"
 
-.. end
-
 Verification
 ------------
 
@@ -83,8 +77,6 @@ and versioning may differ depending on deploy configuration):
    CONTAINER ID   IMAGE                                                               COMMAND         CREATED          STATUS        PORTS  NAMES
    97d25657d55e   operator:5000/kolla/oraclelinux-source-neutron-vpnaas-agent:4.0.0   "kolla_start"   44 minutes ago   Up 44 minutes        neutron_vpnaas_agent
 
-.. end
-
 Kolla-Ansible includes a small script that can be used in tandem with
 ``tools/init-runonce`` to verify the VPN using two routers and two Nova VMs:
 
@@ -93,8 +85,6 @@ Kolla-Ansible includes a small script that can be used in tandem with
    tools/init-runonce
    tools/init-vpn
 
-.. end
-
 Verify both VPN services are active:
 
 .. code-block:: console
@@ -108,8 +98,6 @@ Verify both VPN services are active:
    | edce15db-696f-46d8-9bad-03d087f1f682 | vpn_east | 058842e0-1d01-4230-af8d-0ba6d0da8b1f | ACTIVE |
    +--------------------------------------+----------+--------------------------------------+--------+
 
-.. end
-
 Two VMs can now be booted, one on vpn_east, the other on vpn_west, and
 encrypted ping packets observed being sent from one to the other.
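One way to observe the encryption (a verification sketch; the interface name ``eth0`` is an assumption, so substitute the interface that actually carries the tunnel traffic) is to capture ESP packets on a network node while one VM pings the other:

```console
# tcpdump -n -i eth0 esp
```

Seeing ESP (IP protocol 50) packets instead of plain ICMP confirms the pings are being encrypted by the VPN.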
 
@@ -129,8 +117,6 @@ Modify the ``/etc/kolla/globals.yml`` file as the following example shows:
 
    enable_opendaylight: "yes"
 
-.. end
-
 Networking-ODL is an additional Neutron plugin that allows the OpenDaylight
 SDN Controller to utilize its networking virtualization features.
 For OpenDaylight to work, the Networking-ODL plugin has to be installed in
@@ -152,8 +138,6 @@ OpenDaylight ``globals.yml`` configurable options with their defaults include:
    opendaylight_features: "odl-mdsal-apidocs,odl-netvirt-openstack"
    opendaylight_allowed_network_types: '"flat", "vlan", "vxlan"'
 
-.. end
-
 Clustered OpenDaylight Deploy
 -----------------------------
 
@@ -221,8 +205,6 @@ config and regenerating your grub file.
 
    default_hugepagesz=2M hugepagesz=2M hugepages=25000
 
-.. end
-
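As a sanity check, the parameters above pin 25000 pages of 2 MiB each at boot; a quick shell calculation (an illustrative sketch, not part of the deployment) gives the total in MiB:

```shell
# 25000 hugepages x 2 MiB per page = MiB of RAM reserved at boot
pages=25000
pagesize_mib=2
echo $(( pages * pagesize_mib ))   # 50000 MiB, roughly 49 GiB
```

Size the ``hugepages`` count to the memory actually available on the host.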
 As DPDK is a userspace networking library, it requires userspace-compatible
 drivers to be able to control the physical interfaces on the platform.
 DPDK technically supports 3 kernel drivers: ``igb_uio``, ``uio_pci_generic``, and
@@ -252,8 +234,6 @@ To enable ovs-dpdk, add the following configuration to
    tunnel_interface: "dpdk_bridge"
    neutron_bridge_name: "dpdk_bridge"
 
-.. end
-
 Unlike standard Open vSwitch deployments, the interface specified by
 ``neutron_external_interface`` should have an IP address assigned.
 The IP address assigned to ``neutron_external_interface`` will be moved to
@@ -306,8 +286,6 @@ Modify the ``/etc/kolla/globals.yml`` file as the following example shows:
 
    enable_neutron_sriov: "yes"
 
-.. end
-
 Modify the ``/etc/kolla/config/neutron/ml2_conf.ini`` file and add
 ``sriovnicswitch`` to the ``mechanism_drivers``. Also, the provider
 networks used by SRIOV should be configured. Both flat and VLAN are configured
@@ -325,8 +303,6 @@ with the same physical network name in this example:
    [ml2_type_flat]
    flat_networks = sriovtenant1
 
-.. end
-
 Add ``PciPassthroughFilter`` to ``scheduler_default_filters``.
 
 The ``PciPassthroughFilter``, which is required by the Nova Scheduler service
@@ -343,8 +319,6 @@ required by The Nova Scheduler service on the controller node.
    scheduler_default_filters = <existing filters>, PciPassthroughFilter
    scheduler_available_filters = nova.scheduler.filters.all_filters
 
-.. end
-
 Edit the ``/etc/kolla/config/nova.conf`` file and add PCI device whitelisting;
 this is needed by the OpenStack Compute service(s) on the compute nodes.
 
@@ -354,8 +328,6 @@ this is needed by OpenStack Compute service(s) on the Compute.
    [pci]
    passthrough_whitelist = [{"devname": "ens785f0", "physical_network": "sriovtenant1"}]
 
-.. end
-
 Modify the ``/etc/kolla/config/neutron/sriov_agent.ini`` file. Add the
 physical network to interface mapping. Specific VFs can also be excluded
 here; leaving the list blank enables all VFs for the interface:
@@ -367,8 +339,6 @@ blank means to enable all VFs for the interface:
    physical_device_mappings = sriovtenant1:ens785f0
    exclude_devices =
 
-.. end
-
 Run deployment.
 
 Verification
@@ -392,8 +362,6 @@ output of both ``lspci`` and ``ip link show``.  For example:
    vf 2 MAC fa:16:3e:92:cf:12, spoof checking on, link-state auto, trust off
    vf 3 MAC fa:16:3e:00:a3:01, vlan 1000, spoof checking on, link-state auto, trust off
 
-.. end
-
 Verify the SRIOV Agent container is running on the compute node(s):
 
 .. code-block:: console
@@ -402,8 +370,6 @@ Verify the SRIOV Agent container is running on the compute node(s):
    CONTAINER ID   IMAGE                                                                COMMAND        CREATED         STATUS         PORTS  NAMES
    b03a8f4c0b80   10.10.10.10:4000/registry/centos-source-neutron-sriov-agent:17.04.0  "kolla_start"  18 minutes ago  Up 18 minutes         neutron_sriov_agent
 
-.. end
-
 Verify the SRIOV Agent service is present and UP:
 
 .. code-block:: console
@@ -416,8 +382,6 @@ Verify the SRIOV Agent service is present and UP:
    | 7c06bda9-7b87-487e-a645-cc6c289d9082 | NIC Switch agent   | av09-18-wcp | None              | :-)   | UP    | neutron-sriov-nic-agent   |
    +--------------------------------------+--------------------+-------------+-------------------+-------+-------+---------------------------+
 
-.. end
-
 Create a new provider network. Set ``provider-physical-network`` to the
 physical network name that was configured in ``/etc/kolla/config/nova.conf``.
 Set ``provider-network-type`` to the desired type. If using VLAN, ensure
@@ -442,16 +406,12 @@ Create a subnet with a DHCP range for the provider network:
      --allocation-pool start=11.0.0.5,end=11.0.0.100 \
      sriovnet1_sub1
 
-.. end
-
 Create a port on the provider network with ``vnic_type`` set to ``direct``:
 
 .. code-block:: console
 
    # openstack port create --network sriovnet1 --vnic-type=direct sriovnet1-port1
 
-.. end
-
 Start a new instance with the SRIOV port assigned:
 
 .. code-block:: console
@@ -471,8 +431,6 @@ dmesg on the compute node where the instance was placed.
    [ 2896.850028] ixgbe 0000:05:00.0: Setting VLAN 1000, QOS 0x0 on VF 3
    [ 2897.403367] vfio-pci 0000:05:10.4: enabling device (0000 -> 0002)
 
-.. end
-
 For more information see `OpenStack SRIOV documentation <https://docs.openstack.org/neutron/pike/admin/config-sriov.html>`_.
 
 Nova SRIOV
@@ -508,8 +466,6 @@ Compute service on the compute node also require the ``alias`` option under the
    passthrough_whitelist = [{"vendor_id": "8086", "product_id": "10fb"}]
    alias = [{"vendor_id":"8086", "product_id":"10ed", "device_type":"type-VF", "name":"vf1"}]
 
-.. end
-
 Run deployment.
 
 Verification
@@ -522,16 +478,12 @@ device from the PCI alias:
 
    # openstack flavor set sriov-flavor --property "pci_passthrough:alias"="vf1:1"
 
-.. end
-
 Start a new instance using the flavor:
 
 .. code-block:: console
 
    # openstack server create --flavor sriov-flavor --image fc-26 vm2
 
-.. end
-
 Verify VF devices were created and the instance starts successfully as in
 the Neutron SRIOV case.
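For example, the earlier checks can be repeated (the device and server names below follow the examples used above and may differ on your system):

```console
# ip link show ens785f0
# openstack server show vm2
```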
 
diff --git a/doc/source/reference/nova-fake-driver.rst b/doc/source/reference/nova-fake-driver.rst
index 33f0e06324..df003f8f46 100644
--- a/doc/source/reference/nova-fake-driver.rst
+++ b/doc/source/reference/nova-fake-driver.rst
@@ -32,8 +32,6 @@ the command line options.
    enable_nova_fake: "yes"
    num_nova_fake_per_node: 5
 
-.. end
-
 Each Compute node will run 5 ``nova-compute`` containers and 5
 ``neutron-plugin-agent`` containers. When booting an instance, no real
 instance is created, but :command:`nova list` shows the fake instances.
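To confirm the fake driver is in effect, the compute containers can be listed on a compute node (a quick sketch; the exact container names may vary by deployment):

```console
# docker ps | grep nova_compute
```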
diff --git a/doc/source/reference/osprofiler-guide.rst b/doc/source/reference/osprofiler-guide.rst
index afe8fd0bcd..2b8284b5d9 100644
--- a/doc/source/reference/osprofiler-guide.rst
+++ b/doc/source/reference/osprofiler-guide.rst
@@ -25,8 +25,6 @@ Enable ``OSprofiler`` in ``/etc/kolla/globals.yml`` file:
    enable_osprofiler: "yes"
    enable_elasticsearch: "yes"
 
-.. end
-
 Verify operation
 ----------------
 
@@ -43,8 +41,6 @@ UUID for :command:`openstack server create` command.
      --image cirros --flavor m1.tiny --key-name mykey \
      --nic net-id=${NETWORK_ID} demo
 
-.. end
-
 The previous command will output the command to retrieve OSprofiler trace.
 
 .. code-block:: console
@@ -52,8 +48,6 @@ The previous command will output the command to retrieve OSprofiler trace.
    $ osprofiler trace show --html <TRACE_ID> --connection-string \
      elasticsearch://<api_interface_address>:9200
 
-.. end
-
 For more information about how OSprofiler works, see
 `OSProfiler – Cross-project profiling library
 <https://docs.openstack.org/osprofiler/latest/>`__.
diff --git a/doc/source/reference/resource-constraints.rst b/doc/source/reference/resource-constraints.rst
index a70c4c6943..3649f26d0c 100644
--- a/doc/source/reference/resource-constraints.rst
+++ b/doc/source/reference/resource-constraints.rst
@@ -28,8 +28,6 @@ The resources currently supported by Kolla Ansible are:
     kernel_memory
     blkio_weight
 
-.. end
-
 Pre-deployment Configuration
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
@@ -89,8 +87,6 @@ add the following to the dimensions options section in
    default_container_dimensions:
      cpuset_cpus: "1"
 
-.. end
-
 For example, to constrain the number of CPUs that may be used by
 the ``nova_libvirt`` container, add the following to the dimensions
 options section in ``/etc/kolla/globals.yml``:
@@ -100,8 +96,6 @@ options section in ``/etc/kolla/globals.yml``:
    nova_libvirt_dimensions:
      cpuset_cpus: "2"
 
-.. end
-
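After deployment, the applied constraint can be verified directly with Docker (a verification sketch assuming the default ``nova_libvirt`` container name):

```console
# docker inspect -f '{{ .HostConfig.CpusetCpus }}' nova_libvirt
```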
 Deployment
 ~~~~~~~~~~
 
@@ -111,4 +105,3 @@ To deploy resource constrained containers, run the deployment as usual:
 
   $ kolla-ansible deploy -i /path/to/inventory
 
-.. end
diff --git a/doc/source/reference/skydive-guide.rst b/doc/source/reference/skydive-guide.rst
index 1940bea627..56732bb6ec 100644
--- a/doc/source/reference/skydive-guide.rst
+++ b/doc/source/reference/skydive-guide.rst
@@ -23,8 +23,6 @@ Enable Skydive in ``/etc/kolla/globals.yml`` file:
    enable_skydive: "yes"
    enable_elasticsearch: "yes"
 
-.. end
-
 Verify operation
 ----------------
 
diff --git a/doc/source/reference/swift-guide.rst b/doc/source/reference/swift-guide.rst
index 3197c209b0..92b780404d 100644
--- a/doc/source/reference/swift-guide.rst
+++ b/doc/source/reference/swift-guide.rst
@@ -33,8 +33,6 @@ for three disks:
        (( index++ ))
    done
 
-.. end
-
 For evaluation, loopback devices can be used in lieu of real disks:
 
 .. code-block:: console
@@ -49,8 +47,6 @@ For evaluation, loopback devices can be used in lieu of real disks:
        (( index++ ))
    done
 
-.. end
-
 Disks without a partition table
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
@@ -67,8 +63,6 @@ Given hard disks with labels swd1, swd2, swd3, use the following settings in
    swift_devices_match_mode: "prefix"
    swift_devices_name: "swd"
 
-.. end
-
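A whole-disk device can be given a matching label when creating its filesystem (an illustrative sketch; ``/dev/sdb`` is an assumed spare disk, and this command destroys its contents):

```console
# mkfs.xfs -f -L swd1 /dev/sdb
```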
 Rings
 ~~~~~
 
@@ -93,8 +87,6 @@ the environment variable and create ``/etc/kolla/config/swift`` directory:
    KOLLA_SWIFT_BASE_IMAGE="kolla/oraclelinux-source-swift-base:4.0.0"
    mkdir -p /etc/kolla/config/swift
 
-.. end
-
 Generate Object Ring
 --------------------
 
@@ -120,8 +112,6 @@ To generate Swift object ring, run the following commands:
        done
    done
 
-.. end
-
 Generate Account Ring
 ---------------------
 
@@ -147,8 +137,6 @@ To generate Swift account ring, run the following commands:
        done
    done
 
-.. end
-
 Generate Container Ring
 -----------------------
 
@@ -183,8 +171,6 @@ To generate Swift container ring, run the following commands:
          /etc/kolla/config/swift/${ring}.builder rebalance;
    done
 
-.. end
-
 For more information, see
 https://docs.openstack.org/project-install-guide/object-storage/ocata/initial-rings.html
 
@@ -197,8 +183,6 @@ Enable Swift in ``/etc/kolla/globals.yml``:
 
    enable_swift: "yes"
 
-.. end
-
 Once the rings are in place, deploying Swift is the same as for any other
 Kolla Ansible service:
 
@@ -206,8 +190,6 @@ Ansible service:
 
    # kolla-ansible deploy -i <path/to/inventory-file>
 
-.. end
-
 Verification
 ~~~~~~~~~~~~
 
diff --git a/doc/source/reference/tacker-guide.rst b/doc/source/reference/tacker-guide.rst
index 7977a80ce9..72f1fb4add 100644
--- a/doc/source/reference/tacker-guide.rst
+++ b/doc/source/reference/tacker-guide.rst
@@ -59,8 +59,6 @@ In order to enable them, you need to edit the file
    enable_mistral: "yes"
    enable_redis: "yes"
 
-.. end
-
 .. warning::
 
    Barbican is required in multinode deployments to share VIM fernet_keys.
@@ -74,8 +72,6 @@ Deploy tacker and related services.
 
    $ kolla-ansible deploy
 
-.. end
-
 Verification
 ~~~~~~~~~~~~
 
@@ -85,24 +81,18 @@ Generate the credentials file.
 
    $ kolla-ansible post-deploy
 
-.. end
-
 Source credentials file.
 
 .. code-block:: console
 
    $ . /etc/kolla/admin-openrc.sh
 
-.. end
-
 Create base neutron networks and glance images.
 
 .. code-block:: console
 
    $ ./tools/init-runonce
 
-.. end
-
 .. note::
 
    The ``init-runonce`` file is located in ``$PYTHON_PATH/kolla-ansible``
@@ -123,16 +113,12 @@ Install python-tackerclient.
 
    $ pip install python-tackerclient
 
-.. end
-
 Execute ``deploy-tacker-demo`` script to initialize the VNF creation.
 
 .. code-block:: console
 
    $ ./deploy-tacker-demo
 
-.. end
-
 The Tacker demo script will create a sample VNF Descriptor (VNFD) file,
 then register a default VIM, create a tacker VNFD and finally
 deploy a VNF from the previously created VNFD.
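The objects the script registers can be inspected afterwards with python-tackerclient (a sketch; output omitted):

```console
$ tacker vim-list
$ tacker vnfd-list
$ tacker vnf-list
```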
@@ -153,8 +139,6 @@ Verify tacker VNF status is ACTIVE.
    | c52fcf99-101d-427b-8a2d-c9ef54af8b1d | kolla-sample-vnf | {"VDU1": "10.0.0.10"} | ACTIVE | eb3aa497-192c-4557-a9d7-1dff6874a8e6 | 27e8ea98-f1ff-4a40-a45c-e829e53b3c41 |
    +--------------------------------------+------------------+-----------------------+--------+--------------------------------------+--------------------------------------+
 
-.. end
-
 Verify nova instance status is ACTIVE.
 
 .. code-block:: console
@@ -167,8 +151,6 @@ Verify nova instance status is ACTIVE.
    | d2d59eeb-8526-4826-8f1b-c50b571395e2 | ta-cf99-101d-427b-8a2d-c9ef54af8b1d-VDU1-fchiv6saay7p | ACTIVE | demo-net=10.0.0.10 | cirros | tacker.vnfm.infra_drivers.openstack.openstack_OpenStack-c52fcf99-101d-427b-8a2d-c9ef54af8b1d-VDU1_flavor-yl4bzskwxdkn |
    +--------------------------------------+-------------------------------------------------------+--------+--------------------+--------+-----------------------------------------------------------------------------------------------------------------------+
 
-.. end
-
 Verify Heat stack status is CREATE_COMPLETE.
 
 .. code-block:: console
@@ -181,8 +163,6 @@ Verify Heat stack status is CREATE_COMPLETE.
    | 289a6686-70f6-4db7-aa10-ed169fe547a6 | tacker.vnfm.infra_drivers.openstack.openstack_OpenStack-c52fcf99-101d-427b-8a2d-c9ef54af8b1d | 1243948e59054aab83dbf2803e109b3f | CREATE_COMPLETE | 2017-08-23T09:49:50Z | None         |
    +--------------------------------------+----------------------------------------------------------------------------------------------+----------------------------------+-----------------+----------------------+--------------+
 
-.. end
-
 After the correct functionality of tacker is verified, the tacker demo
 can be cleaned up by executing the ``cleanup-tacker`` script.
 
@@ -190,4 +170,3 @@ can be cleaned up executing ``cleanup-tacker`` script.
 
    $ ./cleanup-tacker
 
-.. end
diff --git a/doc/source/reference/vmware-guide.rst b/doc/source/reference/vmware-guide.rst
index 737429a519..7878f1f550 100644
--- a/doc/source/reference/vmware-guide.rst
+++ b/doc/source/reference/vmware-guide.rst
@@ -92,24 +92,18 @@ For more information, please see `VMware NSX-V documentation <https://docs.vmwar
    </service>
    </ConfigRoot>
 
-.. end
-
 Then refresh the firewall config by:
 
 .. code-block:: console
 
    # esxcli network firewall refresh
 
-.. end
-
 Verify that the firewall config is applied:
 
 .. code-block:: console
 
    # esxcli network firewall ruleset list
 
-.. end
-
 Deployment
 ----------
 
@@ -121,8 +115,6 @@ Enable VMware nova-compute plugin and NSX-V neutron-server plugin in
    nova_compute_virt_type: "vmware"
    neutron_plugin_agent: "vmware_nsxv"
 
-.. end
-
 .. note::
 
    VMware NSX-V also supports Neutron FWaaS, LBaaS and VPNaaS services; you can enable
@@ -141,8 +133,6 @@ If you want to set VMware datastore as cinder backend, enable it in
    cinder_backend_vmwarevc_vmdk: "yes"
    vmware_datastore_name: "TestDatastore"
 
-.. end
-
 If you want to set VMware datastore as glance backend, enable it in
 ``/etc/kolla/globals.yml``:
 
@@ -152,8 +142,6 @@ If you want to set VMware datastore as glance backend, enable it in
    vmware_vcenter_name: "TestDatacenter"
    vmware_datastore_name: "TestDatastore"
 
-.. end
-
 VMware options are required in ``/etc/kolla/globals.yml``. These options should
 be configured correctly according to your NSX-V environment.
 
@@ -167,8 +155,6 @@ Options for ``nova-compute`` and ``ceilometer``:
    vmware_vcenter_insecure: "True"
    vmware_vcenter_datastore_regex: ".*"
 
-.. end
-
 .. note::
 
    The VMware vCenter password has to be set in ``/etc/kolla/passwords.yml``.
@@ -177,8 +163,6 @@ Options for ``nova-compute`` and ``ceilometer``:
 
       vmware_vcenter_host_password: "admin"
 
-   .. end
-
 Options for Neutron NSX-V support:
 
 .. code-block:: yaml
@@ -214,8 +198,6 @@ Options for Neutron NSX-V support:
 
       vmware_nsxv_password: "nsx_manager_password"
 
-   .. end
-
 Then you should start the :command:`kolla-ansible` deployment normally, as in
 a KVM/QEMU deployment.
 
@@ -243,8 +225,6 @@ Enable VMware nova-compute plugin and NSX-V neutron-server plugin in
    nova_compute_virt_type: "vmware"
    neutron_plugin_agent: "vmware_dvs"
 
-.. end
-
 If you want to set VMware datastore as Cinder backend, enable it in
 ``/etc/kolla/globals.yml``:
 
@@ -254,8 +234,6 @@ If you want to set VMware datastore as Cinder backend, enable it in
    cinder_backend_vmwarevc_vmdk: "yes"
    vmware_datastore_name: "TestDatastore"
 
-.. end
-
 If you want to set VMware datastore as Glance backend, enable it in
 ``/etc/kolla/globals.yml``:
 
@@ -265,8 +243,6 @@ If you want to set VMware datastore as Glance backend, enable it in
    vmware_vcenter_name: "TestDatacenter"
    vmware_datastore_name: "TestDatastore"
 
-.. end
-
 VMware options are required in ``/etc/kolla/globals.yml``. These options should
 be configured correctly according to the vSphere environment you installed
 before. All options for nova, cinder and glance are the same as for NSX-V, except
@@ -282,8 +258,6 @@ Options for Neutron NSX-DVS support:
    vmware_dvs_dvs_name: "VDS-1"
    vmware_dvs_dhcp_override_mac: ""
 
-.. end
-
 .. note::
 
    The VMware NSX-DVS password has to be set in ``/etc/kolla/passwords.yml``.
@@ -292,8 +266,6 @@ Options for Neutron NSX-DVS support:
 
       vmware_dvs_host_password: "password"
 
-   .. end
-
 Then you should start the :command:`kolla-ansible` deployment normally, as in
 a KVM/QEMU deployment.
 
diff --git a/doc/source/reference/zun-guide.rst b/doc/source/reference/zun-guide.rst
index d4b65bbf16..26e7c98e76 100644
--- a/doc/source/reference/zun-guide.rst
+++ b/doc/source/reference/zun-guide.rst
@@ -21,8 +21,6 @@ To allow Zun Compute connect to the Docker Daemon, add the following in the
 
    ExecStart= -H tcp://<DOCKER_SERVICE_IP>:2375 -H unix:///var/run/docker.sock --cluster-store=etcd://<DOCKER_SERVICE_IP>:2379 --cluster-advertise=<DOCKER_SERVICE_IP>:2375
 
-.. end
-
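Whether the daemon is actually listening on the TCP socket can be checked with the Docker HTTP API (an optional sanity check; substitute your real address for the placeholder):

```console
$ curl http://<DOCKER_SERVICE_IP>:2375/version
```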
 .. note::
 
    ``DOCKER_SERVICE_IP`` is the zun-compute host IP address. ``2375`` is the port that
@@ -38,16 +36,12 @@ following variables:
    enable_kuryr: "yes"
    enable_etcd: "yes"
 
-.. end
-
 Deploy the OpenStack cloud and zun.
 
 .. code-block:: console
 
    $ kolla-ansible deploy
 
-.. end
-
 Verification
 ------------
 
@@ -57,16 +51,12 @@ Verification
 
       $ kolla-ansible post-deploy
 
-   .. end
-
 #. Source credentials file:
 
    .. code-block:: console
 
       $ . /etc/kolla/admin-openrc.sh
 
-   .. end
-
 #. Download and create a glance container image:
 
    .. code-block:: console
@@ -75,16 +65,12 @@ Verification
       $ docker save cirros | openstack image create cirros --public \
         --container-format docker --disk-format raw
 
-   .. end
-
 #. Create zun container:
 
    .. code-block:: console
 
       $ zun create --name test --net network=demo-net cirros ping -c4 8.8.8.8
 
-   .. end
-
    .. note::
 
       Kuryr does not support networks with DHCP enabled; disable DHCP in the
@@ -94,8 +80,6 @@ Verification
 
          $ openstack subnet set --no-dhcp <subnet>
 
-      .. end
-
 #. Verify container is created:
 
    .. code-block:: console
@@ -108,8 +92,6 @@ Verification
       | 3719a73e-5f86-47e1-bc5f-f4074fc749f2 | test | cirros        | Created | None       | 172.17.0.3 | []    |
       +--------------------------------------+------+---------------+---------+------------+------------+-------+
 
-   .. end
-
 #. Start container:
 
    .. code-block:: console
@@ -117,8 +99,6 @@ Verification
       $ zun start test
       Request to start container test has been accepted.
 
-   .. end
-
 #. Verify container:
 
    .. code-block:: console
@@ -134,7 +114,5 @@ Verification
       4 packets transmitted, 4 packets received, 0% packet loss
       round-trip min/avg/max = 95.884/96.376/96.721 ms
 
-   .. end
-
 For more information about how zun works, see
 `zun, OpenStack Container service <https://docs.openstack.org/zun/latest/>`__.
diff --git a/doc/source/user/multi-regions.rst b/doc/source/user/multi-regions.rst
index ad44a30514..1f0de8ee7f 100644
--- a/doc/source/user/multi-regions.rst
+++ b/doc/source/user/multi-regions.rst
@@ -32,8 +32,6 @@ Keystone and Horizon are enabled:
    enable_keystone: "yes"
    enable_horizon: "yes"
 
-.. end
-
 Then, change the value of ``multiple_regions_names`` to add the names of other
 regions. In this example, we consider two regions. The current one,
 formerly known as RegionOne, which is hidden behind
@@ -46,8 +44,6 @@ formerly knows as RegionOne, that is hided behind
        - "{{ openstack_region_name }}"
        - "RegionTwo"
 
-.. end
-
 .. note::
 
    Kolla uses these variables to create necessary endpoints into
@@ -83,8 +79,6 @@ the value of ``kolla_internal_fqdn`` in RegionOne:
        project_name: "admin"
        domain_name: "default"
 
-.. end
-
 Configuration files of cinder, nova, neutron, glance and so on have to be
 updated to contact RegionOne's Keystone. Fortunately, Kolla allows overriding
 all configuration files at the same time thanks to the
@@ -97,8 +91,6 @@ implies to create a ``global.conf`` file with the following content:
    www_authenticate_uri = {{ keystone_internal_url }}
    auth_url = {{ keystone_admin_url }}
 
-.. end
-
 The Placement API section inside the nova configuration file also has
 to be updated to contact RegionOne's Keystone. So create, in the same
 directory, a ``nova.conf`` file with the following content:
@@ -108,8 +100,6 @@ directory, a ``nova.conf`` file with below content:
    [placement]
    auth_url = {{ keystone_admin_url }}
 
-.. end
-
 The Heat section inside the configuration file also
 has to be updated to contact RegionOne's Keystone. So create, in the same
 directory, a ``heat.conf`` file with the following content:
@@ -126,8 +116,6 @@ directory, a ``heat.conf`` file with below content:
    [clients_keystone]
    www_authenticate_uri = {{ keystone_internal_url }}
 
-.. end
-
 The Ceilometer section inside the configuration file also
 has to be updated to contact RegionOne's Keystone. So create, in the same
 directory, a ``ceilometer.conf`` file with the following content:
@@ -137,8 +125,6 @@ directory, a ``ceilometer.conf`` file with below content:
    [service_credentials]
    auth_url = {{ keystone_internal_url }}
 
-.. end
-
 Then, reference the directory that contains these files in
 ``/etc/kolla/globals.yml``:
 
@@ -146,16 +132,12 @@ And link the directory that contains these files into the
 
    node_custom_config: path/to/the/directory/of/global&nova_conf/
 
-.. end
-
 Also, change the name of the current region. For instance, RegionTwo:
 
 .. code-block:: yaml
 
    openstack_region_name: "RegionTwo"
 
-.. end
-
 Finally, disable the deployment of Keystone and Horizon, which are
 unnecessary in this region, and run ``kolla-ansible``:
 
@@ -164,6 +146,4 @@ unnecessary in this region and run ``kolla-ansible``:
    enable_keystone: "no"
    enable_horizon: "no"
 
-.. end
-
 The configuration is the same for any other region.
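Putting the RegionTwo pieces together, its ``/etc/kolla/globals.yml`` ends up carrying at least the following settings (a recap of the values discussed above; the config path is the placeholder used earlier):

```yaml
openstack_region_name: "RegionTwo"
node_custom_config: path/to/the/directory/of/global&nova_conf/
enable_keystone: "no"
enable_horizon: "no"
```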
diff --git a/doc/source/user/multinode.rst b/doc/source/user/multinode.rst
index 46e0195b3c..64f4a50fbd 100644
--- a/doc/source/user/multinode.rst
+++ b/doc/source/user/multinode.rst
@@ -28,8 +28,6 @@ currently running:
 
    docker_registry: 192.168.1.100:5000
 
-.. end
-
 The Kolla community recommends using registry 2.3 or later. To deploy registry
 with version 2.3 or later, do the following:
 
@@ -38,8 +36,6 @@ with version 2.3 or later, do the following:
    cd kolla
    tools/start-registry
 
-.. end
-
 The Docker registry can be configured as a pull-through cache to proxy the
 official Kolla images hosted on Docker Hub. In order to configure the local
 registry as a pull-through cache, on the host machine set the environment
@@ -50,8 +46,6 @@ Docker Hub.
 
    export REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io
 
-.. end
-
 .. note::
 
    Pushing to a registry configured as a pull-through cache is unsupported.
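As an illustration, a standalone registry container honours the same variable when it is passed into the container environment (a sketch using the upstream ``registry:2`` image rather than Kolla's ``start-registry`` script):

```console
# docker run -d -p 5000:5000 \
    --name registry \
    -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
    registry:2
```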
@@ -82,8 +76,6 @@ is currently running:
      "insecure-registries" : ["192.168.1.100:5000"]
    }
 
-.. end
-
 Restart Docker by executing the following commands:
 
 For CentOS or Ubuntu with systemd:
@@ -92,16 +84,12 @@ For CentOS or Ubuntu with systemd:
 
    systemctl restart docker
 
-.. end
-
 For Ubuntu with upstart or sysvinit:
 
 .. code-block:: console
 
    service docker restart
 
-.. end
-
 .. _edit-inventory:
 
 Edit the Inventory File
@@ -134,8 +122,6 @@ controls how ansible interacts with remote hosts.
    control01      ansible_ssh_user=<ssh-username> ansible_become=True ansible_private_key_file=<path/to/private-key-file>
    192.168.122.24 ansible_ssh_user=<ssh-username> ansible_become=True ansible_private_key_file=<path/to/private-key-file>
 
-.. end
-
 .. note::
 
    Additional inventory parameters might be required according to your
@@ -159,8 +145,6 @@ grouped together and changing these around can break your deployment:
    [haproxy:children]
    network
 
-.. end
-
 Deploying Kolla
 ===============
 
@@ -184,8 +168,6 @@ to them:
 
    kolla-ansible prechecks -i <path/to/multinode/inventory/file>
 
-.. end
-
 .. note::
 
    RabbitMQ doesn't work with IP addresses, hence the IP address of
@@ -198,4 +180,3 @@ Run the deployment:
 
    kolla-ansible deploy -i <path/to/multinode/inventory/file>
 
-.. end
diff --git a/doc/source/user/operating-kolla.rst b/doc/source/user/operating-kolla.rst
index 91a5f228a0..6b20468853 100644
--- a/doc/source/user/operating-kolla.rst
+++ b/doc/source/user/operating-kolla.rst
@@ -81,8 +81,6 @@ If upgrading from ``5.0.0`` to ``6.0.0``, upgrade the kolla-ansible package:
 
    pip install --upgrade kolla-ansible==6.0.0
 
-.. end
-
 If this is a minor upgrade, and you do not wish to upgrade kolla-ansible
 itself, you may skip this step.
 
@@ -118,8 +116,6 @@ For the kolla docker images, the ``openstack_release`` is updated to ``6.0.0``:
 
    openstack_release: 6.0.0
 
-.. end
-
 Once the kolla release, the inventory file, and the relevant configuration
 files have been updated in this way, the operator may first want to 'pull'
 down the images to stage the ``6.0.0`` versions. This can be done safely
@@ -131,8 +127,6 @@ Run the command to pull the ``6.0.0`` images for staging:
 
    kolla-ansible pull
 
-.. end
-
 At a convenient time, the upgrade can now be run (it will complete more
 quickly if the images have been staged ahead of time).
 
@@ -145,8 +139,6 @@ To perform the upgrade:
 
    kolla-ansible upgrade
 
-.. end
-
 After this command is complete the containers will have been recreated from the
 new images.
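The ``openstack_release`` bump above can be sketched with ``sed``. The scratch file here is an illustrative stand-in for ``/etc/kolla/globals.yml``:

```shell
# Sketch: bump openstack_release in a globals file before an upgrade.
# Point globals_file at /etc/kolla/globals.yml for real use.
globals_file=/tmp/globals-upgrade.yml
printf 'openstack_release: 5.0.0\nkolla_base_distro: "centos"\n' > "$globals_file"

# Rewrite only the openstack_release line, leaving everything else intact.
sed -i 's/^openstack_release:.*/openstack_release: 6.0.0/' "$globals_file"
grep '^openstack_release' "$globals_file"
```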
 
@@ -220,4 +212,3 @@ For example:
    kolla-genpwd -p passwords.yml.new
    kolla-mergepwd --old passwords.yml.old --new passwords.yml.new --final /etc/kolla/passwords.yml
 
-.. end
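The merge semantics of ``kolla-mergepwd`` for flat ``key: value`` entries (keep the old value when a key exists in both files, take keys that only appear in the new file) can be sketched in ``awk``. Real ``passwords.yml`` files contain nested entries such as SSH keys, which this sketch does not handle:

```shell
# Sketch of the password-merge semantics on flat key: value lines.
# The filenames and passwords are illustrative only.
printf 'database_password: oldsecret\n' > /tmp/passwords.yml.old
printf 'database_password: changeme\nnew_service_password: changeme\n' > /tmp/passwords.yml.new

awk -F': ' '
    NR == FNR { old[$1] = $2; next }   # first pass: remember old values
    { print $1 ": " (($1 in old) ? old[$1] : $2) }
' /tmp/passwords.yml.old /tmp/passwords.yml.new > /tmp/passwords.yml.final

cat /tmp/passwords.yml.final
```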
diff --git a/doc/source/user/quickstart.rst b/doc/source/user/quickstart.rst
index 1aca38e264..1d8a88c13e 100644
--- a/doc/source/user/quickstart.rst
+++ b/doc/source/user/quickstart.rst
@@ -35,8 +35,6 @@ Install dependencies
       yum install python-pip
       pip install -U pip
 
-   .. end
-
    For Ubuntu, run:
 
    .. code-block:: console
@@ -45,8 +43,6 @@ Install dependencies
       apt-get install python-pip
       pip install -U pip
 
-   .. end
-
 #. Install the following dependencies:
 
    For CentOS, run:
@@ -55,16 +51,12 @@ Install dependencies
 
       yum install python-devel libffi-devel gcc openssl-devel libselinux-python
 
-   .. end
-
    For Ubuntu, run:
 
    .. code-block:: console
 
       apt-get install python-dev libffi-dev gcc libssl-dev python-selinux python-setuptools
 
-   .. end
-
 #. Install `Ansible <http://www.ansible.com>`__ from distribution packaging:
 
    .. note::
@@ -82,24 +74,18 @@ Install dependencies
 
       yum install ansible
 
-   .. end
-
    For Ubuntu, it can be installed by:
 
    .. code-block:: console
 
       apt-get install ansible
 
-   .. end
-
 #. Use ``pip`` to install or upgrade Ansible to latest version:
 
    .. code-block:: console
 
       pip install -U ansible
 
-   .. end
-
    .. note::
 
       It is recommended to use virtualenv to install non-system packages.
@@ -115,8 +101,6 @@ Install dependencies
       pipelining=True
       forks=100
 
-   .. end
-
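The tuning options above can be written with a heredoc. The scratch path stands in for the real config location, and the ``[defaults]`` section name is the standard Ansible one, assumed here:

```shell
# Sketch: write the Ansible tuning options into a scratch config file.
# For real use the target would typically be /etc/ansible/ansible.cfg.
cat > /tmp/ansible.cfg <<'EOF'
[defaults]
pipelining=True
forks=100
EOF

grep '=' /tmp/ansible.cfg
```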
 Install Kolla-ansible
 ~~~~~~~~~~~~~~~~~~~~~
 
@@ -129,8 +113,6 @@ Install Kolla-ansible for deployment or evaluation
 
       pip install kolla-ansible
 
-   .. end
-
 #. Copy ``globals.yml`` and ``passwords.yml`` to ``/etc/kolla`` directory.
 
    For CentOS, run:
@@ -139,16 +121,12 @@ Install Kolla-ansible for deployment or evaluation
 
       cp -r /usr/share/kolla-ansible/etc_examples/kolla /etc/
 
-   .. end
-
    For Ubuntu, run:
 
    .. code-block:: console
 
       cp -r /usr/local/share/kolla-ansible/etc_examples/kolla /etc/
 
-   .. end
-
 #. Copy ``all-in-one`` and ``multinode`` inventory files to
    the current directory.
 
@@ -158,16 +136,12 @@ Install Kolla-ansible for deployment or evaluation
 
       cp /usr/share/kolla-ansible/ansible/inventory/* .
 
-   .. end
-
    For Ubuntu, run:
 
    .. code-block:: console
 
       cp /usr/local/share/kolla-ansible/ansible/inventory/* .
 
-   .. end
-
 Install Kolla for development
 -----------------------------
 
@@ -178,8 +152,6 @@ Install Kolla for development
       git clone https://github.com/openstack/kolla
       git clone https://github.com/openstack/kolla-ansible
 
-   .. end
-
 #. Install requirements of ``kolla`` and ``kolla-ansible``:
 
    .. code-block:: console
@@ -187,8 +159,6 @@ Install Kolla for development
       pip install -r kolla/requirements.txt
       pip install -r kolla-ansible/requirements.txt
 
-   .. end
-
 #. Copy the configuration files to ``/etc/kolla`` directory.
    ``kolla-ansible`` holds the configuration files ( ``globals.yml`` and
    ``passwords.yml``) in ``etc/kolla``.
@@ -198,8 +168,6 @@ Install Kolla for development
       mkdir -p /etc/kolla
       cp -r kolla-ansible/etc/kolla/* /etc/kolla
 
-   .. end
-
 #. Copy the inventory files to the current directory. ``kolla-ansible`` holds
    inventory files ( ``all-in-one`` and ``multinode``) in the
    ``ansible/inventory`` directory.
@@ -208,8 +176,6 @@ Install Kolla for development
 
       cp kolla-ansible/ansible/inventory/* .
 
-   .. end
-
 Prepare initial configuration
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
@@ -253,8 +219,6 @@ than one node, edit ``multinode`` inventory:
       localhost       ansible_connection=local become=true
       # use localhost and sudo
 
-   .. end
-
    To learn more about inventory files, check
    `Ansible documentation <http://docs.ansible.com/ansible/latest/intro_inventory.html>`_.
 
@@ -264,8 +228,6 @@ than one node, edit ``multinode`` inventory:
 
       ansible -i multinode all -m ping
 
-   .. end
-
    .. note::
 
       Ubuntu might not come with python pre-installed. That will cause
@@ -285,8 +247,6 @@ For deployment or evaluation, run:
 
    kolla-genpwd
 
-.. end
-
 For development, run:
 
 .. code-block:: console
@@ -294,8 +254,6 @@ For development, run:
    cd kolla-ansible/tools
    ./generate_passwords.py
 
-.. end
-
 Kolla globals.yml
 -----------------
 
@@ -324,8 +282,6 @@ There are a few options that are required to deploy Kolla-Ansible:
 
      kolla_base_distro: "centos"
 
-  .. end
-
   Next "type" of installation needs to be configured.
   Choices are:
 
@@ -348,8 +304,6 @@ There are a few options that are required to deploy Kolla-Ansible:
 
      kolla_install_type: "source"
 
-  .. end
-
   To use DockerHub images, the default image tag has to be overridden. Images are
   tagged with release names. For example, to use stable Pike images, set
 
@@ -357,8 +311,6 @@ There are a few options that are required to deploy Kolla-Ansible:
 
      openstack_release: "pike"
 
-  .. end
-
   It's important to use the same version of images as kolla-ansible: if pip
   was used to install kolla-ansible, that is the latest stable version, so
   ``openstack_release`` should be set to queens. If git was used with
@@ -369,8 +321,6 @@ There are a few options that are required to deploy Kolla-Ansible:
 
      openstack_release: "master"
 
-  .. end
-
 * Networking
 
   Kolla-Ansible requires a few networking options to be set.
@@ -383,8 +333,6 @@ There are a few options that are required to deploy Kolla-Ansible:
 
      network_interface: "eth0"
 
-  .. end
-
   The second required interface is dedicated to Neutron external (or public)
   networks; it can be vlan or flat, depending on how the networks are created.
   This interface should be active but without an IP address. If not, instances
@@ -394,8 +342,6 @@ There are a few options that are required to deploy Kolla-Ansible:
 
      neutron_external_interface: "eth1"
 
-  .. end
-
   To learn more about network configuration, refer to `Network overview
   <https://docs.openstack.org/kolla-ansible/latest/admin/production-architecture-guide.html#network-configuration>`_.
 
@@ -408,8 +354,6 @@ There are a few options that are required to deploy Kolla-Ansible:
 
      kolla_internal_vip_address: "10.1.0.250"
 
-  .. end
-
 * Enable additional services
 
   By default Kolla-Ansible provides a bare compute kit; however, it does provide
@@ -420,8 +364,6 @@ There are a few options that are required to deploy Kolla-Ansible:
 
      enable_cinder: "yes"
 
-  .. end
-
   Kolla now supports many OpenStack services; there is
   `a list of available services
   <https://github.com/openstack/kolla-ansible/blob/master/README.rst#openstack-services>`_.
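Collected together, the required options sketched above might look like the following. The interface names and VIP address are illustrative values, not recommendations, and the scratch path stands in for ``/etc/kolla/globals.yml``:

```shell
# Sketch: the required globals.yml options in one place.
# Replace eth0/eth1 and the VIP with values from your own hosts.
cat > /tmp/globals-demo.yml <<'EOF'
---
kolla_base_distro: "centos"
kolla_install_type: "source"
openstack_release: "pike"
network_interface: "eth0"
neutron_external_interface: "eth1"
kolla_internal_vip_address: "10.1.0.250"
enable_cinder: "yes"
EOF

grep ':' /tmp/globals-demo.yml
```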
@@ -446,24 +388,18 @@ the correct versions.
 
         kolla-ansible -i ./multinode bootstrap-servers
 
-     .. end
-
   #. Do pre-deployment checks for hosts:
 
      .. code-block:: console
 
         kolla-ansible -i ./multinode prechecks
 
-     .. end
-
   #. Finally proceed to actual OpenStack deployment:
 
      .. code-block:: console
 
         kolla-ansible -i ./multinode deploy
 
-     .. end
-
 * For development, run:
 
   #. Bootstrap servers with kolla deploy dependencies:
@@ -473,24 +409,18 @@ the correct versions.
         cd kolla-ansible/tools
         ./kolla-ansible -i ../ansible/inventory/multinode bootstrap-servers
 
-     .. end
-
   #. Do pre-deployment checks for hosts:
 
      .. code-block:: console
 
         ./kolla-ansible -i ../ansible/inventory/multinode prechecks
 
-     .. end
-
   #. Finally proceed to actual OpenStack deployment:
 
      .. code-block:: console
 
         ./kolla-ansible -i ../ansible/inventory/multinode deploy
 
-     .. end
-
 When this playbook finishes, OpenStack should be up, running and functional!
 If an error occurs during execution, refer to
 `troubleshooting guide <https://docs.openstack.org/kolla-ansible/latest/user/troubleshooting.html>`_.
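The three phases above can be wrapped in one hypothetical helper. The ``echo`` makes it a dry run so the function can be exercised without a real inventory; drop it to execute the commands for real:

```shell
# Sketch: run the deployment phases in order against one inventory.
# run_phases is an illustrative helper, not part of kolla-ansible.
run_phases() {
    inventory=$1
    for phase in bootstrap-servers prechecks deploy; do
        echo kolla-ansible -i "$inventory" "$phase"
    done
}

run_phases ./multinode
```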
@@ -504,8 +434,6 @@ Using OpenStack
 
       pip install python-openstackclient python-glanceclient python-neutronclient
 
-   .. end
-
 #. OpenStack requires an openrc file where credentials for admin user
    are set. To generate this file:
 
@@ -516,8 +444,6 @@ Using OpenStack
         kolla-ansible post-deploy
         . /etc/kolla/admin-openrc.sh
 
-     .. end
-
    * For development, run:
 
      .. code-block:: console
@@ -526,8 +452,6 @@ Using OpenStack
         ./kolla-ansible post-deploy
         . /etc/kolla/admin-openrc.sh
 
-     .. end
-
 #. Depending on how you installed Kolla-Ansible, there is a script that will
    create example networks, images, and so on.
 
@@ -538,20 +462,15 @@ Using OpenStack
 
         . /usr/share/kolla-ansible/init-runonce
 
-     .. end
-
      Run the ``init-runonce`` script on Ubuntu:
 
      .. code-block:: console
 
         . /usr/local/share/kolla-ansible/init-runonce
 
-     .. end
-
    * For development, run:
 
      .. code-block:: console
 
         . kolla-ansible/tools/init-runonce
 
-     .. end
diff --git a/doc/source/user/troubleshooting.rst b/doc/source/user/troubleshooting.rst
index 81c26a95f4..9e8ef3c6e1 100644
--- a/doc/source/user/troubleshooting.rst
+++ b/doc/source/user/troubleshooting.rst
@@ -26,8 +26,6 @@ process or a problem in the ``globals.yml`` configuration.
       EOF
       systemctl restart docker
 
-   .. end
-
 To correct the problem where Operators have a misconfigured environment,
 the Kolla community has added a precheck feature which ensures the
 deployment targets are in a state where Kolla may deploy to them. To
@@ -37,8 +35,6 @@ run the prechecks:
 
    kolla-ansible prechecks
 
-.. end
-
 If a failure during deployment occurs, it nearly always occurs during evaluation
 of the software. Once the Operator learns the few configuration options
 required, it is highly unlikely they will experience a failure in deployment.
@@ -54,8 +50,6 @@ remove the failed deployment:
 
    kolla-ansible destroy -i <<inventory-file>>
 
-.. end
-
 Any time the tags of a release change, it is possible that the container
 implementation from older versions won't match the Ansible playbooks in a new
 version. If running multinode from a registry, each node's Docker image cache
@@ -66,8 +60,6 @@ refresh the docker cache from the local Docker registry:
 
    kolla-ansible pull
 
-.. end
-
 Debugging Kolla
 ~~~~~~~~~~~~~~~
 
@@ -78,8 +70,6 @@ targets by executing:
 
    docker ps -a
 
-.. end
-
 If any of the containers exited, this indicates a bug in the container. Please
 seek help by filing a `launchpad bug <https://bugs.launchpad.net/kolla-ansible/+filebug>`__
 or contacting the developers via IRC.
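On a host with many containers it can help to filter the ``docker ps -a`` output down to the exited ones. A sketch using a captured sample so no Docker daemon is needed; on a real host, pipe ``docker ps -a --format '{{.Names}} {{.Status}}'`` into the same ``awk``:

```shell
# Sketch: pick out exited containers from name/status pairs.
# The sample output below is illustrative only.
sample='fluentd Up 2 hours
nova_compute Exited (1) 5 minutes ago
keystone Up 2 hours'

echo "$sample" | awk '$2 == "Exited" { print $1 }'   # → nova_compute
```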
@@ -90,8 +80,6 @@ The logs can be examined by executing:
 
    docker exec -it fluentd bash
 
-.. end
-
 The logs from all services in all containers may be read from
 ``/var/log/kolla/SERVICE_NAME``
 
@@ -101,8 +89,6 @@ If the stdout logs are needed, please run:
 
    docker logs <container-name>
 
-.. end
-
 Note that most of the containers don't log to stdout, so the above command will
 provide no information.