From dfb45813f27cbd17d0e25797071c2370e85cd758 Mon Sep 17 00:00:00 2001
From: Chandan Kumar <chkumar@redhat.com>
Date: Mon, 5 Jun 2017 18:23:46 +0530
Subject: [PATCH] Fixed CentOS Vagrant and NFS setup instructions

* Removed Fedora 22 instructions, as they are rarely used.
* Fixed tox -e docs issues

Change-Id: I8d30ae962180bf71eec10c4ab69f8479905ee21c
---
 doc/ceph-guide.rst       | 16 ++++++------
 doc/osprofiler-guide.rst | 10 +++++---
 doc/vagrant-dev-env.rst  | 55 +++++++++++++++++++++++++---------------
 3 files changed, 48 insertions(+), 33 deletions(-)

diff --git a/doc/ceph-guide.rst b/doc/ceph-guide.rst
index a533b03946..4796d70897 100644
--- a/doc/ceph-guide.rst
+++ b/doc/ceph-guide.rst
@@ -260,10 +260,10 @@ from each Ceph monitor node:
 Simple 3 Node Example
 =====================
 
-This example will show how to deploy Ceph in a very simple setup using 3 storage
-nodes. 2 of those nodes (kolla1 and kolla2) will also provide other services
-like control, network, compute, monitoring and compute. The 3rd (kolla3) node
-will only act as a storage node.
+This example will show how to deploy Ceph in a very simple setup using 3
+storage nodes. 2 of those nodes (kolla1 and kolla2) will also provide other
+services like control, network, compute and monitoring. The 3rd (kolla3) node
+will only act as a storage node.
 
 This example will only focus on the Ceph aspect of the deployment and assumes
 that you can already deploy a fully functional environment using 2 nodes that
@@ -271,10 +271,10 @@ does not employ Ceph yet. So we will be adding to the existing multinode
 inventory file you already have.
 
 Each of the 3 nodes are assumed to have two disk, ``/dev/sda`` (40GB)
-and ``/dev/sdb`` (10GB). Size is not all that important... but for now make sure
-each sdb disk are of the same size and are at least 10GB. This example will use
-a single disk (/dev/sdb) for both Ceph data and journal. It will not implement
-caching.
+and ``/dev/sdb`` (10GB). Size is not all that important, but for now make
+sure each sdb disk is of the same size and is at least 10GB. This example
+will use a single disk (``/dev/sdb``) for both Ceph data and journal. It will
+not implement caching.
 
 Here is the top part of the multinode inventory file used in the example
 environment before adding the 3rd node for Ceph:
diff --git a/doc/osprofiler-guide.rst b/doc/osprofiler-guide.rst
index 65ad92acab..b5ce2a7734 100644
--- a/doc/osprofiler-guide.rst
+++ b/doc/osprofiler-guide.rst
@@ -29,9 +29,10 @@ Verify operation
 
 Retrieve ``osprofiler_secret`` key present at ``/etc/kolla/passwords.yml``.
 
-Profiler UUIDs can be created executing OpenStack clients (Nova, Glance, Cinder, Heat, Keystone)
-with ``--profile`` option or using the official Openstack client with ``--os-profile``.
-In example to get the OSprofiler trace UUID for ``openstack server create``.
+Profiler UUIDs can be created by executing OpenStack clients (Nova, Glance,
+Cinder, Heat, Keystone) with the ``--profile`` option, or by using the
+official OpenStack client with ``--os-profile``. For example, to get the
+OSprofiler trace UUID for ``openstack server create``:
 
 .. code-block:: console
 
@@ -48,7 +49,8 @@ The previous command will output the command to retrieve OSprofiler trace.
 
 .. code-block:: console
 
-    $ osprofiler trace show --html <TRACE_ID> --connection-string elasticsearch://<api_interface_address>:9200
+    $ osprofiler trace show --html <TRACE_ID> --connection-string \
+      elasticsearch://<api_interface_address>:9200
 
 For more information about how OSprofiler works, see
 `OSProfiler – Cross-project profiling library
diff --git a/doc/vagrant-dev-env.rst b/doc/vagrant-dev-env.rst
index 4901733e86..8491f9e75d 100644
--- a/doc/vagrant-dev-env.rst
+++ b/doc/vagrant-dev-env.rst
@@ -41,17 +41,16 @@ choice. Various downloads can be found at the `Vagrant downloads
 
 Install required dependencies as follows:
 
-On CentOS 7::
+On CentOS::
 
-    sudo yum install vagrant ruby-devel libvirt-devel libvirt-python zlib-devel libpng-devel gcc git
-
-On Fedora 22 or later::
-
-    sudo dnf install vagrant ruby-devel libvirt-devel libvirt-python zlib-devel libpng-devel gcc git
+  sudo yum install ruby-devel libvirt-devel zlib-devel libpng-devel gcc \
+  qemu-kvm qemu-img libvirt libvirt-python libvirt-client virt-install \
+  bridge-utils
 
 On Ubuntu 16.04 or later::
 
-    sudo apt-get install vagrant ruby-dev ruby-libvirt python-libvirt libvirt-dev nfs-kernel-server zlib-dev libpng-dev gcc git
+  sudo apt-get install vagrant ruby-dev ruby-libvirt python-libvirt \
+  libvirt-dev nfs-kernel-server zlib-dev libpng-dev gcc git
 
 .. note:: Many distros ship outdated versions of Vagrant by default. When in
           doubt, always install the latest from the downloads page above.
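+
+To confirm which Vagrant release is installed (the exact version string will
+vary)::
+
+  vagrant --version
+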
@@ -59,36 +58,50 @@ On Ubuntu 16.04 or later::
 Next install the hostmanager plugin so all hosts are recorded in ``/etc/hosts``
 (inside each vm)::
 
-    vagrant plugin install vagrant-hostmanager vagrant-vbguest
+  vagrant plugin install vagrant-hostmanager
+
+If you are going to use VirtualBox, also install the vagrant-vbguest plugin::
+
+  vagrant plugin install vagrant-vbguest
 
 Vagrant supports a wide range of virtualization technologies. This
 documentation describes libvirt. To install vagrant-libvirt plugin::
 
-    vagrant plugin install --plugin-version ">= 0.0.31" vagrant-libvirt
+  vagrant plugin install --plugin-version ">= 0.0.31" vagrant-libvirt
 
 Some Linux distributions offer vagrant-libvirt packages, but the version they
 provide tends to be too old to run Kolla. A version of >= 0.0.31 is required.
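+
+A quick way to check which plugin versions actually got installed is::
+
+  vagrant plugin list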
 
+To use libvirt from Vagrant as an unprivileged user without being asked for a
+password, add the user to the libvirt group::
+
+  sudo gpasswd -a ${USER} libvirt
+  newgrp libvirt
+
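+To verify the group membership took effect, check that ``libvirt`` appears in
+the current user's groups::
+
+  id -nG
+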
 Setup NFS to permit file sharing between host and VMs. Contrary to the rsync
 method, NFS allows both way synchronization and offers much better performance
-than VirtualBox shared folders. On Fedora 22::
+than VirtualBox shared folders. On CentOS::
 
+    # Add the virtual interfaces to the internal zone
+    sudo firewall-cmd --zone=internal --add-interface=virbr0
+    sudo firewall-cmd --zone=internal --add-interface=virbr1
+    # Enable nfs, rpc-bind and mountd services for firewalld
+    sudo firewall-cmd --permanent --zone=internal --add-service=nfs
+    sudo firewall-cmd --permanent --zone=internal --add-service=rpc-bind
+    sudo firewall-cmd --permanent --zone=internal --add-service=mountd
+    sudo firewall-cmd --permanent --zone=internal --add-port=2049/udp
+    sudo firewall-cmd --permanent --add-port=2049/tcp
+    sudo firewall-cmd --permanent --add-port=111/udp
+    sudo firewall-cmd --permanent --add-port=111/tcp
+    sudo firewall-cmd --reload
+    sudo systemctl restart firewalld
+    # Start required services for NFS
     sudo systemctl start nfs-server
     sudo systemctl start rpcbind.service
-    sudo systemctl start mountd.service
-    firewall-cmd --permanent --add-port=2049/udp
-    firewall-cmd --permanent --add-port=2049/tcp
-    firewall-cmd --permanent --add-port=111/udp
-    firewall-cmd --permanent --add-port=111/tcp
-    firewall-cmd --permanent --add-service=nfs
-    firewall-cmd --permanent --add-service=rpcbind
-    firewall-cmd --permanent --add-service=mountd
-    sudo systemctl restart firewalld
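+
+After reloading, the applied rules can be inspected per zone (interface names
+may differ on your system)::
+
+    sudo firewall-cmd --zone=internal --list-all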
 
 Ensure your system has libvirt and associated software installed and setup
-correctly. On Fedora 22::
+correctly. On CentOS::
 
-    sudo dnf install @virtualization
     sudo systemctl start libvirtd
     sudo systemctl enable libvirtd
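+
+If libvirt is set up correctly, listing the defined domains should succeed
+(an empty list is fine on a fresh host)::
+
+    sudo virsh list --all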