chore: use sphinx-lint on the docs and releasenotes

Enable sphinx-lint via pre-commit and fix up the existing issues.
The most common one was single vs double backticks: reStructuredText
uses double backticks for inline literals (fixed-width text), whereas
Markdown uses a single backtick. The other issue was mixed tabs and
spaces in one file.
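
For reference, the distinction the default-role check enforces is
roughly:

    `text`   -> reST "default role" (interpreted text), usually a mistake
    ``text`` -> reST inline literal (fixed-width), like Markdown's `text`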

Change-Id: I28e91272d67d13db0fefaa7165e0ba887086eae9
Signed-off-by: Doug Goldstein <cardoe@cardoe.com>
Doug Goldstein
2025-08-04 22:06:23 -05:00
parent 35052e51b3
commit f750cb2d83
9 changed files with 105 additions and 99 deletions

View File

@@ -8,3 +8,9 @@ repos:
- id: mixed-line-ending
args: ['--fix', 'lf']
- id: check-merge-conflict
+- repo: https://github.com/sphinx-contrib/sphinx-lint
+  rev: v1.0.0
+  hooks:
+    - id: sphinx-lint
+      args: [--enable=default-role]
+      files: ^doc/|releasenotes
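
Once the hook is in place, it can be exercised locally with something
like the following (a suggested invocation, not prescribed by this
change):

    pre-commit run sphinx-lint --all-files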

View File

@@ -21,7 +21,7 @@ log aggregator and processor.
Fluent-bit, Fluentd meet OpenStack-Helm's logging requirements for gathering,
aggregating, and delivering of logged events. Fluent-bit runs as a daemonset on
-each node and mounts the `/var/lib/docker/containers` directory. The Docker
+each node and mounts the ``/var/lib/docker/containers`` directory. The Docker
container runtime engine directs events posted to stdout and stderr to this
directory on the host. Fluent-bit then forward the contents of that directory to
Fluentd. Fluentd runs as deployment at the designated nodes and expose service

View File

@@ -37,7 +37,7 @@ Typical networking API request is an operation of create/update/delete:
* port
Neutron-server service is scheduled on nodes with
-`openstack-control-plane=enabled` label.
+``openstack-control-plane=enabled`` label.
neutron-rpc-server
~~~~~~~~~~~~~~~~~~
@@ -77,7 +77,7 @@ implementing the interface. You can see the endpoints to class mapping in
`setup.cfg <https://github.com/openstack/neutron/blob/412c49b3930ce8aecb0a07aec50a9607058e5bc7/setup.cfg#L69>`_.
If the SDN of your choice is using the ML2 core plugin, then the extra
-options in `neutron/ml2/plugins/ml2_conf.ini` should be configured:
+options in ``neutron/ml2/plugins/ml2_conf.ini`` should be configured:
.. code-block:: ini
@@ -92,10 +92,10 @@ options in `neutron/ml2/plugins/ml2_conf.ini` should be configured:
mech_drivers = openvswitch, l2population
SDNs implementing ML2 driver can add extra/plugin-specific configuration
-options in `neutron/ml2/plugins/ml2_conf.ini`. Or define its own `ml2_conf_<name>.ini`
+options in ``neutron/ml2/plugins/ml2_conf.ini``. Or define its own ``ml2_conf_<name>.ini``
file where configs specific to the SDN would be placed.
-The above configuration options are handled by `neutron/values.yaml`:
+The above configuration options are handled by ``neutron/values.yaml``:
.. code-block:: yaml
@@ -119,7 +119,7 @@ The above configuration options are handled by `neutron/values.yaml`:
Neutron-rpc-server service is scheduled on nodes with
-`openstack-control-plane=enabled` label.
+``openstack-control-plane=enabled`` label.
neutron-dhcp-agent
~~~~~~~~~~~~~~~~~~
@@ -127,7 +127,7 @@ DHCP agent is running dnsmasq process which is serving the IP assignment and
DNS info. DHCP agent is dependent on the L2 agent wiring the interface.
So one should be aware that when changing the L2 agent, it also needs to be
changed in the DHCP agent. The configuration of the DHCP agent includes
-option `interface_driver`, which will instruct how the tap interface created
+option ``interface_driver``, which will instruct how the tap interface created
for serving the request should be wired.
.. code-block:: yaml
@@ -170,14 +170,14 @@ There is also a need for DHCP agent to pass ovs agent config file
--config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini
{{- end }}
-This requirement is OVS specific, the `ovsdb_connection` string is defined
-in `openvswitch_agent.ini` file, specifying how DHCP agent can connect to ovs.
+This requirement is OVS specific, the ``ovsdb_connection`` string is defined
+in ``openvswitch_agent.ini`` file, specifying how DHCP agent can connect to ovs.
When using other SDNs, running the DHCP agent may not be required. When the
SDN solution is addressing the IP assignments in another way, neutron's
DHCP agent should be disabled.
neutron-dhcp-agent service is scheduled to run on nodes with the label
-`openstack-control-plane=enabled`.
+``openstack-control-plane=enabled``.
neutron-l3-agent
~~~~~~~~~~~~~~~~
@@ -190,7 +190,7 @@ If the SDN implements its own version of L3 networking, neutron-l3-agent
should not be started.
neutron-l3-agent service is scheduled to run on nodes with the label
-`openstack-control-plane=enabled`.
+``openstack-control-plane=enabled``.
neutron-metadata-agent
~~~~~~~~~~~~~~~~~~~~~~
@@ -201,7 +201,7 @@ and L3 agents. Other SDNs may require to force the config driver in nova,
since the metadata service is not exposed by it.
neutron-metadata-agent service is scheduled to run on nodes with the label
-`openstack-control-plane=enabled`.
+``openstack-control-plane=enabled``.
Configuring network plugin
@@ -220,7 +220,7 @@ a new configuration option is added:
This option will allow to configure the Neutron services in proper way, by
checking what is the actual backed set in :code:`neutron/values.yaml`.
-In order to meet modularity criteria of Neutron chart, section `manifests` in
+In order to meet modularity criteria of Neutron chart, section ``manifests`` in
:code:`neutron/values.yaml` contains boolean values describing which Neutron's
Kubernetes resources should be deployed:
@@ -266,7 +266,7 @@ networking functionality that SDN is providing.
OpenVSwitch
~~~~~~~~~~~
The ovs set of daemonsets are running on the node labeled
-`openvswitch=enabled`. This includes the compute and controller/network nodes.
+``openvswitch=enabled``. This includes the compute and controller/network nodes.
For more flexibility, OpenVSwitch as a tool was split out of Neutron chart, and
put in separate chart dedicated OpenVSwitch. Neutron OVS agent remains in
Neutron chart. Splitting out the OpenVSwitch creates possibilities to use it
@@ -277,8 +277,8 @@ neutron-ovs-agent
As part of Neutron chart, this daemonset is running Neutron OVS agent.
It is dependent on having :code:`openvswitch-db` and :code:`openvswitch-vswitchd`
deployed and ready. Since its the default choice of the networking backend,
-all configuration is in place in `neutron/values.yaml`. :code:`neutron-ovs-agent`
-should not be deployed when another SDN is used in `network.backend`.
+all configuration is in place in ``neutron/values.yaml``. :code:`neutron-ovs-agent`
+should not be deployed when another SDN is used in ``network.backend``.
Script in :code:`neutron/templates/bin/_neutron-openvswitch-agent-init.sh.tpl`
is responsible for determining the tunnel interface and its IP for later usage
@@ -287,7 +287,7 @@ init container and main container with :code:`neutron-ovs-agent` via file
:code:`/tmp/pod-shared/ml2-local-ip.ini`.
Configuration of OVS bridges can be done via
-`neutron/templates/bin/_neutron-openvswitch-agent-init.sh.tpl`. The
+``neutron/templates/bin/_neutron-openvswitch-agent-init.sh.tpl``. The
script is configuring the external network bridge and sets up any
bridge mappings defined in :code:`conf.auto_bridge_add`. These
values should align with
@@ -314,7 +314,7 @@ than the default loopback mechanism.
Linuxbridge
~~~~~~~~~~~
Linuxbridge is the second type of Neutron reference architecture L2 agent.
-It is running on nodes labeled `linuxbridge=enabled`. As mentioned before,
+It is running on nodes labeled ``linuxbridge=enabled``. As mentioned before,
all nodes that are requiring the L2 services need to be labeled with linuxbridge.
This includes both the compute and controller/network nodes. It is not possible
to label the same node with both openvswitch and linuxbridge (or any other
@@ -333,8 +333,8 @@ using file :code:`/tmp/pod-shared/ml2-local-ip.ini` with main linuxbridge
container.
In order to use linuxbridge in your OpenStack-Helm deployment, you need to
-label the compute and controller/network nodes with `linuxbridge=enabled`
-and use this `neutron/values.yaml` override:
+label the compute and controller/network nodes with ``linuxbridge=enabled``
+and use this ``neutron/values.yaml`` override:
.. code-block:: yaml

View File

@@ -14,14 +14,14 @@ There are issues:
chart tarball remains unchanged.
* We use `chart-testing`_ to lint the charts. The chart-testing tool
requires that the chart version is bumped every time any file in the
-chart directory is changed. In every chart, we have a `values_overrides`
+chart directory is changed. In every chart, we have a ``values_overrides``
directory where we store the version-specific overrides as well as
example overrides for some specific configurations. These overrides are
not part of the chart tarball, but when they are changed, we bump the
chart version.
-* We use `apiVersion: v1` in `Chart.yaml`, and dependencies are stored in a
-separate `requirements.yaml` file. However, `apiVersion: v2` allows defining
-dependencies directly in the `Chart.yaml` file.
+* We use ``apiVersion: v1`` in ``Chart.yaml``, and dependencies are stored in a
+separate ``requirements.yaml`` file. However, ``apiVersion: v2`` allows defining
+dependencies directly in the ``Chart.yaml`` file.
* We track the release notes in a separate directory and we don't have a
CHANGELOG.md file in chart tarballs.
* Chart maintainers are assumed to update the same release notes file
@@ -39,10 +39,10 @@ Proposed Change
We propose to do the following:
* Move values overrides to a separate directory.
-* Use `apiVersion: v2` in `Chart.yaml`.
+* Use ``apiVersion: v2`` in ``Chart.yaml``.
* Move release notes to the CHANGELOG.md files.
* Once the Openstack is released we will bump the version of all charts to
-this new release, for example `2025.1.0`.
+this new release, for example ``2025.1.0``.
Semver assumes the following:
* MAJOR version when you make incompatible API changes
@@ -59,13 +59,13 @@ We propose to do the following:
Instead, we will increment the PATCH automatically when building the tarball.
The PATCH will be calculated as the number of commits related to a given
chart after the latest git tag.
-So for example if the latest tag is `2024.2.0` and we have 3 commits
+So for example if the latest tag is ``2024.2.0`` and we have 3 commits
in the nova chart after this tag, the version of the nova tarball will be
-`2024.2.3`.
+``2024.2.3``.
All the tarballs will be published with the build metadata showing
the commit SHA sum with which the tarball is built. The tarball
-version will look like `2025.1.X+<osh_commit_sha>_<osh_infra_commit_sha>`.
+version will look like ``2025.1.X+<osh_commit_sha>_<osh_infra_commit_sha>``.
Implementation
==============
@@ -84,23 +84,23 @@ implemented.
Values overrides
~~~~~~~~~~~~~~~~
-Move values_overrides from all charts to a separate directory `values`
-with the hierarchy `values_overrides/<chart-name>/<feature1>_<feature2>.yaml`.
+Move values_overrides from all charts to a separate directory ``values``
+with the hierarchy ``values_overrides/<chart-name>/<feature1>_<feature2>.yaml``.
The Openstack-Helm plugin is able to lookup the overrides in an arbitrary directory,
but the directory structure must be as described above.
-Update the version of all charts to `2024.2.0`
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-All the charts must be updated to the version `2024.2.0` in a single commit.
+Update the version of all charts to ``2024.2.0``
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+All the charts must be updated to the version ``2024.2.0`` in a single commit.
While developing the charts we will not change the version of the charts in
their Chart.yaml files in the git repo. So, in the git repos the versions
-of all charts will be the same, e.g. `2024.2.0`. It will be changed
+of all charts will be the same, e.g. ``2024.2.0``. It will be changed
twice a year when the Openstack is released and the version update
commit will be tagged appropriately.
However when we build a chart the tarball version will be updated every time.
The tarball version will be calculated automatically
-`2024.2.X+<osh_commit_sha>_<osh_infra_commit_sha>` where `X` is the number
+``2024.2.X+<osh_commit_sha>_<osh_infra_commit_sha>`` where ``X`` is the number
of commits related to the chart after the latest tag.
.. code-block:: bash
@@ -113,20 +113,20 @@ of commits related to the chart after the latest tag.
.. note::
When the chart itself is not changed but is re-built with the new version
of the helm-toolkit, the PATCH will not be changed and the tarball will
-be published with the same version but with the new build metadata (`${OSH_INFRA_COMMIT_SHA}`).
+be published with the same version but with the new build metadata (``${OSH_INFRA_COMMIT_SHA}``).
Set git tag for the Openstack-Helm repositories
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-We will set the git tag `2024.2.0` for all the Openstack-Helm repositories.
+We will set the git tag ``2024.2.0`` for all the Openstack-Helm repositories.
These tags are set by means of submitting a patch to the openstack/releases
repository. Since that we will set such tag twice a year when the Openstack
is released.
-Update `apiVersion` in `Chart.yaml`
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-Update `apiVersion` to `v2` in all `Chart.yaml` files and
-migrate the dependecies (helm-toolkit) from `requirements.yaml`
-to `Chart.yaml`.
+Update ``apiVersion`` in ``Chart.yaml``
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Update ``apiVersion`` to ``v2`` in all ``Chart.yaml`` files and
+migrate the dependecies (helm-toolkit) from ``requirements.yaml``
+to ``Chart.yaml``.
Reorganize the process of managing release notes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -136,19 +136,19 @@ It generates the release notes report using the git history.
We suggest the following workflow:
-* When a chart is updated, the maintainer runs the `reno new <chart>` command to create
-a new release note file `releasenotes/notes/<chart>-<hash>.yaml`.
+* When a chart is updated, the maintainer runs the ``reno new <chart>`` command to create
+a new release note file ``releasenotes/notes/<chart>-<hash>.yaml``.
* The maintainer fills in the new release note file with the necessary information.
* The maintainer commits the release note file.
-* While building the tarball we will use `reno report` command with a custom script
+* While building the tarball we will use ``reno report`` command with a custom script
to generate the release notes report and automatically prepare
-the `<chart>/CHANGELOG.md` file.
+the ``<chart>/CHANGELOG.md`` file.
Since we are not going to bump the chart version when we update it, all the
release notes will be bound to some git commits and we be put under the headers
that correspond to git tags.
-The format of the `CHANGELOG.md` file:
+The format of the ``CHANGELOG.md`` file:
.. code-block:: markdown
@@ -161,12 +161,12 @@ The format of the `CHANGELOG.md` file:
- Some update
- Previous update
-Where `X.Y.Z` is the tag in the git repository and the `X.Y.Z` section contains
-all the release notes made before the tag was set. The `X.Y.Z-<num_commits_after_X.Y.Z>`
+Where ``X.Y.Z`` is the tag in the git repository and the ``X.Y.Z`` section contains
+all the release notes made before the tag was set. The ``X.Y.Z-<num_commits_after_X.Y.Z>``
section contains all the release notes made after the tag was set.
-At this point we have the only tag `0.1.0`. So, when we set the `2024.2.0` tag almost all
-the release notes will go to this tag and the `CHANGELOG.md` file. So it will look like:
+At this point we have the only tag ``0.1.0``. So, when we set the ``2024.2.0`` tag almost all
+the release notes will go to this tag and the ``CHANGELOG.md`` file. So it will look like:
.. code-block:: markdown
@@ -185,7 +185,7 @@ Update the versioning policy
we will re-build it and publish with the new version according to how it is
described above.
All other charts also will be re-built with this new version of
-helm-toolkit (inside) and published with the new build metadata (new `$OSH_INFRA_COMMIT_SHA`).
+helm-toolkit (inside) and published with the new build metadata (new ``$OSH_INFRA_COMMIT_SHA``).
Helm-toolkit version will not be pinned in the charts.
* When a particular chart is changed, we will re-build and publish only this chart.
So all charts will be built and published independently of each other.
@@ -201,7 +201,7 @@ Documentation Impact
The user documentation must be updated and it must be emphasized that the chart version
is not equal to the Openstack release version and that the Openstack version is defined
by the images used with the charts. Also it must be explained that a particular version
-like `2024.2.X` is compatible with those Openstack releases that were maintained at the time
-`2024.2.X` was built and published (i.e `2023.1`, `2023.2`, `2024.1`, `2024.2`).
+like ``2024.2.X`` is compatible with those Openstack releases that were maintained at the time
+``2024.2.X`` was built and published (i.e ``2023.1``, ``2023.2``, ``2024.1``, ``2024.2``).
.. _chart-testing: https://github.com/helm/chart-testing.git

View File

@@ -8,9 +8,9 @@ Problem Description
Currently when an OpenStack-Helm chart deploys a OpenStack service,
it creates a service account that is used by other Openstack services
to interact with the service's API. For example, the Nova
-chart creates a service account called `nova` and other charts
+chart creates a service account called ``nova`` and other charts
like Cinder and Neutron configure Cinder and Neutron services
-to use the `nova` service account to interact with the Nova API.
+to use the ``nova`` service account to interact with the Nova API.
However, there might be scenarios where multiple Nova accounts
are necessary. For instance, if Neutron requires more permissive
@@ -39,13 +39,13 @@ E.g. the Neutron chart will create the following service accounts:
* neutron (used by Neutron to communicate with the Keystone API to check auth tokens
and other services can use it to get access to the Neutron API)
* neutron_nova (used by Neutron to get access to the Nova API instead
-of using `nova` service account created by the Nova chart)
+of using ``nova`` service account created by the Nova chart)
* neutron_placement (used by Neutron to get access to the Placement API
-instead of using `placement` service account managed by the Placement chart)
+instead of using ``placement`` service account managed by the Placement chart)
The proposed change is going to be backward compatible because the Neutron
-chart will still be able to use the `neutron` and `placement` service accounts
-managed by the Nova and Placement charts. Also the `neutron` service account
+chart will still be able to use the ``neutron`` and ``placement`` service accounts
+managed by the Nova and Placement charts. Also the ``neutron`` service account
can still be used by other charts to communicate with the Neutron API.
Implementation
@@ -60,15 +60,15 @@ Primary assignee:
Values
------
-Service accounts credentials are defined in the `values.yaml` files
-in the `.Values.endpoints.identity.auth` section. The section contains
+Service accounts credentials are defined in the ``values.yaml`` files
+in the ``.Values.endpoints.identity.auth`` section. The section contains
a bunch of dicts defining credentials for every service account.
Currently those dicts which correspond to service accounts managed by other charts
must be aligned with those charts values. For example, the Neutron values must
-define the `nova` service account the same way as the Nova chart does.
+define the ``nova`` service account the same way as the Nova chart does.
-The following is the example of how the `.Values.endpoints.identity.auth`
+The following is the example of how the ``.Values.endpoints.identity.auth``
section of a chart must be modified. The example is given for the Neutron chart:
.. code-block:: yaml
@@ -103,7 +103,7 @@ section of a chart must be modified. The example is given for the Neutron chart:
# Service account with the following username/password
# will be created by the Keystone user job
# and will be used for Neutron configuration. Also the
-# `role` field must be added to assign necessary roles
+# ``role`` field must be added to assign necessary roles
# to the service account.
nova:
role: admin,service
@@ -116,7 +116,7 @@ section of a chart must be modified. The example is given for the Neutron chart:
# Service account with the following username/password
# will be created by the Keystone user job
# and will be used for Neutron configuration. Also the
-# `role` field must be added to assign necessary roles
+# ``role`` field must be added to assign necessary roles
# to the service account.
placement:
role: admin,service
@@ -135,23 +135,23 @@ used by the `Keystone user manifest`_ to create the service accounts.
So the the template that deploys those secrets must be updated to
create the secrets for all service accounts defined in the
-`.Values.endpoints.identity.auth` section.
+``.Values.endpoints.identity.auth`` section.
-Also the `.Values.secrets.identity` section must be updated and
+Also the ``.Values.secrets.identity`` section must be updated and
secret names must be added for all service accounts defined in the
-`.Values.endpoints.identity.auth` section.
+``.Values.endpoints.identity.auth`` section.
Keystone user manifest
----------------------
-The Helm-toolkit chart defines the `Keystone user manifest`_
+The Helm-toolkit chart defines the ``Keystone user manifest``_
which is used by all Openstack charts to create service accounts.
-The manifest must be updated to be able to accept `serviceUsers` parameter
+The manifest must be updated to be able to accept ``serviceUsers`` parameter
which will be the list of service accounts to be created by the job.
-For backward compatibility if the `serviceUsers` parameter is not given
-then the manifest will use the `serviceUser` parameter or `serviceName` parameter
-to define the `serviceUsers` as a list with a single element.
+For backward compatibility if the ``serviceUsers`` parameter is not given
+then the manifest will use the ``serviceUser`` parameter or ``serviceName`` parameter
+to define the ``serviceUsers`` as a list with a single element.
.. code-block::

View File

@@ -194,10 +194,10 @@ No change in testing is required, *per se*.
It is expected the new software configuration would be tested with the
current practices.
-On top of that, the newly provided `example_values/` must
+On top of that, the newly provided ``example_values/`` must
aim for being tested **as soon as possible upon delivery**. Without tests,
those examples will decrepit. The changes in CI pipelines for making use
-of `example_values` is outside the scope of this spec.
+of ``example_values`` is outside the scope of this spec.
Documentation Impact
====================

View File

@@ -113,9 +113,9 @@ in Helm-Toolkit. The following manifests have yet to be combined:
**Standardization of values**
OpenStack-Helm has developed a number of conventions around the format and
-ordering of charts' `values.yaml` file, in support of both reusable Helm-Toolkit
+ordering of charts' ``values.yaml`` file, in support of both reusable Helm-Toolkit
functions and ease of developer ramp-up. For 1.0 readiness, OpenStack-Helm must
-cement these conventions within a spec, as well as the ordering of `values.yaml`
+cement these conventions within a spec, as well as the ordering of ``values.yaml``
keys. These conventions must then be gated to guarantee conformity.
The spec in progress can be found here [1]_.
@@ -137,9 +137,9 @@ in-place upgradability.
In order to maximize flexibility for operators, and to help facilitate
upgrades to newer versions of containerized software without editing
the chart itself, all configuration files will be specified dynamically
-based on `values.yaml` and overrides. In most cases the config files
+based on ``values.yaml`` and overrides. In most cases the config files
will be generated based on the YAML values tree itself, and in some
-cases the config file content will be specified in `values.yaml` as a
+cases the config file content will be specified in ``values.yaml`` as a
string literal.
Documentation
@@ -184,7 +184,7 @@ Release notes for the 1.0 release must be prepared, following OpenStack
best practices. The criteria for future changes that should be included
in release notes in an ongoing fashion must be defined / documented as well.
-- `values.yaml` changes
+- ``values.yaml`` changes
- New charts
- Any other changes to the external interface of OpenStack-Helm
@@ -236,7 +236,7 @@ Primary assignee:
- mattmceuen (Matt McEuen <matt.mceuen@att.com>) for coordination
- powerds (DaeSeong Kim <daeseong.kim@sk.com>) for the
-`values.yaml` ordering spec [1]_
+``values.yaml`` ordering spec [1]_
- portdirect (Pete Birley <pete@port.direct>) for the
release management spec [2]_
- randeep.jalli (Randeep Jalli <rj2083@att.com>) and

View File

@@ -68,9 +68,9 @@ Steps:
tee /tmp/ceph.yaml << EOF
...
-network:
-public: ${CEPH_PUBLIC_NETWORK}
-cluster: ${CEPH_CLUSTER_NETWORK}
+network:
+public: ${CEPH_PUBLIC_NETWORK}
+cluster: ${CEPH_CLUSTER_NETWORK}
images:
tags:
ceph_bootstrap: 'docker.io/ceph/daemon:master-0351083-luminous-ubuntu-16.04-x86_64'
@@ -84,19 +84,19 @@ Steps:
ceph_rgw: 'docker.io/ceph/daemon:master-0351083-luminous-ubuntu-16.04-x86_64'
ceph_cephfs_provisioner: 'quay.io/external_storage/cephfs-provisioner:v0.1.1'
ceph_rbd_provisioner: 'quay.io/external_storage/rbd-provisioner:v0.1.0'
-conf:
-ceph:
-global:
-fsid: ${CEPH_FS_ID}
-rgw_ks:
-enabled: true
-pool:
-crush:
-tunables: ${CRUSH_TUNABLES}
-target:
+conf:
+ceph:
+global:
+fsid: ${CEPH_FS_ID}
+rgw_ks:
+enabled: true
+pool:
+crush:
+tunables: ${CRUSH_TUNABLES}
+target:
# NOTE(portdirect): 5 nodes, with one osd per node
-osd: 5
-pg_per_osd: 100
+osd: 5
+pg_per_osd: 100
...
EOF

View File

@@ -40,7 +40,7 @@ can be done with the following Ceph command:
admin@kubenode01:~$
Use one of your Ceph Monitors to check the status of the cluster. A
-couple of things to note above; our health is `HEALTH\_OK`, we have 3
+couple of things to note above; our health is ``HEALTH_OK``, we have 3
mons, we've established a quorum, and we can see that all of our OSDs
are up and in the OSD map.