Fix lint issues with documentation

The doc8 linter found several syntax problems in our docs, primarily a
large number of places where we used single backticks to surround
something when we should have used double backticks.

This is frontrunning a change that will add these checks to CI.
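
For reference, the check can be reproduced locally before it is wired into
CI. This is a minimal sketch, assuming doc8 is installed from PyPI and the
documentation tree lives under doc/source as it does in this repository:

    # install the linter and run it against the documentation tree
    pip install doc8
    doc8 doc/source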

Change-Id: Ib23b5728c072f2008cb3b19e9fb7192ee5d82413
Author: Jay Faulkner, 2024-10-29 14:34:48 -07:00
Parent: 045249f60d
Commit: f6191f2969
31 changed files with 120 additions and 119 deletions
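
As background for the changes below (an illustrative snippet, not an excerpt
from any single changed file): in reStructuredText, single backticks produce
interpreted text under the default role, which typically renders like
emphasis, while double backticks produce an inline literal, which is what
these docs intend for option names, values, and identifiers:

    `manageable`      interpreted text (default role)
    ``manageable``    inline literal (fixed-width)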


@@ -16,16 +16,16 @@ How it works
 The expected workflow is as follows:
-#. The node is discovered by manually powering it on and getting the
-`manual-management` hardware type and `agent` power interface.
+#. The node is discovered by manually powering it on and gets the
+``manual-management`` hardware type and ``agent`` power interface.
 If discovery is not used, a node can be enrolled through the API and then
 powered on manually.
-#. The operator moves the node to `manageable`. It works because the `agent`
+#. The operator moves the node to ``manageable``. It works because the ``agent``
 power only requires to be able to connect to the agent.
-#. The operator moves the node to `available`. Cleaning happens normally via
+#. The operator moves the node to ``available``. Cleaning happens normally via
 the already running agent. If a reboot is needed, it is done by telling the
 agent to reboot the node in-band.


@@ -5,9 +5,10 @@ API Audit Logging
 =================
 Audit middleware supports the delivery of CADF audit events via the Oslo messaging
-notifier capability. Based on the `notification_driver` configuration, audit events
-can be routed to messaging infrastructure (notification_driver = messagingv2)
-or can be routed to a log file (`[oslo_messaging_notifications]/driver = log`).
+notifier capability. Based on the ``notification_driver`` configuration, audit
+events can be routed to messaging infrastructure (notification_driver =
+messagingv2) or can be routed to a log file (
+``[oslo_messaging_notifications]/driver = log``).
 Audit middleware creates two events per REST API interaction. The first event has
 information extracted from request data and the second one has request outcome
@@ -16,8 +17,8 @@ information extracted from request data and the second one has request outcome
 Enabling API Audit Logging
 ==========================
-Audit middleware is available as part of `keystonemiddleware` (>= 1.6) library.
-For information regarding how audit middleware functions refer
+Audit middleware is available as part of ``keystonemiddleware`` (>= 1.6)
+library. For information regarding how audit middleware functions refer
 :keystonemiddleware-doc:`here <audit.html>`.
 Auditing can be enabled for the Bare Metal service by making the following changes


@@ -140,10 +140,10 @@ Use without the Compute Service
 -------------------------------
 As discussed in other sections, the Bare Metal service has a concept of a
-`connector` that is used to represent an interface that is intended to
+``connector`` that is used to represent an interface that is intended to
 be utilized to attach the remote volume.
-In addition to the connectors, we have a concept of a `target` that can be
+In addition to the connectors, we have a concept of a ``target`` that can be
 defined via the API. While a user of this feature through the Compute
 service would automatically have a new target record created for them,
 it is not explicitly required and can be performed manually.


@@ -36,7 +36,7 @@ Preparation:
 - Variable value: C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Program Files\Git\bin
 - Rename virtual switch name in Windows Server 2012R2/ 2016 in
-``Virtual Switch Manager`` into `external`.
+``Virtual Switch Manager`` into ``external``.
 Implementation:
 ~~~~~~~~~~~~~~~
@@ -56,7 +56,7 @@ Implementation:
 git clone https://github.com/cloudbase/windows-openstack-imaging-tools.git
-* ``Step 5``: Create & running script `create-windows-cloud-image.ps1`:
+* ``Step 5``: Create & running script ``create-windows-cloud-image.ps1``:
 .. code-block:: console


@@ -25,7 +25,7 @@ influences how the ironic conductor calculates (and thus allocates)
 baremetal nodes under ironic's management. This calculation is performed
 independently by each operating conductor and as such if a conductor has
 a :oslo.config:option:`conductor.conductor_group` configuration option defined in its
-`ironic.conf` configuration file, the conductor will then be limited to
+``ironic.conf`` configuration file, the conductor will then be limited to
 only managing nodes with a matching ``conductor_group`` string.
 .. note::


@@ -370,7 +370,7 @@ Node configuration
 * The following parameters are mandatory in ``driver_info``
 if ``ilo-inspect`` inspect interface is used and SNMPv3 inspection
-(`SNMPv3 Authentication` in `HPE iLO4 User Guide`_) is desired:
+(``SNMPv3 Authentication`` in `HPE iLO4 User Guide`_) is desired:
 * ``snmp_auth_user`` : The SNMPv3 user.
@@ -899,7 +899,7 @@ The hardware type ``ilo`` supports hardware inspection.
 an error. This feature is available in proliantutils release
 version >= 2.2.0.
 * The iLO must be updated with SNMPv3 authentication details.
-Please refer to the section `SNMPv3 Authentication` in `HPE iLO4 User Guide`_
+Please refer to the section ``SNMPv3 Authentication`` in `HPE iLO4 User Guide`_
 for setting up authentication details on iLO.
 The following parameters are mandatory to be given in driver_info
 for SNMPv3 inspection:
@@ -1583,7 +1583,7 @@ configuration of RAID:
 DIB support for Proliant Hardware Manager
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-Install ``ironic-python-agent-builder`` following the guide [1]_
+Install `ironic-python-agent-builder`_
 To create an agent ramdisk with ``Proliant Hardware Manager``,
 use the ``proliant-tools`` element in DIB::
@@ -1615,7 +1615,7 @@ This clean step is performed as part of automated cleaning and it is disabled
 by default. See :ref:`InbandvsOutOfBandCleaning` for more information on
 enabling/disabling a clean step.
-Install ``ironic-python-agent-builder`` following the guide [1]_
+Install `ironic-python-agent-builder`_.
 To create an agent ramdisk with ``Proliant Hardware Manager``, use the
 ``proliant-tools`` element in DIB::
@@ -1835,7 +1835,7 @@ the node's ``driver_info``. To update SSL certificates into iLO,
 refer to `HPE Integrated Lights-Out Security Technology Brief <http://h20564.www2.hpe.com/hpsc/doc/public/display?docId=c04530504>`_.
 Use iLO hostname or IP address as a 'Common Name (CN)' while
 generating Certificate Signing Request (CSR). Use the same value as
-`ilo_address` while enrolling node to Bare Metal service to avoid SSL
+``ilo_address`` while enrolling node to Bare Metal service to avoid SSL
 certificate validation errors related to hostname mismatch.
 Rescue mode support
@@ -2072,5 +2072,5 @@ more information.
 .. _`Guidelines for SPP ISO`: https://h17007.www1.hpe.com/us/en/enterprise/servers/products/service_pack/spp
 .. _`SUM`: https://h17007.www1.hpe.com/us/en/enterprise/servers/products/service_pack/hpsum/index.aspx
 .. _`SUM User Guide`: https://h20565.www2.hpe.com/hpsc/doc/public/display?docId=c05210448
-.. [1] `ironic-python-agent-builder`: https://docs.openstack.org/ironic-python-agent-builder/latest/install/index.html
+.. _`ironic-python-agent-builder`: https://docs.openstack.org/ironic-python-agent-builder/latest/install/index.html
 .. _`HPE Integrated Lights-Out Security Technology Brief`: http://h20564.www2.hpe.com/hpsc/doc/public/display?docId=c04530504


@@ -94,8 +94,8 @@ A node with Intel SST-PP can be configured to use it via
 * ``intel_speedselect_config``:
 Hexadecimal code of Intel SST-PP configuration. Accepted values are
 '0x00', '0x01', '0x02'. These values correspond to
-`Intel SST-PP Config Base`, `Intel SST-PP Config 1`,
-`Intel SST-PP Config 2` respectively. The input value must be a string.
+``Intel SST-PP Config Base``, ``Intel SST-PP Config 1``,
+``Intel SST-PP Config 2`` respectively. The input value must be a string.
 * ``socket_count``:
 Number of sockets in the node. The input value must be a positive


@@ -88,7 +88,7 @@ Changing The Default IPMI Credential Persistence Method
 - ``store_cred_in_env``: :oslo.config:option:`ipmi.store_cred_in_env`.
-The `store_cred_in_env` configuration option allow users to switch
+The ``store_cred_in_env`` configuration option allow users to switch
 between file-based and environment variable persistence methods for
 IPMI password.
@@ -227,7 +227,7 @@ a value that can be used from the list provided (from last to first):
 cipher_suite_versions = 1,2,3,6,7,8,11,12
 To find the suitable values for this configuration, you can check the field
-`RMCP+ Cipher Suites` after running an ``ipmitool`` command, e.g:
+``RMCP+ Cipher Suites`` after running an ``ipmitool`` command, e.g:
 .. code-block:: console


@@ -248,7 +248,7 @@ installs two firmware updates.
 }]
-It is also possible to use `runbooks` for firmware updates.
+It is also possible to use ``runbooks`` for firmware updates.
 .. code-block:: console
@@ -281,9 +281,9 @@ In the following example, the JSON is specified directly on the command line:
 .. note::
 For Dell machines you must extract the firmimgFIT.d9 from the iDRAC.exe
-This can be done using the command `7za e iDRAC_<VERSION>.exe`.
+This can be done using the command ``7za e iDRAC_<VERSION>.exe``.
 .. note::
 For HPE machines you must extract the ilo5_<version>.bin from the
 ilo5_<version>.fwpkg
-This can be done using the command `7za e ilo<version>.fwpkg`.
+This can be done using the command ``7za e ilo<version>.fwpkg``.


@@ -31,7 +31,7 @@ for the individual tests will be outlined below.
 CPU burn-in
 ===========
-The options, following a `agent_burnin_` + stress-ng stressor (`cpu`) +
+The options, following a ``agent_burnin_`` + stress-ng stressor (``cpu``) +
 stress-ng option schema, are:
 * ``agent_burnin_cpu_timeout`` (default: 24 hours)
@@ -57,7 +57,7 @@ Then launch the test with:
 Memory burn-in
 ==============
-The options, following a `agent_burnin_` + stress-ng stressor (`vm`) +
+The options, following a ``agent_burnin_`` + stress-ng stressor (``vm``) +
 stress-ng option schema, are:
 * ``agent_burnin_vm_timeout`` (default: 24 hours)
@@ -85,7 +85,7 @@ Then launch the test with:
 Disk burn-in
 ============
-The options, following a `agent_burnin_` + fio stressor (`fio_disk`) +
+The options, following a ``agent_burnin_`` + fio stressor (``fio_disk``) +
 fio option schema, are:
 * agent_burnin_fio_disk_runtime (default: 0, meaning no time limit)


@@ -14,7 +14,7 @@ After a successful inspection, you can get both parts as JSON with:
 $ baremetal node inventory save <NODE>
-Use `jq` to filter the parts you need, e.g. only the inventory itself:
+Use ``jq`` to filter the parts you need, e.g. only the inventory itself:
 .. code-block:: console


@@ -117,7 +117,7 @@ Format of JSON for deploy steps argument is described in `Deploy step format`_
 section.
 .. note::
-Starting with `ironicclient` 4.6.0 you can provide a YAML file for
+Starting with ``ironicclient`` 4.6.0 you can provide a YAML file for
 ``--deploy-steps``.
 Excluding the default steps
@@ -190,7 +190,7 @@ An invocation of a deploy step is defined in a deploy template as follows::
 }
 A deploy template contains a list of one or more such steps. Each combination
-of `interface` and `step` may only be specified once in a deploy template.
+of ``interface`` and ``step`` may only be specified once in a deploy template.
 Matching deploy templates
 -------------------------


@@ -61,8 +61,8 @@ new fields, while macroversion bumps are backwards-incompatible and may have
 fields removed.
 Versioned notifications are emitted by default to the
-`ironic_versioned_notifications` topic. This can be changed and it is
-configurable in the ironic.conf with the `versioned_notifications_topics`
+``ironic_versioned_notifications`` topic. This can be changed and it is
+configurable in the ironic.conf with the ``versioned_notifications_topics``
 config option.
 Available notifications


@@ -34,7 +34,7 @@ Compute-Baremetal Power Sync
 Each ``nova-compute`` process in the Compute service runs a periodic task which
 synchronizes the power state of servers between its database and the compute
 driver. If enabled, it runs at an interval defined by the
-`sync_power_state_interval` config option on the ``nova-compute`` process.
+``sync_power_state_interval`` config option on the ``nova-compute`` process.
 In case of the compute driver being baremetal driver, this sync will happen
 between the databases of the compute and baremetal services. Since the sync
 happens on the ``nova-compute`` process, the state in the compute database


@@ -485,8 +485,8 @@ RAID deployments where Ironic does not have access to any image metadata
 Using RAID in nova flavor for scheduling
 ========================================
-The operator can specify the `raid_level` capability in nova flavor for node to be selected
-for scheduling::
+The operator can specify the ``raid_level`` capability in nova flavor for node
+to be selected for scheduling::
 openstack flavor set my-baremetal-flavor --property capabilities:raid_level="1+0"


@@ -208,7 +208,7 @@ directory back::
 API Errors
 ==========
-The `debug_tracebacks_in_api` config option may be set to return tracebacks
+The ``debug_tracebacks_in_api`` config option may be set to return tracebacks
 in the API response for all 4xx and 5xx errors.
 .. _retrieve_deploy_ramdisk_logs:
@@ -428,7 +428,7 @@ the IPMI port to be unreachable through ipmitool, as shown:
 $ ipmitool -I lan -H ipmi_host -U ipmi_user -P ipmi_pass chassis power status
 Error: Unable to establish LAN session
-To fix this, enable `IPMI over lan` setting using your BMC tool or web app.
+To fix this, enable ``IPMI over lan`` setting using your BMC tool or web app.
 Troubleshooting lanplus interface
 ---------------------------------
@@ -441,7 +441,7 @@ When working with lanplus interfaces, you may encounter the following error:
 Error in open session response message : insufficient resources for session
 Error: Unable to establish IPMI v2 / RMCP+ session
-To fix that issue, please enable `RMCP+ Cipher Suite3 Configuration` setting
+To fix that issue, please enable ``RMCP+ Cipher Suite3 Configuration`` setting
 using your BMC tool or web app.
 Why are my nodes stuck in a "-ing" state?


@@ -40,10 +40,10 @@ Create a new Job
 ================
 Identify among the existing jobs the one that most closely resembles the
-scenario you want to test, the existing job will be used as `parent` in your
+scenario you want to test, the existing job will be used as ``parent`` in your
 job definition.
 Now you will only need to either overwrite or add variables to your job
-definition under the `vars` section to represent the desired scenario.
+definition under the ``vars`` section to represent the desired scenario.
 The code block below shows the minimal structure of a new job definition that
 you need to add to ironic-jobs.yaml_.
@@ -58,8 +58,8 @@ you need to add to ironic-jobs.yaml_.
 <var1>: <new value>
 After having the definition of your new job you just need to add the job name
-to the project.yaml_ under `check` and `gate`. Only jobs that are voting
-should be in the `gate` section.
+to the project.yaml_ under ``check`` and ``gate``. Only jobs that are voting
+should be in the ``gate`` section.
 .. code-block:: yaml


@@ -83,7 +83,7 @@ Feature Submission Process
 about the RFE, and whether to approve it will occur. If the RFE has not
 been triaged and you'd like it to receive immediate attention, add it to
 the Open Discussion section of our
-`weekly meeting agenda <https://wiki.openstack.org/wiki/Meetings/Ironic>`,
+`weekly meeting agenda <https://wiki.openstack.org/wiki/Meetings/Ironic>`_,
 and, timezone permitting, attend the meeting to advocate for your RFE.
 #. Contributors will evaluate the RFE and may advise the submitter to file a
@@ -111,7 +111,7 @@ Change Tracking
 Please ensure work related to a bug or RFE is tagged with the bug. This
 generally is a "Closes-bug", "Partial-bug" or "Related-bug" tag as described
 in the
-`Git Commit messages guide <https://wiki.openstack.org/wiki/GitCommitMessages#Including_external_references>``.
+`Git Commit messages guide <https://wiki.openstack.org/wiki/GitCommitMessages#Including_external_references>`_.
 .. note:: **RFEs may only be approved by members of the ironic-core team**.
@@ -253,8 +253,8 @@ Ironic Specs Process
 Specifications must follow the template which can be found at
 `specs/template.rst <https://opendev.org/openstack/ironic-specs/src/branch/
 master/specs/template.rst>`_, which is quite self-documenting. Specifications are
-proposed by adding them to the `specs/approved` directory, adding a soft link
-to it from the `specs/not-implemented` directory, and posting it for
+proposed by adding them to the ``specs/approved`` directory, adding a soft link
+to it from the ``specs/not-implemented`` directory, and posting it for
 review to Gerrit. For more information, please see the `README <https://git.
 openstack.org/cgit/openstack/ironic-specs/tree/README.rst>`_.


@@ -5,7 +5,7 @@ Debugging CI failures
 =====================
-If you see `FAILURE` in one or more jobs for your patch please don't panic.
+If you see ``FAILURE`` in one or more jobs for your patch please don't panic.
 This guide may help you to find the initial reason for the failure.
 When clicking in the failed job you will be redirect to the Zuul web page that
 contains all the information about the job build.
@@ -14,14 +14,14 @@ contains all the information about the job build.
 Zuul Web Page
 =============
-The page has three tabs: `Summary`, `Logs` and `Console`.
+The page has three tabs: ``Summary``, ``Logs`` and ``Console``.
 * Summary: Contains overall information about the build of the job, if the job
 build failed it will contain a general output of the failure.
 * Logs: Contains all configurations and log files about all services that
 were used in the job. This will give you an overall idea of the failures and
-you can identify services that may be involved. The `job-output` file can
+you can identify services that may be involved. The ``job-output`` file can
 give an overall idea of the failures and what services may be involved.
 * Console: Contains all the playbooks that were executed, by clicking in the


@@ -61,7 +61,7 @@ The minimum required interfaces are:
 .. note::
 Most of the hardware types should not override this interface.
-* `power` implements power actions for the hardware. These common
+* ``power`` implements power actions for the hardware. These common
 implementations may be used, if supported by the hardware:
 * :py:class:`ironic.drivers.modules.ipmitool.IPMIPower`
@@ -74,7 +74,7 @@ The minimum required interfaces are:
 Power actions in Ironic are blocking - methods of a power interface should
 not return until the power action is finished or errors out.
-* `management` implements additional out-of-band management actions, such as
+* ``management`` implements additional out-of-band management actions, such as
 setting a boot device. A few common implementations exist and may be used,
 if supported by the hardware:


@@ -134,6 +134,6 @@ volume with tempest in the environment::
 Please note that the storage interface will only indicate errors based upon
 the state of the node and the configuration present. As such a node does not
-exclusively have to boot via a remote volume, and as such `validate` actions
-upon nodes may be slightly misleading. If an appropriate `volume target` is
+exclusively have to boot via a remote volume, and as such ``validate`` actions
+upon nodes may be slightly misleading. If an appropriate ``volume target`` is
 defined, no error should be returned for the boot interface.


@@ -5,7 +5,7 @@ Jobs description
 ================
 The description of each jobs that runs in the CI when you submit a patch for
-`openstack/ironic` is visible in :ref:`table_jobs_description`.
+``openstack/ironic`` is visible in :ref:`table_jobs_description`.
 .. _table_jobs_description:
@@ -20,62 +20,62 @@ The description of each jobs that runs in the CI when you submit a patch for
 Python3
 * - ironic-tempest-functional-python3
 - Deploys Ironic in standalone mode and runs tempest functional tests
-that matches the regex `ironic_tempest_plugin.tests.api` under Python3
+that matches the regex ``ironic_tempest_plugin.tests.api`` under Python3
 * - ironic-grenade
 - Deploys Ironic in a DevStack and runs upgrade for all enabled services.
 * - ironic-standalone
 - Deploys Ironic in standalone mode and runs tempest tests that match
-the regex `ironic_standalone`.
+the regex ``ironic_standalone``.
 * - ironic-standalone-redfish
 - Deploys Ironic in standalone mode and runs tempest tests that match
-the regex `ironic_standalone` using the redfish driver.
+the regex ``ironic_standalone`` using the redfish driver.
 * - ironic-tempest-partition-bios-redfish-pxe
 - Deploys Ironic in DevStack, configured to use dib ramdisk partition
-image with `pxe` boot and `redfish` driver.
+image with ``pxe`` boot and ``redfish`` driver.
 Runs tempest tests that match the regex
-`ironic_tempest_plugin.tests.scenario`, also deploys 1 virtual
+``ironic_tempest_plugin.tests.scenario``, also deploys 1 virtual
 baremetal.
 * - ironic-tempest-partition-uefi-redfish-vmedia
 - Deploys Ironic in DevStack, configured to use dib ramdisk partition
-image with `vmedia` boot and `redfish` driver.
+image with ``vmedia`` boot and ``redfish`` driver.
 Runs tempest tests that match the regex
-`ironic_tempest_plugin.tests.scenario`, also deploys 1 virtual
+``ironic_tempest_plugin.tests.scenario``, also deploys 1 virtual
 baremetal.
 * - ironic-tempest-wholedisk-bios-snmp-pxe
 - Deploys Ironic in DevStack, configured to use a pre-built dib
 ramdisk wholedisk image that is downloaded from a Swift temporary url,
-`pxe` boot and `snmp` driver.
+``pxe`` boot and ``snmp`` driver.
 Runs tempest tests that match the regex
-`ironic_tempest_plugin.tests.scenario` and deploys 1 virtual baremetal.
+``ironic_tempest_plugin.tests.scenario`` and deploys 1 virtual baremetal.
 * - ironic-tempest-partition-uefi-ipmi-pxe
 - Deploys Ironic in DevStack, configured to use dib ramdisk, a partition
-image, `pxe` boot in UEFI mode and `ipmi` hardware type.
+image, ``pxe`` boot in UEFI mode and ``ipmi`` hardware type.
 Runs tempest tests that match the regex
-`ironic_tempest_plugin.tests.scenario`, also deploys 1 virtual
+``ironic_tempest_plugin.tests.scenario``, also deploys 1 virtual
 baremetal.
 * - ironic-tempest-ipa-wholedisk-direct-tinyipa-multinode
 - Deploys Ironic in a multinode DevStack, configured to use a pre-build
 tinyipa ramdisk wholedisk image that is downloaded from a Swift
-temporary url, `pxe` boot and `ipmi` driver.
+temporary url, ``pxe`` boot and ``ipmi`` driver.
 Runs tempest tests that match the regex
-`(ironic_tempest_plugin.tests.scenario|test_schedule_to_all_nodes)`
+``(ironic_tempest_plugin.tests.scenario|test_schedule_to_all_nodes)``
 and deploys 7 virtual baremetal.
 * - ironic-tempest-bios-ipmi-direct-tinyipa
 - Deploys Ironic in DevStack, configured to use a pre-build tinyipa
 ramdisk wholedisk image that is downloaded from a Swift temporary url,
-`pxe` boot and `ipmi` driver.
+``pxe`` boot and ``ipmi`` driver.
 Runs tempest tests that match the regex
-`ironic_tempest_plugin.tests.scenario` and deploys 1 virtual baremetal.
+``ironic_tempest_plugin.tests.scenario`` and deploys 1 virtual baremetal.
 * - ironic-tempest-bfv
 - Deploys Ironic in DevStack with cinder enabled, so it can deploy
 baremetal using boot from volume.
-Runs tempest tests that match the regex `baremetal_boot_from_volume`
+Runs tempest tests that match the regex ``baremetal_boot_from_volume``
 and deploys 3 virtual baremetal nodes using boot from volume.
 * - ironic-tempest-ipa-partition-uefi-pxe-grub2
 - Deploys Ironic in DevStack, configured to use pxe with uefi and grub2
-and `ipmi` driver.
+and ``ipmi`` driver.
 Runs tempest tests that match the regex
-`ironic_tempest_plugin.tests.scenario` and deploys 1 virtual baremetal.
+``ironic_tempest_plugin.tests.scenario`` and deploys 1 virtual baremetal.
 * - metalsmith-integration-glance-centos8-legacy
 - Tests the integration between Ironic and Metalsmith using Glance as
 image source and a CentOS 8 image with legacy (BIOS) local boot.
@@ -85,27 +85,27 @@ The description of each jobs that runs in the CI when you submit a patch for
 * - ironic-inspector-tempest
 - Deploys Ironic and Ironic Inspector in DevStack, configured to use a
 pre-build tinyipa ramdisk wholedisk image that is downloaded from a
-Swift temporary url, `pxe` boot and `ipmi` driver.
-Runs tempest tests that match the regex `InspectorBasicTest` and
+Swift temporary url, ``pxe`` boot and ``ipmi`` driver.
+Runs tempest tests that match the regex ``InspectorBasicTest`` and
 deploys 1 virtual baremetal.
 * - ironic-inspector-tempest-managed-non-standalone
 - Deploys Ironic and Ironic Inspector in DevStack, configured to use a
 pre-build tinyipa ramdisk wholedisk image that is downloaded from a
-Swift temporary url, `pxe` boot and `ipmi` driver.
+Swift temporary url, ``pxe`` boot and ``ipmi`` driver.
 Boot is managed by ironic, ironic-inspector runs in non-standalone mode.
-Runs tempest tests that match the regex `InspectorBasicTest` and
+Runs tempest tests that match the regex ``InspectorBasicTest`` and
 deploys 1 virtual baremetal.
 * - ironic-inspector-tempest-partition-bios-redfish-vmedia
 - Deploys Ironic and Ironic Inspector in DevStack, configured to use
-`vmedia` boot and `redfish` driver.
-Runs tempest tests that match the regex `InspectorBasicTest` and
+``vmedia`` boot and ``redfish`` driver.
+Runs tempest tests that match the regex ``InspectorBasicTest`` and
 deploys 1 virtual baremetal.
 * - ironic-tempest-ipa-wholedisk-bios-ipmi-direct-dib
 - Deploys Ironic in DevStack, configured to use a pre-built dib
-ramdisk wholedisk image that is downloaded from http url, `pxe` boot
-and `ipmi` driver.
+ramdisk wholedisk image that is downloaded from http url, ``pxe`` boot
+and ``ipmi`` driver.
 Runs tempest tests that match the regex
-`ironic_tempest_plugin.tests.scenario` and deploys 1 virtual baremetal.
+``ironic_tempest_plugin.tests.scenario`` and deploys 1 virtual baremetal.
 * - bifrost-integration-tinyipa-ubuntu-focal
 - Tests the integration between Ironic and Bifrost using a tinyipa image.
 * - bifrost-integration-redfish-vmedia-uefi-centos-9
@@ -113,7 +113,7 @@ The description of each jobs that runs in the CI when you submit a patch for
 a dib image based on centos stream 9.
 * - ironic-tempest-pxe_ipmitool-postgres
 - Deploys Ironic in DevStack, configured to use tinyipa ramdisk partition
-image with `pxe` boot and `ipmi` driver and postgres instead of mysql.
+image with ``pxe`` boot and ``ipmi`` driver and postgres instead of mysql.
 Runs tempest tests that match the regex
-`ironic_tempest_plugin.tests.scenario`, also deploys 1 virtual
+``ironic_tempest_plugin.tests.scenario``, also deploys 1 virtual
 baremetal.


@@ -349,10 +349,10 @@ This includes:
 typically submit a follow-up patch to do that. An example of this patch is
 `here <https://review.opendev.org/685070>`__.
-* update the `templates` in `.zuul.yaml` or `zuul.d/project.yaml`.
+* update the ``templates`` in ``.zuul.yaml`` or ``zuul.d/project.yaml``.
 The update is necessary to use the job for the next release
-`openstack-python3-<next_release>-jobs`. An example of this patch is
+``openstack-python3-<next_release>-jobs``. An example of this patch is
 `here <https://review.opendev.org/#/c/689705/>`__.
 We need to submit patches for changes in the stable branch to:


@@ -12,7 +12,7 @@ endpoints: A driver vendor passthru and a node vendor passthru.
 * The ``VendorInterface`` allows hardware types to expose a custom top-level
 functionality which is not specific to a Node. For example, let's say
-the driver `ipmi` exposed a method called `authentication_types`
+the driver ``ipmi`` exposed a method called ``authentication_types``
 that would return what are the authentication types supported. It could
 be accessed via the Ironic API like:
@@ -26,9 +26,9 @@ endpoints: A driver vendor passthru and a node vendor passthru.
 This limitation will be lifted in the future.
 * The node vendor passthru allows drivers to expose custom functionality
-on per-node basis. For example the same driver `ipmi` exposing a
-method called `send_raw` that would send raw bytes to the BMC, the method
-also receives a parameter called `raw_bytes` which the value would be
+on per-node basis. For example the same driver ``ipmi`` exposing a
+method called ``send_raw`` that would send raw bytes to the BMC, the method
+also receives a parameter called ``raw_bytes`` which the value would be
 the bytes to be sent. It could be accessed via the Ironic API like:
 ::
@@ -52,22 +52,22 @@ to do is write a class inheriting from the `VendorInterface`_ class:
 def validate(self, task, **kwargs):
 pass
-The `get_properties` is a method that all driver interfaces have, it
+The ``get_properties`` is a method that all driver interfaces have, it
 should return a dictionary of <property>:<description> telling in the
 description whether that property is required or optional so the node
 can be manageable by that driver. For example, a required property for a
-`ipmi` driver would be `ipmi_address` which is the IP address or hostname
+``ipmi`` driver would be ``ipmi_address`` which is the IP address or hostname
 of the node. We are returning an empty dictionary in our example to make
 it simpler.
-The `validate` method is responsible for validating the parameters passed
+The ``validate`` method is responsible for validating the parameters passed
 to the vendor methods. Ironic will not introspect into what is passed
 to the drivers, it's up to the developers writing the vendor method to
 validate that data.
-Let's extend the `ExampleVendor` class to support two methods, the
-`authentication_types` which will be exposed on the driver vendor
-passthru endpoint; And the `send_raw` method that will be exposed on
+Let's extend the ``ExampleVendor`` class to support two methods, the
+``authentication_types`` which will be exposed on the driver vendor
+passthru endpoint; And the ``send_raw`` method that will be exposed on
 the node vendor passthru endpoint:
 .. code-block:: python
@@ -96,15 +96,15 @@ That's it!
 Writing a node or driver vendor passthru method is pretty much the
 same, the only difference is how you decorate the methods and the first
 parameter of the method (ignoring self). A method decorated with the
-`@passthru` decorator should expect a Task object as first parameter and
-a method decorated with the `@driver_passthru` decorator should expect
+``@passthru`` decorator should expect a Task object as first parameter and
+a method decorated with the ``@driver_passthru`` decorator should expect
 a Context object as first parameter.
 Both decorators accept these parameters:
 * http_methods: A list of what the HTTP methods supported by that vendor
 function. To know what HTTP method that function was invoked with, a
-`http_method` parameter will be present in the `kwargs`. Supported HTTP
+``http_method`` parameter will be present in the ``kwargs``. Supported HTTP
 methods are *POST*, *PUT*, *GET* and *PATCH*.
 * method: By default the method name is the name of the python function,
@@ -127,7 +127,7 @@ Both decorators accept these parameters:
 .. note:: This parameter was previously called "async".
-The node vendor passthru decorator (`@passthru`) also accepts the following
+The node vendor passthru decorator (``@passthru``) also accepts the following
 parameter:
 * require_exclusive_lock: A boolean value determining whether this method


@@ -40,7 +40,7 @@ Design Goals - Graphical User Interface
 * While a graphical interface was developed for Horizon in the form of
 `ironic-ui <https://git.openstack.org/cgit/openstack/ironic-ui>`_,
 currently ironic-ui receives only minimal housekeeping.
-As Ironic has evolved, ironic-ui is stuck on version `1.34` and knows
+As Ironic has evolved, ironic-ui is stuck on version ``1.34`` and knows
 nothing of our evolution since. Ironic ultimately needs a contributor
 with sufficient time to pick up ``ironic-ui`` or to completely
 replace it as a functional and customizable user interface.


@@ -8,7 +8,7 @@ Configure the Bare Metal service for cleaning
 (which is enabled by default), you will need to set the
 ``cleaning_network`` configuration option.
-#. Note the network UUID (the `id` field) of the network you created in
+#. Note the network UUID (the ``id`` field) of the network you created in
 :ref:`configure-networking` or another network you created for cleaning:
 .. code-block:: console


@@ -46,17 +46,17 @@ Provisioning with IPv6 stateful addressing
 ------------------------------------------
 When using stateful addressing DHCPv6 is providing both addresses and other
-configuration via DHCPv6 options such as the bootfile-url and bootfile-
-parameters.
+configuration via DHCPv6 options such as the bootfile-url and
+bootfile-parameters.
 The "identity-association" (IA) construct used by DHCPv6 is challenging when
 booting over the network. Firmware, and ramdisks typically end up using
-different DUID/IAID combinations and it is not always possible for one chain-
-booting stage to release its address before giving control to the next step. In
-case the DHCPv6 server is configured with static reservations only the result is
-that booting will fail because the DHCPv6 server has no addresses available. To
-get past this issue either configure the DHCPv6 server with multiple address
-reservations for each host, or use a dynamic range.
+different DUID/IAID combinations and it is not always possible for one
+chain-booting stage to release its address before giving control to the next
+step. In case the DHCPv6 server is configured with static reservations only
+the result is that booting will fail because the DHCPv6 server has no
+addresses available. To get past this issue either configure the DHCPv6 server
+with multiple address reservations for each host, or use a dynamic range.
 .. Note:: Support for multiple address reservations requires dnsmasq version
 2.81 or later. Some distributions may backport this feature to


@@ -61,7 +61,7 @@ provisioning will happen in a multi-tenant environment (which means using the
 .. note::
 If these ``provisioning_network`` and ``cleaning_network`` values are
-not specified in node's `driver_info` then ironic falls back to the
+not specified in node's ``driver_info`` then ironic falls back to the
 configuration in the ``neutron`` section.
 Please refer to :doc:`configure-cleaning` for more information about


@@ -30,8 +30,8 @@ You should make the following changes to ``/etc/ironic/ironic.conf``:
 auth_strategy=http_basic
 http_basic_auth_user_file=/etc/ironic/htpasswd
-Only the ``bcrypt`` format is supported, and the Apache `htpasswd` utility can
-be used to populate the file with entries, for example:
+Only the ``bcrypt`` format is supported, and the Apache ``htpasswd``
+utility can be used to populate the file with entries, for example:
 .. code-block:: shell


@@ -309,7 +309,7 @@ command, for example:
 Building a config drive on the conductor side
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-Starting with the Stein release and `ironicclient` 2.7.0, you can request
+Starting with the Stein release and ``ironicclient`` 2.7.0, you can request
 building a configdrive on the server side by providing a JSON with keys
 ``meta_data``, ``user_data`` and ``network_data`` (all optional), e.g.:


@@ -113,7 +113,7 @@ Known issues
 that use these deploy classes, an error will be thrown during
 deployment. There is a simple fix. For drivers that expect these deploy
 classes to handle PXE booting, one can add the following code to the driver's
-`__init__` method::
+``__init__`` method::
 from ironic.drivers.modules import pxe
@@ -133,8 +133,8 @@ Known issues
 # ...
 self.boot = fake.FakeBoot()
-Additionally, as mentioned before, `ironic.drivers.modules.pxe.PXEDeploy`
-has moved to `ironic.drivers.modules.iscsi_deploy.ISCSIDeploy`, which will
+Additionally, as mentioned before, ``ironic.drivers.modules.pxe.PXEDeploy``
+has moved to ``ironic.drivers.modules.iscsi_deploy.ISCSIDeploy``, which will
 break drivers that use this class.
 The Ironic team apologizes profusely for this inconvenience.