Airship V2 baremetal site deployment, provider integration and glossary
Closes: #9 Signed-off-by: James Gu <james.gu@att.com> Change-Id: Iad56ebee975edbf315a6031fd6c3018b5d022357
This commit is contained in:
parent 6ee51e9ca5
commit 362939f370
82
doc/source/airship2/baremetal.rst
Normal file
@@ -0,0 +1,82 @@
..
   Copyright 2020-2021 The Airship authors.
   All Rights Reserved.

   Licensed under the Apache License, Version 2.0 (the "License"); you may
   not use this file except in compliance with the License. You may obtain
   a copy of the License at

      http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
   WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
   License for the specific language governing permissions and limitations
   under the License.

Deploying A Bare Metal Cluster
==============================

The instructions for standing up a greenfield bare metal site can be broken
down into three high-level activities:

1. :ref:`site_setup_guide`: Covers the hardware and network requirements
   and configuration, and the instructions to set up the build environment.
2. :ref:`site_authoring_guide`: Describes how to craft the site manifests and
   configs required for a site deployment performed by Airship.
3. :ref:`site_deployment_guide`: Describes how to deploy the site utilizing
   the manifests created as per the :ref:`site_authoring_guide`.

.. toctree::
   :hidden:
   :maxdepth: 1

   site-setup.rst
   site-authoring.rst
   site-deployment.rst

Support
-------

Bugs may be viewed and reported using GitHub issues for specific projects in
the `Airship group <https://github.com/airshipit>`__:

- `Airship airshipctl <https://github.com/airshipit/airshipctl/issues>`__
- `Airship charts <https://github.com/airshipit/charts/issues>`__
- `Airship hostconfig-operator <https://github.com/airshipit/hostconfig-operator/issues>`__
- `Airship images <https://github.com/airshipit/images/issues>`__
- `Airship sip <https://github.com/airshipit/sip/issues>`__
- `Airship treasuremap <https://github.com/airshipit/treasuremap/issues>`__
- `Airship vino <https://github.com/airshipit/vino/issues>`__

Terminology
-----------

Please refer to :ref:`glossary` for the terminology used in this document.

.. _versioning:

Versioning
----------

This document requires Airship Treasuremap and Airshipctl release version
v2.0.0 or newer.

Airship Treasuremap reference manifests are delivered periodically as release
tags in the `Treasuremap Releases`_.

Airshipctl manifests can be found as release tags in the `Airshipctl Releases`_.

.. _Airshipctl Releases: https://github.com/airshipit/airshipctl/releases
.. _Treasuremap Releases: https://github.com/airshipit/treasuremap/releases

.. note:: The releases are verified by the `Airship in Baremetal Environment`_
   and `Airship in Virtualized Environment`_ pipelines before delivery and are
   recommended for deployments instead of using the master branch directly.

.. _Airship in Baremetal Environment:
   https://jenkins.nc.opensource.att.com/job/Deployment/job/stl3-type-core
.. _Airship in Virtualized Environment:
   https://jenkins.nc.opensource.att.com/job/development/job/Airshipctl
@@ -1,21 +0,0 @@
..
   Copyright 2020-2021 The Airship authors.
   All Rights Reserved.

   Licensed under the Apache License, Version 2.0 (the "License"); you may
   not use this file except in compliance with the License. You may obtain
   a copy of the License at

      http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
   WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
   License for the specific language governing permissions and limitations
   under the License.

Deploying A Bare Metal Cluster
==============================

Coming soon.
53
doc/source/airship2/providers.rst
Normal file
@@ -0,0 +1,53 @@
..
   Copyright 2020-2021 The Airship authors.
   All Rights Reserved.

   Licensed under the Apache License, Version 2.0 (the "License"); you may
   not use this file except in compliance with the License. You may obtain
   a copy of the License at

      http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
   WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
   License for the specific language governing permissions and limitations
   under the License.

Integration with Cluster API Providers
======================================

The `Cluster-API`_ (CAPI) is a Kubernetes project that brings declarative,
Kubernetes-style APIs to cluster creation, configuration, and management. By
leveraging the Cluster-API for cloud provisioning, Airship takes advantage
of upstream efforts to build Kubernetes clusters and manage their lifecycle.
Most importantly, we can leverage a number of ``CAPI`` providers that already
exist. This allows Airship deployments to target both public and private
clouds, such as Azure, AWS, and OpenStack.

The Site Authoring Guide and Deployment Guide in this document focus on
deployment on bare metal infrastructure, where Airship utilizes the
``Metal3-IO`` Cluster API Provider for Managed Bare Metal Hardware (CAPM3) and
the Cluster API Bootstrap Provider Kubeadm (CABPK).

There are also Cluster-API providers that support Kubernetes deployments on
top of already provisioned infrastructure, enabling Bring Your Own bare metal
use cases as well. Here is a list of references on how to use Airshipctl to
create a Cluster API management cluster and workload clusters on various
infrastructure providers:

* `Airshipctl and Cluster API Docker Integration`_
* `Airshipctl and Cluster API Openstack Integration`_
* `Airshipctl and Cluster API GCP Provider Integration`_
* `Airshipctl and Azure Cloud Platform Integration`_

.. _Cluster-API:
   https://github.com/kubernetes-sigs/cluster-api
.. _Airshipctl and Cluster API Docker Integration:
   https://docs.airshipit.org/airshipctl/providers/cluster_api_docker.html
.. _Airshipctl and Cluster API Openstack Integration:
   https://docs.airshipit.org/airshipctl/providers/cluster_api_openstack.html
.. _Airshipctl and Cluster API GCP Provider Integration:
   https://docs.airshipit.org/airshipctl/providers/cluster_api_gcp.html
.. _Airshipctl and Azure Cloud Platform Integration:
   https://docs.airshipit.org/airshipctl/providers/cluster_api_azure.html
210
doc/source/airship2/site-authoring.rst
Normal file
@@ -0,0 +1,210 @@
.. _site_authoring_guide:

Site Authoring Guide
====================

This guide describes the steps to create the site documents needed by Airship
to deploy a standard greenfield bare metal deployment according to your
specific environment. In the form of YAML comments, the
``reference-airship-core`` site manifests in the Treasuremap git repository
contain the tags and descriptions of the required site-specific information
that the user must provide.

Airship Layering Approach
~~~~~~~~~~~~~~~~~~~~~~~~~

Following the DRY (Don't Repeat Yourself) principle, Airship uses four
conceptual layer types to drive consistency and reusability:

* Function: An atomic, independent building block of a Kubernetes workload,
  e.g., a HelmRelease, Calico.
* Composite: A logical group of multiple Functions integrated and configured
  for a common purpose. Typical examples include OpenStack or interrelated
  logging and monitoring components. Composites can also pull in other
  Composites to further customize their configurations.
* Type: A prototypical deployment plan that represents a typical use case,
  e.g., network cloud, CI/CD pipeline, basic K8S deployment. A Type defines
  the collection of Composites into a deployable stack, including control
  plane, workload, and host definitions. A Type also defines the Phases of
  deployment, which serve to sequence a deployment (or upgrade) into stages
  that must be performed sequentially, e.g., the Kubernetes cluster must be
  created before a software workload is deployed. A Type can inherit from
  another Type.
* Site: A Site is a realization of exactly one Type, and defines (only) the
  site-specific configurations necessary to deploy the Type to a specific
  place.

To learn more about the Airship layering design and mechanism, it is highly
recommended to read :ref:`layering-and-deduplication`.
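In the Treasuremap repository, these four layers map onto the manifests
directory tree roughly as follows (a simplified sketch; the actual tree
contains more entries):

.. code-block:: text

   manifests/
   ├── function/    # atomic building blocks (e.g., a HelmRelease)
   ├── composite/   # Functions grouped for a common purpose
   ├── type/        # deployable stacks and their deployment phases
   └── site/        # site-specific realization of exactly one Type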
.. _init_site:

Initializing New Site
~~~~~~~~~~~~~~~~~~~~~

It is a complex and tedious task to create a new site from scratch.
Therefore, it is strongly recommended that the user create the new site based
on a reference site. The reference site can be a site that has already been
created and deployed by the user, or an example site in the Treasuremap git
repository.

The `reference-airship-core`_ site from Treasuremap may be used for this
purpose. It is the principal pipeline for integration and continuous
deployment testing of Airship on bare metal.

To create a new site definition from the ``reference-airship-core`` site, the
following steps are required:

1. Clone the ``treasuremap`` repository at the specified reference in the
   Airship home directory.
2. Create a project side-by-side with the ``airshipctl`` and ``treasuremap``
   directories.
3. Copy the reference site manifests to ``${PROJECT}/manifests/site/${SITE}``.
4. Update the site's ``metadata.yaml`` appropriately.
5. Create and update the Airship config file for the site in
   ``${HOME}/.airship/config``.

Airshipctl provides a tool, ``init_site.sh``, that automates the above site
creation tasks.

.. code-block:: bash

   export AIRSHIPCTL_REF_TYPE=tag  # type can be "tag", "branch" or "commithash"
   export AIRSHIPCTL_REF=v2.0.0    # update with the git ref you want to use
   export TREASUREMAP_REF_TYPE=tag # type can be "tag", "branch" or "commithash"
   export TREASUREMAP_REF=v2.0.0   # update with the git ref you want to use
   export REFERENCE_SITE=../treasuremap/manifests/site/reference-airship-core
   export REFERENCE_TYPE=airship-core # the manifest type the reference site uses

   ./tools/init_site.sh

.. note::
   The environment variables have default values that point to the airshipctl
   release tag v2.0.0 and the treasuremap release tag v2.0.0. You only need
   to (re)set them on the command line if you want a different release
   version, a branch, or a specific commit.

To find the Airship release versions and tags, go to :ref:`versioning`. In
addition to a release tag, the user can also specify a branch (e.g., v2.0)
or a specific commit ID when checking out the ``treasuremap`` or
``airshipctl`` repository.

.. _reference-airship-core:
   https://github.com/airshipit/treasuremap/tree/v2.0/manifests/site/reference-airship-core
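The layout produced by the steps above can be sketched with plain shell
commands. The project and site names below are hypothetical, and a throwaway
directory is used so the sketch is safe to run anywhere:

.. code-block:: bash

   # Illustrative only: mimic the directory layout init_site.sh creates.
   # "my-project" and "my-site" are hypothetical names.
   WORKDIR=$(mktemp -d)
   cd "$WORKDIR"
   mkdir -p airshipctl treasuremap            # normally created by git clone
   mkdir -p my-project/manifests/site/my-site
   # init_site.sh then copies the reference site into the new site directory:
   # cp -r treasuremap/manifests/site/reference-airship-core/. \
   #       my-project/manifests/site/my-site/
   find . -mindepth 1 -type d | sort

The real script also rewrites the site's ``metadata.yaml`` and seeds
``${HOME}/.airship/config``; the sketch stops at the directory layout.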
Preparing Deployment Documents
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

After the new site manifests are initialized, you will need to manually make
changes to these files. The site manifests are heavily commented to identify
and explain the parameters that need to change when authoring a new site.

The areas that must be updated for a new site are flagged with the label
``NEWSITE-CHANGEME`` in YAML comments. Search for all instances of
``NEWSITE-CHANGEME`` in your new site definition, then follow the
instructions that accompany the tag to make all the changes needed to author
your new Airship site.

Because some files depend on (or may repeat) information from others, the
order in which you should build your site files is as follows.

.. note::

   A helpful practice is to replace the tag ``NEWSITE-CHANGEME`` with
   ``NEWSITE-CHANGED`` as each site-specific value is entered. You can run a
   global search on ``NEWSITE-CHANGEME`` at the end to check whether any site
   fields were missed.
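That final check is a one-line ``grep``. The sketch below runs it against a
throwaway file so it can be tried anywhere; in practice the search would point
at ``manifests/site/${SITE}`` instead:

.. code-block:: bash

   # Create a throwaway "site" directory with one unfinished value in it.
   SITE_DIR=$(mktemp -d)
   printf 'bmcAddress: changeme # NEWSITE-CHANGEME\n' > "$SITE_DIR/hosts.yaml"

   # The actual check: list files that still contain the tag.
   grep -rl 'NEWSITE-CHANGEME' "$SITE_DIR"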
Network
+++++++

Before you start, collect the following network information:

* PXE network interface name
* The names of the two 25G networks used for the bonded interface
* OAM, Calico, and Storage VLAN IDs
* OAM, Calico, and Storage network configuration
* PXE, OAM, Calico, and Storage IP addresses for the ephemeral/controller
  nodes and worker nodes
* Kubernetes and ingress virtual IP addresses (on OAM)
* DNS servers
* NTP servers

First, define the target and ephemeral networking catalogues.

* ``manifests/site/${SITE}/target/catalogues/networking.yaml``:
  Contains the network definitions for the entire system.
* ``manifests/site/${SITE}/target/catalogues/networking-ha.yaml``:
  Defines the Kubernetes and ingress virtual IP addresses as well as the
  OAM interface.
* ``manifests/site/${SITE}/ephemeral/catalogues/networking.yaml``:
  Provides only the overrides specific to the ephemeral nodes.

Last, update the network references (e.g., interface name, IP address, port)
in the target cluster deployment documents:

* ``manifests/site/${SITE}/phases/phase-patch.yaml``
* ``manifests/site/${SITE}/target/catalogs/versions-airshipctl.yaml``
* ``manifests/site/${SITE}/target/controlplane/metal3machinetemplate.yaml``
* ``manifests/site/${SITE}/target/controlplane/versions-catalogue-patch.yaml``
* ``manifests/site/${SITE}/target/initinfra-networking/patch_calico.yaml``
* ``manifests/site/${SITE}/target/workers/metal3machinetemplate.yaml``
* ``manifests/site/${SITE}/target/workers/provision/metal3machinetemplate.yaml``
* ``manifests/site/${SITE}/target/network-policies/calico_failsafe_rules_patch.yaml``
Host Inventory
++++++++++++++

Host inventory configuration requires the following information for each
server:

* host name
* BMC address
* BMC user and password
* PXE NIC MAC address
* OAM, Calico, PXE, and storage IP addresses

Update the host inventory and other ephemeral and target cluster documents:

* ``manifests/site/${SITE}/host-inventory/hostgenerator/host-generation.yaml``:
  Lists the host names of all the nodes in the host inventory.
* ``manifests/site/${SITE}/target/catalogues/hosts.yaml``: The host catalogue
  defines the host information, such as BMC address, credentials, PXE NIC,
  IP addresses, and hardware profile name, for every single host.
* ``manifests/site/${SITE}/ephemeral/bootstrap/baremetalhost.yaml``:
  Contains the host name of the ephemeral bare metal host.
* ``manifests/site/${SITE}/ephemeral/bootstrap/hostgenerator/host-generation.yaml``:
  Defines the single host in the ephemeral cluster.
* ``manifests/site/${SITE}/ephemeral/controlplane/hostgenerator/host-generation.yaml``:
  Defines the host name of the first controller node to bootstrap in the
  target cluster.
* ``manifests/site/${SITE}/phases/phase-patch.yaml``: Updates the ephemeral
  node host name and ISO URL.
* ``manifests/site/${SITE}/target/controlplane/hostgenerator/host-generation.yaml``:
  Defines the list of hosts to be deployed in the target cluster.
* ``manifests/site/${SITE}/target/workers/hostgenerator/host-generation.yaml``:
  Defines the list of hosts for the worker nodes.
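To give a feel for the information being gathered, here is a hypothetical
fragment in the spirit of a host catalogue entry. The field names and values
are illustrative only, not the authoritative schema; the ``hosts.yaml`` copied
from the reference site documents the exact schema in its comments:

.. code-block:: yaml

   # Illustrative only -- follow the comments in the reference hosts.yaml.
   hosts:
     node01:
       bmcAddress: redfish+https://10.23.25.1/redfish/v1/Systems/node01
       bmcUsername: admin            # NEWSITE-CHANGEME
       bmcPassword: password         # NEWSITE-CHANGEME
       macAddress: 52:54:00:aa:bb:cc # PXE NIC
       ipAddresses:
         oam: 10.23.25.101
         pxe: 10.23.24.101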
Downstream Images and Binaries
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

For a production environment, access to external resources such as
``quay.io`` or various ``go`` packages may not be available, or further
customized security hardening may be required in the images.

In those cases, the operator will need to host their pre-built images or
binaries in a downstream repository or artifactory. The manifests specifying
image locations for the Kustomize plugins (e.g., replacement-transformer,
templater, sops) will need to be updated prior to running airshipctl
commands.

Here is an example ``sed`` command on the cloned airshipctl and treasuremap
manifests for updating the image locations:

.. code-block:: bash

   find ./airshipctl/manifests/ ./treasuremap/manifests/ -name "*.yaml" -type f -readable -writable -exec sed -i \
     -e "s,gcr.io/kpt-fn-contrib/sops:v0.1.0,docker-artifacts.my-telco.com/upstream-local/kpt-fn-contrib/sops:v0.1.0,g" \
     -e "s,quay.io/airshipit/templater:latest,docker-artifacts.my-telco.com/upstream-local/airshipit/templater:latest,g" \
     -e "s,quay.io/airshipit/replacement-transformer:latest,docker-artifacts.my-telco.com/upstream-local/airshipit/replacement-transformer:latest,g" {} +
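Before rewriting manifests in place, it can be worth rehearsing the
substitution on a throwaway file. A minimal sketch, using the same
illustrative registry names as above:

.. code-block:: bash

   # Rehearse the image-location rewrite on a scratch file first.
   TMP=$(mktemp)
   echo 'image: quay.io/airshipit/templater:latest' > "$TMP"
   sed -i -e 's,quay.io/airshipit/templater:latest,docker-artifacts.my-telco.com/upstream-local/airshipit/templater:latest,g' "$TMP"
   cat "$TMP"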
Now the manifests for the new site are ready for deployment.
430
doc/source/airship2/site-deployment.rst
Normal file
@@ -0,0 +1,430 @@
.. _site_deployment_guide:

Site Deployment Guide
=====================

This document is the Airship 2 site deployment guide for a standard
greenfield bare metal deployment. The following sections describe how to
apply the site manifests for a given site.

Prerequisites
~~~~~~~~~~~~~

Before starting, ensure that you have completed the :ref:`system requirements
and setup <site_setup_guide>`, including the BIOS and Redfish settings,
hardware RAID configuration, etc.

Airshipctl Phases
~~~~~~~~~~~~~~~~~

A new concept with Airship 2 is :term:`phases<Phase>`. A phase is a step to
be performed in order to achieve a desired state of the managed site. Phases
group the commands for a particular deployment step into a single phase run
command, which greatly simplifies executing a deployment.

The Airship 2 deployment relies heavily on the ``airshipctl`` commands,
especially ``airshipctl phase run``. You may find it helpful to get
familiarized with the `airshipctl command reference`_ and `example usage`_.

To facilitate the site deployment, the Airship Treasuremap project provides a
set of deployment scripts in the ``tools/deployment`` directory. These
scripts are wrappers of the ``airshipctl`` commands with additional flow
controls, and they are numbered sequentially in the order of the deployment
operations.

The instructions in this document are based upon the Treasuremap deployment
scripts.

.. _airshipctl command reference:
   https://docs.airshipit.org/airshipctl/cli/airshipctl.html
.. _example usage:
   https://docs.airshipit.org/airshipctl/architecture.html#example-usage
Environment Variables
~~~~~~~~~~~~~~~~~~~~~

The deployment steps below use a few additional environment variables that
are already configured with default values for a typical deployment or are
inferred from other configuration files or site manifests. In most
situations, users do not need to manually set the values of these
environment variables.

* ``KUBECONFIG``: The location of the kubeconfig file. Default value:
  ``$HOME/.airship/kubeconfig``.
* ``KUBECONFIG_TARGET_CONTEXT``: The name of the kubeconfig context for the
  target cluster. Default value: "target-cluster". You can find it defined
  in the Airshipctl configuration file.
* ``KUBECONFIG_EPHEMERAL_CONTEXT``: The name of the kubeconfig context for
  the ephemeral cluster. Default value: "ephemeral-cluster". You can find it
  defined in the Airshipctl configuration file.
* ``TARGET_IP``: The control plane endpoint IP or host name. Default value:
  derived from the site documents for the ``controlplane-target`` phase. You
  can run the following command to extract the defined value:

  .. code-block:: bash

     airshipctl phase render controlplane-target \
       -k Metal3Cluster -l airshipit.org/stage=initinfra \
       2> /dev/null | yq .spec.controlPlaneEndpoint.host | sed 's/"//g'

* ``TARGET_PORT``: The control plane endpoint port number. Default value:
  derived from the site documents for the ``controlplane-target`` phase. You
  can run the following command to extract the defined value:

  .. code-block:: bash

     airshipctl phase render controlplane-target \
       -k Metal3Cluster -l airshipit.org/stage=initinfra 2> /dev/null | \
       yq .spec.controlPlaneEndpoint.port

* ``TARGET_NODE``: The host name of the first controller node. Default
  value: derived from the site documents for the ``controlplane-ephemeral``
  phase. You can run the following command to extract the defined value:

  .. code-block:: bash

     airshipctl phase render controlplane-ephemeral \
       -k BareMetalHost -l airshipit.org/k8s-role=controlplane-host 2> /dev/null | \
       yq .metadata.name | sed 's/"//g'

* ``WORKER_NODE``: The host name of the worker nodes. Default value: derived
  from the site documents for the ``workers-target`` phase. You can run the
  following command to extract the defined value:

  .. code-block:: bash

     airshipctl phase render workers-target -k BareMetalHost 2> /dev/null | \
       yq .metadata.name | sed 's/"//g'
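The render-and-extract pattern above can be tried without a live site by
feeding a canned document through a similar pipeline. The manifest fragment
and values below are illustrative, and ``sed`` stands in for ``yq`` so the
sketch has no extra dependency:

.. code-block:: bash

   # Stand-in for "airshipctl phase render ..." output (illustrative values).
   RENDERED=$(mktemp)
   cat > "$RENDERED" <<'EOF'
   spec:
     controlPlaneEndpoint:
       host: 10.23.25.102
       port: 6443
   EOF
   # Extract the endpoint host and port from the rendered document.
   TARGET_IP=$(sed -n 's/^ *host: //p' "$RENDERED")
   TARGET_PORT=$(sed -n 's/^ *port: //p' "$RENDERED")
   echo "$TARGET_IP:$TARGET_PORT"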
Configuring Airshipctl
~~~~~~~~~~~~~~~~~~~~~~

Airship requires a configuration file set that defines the intentions for the
site that needs to be created. These configurations include such items as
manifest repositories, ephemeral and target cluster contexts, and bootstrap
information. The operator seeds an initial configuration using the
configuration initialization function.

The default locations of the configuration files are
``$HOME/.airship/config`` and ``$HOME/.airship/kubeconfig``.

If you ran the init_site script in the :ref:`init_site` section, the
``.airship/config`` file has already been created for you.

.. warning::
   If the Redfish API uses a self-signed certificate, the user must run:

   .. code-block:: bash

      airshipctl config set-management-config default --insecure

   This will inject the ``insecure`` flag into the Airship configuration
   file as follows:

   .. code-block:: yaml

      managementConfiguration:
        default:
          insecure: true
          systemActionRetries: 30
          systemRebootDelay: 30
          type: redfish

Now let's create the ``.airship/kubeconfig``. If you plan to use an existing
external kubeconfig file, run:

.. code-block:: bash

   airshipctl config import <KUBE_CONFIG>

Otherwise, create an empty kubeconfig that will be populated later by
airshipctl:

.. code-block:: bash

   touch ~/.airship/kubeconfig

More advanced users can use the Airshipctl config commands to generate or
update the configuration files.

To generate an Airshipctl config file from scratch:

.. code-block:: bash

   airshipctl config init [flags]

To specify the location of the manifest repository:

.. code-block:: bash

   airshipctl config set-manifest <MANIFEST_NAME> [flags]

To create or modify a context in the airshipctl config files:

.. code-block:: bash

   airshipctl config set-context <CONTEXT_NAME> --manifest <MANIFEST_NAME> [flags]

Full details on the ``config`` command can be found here_.

.. _here: https://docs.airshipit.org/airshipctl/cli/airshipctl_config.html
.. _gen_secrets:

Generating and Encrypting Secrets
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Airship site manifests contain different types of secrets, such as
passwords, keys, and certificates, in the variable catalogues. Externally
provided secrets, such as BMC credentials, are used by Airship and
Kubernetes and can also be used by other systems. Secrets can also be
internally generated by Airshipctl, e.g., the OpenStack Keystone password,
which no external systems will provide or need.

To have Airshipctl generate and encrypt the secrets, run the following
script from the treasuremap directory:

.. code-block:: bash

   ./tools/deployment/23_generate_secrets.sh

The generated secrets will be updated in:

* ``${PROJECT}/manifests/site/${SITE}/target/generator/results/generated/secrets.yaml``
* ``${HOME}/.airship/kubeconfig.yaml``

It is recommended that you save the generated results, for example, by
committing them to a git repository along with the rest of the site
manifests.

To update the secrets for an already deployed site, you can re-run this
script and apply the new secret manifests by re-deploying the whole site.

For more details and troubleshooting, please refer to the
`Secrets generation and encryption how-to-guide <https://github.com/airshipit/airshipctl/blob/master/docs/source/secrets-guidelines.md>`_.
Validating Documents
~~~~~~~~~~~~~~~~~~~~

After the constituent YAML configurations are finalized, use the document
validation tool to lint and check the new site manifests. Resolve any
issues that result from the validation before proceeding.

.. code-block:: bash

   ./tools/validate_docs

.. caution::

   The validate_docs tool will run validation against all sites found in the
   ``manifests/site`` folder. You may want to (temporarily) remove other
   sites that are not to be deployed to speed up the validation.

To validate a single site's manifests:

.. code-block:: bash

   export MANIFEST_ROOT=./${PROJECT}/manifests
   export SITE_ROOT=./${PROJECT}/manifests/site
   cd airshipctl && ./tools/document/validate_site_docs.sh

Estimated runtime: **5 minutes**
Building Ephemeral ISO Image
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The goal of this step is to generate a custom targeted image for
bootstrapping an ephemeral host with a Kubernetes cluster installed. This
image may then be published to a repository to which the ephemeral host will
have remote access. Alternatively, an appropriate media delivery mechanism
(e.g., USB) can be used to bootstrap the ephemeral host manually.

.. note:: The generated ISO image content includes:

   - Host OS Image
   - Runtime engine: Docker/containerd
   - Kubelet
   - Kubeadm
   - YAML file for KubeadmConfig

First, create an output directory for the ephemeral ISO image and run the
``bootstrap-iso`` phase:

.. code-block:: bash

   sudo mkdir /srv/images
   airshipctl phase run bootstrap-iso

Or, run the provided script from the treasuremap directory:

.. code-block:: bash

   ./tools/deployment/24_build_images.sh

Then, copy the generated ephemeral ISO image to the Web hosting server that
will serve it. The URL for the image should match what is defined in
``manifests/site/${SITE}/ephemeral/bootstrap/remote_direct_configuration.yaml``.

For example, if you have installed the Apache Web server on the jump host as
described in the earlier step, you can simply execute the following:

.. code-block:: bash

   sudo cp /srv/images/ephemeral.iso /var/www/html/

Estimated runtime: **5 minutes**
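It is worth confirming that the URL in the remote-direct configuration
matches the location you just copied the image to. A minimal sketch against a
canned file (the key name, addresses, and URL are illustrative; in practice,
grep the real site manifest and check the real web server):

.. code-block:: bash

   # Sanity-check sketch: compare an ISO URL in a canned config against the
   # expected web-server location. Values are illustrative only.
   CFG=$(mktemp)
   cat > "$CFG" <<'EOF'
   isoURL: http://10.23.24.101/ephemeral.iso
   EOF
   EXPECTED=http://10.23.24.101/ephemeral.iso
   ACTUAL=$(sed -n 's/^isoURL: //p' "$CFG")
   [ "$ACTUAL" = "$EXPECTED" ] && echo "ISO URL matches"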
Deploying Ephemeral Node
~~~~~~~~~~~~~~~~~~~~~~~~

In this step, we will create an ephemeral Kubernetes instance that
``airshipctl`` can communicate with for subsequent steps. This ephemeral
host provides a foothold in the target environment so the standard
``cluster-api`` bootstrap flow can be executed.

First, let's deploy the ephemeral node via Redfish with the ephemeral ISO
image generated in the previous step:

.. code-block:: bash

   ./tools/deployment/25_deploy_ephemeral_node.sh

Estimated runtime: **10 minutes**

.. note:: If desired, or if Redfish is not available, the ISO image can be
   mounted through other means, e.g., out-of-band management or a USB drive.

Now that the ephemeral node is established, we can deploy ``Calico``,
``metal3.io``, and ``cluster-api`` components onto the ephemeral node:

.. code-block:: bash

   ./tools/deployment/26_deploy_capi_ephemeral_node.sh

Estimated runtime: **10 minutes**

To use ssh to access the ephemeral node, you will need the OAM IP from the
networking catalogue, and the user name and password from the output of the
following airshipctl phase render command:

.. code-block:: bash

   airshipctl phase render iso-cloud-init-data
Deploying Target Cluster
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Now you are ready to use the ephemeral Kubernetes to provision the first target
|
||||
cluster node using the cluster-api bootstrap flow.
|
||||
|
||||
Create the target Kubernetes cluster resources:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
./tools/deployment/30_deploy_controlplane.sh
|
||||
|
||||
Estimated runtime: **25 minutes**

Deploy infrastructure components, including Calico and metal3.io:

.. code-block:: bash

    ./tools/deployment/31_deploy_initinfra_target_node.sh

Estimated runtime: **10 minutes**

Deploy ``cluster-api`` components to the target cluster:

.. code-block:: bash

    ./tools/deployment/32_cluster_init_target_node.sh

Estimated runtime: **1-2 minutes**

Then, stop the ephemeral host and move the Cluster objects to the target
cluster:

.. code-block:: bash

    ./tools/deployment/33_cluster_move_target_node.sh

Estimated runtime: **1-2 minutes**

Lastly, complete the target cluster by provisioning the rest of the controller
nodes:

.. code-block:: bash

    ./tools/deployment/34_deploy_controlplane_target.sh

Estimated runtime: **30 minutes** (depends on the number of controller nodes).

Provisioning Worker Nodes
~~~~~~~~~~~~~~~~~~~~~~~~~

This step uses the target control plane Kubernetes host to provision the
target cluster worker nodes and apply the necessary phases to deploy software
on the worker nodes.

To deploy, classify, and provision the worker nodes, run:

.. code-block:: bash

    ./tools/deployment/35_deploy_worker_node.sh

Estimated runtime: **20 minutes**

Now the target cluster is fully operational and ready for workload deployment.

Deploying Workloads
~~~~~~~~~~~~~~~~~~~

The Treasuremap ``airship-core`` type deploys ingress as a workload. The
user can add other workload functions to the target workload phase in the
``airship-core`` type, or create their own workload phase from scratch.

Adding a workload function involves two tasks. First, the user creates the
function manifest(s) in the ``$PROJECT/manifests/function`` directory. A good
example can be found in the `ingress`_ function from Treasuremap. Second, the
user overrides the `kustomization`_ of the target workload phase to include
the new workload function in
``$PROJECT/manifests/site/$SITE/target/workload/kustomization.yaml``.

For more details, see the `Kustomize`_ and airshipctl `phases`_
documentation.

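As a sketch of the second task (the ``demo-site`` site name and the
``my-workload`` function are hypothetical placeholders), the site-level
override simply lists the workload functions as Kustomize resources:

```shell
# Sketch only: write a site-level workload kustomization that pulls in
# the ingress function plus a hypothetical custom function "my-workload".
mkdir -p manifests/site/demo-site/target/workload
cat > manifests/site/demo-site/target/workload/kustomization.yaml <<'EOF'
resources:
  - ../../../../function/ingress
  - ../../../../function/my-workload
EOF
cat manifests/site/demo-site/target/workload/kustomization.yaml
```

Kustomize then merges the referenced function manifests into the rendered
document set for the target workload phase.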

.. _ingress: https://github.com/airshipit/treasuremap/tree/v2.0/manifests/function/ingress
.. _kustomization: https://github.com/airshipit/treasuremap/blob/v2.0/manifests/type/airship-core/target/workload/kustomization.yaml
.. _Kustomize: https://kustomize.io
.. _phases: https://docs.airshipit.org/airshipctl/phases.html

To deploy the workloads, run:

.. code-block:: bash

    ./tools/deployment/36_deploy_workload.sh

Estimated runtime: Varies by the workload content.

Accessing Nodes
~~~~~~~~~~~~~~~

Operators can use ssh to access the controller and worker nodes via the OAM IP
address. The ssh key can be retrieved using the ``airshipctl phase render``
command:

.. code-block:: bash

    airshipctl phase render controlplane-ephemeral

Tearing Down Site
~~~~~~~~~~~~~~~~~

To tear down a deployed bare metal site, the user can simply power off all
the nodes and clean up the deployment artifacts on the build node as follows:

.. code-block:: bash

    airshipctl baremetal poweroff --name <server-name>  # alternatively, use iDRAC or iLO
    rm -rf ~/.airship/ /srv/images/*
    docker rm -f -v $(sudo docker ps --all -q | xargs -I{} sudo bash -c 'if docker inspect {} | grep -q airship; then echo {}; fi')
    docker rmi -f $(sudo docker images --all -q | xargs -I{} sudo bash -c 'if docker image inspect {} | grep -q airship; then echo {}; fi')

300
doc/source/airship2/site-setup.rst
Normal file
@ -0,0 +1,300 @@

.. _site_setup_guide:

System Requirements and Setup
=============================

Component Overview
------------------

Airship uses a command line utility, ``airshipctl``, that drives the deployment
and lifecycle management of Kubernetes clouds and software stacks.

This utility articulates lifecycle management as a list of phases, or as a
plan of phases. For each of these phases, a YAML document set is rendered, and
airshipctl transparently utilizes the appropriate set of CNCF projects to
deliver that particular phase.

.. image:: img/airship_architecture_diagram.png

Node Overview
-------------

This document refers to several types of nodes, which vary in their
purpose, and to some degree in their orchestration and setup:

- **Build node**: This refers to the environment where configuration
  documents are built for your environment (e.g., your laptop).
- **Ephemeral node**: The "ephemeral" or "seed" node refers to a node used
  to get a new deployment off the ground, and is the first node built
  in a new deployment environment.
- **Controller nodes**: The nodes that make up the control plane. (Note that
  the ephemeral node will be converted to one of the controller nodes.)
- **Worker nodes**: The nodes that make up the data plane.

Hardware Preparation
--------------------

The Treasuremap `reference-airship-core`_ site shows a production-worthy
bare metal deployment that includes multiple disks and redundant/bonded
network configuration.

.. note::

   Airship hardware requirements are flexible, and the system can be
   deployed with very minimal requirements if needed (e.g., single disk,
   single network).

   For simplified non-bonded, single-disk examples, see
   ``manifests/site/test_site``.

.. _reference-airship-core:
   https://github.com/airshipit/treasuremap/tree/v2.0/manifests/site/reference-airship-core

BIOS, Redfish and PXE
~~~~~~~~~~~~~~~~~~~~~

1. Ensure that virtualization is enabled in BIOS.
2. Ensure that Redfish IPs are assigned and routed to the environment you will
   deploy into. Firmware bugs related to Redfish are common; ensure you are
   running the latest firmware version for your hardware.
3. Set PXE as the first boot device and ensure the correct NIC is selected for
   PXE.

.. note::

   * Airship can remotely bootstrap the nodes using Redfish. If Redfish is not
     available, you can mount the ephemeral ISO image via an alternate
     mechanism, such as a USB thumb drive.
   * Airship 2 has been verified on Dell PowerEdge R740xd servers with iDRAC 9,
     BIOS Version 2.8.2, iDRAC Firmware Version 4.22.00.53 and Redfish API
     version 1.

.. _Disk:

Disk
~~~~

1. For controller nodes, including the ephemeral node:

   - Two-disk RAID-1: Operating System

2. For worker nodes (tenant data plane):

   - Two-disk RAID-1: Operating System
   - Remaining disks: configuration per worker host profile

.. note::

   As of release v2.0.0, the ``reference-airship-core`` example does not
   support integration with the `Rook Storage Operator`_. However, the
   manifests for the Rook deployment can be found in the
   ``manifests/function/rook-operator`` directory. If you plan to include
   Rook for Ceph storage, it is recommended to have the additional disks
   on all the controller and worker nodes:

   - Two disks JBOD: Ceph journal and metadata
   - Two disks JBOD: Ceph OSDs

.. _Rook Storage Operator:
   https://rook.io/

Network
~~~~~~~

1. Ensure that you have a dedicated PXE interface on an untagged/native VLAN;
   a 1x1G interface is recommended. The PXE network must have routability to
   the internet in order to fetch the provisioning disk image; alternatively,
   you may host the image locally on the PXE network itself.

2. Ensure that you have VLAN-segmented networks on all nodes; 2x25G bonded
   interfaces are recommended.

The table below is an opinionated example used by the Treasuremap reference
site ``reference-airship-core``, but users can diverge from it as needed. For
example, in the simplest configuration, two networks can be configured: one
for PXE and one for everything else.

+---------+-------------+--------------+----------+------+----------------------------------------------+
| VLAN/   | Name        | Routability  | Quantity | MTU  | Description                                  |
| Network |             |              |          |      |                                              |
+=========+=============+==============+==========+======+==============================================+
| 1023    | OOB/iLO     | WAN          | IPv4:/26 | 1500 | For HW Redfish addressing                    |
|         |             |              | IPv6:/64 |      |                                              |
+---------+-------------+--------------+----------+------+----------------------------------------------+
| eno4    | PXE         | Private      | IPv4:/25 | 1500 | For bootstrap by Ironic, Metal3 or MaaS      |
|         |             | RFC1918      | IPv6:/64 |      |                                              |
+---------+-------------+--------------+----------+------+----------------------------------------------+
| 61      | OAM         | WAN          | IPv4:/26 | 9100 | - Used for operational access to hosts       |
|         |             |              | IPv6:/64 |      | - Can reach to OOB, PXE, DNS, NTP,           |
|         |             |              |          |      |   Airship images and manifest repos          |
|         |             |              |          |      | - Hosts all host level endpoints             |
+---------+-------------+--------------+----------+------+----------------------------------------------+
|         | OAM         | WAN          | IPv4:/29 | 9100 | - Rack floating VIP for K8S ingress traffic  |
|         |             |              |          |      | - Configured as secondary subnet for VLAN 61 |
|         |             |              |          |      | - Hosts all service endpoints                |
+---------+-------------+--------------+----------+------+----------------------------------------------+
| 62      | Storage     | Private      | IPv4:/25 | 9100 | Ceph storage traffic for all hosts, pods and |
|         |             | RFC1918      | IPv6:/64 |      | VMs                                          |
+---------+-------------+--------------+----------+------+----------------------------------------------+
| 64      | Calico      | Private      | IPv4:/25 | 9100 | L2 network used by Calico for BGP peering    |
|         |             | RFC1918      | IPv6:/64 |      | or IP-in-IP mesh                             |
+---------+-------------+--------------+----------+------+----------------------------------------------+
| 82      | Subcluster  | Private      | IPv4:/22 | 9100 | Private IP ranges for VM-based subclusters   |
|         | Net         | RFC1918      | IPv6:/64 |      | for K8S as a service                         |
+---------+-------------+--------------+----------+------+----------------------------------------------+
| Private | CNI Pod     | Zone Private | IPv4:/16 | N/A  | For Kubernetes pods and objects by Calico    |
| Reserve | Network     |              | IPv6:/64 |      |                                              |
+---------+-------------+--------------+----------+------+----------------------------------------------+
| Private | K8S Service | Zone Private | IPv4:/16 | N/A  | For K8S service objects and intermediary     |
| Reserve | Network     |              | IPv6:/64 |      | pods                                         |
+---------+-------------+--------------+----------+------+----------------------------------------------+

See a detailed network configuration example in the Treasuremap repo,
``manifests/site/reference-airship-core/target/catalogues/networking.yaml``.

Hardware Sizing and Minimum Requirements
----------------------------------------

+-------------------+----------+----------+----------+
| Node              | Disk     | Memory   | CPU      |
+===================+==========+==========+==========+
| Build (laptop)    | 10 GB    | 4 GB     | 1        |
+-------------------+----------+----------+----------+
| Ephemeral/Control | 500 GB   | 64 GB    | 24       |
+-------------------+----------+----------+----------+
| Worker            | N/A*     | N/A*     | N/A*     |
+-------------------+----------+----------+----------+

\* Workload driven (determined by host profile)

See detailed hardware configuration in the Treasuremap repo,
``manifests/site/reference-airship-core/target/catalogues`` folder.

.. _establishing_build_node:

Establishing the Build Node
---------------------------

Setting Environment Variables
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The Airship deployment tool requires a few environment variables that the
operators need to configure on the build node. The environment variables can
be persisted by setting them in your profile, or can be set in the shell
session before you run the Airship commands and scripts.

Proxy
+++++

Access to external resources such as ``github``, ``quay.io`` and ``go`` is
required for downloading manifests, images and ``go`` packages. If you are
behind a proxy server, the following environment variables must be configured
on the build node:

* ``USE_PROXY``: Boolean value to indicate whether the proxy setting should be
  used.
* ``http_proxy``: Proxy server for HTTP traffic.
* ``https_proxy``: Proxy server for HTTPS traffic.
* ``no_proxy``: IP addresses or domain names that should not use the proxy.
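
For example, the variables can be exported in the shell session (the proxy
host, ports and exempt networks below are placeholder values for
illustration):

```shell
# Placeholder proxy settings; substitute your own proxy endpoint and
# exempt networks.
export USE_PROXY=true
export http_proxy=http://proxy.example.com:8080
export https_proxy=http://proxy.example.com:8080
export no_proxy=localhost,127.0.0.1,10.23.0.0/16

# Some tools (e.g. Docker) read the uppercase variants, so set both.
export HTTP_PROXY="${http_proxy}"
export HTTPS_PROXY="${https_proxy}"
export NO_PROXY="${no_proxy}"
echo "proxy configured: ${http_proxy}"
```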
SOPS
++++

For security reasons, the secrets in the Airship manifests should not be
stored in plain-text form. Airshipctl uses `Mozilla SOPS`_ to encrypt and
decrypt the manifests.

.. _Mozilla SOPS:
   https://github.com/mozilla/sops

Two environment variables are needed for the encryption and decryption:

* ``SOPS_IMPORT_PGP``: Contains a public or private key (or set of keys).
* ``SOPS_PGP_FP``: Contains the fingerprint of the public key, from the list
  of keys provided in ``SOPS_IMPORT_PGP``, that will be used for encryption.

The easiest way to generate SOPS keys is to use the gpg wizard:

.. code-block:: bash

    gpg --full-generate-key

For demo purposes, you can import the pre-generated SOPS keys used by the
airshipctl gate:

.. code-block:: bash

    curl -fsSL -o /tmp/key.asc https://raw.githubusercontent.com/mozilla/sops/master/pgp/sops_functional_tests_key.asc
    export SOPS_IMPORT_PGP="$(cat /tmp/key.asc)"
    export SOPS_PGP_FP="FBC7B9E2A4F9289AC0C1D4843D16CEE4A27381B4"

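As a quick sanity check (a sketch only; the value shown is the gate test key
fingerprint from above), you can verify that ``SOPS_PGP_FP`` looks like a
valid 40-character PGP fingerprint before running the deployment scripts:

```shell
# Sanity-check sketch: a PGP key fingerprint is 40 hexadecimal characters.
export SOPS_PGP_FP="FBC7B9E2A4F9289AC0C1D4843D16CEE4A27381B4"
if echo "${SOPS_PGP_FP}" | grep -Eq '^[0-9A-F]{40}$'; then
    echo "fingerprint format OK"
fi
```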
Airship Installation
++++++++++++++++++++

* ``AIRSHIP_CONFIG_MANIFEST_DIRECTORY``: File system path to the Airship
  manifest directory, which will be the home of all Airship artifacts,
  including airshipctl, treasuremap, your projects and sites. You can create
  the directory at a location of your choice.
* ``PROJECT``: Name of the project directory to be created in the
  :ref:`init_site` section.
* ``SITE``: Name of the site to be deployed.
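
For example (the directory path and the project and site names below are
placeholders; choose your own):

```shell
# Placeholder values: adjust the manifest home, project and site names
# for your environment.
export AIRSHIP_CONFIG_MANIFEST_DIRECTORY="${HOME}/airship"
export PROJECT=myproject
export SITE=mysite

# Create the manifest home directory if it does not exist yet.
mkdir -p "${AIRSHIP_CONFIG_MANIFEST_DIRECTORY}"
```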

Download Airshipctl
~~~~~~~~~~~~~~~~~~~

1. On the build node, install the Git package:

   .. code-block:: bash

      sudo apt update
      sudo DEBIAN_FRONTEND=noninteractive apt -y install git

2. Create the Airship home directory and clone the ``airshipctl`` repository:

   .. code-block:: bash

      mkdir -p $AIRSHIP_CONFIG_MANIFEST_DIRECTORY
      cd $AIRSHIP_CONFIG_MANIFEST_DIRECTORY
      git clone https://opendev.org/airship/airshipctl.git
      cd airshipctl && git checkout <release-tag|branch|commit-hash>

Install Essential Tools
~~~~~~~~~~~~~~~~~~~~~~~

1. Install the essential tools, including kubectl, kustomize, pip, and yq.

   From the airshipctl directory, run:

   .. code-block:: bash

      ./tools/deployment/10_install_essentials.sh

   It is recommended to add the current user to the ``docker`` group to avoid
   using sudo in the subsequent steps:

   .. code-block:: bash

      sudo usermod -aG docker <user>

2. Install the airshipctl executable:

   .. code-block:: bash

      ./tools/deployment/21_systemwide_executable.sh

3. (Optional) Install the Apache Web server.

   Airship 2 deployment requires a web server to host the generated ephemeral
   ISO image. If you don't have an existing web server, you can install an
   `Apache server`_ on the build node:

   .. code-block:: bash

      sudo apt install apache2

   .. note:: The Apache Web server must be accessible by the ephemeral host.

   .. _Apache server:
      https://ubuntu.com/tutorials/install-and-configure-apache

After the build node is established, you are ready to start creating your site
manifests and deploying the site.

@ -72,7 +72,7 @@ Additionally, we provide a reference architecture for easily deploying a
smaller, demo site.

`Airsloop`_ is a fully-authored Airship site that can be quickly deployed as a
bare metal, demo lab.

.. _Airship-in-a-Bottle: https://opendev.org/airship/treasuremap/src/branch/master/tools/deployment/aiab

@ -48,7 +48,8 @@ developers.
   :maxdepth: 1

   airship2/airship-in-a-pod.rst
   airship2/production.rst
   airship2/baremetal.rst
   airship2/providers.rst

.. toctree::
   :caption: Develop Airship 2
@ -63,7 +64,6 @@ developers.
   Airship Documentation <https://docs.airshipit.org>
   Airshipctl <https://docs.airshipit.org/airshipctl>
   Airshipui <https://docs.airshipit.org/airshipui>
   Treasuremap <https://docs.airshipit.org/treasuremap>

.. toctree::
   :caption: Airship 1 Documentation
@ -73,6 +73,7 @@ developers.
   Airship-in-a-Bottle <https://opendev.org/airship/treasuremap/src/branch/master/tools/deployment/aiab>
   Airsloop: Simple Bare-Metal Airship <https://docs.airshipit.org/treasuremap/airsloop.html>
   Seaworthy: Production-grade Airship <https://docs.airshipit.org/treasuremap/seaworthy.html>
   Treasuremap <https://docs.airshipit.org/treasuremap>
   develop/airship1-developers.rst
   develop/conventions.rst

@ -1,5 +1,5 @@
..
   Copyright 2020-2021 The Airship authors.
   All Rights Reserved.

   Licensed under the Apache License, Version 2.0 (the "License"); you may
@ -19,130 +19,131 @@
Airship Glossary of Terms
=========================

.. If you add new entries, keep the alphabetical sorting!

.. glossary::

   Airship
     A platform that integrates a collection of best-of-class, interoperable
     and loosely coupled CNCF projects such as Cluster API, Kustomize, Metal3,
     and Helm Operator. The goal of Airship is to deliver automated and
     resilient container-based cloud infrastructure provisioning at scale on
     both bare metal and public cloud, and life cycle management experience in
     a completely declarative and predictable way.

   Bare metal provisioning
     The process of installing a specified operating system (OS) on bare metal
     host hardware. Building on open source bare metal provisioning tools such
     as OpenStack Ironic, Metal3.io provides a Kubernetes native API for
     managing bare metal hosts and integrates with the Cluster-API as an
     infrastructure provider for Cluster-API Machine objects.

   Cloud
     A platform that provides a standard set of interfaces for
     `IaaS <https://en.wikipedia.org/wiki/Infrastructure_as_a_service>`_
     consumers.

   Container orchestration platform
     Set of tools that any organization that operates at scale will need.

   Control Plane
     From the point of view of the cloud service provider, the control plane
     refers to the set of resources (hardware, network, storage, etc.)
     configured to provide cloud services for customers.

   Data Plane
     From the point of view of the cloud service provider, the data plane is
     the set of resources (hardware, network, storage, etc.) configured to run
     consumer workloads. When used in Airship deployment, "data plane" refers
     to the data plane of the tenant clusters.

   Executor
     A usable executor is a combination of an executor YAML definition and an
     executor implementation that adheres to the ``Executor`` interface. A
     phase uses an executor by referencing the definition. The executor's
     purpose is to do something useful with the rendered document set.
     Built-in executors include `KubernetesApply`_, `GenericContainer`_,
     `Clusterctl`_, and several other executors that help with driving Redfish
     and bare metal node image creation.

   Hardware Profile
     A hardware profile is a standard way of configuring a bare metal server,
     including RAID configuration and BIOS settings. An example hardware
     profile can be found in the `Airshipctl repository`_.

   Helm
     `Helm`_ is a package manager for Kubernetes. Helm Charts help you define,
     install, and upgrade Kubernetes applications.

   Kubernetes
     An open-source container-orchestration system for automating application
     deployment, scaling, and management.

   Lifecycle management
     The process of managing the entire lifecycle of a product from inception,
     through engineering design and manufacture, to service and disposal of
     manufactured products.

   Network function virtualization infrastructure (NFVi)
     Network architecture concept that uses the technologies of IT
     virtualization to virtualize entire classes of network node functions
     into building blocks that may connect, or chain together, to create
     communication services.

   Openstack Ironic (OpenStack bare metal provisioning)
     An integrated OpenStack program which aims to provision bare metal
     machines instead of virtual machines, forked from the Nova bare metal
     driver.

   Orchestration
     Automated configuration, coordination, and management of computer systems
     and software.

   Phase
     A `Phase`_ is defined as a Kustomize entrypoint and its relationship to a
     known Airship executor that takes the rendered document set and performs
     a defined action on it. The goal of phases is to break up the delivery of
     artifacts into independent document sets. The most common example of such
     an executor is the built-in ``KubernetesApply`` executor, which takes the
     rendered document set, applies it to a Kubernetes endpoint, and
     optionally waits for the workloads to be in a specific state. The Airship
     community provides a set of predefined phases in Treasuremap that allow
     you to deploy a Kubernetes cluster and manage its workloads, but you can
     craft your own phases as well.

   Phase Plan
     A plan is a collection of phases that should be executed in sequential
     order. It provides the mechanism to easily orchestrate a number of
     phases. The purpose of the plan is to help achieve a complete end-to-end
     lifecycle with a single command. An Airship phase plan is declared in
     your YAML library. There can be multiple plans, for instance, a plan
     defined for initial deployment, a plan for updates, and even plans for
     highly specific purposes. Plans can also share phases, which makes them
     another fairly light-weight construct and allows YAML engineers to craft
     any number of specific plans without duplicating plan definitions.

   Software defined networking (SDN)
     Software-defined networking technology is an approach to network
     management that enables dynamic, programmatically efficient network
     configuration in order to improve network performance and monitoring,
     making it more like cloud computing than traditional network management.

   Stage
     A stage is a logical grouping of phases articulating a common purpose in
     the life cycle. There is no airshipctl command that relates to stages,
     but it is a useful notion for discussion purposes to define the stages
     that make up the life cycle process.

.. _Helm:
   https://helm.sh

.. _Airshipctl repository:
   https://github.com/airshipit/airshipctl/tree/master/manifests/function/hardwareprofile-example

.. _Phase:
   https://docs.airshipit.org/airshipctl/phases.html

.. _KubernetesApply:
   https://github.com/airshipit/airshipctl/blob/master/pkg/phase/executors/k8s_applier.go

.. _GenericContainer:
   https://github.com/airshipit/airshipctl/blob/master/pkg/phase/executors/container.go

.. _Clusterctl:
   https://github.com/airshipit/airshipctl/blob/master/pkg/phase/executors/clusterctl.go