[WIP] Add AirshipCTL Spec

This spec details the command-line interface, behavior, and design
of the Airship 2.0 target-state `airshipctl` tool.

Change-Id: I43b690c7c43ddc4f87c0d205f2209cd169a1c0e6
Co-Authored-By: Rodolfo Pacheco <rp2723@att.com>
Authored by Matt McEuen (2019-05-17); committed by Rodolfo Pacheco.

..
   This work is licensed under a Creative Commons Attribution 3.0 Unported
   License.
   http://creativecommons.org/licenses/by/3.0/legalcode

.. index::
   single: airshipctl
   single: Command Line

====================================================
Airship CTL - Airship 2.0 Command Line Interface
====================================================

*Airshipctl* will be a command-line interface for driving deployments and
upgrades of Airship-based infrastructure, and it will be the primary user
interface for the Airship 2.0 platform. This spec details the interface,
behavior, and high-level design of the *airshipctl* tool.

Links
=====

The work to author and implement this spec will be tracked under this
`Jira Epic <https://airship.atlassian.net/browse/AIR-2>`_.

Problem description
===================

The main goals of Airship 2.0 are to:

- Allow for smaller deployments by introducing an ephemeral Airship control
  plane.
- Further the adoption of entrenched projects.
- Continue vanishing complexity.

A big part of achieving these goals is the inception of a new component
called *airshipctl*. This spec defines what this component needs to support.

Impacted components
===================

The existing Airship components are not directly impacted by *airshipctl*;
however, some of the functionality introduced via airshipctl will replace,
abstract, or encapsulate the purpose of existing tools:

#. Pegleg: will become a plugin driven by the *airshipctl* document features.
#. Deckhand: functionality such as document revisions will now be achieved
   via an *airshipctl* document function.
#. Shipyard: the front door of Airship will now be *airshipctl*, which will
   interact directly with Kubernetes.

Proposed change
===============

Airshipctl Architecture
-----------------------

The heart of the Airship 2.0 platform is the *airshipctl* command line
interface. It places an emphasis on a thick client that is able to speak to
Kubernetes in remote sites and natively understands Argo workflows to drive
cluster lifecycle management. This is in contrast to Airship 1.0, which
leveraged a long-lived Shipyard API service in the remote site. The goals of
this pivot are to reduce the underlying infrastructure required to support
updates to an existing site, and to reduce the number of Airship-specific
YAML documents required to produce and manage a site, vastly simplifying the
overall design.

This utility is a net-new Go module that produces two binaries: *airshipctl*
and *airshipui*. Both of these utilities operate on a Kubernetes cluster
security context and understand how to interpret and generate a skeleton
Airship document set. The *airshipctl* utility is the main entrypoint for
bootstrapping a cluster, collecting and pushing documents, and managing
workflows.

All functionality provided by *airshipctl* will be built on a plugin
framework: every feature provided by *airshipctl* will be implemented as an
explicit plugin.

Framework
---------

Define and create the framework for the Go plugins that will make up
*airshipctl*. This includes defining the plugin framework to ensure it is
extensible, along with logging, vendoring, and basic CI/CD so the project
can gain momentum. The goal is that all known initial subcommands, e.g.
bootstrap, document, and so on, will be plugins themselves, ensuring the
tool supports extensibility from day one.

Specifically, the framework needs to define:

- A well-defined mechanism to consume and use plugins.
- Guidelines on expectations from plugins.
- Transactional information: a mechanism to identify a request initiated by
  *airshipctl* that allows for correlation between CTL and UI, and/or
  logging.

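
The plugin mechanism itself is not yet defined by this spec. As a purely
illustrative sketch (the ``Command`` interface, ``registry`` map, and
``register`` helper below are hypothetical names, not part of any agreed
design), a registry-based dispatcher for subcommand plugins might look like:

```go
package main

import "fmt"

// Command is a hypothetical plugin entrypoint interface; the real
// framework contract is yet to be defined by this spec.
type Command interface {
	Name() string
	Run(args []string) error
}

// bootstrapPlugin is an illustrative stub standing in for the real
// bootstrap subcommand plugin.
type bootstrapPlugin struct{}

func (bootstrapPlugin) Name() string { return "bootstrap" }

func (bootstrapPlugin) Run(args []string) error {
	fmt.Printf("bootstrap invoked with %v\n", args)
	return nil
}

// registry maps subcommand names to plugins; airshipctl would dispatch
// CLI invocations through a mechanism like this.
var registry = map[string]Command{}

func register(c Command) { registry[c.Name()] = c }

func main() {
	register(bootstrapPlugin{})
	if c, ok := registry["bootstrap"]; ok {
		_ = c.Run([]string{"init"})
	}
}
```

Each built-in subcommand (bootstrap, document, workflow, and so on) would
register itself the same way, keeping the core binary a thin dispatcher.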
Configuration Context
---------------------

Both the CLI and the UI will need to be able to leverage a standard
Kubernetes security context for interacting with remote sites. The tool
should also understand Airship site contexts, so that it can be pointed at a
local document set and work with it during subsequent commands without
needing to be fed the same document set with every command, much as kubectl
reuses the same cluster context throughout a session.

The initial proposed list of commands is defined below. Every command will
support the following:

**GENERIC OPTIONS**

+---------------------+-----------+-------------------------------------------+
| NAME                | SHORTHAND | USAGE                                     |
+=====================+===========+===========================================+
| help                | h         | Help for the appropriate command          |
+---------------------+-----------+-------------------------------------------+
| version             | v         | Version of airshipctl; plugins will not   |
|                     |           | have independent versions.                |
+---------------------+-----------+-------------------------------------------+

Config
------

*Airshipctl* will be able to interact with multiple clusters, and it will
follow kubectl's principles for configuring access to multiple clusters by
using configuration files. A configuration file describes the YAML document
locations, clusters, users, and contexts. *Airshipctl* configuration files
are modified using subcommands such as
``airshipctl config set current-context my-context``. In general this
command group allows us to modify the configuration of *airshipctl* in a
similar fashion to kubectl context management.

*Airshipctl* will have an **airship config** file that can be managed
manually or updated via the config subcommands.
The loading order follows these rules:

#. If the ``--airshipconfig`` flag is set, then only that file is loaded;
   this flag allows a user to specify a non-default config file. The flag
   may only be set once and no merging takes place.
#. Otherwise, if the ``$AIRSHIPCONFIG`` environment variable is set, it is
   used as a list of paths (following the normal path-delimiting rules for
   your system). These paths are merged. When a value is modified, it is
   modified in the file that defines the stanza. When a value is created,
   it is created in the first file that exists. If no files in the chain
   exist, the last file in the list is created.
#. Otherwise, ``${HOME}/.airship/config`` is used and no merging takes
   place.

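
As a purely illustrative sketch, and assuming a kubeconfig-like schema (none
of the field names below are defined by this spec), an airship config file
might look like:

```yaml
# Hypothetical airship config; the real schema is TBD.
current-context: my-site
contexts:
  my-site:
    cluster: my-site-cluster
    user: admin
clusters:
  my-site-cluster:
    server: https://10.23.25.101:6443
    certificate-authority: /etc/airship/ca.crt
users:
  admin:
    client-certificate: /etc/airship/admin.crt
    client-key: /etc/airship/admin.key
```
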
.. note::

   *Ian Howell:* We should consider using the `XDG base directory
   specification for configuration files
   <https://specifications.freedesktop.org/basedir-spec/basedir-spec-latest.html>`_.
   *The blurb above is copying kubectl; I have no strong preference.*

Like kubectl, it should support managing clusters, contexts, and users.

**CLUSTER**

- ``airshipctl config set-cluster``

+----------------------+----------------------------------------------------------------------------------------+
| NAME                 | USAGE                                                                                  |
+======================+========================================================================================+
| set-cluster          | Provide a cluster name or label, as well as --server, and either                       |
|                      | --certificate-authority or --insecure-skip-tls-verify                                  |
+----------------------+----------------------------------------------------------------------------------------+

**CONTEXT**

- ``airshipctl config current-context``
- ``airshipctl config delete-context``
- ``airshipctl config get-contexts``
- ``airshipctl config set-context``
- ``airshipctl config use-context``

+----------------------+----------------------------------------------------------------------------------------+
| NAME                 | USAGE                                                                                  |
+======================+========================================================================================+
| current-context      | Display the current context                                                            |
+----------------------+----------------------------------------------------------------------------------------+
| delete-context       | Delete a context from the airship config file                                          |
+----------------------+----------------------------------------------------------------------------------------+
| get-contexts         | Display one or many contexts                                                           |
+----------------------+----------------------------------------------------------------------------------------+
| set-context          | Create or modify a context in the airship config file                                  |
+----------------------+----------------------------------------------------------------------------------------+
| use-context          | Set the current context                                                                |
+----------------------+----------------------------------------------------------------------------------------+

**USER**

- ``airshipctl config set-credentials``

+----------------------+----------------------------------------------------------------------------------------+
| NAME                 | USAGE                                                                                  |
+======================+========================================================================================+
| set-credentials      | Provide user --username and --password, or --client-certificate and --client-key       |
+----------------------+----------------------------------------------------------------------------------------+

**YAML REPOSITORY**

- ``airshipctl config set-repository``
- ``airshipctl config set-clone-path``
- ``airshipctl config set-extra-repository``
- ``airshipctl config set-repo-username``
- ``airshipctl config set-repo-key``

+----------------------+----------------------------------------------------------------------------------------+
| NAME                 | USAGE                                                                                  |
+======================+========================================================================================+
| set-repository       | Path or URL to the primary repository (the repo containing site-definition.yaml).      |
+----------------------+----------------------------------------------------------------------------------------+
| set-clone-path       | The path where the repo will be cloned.                                                |
+----------------------+----------------------------------------------------------------------------------------+
| set-extra-repository | Path or URL of additional repositories. These should be named per the site-definition  |
|                      | file, e.g. -e global=/opt/global -e secrets=/opt/secrets.                              |
+----------------------+----------------------------------------------------------------------------------------+
| set-repo-username    | The SSH username to use when cloning remote authenticated repositories specified in    |
|                      | the site-definition file.                                                              |
+----------------------+----------------------------------------------------------------------------------------+
| set-repo-key         | The SSH public key to use when cloning remote authenticated repositories.              |
+----------------------+----------------------------------------------------------------------------------------+

These options allow us to specify a single local or remote repository, plus
any extra repositories, as the location of the YAML files.
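
To illustrate how the repository options might come together (a sketch only;
the stanza layout and key names below are hypothetical), the resulting
configuration could resemble:

```yaml
# Hypothetical repository configuration written by the set-repo* subcommands.
repositories:
  primary:
    url: https://git.example.com/site-manifests.git   # contains site-definition.yaml
    clone-path: /opt/airship/manifests
  extra:
    global: /opt/global
    secrets: /opt/secrets
repo-username: airship
repo-key: ~/.ssh/id_rsa.pub
```
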

Bootstrap
---------

The bootstrap plugin will drive the lifecycle required to:

- Stand up a bootstrap cluster if one doesn't exist.
- Initialize a target remote cluster by configuring at least the initial
  server for that cluster.
- Leverage the intentions for the site provided by the YAML declarations,
  including BareMetalHost CRs.

- ``airshipctl bootstrap``

.. note::

   - An optional [phase] argument could target just one phase in the
     bootstrap process.
   - We must determine whether bootstrap should try to orchestrate something
     like k3s itself, or expect the user to have done that prior to
     bootstrap. The cost of embedding k3s (50MB) is really the k8s images
     (200MB). The installation outside of Airship is rather simple:

     - ``sudo k3s server &`` (the kubeconfig is written to
       /etc/rancher/k3s/k3s.yaml)
     - ``sudo k3s kubectl get node``

   - If we expect a pre-existing cluster, ephemeral or not, the user will
     need to provide a kubeconfig context for the ephemeral cluster, which
     may be separate from what is defined for the site.
   - If this is effectively driving what clusterctl would do, presumably the
     YAML it needs is within the overall document set and we can find it
     (e.g. provider components, machines). How will we know we need to
     insert things like BareMetalHosts, given that this is provider
     dependent, e.g. required for metal3-io but not for Azure?

**BOOTSTRAP OPTIONS**

+-----------------+-----------+--------------------------------------------------------+
| NAME            | SHORTHAND | USAGE                                                  |
+=================+===========+========================================================+
| init            | i         | Initialize the cluster with Airship-needed elements    |
+-----------------+-----------+--------------------------------------------------------+
| isogen          | g         | Generate an ISO image for a specific server            |
+-----------------+-----------+--------------------------------------------------------+
| remotedirect    | r         | Remotely interact with a host via Redfish; install ISO |
+-----------------+-----------+--------------------------------------------------------+
| logs            | l         | Retrieve logs                                          |
+-----------------+-----------+--------------------------------------------------------+
| clusterctl      | c         | Drive clusterctl calls                                 |
+-----------------+-----------+--------------------------------------------------------+

An illustrative sequence diagram for the bootstrapping flows is depicted
below:

.. image:: ../../images/bootstrap_workflow.png
   :width: 600

**init**: During the bootstrapping flow, this is utilized at the point at
which a bootstrapping cluster has been established. This command will
deliver the Airship-required operators and CRs to the bootstrapping cluster
host. *Airshipctl* relies on the config explained above to define the
credentials, context, and cluster settings that identify where this
bootstrapping cluster is and how to authenticate against it.

*init* will deliver:

- baremetal operator CRD
- baremetal actuator CRD
- cluster operator CRD
- Helm Tiller
- Argo operator CRD
- Logging-related CRDs (or components); this is TBD.
- MinIO as a Helm chart.
- TBD

**isogen**: This command will accomplish several things:

- Consume from the YAML definitions the expectations for the bootstrapping
  cluster host, i.e. network information, host OS source, and the target for
  the generated image.
- Generate a customized ISO image that includes the appropriate
  bootstrapping information for the host. This will include a minimal
  single-host bootstrapping Kubernetes install.
- Produce as output the information required to access this host, such as
  certificates/credentials, API endpoints, and cluster information.
  Airshipctl config should be able to use this to configure a context for
  the target bootstrapping cluster. This output should also include the
  Redfish target URLs for the generated ISO image.

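
A sketch of an isogen input document, under the assumption that it captures
the host network, OS source, and output target described above (all field
names below are hypothetical, not a defined schema):

```yaml
# Hypothetical isogen input; illustrative values only.
kind: IsoConfiguration
hostOS:
  source: https://mirror.example.com/ubuntu-18.04-live-server-amd64.iso
network:
  interface: eno1
  address: 10.23.25.101/24
  gateway: 10.23.25.1
output:
  image: /var/lib/airship/ephemeral.iso
```
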
**remotedirect**: The approach for configuring the bootstrapping host is to
interact directly with it using the appropriate set of
`Redfish APIs <https://www.dmtf.org/standards/redfish>`_. This operation
will be in charge of:

- Connecting to the bootstrapping host.
- Bootstrapping the host with the custom ISO image.
- Validating with some degree of sanity that the host is available.

**clusterctl**: This command creates the target cluster using cluster-api
and the appropriate provider deployed in the bootstrapping cluster by the
bootstrap init command. It takes the following manifests as input:

- cluster.yaml - defines Cluster properties, such as the Pod and Services
  CIDRs, the Services domain, etc.
- machines.yaml - defines Machine properties, such as machine size, image,
  tags, SSH keys, and enabled features, as well as which Kubernetes version
  will be used for each machine.
- baremetalhosts.yaml - defines the actual BareMetalHosts we want to
  initialize at this point. Conceptually this might only be the control
  plane servers.
- provider-components.yaml - contains the deployment manifest for the
  Cluster-API controller, which manages and reconciles Cluster-API resources
  related to this provider.
- addons.yaml - used to deploy additional components once the cluster is
  bootstrapped, such as CNI, CSI, or other plugins.
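
For illustration, a minimal cluster.yaml in the style of the cluster-api
v1alpha1 ``Cluster`` resource might look as follows (the providerSpec
portion varies by provider and is only sketched here; all values are
illustrative):

```yaml
apiVersion: cluster.k8s.io/v1alpha1
kind: Cluster
metadata:
  name: target-cluster
spec:
  clusterNetwork:
    services:
      cidrBlocks: ["10.96.0.0/12"]
    pods:
      cidrBlocks: ["192.168.0.0/18"]
    serviceDomain: cluster.local
  providerSpec:
    value:
      apiVersion: baremetal.cluster.k8s.io/v1alpha1   # provider dependent
      kind: BareMetalClusterProviderSpec
```
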

**logs**: During the bootstrapping of the target cluster, logs will be
generated and stored locally in MinIO. This command will provide a mechanism
to interact with those logs.

The lifecycle driven by the bootstrap command is depicted by the flow in the
image below:

.. image:: ../../images/airship_2_0_airshipcli_and_clusterctl.png
   :width: 600

The goal is for Airship bootstrap to function with the metal3-io, Azure,
OpenStack, and generic-ssh providers, supporting the bare metal, cloud,
OpenStack VM, and bring-your-own-infrastructure use cases from day one.

Document
--------

Documents are the life-blood of Airship. Airship documents, which are a
collection of metal3-io, machine, cluster-api, armada, and argo documents,
need to be collected, pushed to sites, and validated. This plugin will be
responsible for handling all of that client-side, including document
revisions, documents as secrets, etc. For flexibility, the document plugin
should also support pushing a subset of document sets. For instance, much
like we have update_site and deploy_site today, which basically indicate
whether baremetal documents are submitted or not, the same should be true of
Airship 2.0, but with more flexibility.

- ``airshipctl document``

**OPTIONS**

+----------+---------------------------------------------------------------------------------+
| NAME     | USAGE                                                                           |
+==========+=================================================================================+
| bundle   | Provides a mechanism to manage manifests (all YAML) as a bundle                 |
+----------+---------------------------------------------------------------------------------+
| secret   | Provides a mechanism akin to pegleg's passphrase and PKI generation functions   |
+----------+---------------------------------------------------------------------------------+
| validate | Provides options for YAML validation                                            |
+----------+---------------------------------------------------------------------------------+
| init     | Create a structure for the YAML documents                                       |
+----------+---------------------------------------------------------------------------------+
| apply    | Label a document set for a specific purpose; will drive specific target         |
|          | deliveries                                                                      |
+----------+---------------------------------------------------------------------------------+

- ``airshipctl document bundle``

In Airship 2.0 all the manifest documents are stored in the cluster's
Kubernetes etcd store. Documents are either CRs or CRDs; Armada YAML will
also become CRDs in Airship 2.0. All CRDs, and any YAML document that is
delivered to a site, will be encapsulated as a Kubernetes Secret document
that we are calling a bundle. A bundle is a collection of all the YAML
documents deployed via airshipctl. It would include all CRDs, such as
BaremetalHost, Machine, Armada Manifest, Armada ChartGroups, etc. It would
have metadata to identify information such as:

- The timestamp of when it was created.
- Identity information about its provenance (source location,
  credentials/login or whatever is appropriate, etc.).
- A unique ID (UUID).
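
As a sketch of what a bundle could look like when encapsulated as a
Kubernetes Secret (the annotation keys, names, and data layout below are
hypothetical, not a defined schema):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: site-bundle-2019-05-17
  annotations:
    airshipit.org/created: "2019-05-17T09:19:40Z"   # creation timestamp
    airshipit.org/source: https://git.example.com/site-manifests.git
    airshipit.org/uuid: 1d2c5a9e-788a-4f6b-9a56-0a2b4c8d1e3f
type: Opaque
data:
  documents: e30=   # base64-encoded YAML document set (placeholder)
```
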

**BUNDLE OPTIONS**

+---------------------+-----------+-------------------------------------------+
| NAME                | SHORTHAND | USAGE                                     |
+=====================+===========+===========================================+
| get                 | g         | Get a specific bundle                     |
+---------------------+-----------+-------------------------------------------+
| apply               | a         | Apply the bundle from the manifest        |
+---------------------+-----------+-------------------------------------------+
| list                | l         | List all bundles for a specific manifest  |
+---------------------+-----------+-------------------------------------------+
| delete              | d         | Delete a specific bundle for the manifest |
+---------------------+-----------+-------------------------------------------+

**SECRET OPTIONS**

Secret functionality allows airshipctl to implement functionality similar to
that described in the
`Pegleg secrets specification <https://github.com/airshipit/specs/blob/master/specs/approved/pegleg_secrets.rst>`_.

+---------------------+-----------+-------------------------------------------+
| NAME                | SHORTHAND | USAGE                                     |
+=====================+===========+===========================================+
| generate-passphrase |           | Generate a passphrase                     |
+---------------------+-----------+-------------------------------------------+
| rotate-passphrases  |           | Rotate passphrases                        |
+---------------------+-----------+-------------------------------------------+
| generate-pki        |           | Generate certificates                     |
+---------------------+-----------+-------------------------------------------+
| rotate-pki          |           | Rotate certificates                       |
+---------------------+-----------+-------------------------------------------+

.. note::

   *A comment was made to support rollback. I don't think that is applicable
   here. We can disregard, or simply apply the previous revision as needed.*

- ``airshipctl document validate``

*Airshipctl* will be able to trigger validations locally prior to delivering
documents to a site.

**VALIDATE OPTIONS**

By default, if no options are specified, the document validate command will
run through all available validations.

+-----------+-----------+-----------------------------------------------------------------------------------------------+
| NAME      | SHORTHAND | USAGE                                                                                         |
+===========+===========+===============================================================================================+
| lint      | l         | YAML syntax and schema validation                                                             |
+-----------+-----------+-----------------------------------------------------------------------------------------------+
| policy    | p         | All required policy documents are in place, and existing documents conform to those policies. |
|           |           | E.g. if a 3rd-party document specifies a layer that is not present in the layering policy,    |
|           |           | that will cause this validation to report an error.                                           |
+-----------+-----------+-----------------------------------------------------------------------------------------------+

- ``airshipctl document init``

Initialize the document structure and bootstrap its contents. This command
creates a skeleton Airship document set to allow new users to begin building
the definition for a new site. Executing this command will create the
following directory structure::

    <YAML Target Directory>/
        global/
            software/
                charts/
                config/
                manifests/
        secrets/
            passphrases/
            publickey/
        site/
            baremetal/
                baremetalhosts/
                machines/

.. note::

   *More detail is needed as to what files or CRs can be automatically
   generated in order to be used as templates, or simply because the
   information in them is generic.*

Cluster
-------

The CLI should support integration with a cluster registry. Much of the same
information is available in a cluster-api-generated Cluster CRD that a
cluster registry should be populated with. The communities themselves are
looking to align on this, and we should ensure *airshipctl* supports the
ultimate path.

- ``airshipctl cluster``

**CLUSTER OPTIONS**

+----------+-----------+--------------------------------------------------------+
| NAME     | SHORTHAND | USAGE                                                  |
+==========+===========+========================================================+
| init     | i         | Initialize the cluster with Airship-needed elements:   |
|          |           | deploy Argo, deploy operators, etc.                    |
+----------+-----------+--------------------------------------------------------+
| status   | s         | Status of the cluster                                  |
+----------+-----------+--------------------------------------------------------+
| state    | t         | State of the site: CRDs and their versions, Argo       |
|          |           | version                                                |
+----------+-----------+--------------------------------------------------------+

.. note::

   *Cluster might not be the best name for this; it could be named site
   instead.*

Workflow
--------

Cloud-native workflows will be managed, delivered, and triggered using this
command entry point. Workflows are a collection of YAML templates; each
template specifies the action that each step of the workflow will execute.
Each step is instantiated as a container. In order to achieve this, the
template can specify and use volumes, PVs, init containers, service
accounts, custom schedulers, etc. In essence, the workflow command manages a
specific set of declared workflows. Specifically, we will be leveraging
`Argo <https://github.com/argoproj/argo>`_ workflows.
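
For reference, a minimal Argo Workflow of the kind airshipctl would deliver
is shown below; the Airship-specific correlation label is hypothetical, and
the container image is illustrative only:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: sitemanage-
  labels:
    airshipit.org/transaction: example-id   # hypothetical correlation label
spec:
  entrypoint: deploy-site
  templates:
  - name: deploy-site
    container:
      image: quay.io/airshipit/deploy-step:latest   # illustrative image
      command: [sh, -c]
      args: ["echo applying site documents"]
```
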
- ``airshipctl workflow``

The workflows that airshipctl supports will only be those integrated into
the airshipctl repository; essentially, all Airship-specific workflows will
be a part of the tool. Anybody within the community can add new workflows,
but each of them will need to satisfy certain guidelines, which might
include:

- An annotation, label, metadata, or placeholder input for transactional
  correlation data.
- A specific directory-structure layout for workflow files.
- Accompanying gating tests. Every workflow supported by Airship will need
  to run through the appropriate CI/CD gating and tests.

The airshipctl-driven workflows will be ephemeral; airshipctl will deliver
them each time as a new instance. Older completed workflows will remain on a
site only for informational purposes and can or will be removed eventually.
Airshipctl will not support custom workflows unless they have been merged
into the airshipctl repo. Custom workflows can instead be driven by
leveraging the infrastructure in place (i.e. Argo) directly.

Specific airshipctl workflows would be:

- sitemanage: This workflow is used to deploy and update sites. The
  definition of a site state is a collection of YAML. Some Airship-provided
  workflows will be responsible for effecting the state depicted by this
  YAML, whether this is a first-time deployment, an expansion, or simple
  software-stack configuration changes. This workflow will be responsible
  for delivering and executing the intentions described by the YAML.
- redeployhost: This workflow is for redeploying a host. It includes taking
  care of whatever is needed to preserve workloads when appropriate. This
  workflow essentially encapsulates the logic required to prepare a host
  for, and post-configure it after, redeployment. In essence, a host should
  be redeployable by simply altering the BareMetalHost specification and the
  associated MachineSpec definition. The reason for a specific workflow is
  that there could be tasks to be performed in preparation for the redeploy
  and after the redeploy occurs.
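
As the redeployhost description notes, a redeploy is driven by altering the
BareMetalHost specification. For example, using the metal3-io v1alpha1 API
(all values are illustrative):

```yaml
apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: worker-01
spec:
  online: true
  image:
    # Pointing at a new image is what triggers reprovisioning of the host.
    url: https://images.example.com/ubuntu-18.04.qcow2
    checksum: https://images.example.com/ubuntu-18.04.qcow2.md5sum
  bmc:
    address: ipmi://10.23.24.11
    credentialsName: worker-01-bmc-secret
```
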

**WORKFLOW OPTIONS**

+--------------+-----------+--------------------------------------------------------------+
| NAME         | SHORTHAND | USAGE                                                        |
+==============+===========+==============================================================+
| list         | l         | List workflows                                               |
+--------------+-----------+--------------------------------------------------------------+
| delete       | d         | Delete a workflow                                            |
+--------------+-----------+--------------------------------------------------------------+
| submit       | s         | Submit a workflow                                            |
+--------------+-----------+--------------------------------------------------------------+
| get          | g         | Display details about a workflow                             |
+--------------+-----------+--------------------------------------------------------------+
| watch        | w         | Watch a workflow until it completes                          |
+--------------+-----------+--------------------------------------------------------------+
| terminate    | t         | Terminate a workflow                                         |
+--------------+-----------+--------------------------------------------------------------+
| suspend      | u         | Suspend a workflow                                           |
+--------------+-----------+--------------------------------------------------------------+
| resume       | r         | Resume a suspended workflow                                  |
+--------------+-----------+--------------------------------------------------------------+
| logs         |           | View the logs of a workflow                                  |
+--------------+-----------+--------------------------------------------------------------+

.. note::

   *Other Argo commands are TBD, e.g. lint (is that a function of document
   validate lint, maybe?), and resubmit and retry (not sure). The commands
   help and version should be generic throughout airshipctl. For the argo
   command wait, we need to determine whether we want it or not.*

Implementation
==============

.. image:: ../../images/airshipctl.png
   :width: 600

Dependencies
============

References
==========