Fix docs rendering, enforce instructions and template

This patch applies various documentation rendering fixes
and enforces application of the instructions and of the
file name template.

In addition, it adds a requirement to submit patches
related to a spec under specified Gerrit topics.

Change-Id: I36199cf78c30f2ee75c2d716b8919ceae2ab7c42
Roman Gorshunov 2019-03-13 16:12:45 +01:00
parent d317ef57ce
commit c5064ef2eb
10 changed files with 189 additions and 189 deletions

.gitignore

@@ -5,3 +5,4 @@
 /AUTHORS
 /ChangeLog
 .tox
+.vscode/


@@ -5,18 +5,9 @@
 http://creativecommons.org/licenses/by/3.0/legalcode
 .. index::
-   single: template
-   single: creating specs
-   single: containers
-.. note::
-  Blueprints are written using ReSTructured text.
-  Add index directives to help others find your spec. E.g.::
-    .. index::
-       single: template
-       single: creating specs
+   single: Airship
+   single: multi-linux-distros
 ===========================================
 Airship Multiple Linux Distribution Support
@@ -30,8 +21,9 @@ and other Linux distro's as new plugins.
 Links
 =====
-The work to author and implement this spec is tracked in Storyboard:
-https://storyboard.openstack.org/#!/story/2003699
+The work to author and implement this spec is tracked in Storyboard
+`2003699 <https://storyboard.openstack.org/#!/story/2003699>`_ and uses Gerrit
+topics ``airship_suse``, ``airship_rhel`` and similar.
 Problem description
 ===================


@@ -150,19 +150,25 @@ Overall Architecture
 - Raw rack information from plugin:
+  ::
     vlan_network_data:
       oam:
         subnet: 12.0.0.64/26
         vlan: '1321'
 - Rules to define gateway, ip ranges from subnet:
+  ::
     rule_ip_alloc_offset:
       name: ip_alloc_offset
       ip_alloc_offset:
         default: 10
         gateway: 1
 The above rule specify the ip offset to considered to define ip address for gateway, reserved
 and static ip ranges from the subnet pool.
 So ip range for 12.0.0.64/26 is : 12.0.0.65 ~ 12.0.0.126
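As an illustrative aside (not part of the spec or this patch), the effect of the
offset rule above on a subnet can be sketched with Python's ``ipaddress`` module;
the exact split between reserved and static ranges shown here is an assumption:

.. code:: python

  # Sketch only: how an ip_alloc_offset rule could be applied to a subnet.
  # The reserved/static split below is an assumption for illustration.
  import ipaddress

  def apply_ip_alloc_offset(cidr, gateway_offset=1, default_offset=10):
      net = ipaddress.ip_network(cidr)
      hosts = list(net.hosts())            # 12.0.0.65 .. 12.0.0.126 for 12.0.0.64/26
      gateway = hosts[gateway_offset - 1]  # offset 1 -> 12.0.0.65
      reserved = hosts[:default_offset]    # assumed: first 'default' addresses reserved
      static = hosts[default_offset:]      # assumed: remainder available for static use
      return gateway, reserved, static

  gw, reserved, static = apply_ip_alloc_offset("12.0.0.64/26")
  print(gw, static[0], static[-1])         # 12.0.0.65 12.0.0.75 12.0.0.126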
@@ -177,7 +183,7 @@ Overall Architecture
 - Intermediary YAML file information generated after applying the above rules
   to the raw rack information:
   ::
     network:
       vlan_network_data:
@@ -192,13 +198,13 @@ Overall Architecture
         static_end: 12.0.0.126 ----+
         vlan: '1321'
 --
 - J2 templates for specifying oam network data: It represents the format in
   which the site manifests will be generated with values obtained from
   Intermediary YAML
   ::
     ---
     schema: 'drydock/Network/v1'
@@ -230,12 +236,12 @@ Overall Architecture
         end: {{ data['network']['vlan_network_data']['oam']['static_end'] }}
     ...
 --
 - OAM Network information in site manifests after applying intermediary YAML to J2
   templates.:
   ::
     ---
     schema: 'drydock/Network/v1'
@@ -267,7 +273,7 @@ Overall Architecture
         end: 12.0.0.126
     ...
 --
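As an illustrative aside (not part of the spec or this patch), rendering such a
J2 template from the intermediary YAML amounts to passing the parsed document in
as ``data``; the file names below are placeholders:

.. code:: python

  # Minimal sketch of rendering a site-manifest J2 template from the
  # intermediary YAML; paths are placeholders, not defined by the spec.
  import yaml
  from jinja2 import Template

  with open("intermediary.yaml") as f:
      data = yaml.safe_load(f)

  with open("oam_network.yaml.j2") as f:
      template = Template(f.read())

  # The template references values such as
  # data['network']['vlan_network_data']['oam']['subnet'].
  manifest = template.render(data=data)
  print(manifest)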
 Security impact
 ---------------
@@ -308,14 +314,11 @@ plugins.
 1) Excel based site Engineering package. This file contains detail specification
    covering IPMI, Public IPs, Private IPs, VLAN, Site Details, etc.
 2) Excel Specification to aid parsing of the above Excel file. It contains
    details about specific rows and columns in various sheet which contain the
    necessary information to build site manifests.
 3) Site specific configuration file containing additional configuration like
    proxy, bgp information, interface names, etc.
 4) Intermediary YAML file. In this cases Site Engineering Package and Excel
    specification are not required.
@@ -326,84 +329,95 @@ plugins.
 1) End point configuration file containing credentials to enable its access.
    Each end-point type shall have their access governed by their respective plugins
    and associated configuration file.
 2) Site specific configuration file containing additional configuration like
    proxy, bgp information, interface names, etc. These will be used if information
    extracted from remote site is insufficient.
 * Program execution
-  1) CLI Options:
-     -g, --generate_intermediary Dump intermediary file from passed Excel and
-        Excel spec.
-     -m, --generate_manifests Generate manifests from the generated
-        intermediary file.
-     -x, --excel PATH Path to engineering Excel file, to be passed
-        with generate_intermediary. The -s option is
-        mandatory with this option. Multiple engineering
-        files can be used. For example: -x file1.xls -x file2.xls
-     -s, --exel_spec PATH Path to Excel spec, to be passed with
-        generate_intermediary. The -x option is
-        mandatory along with this option.
-     -i, --intermediary PATH Path to intermediary file,to be passed
-        with generate_manifests. The -g and -x options
-        are not required with this option.
-     -d, --site_config PATH Path to the site specific YAML file [required]
-     -l, --loglevel INTEGER Loglevel NOTSET:0 ,DEBUG:10, INFO:20,
-        WARNING:30, ERROR:40, CRITICAL:50 [default:20]
-     -e, --end_point_config File containing end-point configurations like user-name
-        password, certificates, URL, etc.
-     --help Show this message and exit.
-  2) Example:
+  1. CLI Options:
+
+  +-----------------------------+-----------------------------------------------------------+
+  | -g, --generate_intermediary | Dump intermediary file from passed Excel and |
+  |                             | Excel spec. |
+  +-----------------------------+-----------------------------------------------------------+
+  | -m, --generate_manifests    | Generate manifests from the generated |
+  |                             | intermediary file. |
+  +-----------------------------+-----------------------------------------------------------+
+  | -x, --excel PATH            | Path to engineering Excel file, to be passed |
+  |                             | with generate_intermediary. The -s option is |
+  |                             | mandatory with this option. Multiple engineering |
+  |                             | files can be used. For example: -x file1.xls -x file2.xls |
+  +-----------------------------+-----------------------------------------------------------+
+  | -s, --exel_spec PATH        | Path to Excel spec, to be passed with |
+  |                             | generate_intermediary. The -x option is |
+  |                             | mandatory along with this option. |
+  +-----------------------------+-----------------------------------------------------------+
+  | -i, --intermediary PATH     | Path to intermediary file,to be passed |
+  |                             | with generate_manifests. The -g and -x options |
+  |                             | are not required with this option. |
+  +-----------------------------+-----------------------------------------------------------+
+  | -d, --site_config PATH      | Path to the site specific YAML file [required] |
+  +-----------------------------+-----------------------------------------------------------+
+  | -l, --loglevel INTEGER      | Loglevel NOTSET:0 ,DEBUG:10, INFO:20, |
+  |                             | WARNING:30, ERROR:40, CRITICAL:50 [default:20] |
+  +-----------------------------+-----------------------------------------------------------+
+  | -e, --end_point_config      | File containing end-point configurations like user-name |
+  |                             | password, certificates, URL, etc. |
+  +-----------------------------+-----------------------------------------------------------+
+  | --help                      | Show this message and exit. |
+  +-----------------------------+-----------------------------------------------------------+
-  2-1) Using Excel spec as input data source:
-       Generate Intermediary: spyglass -g -x <DesignSpec> -s <excel spec> -d <site-config>
-       Generate Manifest & Intermediary: spyglass -mg -x <DesignSpec> -s <excel spec> -d <site-config>
-       Generate Manifest with Intermediary: spyglass -m -i <intermediary>
-  2-1) Using external data source as input:
-       Generate Manifest and Intermediary : spyglass -m -g -e<end_point_config> -d <site-config>
-       Generate Manifest : spyglass -m -e<end_point_config> -d <site-config>
-  Note: The end_point_config shall include attributes of the external data source that are
+  2. Example:
+     1) Using Excel spec as input data source:
+        Generate Intermediary: ``spyglass -g -x <DesignSpec> -s <excel spec> -d <site-config>``
+        Generate Manifest & Intermediary: ``spyglass -mg -x <DesignSpec> -s <excel spec> -d <site-config>``
+        Generate Manifest with Intermediary: ``spyglass -m -i <intermediary>``
+     2) Using external data source as input:
+        Generate Manifest and Intermediary: ``spyglass -m -g -e<end_point_config> -d <site-config>``
+        Generate Manifest: ``spyglass -m -e<end_point_config> -d <site-config>``
+.. note::
+   The end_point_config shall include attributes of the external data source that are
 necessary for its access. Each external data source type shall have its own plugin to configure
 its corresponding credentials.
 * Program output:
   a) Site Manifests: As an initial release, the program shall output manifest files for
      "airship-seaworthy" site. For example: baremetal, deployment, networks, pki, etc.
-     Reference:https://github.com/openstack/airship-treasuremap/tree/master/site/airship-seaworthy
+     Reference: https://github.com/openstack/airship-treasuremap/tree/master/site/airship-seaworthy
   b) Intermediary YAML: Containing aggregated site information generated from data sources that is
      used to generate the above site manifests.
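As an illustrative aside (a generic sketch, not the Spyglass implementation and
not part of this patch), an option set like the one tabled above is commonly
declared with ``click``; the wiring below is hypothetical:

.. code:: python

  # Generic sketch of a CLI with options similar to those listed above.
  # Option names mirror the table; behaviour is stubbed out.
  import click

  @click.command()
  @click.option("-g", "--generate_intermediary", is_flag=True,
                help="Dump intermediary file from passed Excel and Excel spec.")
  @click.option("-m", "--generate_manifests", is_flag=True,
                help="Generate manifests from the generated intermediary file.")
  @click.option("-x", "--excel", "excel_files", multiple=True, type=click.Path(),
                help="Path to engineering Excel file(s).")
  @click.option("-s", "--exel_spec", type=click.Path(),
                help="Path to Excel spec.")
  @click.option("-i", "--intermediary", type=click.Path(),
                help="Path to intermediary file.")
  @click.option("-d", "--site_config", required=True, type=click.Path(),
                help="Path to the site specific YAML file.")
  @click.option("-l", "--loglevel", default=20, type=int, show_default=True,
                help="Loglevel NOTSET:0, DEBUG:10, INFO:20, WARNING:30, ERROR:40, CRITICAL:50.")
  @click.option("-e", "--end_point_config", type=click.Path(),
                help="File with end-point configurations (user-name, password, certificates, URL).")
  def main(generate_intermediary, generate_manifests, excel_files, exel_spec,
           intermediary, site_config, loglevel, end_point_config):
      """Generate intermediary and/or site manifests (sketch only)."""
      click.echo(f"generate_intermediary={generate_intermediary}, "
                 f"generate_manifests={generate_manifests}")

  if __name__ == "__main__":
      main()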
 Future Work
 ============
-1) Schema based manifest generation instead of Jinja2 templates. It shall
+1. Schema based manifest generation instead of Jinja2 templates. It shall
    be possible to cleanly transition to this schema based generation keeping a unique
    mapping between schema and generated manifests. Currently this is managed by
    considering a mapping of j2 templates with schemas and site type.
-2) UI editor for intermediary YAML
+2. UI editor for intermediary YAML
 Alternatives
 ============
-1) Schema based manifest generation instead of Jinja2 templates.
-2) Develop the data source plugins as an extension to Pegleg.
+1. Schema based manifest generation instead of Jinja2 templates.
+2. Develop the data source plugins as an extension to Pegleg.
 Dependencies
 ============
-1) Availability of a repository to store Jinja2 templates.
-2) Availability of a repository to store generated manifests.
+1. Availability of a repository to store Jinja2 templates.
+2. Availability of a repository to store generated manifests.
 References
 ==========
 None


@@ -60,6 +60,7 @@ A separate directory structure needs to be created for adding the playbooks.
 Each Divingbell config can be a separate role within the playbook structure.
 ::
   - playbooks/
     - roles/
       - systcl/
@@ -83,6 +84,7 @@ With Divingbell DaemonSet running on each host mounted at ``hostPath``,
 ``hosts`` should be defined as given below within the ``master.yml``.
 ::
   hosts: all
   connection: chroot


@@ -193,14 +193,10 @@ Work Items
 ----------
 - Update Hardware profile schema to support new attribute bios_setting
 - Update Hardware profile objects
 - Update Orchestrator action PrepareNodes to call OOB driver for BIOS
   configuration
 - Update Redfish OOB driver to support new action ConfigBIOS
 - Add unit test cases
 Assignee(s):
@@ -215,8 +211,8 @@ Other contributors:
 Dependencies
 ============
-This spec depends on ``Introduce Redfish based OOB Driver for Drydock``
-https://storyboard.openstack.org/#!/story/2003007
+This spec depends on `Introduce Redfish based OOB Driver for Drydock <https://storyboard.openstack.org/#!/story/2003007>`_
+story.
 References
 ==========


@@ -45,7 +45,7 @@ Impacted components
 The following Airship components would be impacted by this solution:
 #. Promenade - Maintenance of the chart for external facing Kubernetes API
    servers
 Proposed change
 ===============


@@ -5,8 +5,8 @@
 http://creativecommons.org/licenses/by/3.0/legalcode
 .. index::
-   single: template
-   single: creating specs
+   single: Pegleg
+   single: Security
 =======================================
 Pegleg Secret Generation and Encryption


@@ -150,9 +150,12 @@ details:
 #. Drain the Kubernetes node.
 #. Clear the Kubernetes labels on the node.
 #. Remove etcd nodes from their clusters (if impacted).
    - if the node being decommissioned contains etcd nodes, Promenade will
      attempt to gracefully have those nodes leave the etcd cluster.
 #. Ensure that etcd cluster(s) are in a stable state.
    - Polls for status every 30 seconds up to the etcd-ready-timeout, or the
      cluster meets the defined minimum functionality for the site.
    - A new document: promenade/EtcdClusters/v1 that will specify details about
@@ -160,7 +163,9 @@ details:
      credentials, and thresholds for minimum functionality.
    - This process should ignore the node being torn down from any calculation
      of health
 #. Shutdown the kubelet.
    - If this is not possible because the node is in a state of disarray such
      that it cannot schedule the daemonset to run, this step may fail, but
      should not hold up the process, as the Drydock dismantling of the node
@@ -173,11 +178,9 @@ All responses will be form of the Airship Status response.
 - Success: Code: 200, reason: Success
   Indicates that all steps are successful.
 - Failure: Code: 404, reason: NotFound
   Indicates that the target node is not discoverable by Promenade.
 - Failure: Code: 500, reason: DisassociateStepFailure
 The details section should detail the successes and failures further. Any
@@ -223,16 +226,13 @@ All responses will be form of the Airship Status response.
   Indicates that the drain node has successfully concluded, and that no pods
   are currently running
 - Failure: Status response, code: 400, reason: BadRequest
   A request was made with parameters that cannot work - e.g. grace-period is
   set to a value larger than the timeout value.
 - Failure: Status response, code: 404, reason: NotFound
   The specified node is not discoverable by Promenade
 - Failure: Status response, code: 500, reason: DrainNodeError
   There was a processing exception raised while trying to drain a node. The
@@ -263,11 +263,9 @@ All responses will be form of the Airship Status response.
 - Success: Code: 200, reason: Success
   All labels have been removed from the specified Kubernetes node.
 - Failure: Code: 404, reason: NotFound
   The specified node is not discoverable by Promenade
 - Failure: Code: 500, reason: ClearLabelsError
   There was a failure to clear labels that prevented completion. The details
@@ -298,11 +296,9 @@ All responses will be form of the Airship Status response.
 - Success: Code: 200, reason: Success
   All etcd nodes have been removed from the specified node.
 - Failure: Code: 404, reason: NotFound
   The specified node is not discoverable by Promenade
 - Failure: Code: 500, reason: RemoveEtcdError
   There was a failure to remove etcd from the target node that prevented
@@ -315,7 +311,7 @@ Promenade Check etcd
 ~~~~~~~~~~~~~~~~~~~~
 Retrieves the current interpreted state of etcd.
 GET /etcd-cluster-health-statuses?design_ref={the design ref}
 Where the design_ref parameter is required for appropriate operation, and is in
 the same format as used for the join-scripts API.
@@ -334,9 +330,9 @@ All responses will be form of the Airship Status response.
 The status of each etcd in the site will be returned in the details section.
 Valid values for status are: Healthy, Unhealthy
 https://github.com/openstack/airship-in-a-bottle/blob/master/doc/source/api-conventions.rst#status-responses
 .. code:: json
   { "...": "... standard status response ...",
     "details": {
@@ -365,11 +361,9 @@ https://github.com/openstack/airship-in-a-bottle/blob/master/doc/source/api-conventions.rst#status-responses
 - Failure: Code: 400, reason: MissingDesignRef
   Returned if the design_ref parameter is not specified
 - Failure: Code: 404, reason: NotFound
   Returned if the specified etcd could not be located
 - Failure: Code: 500, reason: EtcdNotAccessible
   Returned if the specified etcd responded with an invalid health response
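As an illustrative aside (a hypothetical client sketch, not part of the spec or
this patch), polling the health check described above could look as follows; the
base URL and design reference values are placeholders:

.. code:: python

  # Hypothetical client for the etcd health endpoint described above.
  import requests

  PROMENADE_URL = "http://promenade.example.local"  # placeholder, not defined by the spec
  design_ref = "<the design ref>"                   # same format as for the join-scripts API

  resp = requests.get(
      f"{PROMENADE_URL}/etcd-cluster-health-statuses",
      params={"design_ref": design_ref},
      timeout=30,
  )
  resp.raise_for_status()
  # Per-etcd health (Healthy/Unhealthy) is reported in the details section
  # of the standard Airship status response.
  print(resp.json().get("details"))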
@@ -400,11 +394,9 @@ All responses will be form of the Airship Status response.
 - Success: Code: 200, reason: Success
   The kubelet has been successfully shutdown
 - Failure: Code: 404, reason: NotFound
   The specified node is not discoverable by Promenade
 - Failure: Code: 500, reason: ShutdownKubeletError
   The specified node's kubelet fails to shutdown. The details section of the
@@ -433,17 +425,14 @@ All responses will be form of the Airship Status response.
 - Success: Code: 200, reason: Success
   The specified node has been removed from the Kubernetes cluster.
 - Failure: Code: 404, reason: NotFound
   The specified node is not discoverable by Promenade
 - Failure: Code: 409, reason: Conflict
   The specified node cannot be deleted due to checks that the node is
   drained/cordoned and has no labels (other than possibly
   `promenade-decomission: enabled`).
 - Failure: Code: 500, reason: DeleteNodeError
   The specified node cannot be removed from the cluster due to an error from


@@ -20,6 +20,12 @@ Instructions
   a short explanation.
 - New specs for review should be placed in the ``approved`` subfolder, where
   they will undergo review and approval in Gerrit_.
+- Test if the spec file renders correctly in a web-browser by running
+  ``make docs`` command and opening ``doc/build/html/index.html`` in a
+  web-browser. Ubuntu needs the following packages to be installed::
+
+    apt-get install -y make tox gcc python3-dev
+
 - Specs that have finished implementation should be moved to the
   ``implemented`` subfolder.
@@ -50,36 +56,36 @@ Use the following guidelines to determine the category to use for a document:
 1) For new functionality and features, the best choice for a category is to
    match a functional duty of Airship.
 site-definition
   Parts of the platform that support the definition of a site, including
   management of the yaml definitions, document authoring and translation, and
   the collation of source documents.
 genesis
   Used for the steps related to preparation and deployment of the genesis node
   of an Airship deployment.
 baremetal
   Those changes to Airflow that provide for the lifecycle of bare metal
   components of the system - provisioning, maintenance, and teardown. This
   includes booting, hardware and network configuration, operating system, and
   other host-level management
 k8s
   For functionality that is about interfacing with Kubernetes directly, other
   than the initial setup that is done during genesis.
 software
   Functionality that is related to the deployment or redeployment of workload
   onto the Kubernetes cluster.
 workflow
   Changes to existing workflows to provide new functionality and creation of
   new workflows that span multiple other areas (e.g. baremetal, k8s, software),
   or those changes that are new arrangements of existing functionality in one
   or more of those other areas.
 administration
   Security, logging, auditing, monitoring, and those things related to site
   administrative functions of the Airship platform.


@@ -12,7 +12,7 @@
 Blueprints are written using ReSTructured text.
-Add index directives to help others find your spec. E.g.::
+Add *index* directives to help others find your spec by keywords. E.g.::
 .. index::
    single: template
@@ -27,9 +27,9 @@ Introduction paragraph -- What is this blueprint about?
 Links
 =====
-Include pertinent links to where the work is being tracked (e.g. Storyboard),
-as well as any other foundational information that may lend clarity to this
-blueprint
+Include pertinent links to where the work is being tracked (e.g. Storyboard ID
+and Gerrit topics), as well as any other foundational information that may lend
+clarity to this blueprint
 Problem description
 ===================