Fix docs rendering, enforce instructions and template

This patch applies various documentation rendering fixes,
and enforces application of the instructions and the
template for the file names.

In addition, it adds a requirement to submit patches
related to the spec under specified Gerrit topics.

Change-Id: I36199cf78c30f2ee75c2d716b8919ceae2ab7c42
Roman Gorshunov 2019-03-13 16:12:45 +01:00
parent d317ef57ce
commit ba58dcab0b
10 changed files with 157 additions and 186 deletions

.gitignore

@ -5,3 +5,4 @@
/AUTHORS
/ChangeLog
.tox
.vscode/


@ -5,18 +5,9 @@
http://creativecommons.org/licenses/by/3.0/legalcode
.. index::
   single: Airship
   single: multi-linux-distros
   single: containers
===========================================
Airship Multiple Linux Distribution Support
@ -30,8 +21,9 @@ and other Linux distro's as new plugins.
Links
=====
The work to author and implement this spec is tracked in Storyboard
`2003699 <https://storyboard.openstack.org/#!/story/2003699>`_.
Use Gerrit topics ``airship_suse``, ``airship_rhel`` and similar.
Problem description
===================


@ -150,34 +150,40 @@ Overall Architecture
- Raw rack information from plugin:
::
vlan_network_data:
  oam:
    subnet: 12.0.0.64/26
    vlan: '1321'
- Rules to define gateway, ip ranges from subnet:
::
rule_ip_alloc_offset:
  name: ip_alloc_offset
  ip_alloc_offset:
    default: 10
    gateway: 1
The above rule specifies the IP offsets to be considered when defining the IP addresses for the
gateway, reserved and static IP ranges from the subnet pool.
So the IP range for 12.0.0.64/26 is: 12.0.0.65 ~ 12.0.0.126.
The rule "ip_alloc_offset" then helps to define additional information as follows (a minimal
Python sketch of this arithmetic is given at the end of this walkthrough):
- gateway: 12.0.0.65 (the first offset, as defined by the field 'gateway')
- reserved ip ranges: 12.0.0.65 ~ 12.0.0.76 (the range is defined by adding
"default" to the start of the IP range)
- static ip ranges: 12.0.0.77 ~ 12.0.0.126 (it follows the rule that we need
to skip the first 10 IP addresses, as defined by "default")
- Intermediary YAML file information generated after applying the above rules
to the raw rack information:
::
network:
vlan_network_data:
@ -192,13 +198,13 @@ Overall Architecture
static_end: 12.0.0.126 ----+
vlan: '1321'
--
- J2 templates for specifying oam network data: these represent the format in
which the site manifests will be generated, with values obtained from the
Intermediary YAML
::
---
schema: 'drydock/Network/v1'
@ -230,12 +236,12 @@ Overall Architecture
end: {{ data['network']['vlan_network_data']['oam']['static_end'] }}
...
--
- OAM Network information in site manifests after applying intermediary YAML to J2
templates:
::
---
schema: 'drydock/Network/v1'
@ -267,7 +273,7 @@ Overall Architecture
end: 12.0.0.126
...
--
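The following is a minimal, illustrative Python sketch of the walkthrough above: it applies the
``ip_alloc_offset`` rule to the example subnet and renders a shortened J2 fragment against the
resulting intermediary data. Function and variable names are assumptions for illustration only,
not the actual Spyglass implementation.

.. code:: python

   import ipaddress

   from jinja2 import Template

   # Illustrative only; offsets follow the example above (default: 10, gateway: 1).
   def apply_ip_alloc_offset(subnet_cidr, default=10, gateway=1):
       hosts = list(ipaddress.ip_network(subnet_cidr).hosts())  # 12.0.0.65 .. 12.0.0.126
       return {
           "subnet": subnet_cidr,
           "gateway": str(hosts[gateway - 1]),       # 12.0.0.65
           "reserved_start": str(hosts[0]),          # 12.0.0.65
           "reserved_end": str(hosts[default + 1]),  # 12.0.0.76
           "static_start": str(hosts[default + 2]),  # 12.0.0.77
           "static_end": str(hosts[-1]),             # 12.0.0.126
           "vlan": "1321",
       }

   intermediary = {
       "network": {"vlan_network_data": {"oam": apply_ip_alloc_offset("12.0.0.64/26")}}
   }

   # Render a shortened J2 fragment, analogous to the template shown above.
   fragment = Template(
       "start: {{ data['network']['vlan_network_data']['oam']['static_start'] }}\n"
       "end: {{ data['network']['vlan_network_data']['oam']['static_end'] }}"
   )
   print(fragment.render(data=intermediary))  # start: 12.0.0.77, end: 12.0.0.126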
Security impact
---------------
@ -304,106 +310,112 @@ plugins.
A. Excel Based Data Source.
- Gather the following input files:

1) Excel based site Engineering package. This file contains detailed specifications
covering IPMI, Public IPs, Private IPs, VLAN, Site Details, etc.
2) Excel Specification to aid parsing of the above Excel file. It contains
details about specific rows and columns in various sheets which contain the
necessary information to build site manifests.
3) Site specific configuration file containing additional configuration like
proxy, bgp information, interface names, etc.
4) Intermediary YAML file. In this case the Site Engineering Package and Excel
specification are not required.

B. Remote Data Source

- Gather the following input information:

1) End point configuration file containing credentials to enable its access.
Each end-point type shall have its access governed by its respective plugin
and associated configuration file.
2) Site specific configuration file containing additional configuration like
proxy, bgp information, interface names, etc. These will be used if the information
extracted from the remote site is insufficient.
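To make the plugin notion above concrete, the following is a schematic, hypothetical sketch of
how per-source plugins could be organized; the class and method names are illustrative
assumptions and do not reflect the actual Spyglass code.

.. code:: python

   from abc import ABC, abstractmethod

   class BaseDataSourcePlugin(ABC):
       """One plugin per data source type (Excel package, remote inventory, ...)."""

       def __init__(self, site_config, plugin_config=None):
           self.site_config = site_config      # proxy, bgp information, interface names, ...
           self.plugin_config = plugin_config  # e.g. Excel spec or end-point credentials

       @abstractmethod
       def extract_raw_data(self):
           """Return raw site data (racks, networks, hosts) for the intermediary YAML."""

   class ExcelPlugin(BaseDataSourcePlugin):
       def extract_raw_data(self):
           # Parse the engineering Excel file(s) with the help of the Excel spec.
           raise NotImplementedError

   class RemoteSourcePlugin(BaseDataSourcePlugin):
       def extract_raw_data(self):
           # Query the remote inventory using credentials from the end-point config.
           raise NotImplementedError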
* Program execution
1. CLI Options:
+-----------------------------+-----------------------------------------------------------+
| -g, --generate_intermediary | Dump intermediary file from passed Excel and |
| | Excel spec. |
+-----------------------------+-----------------------------------------------------------+
| -m, --generate_manifests   | Generate manifests from the generated                     |
| | intermediary file. |
+-----------------------------+-----------------------------------------------------------+
| -x, --excel PATH | Path to engineering Excel file, to be passed |
| | with generate_intermediary. The -s option is |
| | mandatory with this option. Multiple engineering |
| | files can be used. For example: -x file1.xls -x file2.xls |
+-----------------------------+-----------------------------------------------------------+
| -s, --exel_spec PATH | Path to Excel spec, to be passed with |
| | generate_intermediary. The -x option is |
| | mandatory along with this option. |
+-----------------------------+-----------------------------------------------------------+
| -i, --intermediary PATH    | Path to intermediary file, to be passed                   |
| | with generate_manifests. The -g and -x options |
| | are not required with this option. |
+-----------------------------+-----------------------------------------------------------+
| -d, --site_config PATH | Path to the site specific YAML file [required] |
+-----------------------------+-----------------------------------------------------------+
| -l, --loglevel INTEGER     | Loglevel NOTSET:0, DEBUG:10, INFO:20,                     |
| | WARNING:30, ERROR:40, CRITICAL:50 [default:20] |
+-----------------------------+-----------------------------------------------------------+
| -e, --end_point_config     | File containing end-point configurations like user-name, |
| | password, certificates, URL, etc. |
+-----------------------------+-----------------------------------------------------------+
| --help | Show this message and exit. |
+-----------------------------+-----------------------------------------------------------+
2. Example:

1) Using Excel spec as input data source:

Generate Intermediary: ``spyglass -g -x <DesignSpec> -s <excel spec> -d <site-config>``
Generate Manifest & Intermediary: ``spyglass -mg -x <DesignSpec> -s <excel spec> -d <site-config>``
Generate Manifest with Intermediary: ``spyglass -m -i <intermediary>``

2) Using external data source as input:

Generate Manifest and Intermediary: ``spyglass -m -g -e <end_point_config> -d <site-config>``
Generate Manifest: ``spyglass -m -e <end_point_config> -d <site-config>``

Note: The end_point_config shall include attributes of the external data source that are
necessary for its access. Each external data source type shall have its own plugin to configure
its corresponding credentials.
* Program output:
a) Site Manifests: As an initial release, the program shall output manifest files for
"airship-seaworthy" site. For example: baremetal, deployment, networks, pki, etc.
Reference: https://github.com/openstack/airship-treasuremap/tree/master/site/airship-seaworthy
b) Intermediary YAML: Containing aggregated site information generated from data sources that is
used to generate the above site manifests.
Future Work
============
1. Schema based manifest generation instead of Jinja2 templates. It shall
be possible to cleanly transition to this schema based generation keeping a unique
mapping between schema and generated manifests. Currently this is managed by
considering a mapping of j2 templates with schemas and site type.
2. UI editor for intermediary YAML
Alternatives
============
1. Schema based manifest generation instead of Jinja2 templates.
2. Develop the data source plugins as an extension to Pegleg.
Dependencies
============
1. Availability of a repository to store Jinja2 templates.
2. Availability of a repository to store generated manifests.
References
==========
None


@ -60,6 +60,7 @@ A separate directory structure needs to be created for adding the playbooks.
Each Divingbell config can be a separate role within the playbook structure.
::
- playbooks/
  - roles/
    - sysctl/
@ -83,6 +84,7 @@ With Divingbell DaemonSet running on each host mounted at ``hostPath``,
``hosts`` should be defined as given below within the ``master.yml``.
::
hosts: all
connection: chroot


@ -193,14 +193,10 @@ Work Items
----------
- Update Hardware profile schema to support new attribute bios_setting
- Update Hardware profile objects
- Update Orchestrator action PrepareNodes to call OOB driver for BIOS
configuration
- Update Redfish OOB driver to support new action ConfigBIOS
- Add unit test cases
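For illustration, the new ``ConfigBIOS`` action would ultimately translate into a Redfish call
along the lines of the sketch below, using the standard DMTF Redfish ``Bios/Settings`` resource.
The node address, credentials, and attribute names are placeholders, and the real driver would
work through Drydock's OOB driver framework rather than a raw HTTP client.

.. code:: python

   import requests

   # Hypothetical illustration of applying BIOS settings over Redfish.
   NODE = "https://10.23.0.5"   # placeholder BMC address
   AUTH = ("root", "password")  # placeholder credentials
   settings = {"Attributes": {"BootMode": "Uefi", "ProcVirtualization": "Enabled"}}

   resp = requests.patch(
       f"{NODE}/redfish/v1/Systems/1/Bios/Settings",
       json=settings,
       auth=AUTH,
       verify=False,  # placeholder; production code should verify certificates
   )
   resp.raise_for_status()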
Assignee(s):
@ -215,8 +211,8 @@ Other contributors:
Dependencies
============
This spec depends on `Introduce Redfish based OOB Driver for Drydock <https://storyboard.openstack.org/#!/story/2003007>`_
story.
References
==========


@ -45,7 +45,7 @@ Impacted components
The following Airship components would be impacted by this solution:
#. Promenade - Maintenance of the chart for external facing Kubernetes API
servers
Proposed change
===============


@ -5,8 +5,8 @@
http://creativecommons.org/licenses/by/3.0/legalcode
.. index::
   single: pegleg
   single: security
=======================================
Pegleg Secret Generation and Encryption


@ -150,36 +150,36 @@ details:
#. Drain the Kubernetes node.
#. Clear the Kubernetes labels on the node.
#. Remove etcd nodes from their clusters (if impacted).
- if the node being decommissioned contains etcd nodes, Promenade will
attempt to gracefully have those nodes leave the etcd cluster.
#. Ensure that etcd cluster(s) are in a stable state.
- Polls for status every 30 seconds up to the etcd-ready-timeout, or the
cluster meets the defined minimum functionality for the site.
- A new document: promenade/EtcdClusters/v1 that will specify details about
the etcd clusters deployed in the site, including: identifiers,
credentials, and thresholds for minimum functionality.
- This process should ignore the node being torn down from any calculation
of health
#. Shutdown the kubelet.
- If this is not possible because the node is in a state of disarray such
that it cannot schedule the daemonset to run, this step may fail, but
should not hold up the process, as the Drydock dismantling of the node
will shut the kubelet down.
Responses
~~~~~~~~~
All responses will be in the form of the Airship Status response.
- Success: Code: 200, reason: Success
Indicates that all steps are successful.
- Failure: Code: 404, reason: NotFound
Indicates that the target node is not discoverable by Promenade.
- Failure: Code: 500, reason: DisassociateStepFailure
The details section should detail the successes and failures further. Any
4xx series errors from the individual steps would manifest as a 500 here.
@ -220,21 +220,14 @@ Responses
All responses will be in the form of the Airship Status response.
- Success: Code: 200, reason: Success
Indicates that the node drain has successfully concluded, and that no pods
are currently running
- Failure: Status response, code: 400, reason: BadRequest
A request was made with parameters that cannot work - e.g. grace-period is
set to a value larger than the timeout value.
- Failure: Status response, code: 404, reason: NotFound
The specified node is not discoverable by Promenade
- Failure: Status response, code: 500, reason: DrainNodeError
There was a processing exception raised while trying to drain a node. The
details section should indicate the underlying cause if it can be
determined.
@ -261,15 +254,10 @@ Responses
All responses will be in the form of the Airship Status response.
- Success: Code: 200, reason: Success
All labels have been removed from the specified Kubernetes node.
- Failure: Code: 404, reason: NotFound
The specified node is not discoverable by Promenade
- Failure: Code: 500, reason: ClearLabelsError
There was a failure to clear labels that prevented completion. The details
section should provide more information about the cause of this failure.
@ -296,15 +284,10 @@ Responses
All responses will be in the form of the Airship Status response.
- Success: Code: 200, reason: Success
All etcd nodes have been removed from the specified node.
- Failure: Code: 404, reason: NotFound
The specified node is not discoverable by Promenade
- Failure: Code: 500, reason: RemoveEtcdError
There was a failure to remove etcd from the target node that prevented
completion within the specified timeout, or that etcd prevented removal of
the node because it would result in the cluster being broken. The details
@ -315,7 +298,7 @@ Promenade Check etcd
~~~~~~~~~~~~~~~~~~~~
Retrieves the current interpreted state of etcd.
GET /etcd-cluster-health-statuses?design_ref={the design ref}
Where the design_ref parameter is required for appropriate operation, and is in
the same format as used for the join-scripts API.
@ -330,48 +313,42 @@ Responses
All responses will be in the form of the Airship Status response.
- Success: Code: 200, reason: Success
The status of each etcd in the site will be returned in the details section.
Valid values for status are: Healthy, Unhealthy
https://github.com/openstack/airship-in-a-bottle/blob/master/doc/source/api-conventions.rst#status-responses
.. code:: json
   { "...": "... standard status response ...",
     "details": {
       "errorCount": {{n}},
       "messageList": [
         { "message": "Healthy",
           "error": false,
           "kind": "HealthMessage",
           "name": "{{the name of the etcd service}}"
         },
         { "message": "Unhealthy",
           "error": false,
           "kind": "HealthMessage",
           "name": "{{the name of the etcd service}}"
         },
         { "message": "Unable to access Etcd",
           "error": true,
           "kind": "HealthMessage",
           "name": "{{the name of the etcd service}}"
         }
       ]
     }
     ...
   }
- Failure: Code: 400, reason: MissingDesignRef
Returned if the design_ref parameter is not specified
- Failure: Code: 404, reason: NotFound
Returned if the specified etcd could not be located
- Failure: Code: 500, reason: EtcdNotAccessible
Returned if the specified etcd responded with an invalid health response
(Not just simply unhealthy - that's a 200).
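As an illustration, a client could consume this endpoint roughly as follows. The path and query
parameter come from the description above; the base URL and design reference value are
placeholder assumptions.

.. code:: python

   import requests

   PROMENADE = "http://promenade-api.ucp.svc.cluster.local"  # placeholder base URL
   DESIGN_REF = "deckhand+http://deckhand-int/revisions/1/rendered-documents"  # placeholder

   resp = requests.get(
       f"{PROMENADE}/etcd-cluster-health-statuses",
       params={"design_ref": DESIGN_REF},
   )
   resp.raise_for_status()

   # Walk the messageList from the status response shown above.
   for message in resp.json()["details"]["messageList"]:
       status = "ERROR" if message["error"] else "OK"
       print(f"{status}: {message['name']}: {message['message']}")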
@ -398,15 +375,10 @@ Responses
All responses will be in the form of the Airship Status response.
- Success: Code: 200, reason: Success
The kubelet has been successfully shut down
- Failure: Code: 404, reason: NotFound
The specified node is not discoverable by Promenade
- Failure: Code: 500, reason: ShutdownKubeletError
The specified node's kubelet fails to shut down. The details section of the
status response should contain reasonable information about the source of
this failure
@ -431,21 +403,14 @@ Responses
All responses will be in the form of the Airship Status response.
- Success: Code: 200, reason: Success
The specified node has been removed from the Kubernetes cluster.
- Failure: Code: 404, reason: NotFound
The specified node is not discoverable by Promenade
- Failure: Code: 409, reason: Conflict
The specified node cannot be deleted due to checks that the node is
drained/cordoned and has no labels (other than possibly
`promenade-decomission: enabled`).
- Failure: Code: 500, reason: DeleteNodeError
The specified node cannot be removed from the cluster due to an error from
Kubernetes. The details section of the status response should contain more
information about the failure.


@ -18,6 +18,9 @@ Instructions
- Attempt to detail each applicable section.
- If a section does not apply, use N/A, and optionally provide
a short explanation.
- Test that the spec file renders correctly in the browser by running the
``make docs`` command and browsing to the xxx directory. On Ubuntu, the make, tox,
gcc, and python3-dev packages need to be installed.
- New specs for review should be placed in the ``approved`` subfolder, where
they will undergo review and approval in Gerrit_.
- Specs that have finished implementation should be moved to the


@ -27,9 +27,9 @@ Introduction paragraph -- What is this blueprint about?
Links
=====
Include pertinent links to where the work is being tracked (e.g. Storyboard ID
and Gerrit topics), as well as any other foundational information that may lend
clarity to this blueprint
Problem description
===================