author    Roman Gorshunov <roman.gorshunov@att.com>  2019-03-13 16:12:45 +0100
committer Roman Gorshunov <roman.gorshunov@att.com>  2019-03-13 18:41:12 +0100
commit    c5064ef2eb47a4872a2b46380ca1640be9670b04 (patch)
tree      7cdc0eafd38de0f3184569b9d47ba0f7294c42f2
parent    d317ef57ce71f04cba1f89ec4e72604eabea0dbf (diff)
Fix docs rendering, enforce instructions and template

This patch applies various documentation rendering fixes, and enforces
application of the instructions and the template for the file names. In
addition to that, it adds a requirement to submit patches related to the
spec under specified Gerrit topics.

Change-Id: I36199cf78c30f2ee75c2d716b8919ceae2ab7c42
Notes (review):
    Code-Review+2: Evgeniy L <eli@mirantis.com>
    Code-Review+2: Drew Walters <drewwalters96@gmail.com>
    Workflow+1: Drew Walters <drewwalters96@gmail.com>
    Verified+2: Zuul
    Submitted-by: Zuul
    Submitted-at: Thu, 14 Mar 2019 20:02:20 +0000
    Reviewed-on: https://review.openstack.org/643074
    Project: openstack/airship-specs
    Branch: refs/heads/master
-rw-r--r--  .gitignore                                              1
-rw-r--r--  specs/approved/airship_multi_linux_distros.rst         20  (renamed from specs/approved/multi-linux-distros.rst)
-rw-r--r--  specs/approved/data_config_generator.rst              204
-rw-r--r--  specs/approved/divingbell_ansible_framework.rst         2
-rw-r--r--  specs/approved/drydock_support_bios_configuration.rst   8
-rw-r--r--  specs/approved/k8s_external_facing_api.rst              2
-rw-r--r--  specs/approved/pegleg_secrets.rst                       4  (renamed from specs/approved/pegleg-secrets.rst)
-rw-r--r--  specs/approved/workflow_node-teardown.rst              93
-rw-r--r--  specs/instructions.rst                                 70
-rw-r--r--  specs/template.rst                                      8
10 files changed, 206 insertions, 206 deletions
diff --git a/.gitignore b/.gitignore
index dbd8953..dd10b51 100644
--- a/.gitignore
+++ b/.gitignore
@@ -5,3 +5,4 @@
 /AUTHORS
 /ChangeLog
 .tox
+.vscode/
diff --git a/specs/approved/multi-linux-distros.rst b/specs/approved/airship_multi_linux_distros.rst
index 520a1bc..3a6aae5 100644
--- a/specs/approved/multi-linux-distros.rst
+++ b/specs/approved/airship_multi_linux_distros.rst
@@ -5,18 +5,9 @@
  http://creativecommons.org/licenses/by/3.0/legalcode
 
 .. index::
-   single: template
-   single: creating specs
-
-.. note::
-
-   Blueprints are written using ReSTructured text.
-
-Add index directives to help others find your spec. E.g.::
-
-   .. index::
-      single: template
-      single: creating specs
+   single: Airship
+   single: multi-linux-distros
+   single: containers
 
 ===========================================
 Airship Multiple Linux Distribution Support
@@ -30,8 +21,9 @@ and other Linux distro's as new plugins.
 Links
 =====
 
-The work to author and implement this spec is tracked in Storyboard:
-https://storyboard.openstack.org/#!/story/2003699
+The work to author and implement this spec is tracked in Storyboard
+`2003699 <https://storyboard.openstack.org/#!/story/2003699>`_ and uses Gerrit
+topics ``airship_suse``, ``airship_rhel`` and similar.
 
 Problem description
 ===================
diff --git a/specs/approved/data_config_generator.rst b/specs/approved/data_config_generator.rst
index 9d98f6e..e88f8d6 100644
--- a/specs/approved/data_config_generator.rst
+++ b/specs/approved/data_config_generator.rst
@@ -150,34 +150,40 @@ Overall Architecture
 
   - Raw rack information from plugin:
 
+    ::
+
       vlan_network_data:
         oam:
           subnet: 12.0.0.64/26
           vlan: '1321'
 
+
   - Rules to define gateway, ip ranges from subnet:
 
+    ::
+
       rule_ip_alloc_offset:
         name: ip_alloc_offset
         ip_alloc_offset:
           default: 10
           gateway: 1
 
-  The above rule specify the ip offset to considered to define ip address for gateway, reserved
-  and static ip ranges from the subnet pool.
-  So ip range for 12.0.0.64/26 is : 12.0.0.65 ~ 12.0.0.126
-  The rule "ip_alloc_offset" now helps to define additional information as follows:
 
-  - gateway: 12.0.0.65 (the first offset as defined by the field 'gateway')
-  - reserved ip ranges: 12.0.0.65 ~ 12.0.0.76 (the range is defined by adding
-    "default" to start ip range)
-  - static ip ranges: 12.0.0.77 ~ 12.0.0.126 (it follows the rule that we need
-    to skip first 10 ip addresses as defined by "default")
+    The above rule specifies the ip offsets to be considered when defining the
+    ip addresses for the gateway, reserved and static ip ranges from the subnet pool.
+    So the ip range for 12.0.0.64/26 is: 12.0.0.65 ~ 12.0.0.126
+    The rule "ip_alloc_offset" now helps to define additional information as follows:
+
+    - gateway: 12.0.0.65 (the first offset as defined by the field 'gateway')
+    - reserved ip ranges: 12.0.0.65 ~ 12.0.0.76 (the range is defined by adding
+      "default" to start ip range)
+    - static ip ranges: 12.0.0.77 ~ 12.0.0.126 (it follows the rule that we need
+      to skip first 10 ip addresses as defined by "default")
 
   - Intermediary YAML file information generated after applying the above rules
     to the raw rack information:
 
-::
+    ::
 
     network:
       vlan_network_data:
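The offset arithmetic in the hunk above can be sketched with the standard-library ``ipaddress`` module. The helper name and the exact indexing are assumptions inferred from the example values (12.0.0.64/26 giving gateway .65, reserved .65 ~ .76, static .77 ~ .126); the real Spyglass rule engine may compute them differently.

```python
import ipaddress

# Hypothetical helper illustrating the ip_alloc_offset rule described above.
# Offsets are an assumption: gateway is the first usable host, the reserved
# range spans default + gateway addresses, and static takes the remainder.
def apply_ip_alloc_offset(subnet, default=10, gateway=1):
    net = ipaddress.ip_network(subnet)
    hosts = list(net.hosts())                 # usable range: .65 .. .126 for this /26
    gw = hosts[gateway - 1]                   # first offset -> gateway
    reserved = (hosts[0], hosts[default + gateway])        # .65 .. .76
    static = (hosts[default + gateway + 1], hosts[-1])     # .77 .. .126
    return {
        "gateway": str(gw),
        "reserved": (str(reserved[0]), str(reserved[1])),
        "static": (str(static[0]), str(static[1])),
    }

print(apply_ip_alloc_offset("12.0.0.64/26"))
```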
@@ -192,13 +198,13 @@ Overall Architecture
       static_end: 12.0.0.126 ----+
         vlan: '1321'
 
---
+  --
 
   - J2 templates for specifying oam network data: It represents the format in
     which the site manifests will be generated with values obtained from
     Intermediary YAML
 
-::
+    ::
 
     ---
     schema: 'drydock/Network/v1'
@@ -230,12 +236,12 @@ Overall Architecture
           end: {{ data['network']['vlan_network_data']['oam']['static_end'] }}
   ...
 
---
+  --
 
   - OAM Network information in site manifests after applying intermediary YAML to J2
     templates.:
 
-::
+    ::
 
     ---
     schema: 'drydock/Network/v1'
@@ -267,7 +273,7 @@ Overall Architecture
       end: 12.0.0.126
   ...
 
---
+  --
 
 Security impact
 ---------------
@@ -304,106 +310,114 @@ plugins.
 
 A. Excel Based Data Source.
 
    - Gather the following input files:
-
-      1) Excel based site Engineering package. This file contains detail specification
-         covering IPMI, Public IPs, Private IPs, VLAN, Site Details, etc.
-
-      2) Excel Specification to aid parsing of the above Excel file. It contains
-         details about specific rows and columns in various sheet which contain the
-         necessary information to build site manifests.
-
-      3) Site specific configuration file containing additional configuration like
-         proxy, bgp information, interface names, etc.
 
-      4) Intermediary YAML file. In this cases Site Engineering Package and Excel
-         specification are not required.
+      1) Excel based site Engineering package. This file contains detail specification
+         covering IPMI, Public IPs, Private IPs, VLAN, Site Details, etc.
+      2) Excel Specification to aid parsing of the above Excel file. It contains
+         details about specific rows and columns in various sheet which contain the
+         necessary information to build site manifests.
+      3) Site specific configuration file containing additional configuration like
+         proxy, bgp information, interface names, etc.
+      4) Intermediary YAML file. In this cases Site Engineering Package and Excel
+         specification are not required.
 
 B. Remote Data Source
 
    - Gather the following input information:
 
      1) End point configuration file containing credentials to enable its access.
         Each end-point type shall have their access governed by their respective plugins
         and associated configuration file.
-
-     2) Site specific configuration file containing additional configuration like
-        proxy, bgp information, interface names, etc. These will be used if information
-        extracted from remote site is insufficient.
+     2) Site specific configuration file containing additional configuration like
+        proxy, bgp information, interface names, etc. These will be used if information
+        extracted from remote site is insufficient.
 
 * Program execution
-  1) CLI Options:
-
-     -g, --generate_intermediary Dump intermediary file from passed Excel and
-        Excel spec.
-     -m, --generate_manifests Generate manifests from the generated
-        intermediary file.
-     -x, --excel PATH Path to engineering Excel file, to be passed
-        with generate_intermediary. The -s option is
-        mandatory with this option. Multiple engineering
-        files can be used. For example: -x file1.xls -x file2.xls
-     -s, --exel_spec PATH Path to Excel spec, to be passed with
-        generate_intermediary. The -x option is
-        mandatory along with this option.
-     -i, --intermediary PATH Path to intermediary file,to be passed
-        with generate_manifests. The -g and -x options
-        are not required with this option.
-     -d, --site_config PATH Path to the site specific YAML file [required]
-     -l, --loglevel INTEGER Loglevel NOTSET:0 ,DEBUG:10, INFO:20,
-        WARNING:30, ERROR:40, CRITICAL:50 [default:20]
-     -e, --end_point_config File containing end-point configurations like user-name
-        password, certificates, URL, etc.
-     --help Show this message and exit.
-
-  2) Example:
-
-     2-1) Using Excel spec as input data source:
-
-        Generate Intermediary: spyglass -g -x <DesignSpec> -s <excel spec> -d <site-config>
-
-        Generate Manifest & Intermediary: spyglass -mg -x <DesignSpec> -s <excel spec> -d <site-config>
-
-        Generate Manifest with Intermediary: spyglass -m -i <intermediary>
 
-
-     2-1) Using external data source as input:
-
-        Generate Manifest and Intermediary : spyglass -m -g -e<end_point_config> -d <site-config>
-        Generate Manifest : spyglass -m -e<end_point_config> -d <site-config>
-
-     Note: The end_point_config shall include attributes of the external data source that are
-     necessary for its access. Each external data source type shall have its own plugin to configure
-     its corresponding credentials.
+  1. CLI Options:
+
+     +-----------------------------+-----------------------------------------------------------+
+     | -g, --generate_intermediary | Dump intermediary file from passed Excel and              |
+     |                             | Excel spec.                                               |
+     +-----------------------------+-----------------------------------------------------------+
+     | -m, --generate_manifests    | Generate manifests from the generated                     |
+     |                             | intermediary file.                                        |
+     +-----------------------------+-----------------------------------------------------------+
+     | -x, --excel PATH            | Path to engineering Excel file, to be passed              |
+     |                             | with generate_intermediary. The -s option is              |
+     |                             | mandatory with this option. Multiple engineering          |
+     |                             | files can be used. For example: -x file1.xls -x file2.xls |
+     +-----------------------------+-----------------------------------------------------------+
+     | -s, --exel_spec PATH        | Path to Excel spec, to be passed with                     |
+     |                             | generate_intermediary. The -x option is                   |
+     |                             | mandatory along with this option.                         |
+     +-----------------------------+-----------------------------------------------------------+
+     | -i, --intermediary PATH     | Path to intermediary file, to be passed                   |
+     |                             | with generate_manifests. The -g and -x options            |
+     |                             | are not required with this option.                        |
+     +-----------------------------+-----------------------------------------------------------+
+     | -d, --site_config PATH      | Path to the site specific YAML file [required]            |
+     +-----------------------------+-----------------------------------------------------------+
+     | -l, --loglevel INTEGER      | Loglevel NOTSET:0, DEBUG:10, INFO:20,                     |
+     |                             | WARNING:30, ERROR:40, CRITICAL:50 [default:20]            |
+     +-----------------------------+-----------------------------------------------------------+
+     | -e, --end_point_config      | File containing end-point configurations like user-name,  |
+     |                             | password, certificates, URL, etc.                         |
+     +-----------------------------+-----------------------------------------------------------+
+     | --help                      | Show this message and exit.                               |
+     +-----------------------------+-----------------------------------------------------------+
+
+  2. Example:
+
+     1) Using Excel spec as input data source:
+
+        Generate Intermediary: ``spyglass -g -x <DesignSpec> -s <excel spec> -d <site-config>``
+
+        Generate Manifest & Intermediary: ``spyglass -mg -x <DesignSpec> -s <excel spec> -d <site-config>``
+
+        Generate Manifest with Intermediary: ``spyglass -m -i <intermediary>``
+
+     2) Using external data source as input:
+
+        Generate Manifest and Intermediary: ``spyglass -m -g -e<end_point_config> -d <site-config>``
+
+        Generate Manifest: ``spyglass -m -e<end_point_config> -d <site-config>``
+
+     .. note::
+
+        The end_point_config shall include attributes of the external data source that are
+        necessary for its access. Each external data source type shall have its own plugin to configure
+        its corresponding credentials.
 
 * Program output:
+
   a) Site Manifests: As an initial release, the program shall output manifest files for
      "airship-seaworthy" site. For example: baremetal, deployment, networks, pki, etc.
-     Reference:https://github.com/openstack/airship-treasuremap/tree/master/site/airship-seaworthy
+     Reference: https://github.com/openstack/airship-treasuremap/tree/master/site/airship-seaworthy
   b) Intermediary YAML: Containing aggregated site information generated from data sources that is
      used to generate the above site manifests.
 
 Future Work
 ============
-1) Schema based manifest generation instead of Jinja2 templates. It shall
-be possible to cleanly transition to this schema based generation keeping a unique
-mapping between schema and generated manifests. Currently this is managed by
-considering a mapping of j2 templates with schemas and site type.
-
-2) UI editor for intermediary YAML
+1. Schema based manifest generation instead of Jinja2 templates. It shall
+   be possible to cleanly transition to this schema based generation keeping a unique
+   mapping between schema and generated manifests. Currently this is managed by
+   considering a mapping of j2 templates with schemas and site type.
+2. UI editor for intermediary YAML
 
 
 Alternatives
 ============
-1) Schema based manifest generation instead of Jinja2 templates.
-2) Develop the data source plugins as an extension to Pegleg.
+1. Schema based manifest generation instead of Jinja2 templates.
+2. Develop the data source plugins as an extension to Pegleg.
 
 Dependencies
 ============
-1) Availability of a repository to store Jinja2 templates.
-2) Availability of a repository to store generated manifests.
+1. Availability of a repository to store Jinja2 templates.
+2. Availability of a repository to store generated manifests.
 
 References
 ==========
 
 None
-
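The CLI surface documented in the hunk above can be sketched with ``argparse``. This is an illustration of the documented flags only, not the real Spyglass implementation (which may use a different option-parsing library and defaults):

```python
import argparse

# Minimal sketch of the spyglass CLI options described in the spec above.
# Flag names follow the spec text verbatim; behavior details are assumptions.
def build_parser():
    p = argparse.ArgumentParser(prog="spyglass")
    p.add_argument("-g", "--generate_intermediary", action="store_true",
                   help="Dump intermediary file from passed Excel and Excel spec.")
    p.add_argument("-m", "--generate_manifests", action="store_true",
                   help="Generate manifests from the generated intermediary file.")
    p.add_argument("-x", "--excel", action="append", metavar="PATH",
                   help="Engineering Excel file; may be given multiple times.")
    p.add_argument("-s", "--exel_spec", metavar="PATH",
                   help="Excel spec; the -x option is mandatory with it.")
    p.add_argument("-i", "--intermediary", metavar="PATH",
                   help="Intermediary file; -g and -x are not required with it.")
    p.add_argument("-d", "--site_config", metavar="PATH", required=True,
                   help="Site specific YAML file.")
    p.add_argument("-l", "--loglevel", type=int, default=20,
                   help="NOTSET:0, DEBUG:10, INFO:20, WARNING:30, ERROR:40, CRITICAL:50")
    p.add_argument("-e", "--end_point_config", metavar="PATH",
                   help="End-point configuration file (credentials, URL, etc.).")
    return p

args = build_parser().parse_args(
    ["-g", "-x", "file1.xls", "-x", "file2.xls", "-s", "spec.yaml", "-d", "site.yaml"])
print(args.generate_intermediary, args.excel)
```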
diff --git a/specs/approved/divingbell_ansible_framework.rst b/specs/approved/divingbell_ansible_framework.rst
index 6658382..6acc3c8 100644
--- a/specs/approved/divingbell_ansible_framework.rst
+++ b/specs/approved/divingbell_ansible_framework.rst
@@ -60,6 +60,7 @@ A separate directory structure needs to be created for adding the playbooks.
 Each Divingbell config can be a separate role within the playbook structure.
 
 ::
+
     - playbooks/
       - roles/
         - systcl/
@@ -83,6 +84,7 @@ With Divingbell DaemonSet running on each host mounted at ``hostPath``,
 ``hosts`` should be defined as given below within the ``master.yml``.
 
 ::
+
     hosts: all
     connection: chroot
 
diff --git a/specs/approved/drydock_support_bios_configuration.rst b/specs/approved/drydock_support_bios_configuration.rst
index 25fbf67..81c1f96 100644
--- a/specs/approved/drydock_support_bios_configuration.rst
+++ b/specs/approved/drydock_support_bios_configuration.rst
@@ -193,14 +193,10 @@ Work Items
 ----------
 
 - Update Hardware profile schema to support new attribute bios_setting
-
 - Update Hardware profile objects
-
 - Update Orchestrator action PrepareNodes to call OOB driver for BIOS
   configuration
-
 - Update Redfish OOB driver to support new action ConfigBIOS
-
 - Add unit test cases
 
 Assignee(s):
@@ -215,8 +211,8 @@ Other contributors:
 Dependencies
 ============
 
-This spec depends on ``Introduce Redfish based OOB Driver for Drydock``
-https://storyboard.openstack.org/#!/story/2003007
+This spec depends on `Introduce Redfish based OOB Driver for Drydock <https://storyboard.openstack.org/#!/story/2003007>`_
+story.
 
 References
 ==========
diff --git a/specs/approved/k8s_external_facing_api.rst b/specs/approved/k8s_external_facing_api.rst
index f41ba9c..64d8c1c 100644
--- a/specs/approved/k8s_external_facing_api.rst
+++ b/specs/approved/k8s_external_facing_api.rst
@@ -45,7 +45,7 @@ Impacted components
 The following Airship components would be impacted by this solution:
 
 #. Promenade - Maintenance of the chart for external facing Kubernetes API
-servers
+   servers
 
 Proposed change
 ===============
diff --git a/specs/approved/pegleg-secrets.rst b/specs/approved/pegleg_secrets.rst
index 9fbb3cc..6277461 100644
--- a/specs/approved/pegleg-secrets.rst
+++ b/specs/approved/pegleg_secrets.rst
@@ -5,8 +5,8 @@
  http://creativecommons.org/licenses/by/3.0/legalcode
 
 .. index::
-   single: template
-   single: creating specs
+   single: Pegleg
+   single: Security
 
 =======================================
 Pegleg Secret Generation and Encryption
diff --git a/specs/approved/workflow_node-teardown.rst b/specs/approved/workflow_node-teardown.rst
index db9d587..95ffbff 100644
--- a/specs/approved/workflow_node-teardown.rst
+++ b/specs/approved/workflow_node-teardown.rst
@@ -150,21 +150,26 @@ details:
 #. Drain the Kubernetes node.
 #. Clear the Kubernetes labels on the node.
 #. Remove etcd nodes from their clusters (if impacted).
+
    - if the node being decommissioned contains etcd nodes, Promenade will
      attempt to gracefully have those nodes leave the etcd cluster.
+
 #. Ensure that etcd cluster(s) are in a stable state.
+
    - Polls for status every 30 seconds up to the etcd-ready-timeout, or the
      cluster meets the defined minimum functionality for the site.
    - A new document: promenade/EtcdClusters/v1 that will specify details about
      the etcd clusters deployed in the site, including: identifiers,
      credentials, and thresholds for minimum functionality.
    - This process should ignore the node being torn down from any calculation
      of health
+
 #. Shutdown the kubelet.
+
    - If this is not possible because the node is in a state of disarray such
      that it cannot schedule the daemonset to run, this step may fail, but
      should not hold up the process, as the Drydock dismantling of the node
      will shut the kubelet down.
 
 Responses
 ~~~~~~~~~
@@ -173,11 +178,9 @@ All responses will be form of the Airship Status response.
 - Success: Code: 200, reason: Success
 
   Indicates that all steps are successful.
-
 - Failure: Code: 404, reason: NotFound
 
   Indicates that the target node is not discoverable by Promenade.
-
 - Failure: Code: 500, reason: DisassociateStepFailure
 
   The details section should detail the successes and failures further. Any
@@ -223,16 +226,13 @@ All responses will be form of the Airship Status response.
 
   Indicates that the drain node has successfully concluded, and that no pods
   are currently running
-
 - Failure: Status response, code: 400, reason: BadRequest
 
   A request was made with parameters that cannot work - e.g. grace-period is
   set to a value larger than the timeout value.
-
 - Failure: Status response, code: 404, reason: NotFound
 
   The specified node is not discoverable by Promenade
-
 - Failure: Status response, code: 500, reason: DrainNodeError
 
   There was a processing exception raised while trying to drain a node. The
@@ -263,11 +263,9 @@ All responses will be form of the Airship Status response.
 - Success: Code: 200, reason: Success
 
   All labels have been removed from the specified Kubernetes node.
-
 - Failure: Code: 404, reason: NotFound
 
   The specified node is not discoverable by Promenade
-
 - Failure: Code: 500, reason: ClearLabelsError
 
   There was a failure to clear labels that prevented completion. The details
@@ -298,11 +296,9 @@ All responses will be form of the Airship Status response.
 - Success: Code: 200, reason: Success
 
   All etcd nodes have been removed from the specified node.
-
 - Failure: Code: 404, reason: NotFound
 
   The specified node is not discoverable by Promenade
-
 - Failure: Code: 500, reason: RemoveEtcdError
 
   There was a failure to remove etcd from the target node that prevented
@@ -315,7 +311,7 @@ Promenade Check etcd
 ~~~~~~~~~~~~~~~~~~~~
 Retrieves the current interpreted state of etcd.
 
-GET /etcd-cluster-health-statuses?design_ref={the design ref}
+    GET /etcd-cluster-health-statuses?design_ref={the design ref}
 
 Where the design_ref parameter is required for appropriate operation, and is in
 the same format as used for the join-scripts API.
@@ -334,42 +330,40 @@ All responses will be form of the Airship Status response.
 The status of each etcd in the site will be returned in the details section.
 Valid values for status are: Healthy, Unhealthy
 
-https://github.com/openstack/airship-in-a-bottle/blob/master/doc/source/api-conventions.rst#status-responses
+  https://github.com/openstack/airship-in-a-bottle/blob/master/doc/source/api-conventions.rst#status-responses
 
-.. code:: json
+  .. code:: json
 
     { "...": "... standard status response ...",
       "details": {
         "errorCount": {{n}},
         "messageList": [
           { "message": "Healthy",
             "error": false,
             "kind": "HealthMessage",
             "name": "{{the name of the etcd service}}"
           },
           { "message": "Unhealthy",
             "error": false,
             "kind": "HealthMessage",
             "name": "{{the name of the etcd service}}"
           },
           { "message": "Unable to access Etcd",
             "error": true,
             "kind": "HealthMessage",
             "name": "{{the name of the etcd service}}"
           }
         ]
       }
       ...
     }
 
 - Failure: Code: 400, reason: MissingDesignRef
 
   Returned if the design_ref parameter is not specified
-
 - Failure: Code: 404, reason: NotFound
 
   Returned if the specified etcd could not be located
-
 - Failure: Code: 500, reason: EtcdNotAccessible
 
   Returned if the specified etcd responded with an invalid health response
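A consumer of the health response shown in the hunk above might reduce the ``details`` section to the set of members needing attention. The ``summarize`` helper below is hypothetical (not part of Promenade), and the member names are invented for illustration:

```python
# Hypothetical consumer of the etcd-cluster-health-statuses Status response.
# A member counts as unhealthy if its message is not "Healthy" or error is set.
def summarize(details):
    unhealthy = [m["name"] for m in details["messageList"]
                 if m["message"] != "Healthy" or m["error"]]
    return {"errorCount": details["errorCount"], "unhealthy": unhealthy}

# Example payload shaped like the spec's sample (names are made up).
details = {
    "errorCount": 1,
    "messageList": [
        {"message": "Healthy", "error": False,
         "kind": "HealthMessage", "name": "etcd-a"},
        {"message": "Unhealthy", "error": False,
         "kind": "HealthMessage", "name": "etcd-b"},
        {"message": "Unable to access Etcd", "error": True,
         "kind": "HealthMessage", "name": "etcd-c"},
    ],
}
print(summarize(details))
```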
@@ -400,11 +394,9 @@ All responses will be form of the Airship Status response.
 - Success: Code: 200, reason: Success
 
   The kubelet has been successfully shutdown
-
 - Failure: Code: 404, reason: NotFound
 
   The specified node is not discoverable by Promenade
-
 - Failure: Code: 500, reason: ShutdownKubeletError
 
   The specified node's kubelet fails to shutdown. The details section of the
@@ -433,17 +425,14 @@ All responses will be form of the Airship Status response.
 - Success: Code: 200, reason: Success
 
   The specified node has been removed from the Kubernetes cluster.
-
 - Failure: Code: 404, reason: NotFound
 
   The specified node is not discoverable by Promenade
-
 - Failure: Code: 409, reason: Conflict
 
   The specified node cannot be deleted due to checks that the node is
   drained/cordoned and has no labels (other than possibly
   `promenade-decomission: enabled`).
-
 - Failure: Code: 500, reason: DeleteNodeError
 
   The specified node cannot be removed from the cluster due to an error from
diff --git a/specs/instructions.rst b/specs/instructions.rst
index 5fa31a8..a76051b 100644
--- a/specs/instructions.rst
+++ b/specs/instructions.rst
@@ -20,6 +20,12 @@ Instructions
   a short explanation.
 - New specs for review should be placed in the ``approved`` subfolder, where
   they will undergo review and approval in Gerrit_.
+- Test if the spec file renders correctly in a web-browser by running
+  ``make docs`` command and opening ``doc/build/html/index.html`` in a
+  web-browser. Ubuntu needs the following packages to be installed::
+
+      apt-get install -y make tox gcc python3-dev
+
 - Specs that have finished implementation should be moved to the
   ``implemented`` subfolder.
 
@@ -50,38 +56,38 @@ Use the following guidelines to determine the category to use for a document:
 1) For new functionality and features, the best choice for a category is to
    match a functional duty of Airship.
 
-site-definition
-  Parts of the platform that support the definition of a site, including
-  management of the yaml definitions, document authoring and translation, and
-  the collation of source documents.
+   site-definition
+     Parts of the platform that support the definition of a site, including
+     management of the yaml definitions, document authoring and translation, and
+     the collation of source documents.
 
-genesis
-  Used for the steps related to preparation and deployment of the genesis node
-  of an Airship deployment.
+   genesis
+     Used for the steps related to preparation and deployment of the genesis node
+     of an Airship deployment.
 
-baremetal
-  Those changes to Airflow that provide for the lifecycle of bare metal
-  components of the system - provisioning, maintenance, and teardown. This
-  includes booting, hardware and network configuration, operating system, and
-  other host-level management
+   baremetal
+     Those changes to Airflow that provide for the lifecycle of bare metal
+     components of the system - provisioning, maintenance, and teardown. This
+     includes booting, hardware and network configuration, operating system, and
+     other host-level management
 
-k8s
-  For functionality that is about interfacing with Kubernetes directly, other
-  than the initial setup that is done during genesis.
+   k8s
+     For functionality that is about interfacing with Kubernetes directly, other
+     than the initial setup that is done during genesis.
 
-software
-  Functionality that is related to the deployment or redeployment of workload
-  onto the Kubernetes cluster.
+   software
+     Functionality that is related to the deployment or redeployment of workload
+     onto the Kubernetes cluster.
 
-workflow
-  Changes to existing workflows to provide new functionality and creation of
-  new workflows that span multiple other areas (e.g. baremetal, k8s, software),
-  or those changes that are new arrangements of existing functionality in one
-  or more of those other areas.
+   workflow
+     Changes to existing workflows to provide new functionality and creation of
+     new workflows that span multiple other areas (e.g. baremetal, k8s, software),
+     or those changes that are new arrangements of existing functionality in one
+     or more of those other areas.
 
-administration
-  Security, logging, auditing, monitoring, and those things related to site
-  administrative functions of the Airship platform.
+   administration
+     Security, logging, auditing, monitoring, and those things related to site
+     administrative functions of the Airship platform.
 
 2) For specs that are not feature focused, the component of the system may
    be the best choice for a category, e.g. ``shipyard``, ``armada`` etc...
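The category vocabulary above lends itself to a quick pre-review filename check. A sketch under two assumptions not stated in this hunk: that a spec filename leads with its category (or a component name, per guideline 2), and that hyphens in a category become underscores in filenames, inferred from the renames in this commit (e.g. pegleg-secrets.rst to pegleg_secrets.rst):

```python
# Functional categories listed in the instructions above; component names
# (shipyard, armada, ...) are also permitted per guideline 2, so a False
# result is a prompt to double-check, not a hard failure.
CATEGORIES = {
    "site-definition", "genesis", "baremetal", "k8s",
    "software", "workflow", "administration",
}

def starts_with_category(filename: str) -> bool:
    """Hypothetical helper: does a spec filename lead with a known category?"""
    # Assumption: hyphenated category names use underscores in filenames.
    prefixes = {c.replace("-", "_") for c in CATEGORIES}
    return any(filename.startswith(p + "_") for p in prefixes)

print(starts_with_category("workflow_node-teardown.rst"))  # True
```

Run against the files touched by this patch, `k8s_external_facing_api.rst` and `workflow_node-teardown.rst` pass, while a component-named spec such as `pegleg_secrets.rst` would need the guideline-2 exception.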
diff --git a/specs/template.rst b/specs/template.rst
index 01577ee..6309548 100644
--- a/specs/template.rst
+++ b/specs/template.rst
@@ -12,7 +12,7 @@
 
 Blueprints are written using ReSTructured text.
 
-Add index directives to help others find your spec. E.g.::
+Add *index* directives to help others find your spec by keywords. E.g.::
 
   .. index::
      single: template
@@ -27,9 +27,9 @@ Introduction paragraph -- What is this blueprint about?
 Links
 =====
 
-Include pertinent links to where the work is being tracked (e.g. Storyboard),
-as well as any other foundational information that may lend clarity to this
-blueprint
+Include pertinent links to where the work is being tracked (e.g. Storyboard ID
+and Gerrit topics), as well as any other foundational information that may lend
+clarity to this blueprint
 
 Problem description
 ===================