Update docs for developer overview

- New diagrams and documents for developer overview
- Update conf.py for docs to work w/ readthedocs.io
- Add policy and config gen to `make docs`
- Update zuul-linter to support checked-in images
- Last fix to document publishing

Change-Id: I4faa1b87032ae5b0e786aa0fd998f809124b7987
Scott Hussey 2018-05-30 14:31:53 -05:00
parent c85e6ad668
commit 20873ad4f9
22 changed files with 329 additions and 67 deletions

View File

@ -122,13 +122,13 @@
XapfJuj7tAkFF+jeaWamB5CMiC+4M3zsrReB2/kqbxGFXC0nQ9q9AbVg48zCZFxNTVMLj
J5K79voMoMmFoP14trhneFDs1Ki8FOLU1fqU7KrBYrlixI4FJwJ6ljEM9C/OvU=
token: !encrypted/pkcs1-oaep
- StrExdiOzIVLYO5xvvqXkIWXCWhdXREnl2VcgmPdfQnKgZ5KgDbtOnATMmowkijOtk3ov
zH3O5arOdjawoqwWQ/9mHDfTLQZfphU6S3J2PrUoMOkWs0UoXVyS577ECSxbA9m4t6x+G
Vfe2Uq8u12CUAomXGUlDBPgpTcKCFFLuQX9qF7FwEQ6I218QlzncVlwo2mGQ0PEGRzVWK
l/KrZxrH8HzNPsB2+SqvYakj/7Ps3q3gedmk8Wq3q6xcdaXvKxmk4zcdvi03bzcmptDhE
meRw/FZCHWEfclgWTDM+MbjqFXdhUTXO/JcjsofwrL3hOEV/3xm8+5XJhdfjUeFUbVw2t
E/ZrJYscRzrTtSA4FAbS9XKO2YwixwvCsm7jSHkfQVf3P9bhczveOWllmTmWEzSvPxN+P
4oXPNnbxVRLgkISuh5kzBIcbXv8aWpcr9/EHTZCPLXqmY2x1sE1RbPOsNxDZqvllgM87k
gHzCDPd39o2KnrNpS++f2hB7xv8DonnyAqFLsH3os/BbkIIdkRwF2p5I/1az47aheET8m
QuZdGYtIzn0OeHfrKmUoF21D6G+zilwo9Gcy1qZQhWzvnIWgswCgzmnImVVT1DaEW97cW
lgKmZ4uApQ6/TitoiFb4l3R/6iQqqHA+yLl3LvFkmGbGwt4fB9su0+r5BpASQ8=
- i6Js243rxTsL0V1l5UWsJalCiRh3kYs54nBz0M9KKrE5YYdAYkD59jKSPncUeG7V+VTkr
LuwGpI837r/oaYqD7g4ZZhsE/X+xSE1PSdtsSY3t5GZZAPdKG4oSLxl0buTd23JsS6cU4
7IAh4Q28wtaIXg8fZ69KVkGm2f2nXPNKbUH/yPTjFW51yEXI55AClNKzv+mVKLd1PNdCN
USQkmF4fvgFreQym+NkZrUh78YMQI1uNT1e7rhD/jxYCjhZGAEr0Clxiu8UmLIRvxHgc2
2SM99xT8s0/dRudePkSz3zXSagwWvdat8bHqpGHJrakjZvePtGeZrdk20v7JQHt8T3XBp
InfWRB8ad/gDvgpstXiag4EHsJ7tnFuwsFDh+KSYySBjtkbYqY8Rx8lQ5qW/Qgk96LagJ
yzpin6EquBcnnPNTGTYLRF9jtowzbI8G9ItRRWdvkIQSlMQDxROI4bVEnfLHgRMbAKVjF
1oSaiEzMwMHj356qYBS06pBBF3Dr/OCIZNiBy3UU8J6OJt2XchMgy9TVhsGkj+HE092d+
mADSwkA5TpfWJCo8rqTDO8cCXIeiG8kBoxjph5m7YNWUcbuRDQdbga1FjV4lMe9bMyOo5
AJ6O8hl3q7CJElLw6Z7p9vW2wHUf/xr242pZnk70DiMkyXxzJFLLqvRsWctTDc=

View File

@ -15,10 +15,10 @@
BUILD_DIR := $(shell mktemp -d)
DOCKER_REGISTRY ?= quay.io
IMAGE_NAME ?= drydock
IMAGE_PREFIX ?= attcomdev
IMAGE_TAG ?= latest
IMAGE_PREFIX ?= airshipit
IMAGE_TAG ?= dev
HELM := $(BUILD_DIR)/helm
PROXY ?= http://one.proxy.att.com:8080
PROXY ?= http://proxy.foo.com:8000
USE_PROXY ?= false
PUSH_IMAGE ?= false
LABEL ?= commit-id
@ -108,9 +108,21 @@ security: external_dep
tox -e bandit
.PHONY: drydock_docs
drydock_docs: external_dep
drydock_docs: external_dep render_diagrams genpolicy genconfig
tox -e docs
.PHONY: render_diagrams
render_diagrams:
plantuml -v -tpng -o ../source/images docs/diagrams/*.uml
.PHONY: genpolicy
genpolicy:
tox -e genpolicy
.PHONY: genconfig
genconfig:
tox -e genconfig
.PHONY: clean
clean:
rm -rf $(BUILD_DIR)/*

View File

@ -0,0 +1,23 @@
' PlantUML file to generate the architecture component diagram
@startuml
frame "Drydock" {
[Control] ..> [Statemgr]
[Orchestrator] ..> [Statemgr]
[Orchestrator] ..> [Ingester]
[Orchestrator] ..> [Driver]
}
database "Postgres" {
SQL - [drydock_db]
}
HTTP - [uWSGI]
[uWSGI] --> [Keystone Middleware]
[Keystone Middleware] --> WSGI
WSGI - [Control]
[Statemgr] --> [SQL]
[Driver] --> [MAAS]
@enduml

View File

@ -0,0 +1,43 @@
' PlantUML file describing the basic task execution
' sequence diagram
@startuml
actor User
== Task Creation ==
User -> TasksAPI : POST task body
TasksAPI -> oslo_policy : Enforce RBAC
oslo_policy -> TasksAPI : Approved
TasksAPI -> Statemgr : Insert Task as Queued
Statemgr -> TasksAPI : Result
TasksAPI -> User : Serialized Task
== Task Execution ==
Orchestrator -> Statemgr : Poll for Queued Tasks
Statemgr -> Orchestrator : Task ID of Queued Task
Orchestrator -> Statemgr : Update Task to Running
Statemgr -> Orchestrator : Result
Orchestrator -> Orchestrator : Execute task
Orchestrator -> Ingester : Ingest Site Design
Ingester -> Statemgr : Resolve Design Reference
Statemgr -> Ingester : Raw Site Design
Ingester -> Orchestrator : Parsed Site Design
Orchestrator -> Orchestrator : Create Subtask
Orchestrator ->> Driver : Execute Subtask
par Threaded Task Execution
Driver -> Driver : Execute 1 or More Actions
Driver -> Statemgr : Update Subtask to Complete
Statemgr -> Driver : Result
loop Until Subtask Complete or Timeout
Orchestrator -> Statemgr : Poll Subtask Status
Statemgr -> Orchestrator : Subtask Status
end
end
Orchestrator -> Statemgr : Update Task to Complete
== Task Query ==
User -> TasksAPI : GET task
@enduml

View File

@ -0,0 +1,4 @@
sphinx>=1.6.2
sphinx_rtd_theme==0.2.4
oslo.versionedobjects
falcon

View File

@ -34,8 +34,6 @@ extensions = [
'sphinx.ext.autodoc',
'sphinx.ext.todo',
'sphinx.ext.viewcode',
'oslo_config.sphinxconfiggen',
'oslo_policy.sphinxpolicygen'
]
# oslo_config.sphinxconfiggen options

docs/source/development.rst (new file, 148 lines)
View File

@ -0,0 +1,148 @@
..
Copyright 2018 AT&T Intellectual Property.
All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
=============================
Developer Overview of Drydock
=============================
The core objective of Drydock is to fully deploy physical servers based on
a declarative YAML topology. The actual provisioning work is completed by
a downstream third-party tool managed through a pluggable driver. The initial use case
is Canonical MAAS.
Architecture
============
.. image:: images/architecture.png
:alt: High level architecture of Drydock
At a high level, Drydock is a simple workflow engine fronted by a RESTful
API that maintains state in a Postgres relational database. Clients create a task
via the API that defines two main attributes: an action and a reference to a site design
or topology. The Drydock orchestrator executes the task asynchronously while
the client polls the API for task status. Once execution is complete, the task status
is updated with the results and the orchestrator moves on to the next queued task.
.. image:: images/basic_task_sequence.png
:alt: Sequence diagram of basic task execution.
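A rough illustration of this flow from the client side is below. The endpoint path comes
from the sample policy file in this change, but the request and response field names
(``action``, ``design_ref``, ``task_id``, ``status``) are assumptions for illustration,
not the documented schema.

.. code:: python

    # Illustrative client-side sketch only; field names are assumed, not authoritative.
    import time

    import requests

    DRYDOCK_URL = 'http://drydock-api.ucp.svc.cluster.local:9000'
    HEADERS = {'X-Auth-Token': '<keystone token>'}

    # Create a task asking the orchestrator to verify the site design.
    resp = requests.post(
        DRYDOCK_URL + '/api/v1.0/tasks',
        headers=HEADERS,
        json={'action': 'verify_site',
              'design_ref': 'http://design-server/site_topology.yaml'})
    task = resp.json()

    # Poll the API until the orchestrator reports a terminal status.
    while task.get('status') not in ('complete', 'failure'):
        time.sleep(10)
        task = requests.get(
            DRYDOCK_URL + '/api/v1.0/tasks/' + task['task_id'],
            headers=HEADERS).json()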
Components
==========
Control
-------
The ``control`` module is simply the RESTful API. It is based on the
`Falcon Framework <https://falconframework.org/>`_ and utilizes oslo_policy
for RBAC enforcement of the API endpoints. The normal deployment of Drydock
uses `uWSGI <http://uwsgi-docs.readthedocs.io/en/latest/>`_ and PasteDeploy
to build a pipeline that includes Keystone Middleware for authentication
and role decoration of the request.
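The request pipeline can be pictured with a minimal sketch: a Falcon resource whose handler
is gated by an oslo_policy rule, assuming keystonemiddleware has already decorated the request
with identity headers. This is not Drydock's actual control code; only the endpoint and rule
name are taken from the sample policy file in this change.

.. code:: python

    # Minimal Falcon + oslo_policy sketch, not Drydock's real control module.
    import falcon
    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)


    class ValidateDesignResource(object):
        def on_post(self, req, resp):
            # keystonemiddleware populates X-Roles / X-Project-Id on the request.
            creds = {'roles': (req.get_header('X-Roles') or '').split(','),
                     'project_id': req.get_header('X-Project-Id')}
            # Rule name from the sample policy file.
            if not enforcer.enforce('physical_provisioner:validate_site_design',
                                    {}, creds):
                raise falcon.HTTPForbidden()
            resp.status = falcon.HTTP_200


    # Served by uWSGI behind the PasteDeploy pipeline described above.
    application = falcon.API()
    application.add_route('/api/v1.0/validatedesign', ValidateDesignResource())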
Statemgr
--------
The ``statemgr`` module is the interface into all backing stores for Drydock.
This is mainly a `Postgres <https://www.postgresql.org/>`_ database, but Drydock
also uses the state manager to access external URLs when ingesting site designs.
Interactions with Postgres use the core libraries of
`SQLAlchemy <https://docs.sqlalchemy.org/en/latest/core/tutorial.html>`_ (not the ORM).
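For contrast with an ORM, a small SQLAlchemy Core sketch in the same style is shown below;
the connection string, table, and columns here are invented for illustration and do not
reflect Drydock's real schema.

.. code:: python

    # SQLAlchemy Core (no ORM) sketch; the table definition is hypothetical.
    from sqlalchemy import Column, MetaData, String, Table, create_engine, select

    engine = create_engine('postgresql+psycopg2://drydock:password@localhost/drydock')
    metadata = MetaData()

    tasks = Table('tasks', metadata,
                  Column('task_id', String, primary_key=True),
                  Column('status', String))

    with engine.connect() as conn:
        # Core works with explicit SQL expression constructs, not mapped classes.
        queued = conn.execute(
            select([tasks]).where(tasks.c.status == 'queued')).fetchall()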
Ingester
--------
The ``ingester`` module is a pluggable translator between external site definitions
(currently YAML formats) and the internal object model. Most of the internal object
model utilizes oslo_versionedobjects, much to my regret.
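In rough terms, a plugin of this kind parses a serialized document and hands back model
objects; the class and method names below are hypothetical and do not match Drydock's real
plugin interface.

.. code:: python

    # Hypothetical ingester plugin shape; names do not match the real interface.
    import yaml


    class ExampleYamlPlugin(object):
        name = 'yaml'

        def ingest_data(self, raw_document):
            """Translate raw YAML documents into plain 'design part' dicts."""
            parts = []
            for doc in yaml.safe_load_all(raw_document):
                # A real plugin would construct oslo.versionedobjects instances.
                parts.append({'kind': doc.get('kind'), 'spec': doc.get('spec', {})})
            return parts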
Orchestrator
------------
The ``orchestrator`` module is the brain of task execution. It requests queued tasks
from the state manager and, when one is available, executes it. The orchestrator is
single-threaded in that only a single user-created task is executed at once. However, that
task can spawn many subtasks that may be executed concurrently depending on their synchronization
requirements. For some actions, the orchestrator creates subtasks that are handed off to the
driver for execution. A common question about this module is why Drydock doesn't use Celery
as a task management engine. The simple answer is that it wasn't considered due to unfamiliarity
at the time.
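The main loop described above reduces to something like the following; the state manager
and task methods here are placeholders for illustration rather than the real interfaces.

.. code:: python

    # Simplified orchestrator loop sketch; method names are placeholders.
    import time


    def run_orchestrator(statemgr, drivers):
        while True:
            task = statemgr.get_next_queued_task()   # poll for queued tasks
            if task is None:
                time.sleep(2)
                continue
            statemgr.set_task_status(task, 'running')
            # A user task may fan out into subtasks that are handed to a driver
            # and then polled until they complete or time out.
            for subtask in task.spawn_subtasks():
                drivers[subtask.driver].execute_task(subtask)
            statemgr.set_task_status(task, 'complete')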
Driver
------
The ``driver`` module is a framework that supports pluggable drivers to execute task actions. The
subtle difference between the ``driver`` and ``orchestrator`` modules is that the orchestrator manages
a wide scope of task execution that may cross the boundaries of a single driver plugin, while each driver
plugin is focused on using a single downstream tool to accomplish its actions.
Developer Workflow / Test Cycle
===============================
Because Airship is a container-centric platform, the developer workflow heavily utilizes containers
for testing and publishing. It also requires Drydock to produce multiple artifacts that are related,
but separate: the Python package, the Docker image and the Helm chart. The code is published via the
Docker image artifact.
Drydock strives to conform to the `Airship coding conventions <http://airshipit.readthedocs.io/en/latest/conventions.html>`_.
Python
------
The Drydock Python codebase is under ``/drydock_provisioner`` and the tests are under ``/tests``. The
developer tools expect to run on Ubuntu 16.04 and you'll need GNU ``make`` available. With that in place, you
can use the following make targets to test code changes:
* ``make pep8`` - Lint the Python code against the PEP8 coding standard
* ``make unit_tests`` - Run the local unit tests
* ``make security`` - Scan the code with `Bandit <https://docs.openstack.org/bandit/latest/>`_
* ``make coverage_test`` - Run unit tests and Postgres integration tests
Docker
------
The Drydock Dockerfile is located in ``/images/drydock`` along with any artifacts built specifically
to enable the container image. Again, make targets are used for generating and testing the artifacts.
* ``make images`` - Build the Drydock Docker image. See :ref:`make-options` below.
* ``make run_images`` - Build the image and then run a rudimentary local test
Helm
----
The Drydock helm chart is located in ``/charts/drydock``. Local testing currently only supports linting
and previewing the rendered artifacts. Richer functional chart testing is a TODO.
* ``make helm_lint`` - Lint the Helm chart
* ``make dry-run`` - Render the chart and output the Kubernetes manifest YAML documents
.. _make-options:
Makefile Options
----------------
The Makefile supports a few options that override default values to allow use behind
a proxy or for generating the Docker image with custom tags.
* ``DOCKER_REGISTRY`` - Defaults to ``quay.io``, used as the Docker registry for tagging images
* ``IMAGE_NAME`` - Defaults to ``drydock``, the image name.
* ``IMAGE_PREFIX`` - Defaults to ``airshipit``, the registry organization to push images into
* ``IMAGE_TAG`` - Defaults to ``dev``, a tag to apply to the image
* ``PUSH_IMAGE`` - Defaults to ``false``, set to ``true`` if you want the build process to also
push the image. This will likely require that you have previously run ``docker login``.
* ``PROXY`` - An HTTP/HTTPS proxy server to add to the image build environment. Required if you
are building the image behind a proxy.
* ``USE_PROXY`` - Defaults to ``false``, set to ``true`` to include the ``PROXY`` configuration
above in the build.

View File

@ -22,56 +22,43 @@ Bootstrap Kubernetes
--------------------
You can bootstrap your Helm-enabled Kubernetes cluster via the Openstack-Helm
`AIO <https://openstack-helm.readthedocs.io/en/latest/install/developer/all-in-one.html>`_
or the `Promenade <https://github.com/att-comdev/promenade>`_ tools.
`AIO <https://docs.openstack.org/openstack-helm/latest/install/developer/index.html>`_
or the `Promenade <https://airshipit.readthedocs.io/projects/promenade/en/latest/>`_ tools.
Deploy Drydock and Dependencies
-------------------------------
Drydock is most easily deployed using Armada to deploy the Drydock
container into a Kubernetes cluster via Helm charts. The Drydock chart
is in `aic-helm <https://github.com/att-comdev/aic-helm>`_. It depends on
the deployments of the `MaaS <https://github.com/openstack/openstack-helm-addons>`_
chart and the `Keystone <https://github.com/openstack/openstack-helm>`_ chart.
is in the ``charts/drydock`` directory. It depends on
the deployments of the `MaaS <https://git.openstack.org/cgit/openstack/airship-maas/>`_
chart and the `Keystone <https://git.openstack.org/cgit/openstack/openstack-helm/>`_ chart.
An integrated deployment of these charts can be accomplished using the
`Armada <https://github.com/att-comdev/armada>`_ tool. An example integration
`Armada <https://airshipit.readthedocs.io/projects/armada/en/latest/>`_ tool. An example integration
chart can be found in the
`UCP-Integration <https://github.com/att-comdev/ucp-integration>`_ repo in the
``./manifests/basic_ucp`` directory.
.. code:: bash
$ git clone https://github.com/att-comdev/ucp-integration
$ sudo docker run -ti -v $(pwd):/target -v ~/.kube:/armaada/.kube quay.io/attcomdev/armada:master apply --tiller-host <host_ip> --tiller-port 44134 /target/manifests/basic_ucp/ucp-armada.yaml
$ # wait for all pods in kubectl get pods -n ucp are 'Running'
$ KS_POD=$(kubectl get pods -n ucp | grep keystone | cut -d' ' -f1)
$ TOKEN=$(docker run --rm --net=host -e 'OS_AUTH_URL=http://keystone-api.ucp.svc.cluster.local:80/v3' -e 'OS_PASSWORD=password' -e 'OS_PROJECT_DOMAIN_NAME=default' -e 'OS_PROJECT_NAME=service' -e 'OS_REGION_NAME=RegionOne' -e 'OS_USERNAME=drydock' -e 'OS_USER_DOMAIN_NAME=default' -e 'OS_IDENTITY_API_VERSION=3' kolla/ubuntu-source-keystone:3.0.3 openstack token issue -f shell | grep ^id | cut -d'=' -f2 | tr -d '"')
$ docker run --rm -ti --net=host -e "DD_TOKEN=$TOKEN" -e "DD_URL=http://drydock-api.ucp.svc.cluster.local:9000" -e "LC_ALL=C.UTF-8" -e "LANG=C.UTF-8" $DRYDOCK_IMAGE /bin/bash
`Airship in a Bottle <http://git.openstack.org/cgit/openstack/airship-in-a-bottle/>`_ repo in the
``./manifests/dev_single_node`` directory.
Load Site
---------
To use Drydock for site configuration, you must craft and load a site topology
YAML. An example of this is in ``./examples/designparts_v1.0.yaml``.
YAML. An example of this is in ``./test/yaml_samples/deckhand_fullsite.yaml``.
Documentation on building your topology document is at :ref:`topology_label`.
Use the Drydock CLI to create a design and load the configuration
.. code:: bash
# drydock design create
# drydock part create -d <design_id> -f <yaml_file>
Drydock requires that the YAML topology be hosted somewhere: either via the preferred
method, `Deckhand <https://airshipit.readthedocs.io/projects/deckhand/en/latest/>`_,
or through a simple HTTP server like Nginx or Apache.
Use the CLI to create tasks to deploy your site
.. code:: bash
# drydock task create -d <design_id> -a verify_site
# drydock task create -d <design_id> -a prepare_site
# drydock task create -d <design_id> -a prepare_node
# drydock task create -d <design_id> -a deploy_node
# drydock task create -d <design_url> -a verify_site
# drydock task create -d <design_url> -a prepare_site
# drydock task create -d <design_url> -a prepare_nodes
# drydock task create -d <design_url> -a deploy_nodes
A demo of this process is available at https://asciinema.org/a/133906

Binary image added (not shown, 21 KiB)

Binary image added (not shown, 57 KiB)

View File

@ -37,7 +37,6 @@ Drydock Configuration Guide
policy-enforcement
exceptions/index
API Documentation
-----------------
.. toctree::
@ -48,6 +47,13 @@ API Documentation
bootaction
validatedesign
Developer Overview
------------------
.. toctree::
:maxdepth: 1
development
Client Documentation
--------------------

View File

@ -20,6 +20,6 @@ auto-generated from Drydock when this documentation is built, so
if you are having issues with an option, please compare your version of
Drydock with the version of this documentation.
The sample policy file can also be viewed in `file form <_static/drydock.policy.yaml.sample>`_.
The sample policy file can also be viewed in `file form <_static/policy.yaml.sample>`_.
.. literalinclude:: _static/drydock.policy.yaml.sample
.. literalinclude:: _static/policy.yaml.sample

View File

@ -238,7 +238,6 @@ def _import_modules(module_names):
for module_name in module_names:
module = importlib.import_module(module_name)
if hasattr(module, 'list_opts'):
print("Pulling options from module %s" % module.__name__)
imported_modules.append(module)
return imported_modules

View File

@ -187,7 +187,7 @@ class InvalidAssetLocation(BootactionError):
class BuildDataError(Exception):
"""
**Message:** *Error saving build data - data_element type <data_element>
could not be cast to string.
could not be cast to string*.
**Troubleshoot:**

View File

@ -1,5 +1,4 @@
[DEFAULT]
output_file = etc/drydock/drydock.conf.sample
wrap_width = 80
namespace = drydock_provisioner

View File

@ -1,5 +1,4 @@
[DEFAULT]
output_file = etc/drydock/policy.yaml.sample
wrap_width = 80
namespace = drydock_provisioner

View File

@ -8,15 +8,23 @@
# value)
#poll_interval = 10
# How long a leader has to check-in before leadership can be usurped, in seconds
# (integer value)
#leader_grace_period = 300
[bootdata]
# How often will an instance attempt to claim leadership, in seconds (integer
# value)
#leadership_claim_interval = 30
[database]
#
# From drydock_provisioner
#
# Path to file to distribute for prom_init.sh (string value)
#prom_init = /etc/drydock/bootdata/join.sh
# The URI database connect string. (string value)
#database_connect_string = <None>
[keystone_authtoken]
@ -218,6 +226,16 @@
#auth_section = <None>
[libvirt_driver]
#
# From drydock_provisioner
#
# Polling interval in seconds for querying libvirt status (integer value)
#poll_interval = 10
[logging]
#
@ -228,7 +246,7 @@
#log_level = INFO
# Logger name for the top-level logger (string value)
#global_logger_name = drydock
#global_logger_name = drydock_provisioner
# Logger name for OOB driver logging (string value)
#oobdriver_logger_name = ${global_logger_name}.oobdriver
@ -285,10 +303,10 @@
# From drydock_provisioner
#
# Module path string of a input ingester to enable (multi valued)
# Module path string of a input ingester to enable (string value)
#ingester = drydock_provisioner.ingester.plugins.yaml.YamlIngester
# Module path string of a OOB driver to enable (multi valued)
# List of module path strings of OOB drivers to enable (list value)
#oob_driver = drydock_provisioner.drivers.oob.pyghmi_driver.PyghmiDriver
# Module path string of the Node driver to enable (string value)
@ -298,6 +316,16 @@
#network_driver = <None>
[pyghmi_driver]
#
# From drydock_provisioner
#
# Polling interval in seconds for querying IPMI status (integer value)
#poll_interval = 10
[timeouts]
#
@ -331,3 +359,7 @@
# Timeout in minutes for deploying a node (integer value)
#deploy_node = 45
# Timeout in minutes between deployment completion and the all boot actions
# reporting status (integer value)
#bootaction_final_status = 15

View File

@ -38,6 +38,10 @@
# POST /api/v1.0/tasks
#"physical_provisioner:destroy_nodes": "role:admin"
# Read build data for a node
# GET /api/v1.0/nodes/{nodename}/builddata
#"physical_provisioner:read_build_data": "role:admin"
# Read loaded design data
# GET /api/v1.0/designs
# GET /api/v1.0/designs/{design_id}
@ -48,11 +52,11 @@
# POST /api/v1.0/designs/{design_id}/parts
#"physical_provisioner:ingest_data": "role:admin"
# et health status
# GET /api/v1.0/health/extended
#"physical_provisioner:health_data": "role:admin"
# Validate site design
# POST /api/v1.0/validatedesign
#"physical_provisioner:validate_site_design": "role:admin"
# Get health status
# GET /api/v1.0/health/extended
#"physical_provisioner:health_data": "role:admin"

View File

@ -6,3 +6,4 @@ python3-dev
python-tox
docker.io
gcc
plantuml

View File

@ -13,5 +13,5 @@
- hosts: primary
tasks:
- name: Publish current merged documents on readthedocs
shell: 'set -x && curl -X POST -d "token={{ airship_drydock_readthedocs.token }}" "{{ airship_drydock_readthedocs.url }}"'
shell: 'set -x && curl -X POST -d "token={{ airship_drydock_readthedocs.token | trim }}" "{{ airship_drydock_readthedocs.url | trim }}"'
register: result

View File

@ -15,6 +15,6 @@
- hosts: primary
tasks:
- name: Execute a Whitespace Linter check
command: find . -not -path "*/\.*" -not -path "*/doc/build/*" -not -name "*.tgz" -type f -exec egrep -l " +$" {} \;
command: find . -not -path "*/\.*" -not -path "*/docs/build/*" -not -path "*/docs/source/images/*" -not -name "*.tgz" -type f -exec egrep -l " +$" {} \;
register: result
failed_when: result.stdout != ""
failed_when: result.stdout != ""

tox.ini
View File

@ -56,10 +56,14 @@ commands=
{toxinidir}/tests/unit/ {toxinidir}/tests/integration/postgres
[testenv:genconfig]
commands = oslo-config-generator --config-file=etc/drydock/drydock-config-generator.conf
whitelist_externals=tee
sh
commands = sh -c 'oslo-config-generator --config-file=etc/drydock/drydock-config-generator.conf | tee etc/drydock/drydock.conf.sample docs/source/_static/drydock.conf.sample'
[testenv:genpolicy]
commands = oslopolicy-sample-generator --config-file etc/drydock/drydock-policy-generator.conf
whitelist_externals=tee
sh
commands = sh -c 'oslopolicy-sample-generator --config-file etc/drydock/drydock-policy-generator.conf | tee etc/drydock/policy.yaml.sample docs/source/_static/policy.yaml.sample'
[testenv:pep8]
commands = flake8 \
@ -74,7 +78,10 @@ exclude= venv,.venv,.git,.idea,.tox,*.egg-info,*.eggs,bin,dist,./build/,alembic/
max-line-length=119
[testenv:docs]
deps=
-rdocs/requirements-doc.txt
whitelist_externals=rm
recreate=true
commands =
rm -rf docs/build
sphinx-build -b html docs/source docs/build