Kubernetes deployment artifacts for Canonical's MaaS.

MaaS Helm Artifacts

This repository holds artifacts supporting the deployment of Canonical MaaS in a Kubernetes cluster.

Images

The MaaS install is made up of two required images and one optional image. The Dockerfiles in this repo can be used to build all three. These images are intended to be deployed via a Kubernetes Helm chart.
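
As a rough sketch, the images can be built directly with Docker; the tags and Dockerfile paths below are illustrative assumptions, so check the images/ directory (and the repository Makefile) for the actual layout and build targets.

    # Build the region controller, rack controller, and image cache images.
    # Paths and tags are examples only -- adjust to match the images/ directory.
    docker build -t maas-region:latest ./images/maas-region-controller
    docker build -t maas-rack:latest   ./images/maas-rack-controller
    docker build -t maas-cache:latest  ./images/sstream-cache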

MaaS Region Controller

The regiond Dockerfile builds a systemd-based Docker image to run the MaaS Region API server and metadata server.

MaaS Rack Controller

The rackd Dockerfile builds a systemd-based Docker image to run the MaaS Rack Controller and its dependent services (DHCPd, TFTPd, etc.). This image needs to be run in privileged host networking mode to function.
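
For local experimentation outside the chart, "privileged host networking mode" corresponds to something like the following docker run invocation; the image tag is an assumption, and a systemd-based image may need additional mounts beyond this minimal sketch.

    # Run the rack controller image with extended privileges and the host's
    # network namespace so DHCPd/TFTPd can bind to the host's interfaces.
    docker run --detach --privileged --network host maas-rack:latest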

MaaS Image Cache

The cache image Dockerfile provides a point-in-time mirror of the maas.io image repository, so that a MaaS deployment without external network connectivity still has a local copy of Ubuntu. Currently it mirrors only Ubuntu 16.04 (Xenial) and does not update the mirror after the image is created.

Charts

Also provided is a Kubernetes Helm chart that deploys the MaaS pieces and integrates them. This chart depends on an existing Postgres deployment. The recommended avenue for this is the OpenStack-Helm Postgres chart, but any Postgres instance should work.
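
As a minimal sketch, assuming the OpenStack-Helm Postgres chart is available locally, the database dependency could be satisfied before this chart is installed with something like the following; the chart path, release name, and namespace are examples only.

    # Deploy a Postgres instance first (any reachable Postgres should work).
    helm upgrade --install maas-postgresql ./openstack-helm-infra/postgresql \
      --namespace ucp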

Overrides

Chart overrides are likely required to deploy MaaS into your environment; a sample helm command using these overrides follows the list below.

  • values.labels.rack.node_selector_key - This is the Kubernetes label key for selecting nodes to deploy the rack controller
  • values.labels.rack.node_selector_value - This is the Kubernetes label value for selecting nodes to deploy the rack controller
  • values.labels.region.node_selector_key - This is the Kubernetes label key for selecting nodes to deploy the region controller
  • values.labels.region.node_selector_value - This is the Kubernetes label value for selecting nodes to deploy the region controller
  • values.conf.cache.enabled - Boolean controlling whether the repo cache image is included in the deployment
  • values.conf.maas.url.maas_url - The URL rack controllers and nodes should use for accessing the region API (e.g. http://10.10.10.10:8080/MAAS)
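
A minimal sketch of installing the chart with these overrides set on the command line; the release name, namespace, label key/value pairs, and URL are placeholders for illustration, and the leading "values." is dropped when using --set.

    # Install the MaaS chart with node selector, cache, and URL overrides.
    helm upgrade --install maas ./charts/maas \
      --namespace ucp \
      --set labels.rack.node_selector_key=rack-node \
      --set labels.rack.node_selector_value=enabled \
      --set labels.region.node_selector_key=region-node \
      --set labels.region.node_selector_value=enabled \
      --set conf.cache.enabled=true \
      --set conf.maas.url.maas_url=http://10.10.10.10:8080/MAAS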

Deployment Flow

During deployment, the chart executes the following steps:

  1. Initializes the Postgres database for MaaS
  2. Starts a Pod with the region controller and, optionally, the image cache sidecar container
  3. Once the region controller is running, deploys a Pod with the rack controller and joins it to the region controller
  4. Initializes the MaaS configuration and starts the image sync
  5. Exports an API key into a Kubernetes secret so other Pods can access the API if needed (see the example after this list)
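
Once the chart has settled, the exported API key can be read back from its Kubernetes secret. The secret name, key name, and namespace below are placeholders, so inspect the chart templates or the output of kubectl get secrets for the actual names.

    # List the secrets in the deployment namespace, then decode the API key.
    # Secret and key names are hypothetical -- check the chart for the real ones.
    kubectl get secrets --namespace ucp
    kubectl get secret maas-api-key --namespace ucp \
      -o jsonpath='{.data.token}' | base64 --decode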