Add an example with Ceph

This extends the virsh-based test tooling to both the previous, basic
example and the new "complete" example.  It also removes the Vagrant
tooling.

Change-Id: I249f937e9b3eedc486e31a3d1c1ac31bcfdf0ca8
Mark Burnett 2017-10-23 16:06:46 -05:00
parent 22e2196b7c
commit e56ad622c3
48 changed files with 1896 additions and 385 deletions

View File

@ -1,8 +1,8 @@
.eggs
.tox .tox
.vagrant
Vagrantfile
__pycache__ __pycache__
build
docs docs
example examples
promenade.egg-info promenade.egg-info
tools tools

.gitignore vendored
View File

@ -1,20 +1,13 @@
__pycache__ __pycache__
/*.log /*.log
/*.tar /.python-version
/.vagrant /build
/cni.tgz
/env.sh
/helm
/kubectl
/kubelet
/linux-amd64
/genesis_image_cache/
/join_image_cache/
/promenade.egg-info /promenade.egg-info
/tmp /tmp
.tox/ .tox/
/.eggs /.eggs
ChangeLog /AUTHORS
/ChangeLog
# Sphinx documentation # Sphinx documentation
docs/build/ docs/build/

Vagrantfile vendored
View File

@ -1,49 +0,0 @@
# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure("2") do |config|
  config.vm.box = "promenade/ubuntu1604"
  config.vm.box_check_update = false

  provision_env = {}
  if ENV['http_proxy'] then
    provision_env['http_proxy'] = ENV['http_proxy']
  end

  config.vm.provision :shell, privileged: true, env: provision_env, inline: <<EOS
set -ex
echo === Setting up NTP to simulate MaaS environment ===
apt-get update -qq
apt-get install -y -qq --no-install-recommends chrony
EOS

  config.vm.synced_folder ".", "/vagrant", :nfs => true

  config.vm.provider "libvirt" do |lv|
    lv.cpus = 2
    lv.memory = "2048"
    lv.nested = true
  end

  config.vm.define "n0" do |c|
    c.vm.hostname = "n0"
    c.vm.network "private_network", ip: "192.168.77.10"
  end

  config.vm.define "n1" do |c|
    c.vm.hostname = "n1"
    c.vm.network "private_network", ip: "192.168.77.11"
  end

  config.vm.define "n2" do |c|
    c.vm.hostname = "n2"
    c.vm.network "private_network", ip: "192.168.77.12"
  end

  config.vm.define "n3" do |c|
    c.vm.hostname = "n3"
    c.vm.network "private_network", ip: "192.168.77.13"
  end
end

View File

@ -1,6 +1,48 @@
Getting Started Getting Started
=============== ===============
Basic Deployment
----------------

Setup
^^^^^

To create the certificates and scripts needed to perform a basic deployment,
you can use the following helper script:

.. code-block:: bash

    ./tools/basic-deployment.sh examples/basic build

This will copy the configuration provided in the ``examples/basic`` directory
into the ``build`` directory. Then, it will generate self-signed certificates
for all the needed components in Deckhand-compatible format. Finally, it will
render the provided configuration into directly-usable ``genesis.sh`` and
``join-<NODE>.sh`` scripts.
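After it completes, the ``build`` directory should contain, roughly, the
copied ``examples/basic/*.yaml`` configuration, the generated certificates,
and the rendered scripts (a sketch; the exact file names follow the node list
in the configuration):

.. code-block:: bash

    ls build
    # certificates.yaml  genesis.sh  join-n0.sh  join-n1.sh ...
    # validate-genesis.sh  validate-n0.sh  validate-n1.sh ...
    # plus the copied examples/basic/*.yaml documents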
Execution
^^^^^^^^^

Perform the following steps to execute the deployment:

1. Copy the ``genesis.sh`` script to the genesis node and run it.
2. Validate the genesis node by running ``validate-genesis.sh`` on it.
3. Join master nodes by copying their respective ``join-<NODE>.sh`` scripts to
   them and running them.
4. Validate the master nodes by copying and running their respective
   ``validate-<NODE>.sh`` scripts on each of them.
5. Re-provision the Genesis node:

   a) Run the ``/usr/local/bin/promenade-teardown`` script on the Genesis node.
   b) Delete the node from the cluster from one of the other nodes using
      ``kubectl delete node <GENESIS>``.
   c) Power off and re-image the Genesis node.
   d) Join the Genesis node as a normal node using its ``join-<GENESIS>.sh``
      script.
   e) Validate the node using ``validate-<GENESIS>.sh``.

6. Join and validate all remaining nodes using the ``join-<NODE>.sh`` and
   ``validate-<NODE>.sh`` scripts described above.
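As a concrete sketch of steps 1-4, assuming the four-node layout from
``examples/basic`` with ``n0`` as the genesis node and password-less root SSH
access to each node:

.. code-block:: bash

    # Step 1-2: bootstrap and validate the genesis node.
    scp build/genesis.sh build/validate-genesis.sh n0:
    ssh n0 ./genesis.sh
    ssh n0 ./validate-genesis.sh

    # Step 3-4: join and validate the remaining masters.
    for node in n1 n2; do
        scp build/join-${node}.sh build/validate-${node}.sh ${node}:
        ssh ${node} ./join-${node}.sh
        ssh ${node} ./validate-${node}.sh
    done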
Running Tests Running Tests
------------- -------------
@ -29,6 +71,11 @@ For more verbose output, try:
PROMENADE_DEBUG=1 ./tools/gate.sh PROMENADE_DEBUG=1 ./tools/gate.sh
For extremely verbose output, try:

.. code-block:: bash

    GATE_DEBUG=1 PROMENADE_DEBUG=1 ./tools/gate.sh
The gate leaves its test VMs running for convenience. To shut everything down: The gate leaves its test VMs running for convenience. To shut everything down:
@ -57,6 +104,7 @@ These can be found in ``tools/g2/bin``. The most important is certainly
./tools/g2/bin/ssh.sh n0 ./tools/g2/bin/ssh.sh n0
Development Development
----------- -----------
@ -72,7 +120,7 @@ host:
./tools/registry/start.sh ./tools/registry/start.sh
./tools/registry/update_cache.sh ./tools/registry/update_cache.sh
Then, the images used by the example can be updated using: Then, the images used by the basic example can be updated using:
.. code-block:: bash .. code-block:: bash
@ -91,71 +139,6 @@ The registry can be stopped with:
./tools/registry/stop.sh ./tools/registry/stop.sh
Deployment using Vagrant
^^^^^^^^^^^^^^^^^^^^^^^^
Initial Setup of Vagrant
~~~~~~~~~~~~~~~~~~~~~~~~
Deployment using Vagrant uses KVM instead of Virtualbox due to better
performance of disk and networking, which both have significant impact on the
stability of the etcd clusters.
Make sure you have [Vagrant](https://vagrantup.com) installed, then
run `./tools/vagrant/full-vagrant-setup.sh`, which will do the following:
* Install Vagrant libvirt plugin and its dependencies
* Install NFS dependencies for Vagrant volume sharing
* Install [packer](https://packer.io) and build a KVM image for Ubuntu 16.04
Deployment
~~~~~~~~~~
A complete set of configuration that works with the `Vagrantfile` in the
top-level directory is provided in the `example` directory.
To exercise that example, first generate certs and combine the configuration
into usable parts:
.. code-block:: bash
./tools/build-example.sh
Start the VMs:
.. code-block:: bash
vagrant up --parallel
Then bring up the genesis node:
.. code-block:: bash
vagrant ssh n0 -c 'sudo /vagrant/example/scripts/genesis.sh'
Join additional master nodes:
.. code-block:: bash
vagrant ssh n1 -c 'sudo /vagrant/example/scripts/join-n1.sh'
vagrant ssh n2 -c 'sudo /vagrant/example/scripts/join-n2.sh'
Re-provision the genesis node as a normal master:
.. code-block:: bash
vagrant ssh n0 -c 'sudo promenade-teardown'
vagrant ssh n1 -c 'sudo kubectl delete node n0'
vagrant destroy -f n0
vagrant up n0
vagrant ssh n0 -c 'sudo /vagrant/example/scripts/join-n0.sh'
Join the remaining worker:
.. code-block:: bash
vagrant ssh n3 -c 'sudo /vagrant/example/scripts/join-n3.sh'
Building the image Building the image
^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^
@ -175,14 +158,11 @@ To build the image from behind a proxy, you can:
For convenience, there is a script which builds an image from the current code, For convenience, there is a script which builds an image from the current code,
then uses it to construct scripts for the example: then uses it to generate certificates and construct scripts:
.. code-block:: bash .. code-block:: bash
./tools/dev-build.sh ./tools/dev-build.sh examples/basic build
*NOTE* the ``dev-build.sh`` script puts Promenade in debug mode, which will
instruct it to use Vagrant's shared directory to source local charts.
Using Promenade Behind a Proxy Using Promenade Behind a Proxy

example/.gitignore vendored
View File

@ -1,2 +0,0 @@
certificates.yaml
scripts

View File

@ -0,0 +1,17 @@
---
schema: armada/Manifest/v1
metadata:
  schema: metadata/Document/v1
  name: cluster-bootstrap
  layeringDefinition:
    abstract: false
    layer: site
data:
  release_prefix: ucp
  chart_groups:
    - kubernetes-proxy
    - container-networking
    - dns
    - kubernetes
    - kubernetes-rbac
...

View File

@ -0,0 +1,15 @@
---
schema: promenade/Docker/v1
metadata:
schema: metadata/Document/v1
name: docker
layeringDefinition:
abstract: false
layer: site
data:
config:
insecure-registries:
- registry:5000
live-restore: true
storage-driver: overlay2
...

View File

@ -14,6 +14,9 @@ data:
- calico-etcd=enabled - calico-etcd=enabled
- node-role.kubernetes.io/master= - node-role.kubernetes.io/master=
dynamic: dynamic:
- ceph-mds=enabled
- ceph-mon=enabled
- ceph-osd=enabled
- kubernetes-apiserver=enabled - kubernetes-apiserver=enabled
- kubernetes-controller-manager=enabled - kubernetes-controller-manager=enabled
- kubernetes-etcd=enabled - kubernetes-etcd=enabled
@ -31,6 +34,6 @@ data:
scheduler: gcr.io/google_containers/hyperkube-amd64:v1.8.0 scheduler: gcr.io/google_containers/hyperkube-amd64:v1.8.0
files: files:
- path: /var/lib/anchor/calico-etcd-bootstrap - path: /var/lib/anchor/calico-etcd-bootstrap
content: "" content: "# placeholder for triggering calico etcd bootstrapping"
mode: 0644 mode: 0644
... ...

View File

@ -1,55 +1,4 @@
--- ---
schema: promenade/KubernetesNetwork/v1
metadata:
  schema: metadata/Document/v1
  name: kubernetes-network
  layeringDefinition:
    abstract: false
    layer: site
data:
  dns:
    cluster_domain: cluster.local
    service_ip: 10.96.0.10
    bootstrap_validation_checks:
      - calico-etcd.kube-system.svc.cluster.local
      - kubernetes-etcd.kube-system.svc.cluster.local
      - kubernetes.default.svc.cluster.local
    upstream_servers:
      - 8.8.8.8
      - 8.8.4.4
  kubernetes:
    pod_cidr: 10.97.0.0/16
    service_cidr: 10.96.0.0/16
    service_ip: 10.96.0.1
  etcd:
    service_ip: 10.96.0.2
  hosts_entries:
    - ip: 192.168.77.1
      names:
        - registry
  # proxy:
  #   url: http://proxy.example.com:8080
  #   additional_no_proxy:
  #     - 10.0.1.1
---
schema: promenade/Docker/v1
metadata:
  schema: metadata/Document/v1
  name: docker
  layeringDefinition:
    abstract: false
    layer: site
data:
  config:
    insecure-registries:
      - registry:5000
    live-restore: true
    storage-driver: overlay2
---
schema: promenade/HostSystem/v1 schema: promenade/HostSystem/v1
metadata: metadata:
schema: metadata/Document/v1 schema: metadata/Document/v1

View File

@ -0,0 +1,38 @@
---
schema: promenade/KubernetesNetwork/v1
metadata:
  schema: metadata/Document/v1
  name: kubernetes-network
  layeringDefinition:
    abstract: false
    layer: site
data:
  dns:
    cluster_domain: cluster.local
    service_ip: 10.96.0.10
    bootstrap_validation_checks:
      - calico-etcd.kube-system.svc.cluster.local
      - kubernetes-etcd.kube-system.svc.cluster.local
      - kubernetes.default.svc.cluster.local
    upstream_servers:
      - 8.8.8.8
      - 8.8.4.4
  kubernetes:
    pod_cidr: 10.97.0.0/16
    service_cidr: 10.96.0.0/16
    service_ip: 10.96.0.1
  etcd:
    service_ip: 10.96.0.2
  hosts_entries:
    - ip: 192.168.77.1
      names:
        - registry
  # proxy:
  #   url: http://proxy.example.com:8080
  #   additional_no_proxy:
  #     - 10.0.1.1
...

View File

@ -1,20 +1,4 @@
--- ---
schema: armada/Manifest/v1
metadata:
  schema: metadata/Document/v1
  name: cluster-bootstrap
  layeringDefinition:
    abstract: false
    layer: site
data:
  release_prefix: ucp
  chart_groups:
    - kubernetes-proxy
    - container-networking
    - dns
    - kubernetes
    - kubernetes-rbac
---
schema: armada/ChartGroup/v1 schema: armada/ChartGroup/v1
metadata: metadata:
schema: metadata/Document/v1 schema: metadata/Document/v1

View File

@ -12,6 +12,9 @@ data:
join_ip: 192.168.77.11 join_ip: 192.168.77.11
labels: labels:
dynamic: dynamic:
- ceph-mon=enabled
- ceph-osd=enabled
- ceph-mds=enabled
- ucp-control-plane=enabled - ucp-control-plane=enabled
--- ---
schema: promenade/KubernetesNode/v1 schema: promenade/KubernetesNode/v1
@ -30,6 +33,9 @@ data:
- node-role.kubernetes.io/master= - node-role.kubernetes.io/master=
dynamic: dynamic:
- calico-etcd=enabled - calico-etcd=enabled
- ceph-mon=enabled
- ceph-osd=enabled
- ceph-mds=enabled
- kubernetes-apiserver=enabled - kubernetes-apiserver=enabled
- kubernetes-controller-manager=enabled - kubernetes-controller-manager=enabled
- kubernetes-etcd=enabled - kubernetes-etcd=enabled
@ -52,6 +58,9 @@ data:
- node-role.kubernetes.io/master= - node-role.kubernetes.io/master=
dynamic: dynamic:
- calico-etcd=enabled - calico-etcd=enabled
- ceph-mon=enabled
- ceph-osd=enabled
- ceph-mds=enabled
- kubernetes-apiserver=enabled - kubernetes-apiserver=enabled
- kubernetes-controller-manager=enabled - kubernetes-controller-manager=enabled
- kubernetes-etcd=enabled - kubernetes-etcd=enabled
@ -74,6 +83,9 @@ data:
- node-role.kubernetes.io/master= - node-role.kubernetes.io/master=
dynamic: dynamic:
- calico-etcd=enabled - calico-etcd=enabled
- ceph-mon=enabled
- ceph-osd=enabled
- ceph-mds=enabled
- kubernetes-apiserver=enabled - kubernetes-apiserver=enabled
- kubernetes-controller-manager=enabled - kubernetes-controller-manager=enabled
- kubernetes-etcd=enabled - kubernetes-etcd=enabled

View File

@ -0,0 +1,19 @@
---
schema: armada/Manifest/v1
metadata:
  schema: metadata/Document/v1
  name: cluster-bootstrap
  layeringDefinition:
    abstract: false
    layer: site
data:
  release_prefix: ucp
  chart_groups:
    - kubernetes-proxy
    - container-networking
    - dns
    - kubernetes
    - kubernetes-rbac
    - ceph
    - ucp-infra
...

View File

@ -0,0 +1,15 @@
---
schema: promenade/Docker/v1
metadata:
  schema: metadata/Document/v1
  name: docker
  layeringDefinition:
    abstract: false
    layer: site
data:
  config:
    insecure-registries:
      - registry:5000
    live-restore: true
    storage-driver: overlay2
...

View File

@ -0,0 +1,39 @@
---
schema: promenade/Genesis/v1
metadata:
  schema: metadata/Document/v1
  name: genesis
  layeringDefinition:
    abstract: false
    layer: site
data:
  hostname: n0
  ip: 192.168.77.10
  labels:
    static:
      - calico-etcd=enabled
      - node-role.kubernetes.io/master=
    dynamic:
      - ceph-mds=enabled
      - ceph-mon=enabled
      - ceph-osd=enabled
      - kubernetes-apiserver=enabled
      - kubernetes-controller-manager=enabled
      - kubernetes-etcd=enabled
      - kubernetes-scheduler=enabled
      - promenade-genesis=enabled
      - ucp-control-plane=enabled
  images:
    armada: quay.io/attcomdev/armada:latest
    helm:
      tiller: gcr.io/kubernetes-helm/tiller:v2.5.1
    kubernetes:
      apiserver: gcr.io/google_containers/hyperkube-amd64:v1.8.0
      controller-manager: gcr.io/google_containers/hyperkube-amd64:v1.8.0
      etcd: quay.io/coreos/etcd:v3.0.17
      scheduler: gcr.io/google_containers/hyperkube-amd64:v1.8.0
  files:
    - path: /var/lib/anchor/calico-etcd-bootstrap
      content: "# placeholder for triggering calico etcd bootstrapping"
      mode: 0644
...

View File

@ -0,0 +1,62 @@
---
schema: promenade/HostSystem/v1
metadata:
  schema: metadata/Document/v1
  name: host-system
  layeringDefinition:
    abstract: false
    layer: site
data:
  files:
    - path: /opt/kubernetes/bin/kubelet
      tar_url: https://dl.k8s.io/v1.8.0/kubernetes-node-linux-amd64.tar.gz
      tar_path: kubernetes/node/bin/kubelet
      mode: 0555
  images:
    coredns: coredns/coredns:011
    helm:
      helm: lachlanevenson/k8s-helm:v2.5.1
    kubernetes:
      kubectl: gcr.io/google_containers/hyperkube-amd64:v1.8.0
  packages:
    repositories:
      - deb http://apt.dockerproject.org/repo ubuntu-xenial main
    keys:
      - |-
        -----BEGIN PGP PUBLIC KEY BLOCK-----

        mQINBFWln24BEADrBl5p99uKh8+rpvqJ48u4eTtjeXAWbslJotmC/CakbNSqOb9o
        ddfzRvGVeJVERt/Q/mlvEqgnyTQy+e6oEYN2Y2kqXceUhXagThnqCoxcEJ3+KM4R
        mYdoe/BJ/J/6rHOjq7Omk24z2qB3RU1uAv57iY5VGw5p45uZB4C4pNNsBJXoCvPn
        TGAs/7IrekFZDDgVraPx/hdiwopQ8NltSfZCyu/jPpWFK28TR8yfVlzYFwibj5WK
        dHM7ZTqlA1tHIG+agyPf3Rae0jPMsHR6q+arXVwMccyOi+ULU0z8mHUJ3iEMIrpT
        X+80KaN/ZjibfsBOCjcfiJSB/acn4nxQQgNZigna32velafhQivsNREFeJpzENiG
        HOoyC6qVeOgKrRiKxzymj0FIMLru/iFF5pSWcBQB7PYlt8J0G80lAcPr6VCiN+4c
        NKv03SdvA69dCOj79PuO9IIvQsJXsSq96HB+TeEmmL+xSdpGtGdCJHHM1fDeCqkZ
        hT+RtBGQL2SEdWjxbF43oQopocT8cHvyX6Zaltn0svoGs+wX3Z/H6/8P5anog43U
        65c0A+64Jj00rNDr8j31izhtQMRo892kGeQAaaxg4Pz6HnS7hRC+cOMHUU4HA7iM
        zHrouAdYeTZeZEQOA7SxtCME9ZnGwe2grxPXh/U/80WJGkzLFNcTKdv+rwARAQAB
        tDdEb2NrZXIgUmVsZWFzZSBUb29sIChyZWxlYXNlZG9ja2VyKSA8ZG9ja2VyQGRv
        Y2tlci5jb20+iQI4BBMBAgAiBQJVpZ9uAhsvBgsJCAcDAgYVCAIJCgsEFgIDAQIe
        AQIXgAAKCRD3YiFXLFJgnbRfEAC9Uai7Rv20QIDlDogRzd+Vebg4ahyoUdj0CH+n
        Ak40RIoq6G26u1e+sdgjpCa8jF6vrx+smpgd1HeJdmpahUX0XN3X9f9qU9oj9A4I
        1WDalRWJh+tP5WNv2ySy6AwcP9QnjuBMRTnTK27pk1sEMg9oJHK5p+ts8hlSC4Sl
        uyMKH5NMVy9c+A9yqq9NF6M6d6/ehKfBFFLG9BX+XLBATvf1ZemGVHQusCQebTGv
        0C0V9yqtdPdRWVIEhHxyNHATaVYOafTj/EF0lDxLl6zDT6trRV5n9F1VCEh4Aal8
        L5MxVPcIZVO7NHT2EkQgn8CvWjV3oKl2GopZF8V4XdJRl90U/WDv/6cmfI08GkzD
        YBHhS8ULWRFwGKobsSTyIvnbk4NtKdnTGyTJCQ8+6i52s+C54PiNgfj2ieNn6oOR
        7d+bNCcG1CdOYY+ZXVOcsjl73UYvtJrO0Rl/NpYERkZ5d/tzw4jZ6FCXgggA/Zxc
        jk6Y1ZvIm8Mt8wLRFH9Nww+FVsCtaCXJLP8DlJLASMD9rl5QS9Ku3u7ZNrr5HWXP
        HXITX660jglyshch6CWeiUATqjIAzkEQom/kEnOrvJAtkypRJ59vYQOedZ1sFVEL
        MXg2UCkD/FwojfnVtjzYaTCeGwFQeqzHmM241iuOmBYPeyTY5veF49aBJA1gEJOQ
        TvBR8Q==
        =Fm3p
        -----END PGP PUBLIC KEY BLOCK-----
    additional:
      - ceph-common=10.2.7-0ubuntu0.16.04.1
      - curl
      - jq
    required:
      docker: docker-engine=1.13.1-0~ubuntu-xenial
      socat: socat=1.7.3.1-1
...

View File

@ -0,0 +1,38 @@
---
schema: promenade/KubernetesNetwork/v1
metadata:
  schema: metadata/Document/v1
  name: kubernetes-network
  layeringDefinition:
    abstract: false
    layer: site
data:
  dns:
    cluster_domain: cluster.local
    service_ip: 10.96.0.10
    bootstrap_validation_checks:
      - calico-etcd.kube-system.svc.cluster.local
      - kubernetes-etcd.kube-system.svc.cluster.local
      - kubernetes.default.svc.cluster.local
    upstream_servers:
      - 8.8.8.8
      - 8.8.4.4
  kubernetes:
    pod_cidr: 10.97.0.0/16
    service_cidr: 10.96.0.0/16
    service_ip: 10.96.0.1
  etcd:
    service_ip: 10.96.0.2
  hosts_entries:
    - ip: 192.168.77.1
      names:
        - registry
  # proxy:
  #   url: http://proxy.example.com:8080
  #   additional_no_proxy:
  #     - 10.0.1.1
...

File diff suppressed because it is too large

View File

@ -0,0 +1,94 @@
---
schema: promenade/KubernetesNode/v1
metadata:
  schema: metadata/Document/v1
  name: n0
  layeringDefinition:
    abstract: false
    layer: site
data:
  hostname: n0
  ip: 192.168.77.10
  join_ip: 192.168.77.11
  labels:
    dynamic:
      - ceph-mon=enabled
      - ceph-osd=enabled
      - ceph-mds=enabled
      - ucp-control-plane=enabled
---
schema: promenade/KubernetesNode/v1
metadata:
  schema: metadata/Document/v1
  name: n1
  layeringDefinition:
    abstract: false
    layer: site
data:
  hostname: n1
  ip: 192.168.77.11
  join_ip: 192.168.77.10
  labels:
    static:
      - node-role.kubernetes.io/master=
    dynamic:
      - calico-etcd=enabled
      - ceph-mon=enabled
      - ceph-osd=enabled
      - ceph-mds=enabled
      - kubernetes-apiserver=enabled
      - kubernetes-controller-manager=enabled
      - kubernetes-etcd=enabled
      - kubernetes-scheduler=enabled
      - ucp-control-plane=enabled
---
schema: promenade/KubernetesNode/v1
metadata:
  schema: metadata/Document/v1
  name: n2
  layeringDefinition:
    abstract: false
    layer: site
data:
  hostname: n2
  ip: 192.168.77.12
  join_ip: 192.168.77.10
  labels:
    static:
      - node-role.kubernetes.io/master=
    dynamic:
      - calico-etcd=enabled
      - ceph-mon=enabled
      - ceph-osd=enabled
      - ceph-mds=enabled
      - kubernetes-apiserver=enabled
      - kubernetes-controller-manager=enabled
      - kubernetes-etcd=enabled
      - kubernetes-scheduler=enabled
      - ucp-control-plane=enabled
---
schema: promenade/KubernetesNode/v1
metadata:
  schema: metadata/Document/v1
  name: n3
  layeringDefinition:
    abstract: false
    layer: site
data:
  hostname: n3
  ip: 192.168.77.13
  join_ip: 192.168.77.11
  labels:
    static:
      - node-role.kubernetes.io/master=
    dynamic:
      - calico-etcd=enabled
      - ceph-mon=enabled
      - ceph-osd=enabled
      - ceph-mds=enabled
      - kubernetes-apiserver=enabled
      - kubernetes-controller-manager=enabled
      - kubernetes-etcd=enabled
      - kubernetes-scheduler=enabled
      - ucp-control-plane=enabled
...

View File

@ -4,8 +4,8 @@
set -xe set -xe
if [ $(kubectl get nodes | grep '\bReady\b' | wc -l) -lt 3 ]; then if [ $(kubectl get nodes | grep '\bReady\b' | wc -l) -lt 2 ]; then
echo Not enough live nodes to proceed with genesis teardown. 1>&2 echo Not enough live nodes to proceed with teardown. 1>&2
exit 1 exit 1
fi fi

View File

@ -23,7 +23,7 @@ spec:
if [ $MEMBER_COUNT -gt 1 ]; then if [ $MEMBER_COUNT -gt 1 ]; then
MEMBER_ID=$(etcdctl member list | grep auxiliary | awk -F ', ' '{ print $1 }') MEMBER_ID=$(etcdctl member list | grep auxiliary | awk -F ', ' '{ print $1 }')
if [ -n $MEMBER_ID ]; then if [ -n $MEMBER_ID ]; then
while [ $MEMBER_COUNT -lt 4 ]; do while [ $MEMBER_COUNT -lt 3 ]; do
sleep 30 sleep 30
MEMBER_COUNT=$(etcdctl member list | grep '\bstarted\b' | wc -l) MEMBER_COUNT=$(etcdctl member list | grep '\bstarted\b' | wc -l)
done done

View File

@ -1,34 +1,42 @@
#!/usr/bin/env bash #!/usr/bin/env bash
set -ex set -eux
IMAGE_PROMENADE=${IMAGE_PROMENADE:-quay.io/attcomdev/promenade:latest} IMAGE_PROMENADE=${IMAGE_PROMENADE:-quay.io/attcomdev/promenade:latest}
PROMENADE_DEBUG=${PROMENADE_DEBUG:-0}
SCRIPT_DIR=$(realpath $(dirname $0))
CONFIG_SOURCE=$(realpath ${1:-${SCRIPT_DIR}/../examples/basic})
BUILD_DIR=$(realpath ${2:-${SCRIPT_DIR}/../build})
echo === Cleaning up old data === echo === Cleaning up old data ===
rm -rf example/scripts rm -rf ${BUILD_DIR}
mkdir example/scripts mkdir -p ${BUILD_DIR}
cp "${CONFIG_SOURCE}"/*.yaml ${BUILD_DIR}
echo === Generating updated certificates === echo === Generating updated certificates ===
docker run --rm -t \ docker run --rm -t \
-w /target \ -w /target \
-e PROMENADE_DEBUG=$PROMENADE_DEBUG \ -e PROMENADE_DEBUG=$PROMENADE_DEBUG \
-v $(pwd):/target \ -v ${BUILD_DIR}:/target \
${IMAGE_PROMENADE} \ ${IMAGE_PROMENADE} \
promenade \ promenade \
generate-certs \ generate-certs \
-o example \ -o /target \
example/*.yaml $(ls ${BUILD_DIR})
echo === Building bootstrap scripts === echo === Building bootstrap scripts ===
docker run --rm -t \ docker run --rm -t \
-w /target \ -w /target \
-e PROMENADE_DEBUG=$PROMENADE_DEBUG \ -e PROMENADE_DEBUG=$PROMENADE_DEBUG \
-v $(pwd):/target \ -v ${BUILD_DIR}:/target \
${IMAGE_PROMENADE} \ ${IMAGE_PROMENADE} \
promenade \ promenade \
build-all \ build-all \
-o example/scripts \ -o /target \
--validators \ --validators \
example/*.yaml $(ls ${BUILD_DIR})
echo === Done === echo === Done ===
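Since the configuration source and build directory are now taken as arguments
(defaulting to ``examples/basic`` and ``build``), the same script should also
be able to render the other examples; a sketch for the new complete example:

.. code-block:: bash

    ./tools/basic-deployment.sh examples/complete build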

View File

@ -1,12 +1,13 @@
#!/usr/bin/env bash #!/usr/bin/env bash
set -ex set -eux
SCRIPT_DIR=$(dirname $0) SCRIPT_DIR=$(realpath $(dirname $0))
SOURCE_DIR=$(realpath $SCRIPT_DIR/..)
echo === Building image === echo === Building image ===
docker build -t quay.io/attcomdev/promenade:latest $(realpath $SCRIPT_DIR/..) docker build -t quay.io/attcomdev/promenade:latest ${SOURCE_DIR}
export PROMENADE_DEBUG=${PROMENADE_DEBUG:-1} export PROMENADE_DEBUG=${PROMENADE_DEBUG:-1}
exec $SCRIPT_DIR/build-example.sh exec $SCRIPT_DIR/basic-deployment.sh ${@}

View File

@ -1,3 +1,6 @@
set -e
set -o nounset
LIB_DIR=$(realpath $(dirname $BASH_SOURCE)) LIB_DIR=$(realpath $(dirname $BASH_SOURCE))
source $LIB_DIR/config.sh source $LIB_DIR/config.sh
@ -11,6 +14,6 @@ source $LIB_DIR/ssh.sh
source $LIB_DIR/validate.sh source $LIB_DIR/validate.sh
source $LIB_DIR/virsh.sh source $LIB_DIR/virsh.sh
if [ "x${PROMENADE_DEBUG}" = "x1" ]; then if [[ -v GATE_DEBUG && ${GATE_DEBUG} = "1" ]]; then
set -x set -x
fi fi

View File

@ -5,3 +5,24 @@ export PROMENADE_DEBUG=${PROMENADE_DEBUG:-0}
export REGISTRY_DATA_DIR=${REGISTRY_DATA_DIR:-/mnt/registry} export REGISTRY_DATA_DIR=${REGISTRY_DATA_DIR:-/mnt/registry}
export VIRSH_POOL=${VIRSH_POOL:-promenade} export VIRSH_POOL=${VIRSH_POOL:-promenade}
export VIRSH_POOL_PATH=${VIRSH_POOL_PATH:-/var/lib/libvirt/promenade} export VIRSH_POOL_PATH=${VIRSH_POOL_PATH:-/var/lib/libvirt/promenade}
config_configuration() {
    jq -cr '.configuration[]' < ${GATE_MANIFEST}
}

config_vm_memory() {
    jq -cr '.vm.memory' < ${GATE_MANIFEST}
}

config_vm_names() {
    jq -cr '.vm.names[]' < ${GATE_MANIFEST}
}

config_vm_ip() {
    NAME=${1}
    echo 192.168.77.1${NAME:1}
}

config_vm_vcpus() {
    jq -cr '.vm.vcpus' < ${GATE_MANIFEST}
}
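A minimal sketch of how these helpers are consumed elsewhere in the gate
tooling, assuming ``GATE_MANIFEST`` points at one of the JSON manifests under
``tools/g2/manifests``:

.. code-block:: bash

    export GATE_MANIFEST=tools/g2/manifests/resiliency.json

    for NAME in $(config_vm_names); do
        # config_vm_ip derives the address from the node name,
        # e.g. n2 -> 192.168.77.12
        echo "${NAME}: ip=$(config_vm_ip ${NAME})" \
             "vcpus=$(config_vm_vcpus) memory=$(config_vm_memory)"
    done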

View File

@ -2,14 +2,9 @@ GENESIS_NAME=n0
SSH_CONFIG_DIR=${WORKSPACE}/tools/g2/config-ssh SSH_CONFIG_DIR=${WORKSPACE}/tools/g2/config-ssh
TEMPLATE_DIR=${WORKSPACE}/tools/g2/templates TEMPLATE_DIR=${WORKSPACE}/tools/g2/templates
XML_DIR=${WORKSPACE}/tools/g2/xml XML_DIR=${WORKSPACE}/tools/g2/xml
VM_NAMES=( ALL_VM_NAMES=(
n0 n0
n1 n1
n2 n2
n3 n3
) )
vm_ip() {
NAME=${1}
echo 192.168.77.1${NAME:1}
}

View File

@ -1,4 +1,4 @@
if [[ "x${GATE_COLOR}" = "x1" ]]; then if [[ -v GATE_COLOR && ${GATE_COLOR} = "1" ]]; then
C_CLEAR="\e[0m" C_CLEAR="\e[0m"
C_ERROR="\e[38;5;160m" C_ERROR="\e[38;5;160m"
C_HEADER="\e[38;5;164m" C_HEADER="\e[38;5;164m"
@ -16,7 +16,9 @@ else
fi fi
log() { log() {
echo -e ${C_MUTE}$(date --utc)${C_CLEAR} $* 1>&2 d=$(date --utc)
echo -e ${C_MUTE}${d}${C_CLEAR} $* 1>&2
echo -e ${d} $* >> ${LOG_FILE}
} }
log_stage_diagnostic_header() { log_stage_diagnostic_header() {
@ -60,8 +62,10 @@ log_temp_dir() {
echo -e Working in ${C_TEMP}${TEMP_DIR}${C_CLEAR} echo -e Working in ${C_TEMP}${TEMP_DIR}${C_CLEAR}
} }
if [[ "x${PROMENADE_DEBUG}" = "x1" ]]; then if [[ -v GATE_DEBUG && ${GATE_DEBUG} = "1" ]]; then
export LOG_FILE=/dev/stderr export LOG_FILE=/dev/stderr
elif [[ -v TEMP_DIR ]]; then
export LOG_FILE=${TEMP_DIR}/gate.log
else else
export LOG_FILE=/dev/null export LOG_FILE=/dev/null
fi fi

View File

@ -2,12 +2,12 @@ registry_down() {
REGISTRY_ID=$(docker ps -qa -f name=registry) REGISTRY_ID=$(docker ps -qa -f name=registry)
if [ "x${REGISTRY_ID}" != "x" ]; then if [ "x${REGISTRY_ID}" != "x" ]; then
log Removing docker registry log Removing docker registry
docker rm -fv ${REGISTRY_ID} &> ${LOG_FILE} docker rm -fv ${REGISTRY_ID} &>> ${LOG_FILE}
fi fi
} }
registry_list_images() { registry_list_images() {
FILES=${@:-${WORKSPACE}/example/*.yaml} FILES=$(find $(config_configuration) -type f -name '*.yaml')
HOSTNAME_REGEX='[a-zA-Z0-9][a-zA-Z0-9_-]{0,62}' HOSTNAME_REGEX='[a-zA-Z0-9][a-zA-Z0-9_-]{0,62}'
DOMAIN_NAME_REGEX="${HOSTNAME_REGEX}(\.${HOSTNAME_REGEX})*" DOMAIN_NAME_REGEX="${HOSTNAME_REGEX}(\.${HOSTNAME_REGEX})*"
@ -31,9 +31,9 @@ registry_populate() {
for image in $(registry_list_images); do for image in $(registry_list_images); do
if ! docker pull localhost:5000/${image} &> /dev/null; then if ! docker pull localhost:5000/${image} &> /dev/null; then
log Loading image ${image} into local registry log Loading image ${image} into local registry
docker pull ${image} >& ${LOG_FILE} docker pull ${image} &>> ${LOG_FILE}
docker tag ${image} localhost:5000/${image} >& ${LOG_FILE} docker tag ${image} localhost:5000/${image} &>> ${LOG_FILE}
docker push localhost:5000/${image} >& ${LOG_FILE} docker push localhost:5000/${image} &>> ${LOG_FILE}
fi fi
done done
} }
@ -51,7 +51,7 @@ registry_up() {
RUNNING_REGISTRY_ID=$(docker ps -q -f name=registry) RUNNING_REGISTRY_ID=$(docker ps -q -f name=registry)
if [ "x${RUNNING_REGISTRY_ID}" = "x" -a "x${REGISTRY_ID}" != "x" ]; then if [ "x${RUNNING_REGISTRY_ID}" = "x" -a "x${REGISTRY_ID}" != "x" ]; then
log Removing stopped docker registry log Removing stopped docker registry
docker rm -fv ${REGISTRY_ID} &> ${LOG_FILE} docker rm -fv ${REGISTRY_ID} &>> ${LOG_FILE}
fi fi
if [ "x${REGISTRY_ID}" = "x" ]; then if [ "x${REGISTRY_ID}" = "x" ]; then
@ -62,6 +62,6 @@ registry_up() {
--restart=always \ --restart=always \
--name registry \ --name registry \
-v $REGISTRY_DATA_DIR:/var/lib/registry \ -v $REGISTRY_DATA_DIR:/var/lib/registry \
registry:2 &> ${LOG_FILE} registry:2 &>> ${LOG_FILE}
fi fi
} }

View File

@ -25,7 +25,7 @@ ssh_keypair_declare() {
log Validating SSH keypair exists log Validating SSH keypair exists
if [ ! -s ${SSH_CONFIG_DIR}/id_rsa ]; then if [ ! -s ${SSH_CONFIG_DIR}/id_rsa ]; then
log Generating SSH keypair log Generating SSH keypair
ssh-keygen -N '' -f ${SSH_CONFIG_DIR}/id_rsa > ${LOG_FILE} ssh-keygen -N '' -f ${SSH_CONFIG_DIR}/id_rsa &>> ${LOG_FILE}
fi fi
} }

View File

@ -11,12 +11,12 @@ img_base_declare() {
--name promenade-base.img \ --name promenade-base.img \
--format qcow2 \ --format qcow2 \
--capacity ${BASE_IMAGE_SIZE} \ --capacity ${BASE_IMAGE_SIZE} \
--prealloc-metadata &> ${LOG_FILE} --prealloc-metadata &>> ${LOG_FILE}
virsh vol-upload \ virsh vol-upload \
--vol promenade-base.img \ --vol promenade-base.img \
--file base.img \ --file base.img \
--pool ${VIRSH_POOL} &> ${LOG_FILE} --pool ${VIRSH_POOL} &>> ${LOG_FILE}
fi fi
} }
@ -27,7 +27,7 @@ iso_gen() {
log Removing existing cloud-init ISO for ${NAME} log Removing existing cloud-init ISO for ${NAME}
virsh vol-delete \ virsh vol-delete \
--pool ${VIRSH_POOL} \ --pool ${VIRSH_POOL} \
--vol cloud-init-${NAME}.iso &> ${LOG_FILE} --vol cloud-init-${NAME}.iso &>> ${LOG_FILE}
fi fi
log Creating cloud-init ISO for ${NAME} log Creating cloud-init ISO for ${NAME}
@ -35,7 +35,7 @@ iso_gen() {
mkdir -p ${ISO_DIR} mkdir -p ${ISO_DIR}
cd ${ISO_DIR} cd ${ISO_DIR}
export BR_IP_NODE=$(vm_ip ${NAME}) export BR_IP_NODE=$(config_vm_ip ${NAME})
export NAME export NAME
export SSH_PUBLIC_KEY=$(ssh_load_pubkey) export SSH_PUBLIC_KEY=$(ssh_load_pubkey)
envsubst < ${TEMPLATE_DIR}/user-data.sub > user-data envsubst < ${TEMPLATE_DIR}/user-data.sub > user-data
@ -50,18 +50,18 @@ iso_gen() {
-o cidata.iso \ -o cidata.iso \
meta-data \ meta-data \
network-config \ network-config \
user-data &> ${LOG_FILE} user-data &>> ${LOG_FILE}
virsh vol-create-as \ virsh vol-create-as \
--pool ${VIRSH_POOL} \ --pool ${VIRSH_POOL} \
--name cloud-init-${NAME}.iso \ --name cloud-init-${NAME}.iso \
--capacity $(stat -c %s ${ISO_DIR}/cidata.iso) \ --capacity $(stat -c %s ${ISO_DIR}/cidata.iso) \
--format raw &> ${LOG_FILE} --format raw &>> ${LOG_FILE}
virsh vol-upload \ virsh vol-upload \
--pool ${VIRSH_POOL} \ --pool ${VIRSH_POOL} \
--vol cloud-init-${NAME}.iso \ --vol cloud-init-${NAME}.iso \
--file ${ISO_DIR}/cidata.iso &> ${LOG_FILE} --file ${ISO_DIR}/cidata.iso &>> ${LOG_FILE}
} }
iso_path() { iso_path() {
@ -77,7 +77,7 @@ net_clean() {
net_declare() { net_declare() {
if ! virsh net-list --name | grep ^promenade$ > /dev/null; then if ! virsh net-list --name | grep ^promenade$ > /dev/null; then
log Creating promenade network log Creating promenade network
virsh net-create ${XML_DIR}/network.xml &> ${LOG_FILE} virsh net-create ${XML_DIR}/network.xml &>> ${LOG_FILE}
fi fi
} }
@ -85,25 +85,25 @@ pool_declare() {
log Validating virsh pool setup log Validating virsh pool setup
if ! virsh pool-uuid ${VIRSH_POOL} &> /dev/null; then if ! virsh pool-uuid ${VIRSH_POOL} &> /dev/null; then
log Creating pool ${VIRSH_POOL} log Creating pool ${VIRSH_POOL}
virsh pool-create-as --name ${VIRSH_POOL} --type dir --target ${VIRSH_POOL_PATH} &> ${LOG_FILE} virsh pool-create-as --name ${VIRSH_POOL} --type dir --target ${VIRSH_POOL_PATH} &>> ${LOG_FILE}
fi fi
} }
vm_clean() { vm_clean() {
NAME=${1} NAME=${1}
if virsh list --name | grep ${NAME} &> /dev/null; then if virsh list --name | grep ${NAME} &> /dev/null; then
virsh destroy ${NAME} &> ${LOG_FILE} virsh destroy ${NAME} &>> ${LOG_FILE}
fi fi
if virsh list --name --all | grep ${NAME} &> /dev/null; then if virsh list --name --all | grep ${NAME} &> /dev/null; then
log Removing VM ${NAME} log Removing VM ${NAME}
virsh undefine --remove-all-storage --domain ${NAME} &> ${LOG_FILE} virsh undefine --remove-all-storage --domain ${NAME} &>> ${LOG_FILE}
fi fi
} }
vm_clean_all() { vm_clean_all() {
log Removing all VMs in parallel log Removing all VMs in parallel
for NAME in ${VM_NAMES[@]}; do for NAME in ${ALL_VM_NAMES[@]}; do
vm_clean ${NAME} & vm_clean ${NAME} &
done done
wait wait
@ -122,13 +122,13 @@ vm_create() {
--graphics vnc,listen=0.0.0.0 \ --graphics vnc,listen=0.0.0.0 \
--noautoconsole \ --noautoconsole \
--network network=promenade \ --network network=promenade \
--vcpus 2 \ --vcpus $(config_vm_vcpus) \
--memory 2048 \ --memory $(config_vm_memory) \
--import \ --import \
--disk vol=${VIRSH_POOL}/promenade-${NAME}.img,format=qcow2,bus=virtio \ --disk vol=${VIRSH_POOL}/promenade-${NAME}.img,format=qcow2,bus=virtio \
--disk pool=${VIRSH_POOL},size=20,format=qcow2,bus=virtio \ --disk pool=${VIRSH_POOL},size=20,format=qcow2,bus=virtio \
--disk pool=${VIRSH_POOL},size=20,format=qcow2,bus=virtio \ --disk pool=${VIRSH_POOL},size=20,format=qcow2,bus=virtio \
--disk vol=${VIRSH_POOL}/cloud-init-${NAME}.iso,device=cdrom &> ${LOG_FILE} --disk vol=${VIRSH_POOL}/cloud-init-${NAME}.iso,device=cdrom &>> ${LOG_FILE}
ssh_wait ${NAME} ssh_wait ${NAME}
ssh_cmd ${NAME} sync ssh_cmd ${NAME} sync
@ -136,12 +136,12 @@ vm_create() {
vm_create_all() { vm_create_all() {
log Starting all VMs in parallel log Starting all VMs in parallel
for NAME in ${VM_NAMES[@]}; do for NAME in $(config_vm_names); do
vm_create ${NAME} & vm_create ${NAME} &
done done
wait wait
for NAME in ${VM_NAMES[@]}; do for NAME in $(config_vm_names); do
vm_validate ${NAME} vm_validate ${NAME}
done done
} }
@ -149,23 +149,23 @@ vm_create_all() {
vm_start() { vm_start() {
NAME=${1} NAME=${1}
log Starting VM ${NAME} log Starting VM ${NAME}
virsh start ${NAME} &> ${LOG_FILE} virsh start ${NAME} &>> ${LOG_FILE}
ssh_wait ${NAME} ssh_wait ${NAME}
} }
vm_stop() { vm_stop() {
NAME=${1} NAME=${1}
log Stopping VM ${NAME} log Stopping VM ${NAME}
virsh destroy ${NAME} &> ${LOG_FILE} virsh destroy ${NAME} &>> ${LOG_FILE}
} }
vm_restart_all() { vm_restart_all() {
for NAME in ${VM_NAMES[@]}; do for NAME in $(config_vm_names); do
vm_stop ${NAME} & vm_stop ${NAME} &
done done
wait wait
for NAME in ${VM_NAMES[@]}; do for NAME in $(config_vm_names); do
vm_start ${NAME} & vm_start ${NAME} &
done done
wait wait
@ -174,7 +174,7 @@ vm_restart_all() {
vm_validate() { vm_validate() {
NAME=${1} NAME=${1}
if ! virsh list --name | grep ${NAME} &> /dev/null; then if ! virsh list --name | grep ${NAME} &> /dev/null; then
log VM ${NAME} did not start correctly. Use PROMENADE_DEBUG=1 for more details. log VM ${NAME} did not start correctly.
exit 1 exit 1
fi fi
} }
@ -185,7 +185,7 @@ vol_create_root() {
if virsh vol-list --pool ${VIRSH_POOL} | grep promenade-${NAME}.img &> /dev/null; then if virsh vol-list --pool ${VIRSH_POOL} | grep promenade-${NAME}.img &> /dev/null; then
log Deleting previous volume promenade-${NAME}.img log Deleting previous volume promenade-${NAME}.img
virsh vol-delete --pool ${VIRSH_POOL} promenade-${NAME}.img &> ${LOG_FILE} virsh vol-delete --pool ${VIRSH_POOL} promenade-${NAME}.img &>> ${LOG_FILE}
fi fi
log Creating root volume for ${NAME} log Creating root volume for ${NAME}
@ -195,5 +195,5 @@ vol_create_root() {
--capacity 64G \ --capacity 64G \
--format qcow2 \ --format qcow2 \
--backing-vol promenade-base.img \ --backing-vol promenade-base.img \
--backing-vol-format qcow2 &> ${LOG_FILE} --backing-vol-format qcow2 &>> ${LOG_FILE}
} }

View File

@ -1,4 +1,7 @@
{ {
"configuration": [
"examples/complete"
],
"stages": [ "stages": [
{ {
"name": "Gate Setup", "name": "Gate Setup",
@ -24,6 +27,12 @@
"name": "Genesis", "name": "Genesis",
"script": "genesis.sh" "script": "genesis.sh"
} }
] ],
"vm": {
"memory": 8096,
"names": [
"n0"
],
"vcpus": 4
}
} }

View File

@ -1,4 +1,7 @@
{ {
"configuration": [
"examples/basic"
],
"stages": [ "stages": [
{ {
"name": "Gate Setup", "name": "Gate Setup",
@ -35,7 +38,10 @@
}, },
{ {
"name": "Reprovision Genesis", "name": "Reprovision Genesis",
"script": "reprovision-genesis.sh" "script": "reprovision-genesis.sh",
"arguments": [
"n1 n2 n3"
]
}, },
{ {
"name": "Hard Reboot Cluster", "name": "Hard Reboot Cluster",
@ -45,5 +51,15 @@
"name": "Move Master", "name": "Move Master",
"script": "move-master.sh" "script": "move-master.sh"
} }
] ],
"vm": {
"memory": 2048,
"names": [
"n0",
"n1",
"n2",
"n3"
],
"vcpus": 2
}
} }

View File

@ -1,4 +1,7 @@
{ {
"configuration": [
"examples/complete"
],
"stages": [ "stages": [
{ {
"name": "Build Image", "name": "Build Image",

View File

@ -1,4 +1,7 @@
{ {
"configuration": [
"examples/basic"
],
"stages": [ "stages": [
{ {
"name": "Gate Setup", "name": "Gate Setup",
@ -28,17 +31,20 @@
"name": "Join Masters", "name": "Join Masters",
"script": "join-masters.sh", "script": "join-masters.sh",
"arguments": [ "arguments": [
"n1", "n1"
"n2"
] ]
}, },
{
"name": "Reprovision Genesis",
"script": "reprovision-genesis.sh"
},
{ {
"name": "Hard Reboot Cluster", "name": "Hard Reboot Cluster",
"script": "hard-reboot-cluster.sh" "script": "hard-reboot-cluster.sh"
} }
] ],
"vm": {
"memory": 2048,
"names": [
"n0",
"n1"
],
"vcpus": 2
}
} }

View File

@ -0,0 +1,57 @@
{
  "configuration": [
    "examples/complete"
  ],
  "stages": [
    {
      "name": "Gate Setup",
      "script": "gate-setup.sh"
    },
    {
      "name": "Build Image",
      "script": "build-image.sh"
    },
    {
      "name": "Generate Certificates",
      "script": "generate-certificates.sh"
    },
    {
      "name": "Build Scripts",
      "script": "build-scripts.sh"
    },
    {
      "name": "Create VMs",
      "script": "create-vms.sh"
    },
    {
      "name": "Genesis",
      "script": "genesis.sh"
    },
    {
      "name": "Join Masters",
      "script": "join-masters.sh",
      "arguments": [
        "n1"
      ]
    },
    {
      "name": "Reprovision Genesis",
      "script": "reprovision-genesis.sh",
      "arguments": [
        "n1"
      ]
    },
    {
      "name": "Hard Reboot Cluster",
      "script": "hard-reboot-cluster.sh"
    }
  ],
  "vm": {
    "memory": 8096,
    "names": [
      "n0",
      "n1"
    ],
    "vcpus": 4
  }
}

View File

@ -5,4 +5,8 @@ set -e
source ${GATE_UTILS} source ${GATE_UTILS}
log Building docker image ${IMAGE_PROMENADE} log Building docker image ${IMAGE_PROMENADE}
sudo docker build -q -t ${IMAGE_PROMENADE} ${WORKSPACE} docker build -q -t ${IMAGE_PROMENADE} ${WORKSPACE}
log Loading Promenade image ${IMAGE_PROMENADE} into local registry
docker tag ${IMAGE_PROMENADE} localhost:5000/${IMAGE_PROMENADE} &>> ${LOG_FILE}
docker push localhost:5000/${IMAGE_PROMENADE} &>> ${LOG_FILE}

View File

@ -8,7 +8,7 @@ cd ${TEMP_DIR}
mkdir scripts mkdir scripts
log Building scripts log Building scripts
sudo docker run --rm -t \ docker run --rm -t \
-w /target \ -w /target \
-v ${TEMP_DIR}:/target \ -v ${TEMP_DIR}:/target \
-e PROMENADE_DEBUG=${PROMENADE_DEBUG} \ -e PROMENADE_DEBUG=${PROMENADE_DEBUG} \

View File

@ -7,13 +7,15 @@ source ${GATE_UTILS}
OUTPUT_DIR=${TEMP_DIR}/config OUTPUT_DIR=${TEMP_DIR}/config
mkdir -p ${OUTPUT_DIR} mkdir -p ${OUTPUT_DIR}
log Copying example configuration for source_dir in $(config_configuration); do
cp ${WORKSPACE}/example/*.yaml ${OUTPUT_DIR} log Copying configuration from ${source_dir}
cp ${WORKSPACE}/${source_dir}/*.yaml ${OUTPUT_DIR}
done
registry_replace_references ${OUTPUT_DIR}/*.yaml registry_replace_references ${OUTPUT_DIR}/*.yaml
log Generating certificates log Generating certificates
sudo docker run --rm -t \ docker run --rm -t \
-w /target \ -w /target \
-v ${OUTPUT_DIR}:/target \ -v ${OUTPUT_DIR}:/target \
-e PROMENADE_DEBUG=${PROMENADE_DEBUG} \ -e PROMENADE_DEBUG=${PROMENADE_DEBUG} \

View File

@ -20,5 +20,5 @@ done
validate_cluster n0 validate_cluster n0
validate_etcd_membership kubernetes n0 genesis n1 n2 n3 validate_etcd_membership kubernetes n0 genesis ${@}
validate_etcd_membership calico n0 n0 n1 n2 n3 validate_etcd_membership calico n0 n0 ${@}

View File

@ -4,6 +4,8 @@ set -e
source ${GATE_UTILS} source ${GATE_UTILS}
EXPECTED_MEMBERS=${@}
promenade_teardown_node ${GENESIS_NAME} n1 promenade_teardown_node ${GENESIS_NAME} n1
vm_clean ${GENESIS_NAME} vm_clean ${GENESIS_NAME}
@ -16,5 +18,5 @@ ssh_cmd ${GENESIS_NAME} /root/promenade/validate-${GENESIS_NAME}.sh
validate_cluster n1 validate_cluster n1
validate_etcd_membership kubernetes n1 n1 n2 n3 validate_etcd_membership kubernetes n1 ${EXPECTED_MEMBERS}
validate_etcd_membership calico n1 n1 n2 n3 validate_etcd_membership calico n1 ${EXPECTED_MEMBERS}

View File

@ -10,10 +10,10 @@ chmod -R 755 ${TEMP_DIR}
export GATE_COLOR=${GATE_COLOR:-1} export GATE_COLOR=${GATE_COLOR:-1}
source ${GATE_UTILS} MANIFEST_ARG=${1:-resiliency}
export GATE_MANIFEST=${WORKSPACE}/tools/g2/manifests/${MANIFEST_ARG}.json
MANIFEST_ARG=${1:-full} source ${GATE_UTILS}
MANIFEST=${WORKSPACE}/tools/g2/manifests/${MANIFEST_ARG}.json
STAGES_DIR=${WORKSPACE}/tools/g2/stages STAGES_DIR=${WORKSPACE}/tools/g2/stages
@ -21,7 +21,7 @@ log_temp_dir ${TEMP_DIR}
echo echo
STAGES=$(mktemp) STAGES=$(mktemp)
jq -cr '.stages | .[]' ${MANIFEST} > ${STAGES} jq -cr '.stages | .[]' ${GATE_MANIFEST} > ${STAGES}
# NOTE(mark-burnett): It is necessary to use a non-stdin file descriptor for # NOTE(mark-burnett): It is necessary to use a non-stdin file descriptor for
# the read below, since we will be calling SSH, which will consume the # the read below, since we will be calling SSH, which will consume the
@ -34,7 +34,7 @@ while read -u 3 stage; do
if echo ${stage} | jq -e .arguments > /dev/null; then if echo ${stage} | jq -e .arguments > /dev/null; then
ARGUMENTS=($(echo ${stage} | jq -r '.arguments[]')) ARGUMENTS=($(echo ${stage} | jq -r '.arguments[]'))
else else
ARGUMENTS=() ARGUMENTS=
fi fi
log_stage_header "${NAME}" log_stage_header "${NAME}"
@ -42,7 +42,7 @@ while read -u 3 stage; do
log_stage_success log_stage_success
else else
log_color_reset log_color_reset
log_stage_error "${NAME}" ${TEMP_DIR} log_stage_error "${NAME}" ${LOG_FILE}
if echo ${stage} | jq -e .on_error > /dev/null; then if echo ${stage} | jq -e .on_error > /dev/null; then
log_stage_diagnostic_header log_stage_diagnostic_header
ON_ERROR=${WORKSPACE}/$(echo ${stage} | jq -r .on_error) ON_ERROR=${WORKSPACE}/$(echo ${stage} | jq -r .on_error)
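The stage loop depends on that non-stdin file descriptor; a stripped-down
sketch of the pattern (names follow the surrounding script):

.. code-block:: bash

    STAGES=$(mktemp)
    jq -cr '.stages | .[]' ${GATE_MANIFEST} > ${STAGES}

    # Read the stage definitions on fd 3: ssh invoked inside the loop
    # would otherwise drain the remaining stages from stdin.
    while read -u 3 stage; do
        NAME=$(echo ${stage} | jq -r .name)
        log_stage_header "${NAME}"
    done 3< ${STAGES}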

View File

@ -6,5 +6,5 @@ IMAGES_FILE=$(dirname $0)/IMAGES
IFS=, IFS=,
grep -v '^#.*' $IMAGES_FILE | while read src tag dst; do grep -v '^#.*' $IMAGES_FILE | while read src tag dst; do
sed -i "s;registry:5000/$dst:$tag;$src:$tag;g" example/*.yaml sed -i "s;registry:5000/$dst:$tag;$src:$tag;g" examples/basic/*.yaml
done done

View File

@ -6,5 +6,5 @@ IMAGES_FILE=$(dirname $0)/IMAGES
IFS=, IFS=,
grep -v '^#.*' $IMAGES_FILE | while read src tag dst; do grep -v '^#.*' $IMAGES_FILE | while read src tag dst; do
sed -i "s;$src:$tag;registry:5000/$dst:$tag;g" example/*.yaml sed -i "s;$src:$tag;registry:5000/$dst:$tag;g" examples/basic/*.yaml
done done
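Both of these scripts split the shared ``IMAGES`` file on commas (``IFS=,``),
so each non-comment line is a ``src,tag,dst`` triple; a hypothetical entry
(the real file's contents are not shown in this diff):

.. code-block:: bash

    # src,tag,dst
    gcr.io/google_containers/hyperkube-amd64,v1.8.0,gcr.io/google_containers/hyperkube-amd64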

View File

@ -1,33 +0,0 @@
#!/usr/bin/env bash
set -ex
WORKDIR=$(mktemp -d)
function cleanup {
    rm -rf "${WORKDIR}"
}
trap cleanup EXIT
sudo apt-get update
sudo apt-get install -y --no-install-recommends \
    curl \
    unzip
git clone https://github.com/jakobadam/packer-qemu-templates.git ${WORKDIR}
cd ${WORKDIR}/ubuntu
sed -i -e 's#http://releases.ubuntu.com/16.04/ubuntu-16.04-server-amd64.iso#http://old-releases.ubuntu.com/releases/xenial/ubuntu-16.04.2-server-amd64.iso#g' ubuntu.json
sed -i -e 's/de5ee8665048f009577763efbf4a6f0558833e59/f529548fa7468f2d8413b8427d8e383b830df5f6/g' ubuntu.json
sed -i -e 's#http://releases.ubuntu.com/16.04/ubuntu-16.04.1-server-amd64.iso#http://old-releases.ubuntu.com/releases/xenial/ubuntu-16.04.2-server-amd64.iso#g' ubuntu-vagrant.json
sed -i -e 's/de5ee8665048f009577763efbf4a6f0558833e59/f529548fa7468f2d8413b8427d8e383b830df5f6/g' ubuntu-vagrant.json
sed -i -e 's#http://releases.ubuntu.com/16.04/ubuntu-16.04.3-server-amd64.iso#http://old-releases.ubuntu.com/releases/xenial/ubuntu-16.04.2-server-amd64.iso#g' ubuntu1604.json
sed -i -e 's/a06cd926f5855d4f21fb4bc9978a35312f815fbda0d0ef7fdc846861f4fc4600/737ae7041212c628de5751d15c3016058b0e833fdc32e7420209b76ca3d0a535/g' ubuntu1604.json
sed -i -e 's#http://releases.ubuntu.com/16.04/ubuntu-16.04-server-amd64.iso#http://old-releases.ubuntu.com/releases/xenial/ubuntu-16.04.1-server-amd64.iso#g' ubuntu.json
PACKER_LOG="yes" packer build -var-file=ubuntu1604.json ubuntu-vagrant.json
vagrant box add promenade/ubuntu1604 box/libvirt/ubuntu1604-1.box

View File

@ -1,10 +0,0 @@
#!/usr/bin/env bash
set -ex
SCRIPT_DIR=$(dirname $0)
$SCRIPT_DIR/install-vagrant-nfs-deps.sh
$SCRIPT_DIR/install-vagrant-libvirt.sh
$SCRIPT_DIR/install-packer.sh
$SCRIPT_DIR/build-vagrant-box.sh

View File

@ -1,21 +0,0 @@
#!/usr/bin/env bash
set -ex
PACKER_VERSION=${PACKER_VERSION:-1.0.3}
WORKDIR=$(mktemp -d)
function cleanup {
    rm -rf "${WORKDIR}"
}
trap cleanup EXIT
cd ${WORKDIR}
curl -Lo packer.zip https://releases.hashicorp.com/packer/${PACKER_VERSION}/packer_${PACKER_VERSION}_linux_amd64.zip
unzip packer.zip
sudo mv packer /usr/local/bin/

View File

@ -1,22 +0,0 @@
#!/usr/bin/env bash
set -ex
sudo apt-get update
sudo apt-get build-dep -y \
    ruby-libvirt

sudo apt-get install -y --no-install-recommends \
    build-essential \
    dnsmasq \
    ebtables \
    libvirt-bin \
    libvirt-dev \
    libxml2-dev \
    libxslt-dev \
    qemu \
    ruby-dev \
    zlib1g-dev
vagrant plugin install vagrant-libvirt

View File

@ -1,9 +0,0 @@
#!/usr/bin/env bash
set -ex
sudo apt-get update
sudo apt-get install -y --no-install-recommends \
    nfs-common \
    nfs-kernel-server \
    portmap