Merge pull request #41 from mark-burnett/testing-framework

Initial testing framework
Scott Hussey 2017-07-28 14:54:56 -05:00 committed by GitHub
commit d0c1256866
10 changed files with 285 additions and 26 deletions

View File

@ -88,7 +88,8 @@ The `Network` document must contain:
- `dns_servers` - A list of upstream DNS server IPs.

Optionally, proxy settings can be specified here as well. These should all
generally be set together: `http_proxy`, `https_proxy`, `no_proxy`. `no_proxy`
must include all master IP addresses and the `kubernetes` service name.

Here's an example `Network` document:
@ -111,7 +112,7 @@ spec:
    - 8.8.4.4
  http_proxy: http://proxy.example.com:8080
  https_proxy: http://proxy.example.com:8080
  no_proxy: 192.168.77.10,192.168.77.11,192.168.77.12,127.0.0.1,kubernetes,kubernetes.default.svc.cluster.local
```

The `Versions` document must define the Promenade image to be used and the

View File

@ -31,29 +31,22 @@ vagrant up
Start the genesis node:

```bash
vagrant ssh n0 -c 'sudo bash /vagrant/configs/up.sh /vagrant/configs/n0.yaml'
```

Join the master nodes:

```bash
vagrant ssh n1 -c 'sudo bash /vagrant/configs/up.sh /vagrant/configs/n1.yaml'
vagrant ssh n2 -c 'sudo bash /vagrant/configs/up.sh /vagrant/configs/n2.yaml'
```

Join the worker node:

```bash
vagrant ssh n3 -c 'sudo bash /vagrant/configs/up.sh /vagrant/configs/n3.yaml'
```

To use Promenade from behind a proxy, simply add proxy settings to the
promenade `Network` configuration document using the keys `http_proxy`,
`https_proxy`, and `no_proxy` before running `generate`.
Note that it is important to specify `no_proxy` to include `kubernetes` and the
IP addresses of all the master nodes.
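
For example, after adding the proxy keys to the `Network` document, the
configuration can be regenerated with the same `generate` invocation that
`tools/dev-build.sh` uses (the image tag and the input/output paths below are
simply the ones from that script; adjust them to your checkout):

```bash
# Re-render the per-node configuration after editing the Network document.
# Paths assume the repository root is the current working directory.
docker run --rm -t \
    -v $(pwd):/target quay.io/attcomdev/promenade:latest \
    promenade -v \
        generate \
        -c /target/example/vagrant-input-config.yaml \
        -o /target/configs
```
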
### Building the image
```bash
@ -69,9 +62,11 @@ docker save -o promenade.tar promenade:local
Then on a node:

```bash
PROMENADE_LOAD_IMAGE=/vagrant/promenade.tar bash /vagrant/up.sh /vagrant/path/to/node-config.yaml
```
These commands are combined in a convenience script at `tools/dev-build.sh`.
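
A possible developer loop, assuming the repository root is the directory
Vagrant shares into the VMs at `/vagrant` and that the generated configs
follow the `configs/n0.yaml` layout shown above:

```bash
# Rebuild the image, regenerate configs, and save promenade.tar (see tools/dev-build.sh).
./tools/dev-build.sh

# Then bring up the genesis node using the freshly built image.
vagrant ssh n0 -c 'sudo PROMENADE_LOAD_IMAGE=/vagrant/promenade.tar bash /vagrant/configs/up.sh /vagrant/configs/n0.yaml'
```
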
To build the image from behind a proxy, you can:

```bash
@ -82,13 +77,5 @@ docker build --build-arg http_proxy=$http_proxy --build-arg https_proxy=$http_pr
## Using Promenade Behind a Proxy

To use Promenade from behind a proxy, use the proxy settings described in the
[configuration docs](configuration.md).

View File

@ -30,13 +30,13 @@ class Generator:
        assert self.input_config['Cluster'].metadata['name'] \
            == self.input_config['Network'].metadata['cluster']

    def generate_additional_scripts(self, output_dir):
        r = renderer.Renderer(config=self.input_config,
                              target_dir=output_dir)
        r.render_generate_files()

    def generate_all(self, output_dir):
        self.generate_additional_scripts(output_dir)

        cluster = self.input_config['Cluster']
        network = self.input_config['Network']

View File

@ -14,3 +14,14 @@ apt-get install -y --no-install-recommends \
systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet

# Point DNS at the node-local resolver first, then the configured upstream servers.
cat <<EOF > /etc/resolv.conf
options timeout:1 attempts:1
nameserver 127.0.0.1
{%- for server in config['Network']['dns_servers'] %}
nameserver {{ server }}
{%- endfor %}
EOF
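
For reference, with the example upstream servers used elsewhere in this change
(8.8.8.8 and 8.8.4.4), the rendered file puts the node-local resolver first; a
rough, illustrative way to eyeball this on a node:

```bash
# Illustrative check of the rendered resolver configuration on a node.
# Expected shape with the example upstream servers:
#   options timeout:1 attempts:1
#   nameserver 127.0.0.1
#   nameserver 8.8.8.8
#   nameserver 8.8.4.4
cat /etc/resolv.conf
getent hosts quay.io    # basic end-to-end resolution check
```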

View File

@ -0,0 +1,5 @@
{% include "common_validation.sh" with context %}
wait_for_ready_nodes 1
validate_kubectl_logs

View File

@ -0,0 +1,6 @@
{% include "common_validation.sh" with context %}
EXPECTED_NODE_COUNT={{ config['Cluster']['nodes'] | length }}
wait_for_ready_nodes $EXPECTED_NODE_COUNT
validate_kubectl_logs

View File

@ -0,0 +1,136 @@
#!/usr/bin/env bash
if [ "$(id -u)" != "0" ]; then
echo "This script must be run as root." 1>&2
exit 1
fi
set -ex
export KUBECONFIG=/etc/kubernetes/admin/kubeconfig.yaml
function log {
echo $* 1>&2
}
# Dump state for any containers that exited, to aid debugging failures.
function report_docker_exited_containers {
    for container_id in $(docker ps -q --filter "status=exited"); do
        log Report for exited container $container_id
        docker inspect $container_id
        docker logs $container_id
    done
}

function report_docker_state {
    log General docker state report
    docker info
    docker ps -a
    report_docker_exited_containers
}

function report_kube_state {
    log General cluster state report
    kubectl get nodes 1>&2
    kubectl get --all-namespaces pods -o wide 1>&2
}

# Report as much state as possible, then exit with failure.
function fail {
    report_docker_state
    report_kube_state
    exit 1
}
# Wait (default 600s) for the given number of nodes to report Ready.
function wait_for_ready_nodes {
    set +x

    NODES=$1
    SECONDS=${2:-600}

    log $(date) Waiting $SECONDS seconds for $NODES Ready nodes.

    NODE_READY_JSONPATH='{range .items[*]}{@.metadata.name}:{range @.status.conditions[?(@.type=="Ready")]}{@.type}={@.status}{"\n"}{end}{end}'

    end=$(($(date +%s) + $SECONDS))
    while true; do
        READY_NODE_COUNT=$(kubectl get nodes -o jsonpath="${NODE_READY_JSONPATH}" | grep "Ready=True" | wc -l)
        if [ $NODES -ne $READY_NODE_COUNT ]; then
            now=$(date +%s)
            if [ $now -gt $end ]; then
                log $(date) Nodes were not all ready before timeout.
                fail
            fi
            sleep 5
        else
            log $(date) Found expected nodes.
            break
        fi
    done

    set -x
}
# Wait (default 120s) for the named pod to reach Succeeded; fail on Failed or timeout.
function wait_for_pod_termination {
    set +x

    NAMESPACE=$1
    POD_NAME=$2
    SECONDS=${3:-120}

    log $(date) Waiting $SECONDS seconds for termination of pod $POD_NAME

    POD_PHASE_JSONPATH='{.status.phase}'

    end=$(($(date +%s) + $SECONDS))
    while true; do
        POD_PHASE=$(kubectl --namespace $NAMESPACE get -o jsonpath="${POD_PHASE_JSONPATH}" pod $POD_NAME)
        if [ "x$POD_PHASE" = "xSucceeded" ]; then
            log $(date) Pod $POD_NAME succeeded.
            break
        elif [ "x$POD_PHASE" = "xFailed" ]; then
            log $(date) Pod $POD_NAME failed.
            kubectl --namespace $NAMESPACE get -o yaml pod $POD_NAME 1>&2
            fail
        else
            now=$(date +%s)
            if [ $now -gt $end ]; then
                log $(date) Pod did not terminate before timeout.
                kubectl --namespace $NAMESPACE get -o yaml pod $POD_NAME 1>&2
                fail
            fi
            sleep 1
        fi
    done

    set -x
}
# Launch a short-lived pod and check that kubectl can retrieve its logs.
function validate_kubectl_logs {
    NAMESPACE=default
    POD_NAME=log-test-$(date +%s)

    cat <<EOPOD | kubectl --namespace $NAMESPACE apply -f -
---
apiVersion: v1
kind: Pod
metadata:
  name: $POD_NAME
spec:
  restartPolicy: Never
  containers:
    - name: noisy
      image: busybox
      imagePullPolicy: IfNotPresent
      command:
        - /bin/echo
        - EXPECTED RESULT
...
EOPOD

    wait_for_pod_termination $NAMESPACE $POD_NAME

    ACTUAL_LOGS=$(kubectl logs $POD_NAME)
    if [ "x$ACTUAL_LOGS" != "xEXPECTED RESULT" ]; then
        log Got unexpected logs:
        kubectl --namespace $NAMESPACE logs $POD_NAME 1>&2
        fail
    fi
}
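
The genesis and join validation templates above include this library and then
call `wait_for_ready_nodes` and `validate_kubectl_logs`. A rendered validation
script could be run the same way as `up.sh`; the filename below is hypothetical
and depends on what the generator actually emits into the configs directory:

```bash
# Hypothetical filename; use whichever validation script the generator emits.
vagrant ssh n0 -c 'sudo bash /vagrant/configs/validate-genesis.sh'
```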

tools/dev-build.sh Executable file
View File

@ -0,0 +1,23 @@
#!/usr/bin/env bash
set -ex
echo === Cleaning up old data ===
rm -rf promenade.tar configs
mkdir configs

echo === Building image ===
docker build -t quay.io/attcomdev/promenade:latest .

echo === Generating updated configuration ===
docker run --rm -t \
    -v $(pwd):/target quay.io/attcomdev/promenade:latest \
    promenade -v \
        generate \
        -c /target/example/vagrant-input-config.yaml \
        -o /target/configs

echo === Saving image ===
docker save -o promenade.tar quay.io/attcomdev/promenade:latest

echo === Done ===

tools/generated_configs/.gitignore vendored Normal file
View File

View File

@ -0,0 +1,90 @@
---
apiVersion: promenade/v1
kind: Cluster
metadata:
  name: example
  target: none
spec:
  nodes:
    G_HOSTNAME:
      ip: GENESIS
      kubernetes_interface: G_IFACE
      roles:
        - master
        - genesis
      additional_labels:
        - beta.kubernetes.io/arch=amd64
    M1_HOSTNAME:
      ip: MASTER_1
      kubernetes_interface: M1_IFACE
      roles:
        - master
      additional_labels:
        - beta.kubernetes.io/arch=amd64
    M2_HOSTNAME:
      ip: MASTER_2
      kubernetes_interface: M2_IFACE
      roles:
        - master
      additional_labels:
        - beta.kubernetes.io/arch=amd64
    W_HOSTNAME:
      ip: WORKER
      kubernetes_interface: W_IFACE
      roles:
        - worker
      additional_labels:
        - beta.kubernetes.io/arch=amd64
---
apiVersion: promenade/v1
kind: Network
metadata:
  cluster: example
  name: example
  target: all
spec:
  cluster_domain: cluster.local
  cluster_dns: 10.96.0.10
  kube_service_ip: 10.96.0.1
  pod_ip_cidr: 10.97.0.0/16
  service_ip_cidr: 10.96.0.0/16
  calico_etcd_service_ip: 10.96.232.136
  dns_servers:
    - 8.8.8.8
    - 8.8.4.4
  #http_proxy: http://proxy.example.com:8080
  #https_proxy: https://proxy.example.com:8080
---
apiVersion: promenade/v1
kind: Versions
metadata:
  cluster: example
  name: example
  target: all
spec:
  images:
    armada: quay.io/attcomdev/armada:v0.5.1
    calico:
      cni: quay.io/calico/cni:v1.9.1
      etcd: quay.io/coreos/etcd:v3.2.1
      node: quay.io/calico/node:v1.3.0
      policy-controller: quay.io/calico/kube-policy-controller:v0.6.0
    kubernetes:
      apiserver: gcr.io/google_containers/hyperkube-amd64:v1.6.4
      controller-manager: quay.io/attcomdev/kube-controller-manager:v1.6.4
      dns:
        dnsmasq: gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.2
        kubedns: gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.2
        sidecar: gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.2
      etcd: quay.io/coreos/etcd:v3.2.1
      kubectl: gcr.io/google_containers/hyperkube-amd64:v1.6.4
      proxy: gcr.io/google_containers/hyperkube-amd64:v1.6.4
      scheduler: gcr.io/google_containers/hyperkube-amd64:v1.6.4
    promenade: quay.io/attcomdev/promenade:latest
    tiller: gcr.io/kubernetes-helm/tiller:v2.4.2
  packages:
    docker: docker.io=1.12.6-0ubuntu1~16.04.1
    dnsmasq: dnsmasq=2.75-1ubuntu0.16.04.2
    socat: socat=1.7.3.1-1
    additional_packages:
      - ceph-common=10.2.7-0ubuntu0.16.04.1
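
The uppercase tokens (`G_HOSTNAME`, `GENESIS`, `G_IFACE`, and so on) are
placeholders to be filled in before the file is passed to `promenade generate`.
A minimal sketch of one way to substitute them with `sed`; the hostnames,
interface name, worker IP, and file names are assumptions for illustration,
with only the master IPs taken from the example `no_proxy` value earlier in
this change:

```bash
# Substitute the placeholders to produce a concrete input config.
# n0-n3, enp0s8, 192.168.77.13, and the file names are illustrative assumptions.
sed -e 's/G_HOSTNAME/n0/'  -e 's/GENESIS/192.168.77.10/'  -e 's/G_IFACE/enp0s8/' \
    -e 's/M1_HOSTNAME/n1/' -e 's/MASTER_1/192.168.77.11/' -e 's/M1_IFACE/enp0s8/' \
    -e 's/M2_HOSTNAME/n2/' -e 's/MASTER_2/192.168.77.12/' -e 's/M2_IFACE/enp0s8/' \
    -e 's/W_HOSTNAME/n3/'  -e 's/WORKER/192.168.77.13/'   -e 's/W_IFACE/enp0s8/' \
    template.yaml > input-config.yaml
```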