initial commit
parent 9db4577516
commit 5fdb1d1bca

Changed files: hosts-deploy.yaml, roles/deploy-kubelet/templates/kubeconfig, README.md (239 lines changed)

@@ -1,2 +1,237 @@
# Promenade: Manually Self-hosted Kubernetes via Bootkube

A small howto on how to bring up a self-hosted Kubernetes cluster.

We'll use [bootkube](https://github.com/kubernetes-incubator/bootkube) to initiate the master components. First we'll render the assets necessary for bringing up the control plane (apiserver, controller-manager, scheduler, etc.). Then we'll start the kubelets, whose job it is to start those assets; on their own they can't do much yet, because there's no API server. Running `bootkube` once then kicks things off. At a high level, the bootstrapping process looks like this:

![Self-Hosted](./img/self-hosted-moving-parts.png?raw=true "Self-hosted-moving-parts")

Image taken from the [self-hosted proposal](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/self-hosted-kubernetes.md).

This is how the final cluster looks from a `kubectl` perspective:

![Screenshot](./img/self-hosted.png?raw=true "Screenshot")

Let's start!

## Temporary apiserver: `bootkube`

### Download

```
wget https://github.com/kubernetes-incubator/bootkube/releases/download/v0.3.9/bootkube.tar.gz
tar xvzf bootkube.tar.gz
sudo cp bin/linux/bootkube /usr/bin/
```

### Render the Assets

Exchange `10.7.183.59` for the node you are working on. If you have DNS available, group all master node IP addresses behind a CNAME record and provide that instead.

```
bootkube render --asset-dir=assets --experimental-self-hosted-etcd --etcd-servers=http://10.3.0.15:2379 --api-servers=https://10.7.183.59:443
```

This will generate several things:

- manifests for running the apiserver, controller-manager, scheduler, flannel, etcd, DNS and kube-proxy
- a `kubeconfig` file for connecting to and authenticating with the apiserver
- TLS assets

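The generated layout can be sanity-checked before the files are copied around in the later steps. A minimal sketch (the `auth/`, `manifests/` and `tls/` subdirectory names are the bootkube v0.3.x defaults referenced later in this howto):

```shell
# Fail early if the rendered asset directory is missing the pieces the
# later steps depend on (kubeconfig, manifests, TLS material).
check_assets() {
  dir="$1"
  for sub in auth/kubeconfig manifests tls; do
    if [ ! -e "$dir/$sub" ]; then
      echo "missing: $sub"
      return 1
    fi
  done
  echo "assets ok"
}
```

For example, run `check_assets assets` right after `bootkube render`.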
## Start the Master Kubelet

### Download `hyperkube`

```
wget http://storage.googleapis.com/kubernetes-release/release/v1.5.3/bin/linux/amd64/hyperkube -O ./hyperkube
sudo mv hyperkube /usr/bin/hyperkube
sudo chmod 755 /usr/bin/hyperkube
```

### Install CNI

```
sudo mkdir -p /opt/cni/bin
wget https://github.com/containernetworking/cni/releases/download/v0.4.0/cni-amd64-v0.4.0.tbz2
sudo tar xjf cni-amd64-v0.4.0.tbz2 -C /opt/cni/bin/
```

### Copy Configuration Files

```
sudo cp assets/auth/kubeconfig /etc/kubernetes/
sudo cp -a assets/manifests /etc/kubernetes/
```

### Start the Kubelet

```
sudo hyperkube kubelet --kubeconfig=/etc/kubernetes/kubeconfig \
  --require-kubeconfig \
  --cni-conf-dir=/etc/kubernetes/cni/net.d \
  --network-plugin=cni \
  --lock-file=/var/run/lock/kubelet.lock \
  --exit-on-lock-contention \
  --pod-manifest-path=/etc/kubernetes/manifests \
  --allow-privileged \
  --node-labels=master=true \
  --minimum-container-ttl-duration=6m0s \
  --cluster_dns=10.3.0.10 \
  --cluster_domain=cluster.local \
  --hostname-override=10.7.183.59
```

The TLS credentials generated by `bootkube render` in `assets/tls/` are copied into a secret: `assets/manifests/kube-apiserver-secret.yaml`.

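Running the kubelet in a foreground shell like this is fine for the walkthrough; on a real node you would typically wrap the same command in a systemd unit. The unit below is a sketch I'm adding, not part of the original setup, and it reuses only a subset of the flags above:

```shell
# Write a minimal kubelet systemd unit to the given path.
# The flag set mirrors (a trimmed version of) the foreground command above.
write_kubelet_unit() {
  cat > "$1" <<'EOF'
[Unit]
Description=Kubernetes Kubelet

[Service]
ExecStart=/usr/bin/hyperkube kubelet \
  --kubeconfig=/etc/kubernetes/kubeconfig \
  --require-kubeconfig \
  --network-plugin=cni \
  --pod-manifest-path=/etc/kubernetes/manifests \
  --allow-privileged
Restart=always

[Install]
WantedBy=multi-user.target
EOF
}
```

Write it to `/etc/systemd/system/kubelet.service`, then `sudo systemctl daemon-reload && sudo systemctl start kubelet`.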
### Start the Temporary API Server

bootkube will serve as the temporary apiserver so the kubelet from above can start the real apiserver in a pod:

```
sudo bootkube start --asset-dir=./assets --experimental-self-hosted-etcd --etcd-server=http://127.0.0.1:12379
```

bootkube should exit by itself after successfully bootstrapping the master components. It is only needed for the very first bootstrap.

### Check the Output

```
watch hyperkube kubectl get pods -o wide --all-namespaces
```

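Instead of eyeballing the `watch` output, the same check can be scripted. A sketch (the lister command is injected so it works with any `kubectl get pods` variant; the retry/delay defaults are arbitrary):

```shell
# Poll until every pod reported by the lister is Running.
# $1: command printing "<name> <status>" per pod, $2: attempts, $3: delay.
wait_for_pods() {
  lister="$1"; tries="${2:-20}"; delay="${3:-15}"; i=0
  while [ "$i" -lt "$tries" ]; do
    # Succeed once no pod reports a status other than Running.
    if ! $lister | awk '{print $2}' | grep -v '^Running$' >/dev/null; then
      echo "all pods Running"
      return 0
    fi
    i=$((i + 1))
    sleep "$delay"
  done
  echo "timed out waiting for pods"
  return 1
}
```

For example: `wait_for_pods "hyperkube kubectl get pods --all-namespaces --no-headers" 20 15` (assuming the second column of that output is the status).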
## Join Nodes to the Cluster

Copy over the information about where to find the apiserver and how to authenticate:

```
scp 10.7.183.59:assets/auth/kubeconfig .
sudo mkdir -p /etc/kubernetes
sudo mv kubeconfig /etc/kubernetes/
```

Install the CNI binaries and download hyperkube:

```
sudo mkdir -p /opt/cni/bin
wget https://github.com/containernetworking/cni/releases/download/v0.4.0/cni-amd64-v0.4.0.tbz2
sudo tar xjf cni-amd64-v0.4.0.tbz2 -C /opt/cni/bin/
wget http://storage.googleapis.com/kubernetes-release/release/v1.5.3/bin/linux/amd64/hyperkube -O ./hyperkube
sudo mv hyperkube /usr/bin/hyperkube
sudo chmod 755 /usr/bin/hyperkube
```

### Master Nodes

Start the kubelet:

```
sudo hyperkube kubelet --kubeconfig=/etc/kubernetes/kubeconfig \
  --require-kubeconfig \
  --cni-conf-dir=/etc/kubernetes/cni/net.d \
  --network-plugin=cni \
  --lock-file=/var/run/lock/kubelet.lock \
  --exit-on-lock-contention \
  --pod-manifest-path=/etc/kubernetes/manifests \
  --allow-privileged \
  --node-labels=master=true \
  --minimum-container-ttl-duration=6m0s \
  --cluster_dns=10.3.0.10 \
  --cluster_domain=cluster.local \
  --hostname-override=10.7.183.60
```

### Worker Nodes

Note the only difference is the removal of `--node-labels=master=true`:

```
sudo hyperkube kubelet --kubeconfig=/etc/kubernetes/kubeconfig \
  --require-kubeconfig \
  --cni-conf-dir=/etc/kubernetes/cni/net.d \
  --network-plugin=cni \
  --lock-file=/var/run/lock/kubelet.lock \
  --exit-on-lock-contention \
  --pod-manifest-path=/etc/kubernetes/manifests \
  --allow-privileged \
  --minimum-container-ttl-duration=6m0s \
  --cluster_dns=10.3.0.10 \
  --cluster_domain=cluster.local \
  --hostname-override=10.7.183.60
```

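Since the master and worker invocations differ only in that one flag, it can help to build the command from a shared flag list. A sketch (the variable and function names are mine, and the flag list is trimmed for brevity):

```shell
# Flags shared by master and worker kubelets; role-specific bits are
# appended per node type.
COMMON_FLAGS="--kubeconfig=/etc/kubernetes/kubeconfig \
  --require-kubeconfig \
  --network-plugin=cni \
  --pod-manifest-path=/etc/kubernetes/manifests \
  --allow-privileged \
  --cluster_dns=10.3.0.10 \
  --cluster_domain=cluster.local"

# Print the kubelet command for a given role ("master" or "worker") and node IP.
kubelet_cmd() {
  role="$1"; ip="$2"
  extra=""
  if [ "$role" = "master" ]; then
    extra="--node-labels=master=true"
  fi
  echo "hyperkube kubelet $COMMON_FLAGS $extra --hostname-override=$ip"
}
```

For example, `kubelet_cmd master 10.7.183.59` prints the master variant; `kubelet_cmd worker 10.7.183.60` omits the label.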
## Scale Etcd

`kubectl apply` doesn't work for ThirdPartyResources at the moment (see https://github.com/kubernetes/kubernetes/issues/29542). As a workaround, we use cURL to resize the cluster:

```
hyperkube kubectl --namespace=kube-system get cluster.etcd kube-etcd -o json > etcd.json && \
  vim etcd.json && \
  curl -H 'Content-Type: application/json' -X PUT --data @etcd.json http://127.0.0.1:8080/apis/etcd.coreos.com/v1beta1/namespaces/kube-system/clusters/kube-etcd
```

If that doesn't work, re-run it until it does. See https://github.com/kubernetes-incubator/bootkube/issues/346#issuecomment-283526930

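If you'd rather not hand-edit `etcd.json` in vim, the size field can be bumped non-interactively. A sketch, assuming the etcd-operator TPR keeps the replica count at `.spec.size` (python3 does the JSON edit):

```shell
# Non-interactive alternative to the vim step: set the desired cluster
# size in etcd.json before PUTting it back with curl.
set_etcd_size() {
  python3 - "$1" "$2" <<'EOF'
import json, sys

path, size = sys.argv[1], int(sys.argv[2])
with open(path) as f:
    obj = json.load(f)
obj.setdefault("spec", {})["size"] = size
with open(path, "w") as f:
    json.dump(obj, f)
EOF
}
```

For example, `set_etcd_size etcd.json 3` between the `get` and the `curl` steps.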
## Challenges

### Node setup

Some Broadcom NICs panicked with the default Ubuntu kernel:

- upgrade the kernel to >`4.8` because of the Broadcom NIC failure
- move to `--storage-driver=overlay2` instead of `aufs` as the Docker storage driver
- disable swap on the node (will be a fatal error in kube-1.6)

## ToDo Items

### apiserver resilience

The master apiservers need to be reachable behind a single address. Possible solutions:

- use a load balancer from the DC
- use DNS from the DC with a programmable API (e.g. PowerDNS)
- use something like kube-keepalive-vip?
- bootstrap DNS itself (SkyDNS, CoreDNS)

### Etcd Challenges

- backup strategies (https://github.com/coreos/etcd-operator/blob/master/doc/user/spec_examples.md#three-members-cluster-that-restores-from-previous-pv-backup)
- etcd-operator failures (e.g. https://github.com/coreos/etcd-operator/issues/851)
- partial failure (losing quorum)
- permanent failure (state gone completely)
- etcd needs NTP available (or another mechanism to keep every node's clock in sync)

## Notes

### Clean up Docker

```
sudo su -
docker rm -f $(docker ps -a -q)
exit
```

### Compile Bootkube

```
sudo docker run --rm -it -v $(pwd)/golang/src:/go/src/ -w /go/src golang:1.7 bash
go get -u github.com/kubernetes-incubator/bootkube
cd $GOPATH/src/github.com/kubernetes-incubator/bootkube
make
```

### RBAC

```
./bootkube-rbac render --asset-dir assets-rbac --experimental-self-hosted-etcd --etcd-servers=http://10.3.0.15:2379 --api-servers=https://10.7.183.59:443
sudo rm -rf /etc/kubernetes/*
sudo cp -a assets-rbac/manifests /etc/kubernetes/
sudo cp assets-rbac/auth/kubeconfig /etc/kubernetes/
sudo ./bootkube-rbac start --asset-dir=./assets-rbac --experimental-self-hosted-etcd --etcd-server=http://127.0.0.1:12379
```

### Containerized Kubelet

The benefit here is running a Docker container instead of a kubelet binary; the hyperkube Docker image also packages and installs the CNI binaries. The downside is that in either case something needs to start the container after a node reboot. Usually that something is systemd, and systemd is better at managing binaries than Docker containers. Either way, this is how you would run a containerized kubelet:

```
sudo docker run \
  --rm \
  -it \
  --privileged \
  -v /dev:/dev \
  -v /run:/run \
  -v /sys:/sys \
  -v /etc/kubernetes:/etc/kubernetes \
  -v /usr/share/ca-certificates:/etc/ssl/certs \
  -v /var/lib/docker:/var/lib/docker \
  -v /var/lib/kubelet:/var/lib/kubelet \
  -v /:/rootfs \
  quay.io/coreos/hyperkube:v1.5.3_coreos.0 \
  ./hyperkube \
  kubelet \
  --network-plugin=cni \
  --cni-conf-dir=/etc/kubernetes/cni/net.d \
  --cni-bin-dir=/opt/cni/bin \
  --pod-manifest-path=/etc/kubernetes/manifests \
  --allow-privileged \
  --hostname-override=10.7.183.60 \
  --cluster-dns=10.3.0.10 \
  --cluster-domain=cluster.local \
  --kubeconfig=/etc/kubernetes/kubeconfig \
  --require-kubeconfig \
  --lock-file=/var/run/lock/kubelet.lock \
  --containerized
```

This is not quite working yet, though. The node comes up, registers successfully with the master and starts daemonsets. Everything comes up except flannel:

```
main.go:127] Failed to create SubnetManager: unable to initialize inclusterconfig: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
```

## Resources and References

- https://github.com/kubernetes/community/blob/master/contributors/design-proposals/self-hosted-kubernetes.md
- https://github.com/kubernetes-incubator/bootkube
- https://github.com/coreos/etcd-operator/
- http://blog.kubernetes.io/2017/01/stronger-foundation-for-creating-and-managing-kubernetes-clusters.html
- https://github.com/kubernetes/kubeadm/issues/127

@@ -0,0 +1,2 @@

## Instructions:

ansible-playbook -e bootstrap_enabled=true -i hosts-deploy.yaml site.yaml

@@ -0,0 +1,38 @@

# Sample hosts file with variables

# For single-node deployments, make sure that the bootstrap node is listed as a master and worker node as well.
[bootstrap]
192.168.0.1

[master]
# Make sure the bootstrap node is the first master node
192.168.0.1
192.168.0.2

[workers]
192.168.0.3

[bootstrap:vars]
node_master=true
bootstrap_enabled=false
boot_kube_version="v0.3.12"

[master:vars]
node_master=true
cni_version="v0.5.1"
hyperkube_version="v1.5.6"
kubelet_version="v1.5.6"
calicoctl_version="v1.1.0"
calico_peer1="192.168.0.4"
calico_peer2="192.168.0.5"
deploy_pods_master=true

[all:vars]
ansible_user="ubuntu"
ansible_ssh_pass="password"
# The API server FQDN is required for SkyDNS to resolve
api_server_fqdn="cluster-ha.default.svc.cluster.local"
kube_labels="openstack-control-plane"
kube_controller_manager_version="v1.5.6"

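The sample inventory's comment says the bootstrap node must be the first master; that invariant is easy to check mechanically. A sketch (the section names follow the sample above; the function name is mine):

```shell
# Verify that the node under [bootstrap] is also the first
# (non-comment) entry under [master] in an INI-style inventory.
check_bootstrap_is_first_master() {
  inv="$1"
  boot=$(awk '/^\[bootstrap\]$/ { found=1; next } found && !/^#/ && NF { print; exit }' "$inv")
  first=$(awk '/^\[master\]$/ { found=1; next } found && !/^#/ && NF { print; exit }' "$inv")
  if [ -n "$boot" ] && [ "$boot" = "$first" ]; then
    echo "inventory ok"
  else
    echo "bootstrap node is not the first master"
  fi
}
```

For example, `check_bootstrap_is_first_master hosts-deploy.yaml` before running the playbook.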
Binary image files not shown (35 KiB and 225 KiB).

@@ -0,0 +1,26 @@

---
- name: Install Ceph
  apt:
    name: ceph-common
    state: present
  register: ceph_installed
  when: addons_enabled and "{{addons.ceph is defined}}"

- name: Create Ceph and OpenStack-Helm directories
  file:
    path: "{{ item }}"
    state: directory
  with_items:
    - "/var/lib/openstack-helm/ceph/osd"
    - "/var/lib/openstack-helm/ceph/ceph"
    - "/var/lib/openstack-helm/ceph/mon"
    - "/var/lib/nova/instances"
  when: addons_enabled and "{{addons.ceph is defined}}"

- name: Install Sigil for Ceph Secrets
  shell: curl -L https://github.com/gliderlabs/sigil/releases/download/v0.4.0/sigil_0.4.0_Linux_x86_64.tgz | tar -zxC /usr/local/bin
  when: addons_enabled and "{{addons.ceph is defined}}" and ceph_installed | changed

- name: Capture kubernetes version
  shell: kubelet --version | cut -d " " -f2
  register: kube_version

@@ -0,0 +1,10 @@

---
- name: Check for Kubernetes dashboard
  shell: hyperkube kubectl get pods -o wide --all-namespaces | grep kubernetes-dashboard
  register: dashboard_check
  ignore_errors: true
  when: addons_enabled and "{{addons.dashboard is defined}}"

- name: Deploy Kubernetes Dashboard
  shell: hyperkube kubectl create -f https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml
  when: addons_enabled and "{{addons.dashboard is defined}}" and dashboard_check | failed

@@ -0,0 +1,20 @@

---
- name: Check if Helm is installed
  stat:
    path: /usr/local/bin/helm
  register: helm_installed
  when: addons_enabled and "{{addons.helm is defined}}"

- name: Download helm install script
  shell: curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > /root/get_helm.sh
  when: addons_enabled and "{{addons.helm is defined}}" and helm_installed.stat.exists == False

- name: Set file properties
  file:
    path: /root/get_helm.sh
    mode: 0700
  when: addons_enabled and "{{addons.helm is defined}}" and helm_installed.stat.exists == False

- name: Install helm
  shell: sh /root/get_helm.sh
  when: addons_enabled and "{{addons.helm is defined}}" and helm_installed.stat.exists == False

@@ -0,0 +1,59 @@

---
- name: Check if MAAS is Running
  shell: hyperkube kubectl describe pod maas-region --namespace=maas
  ignore_errors: true
  register: maas_deployed
  when: addons_enabled and "{{addons.maas is defined}}"

- name: Check if Postgres is Running
  shell: hyperkube kubectl describe pod postgresql-0 --namespace=maas
  ignore_errors: true
  register: postgres_deployed
  when: addons_enabled and "{{addons.maas is defined}}"

# Check every 15 seconds to make sure the tiller pod has fully come up.
- action: shell hyperkube kubectl get pods --all-namespaces | grep tiller
  register: tiller_output
  until: tiller_output.stdout.find("Running") != -1
  retries: 20
  delay: 15
  when: addons_enabled and "{{addons.maas is defined}}"

- name: Run Make on all Helm charts
  shell: make
  environment:
    HELM_HOME: /opt/openstack-helm/repos/openstack-helm/.helm
  args:
    chdir: /opt/openstack-helm/repos/openstack-helm/
  when: addons_enabled and "{{addons.maas is defined}}" and maas_deployed | failed

- name: Deploy Postgres
  shell: helm install postgresql --namespace=maas
  environment:
    HELM_HOME: /opt/openstack-helm/repos/openstack-helm/.helm
  args:
    chdir: /opt/openstack-helm/repos/openstack-helm/
  when: addons_enabled and "{{addons.maas is defined}}" and postgres_deployed | failed

- action: shell hyperkube kubectl get pods --namespace maas
  register: postgres_output
  until: postgres_output.stdout.find("Running") != -1
  retries: 20
  delay: 15
  when: addons_enabled and "{{addons.maas is defined}}"

- name: Deploy MaaS
  shell: helm install maas --namespace=maas
  environment:
    HELM_HOME: /opt/openstack-helm/repos/openstack-helm/.helm
  args:
    chdir: /opt/openstack-helm/repos/openstack-helm/
  when: addons_enabled and "{{addons.maas is defined}}" and maas_deployed | failed

# Check every 15 seconds until MaaS comes up
- action: shell hyperkube kubectl get pods --namespace maas
  register: maas_output
  until: maas_output.stdout.find("Running") != -1
  retries: 20
  delay: 15
  when: addons_enabled and "{{addons.maas is defined}}"

@@ -0,0 +1,39 @@

---
- name: Create directories for OpenStack Helm
  file:
    path: /opt/openstack-helm/repos/openstack-helm
    state: directory
  when: addons_enabled and "{{addons.osh is defined}}"

- name: Checkout OpenStack-Helm
  git:
    repo: https://github.com/att-comdev/openstack-helm.git
    dest: /opt/openstack-helm/repos/openstack-helm
    update: true
  when: addons_enabled and "{{addons.osh is defined}}"

- name: Check for Helm/Tiller
  shell: hyperkube kubectl get pods --namespace kube-system | grep tiller
  ignore_errors: true
  register: helm_running
  when: addons_enabled and "{{addons.osh is defined}}"

- name: Initialize Helm/Tiller
  shell: helm init --home /opt/openstack-helm/repos/openstack-helm/.helm
  environment:
    HELM_HOME: /opt/openstack-helm/repos/openstack-helm/.helm
  when: addons_enabled and "{{addons.osh is defined}}" and helm_running | failed

- name: Helm Serve
  shell: nohup helm serve --repo-path /opt/openstack-helm/repos/openstack-helm/.helm/repository/local &
  environment:
    HELM_HOME: /opt/openstack-helm/repos/openstack-helm/.helm
  args:
    chdir: /opt/openstack-helm/repos/openstack-helm/.helm
  when: addons_enabled and "{{addons.osh is defined}}" and helm_running | failed

- name: Add helm repositories
  shell: helm repo add local http://localhost:8879/charts --home /opt/openstack-helm/repos/openstack-helm/.helm
  args:
    chdir: /opt/openstack-helm/repos/openstack-helm/.helm
  when: addons_enabled and "{{addons.osh is defined}}" and helm_running | failed

@@ -0,0 +1,6 @@

---
- include: addon-dashboard.yaml
- include: addon-helm.yaml
- include: addon-osh.yaml
- include: addon-ceph.yaml
- include: addon-maas.yaml

@@ -0,0 +1,75 @@

{
  "kind": "Pod",
  "apiVersion": "v1",
  "metadata": {
    "name": "kube-controller-manager",
    "namespace": "kube-system",
    "creationTimestamp": null,
    "labels": {
      "component": "kube-controller-manager",
      "tier": "control-plane"
    }
  },
  "spec": {
    "volumes": [
      {
        "name": "k8s",
        "hostPath": {
          "path": "/etc/kubernetes"
        }
      },
      {
        "name": "certs",
        "hostPath": {
          "path": "/etc/ssl/certs"
        }
      }
    ],
    "containers": [
      {
        "name": "kube-controller-manager",
        "image": "quay.io/attcomdev/kube-controller-manager:{{ kube_controller_manager_version }}",
        "command": [
          "kube-controller-manager",
          "--address=127.0.0.1",
          "--leader-elect",
          "--master=127.0.0.1:8080",
          "--cluster-name=kubernetes",
          "--root-ca-file=/etc/kubernetes/pki/ca.pem",
          "--service-account-private-key-file=/etc/kubernetes/pki/apiserver-key.pem",
          "--cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem",
          "--cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem",
          "--insecure-experimental-approve-all-kubelet-csrs-for-group=system:kubelet-bootstrap"
        ],
        "resources": {
          "requests": {
            "cpu": "200m"
          }
        },
        "volumeMounts": [
          {
            "name": "k8s",
            "readOnly": true,
            "mountPath": "/etc/kubernetes/"
          },
          {
            "name": "certs",
            "mountPath": "/etc/ssl/certs"
          }
        ],
        "livenessProbe": {
          "httpGet": {
            "path": "/healthz",
            "port": 10252,
            "host": "127.0.0.1"
          },
          "initialDelaySeconds": 15,
          "timeoutSeconds": 15,
          "failureThreshold": 8
        }
      }
    ],
    "hostNetwork": true
  },
  "status": {}
}

@@ -0,0 +1,15 @@

---
- name: Setup bootkube.service
  when: bootstrap_enabled
  template:
    src: bootkube.service
    dest: /etc/systemd/system/bootkube.service

- name: Run bootkube
  when: bootstrap_enabled
  systemd:
    name: bootkube
    state: started
    daemon_reload: yes

@@ -0,0 +1,6 @@

---
- include: prep-host.yaml
- include: prep-bootkube.yaml
- include: prep-network.yaml
- include: prep-kubernetes.yaml
- include: deploy-bootkube.yaml

@@ -0,0 +1,22 @@

---
- name: Ensures bootkube dir exists
  when: bootstrap_enabled
  file:
    path: /tmp/bootkube
    state: directory

- name: Extract bootkube binaries
  when: bootstrap_enabled
  unarchive:
    src: "https://github.com/kubernetes-incubator/bootkube/releases/download/{{ boot_kube_version }}/bootkube.tar.gz"
    dest: /tmp/bootkube
    remote_src: True

- name: Render bootkube manifests
  when: bootstrap_enabled
  command: "/tmp/bootkube/bin/linux/bootkube render --asset-dir=/tmp/bootkube/assets --experimental-self-hosted-etcd --etcd-servers=http://10.3.0.15:2379 --api-servers=https://{{ api_server_fqdn }}:443"
  args:
    creates: /etc/kubernetes/kubeconfig

@@ -0,0 +1,23 @@

---
- name: Install base packages
  when: bootstrap_enabled
  apt:
    name: "{{ item }}"
    state: present
  with_items:
    - "docker.io"
    - "vim"
    - "ethtool"
    - "traceroute"
    - "git"
    - "build-essential"
    - "lldpd"

- name: Insert Temporary Hosts File Entry for FQDN Resolution
  when: bootstrap_enabled
  lineinfile:
    dest: /etc/hosts
    line: "{{ hostvars[groups['master'][0]]['ansible_default_ipv4']['address'] }} {{ api_server_fqdn }}"
    state: present

@@ -0,0 +1,29 @@

---
- name: Ensures /etc/kubernetes dir exists
  when: bootstrap_enabled
  file:
    path: /etc/kubernetes
    state: directory

- name: copy kubeconfig credentials
  when: bootstrap_enabled
  command: cp /tmp/bootkube/assets/auth/kubeconfig /etc/kubernetes/kubeconfig
  args:
    creates: /etc/kubernetes/kubeconfig

- name: copy kubernetes manifests
  when: bootstrap_enabled
  command: cp -a /tmp/bootkube/assets/manifests /etc/kubernetes/
  args:
    creates: /etc/kubernetes/manifests

- name: fetch kubeconfig
  when: bootstrap_enabled
  fetch:
    src: /etc/kubernetes/kubeconfig
    dest: roles/deploy-kubelet/templates/kubeconfig
    flat: yes

@@ -0,0 +1,14 @@

---
- name: Inject Custom manifests - kube-calico.yaml
  when: bootstrap_enabled
  template:
    src: kube-calico.yaml.j2
    dest: "/tmp/bootkube/assets/manifests/kube-flannel.yaml"

- name: Inject Custom manifests - kube-calico-cfg.yaml
  when: bootstrap_enabled
  template:
    src: kube-calico-cfg.yaml.j2
    dest: "/tmp/bootkube/assets/manifests/kube-flannel-cfg.yaml"

@@ -0,0 +1,10 @@

[Unit]
Description=Kubernetes Control Plane Bootstrapping
Documentation=https://github.com/kubernetes-incubator/bootkube

[Service]
ExecStart=/tmp/bootkube/bin/linux/bootkube start --asset-dir=/tmp/bootkube/assets/ --experimental-self-hosted-etcd --etcd-server=http://127.0.0.1:12379
Restart=on-failure

[Install]
WantedBy=multi-user.target

@@ -0,0 +1,267 @@

# This ConfigMap is used to configure a self-hosted Calico installation.
kind: ConfigMap
apiVersion: v1
metadata:
  name: calico-config
  namespace: kube-system
data:
  # The location of your etcd cluster. This uses the Service clusterIP
  # defined below.
  etcd_endpoints: "http://10.96.232.136:6666"

  # Configure the Calico backend to use.
  calico_backend: "bird"

  # The CNI network configuration to install on each node.
  cni_network_config: |-
    {
      "name": "k8s-pod-network",
      "type": "calico",
      "etcd_endpoints": "__ETCD_ENDPOINTS__",
      "log_level": "info",
      "ipam": {
        "type": "calico-ipam"
      },
      "policy": {
        "type": "k8s",
        "k8s_api_root": "https://__KUBERNETES_SERVICE_HOST__:__KUBERNETES_SERVICE_PORT__",
        "k8s_auth_token": "__SERVICEACCOUNT_TOKEN__"
      },
      "kubernetes": {
        "kubeconfig": "/etc/cni/net.d/__KUBECONFIG_FILENAME__"
      }
    }

---

# This manifest installs the Calico etcd on the kubeadm master. This uses a DaemonSet
# to force it to run on the master even when the master isn't schedulable, and uses
# nodeSelector to ensure it only runs on the master.
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: calico-etcd
  namespace: kube-system
  labels:
    k8s-app: calico-etcd
spec:
  template:
    metadata:
      labels:
        k8s-app: calico-etcd
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
        scheduler.alpha.kubernetes.io/tolerations: |
          [{"key": "dedicated", "value": "master", "effect": "NoSchedule" },
           {"key":"CriticalAddonsOnly", "operator":"Exists"}]
    spec:
      # Only run this pod on the master.
      nodeSelector:
        kubeadm.alpha.kubernetes.io/role: master
      hostNetwork: true
      containers:
        - name: calico-etcd
          image: gcr.io/google_containers/etcd:2.2.1
          env:
            - name: CALICO_ETCD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          command: ["/bin/sh","-c"]
          args: ["/usr/local/bin/etcd --name=calico --data-dir=/var/etcd/calico-data --advertise-client-urls=http://$CALICO_ETCD_IP:6666 --listen-client-urls=http://0.0.0.0:6666 --listen-peer-urls=http://0.0.0.0:6667"]
          volumeMounts:
            - name: var-etcd
              mountPath: /var/etcd
      volumes:
        - name: var-etcd
          hostPath:
            path: /var/etcd

---

# This manifest installs the Service which gets traffic to the Calico
# etcd.
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: calico-etcd
  name: calico-etcd
  namespace: kube-system
spec:
  # Select the calico-etcd pod running on the master.
  selector:
    k8s-app: calico-etcd
  # This ClusterIP needs to be known in advance, since we cannot rely
  # on DNS to get access to etcd.
  clusterIP: 10.96.232.136
  ports:
    - port: 6666

---

# This manifest installs the calico/node container, as well
# as the Calico CNI plugins and network config on
# each master and worker node in a Kubernetes cluster.
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
  name: calico-node
  namespace: kube-system
  labels:
    k8s-app: calico-node
spec:
  selector:
    matchLabels:
      k8s-app: calico-node
  template:
    metadata:
      labels:
        k8s-app: calico-node
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
        scheduler.alpha.kubernetes.io/tolerations: |
          [{"key": "dedicated", "value": "master", "effect": "NoSchedule" },
           {"key":"CriticalAddonsOnly", "operator":"Exists"}]
    spec:
      hostNetwork: true
      containers:
        # Runs calico/node container on each Kubernetes node. This
        # container programs network policy and routes on each
        # host.
        - name: calico-node
          image: quay.io/calico/node:v1.1.0
          env:
            # The location of the Calico etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # Enable BGP. Disable to enforce policy only.
            - name: CALICO_NETWORKING_BACKEND
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: calico_backend
            # Disable file logging so `kubectl logs` works.
            - name: CALICO_DISABLE_FILE_LOGGING
              value: "true"
            # Set Felix endpoint to host default action to ACCEPT.
            - name: FELIX_DEFAULTENDPOINTTOHOSTACTION
              value: "ACCEPT"
            # Configure the IP Pool from which Pod IPs will be chosen.
            - name: CALICO_IPV4POOL_CIDR
              value: "192.168.0.0/16"
            - name: CALICO_IPV4POOL_IPIP
              value: "always"
            # Disable IPv6 on Kubernetes.
            - name: FELIX_IPV6SUPPORT
              value: "false"
            # Set Felix logging to "info"
            - name: FELIX_LOGSEVERITYSCREEN
              value: "info"
            # Auto-detect the BGP IP address.
            - name: IP
              value: ""
          securityContext:
            privileged: true
          resources:
            requests:
              cpu: 250m
          volumeMounts:
            - mountPath: /lib/modules
              name: lib-modules
              readOnly: true
            - mountPath: /var/run/calico
              name: var-run-calico
              readOnly: false
        # This container installs the Calico CNI binaries
        # and CNI network config file on each node.
        - name: install-cni
          image: quay.io/calico/cni:v1.6.1
|
||||||
|
command: ["/install-cni.sh"]
|
||||||
|
env:
|
||||||
|
# The location of the Calico etcd cluster.
|
||||||
|
- name: ETCD_ENDPOINTS
|
||||||
|
valueFrom:
|
||||||
|
configMapKeyRef:
|
||||||
|
name: calico-config
|
||||||
|
key: etcd_endpoints
|
||||||
|
# The CNI network config to install on each node.
|
||||||
|
- name: CNI_NETWORK_CONFIG
|
||||||
|
valueFrom:
|
||||||
|
configMapKeyRef:
|
||||||
|
name: calico-config
|
||||||
|
key: cni_network_config
|
||||||
|
volumeMounts:
|
||||||
|
- mountPath: /host/opt/cni/bin
|
||||||
|
name: cni-bin-dir
|
||||||
|
- mountPath: /host/etc/cni/net.d
|
||||||
|
name: cni-net-dir
|
||||||
|
volumes:
|
||||||
|
# Used by calico/node.
|
||||||
|
- name: lib-modules
|
||||||
|
hostPath:
|
||||||
|
path: /lib/modules
|
||||||
|
- name: var-run-calico
|
||||||
|
hostPath:
|
||||||
|
path: /var/run/calico
|
||||||
|
# Used to install CNI.
|
||||||
|
- name: cni-bin-dir
|
||||||
|
hostPath:
|
||||||
|
path: /opt/cni/bin
|
||||||
|
- name: cni-net-dir
|
||||||
|
hostPath:
|
||||||
|
path: /etc/cni/net.d
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
# This manifest deploys the Calico policy controller on Kubernetes.
|
||||||
|
# See https://github.com/projectcalico/k8s-policy
|
||||||
|
apiVersion: extensions/v1beta1
|
||||||
|
kind: Deployment
|
||||||
|
metadata:
|
||||||
|
name: calico-policy-controller
|
||||||
|
namespace: kube-system
|
||||||
|
labels:
|
||||||
|
k8s-app: calico-policy
|
||||||
|
spec:
|
||||||
|
# The policy controller can only have a single active instance.
|
||||||
|
replicas: 1
|
||||||
|
strategy:
|
||||||
|
type: Recreate
|
||||||
|
template:
|
||||||
|
metadata:
|
||||||
|
name: calico-policy-controller
|
||||||
|
namespace: kube-system
|
||||||
|
labels:
|
||||||
|
k8s-app: calico-policy-controller
|
||||||
|
annotations:
|
||||||
|
scheduler.alpha.kubernetes.io/critical-pod: ''
|
||||||
|
scheduler.alpha.kubernetes.io/tolerations: |
|
||||||
|
[{"key": "dedicated", "value": "master", "effect": "NoSchedule" },
|
||||||
|
{"key":"CriticalAddonsOnly", "operator":"Exists"}]
|
||||||
|
spec:
|
||||||
|
# The policy controller must run in the host network namespace so that
|
||||||
|
# it isn't governed by policy that would prevent it from working.
|
||||||
|
hostNetwork: true
|
||||||
|
containers:
|
||||||
|
- name: calico-policy-controller
|
||||||
|
image: quay.io/calico/kube-policy-controller:v0.5.4
|
||||||
|
env:
|
||||||
|
# The location of the Calico etcd cluster.
|
||||||
|
- name: ETCD_ENDPOINTS
|
||||||
|
valueFrom:
|
||||||
|
configMapKeyRef:
|
||||||
|
name: calico-config
|
||||||
|
key: etcd_endpoints
|
||||||
|
# The location of the Kubernetes API. Use the default Kubernetes
|
||||||
|
# service for API access.
|
||||||
|
- name: K8S_API
|
||||||
|
value: "https://kubernetes.default:443"
|
||||||
|
# Since we're running in the host namespace and might not have KubeDNS
|
||||||
|
# access, configure the container's /etc/hosts to resolve
|
||||||
|
# kubernetes.default to the correct service clusterIP.
|
||||||
|
- name: CONFIGURE_ETC_HOSTS
|
||||||
|
value: "true"
|
|
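The `__ETCD_ENDPOINTS__` and similar placeholders in `cni_network_config` are substituted by `install-cni.sh` before the config is written to `/etc/cni/net.d`. A minimal sketch of that substitution, with an illustrative endpoint value:

```shell
# The placeholder in the ConfigMap's CNI template...
config='{ "name": "k8s-pod-network", "etcd_endpoints": "__ETCD_ENDPOINTS__" }'
# ...is replaced with the real etcd endpoint (here the fixed ClusterIP above):
rendered=$(printf '%s' "$config" | sed "s|__ETCD_ENDPOINTS__|http://10.96.232.136:6666|")
echo "$rendered"
```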
@ -0,0 +1,144 @@
# This ConfigMap is used to configure a self-hosted Calico installation without etcd.
kind: ConfigMap
apiVersion: v1
metadata:
  name: calico-config
  namespace: kube-system
data:
  # The CNI network configuration to install on each node.
  cni_network_config: |-
    {
      "name": "k8s-pod-network",
      "type": "calico",
      "log_level": "debug",
      "datastore_type": "kubernetes",
      "hostname": "__KUBERNETES_NODE_NAME__",
      "ipam": {
        "type": "host-local",
        "subnet": "usePodCidr"
      },
      "policy": {
        "type": "k8s",
        "k8s_auth_token": "__SERVICEACCOUNT_TOKEN__"
      },
      "kubernetes": {
        "k8s_api_root": "https://__KUBERNETES_SERVICE_HOST__:__KUBERNETES_SERVICE_PORT__",
        "kubeconfig": "__KUBECONFIG_FILEPATH__"
      }
    }

---

# This manifest installs the calico/node container, as well
# as the Calico CNI plugins and network config on
# each master and worker node in a Kubernetes cluster.
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
  name: calico-node
  namespace: kube-system
  labels:
    k8s-app: calico-node
spec:
  selector:
    matchLabels:
      k8s-app: calico-node
  template:
    metadata:
      labels:
        k8s-app: calico-node
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
        scheduler.alpha.kubernetes.io/tolerations: |
          [{"key": "dedicated", "value": "master", "effect": "NoSchedule" },
           {"key": "CriticalAddonsOnly", "operator": "Exists"}]
    spec:
      hostNetwork: true
      containers:
        # Runs calico/node container on each Kubernetes node. This
        # container programs network policy and routes on each
        # host.
        - name: calico-node
          image: quay.io/calico/node:v1.1.0
          env:
            # Use the Kubernetes API as the backing datastore.
            - name: DATASTORE_TYPE
              value: "kubernetes"
            # Enable felix debug logging.
            - name: FELIX_LOGSEVERITYSCREEN
              value: "debug"
            # Don't enable BGP.
            - name: CALICO_NETWORKING_BACKEND
              value: "none"
            # Disable file logging so `kubectl logs` works.
            - name: CALICO_DISABLE_FILE_LOGGING
              value: "true"
            # Set Felix endpoint to host default action to ACCEPT.
            - name: FELIX_DEFAULTENDPOINTTOHOSTACTION
              value: "ACCEPT"
            # Disable IPv6 on Kubernetes.
            - name: FELIX_IPV6SUPPORT
              value: "false"
            # Wait for the datastore.
            - name: WAIT_FOR_DATASTORE
              value: "true"
            # The Calico IPv4 pool to use. This should match `--cluster-cidr`.
            - name: CALICO_IPV4POOL_CIDR
              value: "10.244.0.0/16"
            # Set based on the k8s node name.
            - name: NODENAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            # No IP address needed.
            - name: IP
              value: ""
          securityContext:
            privileged: true
          resources:
            requests:
              cpu: 250m
          volumeMounts:
            - mountPath: /lib/modules
              name: lib-modules
              readOnly: true
            - mountPath: /var/run/calico
              name: var-run-calico
              readOnly: false
        # This container installs the Calico CNI binaries
        # and CNI network config file on each node.
        - name: install-cni
          image: quay.io/calico/cni:v1.6.1
          command: ["/install-cni.sh"]
          env:
            # The CNI network config to install on each node.
            - name: CNI_NETWORK_CONFIG
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: cni_network_config
            # Set the hostname based on the k8s node name.
            - name: KUBERNETES_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          volumeMounts:
            - mountPath: /host/opt/cni/bin
              name: cni-bin-dir
            - mountPath: /host/etc/cni/net.d
              name: cni-net-dir
      volumes:
        # Used by calico/node.
        - name: lib-modules
          hostPath:
            path: /lib/modules
        - name: var-run-calico
          hostPath:
            path: /var/run/calico
        # Used to install CNI.
        - name: cni-bin-dir
          hostPath:
            path: /opt/cni/bin
        - name: cni-net-dir
          hostPath:
            path: /etc/cni/net.d
@ -0,0 +1 @@
# Nothing to be seen here. Prevents bootkube from coming up.
@ -0,0 +1,75 @@
{
  "kind": "Pod",
  "apiVersion": "v1",
  "metadata": {
    "name": "kube-controller-manager",
    "namespace": "kube-system",
    "creationTimestamp": null,
    "labels": {
      "component": "kube-controller-manager",
      "tier": "control-plane"
    }
  },
  "spec": {
    "volumes": [
      {
        "name": "k8s",
        "hostPath": {
          "path": "/etc/kubernetes"
        }
      },
      {
        "name": "certs",
        "hostPath": {
          "path": "/etc/ssl/certs"
        }
      }
    ],
    "containers": [
      {
        "name": "kube-controller-manager",
        "image": "quay.io/attcomdev/kube-controller-manager:{{ kube_controller_manager_version }}",
        "command": [
          "kube-controller-manager",
          "--address=127.0.0.1",
          "--leader-elect",
          "--master=127.0.0.1:8080",
          "--cluster-name=kubernetes",
          "--root-ca-file=/etc/kubernetes/pki/ca.pem",
          "--service-account-private-key-file=/etc/kubernetes/pki/apiserver-key.pem",
          "--cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem",
          "--cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem",
          "--insecure-experimental-approve-all-kubelet-csrs-for-group=system:kubelet-bootstrap"
        ],
        "resources": {
          "requests": {
            "cpu": "200m"
          }
        },
        "volumeMounts": [
          {
            "name": "k8s",
            "readOnly": true,
            "mountPath": "/etc/kubernetes/"
          },
          {
            "name": "certs",
            "mountPath": "/etc/ssl/certs"
          }
        ],
        "livenessProbe": {
          "httpGet": {
            "path": "/healthz",
            "port": 10252,
            "host": "127.0.0.1"
          },
          "initialDelaySeconds": 15,
          "timeoutSeconds": 15,
          "failureThreshold": 8
        }
      }
    ],
    "hostNetwork": true
  },
  "status": {}
}
@ -0,0 +1,45 @@
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kube-controller-manager
  namespace: kube-system
  labels:
    k8s-app: kube-controller-manager
spec:
  replicas: 2
  template:
    metadata:
      labels:
        k8s-app: kube-controller-manager
    spec:
      nodeSelector:
        master: "true"
      containers:
        - name: kube-controller-manager
          image: quay.io/attcomdev/kube-controller-manager:{{ kube_controller_manager_version }}
          command:
            - ./hyperkube
            - controller-manager
            - --allocate-node-cidrs=true
            - --configure-cloud-routes=false
            - --cluster-cidr=10.2.0.0/16
            - --root-ca-file=/etc/kubernetes/secrets/ca.crt
            - --service-account-private-key-file=/etc/kubernetes/secrets/service-account.key
            - --leader-elect=true
            - --cloud-provider=
          volumeMounts:
            - name: secrets
              mountPath: /etc/kubernetes/secrets
              readOnly: true
            - name: ssl-host
              mountPath: /etc/ssl/certs
              readOnly: true
      volumes:
        - name: secrets
          secret:
            secretName: kube-controller-manager
        - name: ssl-host
          hostPath:
            path: /usr/share/ca-certificates
      dnsPolicy: Default # Don't use cluster DNS.
@ -0,0 +1,3 @@
---
- name: restart kubelet
  service: name=kubelet state=restarted
@ -0,0 +1,95 @@
---
- name: Grab the etcd service IP
  shell: hyperkube kubectl get services --all-namespaces | grep "etcd-service" | awk '{ print $3 }'
  register: etcd_service_ip

# - name: Deploy Calico manifest template
#   template:
#     src: calico.yaml
#     dest: /opt/openstack-helm/manifests/calico.yaml
#   register: calico_changed
#
# - name: Install calicoctl tool
#   get_url:
#     url: "https://github.com/projectcalico/calicoctl/releases/download/{{ calicoctl_version }}/calicoctl"
#     dest: /usr/bin/calicoctl
#     validate_certs: false
#     mode: 0755
#
# - name: Check for Calico deployment
#   shell: hyperkube kubectl get services --all-namespaces | grep calico
#   ignore_errors: True
#   register: calico_deployed
#
# - name: Deploy BGP Peer Manifest (1)
#   template:
#     src: calico-peer.yaml
#     dest: /opt/openstack-helm/manifests/calico-peer.yaml
#
# - name: Deploy BGP Peer Manifest (2)
#   template:
#     src: calico-peer2.yaml
#     dest: /opt/openstack-helm/manifests/calico-peer2.yaml
#
# - name: Create Calico Pods
#   shell: hyperkube kubectl create -f /opt/openstack-helm/manifests/calico.yaml
#   when: calico_deployed | failed and "{{ inventory_hostname }} in groups['bootstrap']"
#
# - action: shell hyperkube kubectl get pods --all-namespaces | grep calico
#   register: calico_output
#   until: calico_output.stdout.find("Running") != -1
#   retries: 20
#   delay: 15
#
# - name: Create BGP Peering (1)
#   shell: calicoctl create -f /opt/openstack-helm/manifests/calico-peer.yaml --skip-exists
#   environment:
#     ETCD_ENDPOINTS: "http://{{ etcd_service_ip.stdout }}:2379"
#   when: calico_deployed | failed and "{{ inventory_hostname }} in groups['bootstrap']"
#
# - name: Create BGP Peering (2)
#   shell: calicoctl create -f /opt/openstack-helm/manifests/calico-peer2.yaml --skip-exists
#   environment:
#     ETCD_ENDPOINTS: "http://{{ etcd_service_ip.stdout }}:2379"
#   when: calico_deployed | failed and "{{ inventory_hostname }} in groups['bootstrap']"

- name: Check for ClusterHA in KubeDNS
  shell: hyperkube kubectl get services --all-namespaces | grep cluster-ha
  ignore_errors: true
  register: cluster_ha_present

- name: Install ClusterHA ConfigMaps
  template:
    src: cluster-ha.j2
    dest: /opt/openstack-helm/manifests/cluster-ha.yaml
  register: cluster_ha_configmaps

- name: Delete ClusterHA if present
  shell: hyperkube kubectl delete -f /opt/openstack-helm/manifests/cluster-ha.yaml
  when: cluster_ha_present | succeeded and cluster_ha_configmaps | changed
  ignore_errors: true

- name: Deploy ClusterHA ConfigMaps
  shell: hyperkube kubectl create -f /opt/openstack-helm/manifests/cluster-ha.yaml
  when: cluster_ha_configmaps | changed

- name: Determine the KubeDNS server IP
  shell: hyperkube kubectl get svc kube-dns --namespace=kube-system | awk '{print $2}' | sed -n '$p'
  register: kube_dns_server

- name: Add KubeDNS to /etc/resolv.conf
  lineinfile:
    dest: /etc/resolv.conf
    insertafter: "^# DO"
    line: "nameserver {{ kube_dns_server.stdout }}"
    state: present
    backup: true

- name: Remove /etc/hosts entry if present
  lineinfile:
    dest: /etc/hosts
    line: "{{ hostvars[groups['master'][0]]['ansible_default_ipv4']['address'] }} {{ api_server_fqdn }}"
    state: absent

- name: Test the Kubernetes cluster
  shell: hyperkube kubectl get nodes
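The "Determine the KubeDNS server IP" task extracts the CLUSTER-IP column from `kubectl get svc` output with an awk/sed pipeline. The same pipeline applied to canned sample output (the IP shown is illustrative):

```shell
# Sample `kubectl get svc kube-dns` output; the real task pipes live output.
sample='NAME       CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
kube-dns   10.3.0.10    <none>        53/UDP,53/TCP   1d'
# awk takes column 2 of every row; sed -n '$p' keeps only the last line,
# skipping the CLUSTER-IP header.
dns_ip=$(printf '%s\n' "$sample" | awk '{print $2}' | sed -n '$p')
echo "$dns_ip"   # -> 10.3.0.10
```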
@ -0,0 +1,64 @@
---
# TODO: Version kubelet, with checksum
- name: Install kubelet
  get_url:
    url: "http://storage.googleapis.com/kubernetes-release/release/{{ kubelet_version }}/bin/linux/amd64/kubelet"
    dest: /usr/bin/kubelet
    # checksum: md5:33af080e876b1f3d481b0ff1ceec3ab8
    mode: 0755

- name: Ensure /etc/kubernetes dir exists
  file:
    path: /etc/kubernetes
    state: directory

# Gets the kubeconfig from the bootstrap node. See roles/bootstrap/tasks/main.yml
- name: Install kubeconfig
  template:
    src: kubeconfig
    dest: /etc/kubernetes/kubeconfig

- name: Set up kubelet.service
  template:
    src: kubelet.service
    dest: /etc/systemd/system/kubelet.service
  notify: restart kubelet

- name: Enable kubelet to be started on boot
  systemd:
    name: kubelet
    state: started
    enabled: yes
    daemon_reload: yes

- name: Create directories for Kubernetes manifests
  file:
    path: /opt/openstack-helm/manifests
    state: directory

# Wait for the kube-apiserver to come up
- action: shell hyperkube kubectl get pods --all-namespaces | grep kube-apiserver
  register: kubeapi_output
  until: kubeapi_output.stdout.find("Running") != -1
  retries: 40
  delay: 15

# Wait for the cluster to stabilize across all nodes
- action: shell hyperkube kubectl get pods --all-namespaces
  register: cluster_stable
  until: '"ContainerCreating" not in cluster_stable.stdout'
  retries: 40
  delay: 15

# Re-deploy Calico with etcd
- name: Inject custom manifests - kube-calico.yaml
  template:
    src: kube-calico.yaml.j2
    dest: "/tmp/bootkube/assets/manifests/kube-flannel.yaml"
  notify: restart kubelet

- name: Inject custom manifests - kube-calico-cfg.yaml
  template:
    src: kube-calico-cfg.yaml.j2
    dest: "/tmp/bootkube/assets/manifests/kube-flannel-cfg.yaml"
  notify: restart kubelet
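The two `until:` tasks above are retry-with-delay polls. A minimal shell sketch of the same "wait until stable" pattern, with the kubectl call stubbed out so the loop is self-contained (a real run would use `hyperkube kubectl get pods --all-namespaces`):

```shell
# Stub standing in for `kubectl get pods --all-namespaces`.
get_pods() { echo "kube-system kube-apiserver-master 1/1 Running"; }

retries=40
while [ "$retries" -gt 0 ]; do
  # Stable once no pod is still ContainerCreating.
  if ! get_pods | grep -q ContainerCreating; then
    echo "cluster stable"
    break
  fi
  retries=$((retries - 1))
  sleep 15   # matches `delay: 15` in the task
done
```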
@ -0,0 +1,6 @@
# Deploys the kubelet
---
- include: prep-host.yaml
- include: prep-hyperkube.yaml
- include: prep-cni.yaml
- include: deploy-kubernetes.yaml
@ -0,0 +1,11 @@
---
- name: Ensure CNI dir exists
  file:
    path: /opt/cni/bin
    state: directory

- name: Install CNI binaries
  unarchive:
    src: "https://github.com/containernetworking/cni/releases/download/{{ cni_version }}/cni-amd64-{{ cni_version }}.tgz"
    dest: /opt/cni/bin
    remote_src: True
@ -0,0 +1,19 @@
---
- name: Install base packages
  apt:
    name: "{{ item }}"
    state: present
  with_items:
    - "docker.io"
    - "vim"
    - "ethtool"
    - "traceroute"
    - "git"
    - "build-essential"
    - "lldpd"

- name: Insert temporary hosts file entry for FQDN resolution
  lineinfile:
    dest: /etc/hosts
    line: "{{ hostvars[groups['master'][0]]['ansible_default_ipv4']['address'] }} {{ api_server_fqdn }}"
    state: present
@ -0,0 +1,10 @@
---
- name: Download hyperkube
  get_url:
    url: "http://storage.googleapis.com/kubernetes-release/release/{{ hyperkube_version }}/bin/linux/amd64/hyperkube"
    dest: /usr/bin/hyperkube

- name: Set hyperkube permissions
  file:
    path: /usr/bin/hyperkube
    mode: 0755
@ -0,0 +1,8 @@
apiVersion: v1
kind: bgpPeer
metadata:
  peerIP: {{ calico_peer1 }}
  scope: node
  node: {{ ansible_hostname }}
spec:
  asNumber: 64686
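The template has only two moving parts: the peer IP and the node name. A hand-rendered version, with example values standing in for the Jinja variables `{{ calico_peer1 }}` and `{{ ansible_hostname }}`:

```shell
peer_ip="10.7.183.1"   # assumption: example value for {{ calico_peer1 }}
node_name="node1"      # assumption: example value for {{ ansible_hostname }}
manifest=$(printf 'apiVersion: v1\nkind: bgpPeer\nmetadata:\n  peerIP: %s\n  scope: node\n  node: %s\nspec:\n  asNumber: 64686\n' "$peer_ip" "$node_name")
echo "$manifest"
```

The rendered file is what the commented-out `calicoctl create -f ... --skip-exists` tasks would apply.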
@ -0,0 +1,8 @@
apiVersion: v1
kind: bgpPeer
metadata:
  peerIP: {{ calico_peer2 }}
  scope: node
  node: {{ ansible_hostname }}
spec:
  asNumber: 64686
@ -0,0 +1,323 @@
# This ConfigMap is used to configure a self-hosted Calico installation.
kind: ConfigMap
apiVersion: v1
metadata:
  name: calico-config
  namespace: kube-system
data:
  # The location of your etcd cluster. This uses the Service clusterIP
  # defined below.
  #etcd_endpoints: "http://10.96.232.136:6666"
  #etcd_endpoints: "http://10.200.232.136:6666"
  etcd_endpoints: "http://{{ etcd_service_ip.stdout }}:2379"

  # True enables BGP networking, false tells Calico to enforce
  # policy only, using native networking.
  enable_bgp: "true"

  # The CNI network configuration to install on each node.
  cni_network_config: |-
    {
      "name": "k8s-pod-network",
      "type": "calico",
      "etcd_endpoints": "__ETCD_ENDPOINTS__",
      "log_level": "info",
      "ipam": {
        "type": "calico-ipam"
      },
      "policy": {
        "type": "k8s",
        "k8s_api_root": "https://__KUBERNETES_SERVICE_HOST__:__KUBERNETES_SERVICE_PORT__",
        "k8s_auth_token": "__SERVICEACCOUNT_TOKEN__"
      },
      "kubernetes": {
        "kubeconfig": "/etc/cni/net.d/__KUBECONFIG_FILENAME__"
      }
    }

  # The default IP Pool to be created for the cluster.
  # Pod IP addresses will be assigned from this pool.
  ippool.yaml: |
    apiVersion: v1
    kind: ipPool
    metadata:
      cidr: 10.200.0.0/16
    spec:
      ipip:
        enabled: true
      nat-outgoing: true

---

# This manifest installs the Calico etcd on the kubeadm master. This uses a DaemonSet
# to force it to run on the master even when the master isn't schedulable, and uses
# nodeSelector to ensure it only runs on the master.
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: calico-etcd
  namespace: kube-system
  labels:
    k8s-app: calico-etcd
spec:
  template:
    metadata:
      labels:
        k8s-app: calico-etcd
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
        scheduler.alpha.kubernetes.io/tolerations: |
          [{"key": "dedicated", "value": "master", "effect": "NoSchedule" },
           {"key": "CriticalAddonsOnly", "operator": "Exists"}]
    spec:
      # Only run this pod on the master.
      nodeSelector:
        kubeadm.alpha.kubernetes.io/role: master
      hostNetwork: true
      containers:
        - name: calico-etcd
          image: gcr.io/google_containers/etcd:2.2.1
          env:
            - name: CALICO_ETCD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          command: ["/bin/sh","-c"]
          args: ["/usr/local/bin/etcd --name=calico --data-dir=/var/etcd/calico-data --advertise-client-urls=http://$CALICO_ETCD_IP:6666 --listen-client-urls=http://0.0.0.0:6666 --listen-peer-urls=http://0.0.0.0:6667"]
          volumeMounts:
            - name: var-etcd
              mountPath: /var/etcd
      volumes:
        - name: var-etcd
          hostPath:
            path: /var/etcd

---

# This manifest installs the Service which gets traffic to the Calico
# etcd.
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: calico-etcd
  name: calico-etcd
  namespace: kube-system
spec:
  # Select the calico-etcd pod running on the master.
  selector:
    k8s-app: calico-etcd
  # This ClusterIP needs to be known in advance, since we cannot rely
  # on DNS to get access to etcd.
  #clusterIP: 10.96.232.136
  clusterIP: 10.3.0.190
  ports:
    - port: 6666

---

# This manifest installs the calico/node container, as well
# as the Calico CNI plugins and network config on
# each master and worker node in a Kubernetes cluster.
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
  name: calico-node
  namespace: kube-system
  labels:
    k8s-app: calico-node
spec:
  selector:
    matchLabels:
      k8s-app: calico-node
  template:
    metadata:
      labels:
        k8s-app: calico-node
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
        scheduler.alpha.kubernetes.io/tolerations: |
          [{"key": "dedicated", "value": "master", "effect": "NoSchedule" },
           {"key": "CriticalAddonsOnly", "operator": "Exists"}]
    spec:
      hostNetwork: true
      containers:
        # Runs calico/node container on each Kubernetes node. This
        # container programs network policy and routes on each
        # host.
        - name: calico-node
          image: quay.io/calico/node:v1.0.2
          env:
            # The location of the Calico etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # Enable BGP. Disable to enforce policy only.
            - name: CALICO_NETWORKING
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: enable_bgp
            # Disable file logging so `kubectl logs` works.
            - name: CALICO_DISABLE_FILE_LOGGING
              value: "true"
            # Set Felix endpoint to host default action to ACCEPT.
            - name: FELIX_DEFAULTENDPOINTTOHOSTACTION
              value: "ACCEPT"
            # Don't configure a default pool. This is done by the Job
            # below.
            - name: NO_DEFAULT_POOLS
              value: "true"
            # Auto-detect the BGP IP address.
            - name: IP
              value: ""
          securityContext:
            privileged: true
          volumeMounts:
            - mountPath: /lib/modules
              name: lib-modules
              readOnly: true
            - mountPath: /var/run/calico
              name: var-run-calico
              readOnly: false
        # This container installs the Calico CNI binaries
        # and CNI network config file on each node.
        - name: install-cni
          image: calico/cni:v1.5.6
          command: ["/install-cni.sh"]
          env:
            # The location of the Calico etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # The CNI network config to install on each node.
            - name: CNI_NETWORK_CONFIG
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: cni_network_config
          volumeMounts:
            - mountPath: /host/opt/cni/bin
              name: cni-bin-dir
            - mountPath: /host/etc/cni/net.d
              name: cni-net-dir
      volumes:
        # Used by calico/node.
        - name: lib-modules
          hostPath:
            path: /lib/modules
        - name: var-run-calico
          hostPath:
            path: /var/run/calico
        # Used to install CNI.
        - name: cni-bin-dir
          hostPath:
            path: /opt/cni/bin
        - name: cni-net-dir
          hostPath:
            path: /etc/cni/net.d

---

# This manifest deploys the Calico policy controller on Kubernetes.
# See https://github.com/projectcalico/k8s-policy
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: calico-policy-controller
  namespace: kube-system
  labels:
    k8s-app: calico-policy
spec:
  # The policy controller can only have a single active instance.
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      name: calico-policy-controller
      namespace: kube-system
      labels:
        k8s-app: calico-policy-controller
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
        scheduler.alpha.kubernetes.io/tolerations: |
          [{"key": "dedicated", "value": "master", "effect": "NoSchedule" },
           {"key": "CriticalAddonsOnly", "operator": "Exists"}]
    spec:
      # The policy controller must run in the host network namespace so that
      # it isn't governed by policy that would prevent it from working.
      hostNetwork: true
      containers:
        - name: calico-policy-controller
          image: calico/kube-policy-controller:v0.5.2
          env:
            # The location of the Calico etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # The location of the Kubernetes API. Use the default Kubernetes
            # service for API access.
            - name: K8S_API
              value: "https://kubernetes.default:443"
            # Since we're running in the host namespace and might not have KubeDNS
            # access, configure the container's /etc/hosts to resolve
            # kubernetes.default to the correct service clusterIP.
            - name: CONFIGURE_ETC_HOSTS
              value: "true"

---

# This manifest deploys a Job which performs one-time
# configuration of Calico.
apiVersion: batch/v1
kind: Job
metadata:
  name: configure-calico
  namespace: kube-system
  labels:
    k8s-app: calico
spec:
  template:
    metadata:
      name: configure-calico
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
        scheduler.alpha.kubernetes.io/tolerations: |
          [{"key": "dedicated", "value": "master", "effect": "NoSchedule" },
           {"key": "CriticalAddonsOnly", "operator": "Exists"}]
    spec:
      hostNetwork: true
      restartPolicy: OnFailure
      containers:
        # Writes basic configuration to the datastore.
        - name: configure-calico
          image: calico/ctl:v1.0.2
          args:
            - apply
            - -f
            - /etc/config/calico/ippool.yaml
          volumeMounts:
            - name: config-volume
              mountPath: /etc/config
          env:
            # The location of the etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
||||||
|
name: calico-config
|
||||||
|
key: etcd_endpoints
|
||||||
|
volumes:
|
||||||
|
- name: config-volume
|
||||||
|
configMap:
|
||||||
|
name: calico-config
|
||||||
|
items:
|
||||||
|
- key: ippool.yaml
|
||||||
|
path: calico/ippool.yaml
|
|
@ -0,0 +1,23 @@
---
kind: Service
apiVersion: v1
metadata:
  name: cluster-ha
spec:
  clusterIP: None
  ports:
    - protocol: TCP
      port: 443
      targetPort: 443
---
kind: Endpoints
apiVersion: v1
metadata:
  name: cluster-ha
subsets:
  - addresses:
{% for node in groups['master'] %}
      - ip: {{ hostvars[node]['ansible_default_ipv4']['address'] }}
{% endfor %}
    ports:
      - port: 443
@ -0,0 +1,53 @@
# This ConfigMap is used to configure a self-hosted Calico installation.
# Becomes kube-flannel-cfg.yaml once deployed on the target host.
kind: ConfigMap
apiVersion: v1
metadata:
  name: calico-config
  namespace: kube-system
data:
  # Configure this with the location of your etcd cluster.
  etcd_endpoints: "http://10.23.19.16:2379"
  #etcd_endpoints: "http://127.0.0.1:2379"

  # Configure the Calico backend to use.
  calico_backend: "bird"

  # The CNI network configuration to install on each node.
  cni_network_config: |-
    {
        "name": "k8s-pod-network",
        "type": "calico",
        "etcd_endpoints": "__ETCD_ENDPOINTS__",
        "etcd_key_file": "__ETCD_KEY_FILE__",
        "etcd_cert_file": "__ETCD_CERT_FILE__",
        "etcd_ca_cert_file": "__ETCD_CA_CERT_FILE__",
        "log_level": "info",
        "ipam": {
            "type": "calico-ipam"
        },
        "policy": {
            "type": "k8s",
            "k8s_api_root": "https://__KUBERNETES_SERVICE_HOST__:__KUBERNETES_SERVICE_PORT__",
            "k8s_auth_token": "__SERVICEACCOUNT_TOKEN__"
        },
        "kubernetes": {
            "kubeconfig": "__KUBECONFIG_FILEPATH__"
        }
    }

  # The default IP Pool to be created for the cluster.
  # Pod IP addresses will be assigned from this pool.
  ippool.yaml: |
    apiVersion: v1
    kind: ipPool
    metadata:
      cidr: 10.2.0.0/16
    spec:
      nat-outgoing: true

  # If you're using TLS-enabled etcd, uncomment the following.
  # You must also populate the Secret below with these files.
  etcd_ca: "" # "/calico-secrets/etcd-ca"
  etcd_cert: "" # "/calico-secrets/etcd-cert"
  etcd_key: "" # "/calico-secrets/etcd-key"
@ -0,0 +1,286 @@
# This manifest installs the calico/node container, as well
# as the Calico CNI plugins and network config, on
# each master and worker node in a Kubernetes cluster.
# This file becomes kube-flannel.yaml once deployed, overwriting the default bootkube deployment.
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
  name: calico-node
  namespace: kube-system
  labels:
    k8s-app: calico-node
spec:
  selector:
    matchLabels:
      k8s-app: calico-node
  template:
    metadata:
      labels:
        k8s-app: calico-node
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
        scheduler.alpha.kubernetes.io/tolerations: |
          [{"key": "dedicated", "value": "master", "effect": "NoSchedule" },
           {"key":"CriticalAddonsOnly", "operator":"Exists"}]
    spec:
      hostNetwork: true
      containers:
        # Runs the calico/node container on each Kubernetes node. This
        # container programs network policy and routes on each
        # host.
        - name: calico-node
          image: quay.io/calico/node:v1.1.1
          env:
            # The location of the Calico etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # Choose the backend to use.
            - name: CALICO_NETWORKING_BACKEND
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: calico_backend
            # Disable file logging so `kubectl logs` works.
            - name: CALICO_DISABLE_FILE_LOGGING
              value: "true"
            # Set Felix endpoint to host default action to ACCEPT.
            - name: FELIX_DEFAULTENDPOINTTOHOSTACTION
              value: "ACCEPT"
            # Don't configure a default pool. This is done by the Job
            # below.
            - name: NO_DEFAULT_POOLS
              value: "true"
            - name: FELIX_LOGSEVERITYSCREEN
              value: "info"
            # Location of the CA certificate for etcd.
            - name: ETCD_CA_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_ca
            # Location of the client key for etcd.
            - name: ETCD_KEY_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_key
            # Location of the client certificate for etcd.
            - name: ETCD_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_cert
            # Auto-detect the BGP IP address.
            - name: IP
              value: ""
          securityContext:
            privileged: true
          volumeMounts:
            - mountPath: /lib/modules
              name: lib-modules
              readOnly: true
            - mountPath: /var/run/calico
              name: var-run-calico
              readOnly: false
            # - mountPath: /calico-secrets
            #   name: etcd-certs
        # This container installs the Calico CNI binaries
        # and CNI network config file on each node.
        - name: install-cni
          image: quay.io/calico/cni:v1.6.2
          command: ["/install-cni.sh"]
          env:
            # The location of the Calico etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # The CNI network config to install on each node.
            - name: CNI_NETWORK_CONFIG
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: cni_network_config
          volumeMounts:
            - mountPath: /host/opt/cni/bin
              name: cni-bin-dir
            - mountPath: /host/etc/cni/net.d
              name: cni-net-dir
            # - mountPath: /calico-secrets
            #   name: etcd-certs
      volumes:
        # Used by calico/node.
        - name: lib-modules
          hostPath:
            path: /lib/modules
        - name: var-run-calico
          hostPath:
            path: /var/run/calico
        # Used to install CNI.
        - name: cni-bin-dir
          hostPath:
            path: /opt/cni/bin
        - name: cni-net-dir
          hostPath:
            path: /etc/cni/net.d
        # Mount in the etcd TLS secrets.
        # - name: etcd-certs
        #   secret:
        #     secretName: calico-etcd-secrets

---

# This manifest deploys the Calico policy controller on Kubernetes.
# See https://github.com/projectcalico/k8s-policy
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: calico-policy-controller
  namespace: kube-system
  labels:
    k8s-app: calico-policy
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ''
    scheduler.alpha.kubernetes.io/tolerations: |
      [{"key": "dedicated", "value": "master", "effect": "NoSchedule" },
       {"key":"CriticalAddonsOnly", "operator":"Exists"}]
spec:
  # The policy controller can only have a single active instance.
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      name: calico-policy-controller
      namespace: kube-system
      labels:
        k8s-app: calico-policy
    spec:
      # The policy controller must run in the host network namespace so that
      # it isn't governed by policy that would prevent it from working.
      hostNetwork: true
      containers:
        - name: calico-policy-controller
          image: quay.io/calico/kube-policy-controller:v0.5.4
          env:
            # The location of the Calico etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # Location of the CA certificate for etcd.
            - name: ETCD_CA_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_ca
            # Location of the client key for etcd.
            - name: ETCD_KEY_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_key
            # Location of the client certificate for etcd.
            - name: ETCD_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_cert
            # The location of the Kubernetes API. Use the default Kubernetes
            # service for API access.
            - name: K8S_API
              value: "https://kubernetes.default:443"
            # Since we're running in the host namespace and might not have KubeDNS
            # access, configure the container's /etc/hosts to resolve
            # kubernetes.default to the correct service clusterIP.
            - name: CONFIGURE_ETC_HOSTS
              value: "true"
          # volumeMounts:
          #   # Mount in the etcd TLS secrets.
          #   - mountPath: /calico-secrets
          #     name: etcd-certs
      # volumes:
      #   # Mount in the etcd TLS secrets.
      #   - name: etcd-certs
      #     secret:
      #       secretName: calico-etcd-secrets

---

# This manifest deploys a Job which performs one-time
# configuration of Calico.
apiVersion: batch/v1
kind: Job
metadata:
  name: configure-calico
  namespace: kube-system
  labels:
    k8s-app: calico
spec:
  template:
    metadata:
      name: configure-calico
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
        scheduler.alpha.kubernetes.io/tolerations: |
          [{"key": "dedicated", "value": "master", "effect": "NoSchedule" },
           {"key":"CriticalAddonsOnly", "operator":"Exists"}]
    spec:
      hostNetwork: true
      restartPolicy: OnFailure
      containers:
        # Writes basic configuration to the datastore.
        - name: configure-calico
          image: calico/ctl:v1.1.1
          args:
            - apply
            - -f
            - /etc/config/calico/ippool.yaml
          volumeMounts:
            - name: config-volume
              mountPath: /etc/config
            # Mount in the etcd TLS secrets.
            # - mountPath: /calico-secrets
            #   name: etcd-certs
          env:
            # The location of the etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # Location of the CA certificate for etcd.
            - name: ETCD_CA_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_ca
            # Location of the client key for etcd.
            - name: ETCD_KEY_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_key
            # Location of the client certificate for etcd.
            - name: ETCD_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_cert
      volumes:
        - name: config-volume
          configMap:
            name: calico-config
            items:
              - key: ippool.yaml
                path: calico/ippool.yaml
        # Mount in the etcd TLS secrets.
        # - name: etcd-certs
        #   secret:
        #     secretName: calico-etcd-secrets
@ -0,0 +1,27 @@
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStart=/usr/bin/kubelet \
  --kubeconfig=/etc/kubernetes/kubeconfig \
  --require-kubeconfig \
  --cni-conf-dir=/etc/cni/net.d \
  --cni-bin-dir=/opt/cni/bin \
  --network-plugin=cni \
  --lock-file=/var/run/lock/kubelet.lock \
  --exit-on-lock-contention \
  --pod-manifest-path=/etc/kubernetes/manifests \
  --allow-privileged \
  --minimum-container-ttl-duration=6m0s \
  --cluster_dns=10.3.0.10 \
  --cluster_domain=cluster.local \
  --node-labels=master={{ node_master|default('false') }} \
  --hostname-override={{ inventory_hostname }} \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
@ -0,0 +1,26 @@
# Default override-able variables for the bootstrap role
boot_kube_version: "v0.3.13"
bootstrap_enabled: "true"

# For DNS resiliency, override this with an FQDN in your environment which resolves to all "master" servers
api_server_fqdn: "kubeapi.test.local"

# Default override-able variables for the kubelet role
cni_version: "v0.5.2"
hyperkube_version: "v1.5.6"
kubelet_version: "v1.5.6"
calicoctl_version: "v1.1.0"

# Calico peering - physical switch fabric IPs
calico_peer1: 10.23.21.2
calico_peer2: 10.23.21.3

# Kubernetes add-ons:
# Optional items: kube_dashboard, kube_helm (more to come).
addons_enabled: false
addons:
  - dashboard
  - helm
  - osh
  - ceph
  - maas
@ -0,0 +1 @@
192.168.4.64
@ -0,0 +1,27 @@
- hosts: bootstrap
  remote_user: ubuntu
  become: yes
  become_method: sudo
  roles:
    - deploy-bootstrap

- hosts: master
  remote_user: ubuntu
  become: yes
  become_method: sudo
  roles:
    - deploy-kubelet

- hosts: workers
  remote_user: ubuntu
  become: yes
  become_method: sudo
  roles:
    - deploy-kubelet

#- hosts: master
#  remote_user: ubuntu
#  become: yes
#  become_method: sudo
#  roles:
#    - deploy-addons