Merge remote-tracking branch 'origin/master' into calico-etcd

This commit is contained in:
Mark Burnett 2017-07-18 12:58:16 -05:00
commit 8ea46db324
25 changed files with 290 additions and 153 deletions

View File

@ -23,6 +23,8 @@ The detailed Roadmap can be viewed on the
To get started, see [getting started](docs/getting-started.md).
Configuration is documented [here](docs/configuration.md).
## Bugs
Bugs are tracked in

159
docs/configuration.md Normal file
View File

@ -0,0 +1,159 @@
# Promenade Configuration
Promenade is configured using a set of Kubernetes-like YAML documents. Many of
these documents can be derived automatically from a few core configuration
documents or generated outright (e.g. certificates). All of these
documents can be specified in detail, allowing fine-grained control over
cluster deployment.
Generally, these documents have the following form:
```yaml
---
apiVersion: promenade/v1
kind: Kind
metadata:
  compliant: metadata
spec:
  detailed: data
```
`apiVersion` identifies the document as Promenade configuration. Currently
only `promenade/v1` is supported.
`kind` describes the detailed type of document. Valid kinds are:
- `Certificate` - An x509 certificate.
- `CertificateAuthority` - An x509 certificate authority certificate.
- `CertificateAuthorityKey` - The private key for a certificate authority.
- `CertificateKey` - The private key for a certificate.
- `Cluster` - Cluster configuration containing node host names, IPs & roles.
- `Etcd` - Specific configuration for an etcd cluster.
- `Masters` - Host names and IPs of master nodes.
- `Network` - Configuration details for Kubernetes networking components.
- `Node` - Specific configuration for a single host.
- `PrivateKey` - A private key, e.g. the `controller-manager`'s token signing key.
- `PublicKey` - A public key, e.g. the key for verifying service account tokens.
- `Versions` - Specifies versions of packages and images to be deployed.
`metadata` is used to select specific documents of a given `kind`. For
example, the various services must each select their specific `Certificate`s.
`metadata` is also used by Drydock to select the configuration files that are
needed for a particular node.
`spec` contains specific data for each kind of configuration document.
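To make selection concrete, here is a minimal Python sketch of how documents of
this shape can be loaded and filtered by `kind` and `metadata`. The `Document`
wrapper mirrors the one in this commit's Python diff (item access reads from
`spec`); the `load_documents` and `select` helpers and the `name='apiserver'`
example are illustrative, not part of Promenade's API.

```python
import yaml


class Document:
    """Minimal stand-in for Promenade's Document wrapper: item access
    reads from ``spec``; ``kind`` and ``metadata`` are top-level."""

    def __init__(self, data):
        self.data = data

    @property
    def kind(self):
        return self.data['kind']

    @property
    def metadata(self):
        return self.data['metadata']

    def __getitem__(self, key):
        return self.data['spec'][key]


def load_documents(path):
    """Hypothetical helper: load every document in a multi-document YAML file."""
    with open(path) as f:
        return [Document(d) for d in yaml.safe_load_all(f) if d]


def select(documents, kind, **metadata):
    """Return documents matching a kind and all given metadata fields."""
    return [
        doc for doc in documents
        if doc.kind == kind
        and all(doc.metadata.get(k) == v for k, v in metadata.items())
    ]


# e.g. select(load_documents('configs/n0.yaml'), 'Certificate', name='apiserver')
```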
## Generating Configuration from Minimal Input
To construct a complete set of cluster configuration, the minimal inputs are
the `Cluster`, `Network`, and `Versions` documents. For complete examples of
these, see the [example](example/vagrant-input-config.yaml).
The `Cluster` configuration must contain an entry for each host for which
configuration should be generated. Each host must contain an `ip`, and
optionally `roles` and `additional_labels`. Valid `roles` are currently
`genesis` and `master`. `additional_labels` are Kubernetes labels which will
be added to the node.
Here's an example `Cluster` document:
```yaml
apiVersion: promenade/v1
kind: Cluster
metadata:
  name: example
  target: none
spec:
  nodes:
    n0:
      ip: 192.168.77.10
      roles:
        - master
        - genesis
      additional_labels:
        - beta.kubernetes.io/arch=amd64
```
The `Network` document must contain:
- `cluster_domain` - The domain for the cluster, e.g. `cluster.local`.
- `cluster_dns` - The IP of the cluster DNS, e.g. `10.96.0.10`.
- `kube_service_ip` - The IP of the `kubernetes` service, e.g. `10.96.0.1`.
- `pod_ip_cidr` - The CIDR from which pod IPs will be assigned, e.g. `10.97.0.0/16`.
- `service_ip_cidr` - The CIDR from which service IPs will be assigned, e.g. `10.96.0.0/16`.
- `etcd_service_ip` - The IP address of the `etcd` service, e.g. `10.96.232.136`.
- `dns_servers` - A list of upstream DNS server IPs.
Optionally, proxy settings can be specified here as well. These should all
generally be set together: `http_proxy`, `https_proxy`, `no_proxy`.
Here's an example `Network` document:
```yaml
apiVersion: promenade/v1
kind: Network
metadata:
  cluster: example
  name: example
  target: all
spec:
  cluster_domain: cluster.local
  cluster_dns: 10.96.0.10
  kube_service_ip: 10.96.0.1
  pod_ip_cidr: 10.97.0.0/16
  service_ip_cidr: 10.96.0.0/16
  etcd_service_ip: 10.96.232.136
  dns_servers:
    - 8.8.8.8
    - 8.8.4.4
  http_proxy: http://proxy.example.com:8080
  https_proxy: http://proxy.example.com:8080
  no_proxy: 192.168.77.10,127.0.0.1,kubernetes
```
The `Versions` document must define the Promenade image to be used and the
Docker package version. Currently, only the versions specified for these two
items are respected.
Here's an example `Versions` document:
```yaml
apiVersion: promenade/v1
kind: Versions
metadata:
  cluster: example
  name: example
  target: all
spec:
  images:
    promenade: quay.io/attcomdev/promenade:latest
  packages:
    docker: docker.io=1.12.6-0ubuntu1~16.04.1
```
Given these documents (see the [example](example/vagrant-input-config.yaml)),
Promenade can derive the remaining configuration and generate certificates and
keys using the following command:
```bash
mkdir -p configs
docker run --rm -t \
    -v $(pwd):/target \
    quay.io/attcomdev/promenade:latest \
    promenade -v generate \
        -c /target/example/vagrant-input-config.yaml \
        -o /target/configs
```
This will generate the following files in the `configs` directory:
- `up.sh` - A script which will bring up a node to create or join a cluster.
- `admin-bundle.yaml` - A collection of generated certificates, private keys
and core configuration.
- `complete-bundle.yaml` - A set of generated documents suitable for upload
  into Drydock, for later delivery to nodes being provisioned to join the
  cluster.
Additionally, a YAML file for each host described in the `Cluster` document
will be placed here. These files each contain every document needed for that
particular node to create or join the cluster.
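For a quick sanity check of what one of these per-host files contains, the
bundled document kinds can be listed with a few lines of Python. This is a
convenience sketch, not part of Promenade; it assumes PyYAML is installed and
that a generated `configs/n0.yaml` exists (as in the Vagrant example).

```python
import yaml

# List the kinds of every document bundled into one node's configuration.
with open('configs/n0.yaml') as f:
    kinds = sorted({d['kind'] for d in yaml.safe_load_all(f) if d})

print(kinds)  # e.g. ['Certificate', 'CertificateKey', 'Cluster', ...]
```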

View File

@ -7,11 +7,11 @@
Make sure you have [Vagrant](https://vagrantup.com) and
[VirtualBox](https://www.virtualbox.org/wiki/Downloads) installed.
Generate the certificates and keys to be used:
Generate the per-host configuration, certificates and keys to be used:
```bash
mkdir configs
docker run --rm -t -v $(pwd):/target quay.io/attcomdev/promenade:experimental promenade -v generate -c /target/example/vagrant-input-config.yaml -o /target/configs
docker run --rm -t -v $(pwd):/target quay.io/attcomdev/promenade:latest promenade -v generate -c /target/example/vagrant-input-config.yaml -o /target/configs
```
Start the VMs:
@ -23,44 +23,45 @@ vagrant up
Start the genesis node:
```bash
vagrant ssh n0 -c 'sudo /vagrant/genesis.sh /vagrant/configs/n0.yaml'
vagrant ssh n0 -c 'sudo /vagrant/configs/up.sh /vagrant/configs/n0.yaml'
```
Join the master nodes:
```bash
vagrant ssh n1 -c 'sudo /vagrant/join.sh /vagrant/configs/n1.yaml'
vagrant ssh n2 -c 'sudo /vagrant/join.sh /vagrant/configs/n2.yaml'
vagrant ssh n1 -c 'sudo /vagrant/configs/up.sh /vagrant/configs/n1.yaml'
vagrant ssh n2 -c 'sudo /vagrant/configs/up.sh /vagrant/configs/n2.yaml'
```
Join the worker node:
```bash
vagrant ssh n3 -c 'sudo /vagrant/join.sh /vagrant/configs/n3.yaml'
vagrant ssh n3 -c 'sudo /vagrant/configs/up.sh /vagrant/configs/n3.yaml'
```
### Development Cleanup
If you are testing/developing on hosts that cannot be easily destroyed, you may
find the `cleanup.sh` script useful.
To use Promenade from behind a proxy, simply add proxy settings to the
promenade `Network` configuration document using the keys `http_proxy`,
`https_proxy`, and `no_proxy` before running `generate`.
Note that it is important to specify `no_proxy` to include `kubernetes` and the
IP addresses of all the master nodes.
### Building the image
```bash
docker build -t quay.io/attcomdev/promenade:experimental .
docker build -t promenade:local .
```
For development, you may wish to save it and have the `genesis.sh` and
`join.sh` scripts load it:
For development, you may wish to save it and have the `up.sh` script load it:
```bash
docker save -o promenade.tar quay.io/attcomdev/promenade:experimental
docker save -o promenade.tar promenade:local
```
Then on a node:
```bash
PROMENADE_LOAD_IMAGE=/vagrant/promenade.tar /vagrant/genesis.sh /vagrant/path/to/node-config.yaml
PROMENADE_LOAD_IMAGE=/vagrant/promenade.tar /vagrant/up.sh /vagrant/path/to/node-config.yaml
```
To build the image from behind a proxy, you can:
@ -68,7 +69,7 @@ To build the image from behind a proxy, you can:
```bash
export http_proxy=...
export no_proxy=...
docker build --build-arg http_proxy=$http_proxy --build-arg https_proxy=$http_proxy --build-arg no_proxy=$no_proxy -t quay.io/attcomdev/promenade:experimental .
docker build --build-arg http_proxy=$http_proxy --build-arg https_proxy=$http_proxy --build-arg no_proxy=$no_proxy -t promenade:local .
```
## Using Promenade Behind a Proxy
@ -81,5 +82,5 @@ cd /vagrant
export DOCKER_HTTP_PROXY="http://proxy.server.com:8080"
export DOCKER_HTTPS_PROXY="https://proxy.server.com:8080"
export DOCKER_NO_PROXY="localhost,127.0.0.1"
sudo -E /vagrant/genesis.sh /vagrant/configs/n0.yaml
sudo -E /vagrant/up.sh /vagrant/configs/n0.yaml
```

View File

@ -54,3 +54,37 @@ spec:
    - 8.8.4.4
  #http_proxy: http://proxy.example.com:8080
  #https_proxy: https://proxy.example.com:8080
---
apiVersion: promenade/v1
kind: Versions
metadata:
  cluster: example
  name: example
  target: all
spec:
  images:
    armada: quay.io/attcomdev/armada:latest
    calico:
      cni: quay.io/calico/cni:v1.9.1
      etcd: quay.io/coreos/etcd:v3.0.17
      node: quay.io/calico/node:v1.3.0
      policy-controller: quay.io/calico/kube-policy-controller:v0.6.0
    kubernetes:
      apiserver: gcr.io/google_containers/hyperkube-amd64:v1.6.4
      controller-manager: quay.io/attcomdev/kube-controller-manager:v1.6.4
      dns:
        dnsmasq: gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.2
        kubedns: gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.2
        sidecar: gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.2
      etcd: quay.io/coreos/etcd:v3.0.17
      kubectl: gcr.io/google_containers/hyperkube-amd64:v1.6.4
      proxy: gcr.io/google_containers/hyperkube-amd64:v1.6.4
      scheduler: gcr.io/google_containers/hyperkube-amd64:v1.6.4
    promenade: quay.io/attcomdev/promenade:latest
    tiller: gcr.io/kubernetes-helm/tiller:v2.4.2
  packages:
    docker: docker.io=1.12.6-0ubuntu1~16.04.1
    dnsmasq: dnsmasq=2.75-1ubuntu0.16.04.2
    socat: socat=1.7.3.1-1
  additional_packages:
    - ceph-common=10.2.7-0ubuntu0.16.04.1

75
join.sh
View File

@ -1,75 +0,0 @@
#!/usr/bin/env bash
if [ "$(id -u)" != "0" ]; then
    echo "This script must be run as root." 1>&2
    exit 1
fi
set -ex
#Promenade Variables
DOCKER_PACKAGE="docker.io"
DOCKER_VERSION=1.12.6-0ubuntu1~16.04.1
#Proxy Variables
DOCKER_HTTP_PROXY=${DOCKER_HTTP_PROXY:-${HTTP_PROXY:-${http_proxy}}}
DOCKER_HTTPS_PROXY=${DOCKER_HTTPS_PROXY:-${HTTPS_PROXY:-${https_proxy}}}
DOCKER_NO_PROXY=${DOCKER_NO_PROXY:-${NO_PROXY:-${no_proxy}}}
mkdir -p /etc/docker
cat <<EOS > /etc/docker/daemon.json
{
    "live-restore": true,
    "storage-driver": "overlay2"
}
EOS
#Configuration for Docker Behind a Proxy
mkdir -p /etc/systemd/system/docker.service.d
#Set HTTP Proxy Variable
cat <<EOF > /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=${DOCKER_HTTP_PROXY}"
EOF
#Set HTTPS Proxy Variable
cat <<EOF > /etc/systemd/system/docker.service.d/https-proxy.conf
[Service]
Environment="HTTPS_PROXY=${DOCKER_HTTPS_PROXY}"
EOF
#Set No Proxy Variable
cat <<EOF > /etc/systemd/system/docker.service.d/no-proxy.conf
[Service]
Environment="NO_PROXY=${DOCKER_NO_PROXY}"
EOF
#Reload systemd and docker if present
systemctl daemon-reload
systemctl restart docker || true
export DEBIAN_FRONTEND=noninteractive
apt-get update -qq
apt-get install -y -qq --no-install-recommends \
    $DOCKER_PACKAGE=$DOCKER_VERSION
if [ -f "${PROMENADE_LOAD_IMAGE}" ]; then
    echo === Loading updated promenade image ===
    docker load -i "${PROMENADE_LOAD_IMAGE}"
fi
docker pull quay.io/attcomdev/promenade:experimental
docker run -t --rm \
    -v /:/target \
    quay.io/attcomdev/promenade:experimental \
    promenade \
        -v \
        join \
        --hostname $(hostname) \
        --config-path /target$(realpath $1)
touch /var/lib/prom.done

View File

@ -25,34 +25,13 @@ def promenade(*, verbose):
              type=click.Path(exists=True, file_okay=False,
                              dir_okay=True, resolve_path=True),
              help='Location where templated files will be placed.')
def genesis(*, asset_dir, config_path, hostname, target_dir):
def up(*, asset_dir, config_path, hostname, target_dir):
    op = operator.Operator.from_config(config_path=config_path,
                                       hostname=hostname,
                                       target_dir=target_dir)
    op.genesis(asset_dir=asset_dir)


@promenade.command(help='Join an existing cluster')
@click.option('-a', '--asset-dir', default='/assets',
              type=click.Path(exists=True, file_okay=False,
                              dir_okay=True, resolve_path=True),
              help='Source path for binaries to deploy.')
@click.option('-c', '--config-path', type=click.File(),
              help='Location of cluster configuration data.')
@click.option('--hostname', help='Current hostname.')
@click.option('-t', '--target-dir', default='/target',
              type=click.Path(exists=True, file_okay=False,
                              dir_okay=True, resolve_path=True),
              help='Location where templated files will be placed.')
def join(*, asset_dir, config_path, hostname, target_dir):
    op = operator.Operator.from_config(config_path=config_path,
                                       hostname=hostname,
                                       target_dir=target_dir)
    op.join(asset_dir=asset_dir)
    op.up(asset_dir=asset_dir)
@promenade.command(help='Generate certs and keys')

View File

@ -33,6 +33,7 @@ class Document:
        'Node',
        'PrivateKey',
        'PublicKey',
        'Versions',
    }

    def __init__(self, data):
@ -68,6 +69,9 @@ class Document:
    def __getitem__(self, key):
        return self.data['spec'][key]

    def get(self, key, default=None):
        return self.data['spec'].get(key, default)


class Configuration:
    def __init__(self, documents):
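A short illustration of why the new `get` complements `__getitem__` on
`Document`: required `spec` keys should fail loudly, while optional ones (such
as proxy settings) should default quietly. The sample document below is
hypothetical; the access pattern matches the `up.sh` template later in this
commit.

```python
# Hypothetical Network document wrapped in the Document class from this file.
doc = Document({
    'apiVersion': 'promenade/v1',
    'kind': 'Network',
    'metadata': {'name': 'example'},
    'spec': {'cluster_domain': 'cluster.local'},
})

doc['cluster_domain']      # 'cluster.local' -- required; raises KeyError if absent
doc.get('http_proxy', '')  # ''              -- optional; this is what lets templates
                           # write config['Network'].get('http_proxy', '')
```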

View File

@ -1,4 +1,4 @@
from . import config, logging, pki
from . import config, logging, pki, renderer
import os
__all__ = ['Generator']
@ -18,7 +18,7 @@ class Generator:
        self.validate()

    def validate(self):
        required_kinds = ['Cluster', 'Network']
        required_kinds = ['Cluster', 'Network', 'Versions']
        for required_kind in required_kinds:
            try:
                self.input_config[required_kind]
@ -30,9 +30,17 @@ class Generator:
        assert self.input_config['Cluster'].metadata['name'] \
            == self.input_config['Network'].metadata['cluster']

    def generate_up_sh(self, output_dir):
        r = renderer.Renderer(config=self.input_config,
                              target_dir=output_dir)
        r.render_generate_files()

    def generate_all(self, output_dir):
        self.generate_up_sh(output_dir)

        cluster = self.input_config['Cluster']
        network = self.input_config['Network']
        versions = self.input_config['Versions']

        cluster_name = cluster.metadata['name']
        LOG.info('Generating configuration for cluster "%s"', cluster_name)
@ -107,6 +115,7 @@ class Generator:
            network,
            sa_priv,
            sa_pub,
            versions,
        ]

        for hostname, data in cluster['nodes'].items():
@ -158,6 +167,7 @@ class Generator:
            node,
            proxy_cert,
            proxy_cert_key,
            versions,
        ]

        role_specific_documents = []

View File

@ -19,13 +19,7 @@ class Operator:
        self.hostname = hostname
        self.target_dir = target_dir

    def genesis(self, *, asset_dir=None):
        self.setup(asset_dir=asset_dir)

    def join(self, *, asset_dir=None):
        self.setup(asset_dir=asset_dir)

    def setup(self, *, asset_dir):
    def up(self, *, asset_dir):
        self.rsync_from(asset_dir)
        self.render()

View File

@ -15,6 +15,9 @@ class Renderer:
        self.config = config
        self.target_dir = target_dir

    def render_generate_files(self):
        self.render_template_dir('generate')

    def render(self):
        for template_dir in self.config['Node']['templates']:
            self.render_template_dir(template_dir)
@ -38,7 +41,9 @@ class Renderer:
        LOG.debug('Templating "%s" into "%s"', path, target_path)

        env = jinja2.Environment(undefined=jinja2.StrictUndefined)
        env = jinja2.Environment(
            loader=jinja2.PackageLoader('promenade', 'templates/include'),
            undefined=jinja2.StrictUndefined)
        env.filters['b64enc'] = _base64_encode

        with open(path) as f:
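Previously the environment had no loader, so `{% include %}` could not resolve
anything; adding `PackageLoader('promenade', 'templates/include')` is what lets
the new genesis/join wrapper templates and the Drydock `ConfigMap` pull in
`up.sh` by name. A self-contained sketch of the mechanism, using a `DictLoader`
in place of the package directory:

```python
import jinja2

# A loader is what makes {% include "name" %} resolvable; Promenade points
# one at the promenade/templates/include package directory instead.
env = jinja2.Environment(
    loader=jinja2.DictLoader({'up.sh': '#!/usr/bin/env bash\nset -ex\n'}),
    undefined=jinja2.StrictUndefined)  # typos fail loudly, not as empty strings

print(env.from_string('{% include "up.sh" %}').render())
```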

View File

@ -12,7 +12,7 @@ metadata:
spec:
  containers:
    - name: kube-proxy
      image: gcr.io/google_containers/hyperkube-amd64:v1.6.4
      image: {{ config['Versions']['images']['kubernetes']['proxy'] }}
      command:
        - /hyperkube
        - proxy

View File

@ -5,9 +5,11 @@ set -ex
export DEBIAN_FRONTEND=noninteractive
apt-get install -y --no-install-recommends \
    ceph-common \
    dnsmasq \
    socat
{%- for package in config['Versions']['additional_packages'] %}
    {{ package }} \
{%- endfor %}
    {{ config['Versions']['packages']['dnsmasq'] }} \
    {{ config['Versions']['packages']['socat'] }}
systemctl daemon-reload
systemctl enable kubelet

View File

@ -0,0 +1 @@
{% include "up.sh" with context %}

View File

@ -0,0 +1,18 @@
---
apiVersion: v1
kind: Namespace
metadata:
  name: drydock
spec: {}
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: promenade-join-sh
  namespace: drydock
data:
  join.sh: |-
{%- filter indent(4, True) %}
{% include "up.sh" %}
{%- endfilter %}
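The `indent(4, True)` filter is what keeps the embedded script valid YAML:
every line of the included `up.sh`, including the first (`True` sets the
filter's `first` argument), is shifted four spaces so it nests under the
`join.sh: |-` block scalar. A minimal sketch of the same pattern, with an
illustrative `script` value standing in for the include:

```python
import jinja2

env = jinja2.Environment(undefined=jinja2.StrictUndefined)
template = env.from_string(
    'data:\n'
    '  join.sh: |-\n'
    '{%- filter indent(4, True) %}\n'
    '{{ script }}\n'
    '{%- endfilter %}\n')

# Every line of the script lands indented four spaces under the block scalar.
print(template.render(script='#!/usr/bin/env bash\nset -ex'))
```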

View File

@ -48,7 +48,7 @@ spec:
      - env:
        - name: TILLER_NAMESPACE
          value: kube-system
        image: gcr.io/kubernetes-helm/tiller:v2.4.2
        image: {{ config['Versions']['images']['tiller'] }}
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3

View File

@ -129,7 +129,7 @@ spec:
        # container programs network policy and routes on each
        # host.
        - name: calico-node
          image: quay.io/calico/node:v1.3.0
          image: {{ config['Versions']['images']['calico']['node'] }}
          env:
            # The location of the Calico etcd cluster.
            - name: ETCD_ENDPOINTS
@ -208,7 +208,7 @@ spec:
        # This container installs the Calico CNI binaries
        # and CNI network config file on each node.
        - name: install-cni
          image: quay.io/calico/cni:v1.9.1
          image: {{ config['Versions']['images']['calico']['cni'] }}
          command: ["/install-cni.sh"]
          env:
            # The location of the Calico etcd cluster.
@ -297,7 +297,7 @@ spec:
      serviceAccountName: calico-policy-controller
      containers:
        - name: calico-policy-controller
          image: quay.io/calico/kube-policy-controller:v0.6.0
          image: {{ config['Versions']['images']['calico']['policy-controller'] }}
          env:
            # The location of the Calico etcd cluster.
            - name: ETCD_ENDPOINTS

View File

@ -95,7 +95,7 @@ spec:
        env:
        - name: PROMETHEUS_PORT
          value: "10055"
        image: gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.2
        image: {{ config['Versions']['images']['kubernetes']['dns']['kubedns'] }}
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 10053
@ -140,7 +140,7 @@ spec:
        - --server=/cluster.local/127.0.0.1#10053
        - --server=/in-addr.arpa/127.0.0.1#10053
        - --server=/ip6.arpa/127.0.0.1#10053
        image: gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.2
        image: {{ config['Versions']['images']['kubernetes']['dns']['dnsmasq'] }}
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 5
@ -174,7 +174,7 @@ spec:
        - --logtostderr
        - --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,A
        - --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,A
        image: gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.2
        image: {{ config['Versions']['images']['kubernetes']['dns']['sidecar'] }}
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 5

View File

@ -11,7 +11,7 @@ spec:
  hostNetwork: true
  containers:
    - name: loader
      image: quay.io/attcomdev/armada:master
      image: {{ config['Versions']['images']['armada'] }}
      imagePullPolicy: Always # We are following a moving branch for now.
      command:
        - /bin/bash

View File

@ -12,7 +12,7 @@ spec:
  hostNetwork: true
  containers:
    - name: loader
      image: gcr.io/google_containers/hyperkube-amd64:v1.6.4
      image: {{ config['Versions']['images']['kubernetes']['kubectl'] }}
      command:
        - /bin/bash
        - -c

View File

@ -11,7 +11,7 @@ spec:
  hostNetwork: true
  containers:
    - name: auxiliary-etcd-0
      image: quay.io/coreos/etcd:v3.2.1
      image: {{ config['Versions']['images']['kubernetes']['etcd'] }}
      env:
        - name: ETCD_NAME
          value: auxiliary-etcd-0
@ -66,7 +66,7 @@ spec:
          mountPath: /etc/kubernetes/auxiliary-etcd-0/pki
          readOnly: true
    - name: auxiliary-etcd-1
      image: quay.io/coreos/etcd:v3.2.1
      image: {{ config['Versions']['images']['kubernetes']['etcd'] }}
      env:
        - name: ETCD_NAME
          value: auxiliary-etcd-1
@ -121,7 +121,7 @@ spec:
          mountPath: /etc/kubernetes/auxiliary-etcd-1/pki
          readOnly: true
    - name: cluster-monitor
      image: quay.io/coreos/etcd:v3.2.1
      image: {{ config['Versions']['images']['kubernetes']['etcd'] }}
      command:
        - sh
        - -c

20
genesis.sh → promenade/templates/include/up.sh Executable file → Normal file
View File

@ -5,14 +5,18 @@ if [ "$(id -u)" != "0" ]; then
    exit 1
fi
if [ "x$1" = "x" ]; then
    echo "Path to node configuration required." 1>&2
    exit 1
fi
set -ex
#Promenade Variables
DOCKER_PACKAGE="docker.io"
DOCKER_VERSION=1.12.6-0ubuntu1~16.04.1
#Proxy Variables
http_proxy={{ config['Network'].get('http_proxy', '') }}
https_proxy={{ config['Network'].get('https_proxy', '') }}
no_proxy={{ config['Network'].get('no_proxy', '') }}
DOCKER_HTTP_PROXY=${DOCKER_HTTP_PROXY:-${HTTP_PROXY:-${http_proxy}}}
DOCKER_HTTPS_PROXY=${DOCKER_HTTPS_PROXY:-${HTTPS_PROXY:-${https_proxy}}}
DOCKER_NO_PROXY=${DOCKER_NO_PROXY:-${NO_PROXY:-${no_proxy}}}
@ -54,22 +58,20 @@ systemctl restart docker || true
export DEBIAN_FRONTEND=noninteractive
apt-get update -qq
apt-get install -y -qq --no-install-recommends \
    $DOCKER_PACKAGE=$DOCKER_VERSION \
    {{ config['Versions']['packages']['docker'] }}
if [ -f "${PROMENADE_LOAD_IMAGE}" ]; then
    echo === Loading updated promenade image ===
    docker load -i "${PROMENADE_LOAD_IMAGE}"
fi
docker pull quay.io/attcomdev/promenade:experimental
docker run -t --rm \
    --net host \
    -v /:/target \
    quay.io/attcomdev/promenade:experimental \
    {{ config['Versions']['images']['promenade'] }} \
    promenade \
        -v \
        genesis \
        up \
        --hostname $(hostname) \
        --config-path /target$(realpath $1) 2>&1

View File

@ -13,7 +13,7 @@ spec:
  hostNetwork: true
  containers:
    - name: kube-apiserver
      image: gcr.io/google_containers/hyperkube-amd64:v1.6.4
      image: {{ config['Versions']['images']['kubernetes']['apiserver'] }}
      command:
        - /hyperkube
        - apiserver
@ -24,6 +24,7 @@ spec:
        - --client-ca-file=/etc/kubernetes/pki/cluster-ca.pem
        - --insecure-port=0
        - --bind-address=0.0.0.0
        - --runtime-config=batch/v2alpha1=true
        - --secure-port=443
        - --allow-privileged=true
        - --etcd-servers=https://kubernetes:2379

View File

@ -13,7 +13,7 @@ spec:
  hostNetwork: true
  containers:
    - name: kube-controller-manager
      image: quay.io/attcomdev/kube-controller-manager:v1.6.4
      image: {{ config['Versions']['images']['kubernetes']['controller-manager'] }}
      command:
        - kube-controller-manager
        - --allocate-node-cidrs=true

View File

@ -11,7 +11,7 @@ spec:
  hostNetwork: true
  containers:
    - name: k8s-etcd
      image: quay.io/coreos/etcd:v3.2.1
      image: {{ config['Versions']['images']['kubernetes']['etcd'] }}
      env:
        - name: ETCD_NAME
          valueFrom:

View File

@ -13,7 +13,7 @@ spec:
  hostNetwork: true
  containers:
    - name: kube-scheduler
      image: gcr.io/google_containers/hyperkube-amd64:v1.6.4
      image: {{ config['Versions']['images']['kubernetes']['scheduler'] }}
      command:
        - ./hyperkube
        - scheduler