Installing Kubernetes v1.13.5 from Binaries (Part 2)

Note: this guide assumes prior experience with Kubernetes and keeps explanations brief. For server initialization, see the previous article, Installing Kubernetes v1.13.5 from Binaries (Part 1).

Node initialization

Three master nodes and three worker nodes:

hostname     os       vip         ip
k8s-master1  centos7  10.0.7.100  10.0.7.101
k8s-master2  centos7  10.0.7.100  10.0.7.102
k8s-master3  centos7  10.0.7.100  10.0.7.103
k8s-worker1  centos7              10.0.7.104
k8s-worker2  centos7              10.0.7.105
k8s-worker3  centos7              10.0.7.106

Add the following entries to the /etc/hosts file on every host:

10.0.7.101 k8s-master1
10.0.7.102 k8s-master2
10.0.7.103 k8s-master3
10.0.7.104 k8s-worker1
10.0.7.105 k8s-worker2
10.0.7.106 k8s-worker3

Install the Kubernetes binaries

wget https://storage.googleapis.com/kubernetes-release/release/v1.13.5/kubernetes-server-linux-amd64.tar.gz
tar xf kubernetes-server-linux-amd64.tar.gz
cp kubernetes/server/bin/kube{-apiserver,-controller-manager,-scheduler,ctl,-proxy,let} /usr/local/bin/

Copy the binaries to the other nodes (repeat for each remaining node):

scp  -r  /usr/local/bin root@k8s-master2:/usr/local
scp -r /usr/local/bin root@k8s-master3:/usr/local
scp -r /usr/local/bin root@k8s-worker1:/usr/local

Prepare the certificates

openssl certificate configuration file

mkdir -p /etc/kubernetes/pki/etcd
cd /etc/kubernetes/pki

cat <<EOF> /etc/kubernetes/pki/openssl.cnf
[ req ]
default_bits = 2048
default_md = sha256
distinguished_name = req_distinguished_name

[req_distinguished_name]

[ v3_ca ]
basicConstraints = critical, CA:TRUE
keyUsage = critical, digitalSignature, keyEncipherment, keyCertSign

[ v3_req_server ]
basicConstraints = CA:FALSE
keyUsage = critical, digitalSignature, keyEncipherment
extendedKeyUsage = serverAuth

[ v3_req_client ]
basicConstraints = CA:FALSE
keyUsage = critical, digitalSignature, keyEncipherment
extendedKeyUsage = clientAuth

[ v3_req_apiserver ]
basicConstraints = CA:FALSE
keyUsage = critical, digitalSignature, keyEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names_cluster

[ v3_req_etcd ]
basicConstraints = CA:FALSE
keyUsage = critical, digitalSignature, keyEncipherment
extendedKeyUsage = serverAuth, clientAuth
subjectAltName = @alt_names_etcd

[ alt_names_cluster ]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster.local
DNS.5 = k8s-master1
DNS.6 = k8s-master2
DNS.7 = k8s-master3
DNS.8 = localhost
IP.1 = 10.96.0.1
IP.2 = 127.0.0.1
IP.3 = 10.0.7.100
IP.4 = 10.0.7.101
IP.5 = 10.0.7.102
IP.6 = 10.0.7.103

[ alt_names_etcd ]
DNS.1 = localhost
IP.1 = 10.0.7.101
IP.2 = 10.0.7.102
IP.3 = 10.0.7.103
IP.4 = 127.0.0.1
EOF

Generate the CA certificates

path                    Default CN                 description
ca.crt,key              kubernetes-ca              Kubernetes general CA
etcd/ca.crt,key         etcd-ca                    For all etcd-related functions
front-proxy-ca.crt,key  kubernetes-front-proxy-ca  For the front-end proxy

kubernetes-ca

openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -config openssl.cnf -subj "/CN=kubernetes-ca" -extensions v3_ca -out ca.crt -days 10000

etcd-ca

openssl genrsa -out etcd/ca.key 2048
openssl req -x509 -new -nodes -key etcd/ca.key -config openssl.cnf -subj "/CN=etcd-ca" -extensions v3_ca -out etcd/ca.crt -days 10000

front-proxy-ca

openssl genrsa -out front-proxy-ca.key 2048
openssl req -x509 -new -nodes -key front-proxy-ca.key -config openssl.cnf -subj "/CN=kubernetes-front-proxy-ca" -extensions v3_ca -out front-proxy-ca.crt -days 10000

Generate all the other certificates

Default CN                     Parent CA                  O (in Subject)  kind
kube-etcd                      etcd-ca                                    server, client
kube-etcd-peer                 etcd-ca                                    server, client
kube-etcd-healthcheck-client   etcd-ca                                    client
kube-apiserver-etcd-client     etcd-ca                    system:masters  client
kube-apiserver                 kubernetes-ca                              server
kube-apiserver-kubelet-client  kubernetes-ca              system:masters  client
front-proxy-client             kubernetes-front-proxy-ca                  client

Certificate paths

Default CN                    recommended key path         recommended cert path         command         key argument             cert argument
etcd-ca                                                    etcd/ca.crt                   kube-apiserver                           --etcd-cafile
etcd-client                   apiserver-etcd-client.key    apiserver-etcd-client.crt     kube-apiserver  --etcd-keyfile           --etcd-certfile
kubernetes-ca                                              ca.crt                        kube-apiserver                           --client-ca-file
kube-apiserver                apiserver.key                apiserver.crt                 kube-apiserver  --tls-private-key-file   --tls-cert-file
apiserver-kubelet-client                                   apiserver-kubelet-client.crt  kube-apiserver                           --kubelet-client-certificate
front-proxy-ca                                             front-proxy-ca.crt            kube-apiserver                           --requestheader-client-ca-file
front-proxy-client            front-proxy-client.key       front-proxy-client.crt        kube-apiserver  --proxy-client-key-file  --proxy-client-cert-file
etcd-ca                                                    etcd/ca.crt                   etcd                                     --trusted-ca-file, --peer-trusted-ca-file
kube-etcd                     etcd/server.key              etcd/server.crt               etcd            --key-file               --cert-file
kube-etcd-peer                etcd/peer.key                etcd/peer.crt                 etcd            --peer-key-file          --peer-cert-file
etcd-ca                                                    etcd/ca.crt                   etcdctl                                  --cacert
kube-etcd-healthcheck-client  etcd/healthcheck-client.key  etcd/healthcheck-client.crt   etcdctl         --key                    --cert

Generate each certificate

apiserver-etcd-client

openssl genrsa -out apiserver-etcd-client.key 2048
openssl req -new -key apiserver-etcd-client.key -subj "/CN=apiserver-etcd-client/O=system:masters" -out apiserver-etcd-client.csr
openssl x509 -in apiserver-etcd-client.csr -req -CA etcd/ca.crt -CAkey etcd/ca.key -CAcreateserial -extensions v3_req_etcd -extfile openssl.cnf -out apiserver-etcd-client.crt -days 10000

kube-etcd

openssl genrsa -out etcd/server.key 2048
openssl req -new -key etcd/server.key -subj "/CN=etcd-server" -out etcd/server.csr
openssl x509 -in etcd/server.csr -req -CA etcd/ca.crt -CAkey etcd/ca.key -CAcreateserial -extensions v3_req_etcd -extfile openssl.cnf -out etcd/server.crt -days 10000

kube-etcd-peer

openssl genrsa -out etcd/peer.key 2048
openssl req -new -key etcd/peer.key -subj "/CN=etcd-peer" -out etcd/peer.csr
openssl x509 -in etcd/peer.csr -req -CA etcd/ca.crt -CAkey etcd/ca.key -CAcreateserial -extensions v3_req_etcd -extfile openssl.cnf -out etcd/peer.crt -days 10000

kube-etcd-healthcheck-client

openssl genrsa -out etcd/healthcheck-client.key 2048
openssl req -new -key etcd/healthcheck-client.key -subj "/CN=etcd-client" -out etcd/healthcheck-client.csr
openssl x509 -in etcd/healthcheck-client.csr -req -CA etcd/ca.crt -CAkey etcd/ca.key -CAcreateserial -extensions v3_req_etcd -extfile openssl.cnf -out etcd/healthcheck-client.crt -days 10000

kube-apiserver

openssl genrsa -out apiserver.key 2048
openssl req -new -key apiserver.key -subj "/CN=kube-apiserver" -config openssl.cnf -out apiserver.csr
openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 10000 -extensions v3_req_apiserver -extfile openssl.cnf -out apiserver.crt

apiserver-kubelet-client

openssl genrsa -out  apiserver-kubelet-client.key 2048
openssl req -new -key apiserver-kubelet-client.key -subj "/CN=apiserver-kubelet-client/O=system:masters" -out apiserver-kubelet-client.csr
openssl x509 -req -in apiserver-kubelet-client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 10000 -extensions v3_req_client -extfile openssl.cnf -out apiserver-kubelet-client.crt

front-proxy-client

openssl genrsa -out  front-proxy-client.key 2048
openssl req -new -key front-proxy-client.key -subj "/CN=front-proxy-client" -out front-proxy-client.csr
openssl x509 -req -in front-proxy-client.csr -CA front-proxy-ca.crt -CAkey front-proxy-ca.key -CAcreateserial -days 10000 -extensions v3_req_client -extfile openssl.cnf -out front-proxy-client.crt

kube-scheduler 证书

openssl genrsa -out  kube-scheduler.key 2048
openssl req -new -key kube-scheduler.key -subj "/CN=system:kube-scheduler" -out kube-scheduler.csr
openssl x509 -req -in kube-scheduler.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 10000 -extensions v3_req_client -extfile openssl.cnf -out kube-scheduler.crt

sa.pub and sa.key

# generate an EC key pair for signing service-account tokens
openssl ecparam -name secp521r1 -genkey -noout -out sa.key
openssl ec -in sa.key -outform PEM -pubout -out sa.pub
openssl req -new -sha256 -key sa.key -subj "/CN=system:kube-controller-manager" -out sa.csr
openssl x509 -req -in sa.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 10000 -extensions v3_req_client -extfile openssl.cnf -out sa.crt

admin certificate

openssl genrsa -out  admin.key 2048
openssl req -new -key admin.key -subj "/CN=kubernetes-admin/O=system:masters" -out admin.csr
openssl x509 -req -in admin.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 10000 -extensions v3_req_client -extfile openssl.cnf -out admin.crt

Clean up the CSR and SRL files

find . -name "*.csr" -o -name "*.srl"|xargs  rm -f

Prepare the kubeconfig files

filename                 credential name             Default CN                      O (in Subject)
admin.conf               default-admin               kubernetes-admin                system:masters
controller-manager.conf  default-controller-manager  system:kube-controller-manager
scheduler.conf           default-manager             system:kube-scheduler

kube-controller-manager

CLUSTER_NAME="kubernetes"
KUBE_APISERVER="https://10.0.7.100:8443"
KUBE_USER="system:kube-controller-manager"
KUBE_CERT="sa"
KUBE_CONFIG="controller-manager.conf"

# Set the cluster parameters
kubectl config set-cluster ${CLUSTER_NAME} \
--certificate-authority=/etc/kubernetes/pki/ca.crt \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=/etc/kubernetes/${KUBE_CONFIG}

# Set the client credentials
kubectl config set-credentials ${KUBE_USER} \
--client-certificate=/etc/kubernetes/pki/${KUBE_CERT}.crt \
--client-key=/etc/kubernetes/pki/${KUBE_CERT}.key \
--embed-certs=true \
--kubeconfig=/etc/kubernetes/${KUBE_CONFIG}

# Set the context
kubectl config set-context ${KUBE_USER}@${CLUSTER_NAME} \
--cluster=${CLUSTER_NAME} \
--user=${KUBE_USER} \
--kubeconfig=/etc/kubernetes/${KUBE_CONFIG}

# Switch to the new context
kubectl config use-context ${KUBE_USER}@${CLUSTER_NAME} --kubeconfig=/etc/kubernetes/${KUBE_CONFIG}

# Inspect the generated kubeconfig
kubectl config view --kubeconfig=/etc/kubernetes/${KUBE_CONFIG}

kube-scheduler

CLUSTER_NAME="kubernetes"
KUBE_APISERVER="https://10.0.7.100:8443"
KUBE_USER="system:kube-scheduler"
KUBE_CERT="kube-scheduler"
KUBE_CONFIG="scheduler.conf"

# Set the cluster parameters
kubectl config set-cluster ${CLUSTER_NAME} \
--certificate-authority=/etc/kubernetes/pki/ca.crt \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=/etc/kubernetes/${KUBE_CONFIG}

# Set the client credentials
kubectl config set-credentials ${KUBE_USER} \
--client-certificate=/etc/kubernetes/pki/${KUBE_CERT}.crt \
--client-key=/etc/kubernetes/pki/${KUBE_CERT}.key \
--embed-certs=true \
--kubeconfig=/etc/kubernetes/${KUBE_CONFIG}

# Set the context
kubectl config set-context ${KUBE_USER}@${CLUSTER_NAME} \
--cluster=${CLUSTER_NAME} \
--user=${KUBE_USER} \
--kubeconfig=/etc/kubernetes/${KUBE_CONFIG}

# Switch to the new context
kubectl config use-context ${KUBE_USER}@${CLUSTER_NAME} --kubeconfig=/etc/kubernetes/${KUBE_CONFIG}

# Inspect the generated kubeconfig
kubectl config view --kubeconfig=/etc/kubernetes/${KUBE_CONFIG}

admin

CLUSTER_NAME="kubernetes"
KUBE_APISERVER="https://10.0.7.100:8443"
KUBE_USER="kubernetes-admin"
KUBE_CERT="admin"
KUBE_CONFIG="admin.conf"

# Set the cluster parameters
kubectl config set-cluster ${CLUSTER_NAME} \
--certificate-authority=/etc/kubernetes/pki/ca.crt \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=/etc/kubernetes/${KUBE_CONFIG}

# Set the client credentials
kubectl config set-credentials ${KUBE_USER} \
--client-certificate=/etc/kubernetes/pki/${KUBE_CERT}.crt \
--client-key=/etc/kubernetes/pki/${KUBE_CERT}.key \
--embed-certs=true \
--kubeconfig=/etc/kubernetes/${KUBE_CONFIG}

# Set the context
kubectl config set-context ${KUBE_USER}@${CLUSTER_NAME} \
--cluster=${CLUSTER_NAME} \
--user=${KUBE_USER} \
--kubeconfig=/etc/kubernetes/${KUBE_CONFIG}

# Switch to the new context
kubectl config use-context ${KUBE_USER}@${CLUSTER_NAME} --kubeconfig=/etc/kubernetes/${KUBE_CONFIG}

# Inspect the generated kubeconfig
kubectl config view --kubeconfig=/etc/kubernetes/${KUBE_CONFIG}

Distribute the certificates

Copy the kubeconfigs and certificates to the other master nodes:

scp  -r /etc/kubernetes root@k8s-master2:/etc
scp -r /etc/kubernetes root@k8s-master3:/etc

Configure HA

Configure Envoy on the master nodes

mkdir -p /etc/envoy
cat <<EOF> /etc/envoy/envoy.yaml
static_resources:
  listeners:
  - address:
      socket_address:
        address: 0.0.0.0
        port_value: 8443
    filter_chains:
    - filters:
      - name: envoy.tcp_proxy
        config:
          stat_prefix: ingress_tcp
          cluster: kube_apiserver
          access_log:
          - name: envoy.file_access_log
            config:
              path: /dev/stdout
  clusters:
  - name: kube_apiserver
    connect_timeout: 0.25s
    type: strict_dns
    lb_policy: round_robin
    hosts:
    - socket_address:
        address: 10.0.7.101
        port_value: 6443
    - socket_address:
        address: 10.0.7.102
        port_value: 6443
    - socket_address:
        address: 10.0.7.103
        port_value: 6443

admin:
  access_log_path: "/dev/null"
  address:
    socket_address:
      address: 0.0.0.0
      port_value: 8001
EOF

envoy unit file

cat<<EOF> /lib/systemd/system/envoy.service
[Unit]
Description=Envoy Container
After=docker.service
Requires=docker.service

[Service]
TimeoutStartSec=0
Restart=always
ExecStartPre=-/usr/bin/docker stop %n
ExecStartPre=-/usr/bin/docker rm %n
ExecStartPre=/usr/bin/docker pull envoyproxy/envoy:latest
ExecStart=/usr/bin/docker run --rm -v /etc/envoy/envoy.yaml:/etc/envoy/envoy.yaml --network host --name %n envoyproxy/envoy:latest

[Install]
WantedBy=multi-user.target
EOF

systemctl restart envoy.service
systemctl enable envoy.service
systemctl status envoy.service -l
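
To verify that the upstream hosts were picked up, you can query Envoy's admin interface on port 8001 (configured above); /clusters is a standard Envoy admin endpoint:

curl -s http://127.0.0.1:8001/clusters | grep kube_apiserver | head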

Configure keepalived on the master nodes

yum install -y keepalived

keepalived health-check script

cat <<'EOF'> /etc/keepalived/ha_check.sh
#!/bin/sh
VIRTUAL_IP=10.0.7.100

errorExit() {
    echo "*** $*" 1>&2
    exit 1
}

if ip addr | grep -q $VIRTUAL_IP ; then
    curl -s --max-time 2 --insecure https://${VIRTUAL_IP}:8443/ -o /dev/null || errorExit "Error GET https://${VIRTUAL_IP}:8443/"
fi
EOF

keepalived configuration file (tip: on each node you would normally remove that node's own IP from unicast_peer)

cat <<EOF> /etc/keepalived/keepalived.conf
vrrp_script ha-check {
    script "/bin/bash /etc/keepalived/ha_check.sh"
    interval 3
    weight -2
    fall 10
    rise 2
}

vrrp_instance k8s-vip {
    state BACKUP
    priority 101
    interface enp0s8
    virtual_router_id 47
    advert_int 3

    unicast_peer {
        10.0.7.101
        10.0.7.102
        10.0.7.103
    }

    virtual_ipaddress {
        10.0.7.100
    }

    track_script {
        ha-check
    }
}
EOF

systemctl restart keepalived.service
systemctl enable keepalived.service
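
After keepalived starts, the VIP should be held by exactly one master at a time; a quick check on each node:

ip addr show enp0s8 | grep 10.0.7.100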

Configure etcd

Install the etcd binaries

mkdir -p /var/lib/etcd
ETCD_VER=v3.3.10
wget https://storage.googleapis.com/etcd/${ETCD_VER}/etcd-${ETCD_VER}-linux-amd64.tar.gz
tar xf etcd-${ETCD_VER}-linux-amd64.tar.gz --strip-components=1 -C /usr/local/bin etcd-${ETCD_VER}-linux-amd64/{etcd,etcdctl}
mkdir -p /usr/lib/systemd/system/

Create the unit file and start etcd. On the other two nodes, change ETCD_NAME to etcd1 and etcd2 respectively, and set ETCD_IP to that node's own IP.

ETCD_NAME=etcd0
ETCD_IP="10.0.7.101"
ETCD_IPS=(10.0.7.101 10.0.7.102 10.0.7.103)

cat<<EOF> /usr/lib/systemd/system/etcd.service
[Unit]
Description=etcd
Documentation=https://coreos.com/etcd/docs/latest/
After=network.target

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd
ExecStart=/usr/local/bin/etcd \\
--name=${ETCD_NAME} \\
--data-dir=/var/lib/etcd \\
--listen-client-urls=https://127.0.0.1:2379,https://${ETCD_IP}:2379 \\
--advertise-client-urls=https://${ETCD_IP}:2379 \\
--listen-peer-urls=https://${ETCD_IP}:2380 \\
--initial-advertise-peer-urls=https://${ETCD_IP}:2380 \\
--cert-file=/etc/kubernetes/pki/etcd/server.crt \\
--key-file=/etc/kubernetes/pki/etcd/server.key \\
--client-cert-auth \\
--trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt \\
--peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt \\
--peer-key-file=/etc/kubernetes/pki/etcd/peer.key \\
--peer-client-cert-auth \\
--peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt \\
--initial-cluster=etcd0=https://${ETCD_IPS[0]}:2380,etcd1=https://${ETCD_IPS[1]}:2380,etcd2=https://${ETCD_IPS[2]}:2380 \\
--initial-cluster-token=my-etcd-token \\
--initial-cluster-state=new \\
--heartbeat-interval 1000 \\
--election-timeout 5000

Restart=always
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl restart etcd
systemctl enable etcd

Check the cluster health

etcdctl \
--cert-file /etc/kubernetes/pki/etcd/healthcheck-client.crt \
--key-file /etc/kubernetes/pki/etcd/healthcheck-client.key \
--ca-file /etc/kubernetes/pki/etcd/ca.crt \
--endpoints https://10.0.7.101:2379 cluster-health
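
The check above uses the etcdctl v2 API (the default in etcd v3.3). The same health check through the v3 API looks like this:

ETCDCTL_API=3 etcdctl \
--cacert /etc/kubernetes/pki/etcd/ca.crt \
--cert /etc/kubernetes/pki/etcd/healthcheck-client.crt \
--key /etc/kubernetes/pki/etcd/healthcheck-client.key \
--endpoints https://10.0.7.101:2379 endpoint health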

Configure the master components

Start kube-apiserver

NODE_IP="10.0.7.101"

cat <<EOF> /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
--authorization-mode=Node,RBAC \\
--enable-admission-plugins=Initializers,DefaultStorageClass,DefaultTolerationSeconds,LimitRanger,NamespaceLifecycle,NodeRestriction,PersistentVolumeClaimResize,ResourceQuota,ServiceAccount,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,Priority \\
--advertise-address=${NODE_IP} \\
--bind-address=${NODE_IP} \\
--insecure-port=0 \\
--secure-port=6443 \\
--allow-privileged=true \\
--apiserver-count=3 \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-path=/var/log/audit.log \\
--enable-swagger-ui=true \\
--storage-backend=etcd3 \\
--etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt \\
--etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt \\
--etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key \\
--etcd-servers=https://10.0.7.101:2379,https://10.0.7.102:2379,https://10.0.7.103:2379 \\
--event-ttl=1h \\
--enable-bootstrap-token-auth \\
--client-ca-file=/etc/kubernetes/pki/ca.crt \\
--kubelet-https \\
--kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt \\
--kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key \\
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \\
--runtime-config=api/all \\
--service-cluster-ip-range=10.96.0.0/12 \\
--service-node-port-range=30000-32767 \\
--service-account-key-file=/etc/kubernetes/pki/sa.pub \\
--tls-cert-file=/etc/kubernetes/pki/apiserver.crt \\
--tls-private-key-file=/etc/kubernetes/pki/apiserver.key \\
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt \\
--requestheader-username-headers=X-Remote-User \\
--requestheader-group-headers=X-Remote-Group \\
--requestheader-allowed-names=front-proxy-client \\
--requestheader-extra-headers-prefix=X-Remote-Extra- \\
--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt \\
--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key \\
--feature-gates=PodShareProcessNamespace=true \\
--v=2

Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-apiserver
systemctl restart kube-apiserver
systemctl status kube-apiserver -l

Start kube-controller-manager

cat<<EOF> /usr/lib/systemd/system/kube-controller-manager.service 
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \\
--allocate-node-cidrs=true \\
--kubeconfig=/etc/kubernetes/controller-manager.conf \\
--authentication-kubeconfig=/etc/kubernetes/controller-manager.conf \\
--authorization-kubeconfig=/etc/kubernetes/controller-manager.conf \\
--client-ca-file=/etc/kubernetes/pki/ca.crt \\
--cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt \\
--cluster-signing-key-file=/etc/kubernetes/pki/ca.key \\
--bind-address=127.0.0.1 \\
--leader-elect=true \\
--cluster-cidr=10.244.0.0/16 \\
--service-cluster-ip-range=10.96.0.0/12 \\
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt \\
--service-account-private-key-file=/etc/kubernetes/pki/sa.key \\
--root-ca-file=/etc/kubernetes/pki/ca.crt \\
--use-service-account-credentials=true \\
--controllers=*,bootstrapsigner,tokencleaner \\
--experimental-cluster-signing-duration=86700h \\
--feature-gates=RotateKubeletClientCertificate=true \\
--v=2
Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl restart kube-controller-manager
systemctl status kube-controller-manager -l

Start kube-scheduler

cat > /usr/lib/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-scheduler \\
--leader-elect=true \\
--kubeconfig=/etc/kubernetes/scheduler.conf \\
--address=127.0.0.1 \\
--v=2
Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-scheduler
systemctl restart kube-scheduler
systemctl status kube-scheduler -l

Verify that the components are healthy

export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl get cs
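
With all components healthy, the output should look roughly like:

NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}
etcd-2               Healthy   {"health": "true"}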

Configure bootstrap

Set up bootstrap by creating a bootstrap token

TOKEN_PUB=$(openssl rand -hex 3)
TOKEN_SECRET=$(openssl rand -hex 8)
BOOTSTRAP_TOKEN="${TOKEN_PUB}.${TOKEN_SECRET}"

kubectl -n kube-system create secret generic bootstrap-token-${TOKEN_PUB} \
--type 'bootstrap.kubernetes.io/token' \
--from-literal description="cluster bootstrap token" \
--from-literal token-id=${TOKEN_PUB} \
--from-literal token-secret=${TOKEN_SECRET} \
--from-literal usage-bootstrap-authentication=true \
--from-literal usage-bootstrap-signing=true

Create the bootstrap kubeconfig file. Note that this example points at a single apiserver (https://10.0.7.101:6443); in an HA setup you would normally point it at the VIP (https://10.0.7.100:8443), as in the earlier kubeconfigs.

CLUSTER_NAME="kubernetes"
KUBE_APISERVER="https://10.0.7.101:6443"
KUBE_USER="kubelet-bootstrap"
KUBE_CONFIG="bootstrap.conf"

# Set the cluster parameters
kubectl config set-cluster ${CLUSTER_NAME} \
--certificate-authority=/etc/kubernetes/pki/ca.crt \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=/etc/kubernetes/${KUBE_CONFIG}

# Set the context
kubectl config set-context ${KUBE_USER}@${CLUSTER_NAME} \
--cluster=${CLUSTER_NAME} \
--user=${KUBE_USER} \
--kubeconfig=/etc/kubernetes/${KUBE_CONFIG}

# Set the client credentials
kubectl config set-credentials ${KUBE_USER} \
--token=${BOOTSTRAP_TOKEN} \
--kubeconfig=/etc/kubernetes/${KUBE_CONFIG}

# Switch to the new context
kubectl config use-context ${KUBE_USER}@${CLUSTER_NAME} --kubeconfig=/etc/kubernetes/${KUBE_CONFIG}

# Inspect the generated kubeconfig
kubectl config view --kubeconfig=/etc/kubernetes/${KUBE_CONFIG}

Authorize kubelets to create CSRs

kubectl create clusterrolebinding kubeadm:kubelet-bootstrap \
--clusterrole system:node-bootstrapper --group system:bootstrappers

Approve CSR requests

Auto-approve all CSRs from the system:bootstrappers group

cat <<EOF | kubectl apply -f -
# Approve all CSRs for the group "system:bootstrappers"
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: auto-approve-csrs-for-group
subjects:
- kind: Group
  name: system:bootstrappers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
  apiGroup: rbac.authorization.k8s.io
EOF

Allow kubelets to renew their own certificates

cat <<EOF | kubectl apply -f -
# Approve renewal CSRs for the group "system:nodes"
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: auto-approve-renewals-for-nodes
subjects:
- kind: Group
  name: system:nodes
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
  apiGroup: rbac.authorization.k8s.io
EOF

Create the required ClusterRoles

cat <<EOF | kubectl apply -f -
# A ClusterRole which instructs the CSR approver to approve a user requesting
# node client credentials.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
rules:
- apiGroups: ["certificates.k8s.io"]
  resources: ["certificatesigningrequests/nodeclient"]
  verbs: ["create"]
---
# A ClusterRole which instructs the CSR approver to approve a node renewing its
# own client credentials.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
rules:
- apiGroups: ["certificates.k8s.io"]
  resources: ["certificatesigningrequests/selfnodeclient"]
  verbs: ["create"]
EOF

Configure the worker components

Copy ca.crt and bootstrap.conf to every node that will run as a worker:

mkdir -p /etc/kubernetes/pki
scp root@k8s-master1:/etc/kubernetes/pki/ca.crt /etc/kubernetes/pki/ca.crt
scp root@k8s-master1:/etc/kubernetes/bootstrap.conf /etc/kubernetes/bootstrap.conf

kubelet YAML configuration file

mkdir -p /var/lib/kubelet/
cat <<EOF> /var/lib/kubelet/config.yaml
address: 0.0.0.0
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
cgroupDriver: systemd
cgroupsPerQOS: true
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
configMapAndSecretChangeDetectionStrategy: Watch
containerLogMaxFiles: 5
containerLogMaxSize: 10Mi
contentType: application/vnd.kubernetes.protobuf
cpuCFSQuota: true
cpuCFSQuotaPeriod: 100ms
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
enableControllerAttachDetach: true
enableDebuggingHandlers: true
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 20s
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
iptablesDropBit: 15
iptablesMasqueradeBit: 14
kind: KubeletConfiguration
kubeAPIBurst: 10
kubeAPIQPS: 5
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeLeaseDurationSeconds: 40
nodeStatusReportFrequency: 1m0s
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
port: 10250
registryBurst: 10
registryPullQPS: 5
resolvConf: /etc/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
volumeStatsAggPeriod: 1m0s
EOF

Start kubelet

mkdir -p /usr/lib/systemd/system
mkdir -p /etc/kubernetes/manifests
cat > /usr/lib/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service

[Service]
ExecStart=/usr/local/bin/kubelet \\
--bootstrap-kubeconfig=/etc/kubernetes/bootstrap.conf \\
--kubeconfig=/etc/kubernetes/kubelet.conf \\
--config=/var/lib/kubelet/config.yaml \\
--cgroup-driver=systemd \\
--pod-infra-container-image=kuops/pause-amd64:3.1 \\
--allow-privileged=true \\
--network-plugin=cni \\
--cni-conf-dir=/etc/cni/net.d \\
--cni-bin-dir=/opt/cni/bin \\
--cert-dir=/etc/kubernetes/pki \\
--v=2

Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet
systemctl status kubelet -l
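
Back on a master, the kubelet's bootstrap CSR should be approved automatically (by the bindings created earlier) and the node should register:

kubectl get csr
kubectl get nodes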

Create the kube-proxy ServiceAccount

kubectl -n kube-system create serviceaccount kube-proxy

Create the kube-proxy ClusterRoleBinding

kubectl create clusterrolebinding kubeadm:node-proxier \
--clusterrole system:node-proxier \
--serviceaccount kube-system:kube-proxy

Create the kube-proxy kubeconfig

CLUSTER_NAME="kubernetes"
KUBE_APISERVER="https://10.0.7.100:8443"
KUBE_CONFIG="kube-proxy.conf"

SECRET=$(kubectl -n kube-system get sa/kube-proxy \
--output=jsonpath='{.secrets[0].name}')

JWT_TOKEN=$(kubectl -n kube-system get secret/$SECRET \
--output=jsonpath='{.data.token}' | base64 -d)

kubectl config set-cluster ${CLUSTER_NAME} \
--certificate-authority=/etc/kubernetes/pki/ca.crt \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=/etc/kubernetes/${KUBE_CONFIG}

kubectl config set-context ${CLUSTER_NAME} \
--cluster=${CLUSTER_NAME} \
--user=${CLUSTER_NAME} \
--kubeconfig=/etc/kubernetes/${KUBE_CONFIG}

kubectl config set-credentials ${CLUSTER_NAME} \
--token=${JWT_TOKEN} \
--kubeconfig=/etc/kubernetes/${KUBE_CONFIG}

kubectl config use-context ${CLUSTER_NAME} --kubeconfig=/etc/kubernetes/${KUBE_CONFIG}
kubectl config view --kubeconfig=/etc/kubernetes/${KUBE_CONFIG}

Fetch it on the other nodes:

scp root@k8s-master1:/etc/kubernetes/kube-proxy.conf /etc/kubernetes/kube-proxy.conf

kube-proxy YAML configuration

mkdir -p /var/lib/kube-proxy
cat <<EOF> /var/lib/kube-proxy/config.conf
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
clientConnection:
  acceptContentTypes: ""
  burst: 10
  contentType: application/vnd.kubernetes.protobuf
  kubeconfig: /etc/kubernetes/kube-proxy.conf
  qps: 5
clusterCIDR: "10.244.0.0/16"
configSyncPeriod: 15m0s
conntrack:
  max: null
  maxPerCore: 32768
  min: 131072
  tcpCloseWaitTimeout: 1h0m0s
  tcpEstablishedTimeout: 24h0m0s
enableProfiling: false
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: ""
iptables:
  masqueradeAll: true
  masqueradeBit: 14
  minSyncPeriod: 0s
  syncPeriod: 30s
ipvs:
  excludeCIDRs: null
  minSyncPeriod: 0s
  scheduler: ""
  syncPeriod: 30s
kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: "iptables"
nodePortAddresses: null
oomScoreAdj: -999
portRange: ""
resourceContainer: /kube-proxy
udpIdleTimeout: 250ms
EOF

Start kube-proxy

mkdir -p /var/lib/kube-proxy

cat > /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-proxy \\
--config=/var/lib/kube-proxy/config.conf \\
--conntrack-max=0 \\
--conntrack-max-per-core=0 \\
--v=2
Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-proxy
systemctl restart kube-proxy
systemctl status kube-proxy -l
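
To confirm kube-proxy is running in the intended mode, query its metrics endpoint on port 10249 (set via metricsBindAddress above); it should print "iptables":

curl http://127.0.0.1:10249/proxyMode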

Label the nodes

kubectl label node k8s-master1 node-role.kubernetes.io/master=""
kubectl label node k8s-master2 node-role.kubernetes.io/master=""
kubectl label node k8s-master3 node-role.kubernetes.io/master=""
kubectl label node k8s-worker1 node-role.kubernetes.io/worker=worker

Install flannel

Install the CNI plugins

curl -LO https://github.com/containernetworking/plugins/releases/download/v0.7.4/cni-plugins-amd64-v0.7.4.tgz
mkdir -p /opt/cni/bin
tar -xf cni-plugins-amd64-v0.7.4.tgz -C /opt/cni/bin

Deploy flannel

cat <<EOF> kube-flannel.yaml
---
apiVersion: extensions/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-amd64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: amd64
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: kuopsquay/coreos.flannel:v0.11.0-amd64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: kuopsquay/coreos.flannel:v0.11.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        - --iface=enp0s8
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
EOF


kubectl apply -f kube-flannel.yaml
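
Check that the flannel DaemonSet pods come up on every node:

kubectl -n kube-system get pods -l app=flannel -o wide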

Install CoreDNS

cat <<EOF> coredns.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            upstream
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      nodeSelector:
        beta.kubernetes.io/os: linux
      containers:
      - name: coredns
        image: coredns/coredns:1.3.1
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
      dnsPolicy: Default
      volumes:
      - name: config-volume
        configMap:
          name: coredns
          items:
          - key: Corefile
            path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.96.0.10
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP
EOF
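
Apply the manifest and confirm the CoreDNS pods become Ready:

kubectl apply -f coredns.yaml
kubectl -n kube-system get pods -l k8s-app=kube-dns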

kubectl setup

echo "source <(kubectl completion bash)" >> /etc/profile.d/kubernetes.sh
source /etc/profile.d/bash_completion.sh
source /etc/profile.d/kubernetes.sh
mkdir -p ~/.kube/
cp -f /etc/kubernetes/admin.conf ~/.kube/config

Install Helm

Install the helm binaries

yum install -y socat
curl -LO https://storage.googleapis.com/kubernetes-helm/helm-v2.12.3-linux-amd64.tar.gz
tar xf helm-v2.12.3-linux-amd64.tar.gz
mv linux-amd64/helm /usr/local/bin/
mv linux-amd64/tiller /usr/local/bin/

Initialize Helm

kubectl create serviceaccount tiller -n kube-system

kubectl create clusterrolebinding tiller \
--clusterrole=cluster-admin --serviceaccount=kube-system:tiller

helm init --upgrade --service-account tiller \
--skip-refresh -i kuops/tiller:v2.12.3 \
--stable-repo-url https://kuops.com/helm-charts-mirror

helm repo update

echo "source <(helm completion bash)" > /etc/profile.d/helm.sh
source /etc/profile.d/helm.sh
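
Verify that tiller is up and that both client and server versions are reported:

kubectl -n kube-system get pods -l app=helm
helm version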

Install Istio

curl -LO https://github.com/istio/istio/releases/download/1.0.6/istio-1.0.6-linux.tar.gz
tar xf istio-1.0.6-linux.tar.gz
cp istio-1.0.6/bin/istioctl /usr/local/bin

helm template istio-1.0.6/install/kubernetes/helm/istio --name istio \
--namespace istio-system \
--set global.mtls.enabled=true \
--set tracing.enabled=true \
--set servicegraph.enabled=true \
--set prometheus.enabled=true \
--set grafana.enabled=true > istio.yaml

kubectl create namespace istio-system
kubectl apply -f istio.yaml
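
Verify the Istio control-plane pods:

kubectl -n istio-system get pods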

Kubernetes log management

Add the following flags to each Kubernetes component. Note that the directory after --log-dir= is different for each component.

--logtostderr=false \
--log-dir=/var/log/kubernetes/kubelet \
--v=2
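
The log directories should exist before the components start; for example (adjust the list to the components running on each node):

mkdir -p /var/log/kubernetes/{kubelet,kube-proxy,kube-apiserver,kube-controller-manager,kube-scheduler}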

Rotate logs with logrotate

Create a file named kubelet under /etc/logrotate.d with the following content; make the corresponding changes for the other components.

/var/log/kubernetes/kubelet/kubelet.*.log.* {
    olddir /var/log/kubernetes/kubelet/logrotate
    rotate 7
    size 200M
    missingok
    compress
    nodelaycompress
    copytruncate
}
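
Note that olddir requires the target directory to exist, so create it beforehand:

mkdir -p /var/log/kubernetes/kubelet/logrotate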

Multiple NICs on Kubernetes hosts

A host running Kubernetes may have multiple NICs, and a few details deserve attention. With a kubeadm deployment many components bind to 0.0.0.0 by default, so they listen for requests on the IPs of every interface. The API server, however, must have --bind-address set to a concrete IP. --advertise-address is the IP address the apiserver advertises to members of the cluster, and the rest of the cluster must be able to reach it. If --advertise-address is empty, --bind-address is used, but only when --bind-address is not 0.0.0.0. If it were 0.0.0.0, which IP should the kubelet assume the apiserver has? That is exactly the problem.

[root@k8s-m1 ~]# kubectl get ep
NAME         ENDPOINTS                                       AGE
kubernetes   10.15.1.3:6443,10.15.1.4:6443,10.15.1.5:6443    119d

API server flag settings (a concrete IP address must be set):

--advertise-address=10.15.1.5 \
--bind-address=10.15.1.5 \

Flannel interface settings

Flannel defaults to the interface of the default route, so on a multi-NIC host it may pick the external NIC, leaving pods unable to reach each other. Pin it to the right interface by adding --iface=eth1 and --public-ip=10.15.1.4 to the container args, as follows:

containers:
- name: kube-flannel
  image: quay.io/coreos/flannel:v0.10.0-amd64
  command:
  - /opt/bin/flanneld
  args:
  - --ip-masq
  - --kube-subnet-mgr
  - --iface=eth1
  - --public-ip=10.15.1.4

Kubelet authentication/authorization

The apiserver-kubelet-client certificate is what the apiserver uses for reverse connections to the kubelet's port 10250 (for example when running kubectl logs). If it is misconfigured, kubectl logs will fail to fetch logs and report a certificate problem.

Kubelet authentication configuration section:

authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s

Kubelet authentication

By default, requests to the kubelet's HTTPS endpoint that are not rejected by any other configured authentication method are treated as anonymous requests, and are given a username of system:anonymous and a group of system:unauthenticated.

To disable anonymous access and send 401 Unauthorized responses to unauthenticated requests:

  • start the kubelet with the --anonymous-auth=false flag

To enable X509 client certificate authentication to the kubelet's HTTPS endpoint:

  • start the kubelet with the --client-ca-file flag, providing a CA bundle to verify client certificates with
  • start the apiserver with the --kubelet-client-certificate and --kubelet-client-key flags

To enable API bearer tokens (including service account tokens) to be used to authenticate to the kubelet's HTTPS endpoint:

  • ensure the authentication.k8s.io/v1beta1 API group is enabled in the API server
  • start the kubelet with the --authentication-token-webhook and --kubeconfig flags
  • the kubelet calls the TokenReview API on the configured API server to determine user information from bearer tokens

Kubelet authorization

Any request that authenticates successfully (including anonymous requests) is then authorized. The default authorization mode is AlwaysAllow, which allows all requests.

There are many possible reasons to subdivide access to the kubelet API:

  • anonymous auth is enabled, but anonymous users' ability to call the kubelet API should be limited
  • bearer token auth is enabled, but API users' (like service accounts) ability to call the kubelet API should be limited
  • client certificate auth is enabled, but only some of the client certificates signed by the configured CA should be allowed to use the kubelet API

To subdivide access to the kubelet API, delegate authorization to the API server:

  • ensure the authorization.k8s.io/v1beta1 API group is enabled in the API server
  • start the kubelet with the --authorization-mode=Webhook and --kubeconfig flags (with --authorization-mode=Webhook, RBAC authorization for port 10250 is delegated to the apiserver)
  • the kubelet calls the SubjectAccessReview API on the configured API server to determine whether each request is authorized

The kubelet authorizes API requests using the same request attributes approach as the apiserver.

The verb is determined from the HTTP verb of the incoming request:

HTTP verb  request verb
POST       create
GET, HEAD  get
PUT        update
PATCH      patch
DELETE     delete

The resource and subresource are determined from the path of the incoming request:

Kubelet API  resource  subresource
/stats/*     nodes     stats
/metrics/*   nodes     metrics
/logs/*      nodes     log
/spec/*      nodes     spec
all others   nodes     proxy

The namespace and API group attributes are always the empty string, and the resource name is always the name of the kubelet's Node API object.

When running in this mode, ensure the user identified by the --kubelet-client-certificate and --kubelet-client-key flags passed to the apiserver is authorized for the following attributes:

verb=*, resource=nodes, subresource=proxy
verb=*, resource=nodes, subresource=stats
verb=*, resource=nodes, subresource=log
verb=*, resource=nodes, subresource=spec
verb=*, resource=nodes, subresource=metrics
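
In practice, the simplest way to grant exactly these attributes is to bind the built-in system:kubelet-api-admin ClusterRole (which covers the verbs and subresources above) to the user matching the CN of the apiserver's kubelet client certificate. In this guide that CN is apiserver-kubelet-client, which already has full access through the system:masters group, so the binding below is only needed if you issue that certificate without the system:masters organization:

kubectl create clusterrolebinding kubelet-api-admin \
--clusterrole system:kubelet-api-admin \
--user apiserver-kubelet-client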
