Building a Kubernetes Cluster on Raspberry Pi: A Step-by-Step Guide


Environment

  • 3× Raspberry Pi 4B with 8 GB RAM
  • Static IPs configured; in this article they are 192.168.10.201, 192.168.10.202 and 192.168.10.203 (reference)
  • A virtual IP (VIP) 192.168.10.100 configured on one node ([reference](https://zhuanlan.zhihu.com/p/371401849)); this article uses keepalived address failover for high availability

This article uses Kubernetes 1.19. Startup flags change between versions, so other versions will very likely fail to run with these configurations.
The cluster is deployed from binaries, which is fairly tedious; download the binaries from the official site first.

Preparing the certificates

Note: the certificates only need to be created once; all nodes use the same certificates.

CA root certificate

sudo openssl genrsa -out ca.key 2048
sudo openssl req -x509 -new -nodes -key ca.key -subj "/CN=192.168.10.203" -days 36500 -out ca.crt

The "/CN" value in -subj is the hostname or IP address.

Save the CA certificate and key under /etc/kubernetes/pki.
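
A minimal sketch of putting them in place, assuming ca.crt and ca.key were generated in the current directory:

sudo mkdir -p /etc/kubernetes/pki
sudo cp ca.crt ca.key /etc/kubernetes/pki/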

etcd certificates

Create the etcd_ssl.cnf file

sudo vim etcd_ssl.cnf
[ req ]
req_extensions = v3_req
distinguished_name = req_distinguished_name

[ req_distinguished_name ]

[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names

[ alt_names ]
IP.1 = 192.168.10.201
IP.2 = 192.168.10.202
IP.3 = 192.168.10.203
  • Server certificate
sudo openssl genrsa -out etcd_server.key 2048

sudo openssl req -new -key etcd_server.key -config etcd_ssl.cnf -subj "/CN=etcd-server" -out etcd_server.csr

sudo openssl x509 -req -in etcd_server.csr -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -days 36500 -extensions v3_req -extfile etcd_ssl.cnf -out etcd_server.crt
  • Client certificate
sudo openssl genrsa -out etcd_client.key 2048

sudo openssl req -new -key etcd_client.key -config etcd_ssl.cnf -subj "/CN=etcd-client" -out etcd_client.csr

sudo openssl x509 -req -in etcd_client.csr -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -days 36500 -extensions v3_req -extfile etcd_ssl.cnf -out etcd_client.crt

Save the etcd .crt and .key files under /etc/etcd/pki.
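
A sketch of placing the files and distributing them to the other nodes (all nodes use the same certificates; the pi user and home directory in the scp lines are placeholders):

sudo mkdir -p /etc/etcd/pki
sudo cp etcd_server.crt etcd_server.key etcd_client.crt etcd_client.key /etc/etcd/pki/
# the same files (plus the CA from the previous step) are needed on the other two nodes;
# copy them over and move them into the same directories there
scp etcd_*.crt etcd_*.key pi@192.168.10.202:~
scp etcd_*.crt etcd_*.key pi@192.168.10.203:~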

apiserver certificate

  • master_ssl.cnf
[ req ]
req_extensions = v3_req
distinguished_name = req_distinguished_name

[ req_distinguished_name ]

[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names

[ alt_names ]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster.local
DNS.5 = k8s-1
DNS.6 = k8s-2
DNS.7 = k8s-3
IP.1 = 169.169.0.1
IP.2 = 192.168.10.201
IP.3 = 192.168.10.202
IP.4 = 192.168.10.203
IP.5 = 192.168.10.100
  • Generate the certificate
sudo openssl genrsa -out apiserver.key 2048

sudo openssl req -new -key apiserver.key -config master_ssl.cnf -subj "/CN=192.168.10.203" -out apiserver.csr

sudo openssl x509 -req -in apiserver.csr -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -days 36500 -extensions v3_req -extfile master_ssl.cnf -out apiserver.crt
  • Save the certificates under /etc/kubernetes/pki

Client certificate

sudo openssl genrsa -out client.key 2048

sudo openssl req -new -key client.key -subj "/CN=admin" -out client.csr

sudo openssl x509 -req -in client.csr -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -days 36500 -out client.crt
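
The kubeconfig in a later section references these files under /etc/kubernetes/pki, so copy them there as well (a sketch):

sudo cp client.crt client.key /etc/kubernetes/pki/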

etcd

Download the etcd binaries

Download link:
etcd-3.5.2
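
A download sketch for arm64 (the asset name follows etcd's release naming convention; verify it against the releases page):

wget https://github.com/etcd-io/etcd/releases/download/v3.5.2/etcd-v3.5.2-linux-arm64.tar.gz
tar -xzf etcd-v3.5.2-linux-arm64.tar.gz
cd etcd-v3.5.2-linux-arm64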

sudo cp etcd /usr/bin
sudo cp etcdctl /usr/bin/

systemd file

sudo vim /usr/lib/systemd/system/etcd.service

Add the following content:

[Unit]
Description=etcd key-value store
Documentation=https://github.com/etcd-io/etcd
After=network.target

[Service]
Type=simple
User=root
Restart=on-failure
RestartSec=5s
EnvironmentFile=/etc/etcd/etcd.conf
ExecStart=/usr/bin/etcd

[Install]
WantedBy=multi-user.target

Environment configuration

sudo vim /etc/etcd/etcd.conf

Set the corresponding environment variables:

# Node 1 configuration
ETCD_NAME=etcd1
ETCD_DATA_DIR=/etc/etcd/data

ETCD_CERT_FILE=/etc/etcd/pki/etcd_server.crt
ETCD_KEY_FILE=/etc/etcd/pki/etcd_server.key
ETCD_TRUSTED_CA_FILE=/etc/kubernetes/pki/ca.crt
ETCD_CLIENT_CERT_AUTH=true
ETCD_LISTEN_CLIENT_URLS=https://192.168.10.201:2379
ETCD_ADVERTISE_CLIENT_URLS=https://192.168.10.201:2379
ETCD_PEER_CERT_FILE=/etc/etcd/pki/etcd_server.crt
ETCD_PEER_KEY_FILE=/etc/etcd/pki/etcd_server.key
ETCD_PEER_TRUSTED_CA_FILE=/etc/kubernetes/pki/ca.crt
ETCD_LISTEN_PEER_URLS=https://192.168.10.201:2380
ETCD_INITIAL_ADVERTISE_PEER_URLS=https://192.168.10.201:2380

ETCD_INITIAL_CLUSTER_TOKEN=etcd-cluster
ETCD_INITIAL_CLUSTER="etcd1=https://192.168.10.201:2380,etcd2=https://192.168.10.202:2380,etcd3=https://192.168.10.203:2380"
ETCD_INITIAL_CLUSTER_STATE=new

# Node 2 configuration
ETCD_NAME=etcd2
ETCD_DATA_DIR=/etc/etcd/data

ETCD_CERT_FILE=/etc/etcd/pki/etcd_server.crt
ETCD_KEY_FILE=/etc/etcd/pki/etcd_server.key
ETCD_TRUSTED_CA_FILE=/etc/kubernetes/pki/ca.crt
ETCD_CLIENT_CERT_AUTH=true
ETCD_LISTEN_CLIENT_URLS=https://192.168.10.202:2379
ETCD_ADVERTISE_CLIENT_URLS=https://192.168.10.202:2379
ETCD_PEER_CERT_FILE=/etc/etcd/pki/etcd_server.crt
ETCD_PEER_KEY_FILE=/etc/etcd/pki/etcd_server.key
ETCD_PEER_TRUSTED_CA_FILE=/etc/kubernetes/pki/ca.crt
ETCD_LISTEN_PEER_URLS=https://192.168.10.202:2380
ETCD_INITIAL_ADVERTISE_PEER_URLS=https://192.168.10.202:2380

ETCD_INITIAL_CLUSTER_TOKEN=etcd-cluster
ETCD_INITIAL_CLUSTER="etcd1=https://192.168.10.201:2380,etcd2=https://192.168.10.202:2380,etcd3=https://192.168.10.203:2380"
ETCD_INITIAL_CLUSTER_STATE=new

# Node 3 configuration
ETCD_NAME=etcd3
ETCD_DATA_DIR=/etc/etcd/data

ETCD_CERT_FILE=/etc/etcd/pki/etcd_server.crt
ETCD_KEY_FILE=/etc/etcd/pki/etcd_server.key
ETCD_TRUSTED_CA_FILE=/etc/kubernetes/pki/ca.crt
ETCD_CLIENT_CERT_AUTH=true
ETCD_LISTEN_CLIENT_URLS=https://192.168.10.203:2379
ETCD_ADVERTISE_CLIENT_URLS=https://192.168.10.203:2379
ETCD_PEER_CERT_FILE=/etc/etcd/pki/etcd_server.crt
ETCD_PEER_KEY_FILE=/etc/etcd/pki/etcd_server.key
ETCD_PEER_TRUSTED_CA_FILE=/etc/kubernetes/pki/ca.crt
ETCD_LISTEN_PEER_URLS=https://192.168.10.203:2380
ETCD_INITIAL_ADVERTISE_PEER_URLS=https://192.168.10.203:2380

ETCD_INITIAL_CLUSTER_TOKEN=etcd-cluster
ETCD_INITIAL_CLUSTER="etcd1=https://192.168.10.201:2380,etcd2=https://192.168.10.202:2380,etcd3=https://192.168.10.203:2380"
ETCD_INITIAL_CLUSTER_STATE=new

Start etcd

sudo systemctl start etcd.service
sudo systemctl enable etcd.service

Verify that the cluster is healthy:

sudo etcdctl --cacert=/etc/kubernetes/pki/ca.crt --cert=/etc/etcd/pki/etcd_client.crt --key=/etc/etcd/pki/etcd_client.key --endpoints=https://192.168.10.201:2379,https://192.168.10.202:2379,https://192.168.10.203:2379 endpoint health
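
To also confirm that all three members joined the cluster, member list can be run with the same certificate flags:

sudo etcdctl --cacert=/etc/kubernetes/pki/ca.crt --cert=/etc/etcd/pki/etcd_client.crt --key=/etc/etcd/pki/etcd_client.key --endpoints=https://192.168.10.201:2379 member list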

apiserver

systemd

  • /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
Type=simple
User=root
Restart=on-failure
RestartSec=5s
EnvironmentFile=/etc/kubernetes/apiserver
ExecStart=/usr/bin/kube-apiserver $KUBE_API_ARGS

[Install]
WantedBy=multi-user.target
  • env file: /etc/kubernetes/apiserver
KUBE_API_ARGS="--insecure-port=0 \
--secure-port=6443 \
--tls-cert-file=/etc/kubernetes/pki/apiserver.crt \
--tls-private-key-file=/etc/kubernetes/pki/apiserver.key \
--client-ca-file=/etc/kubernetes/pki/ca.crt \
--apiserver-count=3 \
--endpoint-reconciler-type=master-count \
--etcd-servers=https://192.168.10.201:2379,https://192.168.10.202:2379,https://192.168.10.203:2379 \
--etcd-cafile=/etc/kubernetes/pki/ca.crt \
--etcd-certfile=/etc/etcd/pki/etcd_client.crt \
--etcd-keyfile=/etc/etcd/pki/etcd_client.key \
--service-cluster-ip-range=169.169.0.0/16 \
--service-node-port-range=30000-32767 \
--allow-privileged=true \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--v=0"
  • Start
sudo systemctl start kube-apiserver.service
sudo systemctl enable kube-apiserver.service
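
A quick health check against a single apiserver (a sketch using the client certificate created earlier; no --authorization-mode is set above, so the default applies and the request should return ok):

curl --cacert /etc/kubernetes/pki/ca.crt \
  --cert /etc/kubernetes/pki/client.crt \
  --key /etc/kubernetes/pki/client.key \
  https://192.168.10.203:6443/healthz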

Client configuration file

kubeconfig

Save the following to /etc/kubernetes/kubeconfig. (192.168.10.100:9443 is the HAProxy frontend on the keepalived VIP, configured in the sections below.)

apiVersion: v1
kind: Config
clusters:
  - name: default
    cluster:
      server: https://192.168.10.100:9443
      certificate-authority: /etc/kubernetes/pki/ca.crt
users:
  - name: admin
    user:
      client-certificate: /etc/kubernetes/pki/client.crt
      client-key: /etc/kubernetes/pki/client.key
contexts:
  - name: default
    context:
      cluster: default
      user: admin
current-context: default

kube-controller-manager

  • /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/etc/kubernetes/controller-manager
ExecStart=/usr/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_ARGS
Restart=always

[Install]
WantedBy=multi-user.target
  • /etc/kubernetes/controller-manager
KUBE_CONTROLLER_MANAGER_ARGS="--kubeconfig=/etc/kubernetes/kubeconfig \
--leader-elect=true \
--service-cluster-ip-range=169.169.0.0/16 \
--allocate-node-cidrs=true \
--cluster-cidr=10.244.0.0/16 \
--service-account-private-key-file=/etc/kubernetes/pki/apiserver.key \
--root-ca-file=/etc/kubernetes/pki/ca.crt \
--log-dir=/var/log/kubernetes \
--logtostderr=false \
--v=0"
  • Start
sudo systemctl start kube-controller-manager.service
sudo systemctl enable kube-controller-manager.service

kube-scheduler

  • /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/etc/kubernetes/scheduler
ExecStart=/usr/bin/kube-scheduler $KUBE_SCHEDULER_ARGS
Restart=always

[Install]
WantedBy=multi-user.target
  • /etc/kubernetes/scheduler
KUBE_SCHEDULER_ARGS="--kubeconfig=/etc/kubernetes/kubeconfig \
--leader-elect=true \
--log-dir=/var/log/kubernetes \
--logtostderr=false \
--v=0"
  • Start
sudo systemctl start kube-scheduler.service
sudo systemctl enable kube-scheduler.service
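
At this point the control plane can be checked by pointing kubectl directly at one apiserver (the VIP in the kubeconfig only becomes reachable after the HAProxy and keepalived sections below); componentstatuses is deprecated in 1.19 but still works:

sudo kubectl --kubeconfig=/etc/kubernetes/kubeconfig --server=https://192.168.10.203:6443 get componentstatuses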

HAProxy

  • /usr/local/etc/haproxy/haproxy.cfg
global
	log		127.0.0.1 local2
	maxconn		4096
	daemon
	stats socket	/var/lib/haproxy/stats

defaults
	mode		http
	log		global
	option		httplog
	option		dontlognull
	option		http-server-close
	option		forwardfor	except 127.0.0.0/8
	option		redispatch
	retries		3
	timeout http-request	10s
	timeout queue		1m
	timeout connect		10s
	timeout client		1m
	timeout server		1m
	timeout http-keep-alive	10s
	timeout check		10s
	maxconn			3000

frontend kube-apiserver
	mode		tcp
	bind		*:9443
	option		tcplog
	default_backend	kube-apiserver

listen stats
	mode		http
	bind		*:8888
	stats auth	graydove:123456
	stats refresh	5s
	stats realm	HAProxy\ Statistics
	stats uri	/stats
	log		127.0.0.1 local3 err

backend kube-apiserver
	mode	tcp
	balance	roundrobin
	server k8s-master1 192.168.10.201:6443 check
	server k8s-master2 192.168.10.202:6443 check
	server k8s-master3 192.168.10.203:6443 check

Run the haproxy:2.5.5 image with Docker:

sudo docker run -itd --name k8s-haproxy \
--net=host \
--restart=always \
-v /usr/local/etc/haproxy/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro \
haproxy:2.5.5
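
To verify HAProxy is up, check that port 9443 is listening and that the stats page responds (the credentials come from the config above):

ss -lnt | grep 9443
curl -u graydove:123456 http://192.168.10.201:8888/stats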

keepalived

keepalived.conf

master

! Configuration File for keepalived

global_defs {
  router_id LVS_1
}

vrrp_script checkhaproxy {
  script "/usr/bin/check-haproxy.sh"
  interval 2
  weight -30
}

vrrp_instance VI_1 {
  state MASTER
  interface eth0
  virtual_router_id 51
  priority 100
  advert_int 1

  virtual_ipaddress {
    192.168.10.100/24 dev eth0
  }

  authentication {
    auth_type PASS
    auth_pass password
  }

  track_script {
    checkhaproxy
  }
}

backup

! Configuration File for keepalived

global_defs {
  router_id LVS_2
}

vrrp_script checkhaproxy {
  script "/usr/bin/check-haproxy.sh"
  interval 2
  weight -30
}

vrrp_instance VI_1 {
  state BACKUP
  interface eth0
  virtual_router_id 51
  priority 100
  advert_int 1

  virtual_ipaddress {
    192.168.10.100/24 dev eth0
  }

  authentication {
    auth_type PASS
    auth_pass password
  }

  track_script {
    checkhaproxy
  }
}

The check-haproxy.sh script

#!/bin/bash

count=`netstat -apn | grep 9443 | wc -l`

if [ $count -gt 0 ]; then
    exit 0
else
    exit 1
fi
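
Make the script executable so keepalived can run it; the docker run command below bind-mounts it to /usr/bin/check-haproxy.sh inside the container:

chmod +x check-haproxy.sh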

Start with Docker

The osixia/keepalived:2.0.20-arm64v8 image does not work: it is actually an amd64 build, as pointed out in an issue on the project.

sudo docker pull linkvt/osixia_keepalived:stable
sudo docker run -itd --name k8s-keepalived \
  --restart=always \
  --net=host \
  --cap-add=NET_ADMIN --cap-add=NET_BROADCAST --cap-add=NET_RAW \
  -v ${PWD}/keepalived.conf:/container/service/keepalived/assets/keepalived.conf \
  -v ${PWD}/check-haproxy.sh:/usr/bin/check-haproxy.sh \
  linkvt/osixia_keepalived:stable --copy-service
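
Once keepalived is running on both nodes, the VIP should show up on the MASTER node's eth0:

ip addr show eth0 | grep 192.168.10.100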

Node components

kubelet

  • /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service

[Service]
EnvironmentFile=/etc/kubernetes/kubelet
ExecStart=/usr/bin/kubelet $KUBELET_ARGS
Restart=always

[Install]
WantedBy=multi-user.target
  • /etc/kubernetes/kubelet
KUBELET_ARGS="--kubeconfig=/etc/kubernetes/kubeconfig \
--config=/etc/kubernetes/kubelet.config \
--hostname-override=192.168.10.203 \
--network-plugin=cni \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--v=0"
  • /etc/kubernetes/kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
cgroupDriver: systemd
clusterDNS: ["169.169.100.100"]
clusterDomain: cluster.local
authentication:
  anonymous:
    enabled: true
  • kubelet requires swap to be disabled: run sudo swapoff /var/swap, and add it to /etc/rc.local so it is applied at boot (see the sketch after the start commands below)

  • Start

sudo systemctl start kubelet.service
sudo systemctl enable kubelet.service
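
A minimal /etc/rc.local sketch for the swap step mentioned above (rc.local handling varies by distribution; make sure the file is executable, e.g. sudo chmod +x /etc/rc.local):

#!/bin/sh -e
# disable swap at boot so kubelet can start
swapoff /var/swap
exit 0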

kube-proxy

  • /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
EnvironmentFile=/etc/kubernetes/proxy
ExecStart=/usr/bin/kube-proxy $KUBE_PROXY_ARGS
Restart=always

[Install]
WantedBy=multi-user.target
  • /etc/kubernetes/proxy
KUBE_PROXY_ARGS="--kubeconfig=/etc/kubernetes/kubeconfig \
--hostname-override=192.168.10.203 \
--cluster-cidr=10.244.0.0/16 \
--proxy-mode=iptables \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--v=0"
  • Start
sudo systemctl start kube-proxy.service
sudo systemctl enable kube-proxy.service

Check node status

sudo kubectl --kubeconfig=/etc/kubernetes/kubeconfig get nodes

CNI network plugin

Install either one of the CNI plugins below.

Before installing, download the CNI plugins and extract them into /opt/cni/bin.
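
A download sketch for arm64 (the version and asset name follow the containernetworking/plugins release naming convention; check the releases page for the current version):

sudo mkdir -p /opt/cni/bin
wget https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-arm64-v1.1.1.tgz
sudo tar -xzf cni-plugins-linux-arm64-v1.1.1.tgz -C /opt/cni/bin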

Calico plugin

One-line install command:

sudo kubectl --kubeconfig=/etc/kubernetes/kubeconfig apply -f https://docs.projectcalico.org/manifests/calico.yaml

Flannel plugin

---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
    - configMap
    - secret
    - emptyDir
    - hostPath
  allowedHostPaths:
    - pathPrefix: "/etc/cni/net.d"
    - pathPrefix: "/etc/kube-flannel"
    - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
    - min: 0
      max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
  - apiGroups: ['extensions']
    resources: ['podsecuritypolicies']
    verbs: ['use']
    resourceNames: ['psp.flannel.unprivileged']
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
  - kind: ServiceAccount
    name: flannel
    namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/os
                    operator: In
                    values:
                      - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
        - operator: Exists
          effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
        - name: install-cni-plugin
          #image: flannelcni/flannel-cni-plugin:v1.0.1 for ppc64le and mips64le (dockerhub limitations may apply)
          image: rancher/mirrored-flannelcni-flannel-cni-plugin:v1.0.1
          command:
            - cp
          args:
            - -f
            - /flannel
            - /opt/cni/bin/flannel
          volumeMounts:
            - name: cni-plugin
              mountPath: /opt/cni/bin
        - name: install-cni
          #image: flannelcni/flannel:v0.17.0 for ppc64le and mips64le (dockerhub limitations may apply)
          image: rancher/mirrored-flannelcni-flannel:v0.17.0
          command:
            - cp
          args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
          volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      containers:
        - name: kube-flannel
          #image: flannelcni/flannel:v0.17.0 for ppc64le and mips64le (dockerhub limitations may apply)
          image: rancher/mirrored-flannelcni-flannel:v0.17.0
          command:
            - /opt/bin/flanneld
          args:
            - --ip-masq
            - --kube-subnet-mgr
          resources:
            requests:
              cpu: "100m"
              memory: "50Mi"
            limits:
              cpu: "100m"
              memory: "50Mi"
          securityContext:
            privileged: false
            capabilities:
              add: ["NET_ADMIN", "NET_RAW"]
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
            - name: xtables-lock
              mountPath: /run/xtables.lock
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni-plugin
          hostPath:
            path: /opt/cni/bin
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
        - name: xtables-lock
          hostPath:
            path: /run/xtables.lock
            type: FileOrCreate

DNS

CoreDNS

  • Add the following two flags to kubelet (they correspond to the clusterDNS/clusterDomain fields already set in kubelet.config above):
--cluster-dns=169.169.100.100
--cluster-domain=cluster.local

Note: the CoreDNS version must match your Kubernetes version; see the document CoreDNS-k8s_version.md.

  • coredns.yaml
# Warning: This is a file generated from the base underscore template file: coredns.yaml.base

apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
  - apiGroups:
      - ""
    resources:
      - endpoints
      - services
      - pods
      - namespaces
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
  - kind: ServiceAccount
    name: coredns
    namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: |
    .:53 {
        errors
        health {
          lameduck 15s
        }
        ready
        log
        kubernetes cluster.local 169.169.0.0/16 {
          pods insecure
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf {
          prefer_udp
          policy sequential
        }
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      nodeSelector:
        beta.kubernetes.io/os: linux
      containers:
        - name: coredns
          image: coredns/coredns:1.7.0
          imagePullPolicy: IfNotPresent
          resources:
            limits:
              memory: 200Mi
            requests:
              cpu: 100m
              memory: 70Mi
          args: [ "-conf", "/etc/coredns/Corefile" ]
          volumeMounts:
            - name: config-volume
              mountPath: /etc/coredns
              readOnly: true
          ports:
            - containerPort: 53
              name: dns
              protocol: UDP
            - containerPort: 53
              name: dns-tcp
              protocol: TCP
            - containerPort: 9153
              name: metrics
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /health
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 60
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 5
          readinessProbe:
            httpGet:
              path: /ready
              port: 8181
              scheme: HTTP
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              add:
                - NET_BIND_SERVICE
              drop:
                - all
            readOnlyRootFilesystem: true
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
              - key: Corefile
                path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 169.169.100.100
  ports:
    - name: dns
      port: 53
      protocol: UDP
    - name: dns-tcp
      port: 53
      protocol: TCP
    - name: metrics
      port: 9153
      protocol: TCP

Testing DNS

busybox.yaml

apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  containers:
    - name: busybox
      image: gcr.io/google_containers/busybox
      command:
        - sleep
        - "3600"

Authentication

OIDC

  • Add the following flags to the apiserver

Note: if the email claim is used, email_verified must be true, i.e. the email address must already be verified.

--oidc-issuer-url=https://oauth.graydove.cn \
--oidc-client-id=xxx \
--oidc-username-claim=email \
--oidc-username-prefix=- \