Installing a Kubernetes (k8s) Cluster on Ubuntu with kubeadm

Install Docker

# Install
sudo apt-get install -y docker.io
# Start docker
sudo systemctl start docker
# Enable docker at boot
sudo systemctl enable docker

Add your user to the docker group so docker can be used without sudo (log out and back in for the change to take effect):

sudo gpasswd -a {username} docker
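To confirm the group change works without logging out, you can start a subshell with the new group applied and run a docker command without sudo:

newgrp docker   # subshell with the docker group active
docker info     # should now succeed without sudo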

Configure a registry mirror


sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://52msq4gm.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
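To verify that Docker picked up the mirror, check the Registry Mirrors section of docker info:

docker info | grep -A1 "Registry Mirrors"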

Install Kubernetes

Preparation

  1. Kubernetes uses hostnames to tell the nodes of a cluster apart, so every node's hostname must be unique; change it if necessary:

# Set the file's single line to the new name, e.g. master
sudo vi /etc/hostname
sudo sysctl kernel.hostname=master   # apply immediately, without rebooting
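Afterwards, hostname should report the new name:

hostname   # prints: master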
  2. Since this learning environment uses Docker as the underlying runtime, the cgroup driver needs to be changed to systemd:

sudo vim /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": ["https://52msq4gm.mirror.aliyuncs.com"]
}
sudo systemctl restart docker   # restart so the new driver takes effect
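Docker should now report the new driver:

docker info | grep -i "cgroup driver"   # expect: Cgroup Driver: systemd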
  3. To let Kubernetes inspect and forward network traffic, enable the br_netfilter kernel module and adjust the related iptables settings:
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
sudo modprobe br_netfilter   # load the module now rather than waiting for a reboot

# A drop-in file under /etc/sysctl.d/ is better than modifying /etc/sysctl.conf
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

sudo sysctl --system
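To check that everything is in place:

lsmod | grep br_netfilter                                        # module is loaded
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward    # both should be 1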
  4. Turn off the Linux swap partition; Kubernetes performs poorly with swap, and by default the kubelet refuses to start while swap is enabled:
sudo swapoff -a
sudo sed -ri '/\sswap\s/s/^#?/#/' /etc/fstab
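Verify with free; the Swap line should show all zeros:

free -h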

Install kubeadm

  1. Install kubeadm, kubelet, and kubectl:
sudo apt install -y apt-transport-https ca-certificates curl

curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -

cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF

sudo apt update
sudo apt install -y kubeadm=1.23.3-00 kubelet=1.23.3-00 kubectl=1.23.3-00
# Verify the installation
kubeadm version
kubectl version --client
# Pin all three packages so an accidental upgrade cannot cause a version mismatch
sudo apt-mark hold kubeadm kubelet kubectl
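apt-mark can confirm the hold:

sudo apt-mark showhold   # should list kubeadm, kubelet, kubectl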
  2. Download the required component images.

# List the component image names for the target version
kubeadm config images list --kubernetes-version v1.23.3
k8s.gcr.io/kube-apiserver:v1.23.3
k8s.gcr.io/kube-controller-manager:v1.23.3
k8s.gcr.io/kube-scheduler:v1.23.3
k8s.gcr.io/kube-proxy:v1.23.3
k8s.gcr.io/pause:3.6
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/coredns/coredns:v1.8.6

Pre-pull the images from an Aliyun mirror and re-tag them with the names kubeadm expects:

repo=registry.aliyuncs.com/google_containers
for name in $(kubeadm config images list --kubernetes-version v1.23.3); do
    src_name=${name#k8s.gcr.io/}      # strip the k8s.gcr.io/ registry prefix
    src_name=${src_name#coredns/}     # the mirror has no extra coredns/ path segment
    docker pull $repo/$src_name
    docker tag $repo/$src_name $name  # re-tag with the name kubeadm expects
    docker rmi $repo/$src_name        # drop the mirror tag
done
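Afterwards the images should be present locally under their k8s.gcr.io names:

docker images | grep k8s.gcr.io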

Install the master node

  1. Initialize the cluster with kubeadm:

# If you want to reach the cluster over the public internet, set
# --apiserver-cert-extra-sans to your public IP.
sudo kubeadm init \
    --pod-network-cidr=10.10.0.0/16 \
    --apiserver-advertise-address=192.168.0.89 \
    --apiserver-cert-extra-sans=121.121.121.121 \
    --kubernetes-version=v1.23.3
  2. When initialization succeeds, kubeadm prints a short list of commands; run them as shown:
 mkdir -p $HOME/.kube
 sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
 sudo chown $(id -u):$(id -g) $HOME/.kube/config
 
 # Alternatively, as root:
 export KUBECONFIG=/etc/kubernetes/admin.conf
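kubectl should now reach the API server:

kubectl cluster-info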
  3. Other nodes need the token and the CA certificate hash from this command to join the cluster, so be sure to copy it and keep it safe:
kubeadm join 192.168.0.89:6443 --token ki6bfe.9tzzeigubxdni7wj \
    --discovery-token-ca-cert-hash sha256:26267dcb0eb6033f6d0e613eb266056956bd42901fcd664acd05e1dd6443bbd4
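The bootstrap token expires after 24 hours by default. If it is lost or expired, you can generate a fresh join command on the master:

kubeadm token create --print-join-command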
  4. At this point the node status is NotReady, because no network plugin has been installed yet:
$ kubectl get node
NAME     STATUS     ROLES                  AGE   VERSION
hw-dev   NotReady   control-plane,master   13m   v1.23.3

Install the Flannel network plugin

Flannel's official manifest only needs a small tweak; it lives at https://github.com/flannel-io/flannel/blob/master/Documentation/kube-flannel.yml . In the "net-conf.json" field, change Network to the address range you passed to kubeadm via --pod-network-cidr. For example:

net-conf.json: |
    {
      "Network": "10.10.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
The complete manifest, as of 2022-08-15, looks like this:
---
kind: Namespace
apiVersion: v1
metadata:
  name: kube-flannel
  labels:
    pod-security.kubernetes.io/enforce: privileged
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
  - kind: ServiceAccount
    name: flannel
    namespace: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-flannel
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/os
                    operator: In
                    values:
                      - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
        - operator: Exists
          effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
        - name: install-cni-plugin
          #image: flannelcni/flannel-cni-plugin:v1.1.0 for ppc64le and mips64le (dockerhub limitations may apply)
          image: docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
          command:
            - cp
          args:
            - -f
            - /flannel
            - /opt/cni/bin/flannel
          volumeMounts:
            - name: cni-plugin
              mountPath: /opt/cni/bin
        - name: install-cni
          #image: flannelcni/flannel:v0.20.1 for ppc64le and mips64le (dockerhub limitations may apply)
          image: docker.io/rancher/mirrored-flannelcni-flannel:v0.20.1
          command:
            - cp
          args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
          volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      containers:
        - name: kube-flannel
          #image: flannelcni/flannel:v0.20.1 for ppc64le and mips64le (dockerhub limitations may apply)
          image: docker.io/rancher/mirrored-flannelcni-flannel:v0.20.1
          command:
            - /opt/bin/flanneld
          args:
            - --ip-masq
            - --kube-subnet-mgr
          resources:
            requests:
              cpu: "100m"
              memory: "50Mi"
            limits:
              cpu: "100m"
              memory: "50Mi"
          securityContext:
            privileged: false
            capabilities:
              add: ["NET_ADMIN", "NET_RAW"]
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: EVENT_QUEUE_DEPTH
              value: "5000"
          volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
            - name: xtables-lock
              mountPath: /run/xtables.lock
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni-plugin
          hostPath:
            path: /opt/cni/bin
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
        - name: xtables-lock
          hostPath:
            path: /run/xtables.lock
            type: FileOrCreate

Apply the manifest:

kubectl apply -f kube-flannel.yml

After a short wait, the node status changes to Ready.
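You can watch the Flannel DaemonSet come up in the kube-flannel namespace created by the manifest, then re-check the node:

kubectl get pods -n kube-flannel
kubectl get node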

Install worker nodes

On each worker, repeat the preparation and kubeadm installation steps above, then run the join command saved earlier; once it finishes and the node settles, the worker's Kubernetes setup is complete.

kubeadm join 192.168.0.89:6443 --token e074xs.2vz38d7g49f592x8 \
        --discovery-token-ca-cert-hash sha256:2f9cd0471982a71c9842efad15bd1da9e8caadf795eaf7dc5e311c10afceb255
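Back on the master, the new worker should appear in the node list, turning Ready once its Flannel pod is running:

kubectl get nodes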