Install Kubernetes 1.24

2023-11-21 16:20
Tags: kubernetes, install, 1.24


This article is a hands-on write-up of deploying a (possibly the newest on the web at the time) Kubernetes 1.24.1 cluster, following 王树森's walkthrough. Special thanks to https://space.bilibili.com/479602299

Evolution of the Kubernetes container runtime

The early Kubernetes runtime architecture was nowhere near this complicated: the kubelet created containers by calling the Docker daemon directly, and the Docker daemon called libcontainer to actually run them.

The big vendors decided that the runtime standard should not be controlled by Docker alone, so they banded together and created the Open Container Initiative (OCI), and persuaded Docker to wrap libcontainer into runC and donate it as the OCI reference implementation.

The OCI (Open Container Initiative) specifies two things:

  • What a container image looks like, i.e. the ImageSpec. Roughly: an image is a compressed directory tree with specific files laid out in a specific structure.
  • Which commands a container must accept and how they behave, i.e. the RuntimeSpec. Roughly: a "container" must support "create", "start", "stop", and "delete", with well-defined behavior.

runC is the reference implementation: it can run any image that conforms to the standard. The benefit of a standard is that it makes innovation easier. As long as you conform to it, the rest of the ecosystem's tools can work with you (admittedly the OCI spec itself is not great, and real projects still need some adapters), so images can be built with any tool, and a "container" no longer has to rely on namespaces and cgroups for isolation. This is what lets various virtualization-based containers participate in the container ecosystem.
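To make the RuntimeSpec side concrete, here is a minimal, hedged sketch of driving runc directly against an OCI bundle (the busybox rootfs and the bundle path are just illustrative assumptions):

mkdir -p /tmp/mybundle/rootfs
docker export $(docker create busybox) | tar -C /tmp/mybundle/rootfs -xf -   # any rootfs will do
cd /tmp/mybundle
runc spec                 # generate a default OCI config.json for this bundle
runc run mycontainer      # "create" + "start" per the RuntimeSpec; drops into the container's shell
runc list                 # in another terminal: list containers known to runc
runc delete mycontainer   # clean up after the container exits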

Next, rkt (from CoreOS, similar to Docker) wanted a slice of Docker's pie and asked for native rkt support as a Kubernetes runtime; the PR was actually merged. But the integration caused so many problems that it kept the Kubernetes maintainers constantly firefighting.

Then, in Kubernetes 1.5, the Container Runtime Interface (CRI) was introduced. Kubernetes told everyone: if you want to be a runtime, just implement this interface, successfully turning the tables on the runtimes.

However, Kubernetes had not yet reached its current position as the de facto leader, so a runtime could not afford to bind itself to Kubernetes and expose only the CRI. Hence the notion of a shim: a shim is an adapter that maps a runtime's native interface onto the Kubernetes CRI, like dockershim in the figure below.

[Figure: kubelet -> CRI -> dockershim -> Docker daemon -> containerd -> runC]
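Because every CRI-compliant runtime exposes the same gRPC service on a Unix socket, one client such as crictl can talk to any of them just by switching endpoints; a hedged sketch (the socket paths are the usual defaults and may differ on your hosts):

crictl --runtime-endpoint unix:///run/containerd/containerd.sock version
crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version   # Docker Engine behind the cri-dockerd shim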

Around this time, Docker was pushing Swarm into the PaaS market, so it split its architecture: container operations were moved into a separate daemon process, containerd, while the Docker daemon focused on the higher-level packaging and orchestration. Unfortunately Swarm lost badly to Kubernetes.

Afterwards, Docker donated the containerd project to the CNCF and retreated to focus on Docker Enterprise.

The Docker + containerd runtime chain was rather convoluted, so Kubernetes moved toward using containerd directly as the OCI runtime. Of course, besides Kubernetes, containerd also has to serve schedulers such as Swarm, so it does not implement the CRI itself; that adaptation is, once again, handled by a shim.

In containerd 1.0, CRI adaptation was handled by a separate process, cri-containerd;

In containerd 1.1 this was cleaned up further: the cri-containerd process was dropped and the adaptation logic became a plugin inside the containerd main process.

But even before containerd did all this, the community already had a more focused CRI runtime: CRI-O. It is very focused: it simply implements CRI on one side and OCI on the other, a runtime built specifically for Kubernetes:
[Figure: CRI-O architecture: kubelet -> CRI-O -> conmon -> OCI runtime]

Here conmon corresponds to containerd-shim; the intent is essentially the same.

Compared with the default dockershim, both CRI-O and calling containerd directly are much simpler, but for a long time they lacked production track records. With the recent 1.24 release, Kubernetes finally dropped built-in Docker support, so containerd-based setups will presumably become more and more common in production.

Preparing to install Kubernetes 1.24

Overview

From the discussion above we can see several ways to provide a runtime; the path where the kubelet calls the Docker daemon directly is no longer supported as of 1.24.

[Figure: runtime choices for the kubelet: containerd, Docker via cri-dockerd, and CRI-O]

  • Cluster creation option 1: containerd
    By default, Kubernetes uses containerd when creating a cluster.
  • Cluster creation option 2: Docker
    Docker is still the most widely used engine. Although Kubernetes 1.24 drops the kubelet's built-in Docker support by default, we can still create a Kubernetes cluster on Docker with the cri-dockerd adapter maintained by Mirantis.
  • Cluster creation option 3: CRI-O
    CRI-O is the most direct way for Kubernetes to create containers; the cluster is created with the help of the cri-o plugin.

Note: the latter two options require changes to the kubelet's startup arguments.

The three options are implemented one by one below.

We use Ubuntu 20.04 as the host OS. First, configure the apt sources (the Aliyun and Tsinghua mirrors are both shown; pick one):

# Aliyun mirror
deb http://mirrors.aliyun.com/ubuntu/ focal main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ focal main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ focal-security main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ focal-security main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ focal-updates main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ focal-updates main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ focal-proposed main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ focal-proposed main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ focal-backports main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ focal-backports main restricted universe multiverse

# Tsinghua mirror
# Source (deb-src) entries are commented out by default to speed up apt update; uncomment them if needed
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal-updates main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal-updates main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal-backports main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal-backports main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal-security main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal-security main restricted universe multiverse
apt update

Prerequisites

The official Kubernetes documentation lists the environment requirements:

Install kubeadm

Before you begin

  • A compatible Linux host. The Kubernetes project provides generic instructions for Debian- and Red Hat-based Linux distributions, and for distributions without a package manager.
  • 2 GB or more of RAM per machine (any less leaves little room for your applications).
  • 2 or more CPU cores.
  • Full network connectivity between all machines in the cluster (public or private network is fine).
  • Unique hostname, MAC address, and product_uuid on every node. See here for more details.
  • Certain ports must be open on the machines. See here for more details. The key one is 6443; check it with the following command:
nc 127.0.0.1 6443 
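Besides the port check, the usual kubeadm prerequisites apply on every node: disable swap, load the required kernel modules, and allow bridged traffic through iptables. A hedged sketch of the standard commands (adjust paths and fstab handling to your environment):

swapoff -a                                     # disable swap for the current boot
sed -i '/ swap / s/^/#/' /etc/fstab            # keep it disabled after reboot (verify your fstab layout)
cat <<EOF | tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
modprobe overlay
modprobe br_netfilter
cat <<EOF | tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sysctl --system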

Next, install runc and containerd from their GitHub releases pages (opencontainers/runc and Releases · containerd/containerd):

wget https://github.com/opencontainers/runc/releases/download/v1.1.2/runc.amd64
cp runc.amd64 /usr/local/bin/runc   # runc is a single binary; just copy it into place
chmod +x  /usr/local/bin/runc
cp /usr/local/bin/runc /usr/bin
cp /usr/local/bin/runc /usr/local/sbin/

wget https://github.com/containerd/containerd/releases/download/v1.6.4/cri-containerd-cni-1.6.4-linux-amd64.tar.gz
tar xf cri-containerd-cni-1.6.4-linux-amd64.tar.gz -C /

root@worker02:~# containerd --version
containerd github.com/containerd/containerd v1.6.4 212e8b6fa2f44b9c21b2798135fc6fb7c53efc16

On a node where containerd is already set up (here the control-plane node, cp), check the containerd service:

root@cp:~# systemctl status containerd
● containerd.service - containerd container runtime
     Loaded: loaded (/lib/systemd/system/containerd.service; enabled; vendor preset: enabled)
     Active: active (running) since Wed 2022-06-01 06:19:25 UTC; 3h 5min ago
       Docs: https://containerd.io
   Main PID: 36533 (containerd)
root@cp:~# cat /lib/systemd/system/containerd.service
# Copyright The containerd Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target

[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/bin/containerd
Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=infinity
# Comment TasksMax if your systemd version does not supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target

For the tarball (offline) install, the unit file is at /etc/systemd/system/containerd.service instead; it is inspected the same way with systemctl status containerd.

root@worker02:~# cat /etc/systemd/system/containerd.service
# Copyright The containerd Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target

[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/containerd
Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=infinity
# Comment TasksMax if your systemd version does not supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target

Start the service

root@worker02:~# systemctl daemon-reload
root@worker02:~# systemctl restart containerd

Create the configuration directory

mkdir -p /etc/containerd

Copy the configuration files /etc/containerd/config.toml and /etc/crictl.yaml from a node that is already set up, or generate them as described earlier and then adjust them:

root@cp:~# scp /etc/containerd/config.toml root@worker02:/etc/containerd/config.toml
root@cp:~# scp /etc/crictl.yaml root@worker02:/etc/crictl.yaml
systemctl restart containerd
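If there is no configured node to copy from, the config can also be generated locally; a hedged sketch that switches runc to the systemd cgroup driver and points the sandbox image at the Aliyun mirror used later (the sed patterns assume the containerd 1.6 default config):

containerd config default > /etc/containerd/config.toml
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sed -i 's#k8s.gcr.io/pause:3.6#registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6#' /etc/containerd/config.toml
systemctl restart containerd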

Initializing the cluster with kubeadm

List the images that will be needed (the warning appears because k8s.gcr.io is unreachable):

root@cp:~# kubeadm config images list
W0601 06:40:29.809756   39745 version.go:103] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get "https://storage.googleapis.com/kubernetes-release/release/stable-1.txt": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
W0601 06:40:29.809867   39745 version.go:104] falling back to the local client version: v1.24.1
k8s.gcr.io/kube-apiserver:v1.24.1
k8s.gcr.io/kube-controller-manager:v1.24.1
k8s.gcr.io/kube-scheduler:v1.24.1
k8s.gcr.io/kube-proxy:v1.24.1
k8s.gcr.io/pause:3.7
k8s.gcr.io/etcd:3.5.3-0
k8s.gcr.io/coredns/coredns:v1.8.6
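Since k8s.gcr.io is unreachable here, the images can be pre-pulled from the Aliyun mirror before running init; a hedged sketch:

kubeadm config images pull \
  --kubernetes-version=1.24.1 \
  --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers \
  --cri-socket=unix:///run/containerd/containerd.sock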

Initialize the cluster with kubeadm:

kubeadm init --kubernetes-version=1.24.1  --apiserver-advertise-address=192.168.81.21 --apiserver-bind-port=6443 --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers --pod-network-cidr=10.211.0.0/16 --service-cidr=10.96.0.0/12 --cri-socket=unix:///run/containerd/containerd.sock --ignore-preflight-errors=Swap

You can run kubeadm init --help to check the syntax.
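The same flags can also be captured in a kubeadm config file, which is easier to keep under version control; a hedged sketch using the v1beta3 API (the values simply mirror the command line above):

# kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.81.21
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.24.1
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
networking:
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.211.0.0/16

kubeadm init --config kubeadm-config.yaml --ignore-preflight-errors=Swap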

root@cp:~# kubeadm init --kubernetes-version=1.24.1  --apiserver-advertise-address=192.168.81.21 --apiserver-bind-port=6443 --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers --pod-network-cidr=10.211.0.0/16 --service-cidr=10.96.0.0/12 --cri-socket=unix:///run/containerd/containerd.sock --ignore-preflight-errors=Swap
[init] Using Kubernetes version: v1.24.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
......
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.81.21:6443 --token fybv6g.xlt3snl52qs5wyoo \
        --discovery-token-ca-cert-hash sha256:8545518e775368c0982638b9661355e6682a1f3ba98386b4ca0453449edc97ca

The images that have been pulled:

root@cp:~# crictl images ls
IMAGE                                                                         TAG                 IMAGE ID            SIZE
registry.cn-hangzhou.aliyuncs.com/google_containers/coredns                   v1.8.6              a4ca41631cc7a       13.6MB
registry.cn-hangzhou.aliyuncs.com/google_containers/etcd                      3.5.3-0             aebe758cef4cd       102MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver            v1.24.1             e9f4b425f9192       33.8MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager   v1.24.1             b4ea7e648530d       31MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy                v1.24.1             beb86f5d8e6cd       39.5MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler            v1.24.1             18688a72645c5       15.5MB
registry.cn-hangzhou.aliyuncs.com/google_containers/pause                     3.6                 6270bb605e12e       302kB
registry.cn-hangzhou.aliyuncs.com/google_containers/pause                     3.7                 221177c6082a8       311kB

# When using the ctr command you must specify the namespace
root@cp:~# ctr namespace ls
NAME    LABELS 
default        
k8s.io
root@cp:~# ctr -n k8s.io image ls
REF                                                                                                                                                 TYPE                                                      DIGEST                                                                  SIZE      PLATFORMS                                                                    LABELS                          
registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.6                                                                                  application/vnd.docker.distribution.manifest.list.v2+json sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e 13.0 MiB  linux/amd64,linux/arm,linux/arm64,linux/mips64le,linux/ppc64le,linux/s390x   io.cri-containerd.image=managed 
registry.cn-hangzhou.aliyuncs.com/google_containers/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e                 application/vnd.docker.distribution.manifest.list.v2+json sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e 13.0 MiB  linux/amd64,linux/arm,linux/arm64,linux/mips64le,linux/ppc64le,linux/s390x   io.cri-containerd.image=managed 
registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.3-0                                                                                    application/vnd.docker.distribution.manifest.list.v2+json sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5 97.4 MiB  linux/amd64,linux/arm/v7,linux/arm64,linux/ppc64le,linux/s390x,windows/amd64 io.cri-containerd.image=managed 
registry.cn-hangzhou.aliyuncs.com/google_containers/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5                    application/vnd.docker.distribution.manifest.list.v2+json sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5 97.4 MiB  linux/amd64,linux/arm/v7,linux/arm64,linux/ppc64le,linux/s390x,windows/amd64 io.cri-containerd.image=managed 
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.24.1                                                                          application/vnd.docker.distribution.manifest.list.v2+json sha256:ad9608e8a9d758f966b6ca6795b50a4723982328194bde214804b21efd48da44 32.2 MiB  linux/amd64,linux/arm/v7,linux/arm64,linux/ppc64le,linux/s390x               io.cri-containerd.image=managed 
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver@sha256:ad9608e8a9d758f966b6ca6795b50a4723982328194bde214804b21efd48da44          application/vnd.docker.distribution.manifest.list.v2+json sha256:ad9608e8a9d758f966b6ca6795b50a4723982328194bde214804b21efd48da44 32.2 MiB  linux/amd64,linux/arm/v7,linux/arm64,linux/ppc64le,linux/s390x               io.cri-containerd.image=managed 
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.24.1                                                                 application/vnd.docker.distribution.manifest.list.v2+json sha256:594a3f5bbdd0419ac57d580da8dfb061237fa48d0c9909991a3af70630291f7a 29.6 MiB  linux/amd64,linux/arm/v7,linux/arm64,linux/ppc64le,linux/s390x               io.cri-containerd.image=managed 
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager@sha256:594a3f5bbdd0419ac57d580da8dfb061237fa48d0c9909991a3af70630291f7a application/vnd.docker.distribution.manifest.list.v2+json sha256:594a3f5bbdd0419ac57d580da8dfb061237fa48d0c9909991a3af70630291f7a 29.6 MiB  linux/amd64,linux/arm/v7,linux/arm64,linux/ppc64le,linux/s390x               io.cri-containerd.image=managed 
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.24.1                                                                              application/vnd.docker.distribution.manifest.list.v2+json sha256:1652df3138207570f52ae0be05cbf26c02648e6a4c30ced3f779fe3d6295ad6d 37.7 MiB  linux/amd64,linux/arm/v7,linux/arm64,linux/ppc64le,linux/s390x               io.cri-containerd.image=managed 
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy@sha256:1652df3138207570f52ae0be05cbf26c02648e6a4c30ced3f779fe3d6295ad6d              application/vnd.docker.distribution.manifest.list.v2+json sha256:1652df3138207570f52ae0be05cbf26c02648e6a4c30ced3f779fe3d6295ad6d 37.7 MiB  linux/amd64,linux/arm/v7,linux/arm64,linux/ppc64le,linux/s390x               io.cri-containerd.image=managed 
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.24.1                                                                          application/vnd.docker.distribution.manifest.list.v2+json sha256:0d2de567157e3fb97dfa831620a3dc38d24b05bd3721763a99f3f73b8cbe99c9 14.8 MiB  linux/amd64,linux/arm/v7,linux/arm64,linux/ppc64le,linux/s390x               io.cri-containerd.image=managed 
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler@sha256:0d2de567157e3fb97dfa831620a3dc38d24b05bd3721763a99f3f73b8cbe99c9          application/vnd.docker.distribution.manifest.list.v2+json sha256:0d2de567157e3fb97dfa831620a3dc38d24b05bd3721763a99f3f73b8cbe99c9 14.8 MiB  linux/amd64,linux/arm/v7,linux/arm64,linux/ppc64le,linux/s390x               io.cri-containerd.image=managed 
registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6                                                                                       application/vnd.docker.distribution.manifest.list.v2+json sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db 294.7 KiB linux/amd64,linux/arm/v7,linux/arm64,linux/ppc64le,linux/s390x,windows/amd64 io.cri-containerd.image=managed 
registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.7                                                                                       application/vnd.docker.distribution.manifest.list.v2+json sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c 304.0 KiB linux/amd64,linux/arm/v7,linux/arm64,linux/ppc64le,linux/s390x,windows/amd64 io.cri-containerd.image=managed 
registry.cn-hangzhou.aliyuncs.com/google_containers/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db                   application/vnd.docker.distribution.manifest.list.v2+json sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db 294.7 KiB linux/amd64,linux/arm/v7,linux/arm64,linux/ppc64le,linux/s390x,windows/amd64 io.cri-containerd.image=managed 
registry.cn-hangzhou.aliyuncs.com/google_containers/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c                   application/vnd.docker.distribution.manifest.list.v2+json sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c 304.0 KiB linux/amd64,linux/arm/v7,linux/arm64,linux/ppc64le,linux/s390x,windows/amd64 io.cri-containerd.image=managed 
sha256:18688a72645c5d34e1cc70d8deb5bef4fc6c9073bb1b53c7812856afc1de1237                                                                             application/vnd.docker.distribution.manifest.list.v2+json sha256:0d2de567157e3fb97dfa831620a3dc38d24b05bd3721763a99f3f73b8cbe99c9 14.8 MiB  linux/amd64,linux/arm/v7,linux/arm64,linux/ppc64le,linux/s390x               io.cri-containerd.image=managed 
sha256:221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165                                                                             application/vnd.docker.distribution.manifest.list.v2+json sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c 304.0 KiB linux/amd64,linux/arm/v7,linux/arm64,linux/ppc64le,linux/s390x,windows/amd64 io.cri-containerd.image=managed 
sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee                                                                             application/vnd.docker.distribution.manifest.list.v2+json sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db 294.7 KiB linux/amd64,linux/arm/v7,linux/arm64,linux/ppc64le,linux/s390x,windows/amd64 io.cri-containerd.image=managed 
sha256:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03                                                                             application/vnd.docker.distribution.manifest.list.v2+json sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e 13.0 MiB  linux/amd64,linux/arm,linux/arm64,linux/mips64le,linux/ppc64le,linux/s390x   io.cri-containerd.image=managed 
sha256:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b                                                                             application/vnd.docker.distribution.manifest.list.v2+json sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5 97.4 MiB  linux/amd64,linux/arm/v7,linux/arm64,linux/ppc64le,linux/s390x,windows/amd64 io.cri-containerd.image=managed 
sha256:b4ea7e648530d171b38f67305e22caf49f9d968d71c558e663709b805076538d                                                                             application/vnd.docker.distribution.manifest.list.v2+json sha256:594a3f5bbdd0419ac57d580da8dfb061237fa48d0c9909991a3af70630291f7a 29.6 MiB  linux/amd64,linux/arm/v7,linux/arm64,linux/ppc64le,linux/s390x               io.cri-containerd.image=managed 
sha256:beb86f5d8e6cd2234ca24649b74bd10e1e12446764560a3804d85dd6815d0a18                                                                             application/vnd.docker.distribution.manifest.list.v2+json sha256:1652df3138207570f52ae0be05cbf26c02648e6a4c30ced3f779fe3d6295ad6d 37.7 MiB  linux/amd64,linux/arm/v7,linux/arm64,linux/ppc64le,linux/s390x               io.cri-containerd.image=managed 
sha256:e9f4b425f9192c11c0fa338cabe04f832aa5cea6dcbba2d1bd2a931224421693                                                                             application/vnd.docker.distribution.manifest.list.v2+json sha256:ad9608e8a9d758f966b6ca6795b50a4723982328194bde214804b21efd48da44 32.2 MiB  linux/amd64,linux/arm/v7,linux/arm64,linux/ppc64le,linux/s390x               io.cri-containerd.image=managed

Using calico as the CNI

root@cp:~# kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
poddisruptionbudget.policy/calico-kube-controllers created

Joining the worker nodes

root@worker01:~# kubeadm join 192.168.81.21:6443 --token fybv6g.xlt3snl52qs5wyoo \
>         --discovery-token-ca-cert-hash sha256:8545518e775368c0982638b9661355e6682a1f3ba98386b4ca0453449edc97ca 
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

#CP check:
root@cp:/home/zyi# kubectl get node -owide
NAME       STATUS   ROLES           AGE   VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
cp         Ready    control-plane   30h   v1.24.1   192.168.81.21   <none>        Ubuntu 20.04.4 LTS   5.4.0-113-generic   containerd://1.6.4
worker01   Ready    <none>          30h   v1.24.1   192.168.81.22   <none>        Ubuntu 20.04.4 LTS   5.4.0-113-generic   containerd://1.6.4
worker02   Ready    <none>          27h   v1.24.1   192.168.81.23   <none>        Ubuntu 20.04.4 LTS   5.4.0-113-generic   containerd://1.6.4
root@cp:~# kubectl get po -A -owide
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE     IP              NODE       NOMINATED NODE   READINESS GATES
kube-system   calico-kube-controllers-56cdb7c587-v46wk   1/1     Running   0          118m    10.211.5.3      worker01   <none>           <none>
kube-system   calico-node-2qq4n                          1/1     Running   0          118m    192.168.81.21   cp         <none>           <none>
kube-system   calico-node-slnp9                          1/1     Running   0          2m27s   192.168.81.23   worker02   <none>           <none>
kube-system   calico-node-v2xd8                          1/1     Running   0          118m    192.168.81.22   worker01   <none>           <none>
kube-system   coredns-7f74c56694-4b4wp                   1/1     Running   0          3h      10.211.5.1      worker01   <none>           <none>
kube-system   coredns-7f74c56694-mmvgb                   1/1     Running   0          3h      10.211.5.2      worker01   <none>           <none>
kube-system   etcd-cp                                    1/1     Running   0          3h      192.168.81.21   cp         <none>           <none>
kube-system   kube-apiserver-cp                          1/1     Running   0          3h      192.168.81.21   cp         <none>           <none>
kube-system   kube-controller-manager-cp                 1/1     Running   0          3h      192.168.81.21   cp         <none>           <none>
kube-system   kube-proxy-4n2jk                           1/1     Running   0          2m27s   192.168.81.23   worker02   <none>           <none>
kube-system   kube-proxy-8zdvt                           1/1     Running   0          169m    192.168.81.22   worker01   <none>           <none>
kube-system   kube-proxy-rpf78                           1/1     Running   0          3h      192.168.81.21   cp         <none>           <none>
kube-system   kube-scheduler-cp                          1/1     Running   0          3h      192.168.81.21   cp         <none>           <none>

Creating a cluster with the Docker runtime

According to the official documentation, a cluster can also be created with Docker Engine.

Note: the following steps assume that you use the [cri-dockerd](https://github.com/Mirantis/cri-dockerd) adapter to integrate Docker Engine with Kubernetes.

  1. On each of your nodes, install Docker for your Linux distribution by following the Docker Engine installation guide.

  2. Install [cri-dockerd](https://github.com/Mirantis/cri-dockerd) following the instructions in its source repository.

    For cri-dockerd, the CRI socket is /run/cri-dockerd.sock by default.

  3. Initialize the Kubernetes cluster.

Install docker-ce

Online installation: configure the package sources and install:

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
apt update
apt install -y containerd.io docker-ce docker-ce-cli

Note: by default, the docker service already uses containerd underneath;
journalctl -u docker.service
shows: unix:///run/containerd/containerd.sock

Check docker info:

root@worker01:~# docker info
Client:
...
Server:
 Containers: 0
 ...
 Server Version: 20.10.16
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 1
...
WARNING: No swap limit support

The output above shows that Docker's default Cgroup Driver is cgroupfs, whereas the kubelet is configured to use systemd:

root@worker01:~# journalctl -u kubelet | grep systemd |more
Jun 03 02:48:41 worker01 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 03 02:48:41 worker01 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 03 02:48:52 worker01 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4441.
Jun 03 02:48:52 worker01 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
Jun 03 02:48:52 worker01 systemd[1]: Started kubelet: The Kubernetes Node Agent.
Jun 03 02:48:52 worker01 kubelet[97187]:       --cgroup-driver string    Driver that the kubelet uses to manipulate cgroups on the host.  Possible values: 'cgroupfs', 'systemd' (default "cgroupfs") (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
Jun 03 02:48:52 worker01 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE

Now change Docker's Cgroup Driver setting (do this on every node):

Create the dedicated systemd drop-in directory:

mkdir -p /etc/systemd/system/docker.service.d

Create the configuration file:

tee /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF

# Restart the service
systemctl daemon-reload
systemctl restart docker
systemctl enable docker

root@worker01:~# docker info |grep Cgroup
 Cgroup Driver: systemd
 Cgroup Version: 1
WARNING: No swap limit support

Getting the cri-dockerd adapter to support Docker

Install the cri-dockerd plugin:

mkdir -p /data/softs && cd /data/softs
wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.2.1/cri-dockerd-0.2.1.amd64.tgz
tar xf cri-dockerd-0.2.1.amd64.tgz
mv cri-dockerd/cri-dockerd /usr/local/bin/

# Check the result
cri-dockerd --version

root@master:/data/softs# cri-dockerd --version
cri-dockerd 0.2.1 (HEAD)

Create the service file /etc/systemd/system/cri-docker.service from which cri-dockerd will be started:

[Unit]
Description=CRI Interface for Docker Application container Engine
Documentation=https://docs.mirantis.com
After=network-online.target firewalld.service docker.service
Wants=network-online.target

[Service]
Type=notify
ExecStart=/usr/local/bin/cri-dockerd --network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --image-pull-progress-deadline=30s --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6 --cri-dockerd-root-directory=/var/lib/dockershim --docker-endpoint=unix:///var/run/docker.sock --cri-dockerd-root-directory=/var/lib/docker
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
StartLimitBurst=3
StartLimitInterval=60s 
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
Delegate=yes
KillMode=process

[Install]
WantedBy=multi-user.target

Optionally, create the dedicated socket file /usr/lib/systemd/system/cri-docker.socket:

[Unit]
Description=CRI Docker Socket for the API
PartOf=cri-docker.service

[Socket]
ListenStream=/var/run/cri-dockerd.sock
SocketMode=0660
SocketUser=root
SocketGroup=docker

[Install]
WantedBy=sockets.target

Start the service

systemctl daemon-reload
systemctl enable cri-docker.service
systemctl restart cri-docker.service
systemctl status --no-pager cri-docker.service

# Check the result
crictl --runtime-endpoint /var/run/cri-dockerd.sock ps

root@master:/data/softs# crictl --runtime-endpoint /var/run/cri-dockerd.sock ps
I0604 10:50:12.902161  380647 util_unix.go:104] "Using this format as endpoint is deprecated, please consider using full url format." deprecatedFormat="/var/run/cri-dockerd.sock" fullURLFormat="unix:///var/run/cri-dockerd.sock"
I0604 10:50:12.911201  380647 util_unix.go:104] "Using this format as endpoint is deprecated, please consider using full url format." deprecatedFormat="/var/run/cri-dockerd.sock" fullURLFormat="unix:///var/run/cri-dockerd.sock"
CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD

The go:104 messages above are deprecation warnings; the following yaml file removes them:

# cat /etc/crictl.yaml
runtime-endpoint: "unix:///var/run/cri-dockerd.sock"
image-endpoint: "unix:///var/run/cri-dockerd.sock"
timeout: 10
debug: false
pull-image-on-create: true
disable-pull-on-run: false

Test it:

crictl ps

root@master:/data/softs# crictl ps
CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD

Next, make sure every host gets these configuration files:

root@master:/data/softs# scp /etc/systemd/system/cri-docker.service worker01:/etc/systemd/system/cri-docker.service
cri-docker.service                                                               100%  934     1.9MB/s   00:00    
root@master:/data/softs# scp /usr/lib/systemd/system/cri-docker.socket worker01:/usr/lib/systemd/system/cri-docker.socket
cri-docker.socket                                                                100%  210   458.5KB/s   00:00    
root@master:/data/softs# scp /etc/crictl.yaml worker01:/etc/crictl.yaml
crictl.yaml                                                                      100%  183   718.5KB/s   00:00

Create the cluster

Adjust the kubelet (on all nodes):

#cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
ExecStart=... --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.7 --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --containerd=unix:///var/run/cri-dockerd.sock

systemctl daemon-reload
systemctl restart kubelet

Initialize the cluster with kubeadm:

kubeadm init --kubernetes-version=1.24.1 \
--apiserver-advertise-address=192.168.81.20 \
--image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.211.0.0/16 \
--cri-socket unix:///var/run/cri-dockerd.sock \
--ignore-preflight-errors=Swap

root@master:/data/softs# kubeadm init --kubernetes-version=1.24.1 \
> --apiserver-advertise-address=192.168.81.20 \
> --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
> --service-cidr=10.96.0.0/12 \
> --pod-network-cidr=10.211.0.0/16 \
> --cri-socket unix:///var/run/cri-dockerd.sock \
> --ignore-preflight-errors=Swap
[init] Using Kubernetes version: v1.24.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master] and IPs [10.96.0.1 192.168.81.20]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master] and IPs [192.168.81.20 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master] and IPs [192.168.81.20 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 9.002541 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: zpqirm.so0xmeo6b46gaj41
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.81.20:6443 --token zpqirm.so0xmeo6b46gaj41 \
        --discovery-token-ca-cert-hash sha256:e8469d13b8ff07ce2803134048bb109a16e6b15b9e3279c4c556066549025c47
root@master:/data/softs# mkdir -p $HOME/.kube
root@master:/data/softs#   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
root@master:/data/softs#   sudo chown $(id -u):$(id -g) $HOME/.kube/config

On the worker node:

root@worker01:/data/softs# kubeadm join 192.168.81.20:6443 --token zpqirm.so0xmeo6b46gaj41 \
>         --discovery-token-ca-cert-hash sha256:e8469d13b8ff07ce2803134048bb109a16e6b15b9e3279c4c556066549025c47 
Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
To see the stack trace of this error execute with --v=5 or higher

This fails because the host's default CRI is containerd and both sockets are present.

Run kubeadm join again with the CRI socket specified explicitly:

root@worker01:/data/softs# kubeadm join 192.168.81.20:6443 --token zpqirm.so0xmeo6b46gaj41         --discovery-token-ca-cert-hash sha256:e8469d13b8ff07ce2803134048bb109a16e6b15b9e3279c4c556066549025c47  --cri-socket unix:///var/run/cri-dockerd.sock
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Creating a cluster with the CRI-O runtime

The official documentation:

Container runtimes

Install CRI-O

OS=xUbuntu_20.04
CRIO_VERSION=1.24
echo "deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/ /" | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list 
echo "deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$CRIO_VERSION/$OS/ /" | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable:cri-o:$CRIO_VERSION.listcurl -L https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:$CRIO_VERSION/$OS/Release.key | sudo apt-key add -
curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/Release.key | sudo apt-key add -#echo "deb [signed-by=/usr/share/keyrings/libcontainers-archive-keyring.gpg] https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/ /" > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
#echo "deb [signed-by=/usr/share/keyrings/libcontainers-crio-archive-keyring.gpg] https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VERSION/$OS/ /" > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable:cri-o:$VERSION.list#mkdir -p /usr/share/keyrings
#curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/Release.key | gpg --dearmor -o /usr/share/keyrings/libcontainers-archive-keyring.gpg
#curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VERSION/$OS/Release.key | gpg --dearmor -o /usr/share/keyrings/libcontainers-crio-archive-keyring.gpgapt-get update
apt-get install cri-o cri-o-runcsystemctl start crio
systemctl enable crio
systemctl status crio

Modify the configuration

Change the default pod subnet:

#/etc/cni/net.d/100-crio-bridge.conf
sed -i 's/10.85.0.0/10.211.0.0/g' /etc/cni/net.d/100-crio-bridge.conf

Change the basic settings:

#grep -Env '#|^$|^\[' /etc/crio/crio.conf
169:cgroup_manager = "systemd"
451:pause_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6"
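A hedged sketch of making those two changes with a drop-in file instead of editing crio.conf by line number (CRI-O merges files from /etc/crio/crio.conf.d/; the file name is arbitrary, and cgroup_manager = "systemd" is already the packaged default):

cat <<EOF | tee /etc/crio/crio.conf.d/02-custom.conf
[crio.runtime]
cgroup_manager = "systemd"

[crio.image]
pause_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6"
EOF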

Restart the service to apply the configuration:

systemctl restart crio

Verify:

curl -v --unix-socket /var/run/crio/crio.sock http://localhost/info

root@first:~# curl -v --unix-socket /var/run/crio/crio.sock http://localhost/info
*   Trying /var/run/crio/crio.sock:0...
* Connected to localhost (/var/run/crio/crio.sock) port 80 (#0)
> GET /info HTTP/1.1
> Host: localhost
> User-Agent: curl/7.68.0
> Accept: */*
> 
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Content-Type: application/json
< Date: Sat, 04 Jun 2022 15:10:27 GMT
< Content-Length: 239
< 
* Connection #0 to host localhost left intact
{"storage_driver":"overlay","storage_root":"/var/lib/containers/storage","cgroup_driver":"systemd","default_id_mappings":{"uids":[{"container_id":0,"host_id":0,"size":4294967295}],"gids":[{"container_id":0,"host_id":0,"size":4294967295}]}}

Configure the crictl.yaml parameters:

# cat /etc/crictl.yaml
runtime-endpoint: "unix:///var/run/crio/crio.sock"
image-endpoint: "unix:///var/run/crio/crio.sock"
timeout: 10
debug: false
pull-image-on-create: true
disable-pull-on-run: false

Initialize the cluster

Adjust the kubelet parameters:

#cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
ExecStart=... --container-runtime=remote --cgroup-driver=systemd --container-runtime-endpoint='unix:///var/run/crio/crio.sock' --runtime-request-timeout=5m

systemctl daemon-reload
systemctl restart kubelet

Cluster initialization:

kubeadm init --kubernetes-version=1.24.1 \
--apiserver-advertise-address=192.168.81.1 \
--image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.211.0.0/16 \
--cri-socket unix:///var/run/crio/crio.sock \
--ignore-preflight-errors=Swap

root@main:~# kubeadm init --kubernetes-version=1.24.1 \
> --apiserver-advertise-address=192.168.81.1 \
> --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers \
> --service-cidr=10.96.0.0/12 \
> --pod-network-cidr=10.211.0.0/16 \
> --cri-socket unix:///var/run/crio/crio.sock \
> --ignore-preflight-errors=Swap
[init] Using Kubernetes version: v1.24.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local main] and IPs [10.96.0.1 192.168.81.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost main] and IPs [192.168.81.1 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost main] and IPs [192.168.81.1 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 7.003679 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node main as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node main as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: k9dmq6.cuhj0atd4jhz4y6o
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.81.1:6443 --token k9dmq6.cuhj0atd4jhz4y6o \
        --discovery-token-ca-cert-hash sha256:99de4906a2f690147d59ee71c1e2e916e64b6a8f6efae5bd28bebcb711cd28ab

Join the workers to the cluster

root@worker03:~# kubeadm join 192.168.81.1:6443 --token k9dmq6.cuhj0atd4jhz4y6o \
>         --discovery-token-ca-cert-hash sha256:99de4906a2f690147d59ee71c1e2e916e64b6a8f6efae5bd28bebcb711cd28ab 
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
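Finally, verify from the control-plane that the node joined and registered with the CRI-O runtime (once the node is Ready, the CONTAINER-RUNTIME column should show cri-o://1.24.x):

kubectl get nodes -o wide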
