Highly Available k8s Cluster (k8s-1.29.2)

2024-02-25 22:36
Tags: Cloud Native, Cluster, k8s, High Availability, 1.29

This article walks through building a highly available k8s cluster (k8s-1.29.2); hopefully it is a useful reference for developers tackling the same problem.

0. Highly Available k8s Cluster (k8s-1.29.2)

Table of Contents

      • 0. Highly Available k8s Cluster (k8s-1.29.2)
      • 0. Environment preparation (centos-7.9, rocky-9.3: configuration + tuning)
      • 1. nginx + keepalived (load balancing + high availability)
            • 1.1 nginx
            • 1.2 keepalived
      • 2. Install containerd-1.6.28 (official repo) (centos-7.9, rocky-9.3)
      • 3. Install k8s (kubeadm-1.29.2, kubelet-1.29.2, kubectl-1.29.2) (official repo) (centos-7.9, rocky-9.3)
      • 4. Initialize the k8s-1.29.2 cluster
      • 5. Install the cluster network (calico)
      • 6. Test that coredns resolution works
      • 7. Joining k8s nodes later (install containerd, kubeadm, kubelet, and kubectl as described above first)

Install the latest k8s-1.29.2 with kubeadm, using containerd-1.6.28 as the container runtime.

containerd-1.6.28

k8s-1.29.2

  • k8s-master1(centos-7.9)(4c8g-200g)
  • k8s-master2(centos-7.9)(4c8g-200g)
  • k8s-master3(centos-7.9)(4c8g-200g)
  • k8s-node1(centos-7.9)(8c16g-200g)
  • k8s-node2(rocky-9.3)(8c16g-200g)
  • k8s-node3(rocky-9.3)(8c16g-200g)

Host planning

Host           IP               Role
k8s-master1    192.168.1.201    nginx + keepalived
k8s-master2    192.168.1.203    nginx + keepalived
k8s-master3    192.168.1.205    nginx + keepalived
vip            192.168.1.10

Network allocation

Network            CIDR
Node network       192.168.1.0/24
Service network    10.96.0.0/12
Pod network        10.244.0.0/16

0. Environment preparation (centos-7.9, rocky-9.3: configuration + tuning)

# Shell prompt colors (optional)
echo "PS1='\[\033[35m\][\[\033[00m\]\[\033[31m\]\u\[\033[33m\]\[\033[33m\]@\[\033[03m\]\[\033[35m\]\h\[\033[00m\] \[\033[5;32m\]\w\[\033[00m\]\[\033[35m\]]\[\033[00m\]\[\033[5;31m\]\\$\[\033[00m\] '" >> ~/.bashrc && source ~/.bashrc
echo 'PS1="[\[\e[33m\]\u\[\e[0m\]\[\e[31m\]@\[\e[0m\]\[\e[35m\]\h\[\e[0m\]:\[\e[32m\]\w\[\e[0m\]] \[\e[33m\]\t\[\e[0m\] \[\e[31m\]Power\[\e[0m\]=\[\e[32m\]\!\[\e[0m\] \[\e[35m\]^0^\[\e[0m\]\n\[\e[95m\]公主请输命令^0^\[\e[0m\] \[\e[36m\]\\$\[\e[0m\] "' >> ~/.bashrc && source ~/.bashrc
# 0. centos7 environment configuration
# Install vim and basic tools
yum -y install vim wget net-tools
# Show line numbers in vim
echo "set nu" >> /root/.vimrc
# Highlight grep matches
sed -i "8calias grep='grep --color'" /root/.bashrc
# Tencent mirror repos
cp /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo-bak
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.cloud.tencent.com/repo/centos7_base.repo
wget -O /etc/yum.repos.d/CentOS-Epel.repo http://mirrors.cloud.tencent.com/repo/epel-7.repo
yum clean all
yum makecache
# 1. Set the hostname (run the matching command on each node)
hostnamectl set-hostname k8s-master1 && su -
hostnamectl set-hostname k8s-master2 && su -
hostnamectl set-hostname k8s-master3 && su -
hostnamectl set-hostname k8s-node1 && su -
hostnamectl set-hostname k8s-node2 && su -
hostnamectl set-hostname k8s-node3 && su -
# 2. Add /etc/hosts entries
cat >> /etc/hosts << EOF
192.168.1.201 k8s-master1
192.168.1.203 k8s-master2
192.168.1.205 k8s-master3
192.168.1.101 k8s-node1
192.168.1.102 k8s-node2
192.168.1.103 k8s-node3
EOF
# 3. Time synchronization
yum -y install ntp
systemctl enable ntpd --now
# 4. Permanently disable SELinux (takes effect after reboot)
setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
# 5. Permanently disable swap (takes effect after reboot)
swapoff -a  # disable now
sed -i 's/.*swap.*/#&/g' /etc/fstab # disable permanently
# 6. Upgrade the kernel to the 5.4 LTS series (takes effect after reboot)
# https://elrepo.org/tiki/kernel-lt
# https://elrepo.org/linux/kernel/el7/x86_64/RPMS/
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-6.el7.elrepo.noarch.rpm
yum --disablerepo="*" --enablerepo="elrepo-kernel" list available
yum --enablerepo=elrepo-kernel install -y kernel-lt
grub2-set-default 0
# Reboot here before continuing
# reboot
# 7. Disable the firewall and flush iptables rules
systemctl disable firewalld && systemctl stop firewalld
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X && iptables -P FORWARD ACCEPT
# 8. Disable NetworkManager
systemctl disable NetworkManager && systemctl stop NetworkManager
# 9. Load the IPVS kernel modules
yum -y install ipset ipvsadm
mkdir -p /etc/sysconfig/modules
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF
modprobe -- nf_conntrack
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack
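# Optional hedge: /etc/sysconfig/modules is a legacy convention and may not be executed at
# boot on systemd-based systems (CentOS 7 / Rocky 9). The same modules can also be declared
# for systemd-modules-load, mirroring what step 10 does for overlay/br_netfilter.
# The file name ipvs.conf below is an arbitrary choice.
cat > /etc/modules-load.d/ipvs.conf << EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF
systemctl restart systemd-modules-load.service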
# 10. Enable br_netfilter and IPv4 forwarding
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
# Set the required sysctl parameters; they persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
# Apply the sysctl parameters without rebooting
sudo sysctl --system
# Verify that the settings took effect
lsmod | grep br_netfilter
lsmod | grep overlay
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
# 11. Kernel tuning
cat > /etc/sysctl.d/99-sysctl.conf << 'EOF'
# sysctl settings are defined through files in
# /usr/lib/sysctl.d/, /run/sysctl.d/, and /etc/sysctl.d/.
#
# Vendors settings live in /usr/lib/sysctl.d/.
# To override a whole file, create a new file with the same name in
# /etc/sysctl.d/ and put new settings there. To override
# only specific settings, add a file with a lexically later
# name in /etc/sysctl.d/ and put new settings there.
#
# For more information, see sysctl.conf(5) and sysctl.d(5).

# Controls IP packet forwarding
# Controls source route verification
net.ipv4.conf.default.rp_filter = 1
# Do not accept source routing
net.ipv4.conf.default.accept_source_route = 0
# Controls the System Request debugging functionality of the kernel
# Controls whether core dumps will append the PID to the core filename.
# Useful for debugging multi-threaded applications.
kernel.core_uses_pid = 1
# Controls the use of TCP syncookies
net.ipv4.tcp_syncookies = 1
# Controls the maximum size of a message, in bytes
kernel.msgmnb = 65536
# Controls the default maximum size of a message queue
kernel.msgmax = 65536
net.ipv4.conf.all.promote_secondaries = 1
net.ipv4.conf.default.promote_secondaries = 1
net.ipv6.neigh.default.gc_thresh3 = 4096
kernel.sysrq = 1
net.ipv6.conf.all.disable_ipv6=0
net.ipv6.conf.default.disable_ipv6=0
net.ipv6.conf.lo.disable_ipv6=0
kernel.numa_balancing = 0
kernel.shmmax = 68719476736
kernel.printk = 5
net.core.rps_sock_flow_entries=8192
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_local_reserved_ports=60001,60002
net.core.rmem_max=16777216
fs.inotify.max_user_watches=524288
kernel.core_pattern=core
net.core.dev_weight_tx_bias=1
net.ipv4.tcp_max_orphans=32768
kernel.pid_max=4194304
kernel.softlockup_panic=1
fs.file-max=3355443
net.core.bpf_jit_harden=1
net.ipv4.tcp_max_tw_buckets=32768
fs.inotify.max_user_instances=8192
net.core.bpf_jit_kallsyms=1
vm.max_map_count=262144
kernel.threads-max=262144
net.core.bpf_jit_enable=1
net.ipv4.tcp_keepalive_time=600
net.ipv4.tcp_wmem=4096 12582912    16777216
net.core.wmem_max=16777216
net.ipv4.neigh.default.gc_thresh1=2048
net.core.somaxconn=32768
net.ipv4.neigh.default.gc_thresh3=8192
net.ipv4.ip_forward=1
net.ipv4.neigh.default.gc_thresh2=4096
net.ipv4.tcp_max_syn_backlog=8096
net.bridge.bridge-nf-call-iptables=1
net.ipv4.tcp_rmem=4096  12582912        16777216
EOF
# Apply the sysctl parameters without rebooting
sudo sysctl --system
# 12. Configure resource limits
cat >> /etc/security/limits.conf << 'EOF'
* soft nofile 100001
* hard nofile 100002
root soft nofile 100001
root hard nofile 100002
* soft memlock unlimited
* hard memlock unlimited
* soft nproc 254554
* hard nproc 254554
* soft sigpending 254554
* hard sigpending 254554
EOF
grep -vE "^\s*#" /etc/security/limits.conf
ulimit -a
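Before moving on, it is worth confirming on every node that the preparation steps actually took effect. A minimal, read-only check sketch (nothing here changes state; the expected values follow from the steps above):

getenforce                          # Permissive before reboot, Disabled after
free -h | grep -i swap              # swap usage should be 0B
uname -r                            # 5.4.x on the CentOS 7 nodes after the kernel upgrade
sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables
lsmod | grep -e ip_vs -e br_netfilter -e overlay
ulimit -n                           # reflects the nofile limit from limits.conf after re-login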

1. nginx + keepalived (load balancing + high availability)

1.1 nginx
cat > /etc/yum.repos.d/nginx.repo << 'EOF'
[nginx-stable]
name=nginx stable repo
baseurl=http://nginx.org/packages/centos/$releasever/$basearch
gpgcheck=1
enabled=1
gpgkey=https://nginx.org/keys/nginx_signing.key
module_hotfixes=true
EOF
yum install nginx -y
mv /etc/nginx/nginx.conf /etc/nginx/nginx.conf-bak
mv /etc/nginx/conf.d/default.conf /etc/nginx/conf.d/default.conf-bak
cat > /etc/nginx/nginx.conf << "EOF"
user  nginx;
worker_processes  auto;

error_log  /var/log/nginx/error.log notice;
pid        /var/run/nginx.pid;

events {
    worker_connections  10240;
}

stream {
    log_format  main  '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log  /var/log/nginx/k8s-access.log  main;

    upstream k8s-apiserver {
        server 192.168.1.201:6443 weight=5 max_fails=1 fail_timeout=3s;   # k8s-master1 IP and apiserver port 6443
        server 192.168.1.203:6443 weight=5 max_fails=1 fail_timeout=3s;   # k8s-master2 IP and apiserver port 6443
        server 192.168.1.205:6443 weight=5 max_fails=1 fail_timeout=3s;   # k8s-master3 IP and apiserver port 6443
    }

    server {
        listen 9443;                # listen on port 9443
        proxy_pass k8s-apiserver;   # reverse proxy to the apiserver upstream
    }
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    #gzip  on;

    include /etc/nginx/conf.d/*.conf;
}
EOF
systemctl enable --now nginx
systemctl status nginx
netstat -tnlp |grep 9443
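To sanity-check the load balancer, nginx -t validates the configuration, and once the apiservers exist (after section 4) the proxied port should return the apiserver's version payload. A small sketch, assuming the commands run on one of the master nodes:

nginx -t
# After kubeadm init, this should return a JSON version object through the 9443 stream proxy
# (TLS verification is skipped because curl does not trust the apiserver certificate):
curl -k https://192.168.1.201:9443/version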
1.2 keepalived
yum -y install keepalived
cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf-bak

k8s-master1 (keepalived MASTER configuration)

cat > /etc/keepalived/keepalived.conf << EOF
! Configuration File for keepalived

global_defs {
    router_id LVS_k8s
}

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
    interval 3
    weight -2
    fall 10
    rise 2
}

vrrp_instance VI_1 {
    state MASTER              # master
    interface ens33           # network interface
    virtual_router_id 51
    priority 100              # priority
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.10          # vip
    }
    track_script {
        check_nginx
    }
}
EOF

k8s-master2 (keepalived BACKUP configuration)

cat > /etc/keepalived/keepalived.conf << EOF
! Configuration File for keepalived

global_defs {
    router_id LVS_k8s
}

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
    interval 3
    weight -2
    fall 10
    rise 2
}

vrrp_instance VI_1 {
    state BACKUP              # backup
    interface ens33           # network interface
    virtual_router_id 51
    priority 90               # priority
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.10          # vip
    }
    track_script {
        check_nginx
    }
}
EOF

k8s-master3 (keepalived BACKUP configuration)

cat > /etc/keepalived/keepalived.conf << EOF
! Configuration File for keepalived

global_defs {
    router_id LVS_k8s
}

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
    interval 3
    weight -2
    fall 10
    rise 2
}

vrrp_instance VI_1 {
    state BACKUP              # backup
    interface ens33           # network interface
    virtual_router_id 51
    priority 80               # priority
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.10          # vip
    }
    track_script {
        check_nginx
    }
}
EOF
# Create the nginx health-check script first; the keepalived configs above reference it
cat > /etc/keepalived/check_nginx.sh << 'EOF'
#!/bin/bash
# If nginx is down, try to restart it; if it still is not running, stop keepalived
# so the VIP fails over to a backup node.
num=`ps -ef | grep nginx | awk '{print $1}' | grep nginx | wc -l`
if [ $num -eq 0 ];then
    systemctl start nginx
    sleep 1
    if [ `ps -ef | grep nginx | awk '{print $1}' | grep nginx | wc -l` -eq 0 ];then
        systemctl stop keepalived
    fi
fi
EOF
chmod +x /etc/keepalived/check_nginx.sh

systemctl enable --now keepalived
systemctl status keepalived
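With keepalived running on all three masters, the VIP should sit on the MASTER node and move when that node's keepalived stops. A rough verification sketch, assuming ens33 is the interface used in the configs above:

ip addr show ens33 | grep 192.168.1.10   # the VIP should appear only on the MASTER (priority 100) node
ping -c 2 192.168.1.10
# Force a failover for testing: stop keepalived on the MASTER, confirm the VIP shows up
# on one of the BACKUP nodes, then start keepalived again.
systemctl stop keepalived
ip addr show ens33 | grep 192.168.1.10
systemctl start keepalived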

2. Install containerd-1.6.28 (official repo) (centos-7.9, rocky-9.3)

wget -O /etc/yum.repos.d/docker-ce.repo https://download.docker.com/linux/centos/docker-ce.repo
yum makecache
yum list containerd.io --showduplicates | sort -r
yum -y install containerd.io-1.6.28
containerd config default | sudo tee /etc/containerd/config.toml
# Change the cgroup driver to systemd
sed -ri 's#SystemdCgroup = false#SystemdCgroup = true#' /etc/containerd/config.toml
# Change the sandbox_image
sed -ri 's#registry.k8s.io\/pause:3.6#registry.aliyuncs.com\/google_containers\/pause:3.9#' /etc/containerd/config.toml
# Configure registry mirrors
# https://github.com/DaoCloud/public-image-mirror
# 1. Point containerd at the certs.d configuration directory
sed -i 's/config_path = ""/config_path = "\/etc\/containerd\/certs.d\/"/g' /etc/containerd/config.toml
# 2. Configure the mirrors
# docker.io mirrors
mkdir -p /etc/containerd/certs.d/docker.io
cat > /etc/containerd/certs.d/docker.io/hosts.toml << 'EOF'
server = "https://docker.io" # 源镜像地址[host."https://xk9ak4u9.mirror.aliyuncs.com"] # 阿里-镜像加速地址capabilities = ["pull","resolve"][host."https://docker.m.daocloud.io"] # 道客-镜像加速地址capabilities = ["pull","resolve"][host."https://dockerproxy.com"] # 镜像加速地址capabilities = ["pull", "resolve"][host."https://docker.mirrors.sjtug.sjtu.edu.cn"] # 上海交大-镜像加速地址capabilities = ["pull","resolve"][host."https://docker.mirrors.ustc.edu.cn"] # 中科大-镜像加速地址capabilities = ["pull","resolve"][host."https://docker.nju.edu.cn"] # 南京大学-镜像加速地址capabilities = ["pull","resolve"][host."https://registry-1.docker.io"]capabilities = ["pull","resolve","push"]
EOF
# registry.k8s.io mirror
mkdir -p /etc/containerd/certs.d/registry.k8s.io
cat > /etc/containerd/certs.d/registry.k8s.io/hosts.toml << 'EOF'
server = "https://registry.k8s.io"[host."https://k8s.m.daocloud.io"]capabilities = ["pull", "resolve", "push"]
EOF
# quay.io mirror
mkdir -p /etc/containerd/certs.d/quay.io
cat > /etc/containerd/certs.d/quay.io/hosts.toml << 'EOF'
server = "https://quay.io"[host."https://quay.m.daocloud.io"]capabilities = ["pull", "resolve", "push"]
EOF
# docker.elastic.co mirror
mkdir -p /etc/containerd/certs.d/docker.elastic.co
tee /etc/containerd/certs.d/docker.elastic.co/hosts.toml << 'EOF'
server = "https://docker.elastic.co"[host."https://elastic.m.daocloud.io"]capabilities = ["pull", "resolve", "push"]
EOF
systemctl daemon-reload
systemctl enable containerd --now
systemctl restart containerd
systemctl status containerd

The mirror configuration takes effect without restarting the service.

# Configure crictl
cat << EOF >> /etc/crictl.yaml
runtime-endpoint: unix:///var/run/containerd/containerd.sock
image-endpoint: unix:///var/run/containerd/containerd.sock
timeout: 10
debug: false
EOF
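A quick way to confirm that containerd, the crictl endpoint, and the mirror configuration all work together is to inspect the runtime and pull a small test image. A minimal sketch (the test image choice is arbitrary):

crictl info | grep sandboxImage     # should show registry.aliyuncs.com/google_containers/pause:3.9
crictl pull busybox:1.28.4
crictl images | grep busybox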

3. Install k8s (kubeadm-1.29.2, kubelet-1.29.2, kubectl-1.29.2) (official repo) (centos-7.9, rocky-9.3)

cat > /etc/yum.repos.d/kubernetes.repo << 'EOF'
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/
enabled=1
gpgcheck=0
EOF
yum makecache
yum -y install kubeadm-1.29.2 kubelet-1.29.2 kubectl-1.29.2
systemctl enable --now kubelet
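At this point kubelet is enabled but will keep restarting until kubeadm init (or join) generates its configuration; that is expected. A quick check that the intended versions were installed:

kubeadm version -o short     # expect v1.29.2
kubelet --version            # expect Kubernetes v1.29.2
kubectl version --client     # expect Client Version: v1.29.2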

4. Initialize the k8s-1.29.2 cluster

mkdir ~/kubeadm_init && cd ~/kubeadm_init
kubeadm config print init-defaults > kubeadm-init.yaml
cat > ~/kubeadm_init/kubeadm-init.yaml << EOF
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.1.201 # change to this node's own IP
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: k8s-master1
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/k8s-master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: 192.168.1.10:9443 # the HA VIP address (nginx listens on 9443)
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.29.2
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
EOF
# List the required images
kubeadm config images list --config kubeadm-init.yaml
kubeadm config images list --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.29.2
# Pre-pull the images
kubeadm config images pull --config kubeadm-init.yaml
kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.29.2
# Initialize (run a dry-run first, then the real init)
kubeadm init --config=kubeadm-init.yaml --upload-certs --dry-run
kubeadm init --config=kubeadm-init.yaml --upload-certs | tee kubeadm-init.log
# Configure kubectl
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
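If the init succeeds, the tail of kubeadm-init.log contains the join commands for additional control-plane nodes and for workers (see section 7). With kubectl configured, a quick sanity check (the node stays NotReady until calico is installed in the next section):

kubectl get nodes -o wide
kubectl get pods -n kube-system
kubectl cluster-info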

5. Install the cluster network (calico)

Check the calico/k8s version compatibility matrix:

https://docs.tigera.io/calico/3.27/getting-started/kubernetes/requirements

Since this cluster runs k8s-1.29.2, use calico v3.27.0 (matching the versions matters).

mkdir -p ~/calico-yml
cd ~/calico-yml && wget https://github.com/projectcalico/calico/raw/v3.27.0/manifests/calico.yaml

Two changes are needed in calico.yaml:

1. Modify the CIDR
            - name: CALICO_IPV4POOL_CIDR
              value: "10.244.0.0/16"

2. Specify the network interface
            # Cluster type to identify the deployment type
            - name: CLUSTER_TYPE
              value: "k8s,bgp"
            # Add the following below it
            - name: IP_AUTODETECTION_METHOD
              value: "interface=ens33,ens160"   # ens33 is the local NIC name (use whatever NIC your machine has)

The same two changes applied with sed:

# 1. Modify the CIDR
sed -i 's/192\.168/10\.244/g' calico.yaml
sed -i 's/# \(- name: CALICO_IPV4POOL_CIDR\)/\1/' calico.yaml
sed -i 's/# \(\s*value: "10.244.0.0\/16"\)/\1/' calico.yaml
# 2. Specify the NIC (ens33 is the local NIC name; adjust for your machine)
sed -i '/value: "k8s,bgp"/a \            - name: IP_AUTODETECTION_METHOD' calico.yaml
sed -i '/- name: IP_AUTODETECTION_METHOD/a \              value: "interface=ens33,ens160"' calico.yaml
kubectl apply -f ~/calico-yml/calico.yaml
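The calico pods can take a few minutes to pull and start; the nodes should switch to Ready once calico-node is Running on each of them. A rough check sketch:

kubectl get pods -n kube-system -l k8s-app=calico-node -o wide
kubectl get pods -n kube-system -l k8s-app=kube-dns
kubectl get nodes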

6. Test that coredns resolution works

[root@k8s-master ~]# kubectl run -it --rm dns-test --image=busybox:1.28.4 sh
If you don't see a command prompt, try pressing enter.
/ # nslookup kubernetes
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local   # seeing this means DNS resolution is working

Name:      kubernetes
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
/ #
kubectl run -it --rm dns-test --image=busybox:1.28.4 sh
kubectl run -it --rm dns-test --image=ccr.ccs.tencentyun.com/huanghuanhui/busybox:1.28.4 sh
nslookup kubernetes
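Beyond the in-cluster service name, it can also be worth confirming that coredns forwards external lookups. A small optional sketch run inside the same kind of test pod, assuming the nodes have outbound DNS access (the external domain is just an example):

kubectl run -it --rm dns-test --image=busybox:1.28.4 -- sh
# inside the pod:
nslookup kubernetes.default.svc.cluster.local
nslookup www.baidu.com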

7. Joining k8s nodes later (first install containerd, kubeadm, kubelet, and kubectl on the new node as described above)

kubeadm token list
kubeadm token create --print-join-command --dry-run
kubeadm token create --print-join-command
kubeadm token list
# For additional control-plane nodes, also upload the certificates and note the printed certificate key
kubeadm init phase upload-certs --upload-certs
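Putting those outputs together, the join commands look roughly like the sketch below; the token, discovery hash, and certificate key are placeholders, so substitute the values printed by the commands above (or by kubeadm-init.log):

# worker nodes (k8s-node1/2/3):
kubeadm join 192.168.1.10:9443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:<hash-from-print-join-command>

# additional control-plane nodes (k8s-master2/3):
kubeadm join 192.168.1.10:9443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:<hash-from-print-join-command> \
        --control-plane --certificate-key <key-from-upload-certs>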

That wraps up this article on the highly available k8s cluster (k8s-1.29.2); hopefully it is helpful.


