Installing Kubernetes 1.26 with kubeadm on CentOS 7.8

2023-10-25 10:15


Planning

Cluster

Hostname        IP address
k8s-master01    192.168.200.101
k8s-master02    192.168.200.102
k8s-master03    192.168.200.103
k8s-node01      192.168.200.201
k8s-node02      192.168.200.202

VIP (virtual IP)

192.168.200.80

Harbor

http://192.168.200.50

Prerequisites

1. Configure a static IP

2. Enable SSH remote login

Install the EPEL repository
yum install epel-release -y
yum makecache fast
Install some utilities
yum install -y yum-utils device-mapper-persistent-data lvm2

Basic settings

Firewall handling
systemctl stop firewalld && systemctl disable firewalld
Disable SELinux
sed -i 's/enforcing/disabled/' /etc/selinux/config
Disable all swap partitions in fstab
sed -ri 's/.*swap.*/#&/' /etc/fstab
Turn off swap for the current session
swapoff -a
Set the hostname on each node (adjust to the current host)
vi /etc/hostname
Add host address mappings to hosts
cat >> /etc/hosts <<EOF
192.168.200.101 k8s-master01
192.168.200.102 k8s-master02
192.168.200.103 k8s-master03
192.168.200.201 k8s-node01
192.168.200.202 k8s-node02
EOF
Time synchronization

Install ntpdate (provided by the ntp package)

yum -y install ntp 

Sync the time

ntpdate ntp1.aliyun.com

Set up automatic sync at boot and on a schedule
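The step above names the goal but gives no command; a minimal sketch using a cron drop-in file (the /usr/sbin/ntpdate path and the Aliyun server are assumptions carried over from the previous step — adjust to your environment):

```shell
# Write a cron drop-in: sync once at boot, then once every hour.
# /usr/sbin/ntpdate is the usual install path of the ntp package (assumption).
cat > /etc/cron.d/ntpdate-sync << 'EOF'
@reboot root /usr/sbin/ntpdate ntp1.aliyun.com
0 * * * * root /usr/sbin/ntpdate ntp1.aliyun.com
EOF
```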

Raise the open-file limit via profile

Append:

echo "ulimit -n 65535" >>/etc/profile

Reload the configuration

source /etc/profile
Set up passwordless login

Run on the master01 node only

Master01 logs in to the other nodes without a password (press Enter at every prompt, no input needed)

ssh-keygen -t rsa

Copy the key to the other hosts

ssh-copy-id -i .ssh/id_rsa.pub k8s-master01
ssh-copy-id -i .ssh/id_rsa.pub k8s-master02
ssh-copy-id -i .ssh/id_rsa.pub k8s-master03
ssh-copy-id -i .ssh/id_rsa.pub k8s-node01
ssh-copy-id -i .ssh/id_rsa.pub k8s-node02

Type yes and enter each host's password when prompted

Kernel upgrade

The stock 3.10.x kernel that ships with CentOS 7.x has bugs that make Docker and Kubernetes unstable, so upgrade to a 4.18+ kernel (4.19 is used here)

Upgrade system packages
yum install wget jq psmisc vim net-tools telnet yum-utils device-mapper-persistent-data lvm2 git -y
yum update -y --exclude=kernel* && reboot
Upgrade the kernel

Download the chosen kernel version

wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm
wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm

Install the kernel

yum localinstall -y kernel-ml* 

Set the default boot kernel

grub2-set-default 0 && grub2-mkconfig -o /etc/grub2.cfg 
grubby --args="user_namespace.enable=1" --update-kernel="$(grubby --default-kernel)"  

Check that the default boot kernel is the one we want

grubby --default-kernel  

Reboot to take effect, then confirm the running kernel version

reboot 
uname -a
Kernel parameters
cat > /etc/sysctl.d/k8s.conf <<EOF 
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
net.ipv4.conf.all.route_localnet = 1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.inotify.max_user_instances=8192
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl =15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF

Apply the settings

sysctl --system
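One caveat: the net.bridge.bridge-nf-call-* keys only exist once the br_netfilter module is loaded, so sysctl --system may warn about unknown keys on a fresh boot. A sketch that loads the needed modules now and on every boot (the k8s.conf file name is an arbitrary choice):

```shell
# Load the modules required by the bridge sysctls immediately...
modprobe overlay
modprobe br_netfilter

# ...and have systemd-modules-load reload them at boot.
cat > /etc/modules-load.d/k8s.conf << 'EOF'
overlay
br_netfilter
EOF

# Re-apply the sysctl settings now that the keys exist.
sysctl --system
```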

Install Docker

[Install Docker on all nodes]

Add the Docker yum repository

yum -y install yum-utils
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
sed -i 's/download.docker.com/mirrors.aliyun.com\/docker-ce/g' /etc/yum.repos.d/docker-ce.repo

Refresh the yum package index

yum makecache fast

List all available versions

yum list docker-ce.x86_64 --showduplicates | sort -r

Install a specific version

yum -y install docker-ce-20.10.10-3.el7

Start Docker and enable it at boot

systemctl restart docker && systemctl enable docker
Configure an insecure registry for Docker

Note: only needed if your private registry is served over plain HTTP; otherwise skip this step

Configuration

To push images over HTTP (no HTTPS), the registry must be declared as insecure

vim /etc/docker/daemon.json

192.168.200.50 is the self-hosted Harbor registry address

{
  "insecure-registries": ["192.168.200.50"]
}

Restart Docker

systemctl restart docker

Verify the settings

docker info 

Install cri-dockerd

[Install cri-dockerd on all nodes]

Upload or download the cri-dockerd package

wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.1/cri-dockerd-0.3.1-3.el7.x86_64.rpm

Install cri-dockerd

rpm -ivh cri-dockerd-0.3.1-3.el7.x86_64.rpm

Point the pause image at a domestic mirror, otherwise kubelet cannot pull it and fails to start

vi /usr/lib/systemd/system/cri-docker.service

Using the Aliyun mirror

ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd:// --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.7

Using a private registry

ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd:// --pod-infra-container-image=192.168.200.50/google_containers/pause:3.9

Start cri-dockerd

systemctl daemon-reload 
systemctl enable cri-docker && systemctl start cri-docker

Install kubeadm

[Install kubeadm on all nodes]

Configure the Aliyun Kubernetes yum repository

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Rebuild the yum cache

yum -y makecache fast

List all available versions

yum list kubeadm.x86_64 --showduplicates | sort -r

Install specific versions of kubeadm, kubectl and kubelet

yum -y install kubeadm-1.26.4-0.x86_64 kubectl-1.26.4-0.x86_64 kubelet-1.26.4-0.x86_64
systemctl restart kubelet && systemctl enable kubelet

(kubelet will keep restarting until the node is initialized or joined to a cluster; that is expected at this point)

Install the high-availability components

[Install keepalived and haproxy on all master nodes]

yum -y install keepalived haproxy
Configure HAProxy
The configuration is identical on all master nodes

Remember to adjust the IPs

cat > /etc/haproxy/haproxy.cfg << EOF
global
    maxconn 2000
    ulimit-n 16384
    log 127.0.0.1 local0 err
    stats timeout 30s

defaults
    log global
    mode http
    option httplog
    timeout connect 5000
    timeout client 50000
    timeout server 50000
    timeout http-request 15s
    timeout http-keep-alive 15s

frontend k8s-master
    bind 0.0.0.0:8443
    bind 127.0.0.1:8443
    mode tcp
    option tcplog
    tcp-request inspect-delay 5s
    default_backend k8s-master

backend k8s-master
    mode tcp
    option tcplog
    option tcp-check
    balance roundrobin
    default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
    server k8s-master01 192.168.200.101:6443 check
    server k8s-master02 192.168.200.102:6443 check
    server k8s-master03 192.168.200.103:6443 check
EOF
Configure keepalived

Note: the configuration differs on each master node

k8s-master01

Remember to adjust the IP and interface name!

cat > /etc/keepalived/keepalived.conf << EOF
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    mcast_src_ip 192.168.200.101
    virtual_router_id 51
    priority 101
    nopreempt
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.200.80
    }
    track_script {
        chk_apiserver
    }
}
EOF
k8s-master02

Remember to adjust the IP and interface name

cat > /etc/keepalived/keepalived.conf << EOF
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    mcast_src_ip 192.168.200.102
    virtual_router_id 51
    priority 90
    nopreempt
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.200.80
    }
    track_script {
        chk_apiserver
    }
}
EOF
k8s-master03

Remember to adjust the IP and interface name

cat > /etc/keepalived/keepalived.conf << EOF
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    mcast_src_ip 192.168.200.103
    virtual_router_id 51
    priority 80
    nopreempt
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.200.80
    }
    track_script {
        chk_apiserver
    }
}
EOF
Health-check script

Identical on all three master nodes

cat > /etc/keepalived/check_apiserver.sh << 'EOF'
#!/bin/bash

err=0
for k in $(seq 1 3)
do
    check_code=$(pgrep haproxy)
    if [[ $check_code == "" ]]; then
        err=$(expr $err + 1)
        sleep 1
        continue
    else
        err=0
        break
    fi
done

if [[ $err != "0" ]]; then
    echo "systemctl stop keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi
EOF

(The heredoc delimiter is quoted so that $(...) and $err are written to the script literally instead of being expanded by the outer shell.)

Make it executable

chmod +x /etc/keepalived/check_apiserver.sh
Start haproxy and keepalived

Start haproxy and keepalived on all master nodes

systemctl enable --now haproxy && systemctl enable --now keepalived
Verify the VIP
ping 192.168.200.80
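Beyond a ping, it is worth confirming that haproxy is actually listening and that the VIP landed on exactly one master; a quick check (ens33 is the interface name assumed in the keepalived configs above):

```shell
# haproxy should be listening on 8443 on every master
ss -lntp | grep 8443

# the VIP should appear on the interface of exactly one master
ip addr show ens33 | grep 192.168.200.80
```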

Initialize the k8s cluster

Run the initialization on master01 only

vim kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.200.101
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
  name: k8s-master01
  taints:
  - effect: PreferNoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  certSANs:
  - 192.168.200.80
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: 192.168.200.80:8443
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: 1.26.0
networking:
  dnsDomain: cluster.local
  podSubnet: 172.168.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
Convert the config version

Run on master01 only

If the init yaml uses an older API version, it can be converted to the new one with:

kubeadm config migrate --old-config kubeadm-config.yaml --new-config new.yaml
Pull the k8s component images

Pre-pulling the images is optional

kubeadm config images pull --config kubeadm-config.yaml
Initialize the cluster

Run on master01 only

kubeadm init --config kubeadm-config.yaml --upload-certs

Successful output:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 192.168.200.80:8443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:839dcb5784d74069bb4d5bc57912125894cbc79006c6cdbe8627d2115de4aa3f \
    --control-plane --certificate-key 1651b597b323b1888f0308f761e1bc0716d2cda08b0024070821ad2cb06314d4

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.200.80:8443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:839dcb5784d74069bb4d5bc57912125894cbc79006c6cdbe8627d2115de4aa3f 

[root@k8s-master01 ~]#

Checking token expiration

Run on master01 only

List the secrets

kubectl get secret -n kube-system

NAME TYPE DATA AGE
bootstrap-token-abcdef bootstrap.kubernetes.io/token 6 15h

Inspect the contents

kubectl get secret -n kube-system bootstrap-token-abcdef -oyaml

apiVersion: v1
data:
  auth-extra-groups: c3lzdGVtOmJvb3RzdHJhcHBlcnM6a3ViZWFkbTpkZWZhdWx0LW5vZGUtdG9rZW4=
  expiration: MjAyMy0wNS0wMVQxNTowMzoyMlo=
  token-id: YWJjZGVm
  token-secret: MDEyMzQ1Njc4OWFiY2RlZg==
  usage-bootstrap-authentication: dHJ1ZQ==
  usage-bootstrap-signing: dHJ1ZQ==
kind: Secret
metadata:
  creationTimestamp: "2023-04-30T15:03:22Z"
  name: bootstrap-token-abcdef
  namespace: kube-system
  resourceVersion: "210"
  uid: cd8dc8d5-2dfd-4d22-9c8b-320767e6c7ad
type: bootstrap.kubernetes.io/token

Decode the expiration time (it is base64-encoded)

echo "MjAyMy0wNS0wMVQxNTowMzoyMlo=" | base64 --decode
Regenerating the token

If the token has expired, a new one can be generated

For worker nodes:

kubeadm token create --print-join-command

kubeadm join 192.168.200.101:6443 --token fkx9wu.k20ta9krq2r5lc5y --discovery-token-ca-cert-hash sha256:a4f07db80f54e277c8514991c76b040913310dba73397e4447126dcd11b72073
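The sha256 value in the join command is simply a digest of the cluster CA's public key, so it can also be recomputed by hand with openssl. A sketch; the throwaway certificate below is a stand-in for the real /etc/kubernetes/pki/ca.crt on a master:

```shell
# Generate a throwaway CA certificate purely for demonstration
# (on a real master, point the second command at /etc/kubernetes/pki/ca.crt).
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-ca.key \
    -out /tmp/demo-ca.crt -days 1 -subj "/CN=demo" 2>/dev/null

# sha256 over the DER-encoded public key = the value after "sha256:"
openssl x509 -pubkey -in /tmp/demo-ca.crt \
    | openssl pkey -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | awk '{print $NF}'
```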

For master nodes:

Remember to append --config=<path to the yaml used when the cluster was initialized>

kubeadm init phase upload-certs --upload-certs --config=/root/new.yaml

[root@k8s-master01 ~]# kubeadm init phase upload-certs --upload-certs --config=/root/new.yaml
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
094c1265df2831b7370757f89078215a117604a462438b067bd6909235384f3c

Reinitializing the Kubernetes cluster

1. Use kubeadm reset to return the node to its initial state. This stops the kubelet service and removes all Kubernetes-related containers and Pods, configuration files and data. Back up anything important first, because this permanently deletes cluster data.

sudo kubeadm reset

2. Clean up the kubelet configuration and certificate directories:

sudo rm -rf /etc/kubernetes
sudo rm -rf /var/lib/kubelet
sudo rm -rf /var/lib/etcd

3. Clean up the CNI (Container Network Interface) configuration and data:

sudo rm -rf /etc/cni/net.d
sudo rm -rf /var/lib/cni

4. Delete the .kube directory, kubectl's configuration directory:

sudo rm -rf $HOME/.kube

Reinitialize:

sudo kubeadm init --config kubeadm-config.yaml

Install add-ons

Install Calico

[Run on master01 only]

Official guide

https://docs.tigera.io/calico/latest/getting-started/kubernetes/self-managed-onprem/onpremises

Download the yaml

curl https://raw.githubusercontent.com/projectcalico/calico/v3.25.1/manifests/calico.yaml -O

Open it for editing

vi calico.yaml

Search for CALICO_IPV4POOL_CIDR

Uncomment the entry and set the IP below it to the pod network CIDR, then save and exit

Change the IP here and keep the indentation aligned, otherwise the manifest will fail to apply

- name: CALICO_IPV4POOL_CIDR
  value: "172.168.0.0/16"

Deploy

kubectl apply -f calico.yaml

If the images cannot be pulled, edit calico.yaml and point it at a different registry

You can host the images in a private Docker Harbor registry and pull them locally from then on

For example:

docker push 192.168.200.50/calico/cni:v3.25.1
docker push 192.168.200.50/library/calico/cni:v3.25.1

Check the pod status

kubectl get pod -n kube-system 

NAME READY STATUS RESTARTS AGE
calico-kube-controllers-557ff7c5d4-9cz2k 1/1 Running 0 89s
calico-node-fdj5s 1/1 Running 0 89s
coredns-5bbd96d687-ml854 1/1 Running 0 15h
coredns-5bbd96d687-mw5j7 1/1 Running 0 15h
etcd-k8s-master01 1/1 Running 10 (160m ago) 15h
kube-apiserver-k8s-master01 1/1 Running 10 (160m ago) 15h
kube-controller-manager-k8s-master01 1/1 Running 11 (118s ago) 15h
kube-proxy-6hlvn 1/1 Running 10 (160m ago) 15h
kube-scheduler-k8s-master01 1/1 Running 11 (118s ago) 15h

Check the node status

kubectl get node

NAME STATUS ROLES AGE VERSION
k8s-master01 Ready control-plane 15h v1.26.4

Joining the remaining nodes to the cluster

Run on every node except master01 [master02, master03, …, and the workers]

Join the other master nodes

Remember to substitute your own token; regenerate it if it has expired

  kubeadm join 192.168.200.80:8443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:a17b9bd75fe97d3d96736ee7e99b58db7a5ba0ea9757dcc7d332f81ec7130697 \
    --control-plane --certificate-key 3f9e1a84015565ebeedf232990b0a3117ecf817a36fb71185ae562fdfe967c0d \
    --cri-socket=/var/run/cri-dockerd.sock

Join the worker nodes

Remember to substitute your own token; regenerate it if it has expired

kubeadm join 192.168.200.80:8443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:a17b9bd75fe97d3d96736ee7e99b58db7a5ba0ea9757dcc7d332f81ec7130697 \
    --cri-socket=/var/run/cri-dockerd.sock

Copy admin.conf from the init node master01 to the other nodes (so kubectl can be used there)

scp /etc/kubernetes/admin.conf root@192.168.200.102:/etc/kubernetes/

After copying, run the following on the other nodes so kubectl picks up admin.conf:

mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

All nodes joined successfully:

[root@k8s-master01 ~]#
[root@k8s-master01 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master01 Ready control-plane 22m v1.26.4
k8s-master02 Ready control-plane 15m v1.26.4
k8s-master03 Ready control-plane 15m v1.26.4
k8s-node01 Ready <none> 41s v1.26.4
k8s-node02 Ready <none> 48s v1.26.4
[root@k8s-master01 ~]#

Other commands

Removing a master node from the Kubernetes cluster

  1. First, on the master node to be removed, reset it with kubeadm:

    sudo kubeadm reset

    This deletes all Kubernetes-related configuration and components.

  2. Next, from another master node (e.g. master01), remove master02's etcd member. First list the etcd members:

    (if the member does not exist, skip step 3)

    kubectl exec -n kube-system -it etcd-k8s-master01 -- etcdctl member list --cacert /etc/kubernetes/pki/etcd/ca.crt --cert /etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key

    Find master02's etcd member ID (it looks like a long hexadecimal number).

  3. Delete master02's etcd member with:

    kubectl exec -n kube-system -it etcd-k8s-master01 -- etcdctl member remove <member_id> --cacert /etc/kubernetes/pki/etcd/ca.crt --cert /etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key

    Replace <member_id> with master02's etcd member ID found in step 2.

  4. Update your load balancer or VIP configuration so traffic is no longer forwarded to master02.

  5. As a final step, delete master02's node object from the cluster:

    kubectl delete node master02

master02 has now been removed from the Kubernetes cluster.

Pushing local images to a private registry

Re-tag the images

docker tag registry.aliyuncs.com/google_containers/kube-apiserver:v1.26.0 192.168.200.50/google_containers/kube-apiserver:v1.26.0
docker tag registry.aliyuncs.com/google_containers/kube-controller-manager:v1.26.0 192.168.200.50/google_containers/kube-controller-manager:v1.26.0
docker tag registry.aliyuncs.com/google_containers/kube-proxy:v1.26.0 192.168.200.50/google_containers/kube-proxy:v1.26.0
docker tag registry.aliyuncs.com/google_containers/kube-scheduler:v1.26.0 192.168.200.50/google_containers/kube-scheduler:v1.26.0
docker tag registry.aliyuncs.com/google_containers/etcd:3.5.6-0 192.168.200.50/google_containers/etcd:3.5.6-0
docker tag registry.aliyuncs.com/google_containers/pause:3.9 192.168.200.50/google_containers/pause:3.9
docker tag registry.aliyuncs.com/google_containers/coredns:v1.9.3 192.168.200.50/google_containers/coredns:v1.9.3

Push

docker push 192.168.200.50/google_containers/kube-apiserver:v1.26.0
docker push 192.168.200.50/google_containers/kube-controller-manager:v1.26.0
docker push 192.168.200.50/google_containers/kube-proxy:v1.26.0
docker push 192.168.200.50/google_containers/kube-scheduler:v1.26.0
docker push 192.168.200.50/google_containers/etcd:3.5.6-0
docker push 192.168.200.50/google_containers/pause:3.9
docker push 192.168.200.50/google_containers/coredns:v1.9.3
Show pods across all namespaces with node placement
kubectl get pods --all-namespaces -o wide
Check events in the Kubernetes cluster
kubectl get events --namespace=kube-system
View the logs of a container in a pod
kubectl logs calico-node-rlvwm -n kube-system

View the yaml configuration of the calico pods

kubectl get pods --all-namespaces -l k8s-app=calico-node -o yaml
