Cloud-Native Edge Computing: KubeEdge Deployment (recommended for Kubernetes versions below 1.24)

This article walks through deploying KubeEdge for cloud-native edge computing on a Kubernetes cluster below version 1.24, and is intended as a practical reference for developers setting this up.

Cloud-Native Edge Computing: KubeEdge Deployment

1. Kubernetes cluster deployment and base services

1.1 Kubernetes cluster deployment

KubeEdge only supports relatively old Kubernetes versions, so Kubernetes 1.24, 1.25, and 1.26 are not recommended for this cluster. Refer to the version-compatibility matrix published in the KubeEdge GitHub repository.

(Figure: KubeEdge/Kubernetes version compatibility matrix from the KubeEdge GitHub repository)

1.2 Base service: MetalLB load balancer

Because cloudcore must expose an address that edgecore instances can reach, it is recommended to use a load balancer to give cloudcore either a public IP or an IP on the same subnet as the Kubernetes cluster nodes; in production a public IP is typically used. Since kube-proxy runs in IPVS mode here, MetalLB requires strictARP to be enabled:

kubectl edit configmap -n kube-system kube-proxy
# make sure the following fields are set in the editor:
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  strictARP: true
kubectl rollout restart daemonset kube-proxy -n kube-system
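If you prefer a non-interactive change, the one-liner commonly used in MetalLB setups can be run instead (a sketch; it assumes strictARP is currently set to false in the ConfigMap):

kubectl get configmap kube-proxy -n kube-system -o yaml | \
  sed -e "s/strictARP: false/strictARP: true/" | \
  kubectl apply -f - -n kube-system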

# kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.5/config/manifests/metallb-native.yaml
Create a global IP address pool:
[root@k8s-master01 ~]# vim first-ippool.yaml
[root@k8s-master01 ~]# cat first-ippool.yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.10.200-192.168.10.210
Verify that the pool was created:
[root@k8s-master01 ~]# kubectl get ipaddresspool -n metallb-system
NAME         AGE
first-pool   23s

Enable Layer 2 advertisement so the LoadBalancer addresses are reachable from outside the Kubernetes cluster nodes:
[root@k8s-master01 ~]# vim l2forward.yaml
[root@k8s-master01 ~]# cat l2forward.yaml
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: example
  namespace: metallb-system
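The manifest still needs to be applied with kubectl apply -f l2forward.yaml. Note that an L2Advertisement without a spec advertises every address pool in the namespace; to limit it to the pool created above, a sketch using the metallb.io/v1beta1 ipAddressPools field:

apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: example
  namespace: metallb-system
spec:
  ipAddressPools:
  - first-pool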

2. KubeEdge architecture

(Figure: KubeEdge architecture diagram)

3. KubeEdge cloudcore deployment

3.1 Obtain the keadm tool

[root@k8s-master01 ~]# wget https://github.com/kubeedge/kubeedge/releases/download/v1.12.1/keadm-v1.12.1-linux-amd64.tar.gz
[root@k8s-master01 ~]# ls
keadm-v1.12.1-linux-amd64.tar.gz
[root@k8s-master01 ~]# tar xf keadm-v1.12.1-linux-amd64.tar.gz
[root@k8s-master01 ~]# ls
keadm-v1.12.1-linux-amd64.tar.gz  keadm-v1.12.1-linux-amd64
[root@k8s-master01 ~]# ls keadm-v1.12.1-linux-amd64
keadm  version
[root@k8s-master01 ~]# ls keadm-v1.12.1-linux-amd64/keadm/
keadm
[root@k8s-master01 ~]# mv keadm-v1.12.1-linux-amd64/keadm/keadm /usr/local/bin/
[root@k8s-master01 ~]# keadm
+----------------------------------------------------------+
| KEADM                                                    |
| Easily bootstrap a KubeEdge cluster                      |
|                                                          |
| Please give us feedback at:                              |
| https://github.com/kubeedge/kubeedge/issues              |
+----------------------------------------------------------+
Create a cluster with cloud node (which controls the edge node cluster), and edge nodes (where native containerized application, in the form of pods and deployments run), connects to devices.

Usage:
  keadm [command]

Examples:
+----------------------------------------------------------+
| On the cloud machine:                                    |
+----------------------------------------------------------+
| master node (on the cloud)# sudo keadm init              |
+----------------------------------------------------------+

+----------------------------------------------------------+
| On the edge machine:                                     |
+----------------------------------------------------------+
| worker node (at the edge)# sudo keadm join <flags>       |
+----------------------------------------------------------+

You can then repeat the second step on, as many other machines as you like.

Available Commands:
  completion  generate the autocompletion script for the specified shell
  config      Use this command to configure keadm
  debug       debug function to help diagnose the cluster
  deprecated  keadm deprecated command
  gettoken    To get the token for edge nodes to join the cluster
  help        Help about any command
  init        Bootstraps cloud component. Checks and install (if required) the pre-requisites.
  join        Bootstraps edge component. Checks and install (if required) the pre-requisites. Execute it on any edge node machine you wish to join
  manifest    Render the manifests by using a list of set flags like helm.
  reset       Teardowns KubeEdge (cloud(helm installed) & edge) component
  upgrade     Upgrade edge component. Upgrade the edge node to the desired version.
  version     Print the version of keadm

Flags:
  -h, --help   help for keadm

Additional help topics:
  keadm beta       keadm beta command

Use "keadm [command] --help" for more information about a command.
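As a quick sanity check that the binary on the PATH is the expected release (it should report v1.12.1), keadm can print its own version:

[root@k8s-master01 ~]# keadm version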

3.2 Deploy cloudcore

[root@k8s-master01 ~]# keadm init --advertise-address=192.168.10.200 --set iptablesManager.mode="external" --profile version=v1.12.1
Kubernetes version verification passed, KubeEdge installation will start...
CLOUDCORE started
=========CHART DETAILS=======
NAME: cloudcore
LAST DEPLOYED: Sun Jan  8 21:52:24 2023
NAMESPACE: kubeedge
STATUS: deployed
REVISION: 1
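keadm init deploys cloudcore as a Helm chart (as the CHART DETAILS above show); if the helm CLI happens to be installed on the master, the release should also be visible to it (a sketch, not required for the rest of this walkthrough):

[root@k8s-master01 ~]# helm list -n kubeedge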
[root@k8s-master01 ~]# kubectl get ns
NAME               STATUS   AGE
calico-apiserver   Active   13m
calico-system      Active   15m
default            Active   18m
kube-node-lease    Active   18m
kube-public        Active   18m
kube-system        Active   18m
kubeedge           Active   30s
tigera-operator    Active   16m
[root@k8s-master01 ~]# kubectl get pods -n kubeedge
NAME                           READY   STATUS    RESTARTS   AGE
cloud-iptables-manager-5hdtp   1/1     Running   0          58s
cloud-iptables-manager-lsmmd   1/1     Running   0          58s
cloudcore-5876c76687-8rtj4     1/1     Running   0          57s
[root@k8s-master01 ~]# kubectl get svc -n kubeedge
NAME        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                                             AGE
cloudcore   ClusterIP   10.110.216.131   <none>        10000/TCP,10001/TCP,10002/TCP,10003/TCP,10004/TCP   57s
[root@k8s-master01 ~]# kubectl edit svc cloudcore -n kubeedge
service/cloudcore edited
Fields to modify:
  selector:
    k8s-app: kubeedge
    kubeedge: cloudcore
  sessionAffinity: None
  type: LoadBalancer    # change the type here from ClusterIP to LoadBalancer
[root@k8s-master01 ~]# kubectl get svc -n kubeedge
NAME        TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)                                                                           AGE
cloudcore   LoadBalancer   10.98.223.240   192.168.10.200   10000:32400/TCP,10001:32049/TCP,10002:31062/TCP,10003:32041/TCP,10004:32174/TCP   75s
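Instead of kubectl edit, the type switch could also be done non-interactively with a patch (a sketch):

kubectl patch svc cloudcore -n kubeedge -p '{"spec":{"type":"LoadBalancer"}}'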
Keep cloud-side daemonsets (kube-proxy, the MetalLB speaker, Calico) off the edge nodes by adding a nodeAffinity rule that excludes nodes carrying the node-role.kubernetes.io/edge label:
[root@k8s-master01 ~]# kubectl get daemonset -n kube-system | grep -v NAME | awk '{print $1}' | xargs -n 1 kubectl patch daemonset -n kube-system --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/affinity", "value":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchExpressions":[{"key":"node-role.kubernetes.io/edge","operator":"DoesNotExist"}]}]}}}}]'
daemonset.apps/kube-proxy patched
[root@k8s-master01 ~]# kubectl get daemonset -n metallb-system | grep -v NAME | awk '{print $1}' | xargs -n 1 kubectl patch daemonset -n metallb-system --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/affinity", "value":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchExpressions":[{"key":"node-role.kubernetes.io/edge","operator":"DoesNotExist"}]}]}}}}]'
daemonset.apps/speaker patched
[root@k8s-master01 ~]# kubectl get daemonset -n calico-system | grep -v NAME | awk '{print $1}' | xargs -n 1 kubectl patch daemonset -n calico-system --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/affinity", "value":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchExpressions":[{"key":"node-role.kubernetes.io/edge","operator":"DoesNotExist"}]}]}}}}]'
daemonset.apps/calico-node patched
daemonset.apps/csi-node-driver patched
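To spot-check that a patch landed, the affinity of one of the daemonsets can be dumped (a quick verification sketch):

kubectl get daemonset kube-proxy -n kube-system -o jsonpath='{.spec.template.spec.affinity.nodeAffinity}'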
Deploy metrics-server so that node and pod metrics are available via kubectl top:
[root@k8s-master01 ~]# kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
[root@k8s-master01 ~]# kubectl get pods -n kube-system
NAME                                   READY   STATUS    RESTARTS      AGE
coredns-7bdbbf6bf5-2ftxd               1/1     Running   1 (13h ago)   13h
coredns-7bdbbf6bf5-l292b               1/1     Running   1 (13h ago)   13h
etcd-k8s-master01                      1/1     Running   1 (13h ago)   13h
kube-apiserver-k8s-master01            1/1     Running   1 (13h ago)   13h
kube-controller-manager-k8s-master01   1/1     Running   1 (13h ago)   13h
kube-proxy-j7674                       1/1     Running   0             8m42s
kube-proxy-t8nrc                       1/1     Running   0             8m43s
kube-proxy-zh5qw                       1/1     Running   0             8m40s
kube-scheduler-k8s-master01            1/1     Running   1 (13h ago)   13h
metrics-server-684454657f-rgwhp        0/1     Running   0             10s
The metrics-server pod stays unready (0/1) because it cannot verify the kubelets' self-signed serving certificates; add --kubelet-insecure-tls to skip that verification:
[root@k8s-master01 ~]# kubectl patch deploy metrics-server -n kube-system --type='json' -p='[{"op":"add","path": "/spec/template/spec/containers/0/args/-","value":"--kubelet-insecure-tls"}]'
deployment.apps/metrics-server patched
[root@k8s-master01 ~]# kubectl get pods -n kube-system
NAME                                   READY   STATUS    RESTARTS      AGE
coredns-7bdbbf6bf5-2ftxd               1/1     Running   1 (13h ago)   13h
coredns-7bdbbf6bf5-l292b               1/1     Running   1 (13h ago)   13h
etcd-k8s-master01                      1/1     Running   1 (13h ago)   13h
kube-apiserver-k8s-master01            1/1     Running   1 (13h ago)   13h
kube-controller-manager-k8s-master01   1/1     Running   1 (13h ago)   13h
kube-proxy-j7674                       1/1     Running   0             15m
kube-proxy-t8nrc                       1/1     Running   0             15m
kube-proxy-zh5qw                       1/1     Running   0             15m
kube-scheduler-k8s-master01            1/1     Running   1 (13h ago)   13h
metrics-server-85bc67fbcd-4lgvn        1/1     Running   0             2m7s
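Once metrics-server is ready, node metrics for the cloud-side nodes should be available (metrics for edge nodes additionally rely on the cloud-edge stream tunnel configured below):

kubectl top nodes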
Finally, add a DNAT rule on the master so that metrics/logs/exec requests destined for the edge stream port are redirected to cloudcore's cloudStream tunnel port (10003):
[root@k8s-master01 ~]# iptables -t nat -A OUTPUT -p tcp --dport 10351 -j DNAT --to 192.168.10.200:10003

4. KubeEdge edgecore deployment

[root@localhost ~]# hostnamectl set-hostname edgenode-1

[root@edgenode-1 ~]# wget https://github.com/kubeedge/kubeedge/releases/download/v1.12.1/keadm-v1.12.1-linux-amd64.tar.gz
[root@edgenode-1 ~]# tar xf keadm-v1.12.1-linux-amd64.tar.gz
[root@edgenode-1 ~]# mv keadm-v1.12.1-linux-amd64/keadm/keadm /usr/local/bin/
[root@edgenode-1 ~]# keadm
(prints the same keadm usage banner as shown on the master node above)
[root@edgenode-1 ~]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
[root@edgenode-1 ~]# yum -y install docker-ce
[root@edgenode-1 ~]# systemctl enable --now docker
[root@k8s-master01 ~]# keadm gettoken
bec7e346f2b62b87bb01cb8082111b05759aa16ba298d7b97442581c3bccee52.eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2NzMzMjI1Mjd9.BzQiyUFBp1dax9NC7BOssRe4PgjwOE24w2jE7S8Hp-0
[root@edgenode-1 ~]# TOKEN=bec7e346f2b62b87bb01cb8082111b05759aa16ba298d7b97442581c3bccee52.eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2NzMzMjI1Mjd9.BzQiyUFBp1dax9NC7BOssRe4PgjwOE24w2jE7S8Hp-0
[root@edgenode-1 ~]# SERVER=192.168.10.200:10000
[root@edgenode-1 ~]# keadm join --token=$TOKEN --cloudcore-ipport=$SERVER --kubeedge-version=1.12.1
Output:
I0109 11:53:10.734725   11811 command.go:845] 1. Check KubeEdge edgecore process status
I0109 11:53:10.743044   11811 command.go:845] 2. Check if the management directory is clean
I0109 11:53:10.743129   11811 join.go:100] 3. Create the necessary directories
I0109 11:53:10.744103   11811 join.go:176] 4. Pull Images
Pulling kubeedge/installation-package:v1.12.1 ...
Pulling eclipse-mosquitto:1.6.15 ...
Pulling kubeedge/pause:3.1 ...
I0109 11:53:10.749692   11811 join.go:176] 5. Copy resources from the image to the management directory
I0109 11:53:11.324868   11811 join.go:176] 6. Start the default mqtt service
I0109 11:53:11.630975   11811 join.go:100] 7. Generate systemd service file
I0109 11:53:11.631178   11811 join.go:100] 8. Generate EdgeCore default configuration
I0109 11:53:11.631211   11811 join.go:230] The configuration does not exist or the parsing fails, and the default configuration is generated
W0109 11:53:11.744671   11811 validation.go:71] NodeIP is empty , use default ip which can connect to cloud.
I0109 11:53:11.746621   11811 join.go:100] 9. Run EdgeCore daemon
I0109 11:53:11.923574   11811 join.go:317]
I0109 11:53:11.923594   11811 join.go:318] KubeEdge edgecore is running, For logs visit: journalctl -u edgecore.service -xe
[root@k8s-master01 ~]# kubectl get nodes
NAME           STATUS   ROLES                  AGE   VERSION
edgenode-1     Ready    agent,edge             88m   v1.22.6-kubeedge-v1.12.1
k8s-master01   Ready    control-plane,master   15h   v1.22.17
k8s-worker01   Ready    <none>                 15h   v1.22.17
k8s-worker02   Ready    <none>                 15h   v1.22.17
[root@edgenode-1 ~]# systemctl status edgecore
● edgecore.service
   Loaded: loaded (/etc/systemd/system/edgecore.service; enabled; vendor preset: disabled)
   Active: active (running) since 一 2023-01-09 11:53:11 CST; 13s ago
 Main PID: 12058 (edgecore)
    Tasks: 14
   Memory: 44.6M
   CGroup: /system.slice/edgecore.service
           └─12058 /usr/local/bin/edgecore
1月 09 11:53:12 edgenode-1 edgecore[12058]: E0109 11:53:12.508148   12058 kubelet.go:1831] "Skipping pod synchronization" err="container runti...ted yet"
1月 09 11:53:12 edgenode-1 edgecore[12058]: E0109 11:53:12.909472   12058 kubelet.go:1831] "Skipping pod synchronization" err="container runti...ted yet"
1月 09 11:53:13 edgenode-1 edgecore[12058]: E0109 11:53:13.711153   12058 kubelet.go:1831] "Skipping pod synchronization" err="container runti...ted yet"
1月 09 11:53:15 edgenode-1 edgecore[12058]: E0109 11:53:15.312124   12058 kubelet.go:1831] "Skipping pod synchronization" err="container runti...ted yet"
1月 09 11:53:18 edgenode-1 edgecore[12058]: E0109 11:53:18.512561   12058 kubelet.go:1831] "Skipping pod synchronization" err="container runti...ted yet"
1月 09 11:53:22 edgenode-1 edgecore[12058]: I0109 11:53:22.170516   12058 reconciler.go:157] "Reconciler: start to sync state"
1月 09 11:53:22 edgenode-1 edgecore[12058]: I0109 11:53:22.329807   12058 kuberuntime_manager.go:1078] "Updating runtime config through cri wi....3.0/24"
1月 09 11:53:22 edgenode-1 edgecore[12058]: I0109 11:53:22.330123   12058 docker_service.go:363] "Docker cri received runtime config" runtimeC.../24,},}"
1月 09 11:53:22 edgenode-1 edgecore[12058]: I0109 11:53:22.330276   12058 kubelet_network.go:62] "Updating Pod CIDR" originalPodCIDR="" newPod....3.0/24"
1月 09 11:53:23 edgenode-1 edgecore[12058]: E0109 11:53:23.514067   12058 kubelet.go:1831] "Skipping pod synchronization" err="container runti...ted yet"
Hint: Some lines were ellipsized, use -l to show in full.
The edgecore configuration file lives at /etc/kubeedge/config/edgecore.yaml:
[root@edgenode-1 ~]# cat /etc/kubeedge/config/edgecore.yaml
[root@edgenode-1 ~]# journalctl -u edgecore.service -f
[root@edgenode-1 ~]# docker ps
CONTAINER ID   IMAGE                      COMMAND                  CREATED          STATUS          PORTS                                       NAMES
c658285c03d9   eclipse-mosquitto:1.6.15   "/docker-entrypoint.…"   22 seconds ago   Up 22 seconds   0.0.0.0:1883->1883/tcp, :::1883->1883/tcp   mqtt

5. Deploy an nginx application to the edge through KubeEdge and access it

[root@k8s-master01 ~]# vim nginx.yaml
[root@k8s-master01 ~]# cat nginx.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      nodeName: edgenode-1
      containers:
      - name: nginx
        image: nginx:latest
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: nginx
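The Deployment above pins the pod to a specific edge node with nodeName. An alternative sketch that schedules onto any edge node by label (KubeEdge labels edge nodes with node-role.kubernetes.io/edge, the same key the earlier daemonset patches exclude on):

    spec:
      nodeSelector:
        node-role.kubernetes.io/edge: ""
      containers:
      - name: nginx
        image: nginx:latest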
[root@k8s-master01 ~]# kubectl apply -f nginx.yaml
deployment.apps/nginx created
service/nginx-svc created
[root@k8s-master01 ~]# kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP           NODE         NOMINATED NODE   READINESS GATES
nginx-7c994ccd94-cnstq   1/1     Running   0          26s   172.17.0.6   edgenode-1   <none>           <none>
[root@k8s-master01 ~]# kubectl get svc
NAME         TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)        AGE
nginx-svc    LoadBalancer   10.107.115.15   192.168.10.201   80:31886/TCP   47s
[root@k8s-master01 ~]# kubectl exec -it  nginx-7c994ccd94-g9d5p -- /bin/bash
root@nginx-7c994ccd94-g9d5p:/#
root@nginx-7c994ccd94-g9d5p:/# pwd
/
root@nginx-7c994ccd94-g9d5p:/# ls
bin  boot  dev  docker-entrypoint.d  docker-entrypoint.sh  etc  home  lib  lib64  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var
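Since MetalLB assigned 192.168.10.201 to nginx-svc, the application can be checked from a host on the same network segment (a sketch; reaching a pod that runs on an edge node through a cluster LoadBalancer assumes the edge pod network is routable from the cluster nodes, otherwise a solution such as EdgeMesh is needed):

[root@k8s-master01 ~]# curl -I http://192.168.10.201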

6. View logs of applications deployed at the edge

[root@edgenode-1 ~]# ls /etc/kubeedge/
ca  certs  config  dmi.sock
[root@edgenode-1 ~]# ls /etc/kubeedge/config/
edgecore.yaml
[root@edgenode-1 ~]# vim /etc/kubeedge/config/edgecore.yaml
edgeStream:
  enable: true          # change this from false to true
  handshakeTimeout: 30
  readDeadline: 15
  server: 192.168.10.200:10004
[root@edgenode-1 ~]# systemctl restart edgecore
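To confirm that edgecore came back up and the cloud-edge stream tunnel is established, the service log can be checked (a sketch; the exact log wording varies between versions):

journalctl -u edgecore.service --since "2 min ago" | grep -iE "stream|tunnel"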
[root@edgenode-1 ~]# docker ps
CONTAINER ID   IMAGE                          COMMAND                  CREATED          STATUS          PORTS                                       NAMES
35ea17f6790f   nginx                          "/docker-entrypoint.…"   3 minutes ago    Up 3 minutes                                                k8s_nginx_nginx-7c994ccd94-g9d5p_default_762eb218-58e8-441a-8951-152438bc17b7_0
d9c778946274   kubeedge/pause:3.1             "/pause"                 3 minutes ago    Up 3 minutes                                                k8s_POD_nginx-7c994ccd94-g9d5p_default_762eb218-58e8-441a-8951-152438bc17b7_0
[root@edgenode-1 ~]# docker logs 35ea17f6790f
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2023/01/09 13:28:26 [notice] 1#1: using the "epoll" event method
2023/01/09 13:28:26 [notice] 1#1: nginx/1.23.3
2023/01/09 13:28:26 [notice] 1#1: built by gcc 10.2.1 20210110 (Debian 10.2.1-6)
2023/01/09 13:28:26 [notice] 1#1: OS: Linux 5.4.213-1.el7.elrepo.x86_64
2023/01/09 13:28:26 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2023/01/09 13:28:26 [notice] 1#1: start worker processes
2023/01/09 13:28:26 [notice] 1#1: start worker process 29
2023/01/09 13:28:26 [notice] 1#1: start worker process 30
2023/01/09 13:28:26 [notice] 1#1: start worker process 31
2023/01/09 13:28:26 [notice] 1#1: start worker process 32
[root@k8s-master01 ~]# kubectl logs nginx-7c994ccd94-g9d5p
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2023/01/09 13:28:26 [notice] 1#1: using the "epoll" event method
2023/01/09 13:28:26 [notice] 1#1: nginx/1.23.3
2023/01/09 13:28:26 [notice] 1#1: built by gcc 10.2.1 20210110 (Debian 10.2.1-6)
2023/01/09 13:28:26 [notice] 1#1: OS: Linux 5.4.213-1.el7.elrepo.x86_64
2023/01/09 13:28:26 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2023/01/09 13:28:26 [notice] 1#1: start worker processes
2023/01/09 13:28:26 [notice] 1#1: start worker process 29
2023/01/09 13:28:26 [notice] 1#1: start worker process 30
2023/01/09 13:28:26 [notice] 1#1: start worker process 31
2023/01/09 13:28:26 [notice] 1#1: start worker process 32

That wraps up this walkthrough of deploying KubeEdge for cloud-native edge computing on Kubernetes versions below 1.24; hopefully it proves useful.


