How to fix ImagePullBackOff / ErrImagePull errors when installing OpenEBS

When installing OpenEBS by following the official KubeSphere documentation, the OpenEBS images kept failing with ImagePullBackOff or ErrImagePull. This article walks through the workarounds that finally got the images onto the cluster.

There are plenty of articles about switching the Helm chart repository, and several of them recommend the Alibaba Cloud mirror, but that mirror simply does not carry an up-to-date OpenEBS chart. After some searching I found one repo that actually works:

Working repo:
stable: http://mirror.azure.cn/kubernetes/charts/

Repos that did not work:
http://mirror.azure.cn/kubernetes/charts-incubator/
https://kubernetes-charts.storage.googleapis.com/ (access denied)
https://registry.cn-hangzhou.aliyuncs.com/ (unreachable)
https://quay.io/ (blocked in mainland China)

Running helm repo add against one of the unusable repos fails with an error like this:

[root@k8snode1 etcd]# helm repo add incubator http://mirror.azure.cn/kubernetes/charts-incubator/
WARNING: Deprecated index file format. Try 'helm repo update'
Error: Looks like "http://mirror.azure.cn/kubernetes/charts-incubator/" is not a valid chart repository or cannot be reached: no API version specified

So replace the Helm stable repo with http://mirror.azure.cn/kubernetes/charts/:

# Remove the old stable repo first (skip this if you never added one)
helm repo remove stable
# Add the new repo
helm repo add stable http://mirror.azure.cn/kubernetes/charts/
# Refresh the repo index
helm repo update
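
To confirm the switch worked, list the configured repos and search the refreshed index. The commands below use Helm 2 syntax, which is what this article uses elsewhere; the Helm 3 equivalent is noted in the comment.

# Check that the stable repo now points at the Azure mirror
helm repo list
# Search the index (Helm 2 syntax; on Helm 3 use: helm search repo openebs)
helm search openebs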

Now install OpenEBS:

helm install --namespace openebs --name openebs stable/openebs --version 1.5.0
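
Note that --name is Helm 2 syntax. If you are on Helm 3, the equivalent would look roughly like this (a sketch; --create-namespace is only available on Helm 3.2 and later):

kubectl create namespace openebs   # or pass --create-namespace on Helm 3.2+
helm install openebs stable/openebs --namespace openebs --version 1.5.0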

Running kubectl get pods --all-namespaces afterwards shows the OpenEBS pods stuck in ImagePullBackOff or ErrImagePull.

kubectl get pods --all-namespaces
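
To see exactly why a pod is failing, describe it; the Events section at the bottom shows the image pull error. The pod name below is a placeholder for whatever kubectl reports.

kubectl get pods -n openebs
kubectl describe pod <failing-pod-name> -n openebs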

Running helm search also shows that the repo does not actually provide an openebs chart at version 1.5.0.

helm search

So the new repo does not help either; we need another way to get the images onto the cluster.

The official OpenEBS docs give two ways to install it: via the Helm chart, or by applying the openebs-operator manifest with kubectl.
We start by downloading openebs-operator-1.5.0.yaml from https://openebs.github.io/charts/openebs-operator-1.5.0.yaml.

Note: that link has since gone dead and can no longer be opened.

Contents of openebs-operator-1.5.0.yaml:

# This manifest deploys the OpenEBS control plane components, with associated CRs & RBAC rules
# NOTE: On GKE, deploy the openebs-operator.yaml in admin context
# Create the OpenEBS namespace
apiVersion: v1
kind: Namespace
metadata:
  name: openebs
---
# Create Maya Service Account
apiVersion: v1
kind: ServiceAccount
metadata:
  name: openebs-maya-operator
  namespace: openebs
---
# Define Role that allows operations on K8s pods/deployments
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: openebs-maya-operator
rules:
- apiGroups: ["*"]
  resources: ["nodes", "nodes/proxy"]
  verbs: ["*"]
- apiGroups: ["*"]
  resources: ["namespaces", "services", "pods", "pods/exec", "deployments", "deployments/finalizers", "replicationcontrollers", "replicasets", "events", "endpoints", "configmaps", "secrets", "jobs", "cronjobs"]
  verbs: ["*"]
- apiGroups: ["*"]
  resources: ["statefulsets", "daemonsets"]
  verbs: ["*"]
- apiGroups: ["*"]
  resources: ["resourcequotas", "limitranges"]
  verbs: ["list", "watch"]
- apiGroups: ["*"]
  resources: ["ingresses", "horizontalpodautoscalers", "verticalpodautoscalers", "poddisruptionbudgets", "certificatesigningrequests"]
  verbs: ["list", "watch"]
- apiGroups: ["*"]
  resources: ["storageclasses", "persistentvolumeclaims", "persistentvolumes"]
  verbs: ["*"]
- apiGroups: ["volumesnapshot.external-storage.k8s.io"]
  resources: ["volumesnapshots", "volumesnapshotdatas"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: ["apiextensions.k8s.io"]
  resources: ["customresourcedefinitions"]
  verbs: ["get", "list", "create", "update", "delete", "patch"]
- apiGroups: ["*"]
  resources: ["disks", "blockdevices", "blockdeviceclaims"]
  verbs: ["*"]
- apiGroups: ["*"]
  resources: ["cstorpoolclusters", "storagepoolclaims", "storagepoolclaims/finalizers", "cstorpoolclusters/finalizers", "storagepools"]
  verbs: ["*"]
- apiGroups: ["*"]
  resources: ["castemplates", "runtasks"]
  verbs: ["*"]
- apiGroups: ["*"]
  resources: ["cstorpools", "cstorpools/finalizers", "cstorvolumereplicas", "cstorvolumes", "cstorvolumeclaims"]
  verbs: ["*"]
- apiGroups: ["*"]
  resources: ["cstorpoolinstances", "cstorpoolinstances/finalizers"]
  verbs: ["*"]
- apiGroups: ["*"]
  resources: ["cstorbackups", "cstorrestores", "cstorcompletedbackups"]
  verbs: ["*"]
- apiGroups: ["coordination.k8s.io"]
  resources: ["leases"]
  verbs: ["get", "watch", "list", "delete", "update", "create"]
- apiGroups: ["admissionregistration.k8s.io"]
  resources: ["validatingwebhookconfigurations", "mutatingwebhookconfigurations"]
  verbs: ["get", "create", "list", "delete", "update", "patch"]
- nonResourceURLs: ["/metrics"]
  verbs: ["get"]
- apiGroups: ["*"]
  resources: ["upgradetasks"]
  verbs: ["*"]
---
# Bind the Service Account with the Role Privileges.
# TODO: Check if default account also needs to be there
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: openebs-maya-operator
subjects:
- kind: ServiceAccount
  name: openebs-maya-operator
  namespace: openebs
roleRef:
  kind: ClusterRole
  name: openebs-maya-operator
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1
kind: Deployment
metadata:name: maya-apiservernamespace: openebslabels:name: maya-apiserveropenebs.io/component-name: maya-apiserveropenebs.io/version: 1.5.0
spec:selector:matchLabels:name: maya-apiserveropenebs.io/component-name: maya-apiserverreplicas: 1strategy:type: RecreaterollingUpdate: nulltemplate:metadata:labels:name: maya-apiserveropenebs.io/component-name: maya-apiserveropenebs.io/version: 1.5.0spec:serviceAccountName: openebs-maya-operatorcontainers:- name: maya-apiserverimagePullPolicy: IfNotPresentimage: quay.io/openebs/m-apiserver:1.5.0ports:- containerPort: 5656env:# OPENEBS_IO_KUBE_CONFIG enables maya api service to connect to K8s# based on this config. This is ignored if empty.# This is supported for maya api server version 0.5.2 onwards#- name: OPENEBS_IO_KUBE_CONFIG#  value: "/home/ubuntu/.kube/config"# OPENEBS_IO_K8S_MASTER enables maya api service to connect to K8s# based on this address. This is ignored if empty.# This is supported for maya api server version 0.5.2 onwards#- name: OPENEBS_IO_K8S_MASTER#  value: "http://172.28.128.3:8080"# OPENEBS_NAMESPACE provides the namespace of this deployment as an# environment variable- name: OPENEBS_NAMESPACEvalueFrom:fieldRef:fieldPath: metadata.namespace# OPENEBS_SERVICE_ACCOUNT provides the service account of this pod as# environment variable- name: OPENEBS_SERVICE_ACCOUNTvalueFrom:fieldRef:fieldPath: spec.serviceAccountName# OPENEBS_MAYA_POD_NAME provides the name of this pod as# environment variable- name: OPENEBS_MAYA_POD_NAMEvalueFrom:fieldRef:fieldPath: metadata.name# If OPENEBS_IO_CREATE_DEFAULT_STORAGE_CONFIG is false then OpenEBS default# storageclass and storagepool will not be created.- name: OPENEBS_IO_CREATE_DEFAULT_STORAGE_CONFIGvalue: "true"# OPENEBS_IO_INSTALL_DEFAULT_CSTOR_SPARSE_POOL decides whether default cstor sparse pool should be# configured as a part of openebs installation.# If "true" a default cstor sparse pool will be configured, if "false" it will not be configured.# This value takes effect only if OPENEBS_IO_CREATE_DEFAULT_STORAGE_CONFIG# is set to true- name: OPENEBS_IO_INSTALL_DEFAULT_CSTOR_SPARSE_POOLvalue: "false"# OPENEBS_IO_CSTOR_TARGET_DIR can be used to specify the hostpath# to be used for saving the shared content between the side cars# of cstor volume pod.# The default path used is /var/openebs/sparse#- name: OPENEBS_IO_CSTOR_TARGET_DIR#  value: "/var/openebs/sparse"# OPENEBS_IO_CSTOR_POOL_SPARSE_DIR can be used to specify the hostpath# to be used for saving the shared content between the side cars# of cstor pool pod. 
This ENV is also used to indicate the location# of the sparse devices.# The default path used is /var/openebs/sparse#- name: OPENEBS_IO_CSTOR_POOL_SPARSE_DIR#  value: "/var/openebs/sparse"# OPENEBS_IO_JIVA_POOL_DIR can be used to specify the hostpath# to be used for default Jiva StoragePool loaded by OpenEBS# The default path used is /var/openebs# This value takes effect only if OPENEBS_IO_CREATE_DEFAULT_STORAGE_CONFIG# is set to true#- name: OPENEBS_IO_JIVA_POOL_DIR#  value: "/var/openebs"# OPENEBS_IO_LOCALPV_HOSTPATH_DIR can be used to specify the hostpath# to be used for default openebs-hostpath storageclass loaded by OpenEBS# The default path used is /var/openebs/local# This value takes effect only if OPENEBS_IO_CREATE_DEFAULT_STORAGE_CONFIG# is set to true#- name: OPENEBS_IO_LOCALPV_HOSTPATH_DIR#  value: "/var/openebs/local"- name: OPENEBS_IO_JIVA_CONTROLLER_IMAGEvalue: "quay.io/openebs/jiva:1.5.0"- name: OPENEBS_IO_JIVA_REPLICA_IMAGEvalue: "quay.io/openebs/jiva:1.5.0"- name: OPENEBS_IO_JIVA_REPLICA_COUNTvalue: "3"- name: OPENEBS_IO_CSTOR_TARGET_IMAGEvalue: "quay.io/openebs/cstor-istgt:1.5.0"- name: OPENEBS_IO_CSTOR_POOL_IMAGEvalue: "quay.io/openebs/cstor-pool:1.5.0"- name: OPENEBS_IO_CSTOR_POOL_MGMT_IMAGEvalue: "quay.io/openebs/cstor-pool-mgmt:1.5.0"- name: OPENEBS_IO_CSTOR_VOLUME_MGMT_IMAGEvalue: "quay.io/openebs/cstor-volume-mgmt:1.5.0"- name: OPENEBS_IO_VOLUME_MONITOR_IMAGEvalue: "quay.io/openebs/m-exporter:1.5.0"- name: OPENEBS_IO_CSTOR_POOL_EXPORTER_IMAGEvalue: "quay.io/openebs/m-exporter:1.5.0"- name: OPENEBS_IO_HELPER_IMAGEvalue: "quay.io/openebs/linux-utils:1.5.0"# OPENEBS_IO_ENABLE_ANALYTICS if set to true sends anonymous usage# events to Google Analytics- name: OPENEBS_IO_ENABLE_ANALYTICSvalue: "true"- name: OPENEBS_IO_INSTALLER_TYPEvalue: "openebs-operator"# OPENEBS_IO_ANALYTICS_PING_INTERVAL can be used to specify the duration (in hours)# for periodic ping events sent to Google Analytics.# Default is 24h.# Minimum is 1h. You can convert this to weekly by setting 168h#- name: OPENEBS_IO_ANALYTICS_PING_INTERVAL#  value: "24h"livenessProbe:exec:command:- /usr/local/bin/mayactl- versioninitialDelaySeconds: 30periodSeconds: 60readinessProbe:exec:command:- /usr/local/bin/mayactl- versioninitialDelaySeconds: 30periodSeconds: 60
---
apiVersion: v1
kind: Service
metadata:
  name: maya-apiserver-service
  namespace: openebs
  labels:
    openebs.io/component-name: maya-apiserver-svc
spec:
  ports:
  - name: api
    port: 5656
    protocol: TCP
    targetPort: 5656
  selector:
    name: maya-apiserver
  sessionAffinity: None
---
apiVersion: apps/v1
kind: Deployment
metadata:name: openebs-provisionernamespace: openebslabels:name: openebs-provisioneropenebs.io/component-name: openebs-provisioneropenebs.io/version: 1.5.0
spec:selector:matchLabels:name: openebs-provisioneropenebs.io/component-name: openebs-provisionerreplicas: 1strategy:type: RecreaterollingUpdate: nulltemplate:metadata:labels:name: openebs-provisioneropenebs.io/component-name: openebs-provisioneropenebs.io/version: 1.5.0spec:serviceAccountName: openebs-maya-operatorcontainers:- name: openebs-provisionerimagePullPolicy: IfNotPresentimage: quay.io/openebs/openebs-k8s-provisioner:1.5.0env:# OPENEBS_IO_K8S_MASTER enables openebs provisioner to connect to K8s# based on this address. This is ignored if empty.# This is supported for openebs provisioner version 0.5.2 onwards#- name: OPENEBS_IO_K8S_MASTER#  value: "http://10.128.0.12:8080"# OPENEBS_IO_KUBE_CONFIG enables openebs provisioner to connect to K8s# based on this config. This is ignored if empty.# This is supported for openebs provisioner version 0.5.2 onwards#- name: OPENEBS_IO_KUBE_CONFIG#  value: "/home/ubuntu/.kube/config"- name: NODE_NAMEvalueFrom:fieldRef:fieldPath: spec.nodeName- name: OPENEBS_NAMESPACEvalueFrom:fieldRef:fieldPath: metadata.namespace# OPENEBS_MAYA_SERVICE_NAME provides the maya-apiserver K8s service name,# that provisioner should forward the volume create/delete requests.# If not present, "maya-apiserver-service" will be used for lookup.# This is supported for openebs provisioner version 0.5.3-RC1 onwards#- name: OPENEBS_MAYA_SERVICE_NAME#  value: "maya-apiserver-apiservice"livenessProbe:exec:command:- pgrep- ".*openebs"initialDelaySeconds: 30periodSeconds: 60
---
apiVersion: apps/v1
kind: Deployment
metadata:name: openebs-snapshot-operatornamespace: openebslabels:name: openebs-snapshot-operatoropenebs.io/component-name: openebs-snapshot-operatoropenebs.io/version: 1.5.0
spec:selector:matchLabels:name: openebs-snapshot-operatoropenebs.io/component-name: openebs-snapshot-operatorreplicas: 1strategy:type: Recreatetemplate:metadata:labels:name: openebs-snapshot-operatoropenebs.io/component-name: openebs-snapshot-operatoropenebs.io/version: 1.5.0spec:serviceAccountName: openebs-maya-operatorcontainers:- name: snapshot-controllerimage: quay.io/openebs/snapshot-controller:1.5.0imagePullPolicy: IfNotPresentenv:- name: OPENEBS_NAMESPACEvalueFrom:fieldRef:fieldPath: metadata.namespacelivenessProbe:exec:command:- pgrep- ".*controller"initialDelaySeconds: 30periodSeconds: 60# OPENEBS_MAYA_SERVICE_NAME provides the maya-apiserver K8s service name,# that snapshot controller should forward the snapshot create/delete requests.# If not present, "maya-apiserver-service" will be used for lookup.# This is supported for openebs provisioner version 0.5.3-RC1 onwards#- name: OPENEBS_MAYA_SERVICE_NAME#  value: "maya-apiserver-apiservice"- name: snapshot-provisionerimage: quay.io/openebs/snapshot-provisioner:1.5.0imagePullPolicy: IfNotPresentenv:- name: OPENEBS_NAMESPACEvalueFrom:fieldRef:fieldPath: metadata.namespace# OPENEBS_MAYA_SERVICE_NAME provides the maya-apiserver K8s service name,# that snapshot provisioner  should forward the clone create/delete requests.# If not present, "maya-apiserver-service" will be used for lookup.# This is supported for openebs provisioner version 0.5.3-RC1 onwards#- name: OPENEBS_MAYA_SERVICE_NAME#  value: "maya-apiserver-apiservice"livenessProbe:exec:command:- pgrep- ".*provisioner"initialDelaySeconds: 30periodSeconds: 60
---
# This is the node-disk-manager related config.
# It can be used to customize the disks probes and filters
apiVersion: v1
kind: ConfigMap
metadata:name: openebs-ndm-confignamespace: openebslabels:openebs.io/component-name: ndm-config
data:# udev-probe is default or primary probe which should be enabled to run ndm# filterconfigs contails configs of filters - in their form fo include# and exclude comma separated stringsnode-disk-manager.config: |probeconfigs:- key: udev-probename: udev probestate: true- key: seachest-probename: seachest probestate: false- key: smart-probename: smart probestate: truefilterconfigs:- key: os-disk-exclude-filtername: os disk exclude filterstate: trueexclude: "/,/etc/hosts,/boot"- key: vendor-filtername: vendor filterstate: trueinclude: ""exclude: "CLOUDBYT,OpenEBS"- key: path-filtername: path filterstate: trueinclude: ""exclude: "loop,/dev/fd0,/dev/sr0,/dev/ram,/dev/dm-,/dev/md"
---
apiVersion: apps/v1
kind: DaemonSet
metadata:name: openebs-ndmnamespace: openebslabels:name: openebs-ndmopenebs.io/component-name: ndmopenebs.io/version: 1.5.0
spec:selector:matchLabels:name: openebs-ndmopenebs.io/component-name: ndmupdateStrategy:type: RollingUpdatetemplate:metadata:labels:name: openebs-ndmopenebs.io/component-name: ndmopenebs.io/version: 1.5.0spec:# By default the node-disk-manager will be run on all kubernetes nodes# If you would like to limit this to only some nodes, say the nodes# that have storage attached, you could label those node and use# nodeSelector.## e.g. label the storage nodes with - "openebs.io/nodegroup"="storage-node"# kubectl label node <node-name> "openebs.io/nodegroup"="storage-node"#nodeSelector:#  "openebs.io/nodegroup": "storage-node"serviceAccountName: openebs-maya-operatorhostNetwork: truecontainers:- name: node-disk-managerimage: quay.io/openebs/node-disk-manager-amd64:v0.4.5imagePullPolicy: AlwayssecurityContext:privileged: truevolumeMounts:- name: configmountPath: /host/node-disk-manager.configsubPath: node-disk-manager.configreadOnly: true- name: udevmountPath: /run/udev- name: procmountmountPath: /host/procreadOnly: true- name: sparsepathmountPath: /var/openebs/sparseenv:# namespace in which NDM is installed will be passed to NDM Daemonset# as environment variable- name: NAMESPACEvalueFrom:fieldRef:fieldPath: metadata.namespace# pass hostname as env variable using downward API to the NDM container- name: NODE_NAMEvalueFrom:fieldRef:fieldPath: spec.nodeName# specify the directory where the sparse files need to be created.# if not specified, then sparse files will not be created.- name: SPARSE_FILE_DIRvalue: "/var/openebs/sparse"# Size(bytes) of the sparse file to be created.- name: SPARSE_FILE_SIZEvalue: "10737418240"# Specify the number of sparse files to be created- name: SPARSE_FILE_COUNTvalue: "0"livenessProbe:exec:command:- pgrep- ".*ndm"initialDelaySeconds: 30periodSeconds: 60volumes:- name: configconfigMap:name: openebs-ndm-config- name: udevhostPath:path: /run/udevtype: Directory# mount /proc (to access mount file of process 1 of host) inside container# to read mount-point of disks and partitions- name: procmounthostPath:path: /proctype: Directory- name: sparsepathhostPath:path: /var/openebs/sparse
---
apiVersion: apps/v1
kind: Deployment
metadata:name: openebs-ndm-operatornamespace: openebslabels:name: openebs-ndm-operatoropenebs.io/component-name: ndm-operatoropenebs.io/version: 1.5.0
spec:selector:matchLabels:name: openebs-ndm-operatoropenebs.io/component-name: ndm-operatorreplicas: 1strategy:type: Recreatetemplate:metadata:labels:name: openebs-ndm-operatoropenebs.io/component-name: ndm-operatoropenebs.io/version: 1.5.0spec:serviceAccountName: openebs-maya-operatorcontainers:- name: node-disk-operatorimage: quay.io/openebs/node-disk-operator-amd64:v0.4.5imagePullPolicy: AlwaysreadinessProbe:exec:command:- stat- /tmp/operator-sdk-readyinitialDelaySeconds: 4periodSeconds: 10failureThreshold: 1env:- name: WATCH_NAMESPACEvalueFrom:fieldRef:fieldPath: metadata.namespace- name: POD_NAMEvalueFrom:fieldRef:fieldPath: metadata.name# the service account of the ndm-operator pod- name: SERVICE_ACCOUNTvalueFrom:fieldRef:fieldPath: spec.serviceAccountName- name: OPERATOR_NAMEvalue: "node-disk-operator"- name: CLEANUP_JOB_IMAGEvalue: "quay.io/openebs/linux-utils:1.5.0"
---
apiVersion: apps/v1
kind: Deployment
metadata:name: openebs-admission-servernamespace: openebslabels:app: admission-webhookopenebs.io/component-name: admission-webhookopenebs.io/version: 1.5.0
spec:replicas: 1strategy:type: RecreaterollingUpdate: nullselector:matchLabels:app: admission-webhooktemplate:metadata:labels:app: admission-webhookopenebs.io/component-name: admission-webhookopenebs.io/version: 1.5.0spec:serviceAccountName: openebs-maya-operatorcontainers:- name: admission-webhookimage: quay.io/openebs/admission-server:1.5.0imagePullPolicy: IfNotPresentargs:- -alsologtostderr- -v=2- 2>&1env:- name: OPENEBS_NAMESPACEvalueFrom:fieldRef:fieldPath: metadata.namespace- name: ADMISSION_WEBHOOK_NAMEvalue: "openebs-admission-server"
---
apiVersion: apps/v1
kind: Deployment
metadata:name: openebs-localpv-provisionernamespace: openebslabels:name: openebs-localpv-provisioneropenebs.io/component-name: openebs-localpv-provisioneropenebs.io/version: 1.5.0
spec:selector:matchLabels:name: openebs-localpv-provisioneropenebs.io/component-name: openebs-localpv-provisionerreplicas: 1strategy:type: Recreatetemplate:metadata:labels:name: openebs-localpv-provisioneropenebs.io/component-name: openebs-localpv-provisioneropenebs.io/version: 1.5.0spec:serviceAccountName: openebs-maya-operatorcontainers:- name: openebs-provisioner-hostpathimagePullPolicy: Alwaysimage: quay.io/openebs/provisioner-localpv:1.5.0env:# OPENEBS_IO_K8S_MASTER enables openebs provisioner to connect to K8s# based on this address. This is ignored if empty.# This is supported for openebs provisioner version 0.5.2 onwards#- name: OPENEBS_IO_K8S_MASTER#  value: "http://10.128.0.12:8080"# OPENEBS_IO_KUBE_CONFIG enables openebs provisioner to connect to K8s# based on this config. This is ignored if empty.# This is supported for openebs provisioner version 0.5.2 onwards#- name: OPENEBS_IO_KUBE_CONFIG#  value: "/home/ubuntu/.kube/config"- name: NODE_NAMEvalueFrom:fieldRef:fieldPath: spec.nodeName- name: OPENEBS_NAMESPACEvalueFrom:fieldRef:fieldPath: metadata.namespace# OPENEBS_SERVICE_ACCOUNT provides the service account of this pod as# environment variable- name: OPENEBS_SERVICE_ACCOUNTvalueFrom:fieldRef:fieldPath: spec.serviceAccountName- name: OPENEBS_IO_ENABLE_ANALYTICSvalue: "true"- name: OPENEBS_IO_INSTALLER_TYPEvalue: "openebs-operator"- name: OPENEBS_IO_HELPER_IMAGEvalue: "quay.io/openebs/linux-utils:1.5.0"livenessProbe:exec:command:- pgrep- ".*localpv"initialDelaySeconds: 30periodSeconds: 60
---

Reading through the file, every OpenEBS component pulls its image from quay.io, and since quay.io is blocked here, none of those pulls can succeed.

containers:
- name: maya-apiserver
  imagePullPolicy: IfNotPresent
  image: quay.io/openebs/m-apiserver:1.5.0
  ports:
  - containerPort: 5656

containers:
- name: openebs-provisioner
  imagePullPolicy: IfNotPresent
  image: quay.io/openebs/openebs-k8s-provisioner:1.5.0

containers:
- name: snapshot-controller
  image: quay.io/openebs/snapshot-controller:1.5.0
  imagePullPolicy: IfNotPresent

containers:
- name: node-disk-manager
  image: quay.io/openebs/node-disk-manager-amd64:v0.4.5
  imagePullPolicy: Always

containers:
- name: node-disk-operator
  image: quay.io/openebs/node-disk-operator-amd64:v0.4.5
  imagePullPolicy: Always

containers:
- name: admission-webhook
  image: quay.io/openebs/admission-server:1.5.0
  imagePullPolicy: IfNotPresent

containers:
- name: openebs-provisioner-hostpath
  imagePullPolicy: Always
  image: quay.io/openebs/provisioner-localpv:1.5.0
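
If you want to confirm that quay.io itself is the problem, try pulling one of these images directly on a node; when the registry is unreachable the pull simply hangs or times out.

docker pull quay.io/openebs/m-apiserver:1.5.0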

So we check Docker Hub for the same openebs images, and they are there.

Clicking into one of the repositories confirms that the 1.5.0 tag exists as well.

So we pull every image that openebs-operator-1.5.0.yaml needs with docker pull. Just copy the commands below; if any of them fails, search Docker Hub to see whether that particular image is available. (The manifest also references openebs/snapshot-provisioner:1.5.0 and openebs/linux-utils:1.5.0, so it is worth pulling those two as well.)

docker pull openebs/m-apiserver:1.5.0
docker pull openebs/openebs-k8s-provisioner:1.5.0
docker pull openebs/snapshot-controller:1.5.0
docker pull openebs/node-disk-manager-amd64:v0.4.5
docker pull openebs/node-disk-operator-amd64:v0.4.5
docker pull openebs/admission-server:1.5.0
docker pull openebs/provisioner-localpv:1.5.0
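
Once the pulls finish, confirm the images are present in the local cache:

docker images | grep openebs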

Once all the images are local, edit openebs-operator-1.5.0.yaml and change every image: quay.io/openebs/... reference to image: openebs/... (that is, drop the quay.io/ prefix), so the kubelet resolves the images against Docker Hub and the local image cache instead of quay.io.
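
If you would rather not edit the file by hand, a one-line substitution does the same thing; a sketch assuming GNU sed (it writes a .bak backup first). It only rewrites the image: fields, leaving the quay.io references inside env values such as OPENEBS_IO_JIVA_CONTROLLER_IMAGE untouched, which matches the modified file below.

sed -i.bak 's#image: quay.io/#image: #g' openebs-operator-1.5.0.yaml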

The modified openebs-operator-1.5.0.yaml:

# This manifest deploys the OpenEBS control plane components, with associated CRs & RBAC rules
# NOTE: On GKE, deploy the openebs-operator.yaml in admin context
# Create the OpenEBS namespace
apiVersion: v1
kind: Namespace
metadata:
  name: openebs
---
# Create Maya Service Account
apiVersion: v1
kind: ServiceAccount
metadata:
  name: openebs-maya-operator
  namespace: openebs
---
# Define Role that allows operations on K8s pods/deployments
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: openebs-maya-operator
rules:
- apiGroups: ["*"]
  resources: ["nodes", "nodes/proxy"]
  verbs: ["*"]
- apiGroups: ["*"]
  resources: ["namespaces", "services", "pods", "pods/exec", "deployments", "deployments/finalizers", "replicationcontrollers", "replicasets", "events", "endpoints", "configmaps", "secrets", "jobs", "cronjobs"]
  verbs: ["*"]
- apiGroups: ["*"]
  resources: ["statefulsets", "daemonsets"]
  verbs: ["*"]
- apiGroups: ["*"]
  resources: ["resourcequotas", "limitranges"]
  verbs: ["list", "watch"]
- apiGroups: ["*"]
  resources: ["ingresses", "horizontalpodautoscalers", "verticalpodautoscalers", "poddisruptionbudgets", "certificatesigningrequests"]
  verbs: ["list", "watch"]
- apiGroups: ["*"]
  resources: ["storageclasses", "persistentvolumeclaims", "persistentvolumes"]
  verbs: ["*"]
- apiGroups: ["volumesnapshot.external-storage.k8s.io"]
  resources: ["volumesnapshots", "volumesnapshotdatas"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: ["apiextensions.k8s.io"]
  resources: ["customresourcedefinitions"]
  verbs: ["get", "list", "create", "update", "delete", "patch"]
- apiGroups: ["*"]
  resources: ["disks", "blockdevices", "blockdeviceclaims"]
  verbs: ["*"]
- apiGroups: ["*"]
  resources: ["cstorpoolclusters", "storagepoolclaims", "storagepoolclaims/finalizers", "cstorpoolclusters/finalizers", "storagepools"]
  verbs: ["*"]
- apiGroups: ["*"]
  resources: ["castemplates", "runtasks"]
  verbs: ["*"]
- apiGroups: ["*"]
  resources: ["cstorpools", "cstorpools/finalizers", "cstorvolumereplicas", "cstorvolumes", "cstorvolumeclaims"]
  verbs: ["*"]
- apiGroups: ["*"]
  resources: ["cstorpoolinstances", "cstorpoolinstances/finalizers"]
  verbs: ["*"]
- apiGroups: ["*"]
  resources: ["cstorbackups", "cstorrestores", "cstorcompletedbackups"]
  verbs: ["*"]
- apiGroups: ["coordination.k8s.io"]
  resources: ["leases"]
  verbs: ["get", "watch", "list", "delete", "update", "create"]
- apiGroups: ["admissionregistration.k8s.io"]
  resources: ["validatingwebhookconfigurations", "mutatingwebhookconfigurations"]
  verbs: ["get", "create", "list", "delete", "update", "patch"]
- nonResourceURLs: ["/metrics"]
  verbs: ["get"]
- apiGroups: ["*"]
  resources: ["upgradetasks"]
  verbs: ["*"]
---
# Bind the Service Account with the Role Privileges.
# TODO: Check if default account also needs to be there
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: openebs-maya-operator
subjects:
- kind: ServiceAccount
  name: openebs-maya-operator
  namespace: openebs
roleRef:
  kind: ClusterRole
  name: openebs-maya-operator
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1
kind: Deployment
metadata:name: maya-apiservernamespace: openebslabels:name: maya-apiserveropenebs.io/component-name: maya-apiserveropenebs.io/version: 1.5.0
spec:selector:matchLabels:name: maya-apiserveropenebs.io/component-name: maya-apiserverreplicas: 1strategy:type: RecreaterollingUpdate: nulltemplate:metadata:labels:name: maya-apiserveropenebs.io/component-name: maya-apiserveropenebs.io/version: 1.5.0spec:serviceAccountName: openebs-maya-operatorcontainers:- name: maya-apiserverimagePullPolicy: IfNotPresentimage: openebs/m-apiserver:1.5.0ports:- containerPort: 5656env:# OPENEBS_IO_KUBE_CONFIG enables maya api service to connect to K8s# based on this config. This is ignored if empty.# This is supported for maya api server version 0.5.2 onwards#- name: OPENEBS_IO_KUBE_CONFIG#  value: "/home/ubuntu/.kube/config"# OPENEBS_IO_K8S_MASTER enables maya api service to connect to K8s# based on this address. This is ignored if empty.# This is supported for maya api server version 0.5.2 onwards#- name: OPENEBS_IO_K8S_MASTER#  value: "http://172.28.128.3:8080"# OPENEBS_NAMESPACE provides the namespace of this deployment as an# environment variable- name: OPENEBS_NAMESPACEvalueFrom:fieldRef:fieldPath: metadata.namespace# OPENEBS_SERVICE_ACCOUNT provides the service account of this pod as# environment variable- name: OPENEBS_SERVICE_ACCOUNTvalueFrom:fieldRef:fieldPath: spec.serviceAccountName# OPENEBS_MAYA_POD_NAME provides the name of this pod as# environment variable- name: OPENEBS_MAYA_POD_NAMEvalueFrom:fieldRef:fieldPath: metadata.name# If OPENEBS_IO_CREATE_DEFAULT_STORAGE_CONFIG is false then OpenEBS default# storageclass and storagepool will not be created.- name: OPENEBS_IO_CREATE_DEFAULT_STORAGE_CONFIGvalue: "true"# OPENEBS_IO_INSTALL_DEFAULT_CSTOR_SPARSE_POOL decides whether default cstor sparse pool should be# configured as a part of openebs installation.# If "true" a default cstor sparse pool will be configured, if "false" it will not be configured.# This value takes effect only if OPENEBS_IO_CREATE_DEFAULT_STORAGE_CONFIG# is set to true- name: OPENEBS_IO_INSTALL_DEFAULT_CSTOR_SPARSE_POOLvalue: "false"# OPENEBS_IO_CSTOR_TARGET_DIR can be used to specify the hostpath# to be used for saving the shared content between the side cars# of cstor volume pod.# The default path used is /var/openebs/sparse#- name: OPENEBS_IO_CSTOR_TARGET_DIR#  value: "/var/openebs/sparse"# OPENEBS_IO_CSTOR_POOL_SPARSE_DIR can be used to specify the hostpath# to be used for saving the shared content between the side cars# of cstor pool pod. 
This ENV is also used to indicate the location# of the sparse devices.# The default path used is /var/openebs/sparse#- name: OPENEBS_IO_CSTOR_POOL_SPARSE_DIR#  value: "/var/openebs/sparse"# OPENEBS_IO_JIVA_POOL_DIR can be used to specify the hostpath# to be used for default Jiva StoragePool loaded by OpenEBS# The default path used is /var/openebs# This value takes effect only if OPENEBS_IO_CREATE_DEFAULT_STORAGE_CONFIG# is set to true#- name: OPENEBS_IO_JIVA_POOL_DIR#  value: "/var/openebs"# OPENEBS_IO_LOCALPV_HOSTPATH_DIR can be used to specify the hostpath# to be used for default openebs-hostpath storageclass loaded by OpenEBS# The default path used is /var/openebs/local# This value takes effect only if OPENEBS_IO_CREATE_DEFAULT_STORAGE_CONFIG# is set to true#- name: OPENEBS_IO_LOCALPV_HOSTPATH_DIR#  value: "/var/openebs/local"- name: OPENEBS_IO_JIVA_CONTROLLER_IMAGEvalue: "quay.io/openebs/jiva:1.5.0"- name: OPENEBS_IO_JIVA_REPLICA_IMAGEvalue: "quay.io/openebs/jiva:1.5.0"- name: OPENEBS_IO_JIVA_REPLICA_COUNTvalue: "3"- name: OPENEBS_IO_CSTOR_TARGET_IMAGEvalue: "quay.io/openebs/cstor-istgt:1.5.0"- name: OPENEBS_IO_CSTOR_POOL_IMAGEvalue: "quay.io/openebs/cstor-pool:1.5.0"- name: OPENEBS_IO_CSTOR_POOL_MGMT_IMAGEvalue: "quay.io/openebs/cstor-pool-mgmt:1.5.0"- name: OPENEBS_IO_CSTOR_VOLUME_MGMT_IMAGEvalue: "quay.io/openebs/cstor-volume-mgmt:1.5.0"- name: OPENEBS_IO_VOLUME_MONITOR_IMAGEvalue: "quay.io/openebs/m-exporter:1.5.0"- name: OPENEBS_IO_CSTOR_POOL_EXPORTER_IMAGEvalue: "quay.io/openebs/m-exporter:1.5.0"- name: OPENEBS_IO_HELPER_IMAGEvalue: "quay.io/openebs/linux-utils:1.5.0"# OPENEBS_IO_ENABLE_ANALYTICS if set to true sends anonymous usage# events to Google Analytics- name: OPENEBS_IO_ENABLE_ANALYTICSvalue: "true"- name: OPENEBS_IO_INSTALLER_TYPEvalue: "openebs-operator"# OPENEBS_IO_ANALYTICS_PING_INTERVAL can be used to specify the duration (in hours)# for periodic ping events sent to Google Analytics.# Default is 24h.# Minimum is 1h. You can convert this to weekly by setting 168h#- name: OPENEBS_IO_ANALYTICS_PING_INTERVAL#  value: "24h"livenessProbe:exec:command:- /usr/local/bin/mayactl- versioninitialDelaySeconds: 30periodSeconds: 60readinessProbe:exec:command:- /usr/local/bin/mayactl- versioninitialDelaySeconds: 30periodSeconds: 60
---
apiVersion: v1
kind: Service
metadata:
  name: maya-apiserver-service
  namespace: openebs
  labels:
    openebs.io/component-name: maya-apiserver-svc
spec:
  ports:
  - name: api
    port: 5656
    protocol: TCP
    targetPort: 5656
  selector:
    name: maya-apiserver
  sessionAffinity: None
---
apiVersion: apps/v1
kind: Deployment
metadata:name: openebs-provisionernamespace: openebslabels:name: openebs-provisioneropenebs.io/component-name: openebs-provisioneropenebs.io/version: 1.5.0
spec:selector:matchLabels:name: openebs-provisioneropenebs.io/component-name: openebs-provisionerreplicas: 1strategy:type: RecreaterollingUpdate: nulltemplate:metadata:labels:name: openebs-provisioneropenebs.io/component-name: openebs-provisioneropenebs.io/version: 1.5.0spec:serviceAccountName: openebs-maya-operatorcontainers:- name: openebs-provisionerimagePullPolicy: IfNotPresentimage: openebs/openebs-k8s-provisioner:1.5.0env:# OPENEBS_IO_K8S_MASTER enables openebs provisioner to connect to K8s# based on this address. This is ignored if empty.# This is supported for openebs provisioner version 0.5.2 onwards#- name: OPENEBS_IO_K8S_MASTER#  value: "http://10.128.0.12:8080"# OPENEBS_IO_KUBE_CONFIG enables openebs provisioner to connect to K8s# based on this config. This is ignored if empty.# This is supported for openebs provisioner version 0.5.2 onwards#- name: OPENEBS_IO_KUBE_CONFIG#  value: "/home/ubuntu/.kube/config"- name: NODE_NAMEvalueFrom:fieldRef:fieldPath: spec.nodeName- name: OPENEBS_NAMESPACEvalueFrom:fieldRef:fieldPath: metadata.namespace# OPENEBS_MAYA_SERVICE_NAME provides the maya-apiserver K8s service name,# that provisioner should forward the volume create/delete requests.# If not present, "maya-apiserver-service" will be used for lookup.# This is supported for openebs provisioner version 0.5.3-RC1 onwards#- name: OPENEBS_MAYA_SERVICE_NAME#  value: "maya-apiserver-apiservice"livenessProbe:exec:command:- pgrep- ".*openebs"initialDelaySeconds: 30periodSeconds: 60
---
apiVersion: apps/v1
kind: Deployment
metadata:name: openebs-snapshot-operatornamespace: openebslabels:name: openebs-snapshot-operatoropenebs.io/component-name: openebs-snapshot-operatoropenebs.io/version: 1.5.0
spec:selector:matchLabels:name: openebs-snapshot-operatoropenebs.io/component-name: openebs-snapshot-operatorreplicas: 1strategy:type: Recreatetemplate:metadata:labels:name: openebs-snapshot-operatoropenebs.io/component-name: openebs-snapshot-operatoropenebs.io/version: 1.5.0spec:serviceAccountName: openebs-maya-operatorcontainers:- name: snapshot-controllerimage: openebs/snapshot-controller:1.5.0imagePullPolicy: IfNotPresentenv:- name: OPENEBS_NAMESPACEvalueFrom:fieldRef:fieldPath: metadata.namespacelivenessProbe:exec:command:- pgrep- ".*controller"initialDelaySeconds: 30periodSeconds: 60# OPENEBS_MAYA_SERVICE_NAME provides the maya-apiserver K8s service name,# that snapshot controller should forward the snapshot create/delete requests.# If not present, "maya-apiserver-service" will be used for lookup.# This is supported for openebs provisioner version 0.5.3-RC1 onwards#- name: OPENEBS_MAYA_SERVICE_NAME#  value: "maya-apiserver-apiservice"- name: snapshot-provisionerimage: openebs/snapshot-provisioner:1.5.0imagePullPolicy: IfNotPresentenv:- name: OPENEBS_NAMESPACEvalueFrom:fieldRef:fieldPath: metadata.namespace# OPENEBS_MAYA_SERVICE_NAME provides the maya-apiserver K8s service name,# that snapshot provisioner  should forward the clone create/delete requests.# If not present, "maya-apiserver-service" will be used for lookup.# This is supported for openebs provisioner version 0.5.3-RC1 onwards#- name: OPENEBS_MAYA_SERVICE_NAME#  value: "maya-apiserver-apiservice"livenessProbe:exec:command:- pgrep- ".*provisioner"initialDelaySeconds: 30periodSeconds: 60
---
# This is the node-disk-manager related config.
# It can be used to customize the disks probes and filters
apiVersion: v1
kind: ConfigMap
metadata:name: openebs-ndm-confignamespace: openebslabels:openebs.io/component-name: ndm-config
data:# udev-probe is default or primary probe which should be enabled to run ndm# filterconfigs contails configs of filters - in their form fo include# and exclude comma separated stringsnode-disk-manager.config: |probeconfigs:- key: udev-probename: udev probestate: true- key: seachest-probename: seachest probestate: false- key: smart-probename: smart probestate: truefilterconfigs:- key: os-disk-exclude-filtername: os disk exclude filterstate: trueexclude: "/,/etc/hosts,/boot"- key: vendor-filtername: vendor filterstate: trueinclude: ""exclude: "CLOUDBYT,OpenEBS"- key: path-filtername: path filterstate: trueinclude: ""exclude: "loop,/dev/fd0,/dev/sr0,/dev/ram,/dev/dm-,/dev/md"
---
apiVersion: apps/v1
kind: DaemonSet
metadata:name: openebs-ndmnamespace: openebslabels:name: openebs-ndmopenebs.io/component-name: ndmopenebs.io/version: 1.5.0
spec:selector:matchLabels:name: openebs-ndmopenebs.io/component-name: ndmupdateStrategy:type: RollingUpdatetemplate:metadata:labels:name: openebs-ndmopenebs.io/component-name: ndmopenebs.io/version: 1.5.0spec:# By default the node-disk-manager will be run on all kubernetes nodes# If you would like to limit this to only some nodes, say the nodes# that have storage attached, you could label those node and use# nodeSelector.## e.g. label the storage nodes with - "openebs.io/nodegroup"="storage-node"# kubectl label node <node-name> "openebs.io/nodegroup"="storage-node"#nodeSelector:#  "openebs.io/nodegroup": "storage-node"serviceAccountName: openebs-maya-operatorhostNetwork: truecontainers:- name: node-disk-managerimage: openebs/node-disk-manager-amd64:v0.4.5imagePullPolicy: AlwayssecurityContext:privileged: truevolumeMounts:- name: configmountPath: /host/node-disk-manager.configsubPath: node-disk-manager.configreadOnly: true- name: udevmountPath: /run/udev- name: procmountmountPath: /host/procreadOnly: true- name: sparsepathmountPath: /var/openebs/sparseenv:# namespace in which NDM is installed will be passed to NDM Daemonset# as environment variable- name: NAMESPACEvalueFrom:fieldRef:fieldPath: metadata.namespace# pass hostname as env variable using downward API to the NDM container- name: NODE_NAMEvalueFrom:fieldRef:fieldPath: spec.nodeName# specify the directory where the sparse files need to be created.# if not specified, then sparse files will not be created.- name: SPARSE_FILE_DIRvalue: "/var/openebs/sparse"# Size(bytes) of the sparse file to be created.- name: SPARSE_FILE_SIZEvalue: "10737418240"# Specify the number of sparse files to be created- name: SPARSE_FILE_COUNTvalue: "0"livenessProbe:exec:command:- pgrep- ".*ndm"initialDelaySeconds: 30periodSeconds: 60volumes:- name: configconfigMap:name: openebs-ndm-config- name: udevhostPath:path: /run/udevtype: Directory# mount /proc (to access mount file of process 1 of host) inside container# to read mount-point of disks and partitions- name: procmounthostPath:path: /proctype: Directory- name: sparsepathhostPath:path: /var/openebs/sparse
---
apiVersion: apps/v1
kind: Deployment
metadata:name: openebs-ndm-operatornamespace: openebslabels:name: openebs-ndm-operatoropenebs.io/component-name: ndm-operatoropenebs.io/version: 1.5.0
spec:selector:matchLabels:name: openebs-ndm-operatoropenebs.io/component-name: ndm-operatorreplicas: 1strategy:type: Recreatetemplate:metadata:labels:name: openebs-ndm-operatoropenebs.io/component-name: ndm-operatoropenebs.io/version: 1.5.0spec:serviceAccountName: openebs-maya-operatorcontainers:- name: node-disk-operatorimage: openebs/node-disk-operator-amd64:v0.4.5imagePullPolicy: AlwaysreadinessProbe:exec:command:- stat- /tmp/operator-sdk-readyinitialDelaySeconds: 4periodSeconds: 10failureThreshold: 1env:- name: WATCH_NAMESPACEvalueFrom:fieldRef:fieldPath: metadata.namespace- name: POD_NAMEvalueFrom:fieldRef:fieldPath: metadata.name# the service account of the ndm-operator pod- name: SERVICE_ACCOUNTvalueFrom:fieldRef:fieldPath: spec.serviceAccountName- name: OPERATOR_NAMEvalue: "node-disk-operator"- name: CLEANUP_JOB_IMAGEvalue: "quay.io/openebs/linux-utils:1.5.0"
---
apiVersion: apps/v1
kind: Deployment
metadata:name: openebs-admission-servernamespace: openebslabels:app: admission-webhookopenebs.io/component-name: admission-webhookopenebs.io/version: 1.5.0
spec:replicas: 1strategy:type: RecreaterollingUpdate: nullselector:matchLabels:app: admission-webhooktemplate:metadata:labels:app: admission-webhookopenebs.io/component-name: admission-webhookopenebs.io/version: 1.5.0spec:serviceAccountName: openebs-maya-operatorcontainers:- name: admission-webhookimage: openebs/admission-server:1.5.0imagePullPolicy: IfNotPresentargs:- -alsologtostderr- -v=2- 2>&1env:- name: OPENEBS_NAMESPACEvalueFrom:fieldRef:fieldPath: metadata.namespace- name: ADMISSION_WEBHOOK_NAMEvalue: "openebs-admission-server"
---
apiVersion: apps/v1
kind: Deployment
metadata:name: openebs-localpv-provisionernamespace: openebslabels:name: openebs-localpv-provisioneropenebs.io/component-name: openebs-localpv-provisioneropenebs.io/version: 1.5.0
spec:selector:matchLabels:name: openebs-localpv-provisioneropenebs.io/component-name: openebs-localpv-provisionerreplicas: 1strategy:type: Recreatetemplate:metadata:labels:name: openebs-localpv-provisioneropenebs.io/component-name: openebs-localpv-provisioneropenebs.io/version: 1.5.0spec:serviceAccountName: openebs-maya-operatorcontainers:- name: openebs-provisioner-hostpathimagePullPolicy: Alwaysimage: openebs/provisioner-localpv:1.5.0env:# OPENEBS_IO_K8S_MASTER enables openebs provisioner to connect to K8s# based on this address. This is ignored if empty.# This is supported for openebs provisioner version 0.5.2 onwards#- name: OPENEBS_IO_K8S_MASTER#  value: "http://10.128.0.12:8080"# OPENEBS_IO_KUBE_CONFIG enables openebs provisioner to connect to K8s# based on this config. This is ignored if empty.# This is supported for openebs provisioner version 0.5.2 onwards#- name: OPENEBS_IO_KUBE_CONFIG#  value: "/home/ubuntu/.kube/config"- name: NODE_NAMEvalueFrom:fieldRef:fieldPath: spec.nodeName- name: OPENEBS_NAMESPACEvalueFrom:fieldRef:fieldPath: metadata.namespace# OPENEBS_SERVICE_ACCOUNT provides the service account of this pod as# environment variable- name: OPENEBS_SERVICE_ACCOUNTvalueFrom:fieldRef:fieldPath: spec.serviceAccountName- name: OPENEBS_IO_ENABLE_ANALYTICSvalue: "true"- name: OPENEBS_IO_INSTALLER_TYPEvalue: "openebs-operator"- name: OPENEBS_IO_HELPER_IMAGEvalue: "quay.io/openebs/linux-utils:1.5.0"livenessProbe:exec:command:- pgrep- ".*localpv"initialDelaySeconds: 30periodSeconds: 60
---

With that done, install it with kubectl apply:

[root@k8snode1 software]# kubectl apply -f openebs-operator-1.5.0.yaml
namespace/openebs created
serviceaccount/openebs-maya-operator created
clusterrole.rbac.authorization.k8s.io/openebs-maya-operator created
clusterrolebinding.rbac.authorization.k8s.io/openebs-maya-operator created
deployment.apps/maya-apiserver created
service/maya-apiserver-service created
deployment.apps/openebs-provisioner created
deployment.apps/openebs-snapshot-operator created
configmap/openebs-ndm-config created
daemonset.apps/openebs-ndm created
deployment.apps/openebs-ndm-operator created
deployment.apps/openebs-admission-server created
deployment.apps/openebs-localpv-provisioner created

However, kubectl get pods --all-namespaces still shows the pods in ImagePullBackOff or ErrImagePull.

For the underlying reason, see the article 《k8s所在的节点docker有镜像 为什么还要从远程拉取》 (why a node still pulls from the remote registry even though its local Docker has the image).
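
The short version: the kubelet on whichever node a pod is scheduled to only looks at that node's local image cache, so images pulled on the master do nothing for pods placed on other nodes. The NODE column of the wide pod listing shows where each failing pod landed:

kubectl get pods -n openebs -o wide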

In my case I had only pulled the images on the master node, not on the worker nodes; repeating the same docker pull commands on every worker node fixed it.
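
If the worker nodes cannot reach Docker Hub directly, another option is to export the images from the master and load them on each node. A sketch, with an arbitrary tarball name:

# On the master: save the pulled images into one tarball
docker save -o openebs-images.tar \
  openebs/m-apiserver:1.5.0 \
  openebs/openebs-k8s-provisioner:1.5.0 \
  openebs/snapshot-controller:1.5.0 \
  openebs/node-disk-manager-amd64:v0.4.5 \
  openebs/node-disk-operator-amd64:v0.4.5 \
  openebs/admission-server:1.5.0 \
  openebs/provisioner-localpv:1.5.0
# Copy openebs-images.tar to each worker node (e.g. with scp), then on that node:
docker load -i openebs-images.tar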


Another approach:

When I tried again the next day, Docker Hub itself had become unreachable for me. When docker pull is not an option either, you can look for the corresponding image tarballs online and load them locally with docker load.

Example:

docker load -i openebs-k8s-provisioner_1.5.0.tar
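
docker load restores whatever repository name and tag were stored in the tarball. If the loaded image comes in under a different name, retag it to what the manifest expects; the image ID below is a placeholder.

docker images                                    # find the IMAGE ID that was just loaded
docker tag <image-id> openebs/openebs-k8s-provisioner:1.5.0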

See this article for more details.


Final result: the OpenEBS pods no longer report image pull errors.


Finally, you can also push the images you pulled to Alibaba Cloud Container Registry for later reuse; see the Aliyun Container Registry documentation.
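
The push itself looks roughly like this; a sketch in which registry.cn-hangzhou.aliyuncs.com is the Hangzhou ACR endpoint and <your-namespace> is a namespace you have created under your own Aliyun account:

docker login registry.cn-hangzhou.aliyuncs.com
docker tag openebs/m-apiserver:1.5.0 registry.cn-hangzhou.aliyuncs.com/<your-namespace>/m-apiserver:1.5.0
docker push registry.cn-hangzhou.aliyuncs.com/<your-namespace>/m-apiserver:1.5.0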

That is the fix for the ImagePullBackOff / ErrImagePull errors when installing OpenEBS; hopefully it saves you some time.


