How to fix ImagePullBackOff / ErrImagePull errors when installing OpenEBS

This post explains how to fix the ImagePullBackOff / ErrImagePull errors that keep appearing when installing OpenEBS. Hopefully it is a useful reference for anyone hitting the same problem.

While installing OpenEBS by following the official KubeSphere documentation, the OpenEBS images kept failing with ImagePullBackOff or ErrImagePull. Here is how I worked around it.

There are plenty of articles about switching the helm repository. Some of them recommend the Alibaba Cloud mirror, but that mirror simply does not carry the OpenEBS chart. I eventually found one repository online that works:

Working repository:
stable: http://mirror.azure.cn/kubernetes/charts/

Unusable repositories:
http://mirror.azure.cn/kubernetes/charts-incubator/
https://kubernetes-charts.storage.googleapis.com/  (access denied)
https://registry.cn-hangzhou.aliyuncs.com/  (unreachable)
https://quay.io/  (blocked)

Running helm repo add … against one of the unusable repositories fails like this:

[root@k8snode1 etcd]# helm repo add incubator http://mirror.azure.cn/kubernetes/charts-incubator/
WARNING: Deprecated index file format. Try 'helm repo update'
Error: Looks like "http://mirror.azure.cn/kubernetes/charts-incubator/" is not a valid chart repository or cannot be reached: no API version specified
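
As a quick sanity check, you can fetch a repository's index.yaml before adding it; a usable repo serves an index that starts with an apiVersion field, which is exactly what the error above says is missing. A small sketch:

curl -s http://mirror.azure.cn/kubernetes/charts/index.yaml | head -n 3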

So switch the helm repository to the working one above, http://mirror.azure.cn/kubernetes/charts/:

# Remove the old repo first (skip this if you never added one)
helm repo remove stable
# Add the new repo
helm repo add stable http://mirror.azure.cn/kubernetes/charts/
# Refresh the repo index
helm repo update
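
To confirm the repository switch took effect, list the repos and search for the chart (on Helm 2 the subcommand is just helm search; on Helm 3 it would be helm search repo):

helm repo list
helm search openebs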

Now install OpenEBS:

helm install --namespace openebs --name openebs stable/openebs --version 1.5.0
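
The command above uses Helm 2 syntax (--name). On Helm 3 that flag no longer exists, and the equivalent invocation would look roughly like this:

helm install openebs stable/openebs --version 1.5.0 --namespace openebs --create-namespace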

Running kubectl get pods --all-namespaces afterwards, the OpenEBS pods were still stuck in ImagePullBackOff or ErrImagePull.

kubectl get pods --all-namespaces
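
To see the exact reason for the failure, describe one of the failing pods; the Events section at the bottom shows which image and registry it was trying to pull (the pod name below is only a placeholder, use the name from your own output):

kubectl -n openebs describe pod maya-apiserver-xxxxxxxxxx-xxxxx
kubectl -n openebs get events --sort-by='.lastTimestamp'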

And helm search showed that the repository does not contain an openebs chart at version 1.5.0 at all.

helm search

So switching to the new repository does not solve the problem either; we still need another way to get the images onto the machines.

The official documentation offers two ways to install OpenEBS: through helm, or by applying the operator manifest with kubectl. We will take the manifest route.
First, download openebs-operator-1.5.0.yaml from https://openebs.github.io/charts/openebs-operator-1.5.0.yaml.

Note: the link https://openebs.github.io/charts/openebs-operator-1.5.0.yaml is no longer available.

Contents of openebs-operator-1.5.0.yaml:

# This manifest deploys the OpenEBS control plane components, with associated CRs & RBAC rules
# NOTE: On GKE, deploy the openebs-operator.yaml in admin context# Create the OpenEBS namespace
apiVersion: v1
kind: Namespace
metadata:name: openebs
---
# Create Maya Service Account
apiVersion: v1
kind: ServiceAccount
metadata:name: openebs-maya-operatornamespace: openebs
---
# Define Role that allows operations on K8s pods/deployments
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:name: openebs-maya-operator
rules:
- apiGroups: ["*"]resources: ["nodes", "nodes/proxy"]verbs: ["*"]
- apiGroups: ["*"]resources: ["namespaces", "services", "pods", "pods/exec", "deployments", "deployments/finalizers", "replicationcontrollers", "replicasets", "events", "endpoints", "configmaps", "secrets", "jobs", "cronjobs"]verbs: ["*"]
- apiGroups: ["*"]resources: ["statefulsets", "daemonsets"]verbs: ["*"]
- apiGroups: ["*"]resources: ["resourcequotas", "limitranges"]verbs: ["list", "watch"]
- apiGroups: ["*"]resources: ["ingresses", "horizontalpodautoscalers", "verticalpodautoscalers", "poddisruptionbudgets", "certificatesigningrequests"]verbs: ["list", "watch"]
- apiGroups: ["*"]resources: ["storageclasses", "persistentvolumeclaims", "persistentvolumes"]verbs: ["*"]
- apiGroups: ["volumesnapshot.external-storage.k8s.io"]resources: ["volumesnapshots", "volumesnapshotdatas"]verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: ["apiextensions.k8s.io"]resources: ["customresourcedefinitions"]verbs: [ "get", "list", "create", "update", "delete", "patch"]
- apiGroups: ["*"]resources: [ "disks", "blockdevices", "blockdeviceclaims"]verbs: ["*" ]
- apiGroups: ["*"]resources: [ "cstorpoolclusters", "storagepoolclaims", "storagepoolclaims/finalizers", "cstorpoolclusters/finalizers", "storagepools"]verbs: ["*" ]
- apiGroups: ["*"]resources: [ "castemplates", "runtasks"]verbs: ["*" ]
- apiGroups: ["*"]resources: [ "cstorpools", "cstorpools/finalizers", "cstorvolumereplicas", "cstorvolumes", "cstorvolumeclaims"]verbs: ["*" ]
- apiGroups: ["*"]resources: [ "cstorpoolinstances", "cstorpoolinstances/finalizers"]verbs: ["*" ]
- apiGroups: ["*"]resources: [ "cstorbackups", "cstorrestores", "cstorcompletedbackups"]verbs: ["*" ]
- apiGroups: ["coordination.k8s.io"]resources: ["leases"]verbs: ["get", "watch", "list", "delete", "update", "create"]
- apiGroups: ["admissionregistration.k8s.io"]resources: ["validatingwebhookconfigurations", "mutatingwebhookconfigurations"]verbs: ["get", "create", "list", "delete", "update", "patch"]
- nonResourceURLs: ["/metrics"]verbs: ["get"]
- apiGroups: ["*"]resources: [ "upgradetasks"]verbs: ["*" ]
---
# Bind the Service Account with the Role Privileges.
# TODO: Check if default account also needs to be there
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:name: openebs-maya-operator
subjects:
- kind: ServiceAccountname: openebs-maya-operatornamespace: openebs
roleRef:kind: ClusterRolename: openebs-maya-operatorapiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1
kind: Deployment
metadata:name: maya-apiservernamespace: openebslabels:name: maya-apiserveropenebs.io/component-name: maya-apiserveropenebs.io/version: 1.5.0
spec:selector:matchLabels:name: maya-apiserveropenebs.io/component-name: maya-apiserverreplicas: 1strategy:type: RecreaterollingUpdate: nulltemplate:metadata:labels:name: maya-apiserveropenebs.io/component-name: maya-apiserveropenebs.io/version: 1.5.0spec:serviceAccountName: openebs-maya-operatorcontainers:- name: maya-apiserverimagePullPolicy: IfNotPresentimage: quay.io/openebs/m-apiserver:1.5.0ports:- containerPort: 5656env:# OPENEBS_IO_KUBE_CONFIG enables maya api service to connect to K8s# based on this config. This is ignored if empty.# This is supported for maya api server version 0.5.2 onwards#- name: OPENEBS_IO_KUBE_CONFIG#  value: "/home/ubuntu/.kube/config"# OPENEBS_IO_K8S_MASTER enables maya api service to connect to K8s# based on this address. This is ignored if empty.# This is supported for maya api server version 0.5.2 onwards#- name: OPENEBS_IO_K8S_MASTER#  value: "http://172.28.128.3:8080"# OPENEBS_NAMESPACE provides the namespace of this deployment as an# environment variable- name: OPENEBS_NAMESPACEvalueFrom:fieldRef:fieldPath: metadata.namespace# OPENEBS_SERVICE_ACCOUNT provides the service account of this pod as# environment variable- name: OPENEBS_SERVICE_ACCOUNTvalueFrom:fieldRef:fieldPath: spec.serviceAccountName# OPENEBS_MAYA_POD_NAME provides the name of this pod as# environment variable- name: OPENEBS_MAYA_POD_NAMEvalueFrom:fieldRef:fieldPath: metadata.name# If OPENEBS_IO_CREATE_DEFAULT_STORAGE_CONFIG is false then OpenEBS default# storageclass and storagepool will not be created.- name: OPENEBS_IO_CREATE_DEFAULT_STORAGE_CONFIGvalue: "true"# OPENEBS_IO_INSTALL_DEFAULT_CSTOR_SPARSE_POOL decides whether default cstor sparse pool should be# configured as a part of openebs installation.# If "true" a default cstor sparse pool will be configured, if "false" it will not be configured.# This value takes effect only if OPENEBS_IO_CREATE_DEFAULT_STORAGE_CONFIG# is set to true- name: OPENEBS_IO_INSTALL_DEFAULT_CSTOR_SPARSE_POOLvalue: "false"# OPENEBS_IO_CSTOR_TARGET_DIR can be used to specify the hostpath# to be used for saving the shared content between the side cars# of cstor volume pod.# The default path used is /var/openebs/sparse#- name: OPENEBS_IO_CSTOR_TARGET_DIR#  value: "/var/openebs/sparse"# OPENEBS_IO_CSTOR_POOL_SPARSE_DIR can be used to specify the hostpath# to be used for saving the shared content between the side cars# of cstor pool pod. 
This ENV is also used to indicate the location# of the sparse devices.# The default path used is /var/openebs/sparse#- name: OPENEBS_IO_CSTOR_POOL_SPARSE_DIR#  value: "/var/openebs/sparse"# OPENEBS_IO_JIVA_POOL_DIR can be used to specify the hostpath# to be used for default Jiva StoragePool loaded by OpenEBS# The default path used is /var/openebs# This value takes effect only if OPENEBS_IO_CREATE_DEFAULT_STORAGE_CONFIG# is set to true#- name: OPENEBS_IO_JIVA_POOL_DIR#  value: "/var/openebs"# OPENEBS_IO_LOCALPV_HOSTPATH_DIR can be used to specify the hostpath# to be used for default openebs-hostpath storageclass loaded by OpenEBS# The default path used is /var/openebs/local# This value takes effect only if OPENEBS_IO_CREATE_DEFAULT_STORAGE_CONFIG# is set to true#- name: OPENEBS_IO_LOCALPV_HOSTPATH_DIR#  value: "/var/openebs/local"- name: OPENEBS_IO_JIVA_CONTROLLER_IMAGEvalue: "quay.io/openebs/jiva:1.5.0"- name: OPENEBS_IO_JIVA_REPLICA_IMAGEvalue: "quay.io/openebs/jiva:1.5.0"- name: OPENEBS_IO_JIVA_REPLICA_COUNTvalue: "3"- name: OPENEBS_IO_CSTOR_TARGET_IMAGEvalue: "quay.io/openebs/cstor-istgt:1.5.0"- name: OPENEBS_IO_CSTOR_POOL_IMAGEvalue: "quay.io/openebs/cstor-pool:1.5.0"- name: OPENEBS_IO_CSTOR_POOL_MGMT_IMAGEvalue: "quay.io/openebs/cstor-pool-mgmt:1.5.0"- name: OPENEBS_IO_CSTOR_VOLUME_MGMT_IMAGEvalue: "quay.io/openebs/cstor-volume-mgmt:1.5.0"- name: OPENEBS_IO_VOLUME_MONITOR_IMAGEvalue: "quay.io/openebs/m-exporter:1.5.0"- name: OPENEBS_IO_CSTOR_POOL_EXPORTER_IMAGEvalue: "quay.io/openebs/m-exporter:1.5.0"- name: OPENEBS_IO_HELPER_IMAGEvalue: "quay.io/openebs/linux-utils:1.5.0"# OPENEBS_IO_ENABLE_ANALYTICS if set to true sends anonymous usage# events to Google Analytics- name: OPENEBS_IO_ENABLE_ANALYTICSvalue: "true"- name: OPENEBS_IO_INSTALLER_TYPEvalue: "openebs-operator"# OPENEBS_IO_ANALYTICS_PING_INTERVAL can be used to specify the duration (in hours)# for periodic ping events sent to Google Analytics.# Default is 24h.# Minimum is 1h. You can convert this to weekly by setting 168h#- name: OPENEBS_IO_ANALYTICS_PING_INTERVAL#  value: "24h"livenessProbe:exec:command:- /usr/local/bin/mayactl- versioninitialDelaySeconds: 30periodSeconds: 60readinessProbe:exec:command:- /usr/local/bin/mayactl- versioninitialDelaySeconds: 30periodSeconds: 60
---
apiVersion: v1
kind: Service
metadata:name: maya-apiserver-servicenamespace: openebslabels:openebs.io/component-name: maya-apiserver-svc
spec:ports:- name: apiport: 5656protocol: TCPtargetPort: 5656selector:name: maya-apiserversessionAffinity: None
---
apiVersion: apps/v1
kind: Deployment
metadata:name: openebs-provisionernamespace: openebslabels:name: openebs-provisioneropenebs.io/component-name: openebs-provisioneropenebs.io/version: 1.5.0
spec:selector:matchLabels:name: openebs-provisioneropenebs.io/component-name: openebs-provisionerreplicas: 1strategy:type: RecreaterollingUpdate: nulltemplate:metadata:labels:name: openebs-provisioneropenebs.io/component-name: openebs-provisioneropenebs.io/version: 1.5.0spec:serviceAccountName: openebs-maya-operatorcontainers:- name: openebs-provisionerimagePullPolicy: IfNotPresentimage: quay.io/openebs/openebs-k8s-provisioner:1.5.0env:# OPENEBS_IO_K8S_MASTER enables openebs provisioner to connect to K8s# based on this address. This is ignored if empty.# This is supported for openebs provisioner version 0.5.2 onwards#- name: OPENEBS_IO_K8S_MASTER#  value: "http://10.128.0.12:8080"# OPENEBS_IO_KUBE_CONFIG enables openebs provisioner to connect to K8s# based on this config. This is ignored if empty.# This is supported for openebs provisioner version 0.5.2 onwards#- name: OPENEBS_IO_KUBE_CONFIG#  value: "/home/ubuntu/.kube/config"- name: NODE_NAMEvalueFrom:fieldRef:fieldPath: spec.nodeName- name: OPENEBS_NAMESPACEvalueFrom:fieldRef:fieldPath: metadata.namespace# OPENEBS_MAYA_SERVICE_NAME provides the maya-apiserver K8s service name,# that provisioner should forward the volume create/delete requests.# If not present, "maya-apiserver-service" will be used for lookup.# This is supported for openebs provisioner version 0.5.3-RC1 onwards#- name: OPENEBS_MAYA_SERVICE_NAME#  value: "maya-apiserver-apiservice"livenessProbe:exec:command:- pgrep- ".*openebs"initialDelaySeconds: 30periodSeconds: 60
---
apiVersion: apps/v1
kind: Deployment
metadata:name: openebs-snapshot-operatornamespace: openebslabels:name: openebs-snapshot-operatoropenebs.io/component-name: openebs-snapshot-operatoropenebs.io/version: 1.5.0
spec:selector:matchLabels:name: openebs-snapshot-operatoropenebs.io/component-name: openebs-snapshot-operatorreplicas: 1strategy:type: Recreatetemplate:metadata:labels:name: openebs-snapshot-operatoropenebs.io/component-name: openebs-snapshot-operatoropenebs.io/version: 1.5.0spec:serviceAccountName: openebs-maya-operatorcontainers:- name: snapshot-controllerimage: quay.io/openebs/snapshot-controller:1.5.0imagePullPolicy: IfNotPresentenv:- name: OPENEBS_NAMESPACEvalueFrom:fieldRef:fieldPath: metadata.namespacelivenessProbe:exec:command:- pgrep- ".*controller"initialDelaySeconds: 30periodSeconds: 60# OPENEBS_MAYA_SERVICE_NAME provides the maya-apiserver K8s service name,# that snapshot controller should forward the snapshot create/delete requests.# If not present, "maya-apiserver-service" will be used for lookup.# This is supported for openebs provisioner version 0.5.3-RC1 onwards#- name: OPENEBS_MAYA_SERVICE_NAME#  value: "maya-apiserver-apiservice"- name: snapshot-provisionerimage: quay.io/openebs/snapshot-provisioner:1.5.0imagePullPolicy: IfNotPresentenv:- name: OPENEBS_NAMESPACEvalueFrom:fieldRef:fieldPath: metadata.namespace# OPENEBS_MAYA_SERVICE_NAME provides the maya-apiserver K8s service name,# that snapshot provisioner  should forward the clone create/delete requests.# If not present, "maya-apiserver-service" will be used for lookup.# This is supported for openebs provisioner version 0.5.3-RC1 onwards#- name: OPENEBS_MAYA_SERVICE_NAME#  value: "maya-apiserver-apiservice"livenessProbe:exec:command:- pgrep- ".*provisioner"initialDelaySeconds: 30periodSeconds: 60
---
# This is the node-disk-manager related config.
# It can be used to customize the disks probes and filters
apiVersion: v1
kind: ConfigMap
metadata:name: openebs-ndm-confignamespace: openebslabels:openebs.io/component-name: ndm-config
data:# udev-probe is default or primary probe which should be enabled to run ndm# filterconfigs contails configs of filters - in their form fo include# and exclude comma separated stringsnode-disk-manager.config: |probeconfigs:- key: udev-probename: udev probestate: true- key: seachest-probename: seachest probestate: false- key: smart-probename: smart probestate: truefilterconfigs:- key: os-disk-exclude-filtername: os disk exclude filterstate: trueexclude: "/,/etc/hosts,/boot"- key: vendor-filtername: vendor filterstate: trueinclude: ""exclude: "CLOUDBYT,OpenEBS"- key: path-filtername: path filterstate: trueinclude: ""exclude: "loop,/dev/fd0,/dev/sr0,/dev/ram,/dev/dm-,/dev/md"
---
apiVersion: apps/v1
kind: DaemonSet
metadata:name: openebs-ndmnamespace: openebslabels:name: openebs-ndmopenebs.io/component-name: ndmopenebs.io/version: 1.5.0
spec:selector:matchLabels:name: openebs-ndmopenebs.io/component-name: ndmupdateStrategy:type: RollingUpdatetemplate:metadata:labels:name: openebs-ndmopenebs.io/component-name: ndmopenebs.io/version: 1.5.0spec:# By default the node-disk-manager will be run on all kubernetes nodes# If you would like to limit this to only some nodes, say the nodes# that have storage attached, you could label those node and use# nodeSelector.## e.g. label the storage nodes with - "openebs.io/nodegroup"="storage-node"# kubectl label node <node-name> "openebs.io/nodegroup"="storage-node"#nodeSelector:#  "openebs.io/nodegroup": "storage-node"serviceAccountName: openebs-maya-operatorhostNetwork: truecontainers:- name: node-disk-managerimage: quay.io/openebs/node-disk-manager-amd64:v0.4.5imagePullPolicy: AlwayssecurityContext:privileged: truevolumeMounts:- name: configmountPath: /host/node-disk-manager.configsubPath: node-disk-manager.configreadOnly: true- name: udevmountPath: /run/udev- name: procmountmountPath: /host/procreadOnly: true- name: sparsepathmountPath: /var/openebs/sparseenv:# namespace in which NDM is installed will be passed to NDM Daemonset# as environment variable- name: NAMESPACEvalueFrom:fieldRef:fieldPath: metadata.namespace# pass hostname as env variable using downward API to the NDM container- name: NODE_NAMEvalueFrom:fieldRef:fieldPath: spec.nodeName# specify the directory where the sparse files need to be created.# if not specified, then sparse files will not be created.- name: SPARSE_FILE_DIRvalue: "/var/openebs/sparse"# Size(bytes) of the sparse file to be created.- name: SPARSE_FILE_SIZEvalue: "10737418240"# Specify the number of sparse files to be created- name: SPARSE_FILE_COUNTvalue: "0"livenessProbe:exec:command:- pgrep- ".*ndm"initialDelaySeconds: 30periodSeconds: 60volumes:- name: configconfigMap:name: openebs-ndm-config- name: udevhostPath:path: /run/udevtype: Directory# mount /proc (to access mount file of process 1 of host) inside container# to read mount-point of disks and partitions- name: procmounthostPath:path: /proctype: Directory- name: sparsepathhostPath:path: /var/openebs/sparse
---
apiVersion: apps/v1
kind: Deployment
metadata:name: openebs-ndm-operatornamespace: openebslabels:name: openebs-ndm-operatoropenebs.io/component-name: ndm-operatoropenebs.io/version: 1.5.0
spec:selector:matchLabels:name: openebs-ndm-operatoropenebs.io/component-name: ndm-operatorreplicas: 1strategy:type: Recreatetemplate:metadata:labels:name: openebs-ndm-operatoropenebs.io/component-name: ndm-operatoropenebs.io/version: 1.5.0spec:serviceAccountName: openebs-maya-operatorcontainers:- name: node-disk-operatorimage: quay.io/openebs/node-disk-operator-amd64:v0.4.5imagePullPolicy: AlwaysreadinessProbe:exec:command:- stat- /tmp/operator-sdk-readyinitialDelaySeconds: 4periodSeconds: 10failureThreshold: 1env:- name: WATCH_NAMESPACEvalueFrom:fieldRef:fieldPath: metadata.namespace- name: POD_NAMEvalueFrom:fieldRef:fieldPath: metadata.name# the service account of the ndm-operator pod- name: SERVICE_ACCOUNTvalueFrom:fieldRef:fieldPath: spec.serviceAccountName- name: OPERATOR_NAMEvalue: "node-disk-operator"- name: CLEANUP_JOB_IMAGEvalue: "quay.io/openebs/linux-utils:1.5.0"
---
apiVersion: apps/v1
kind: Deployment
metadata:name: openebs-admission-servernamespace: openebslabels:app: admission-webhookopenebs.io/component-name: admission-webhookopenebs.io/version: 1.5.0
spec:replicas: 1strategy:type: RecreaterollingUpdate: nullselector:matchLabels:app: admission-webhooktemplate:metadata:labels:app: admission-webhookopenebs.io/component-name: admission-webhookopenebs.io/version: 1.5.0spec:serviceAccountName: openebs-maya-operatorcontainers:- name: admission-webhookimage: quay.io/openebs/admission-server:1.5.0imagePullPolicy: IfNotPresentargs:- -alsologtostderr- -v=2- 2>&1env:- name: OPENEBS_NAMESPACEvalueFrom:fieldRef:fieldPath: metadata.namespace- name: ADMISSION_WEBHOOK_NAMEvalue: "openebs-admission-server"
---
apiVersion: apps/v1
kind: Deployment
metadata:name: openebs-localpv-provisionernamespace: openebslabels:name: openebs-localpv-provisioneropenebs.io/component-name: openebs-localpv-provisioneropenebs.io/version: 1.5.0
spec:selector:matchLabels:name: openebs-localpv-provisioneropenebs.io/component-name: openebs-localpv-provisionerreplicas: 1strategy:type: Recreatetemplate:metadata:labels:name: openebs-localpv-provisioneropenebs.io/component-name: openebs-localpv-provisioneropenebs.io/version: 1.5.0spec:serviceAccountName: openebs-maya-operatorcontainers:- name: openebs-provisioner-hostpathimagePullPolicy: Alwaysimage: quay.io/openebs/provisioner-localpv:1.5.0env:# OPENEBS_IO_K8S_MASTER enables openebs provisioner to connect to K8s# based on this address. This is ignored if empty.# This is supported for openebs provisioner version 0.5.2 onwards#- name: OPENEBS_IO_K8S_MASTER#  value: "http://10.128.0.12:8080"# OPENEBS_IO_KUBE_CONFIG enables openebs provisioner to connect to K8s# based on this config. This is ignored if empty.# This is supported for openebs provisioner version 0.5.2 onwards#- name: OPENEBS_IO_KUBE_CONFIG#  value: "/home/ubuntu/.kube/config"- name: NODE_NAMEvalueFrom:fieldRef:fieldPath: spec.nodeName- name: OPENEBS_NAMESPACEvalueFrom:fieldRef:fieldPath: metadata.namespace# OPENEBS_SERVICE_ACCOUNT provides the service account of this pod as# environment variable- name: OPENEBS_SERVICE_ACCOUNTvalueFrom:fieldRef:fieldPath: spec.serviceAccountName- name: OPENEBS_IO_ENABLE_ANALYTICSvalue: "true"- name: OPENEBS_IO_INSTALLER_TYPEvalue: "openebs-operator"- name: OPENEBS_IO_HELPER_IMAGEvalue: "quay.io/openebs/linux-utils:1.5.0"livenessProbe:exec:command:- pgrep- ".*localpv"initialDelaySeconds: 30periodSeconds: 60
---

Looking through the file, every OpenEBS image is pulled from quay.io. That registry is blocked from here, so none of the pulls succeed:

containers:
- name: maya-apiserver
  imagePullPolicy: IfNotPresent
  image: quay.io/openebs/m-apiserver:1.5.0
  ports:
  - containerPort: 5656

containers:
- name: openebs-provisioner
  imagePullPolicy: IfNotPresent
  image: quay.io/openebs/openebs-k8s-provisioner:1.5.0

containers:
- name: snapshot-controller
  image: quay.io/openebs/snapshot-controller:1.5.0
  imagePullPolicy: IfNotPresent

containers:
- name: node-disk-manager
  image: quay.io/openebs/node-disk-manager-amd64:v0.4.5
  imagePullPolicy: Always

containers:
- name: node-disk-operator
  image: quay.io/openebs/node-disk-operator-amd64:v0.4.5
  imagePullPolicy: Always

containers:
- name: admission-webhook
  image: quay.io/openebs/admission-server:1.5.0
  imagePullPolicy: IfNotPresent

containers:
- name: openebs-provisioner-hostpath
  imagePullPolicy: Always
  image: quay.io/openebs/provisioner-localpv:1.5.0
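
If you would rather extract the full image list from the manifest yourself instead of copying it from above, a simple grep over the downloaded file is enough:

grep -n 'image:' openebs-operator-1.5.0.yaml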

So check Docker Hub for OpenEBS images instead - they are there.

Clicking into one of the repositories confirms that the 1.5.0 tag exists as well.

So pull every image that openebs-operator-1.5.0.yaml needs with docker pull. You can copy the commands below directly; if any pull fails, check Docker Hub to see whether that particular image really exists there. (The manifest also uses openebs/snapshot-provisioner:1.5.0 in the snapshot-operator Deployment, so it is included in the list.)

docker pull openebs/m-apiserver:1.5.0
docker pull openebs/openebs-k8s-provisioner:1.5.0
docker pull openebs/snapshot-controller:1.5.0
docker pull openebs/snapshot-provisioner:1.5.0
docker pull openebs/node-disk-manager-amd64:v0.4.5
docker pull openebs/node-disk-operator-amd64:v0.4.5
docker pull openebs/admission-server:1.5.0
docker pull openebs/provisioner-localpv:1.5.0

Once all the images are local, edit openebs-operator-1.5.0.yaml and replace every occurrence of image: quay.io/ with image: so that the references point at Docker Hub and the copies already pulled to the node are used.
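
A one-line way to do the replacement is sed (just a sketch; -i.bak keeps a backup of the original file):

sed -i.bak 's#image: quay.io/#image: #g' openebs-operator-1.5.0.yaml

Note that this only rewrites the image: fields. The value: "quay.io/openebs/..." env entries (jiva, cstor, m-exporter, linux-utils) are left untouched, which matters later only if you create Jiva or cStor volumes, or run cleanup jobs, that use those images.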

Contents of the modified openebs-operator-1.5.0.yaml:

# This manifest deploys the OpenEBS control plane components, with associated CRs & RBAC rules
# NOTE: On GKE, deploy the openebs-operator.yaml in admin context# Create the OpenEBS namespace
apiVersion: v1
kind: Namespace
metadata:name: openebs
---
# Create Maya Service Account
apiVersion: v1
kind: ServiceAccount
metadata:name: openebs-maya-operatornamespace: openebs
---
# Define Role that allows operations on K8s pods/deployments
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:name: openebs-maya-operator
rules:
- apiGroups: ["*"]resources: ["nodes", "nodes/proxy"]verbs: ["*"]
- apiGroups: ["*"]resources: ["namespaces", "services", "pods", "pods/exec", "deployments", "deployments/finalizers", "replicationcontrollers", "replicasets", "events", "endpoints", "configmaps", "secrets", "jobs", "cronjobs"]verbs: ["*"]
- apiGroups: ["*"]resources: ["statefulsets", "daemonsets"]verbs: ["*"]
- apiGroups: ["*"]resources: ["resourcequotas", "limitranges"]verbs: ["list", "watch"]
- apiGroups: ["*"]resources: ["ingresses", "horizontalpodautoscalers", "verticalpodautoscalers", "poddisruptionbudgets", "certificatesigningrequests"]verbs: ["list", "watch"]
- apiGroups: ["*"]resources: ["storageclasses", "persistentvolumeclaims", "persistentvolumes"]verbs: ["*"]
- apiGroups: ["volumesnapshot.external-storage.k8s.io"]resources: ["volumesnapshots", "volumesnapshotdatas"]verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: ["apiextensions.k8s.io"]resources: ["customresourcedefinitions"]verbs: [ "get", "list", "create", "update", "delete", "patch"]
- apiGroups: ["*"]resources: [ "disks", "blockdevices", "blockdeviceclaims"]verbs: ["*" ]
- apiGroups: ["*"]resources: [ "cstorpoolclusters", "storagepoolclaims", "storagepoolclaims/finalizers", "cstorpoolclusters/finalizers", "storagepools"]verbs: ["*" ]
- apiGroups: ["*"]resources: [ "castemplates", "runtasks"]verbs: ["*" ]
- apiGroups: ["*"]resources: [ "cstorpools", "cstorpools/finalizers", "cstorvolumereplicas", "cstorvolumes", "cstorvolumeclaims"]verbs: ["*" ]
- apiGroups: ["*"]resources: [ "cstorpoolinstances", "cstorpoolinstances/finalizers"]verbs: ["*" ]
- apiGroups: ["*"]resources: [ "cstorbackups", "cstorrestores", "cstorcompletedbackups"]verbs: ["*" ]
- apiGroups: ["coordination.k8s.io"]resources: ["leases"]verbs: ["get", "watch", "list", "delete", "update", "create"]
- apiGroups: ["admissionregistration.k8s.io"]resources: ["validatingwebhookconfigurations", "mutatingwebhookconfigurations"]verbs: ["get", "create", "list", "delete", "update", "patch"]
- nonResourceURLs: ["/metrics"]verbs: ["get"]
- apiGroups: ["*"]resources: [ "upgradetasks"]verbs: ["*" ]
---
# Bind the Service Account with the Role Privileges.
# TODO: Check if default account also needs to be there
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:name: openebs-maya-operator
subjects:
- kind: ServiceAccountname: openebs-maya-operatornamespace: openebs
roleRef:kind: ClusterRolename: openebs-maya-operatorapiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1
kind: Deployment
metadata:name: maya-apiservernamespace: openebslabels:name: maya-apiserveropenebs.io/component-name: maya-apiserveropenebs.io/version: 1.5.0
spec:selector:matchLabels:name: maya-apiserveropenebs.io/component-name: maya-apiserverreplicas: 1strategy:type: RecreaterollingUpdate: nulltemplate:metadata:labels:name: maya-apiserveropenebs.io/component-name: maya-apiserveropenebs.io/version: 1.5.0spec:serviceAccountName: openebs-maya-operatorcontainers:- name: maya-apiserverimagePullPolicy: IfNotPresentimage: openebs/m-apiserver:1.5.0ports:- containerPort: 5656env:# OPENEBS_IO_KUBE_CONFIG enables maya api service to connect to K8s# based on this config. This is ignored if empty.# This is supported for maya api server version 0.5.2 onwards#- name: OPENEBS_IO_KUBE_CONFIG#  value: "/home/ubuntu/.kube/config"# OPENEBS_IO_K8S_MASTER enables maya api service to connect to K8s# based on this address. This is ignored if empty.# This is supported for maya api server version 0.5.2 onwards#- name: OPENEBS_IO_K8S_MASTER#  value: "http://172.28.128.3:8080"# OPENEBS_NAMESPACE provides the namespace of this deployment as an# environment variable- name: OPENEBS_NAMESPACEvalueFrom:fieldRef:fieldPath: metadata.namespace# OPENEBS_SERVICE_ACCOUNT provides the service account of this pod as# environment variable- name: OPENEBS_SERVICE_ACCOUNTvalueFrom:fieldRef:fieldPath: spec.serviceAccountName# OPENEBS_MAYA_POD_NAME provides the name of this pod as# environment variable- name: OPENEBS_MAYA_POD_NAMEvalueFrom:fieldRef:fieldPath: metadata.name# If OPENEBS_IO_CREATE_DEFAULT_STORAGE_CONFIG is false then OpenEBS default# storageclass and storagepool will not be created.- name: OPENEBS_IO_CREATE_DEFAULT_STORAGE_CONFIGvalue: "true"# OPENEBS_IO_INSTALL_DEFAULT_CSTOR_SPARSE_POOL decides whether default cstor sparse pool should be# configured as a part of openebs installation.# If "true" a default cstor sparse pool will be configured, if "false" it will not be configured.# This value takes effect only if OPENEBS_IO_CREATE_DEFAULT_STORAGE_CONFIG# is set to true- name: OPENEBS_IO_INSTALL_DEFAULT_CSTOR_SPARSE_POOLvalue: "false"# OPENEBS_IO_CSTOR_TARGET_DIR can be used to specify the hostpath# to be used for saving the shared content between the side cars# of cstor volume pod.# The default path used is /var/openebs/sparse#- name: OPENEBS_IO_CSTOR_TARGET_DIR#  value: "/var/openebs/sparse"# OPENEBS_IO_CSTOR_POOL_SPARSE_DIR can be used to specify the hostpath# to be used for saving the shared content between the side cars# of cstor pool pod. 
This ENV is also used to indicate the location# of the sparse devices.# The default path used is /var/openebs/sparse#- name: OPENEBS_IO_CSTOR_POOL_SPARSE_DIR#  value: "/var/openebs/sparse"# OPENEBS_IO_JIVA_POOL_DIR can be used to specify the hostpath# to be used for default Jiva StoragePool loaded by OpenEBS# The default path used is /var/openebs# This value takes effect only if OPENEBS_IO_CREATE_DEFAULT_STORAGE_CONFIG# is set to true#- name: OPENEBS_IO_JIVA_POOL_DIR#  value: "/var/openebs"# OPENEBS_IO_LOCALPV_HOSTPATH_DIR can be used to specify the hostpath# to be used for default openebs-hostpath storageclass loaded by OpenEBS# The default path used is /var/openebs/local# This value takes effect only if OPENEBS_IO_CREATE_DEFAULT_STORAGE_CONFIG# is set to true#- name: OPENEBS_IO_LOCALPV_HOSTPATH_DIR#  value: "/var/openebs/local"- name: OPENEBS_IO_JIVA_CONTROLLER_IMAGEvalue: "quay.io/openebs/jiva:1.5.0"- name: OPENEBS_IO_JIVA_REPLICA_IMAGEvalue: "quay.io/openebs/jiva:1.5.0"- name: OPENEBS_IO_JIVA_REPLICA_COUNTvalue: "3"- name: OPENEBS_IO_CSTOR_TARGET_IMAGEvalue: "quay.io/openebs/cstor-istgt:1.5.0"- name: OPENEBS_IO_CSTOR_POOL_IMAGEvalue: "quay.io/openebs/cstor-pool:1.5.0"- name: OPENEBS_IO_CSTOR_POOL_MGMT_IMAGEvalue: "quay.io/openebs/cstor-pool-mgmt:1.5.0"- name: OPENEBS_IO_CSTOR_VOLUME_MGMT_IMAGEvalue: "quay.io/openebs/cstor-volume-mgmt:1.5.0"- name: OPENEBS_IO_VOLUME_MONITOR_IMAGEvalue: "quay.io/openebs/m-exporter:1.5.0"- name: OPENEBS_IO_CSTOR_POOL_EXPORTER_IMAGEvalue: "quay.io/openebs/m-exporter:1.5.0"- name: OPENEBS_IO_HELPER_IMAGEvalue: "quay.io/openebs/linux-utils:1.5.0"# OPENEBS_IO_ENABLE_ANALYTICS if set to true sends anonymous usage# events to Google Analytics- name: OPENEBS_IO_ENABLE_ANALYTICSvalue: "true"- name: OPENEBS_IO_INSTALLER_TYPEvalue: "openebs-operator"# OPENEBS_IO_ANALYTICS_PING_INTERVAL can be used to specify the duration (in hours)# for periodic ping events sent to Google Analytics.# Default is 24h.# Minimum is 1h. You can convert this to weekly by setting 168h#- name: OPENEBS_IO_ANALYTICS_PING_INTERVAL#  value: "24h"livenessProbe:exec:command:- /usr/local/bin/mayactl- versioninitialDelaySeconds: 30periodSeconds: 60readinessProbe:exec:command:- /usr/local/bin/mayactl- versioninitialDelaySeconds: 30periodSeconds: 60
---
apiVersion: v1
kind: Service
metadata:name: maya-apiserver-servicenamespace: openebslabels:openebs.io/component-name: maya-apiserver-svc
spec:ports:- name: apiport: 5656protocol: TCPtargetPort: 5656selector:name: maya-apiserversessionAffinity: None
---
apiVersion: apps/v1
kind: Deployment
metadata:name: openebs-provisionernamespace: openebslabels:name: openebs-provisioneropenebs.io/component-name: openebs-provisioneropenebs.io/version: 1.5.0
spec:selector:matchLabels:name: openebs-provisioneropenebs.io/component-name: openebs-provisionerreplicas: 1strategy:type: RecreaterollingUpdate: nulltemplate:metadata:labels:name: openebs-provisioneropenebs.io/component-name: openebs-provisioneropenebs.io/version: 1.5.0spec:serviceAccountName: openebs-maya-operatorcontainers:- name: openebs-provisionerimagePullPolicy: IfNotPresentimage: openebs/openebs-k8s-provisioner:1.5.0env:# OPENEBS_IO_K8S_MASTER enables openebs provisioner to connect to K8s# based on this address. This is ignored if empty.# This is supported for openebs provisioner version 0.5.2 onwards#- name: OPENEBS_IO_K8S_MASTER#  value: "http://10.128.0.12:8080"# OPENEBS_IO_KUBE_CONFIG enables openebs provisioner to connect to K8s# based on this config. This is ignored if empty.# This is supported for openebs provisioner version 0.5.2 onwards#- name: OPENEBS_IO_KUBE_CONFIG#  value: "/home/ubuntu/.kube/config"- name: NODE_NAMEvalueFrom:fieldRef:fieldPath: spec.nodeName- name: OPENEBS_NAMESPACEvalueFrom:fieldRef:fieldPath: metadata.namespace# OPENEBS_MAYA_SERVICE_NAME provides the maya-apiserver K8s service name,# that provisioner should forward the volume create/delete requests.# If not present, "maya-apiserver-service" will be used for lookup.# This is supported for openebs provisioner version 0.5.3-RC1 onwards#- name: OPENEBS_MAYA_SERVICE_NAME#  value: "maya-apiserver-apiservice"livenessProbe:exec:command:- pgrep- ".*openebs"initialDelaySeconds: 30periodSeconds: 60
---
apiVersion: apps/v1
kind: Deployment
metadata:name: openebs-snapshot-operatornamespace: openebslabels:name: openebs-snapshot-operatoropenebs.io/component-name: openebs-snapshot-operatoropenebs.io/version: 1.5.0
spec:selector:matchLabels:name: openebs-snapshot-operatoropenebs.io/component-name: openebs-snapshot-operatorreplicas: 1strategy:type: Recreatetemplate:metadata:labels:name: openebs-snapshot-operatoropenebs.io/component-name: openebs-snapshot-operatoropenebs.io/version: 1.5.0spec:serviceAccountName: openebs-maya-operatorcontainers:- name: snapshot-controllerimage: openebs/snapshot-controller:1.5.0imagePullPolicy: IfNotPresentenv:- name: OPENEBS_NAMESPACEvalueFrom:fieldRef:fieldPath: metadata.namespacelivenessProbe:exec:command:- pgrep- ".*controller"initialDelaySeconds: 30periodSeconds: 60# OPENEBS_MAYA_SERVICE_NAME provides the maya-apiserver K8s service name,# that snapshot controller should forward the snapshot create/delete requests.# If not present, "maya-apiserver-service" will be used for lookup.# This is supported for openebs provisioner version 0.5.3-RC1 onwards#- name: OPENEBS_MAYA_SERVICE_NAME#  value: "maya-apiserver-apiservice"- name: snapshot-provisionerimage: openebs/snapshot-provisioner:1.5.0imagePullPolicy: IfNotPresentenv:- name: OPENEBS_NAMESPACEvalueFrom:fieldRef:fieldPath: metadata.namespace# OPENEBS_MAYA_SERVICE_NAME provides the maya-apiserver K8s service name,# that snapshot provisioner  should forward the clone create/delete requests.# If not present, "maya-apiserver-service" will be used for lookup.# This is supported for openebs provisioner version 0.5.3-RC1 onwards#- name: OPENEBS_MAYA_SERVICE_NAME#  value: "maya-apiserver-apiservice"livenessProbe:exec:command:- pgrep- ".*provisioner"initialDelaySeconds: 30periodSeconds: 60
---
# This is the node-disk-manager related config.
# It can be used to customize the disks probes and filters
apiVersion: v1
kind: ConfigMap
metadata:name: openebs-ndm-confignamespace: openebslabels:openebs.io/component-name: ndm-config
data:# udev-probe is default or primary probe which should be enabled to run ndm# filterconfigs contails configs of filters - in their form fo include# and exclude comma separated stringsnode-disk-manager.config: |probeconfigs:- key: udev-probename: udev probestate: true- key: seachest-probename: seachest probestate: false- key: smart-probename: smart probestate: truefilterconfigs:- key: os-disk-exclude-filtername: os disk exclude filterstate: trueexclude: "/,/etc/hosts,/boot"- key: vendor-filtername: vendor filterstate: trueinclude: ""exclude: "CLOUDBYT,OpenEBS"- key: path-filtername: path filterstate: trueinclude: ""exclude: "loop,/dev/fd0,/dev/sr0,/dev/ram,/dev/dm-,/dev/md"
---
apiVersion: apps/v1
kind: DaemonSet
metadata:name: openebs-ndmnamespace: openebslabels:name: openebs-ndmopenebs.io/component-name: ndmopenebs.io/version: 1.5.0
spec:selector:matchLabels:name: openebs-ndmopenebs.io/component-name: ndmupdateStrategy:type: RollingUpdatetemplate:metadata:labels:name: openebs-ndmopenebs.io/component-name: ndmopenebs.io/version: 1.5.0spec:# By default the node-disk-manager will be run on all kubernetes nodes# If you would like to limit this to only some nodes, say the nodes# that have storage attached, you could label those node and use# nodeSelector.## e.g. label the storage nodes with - "openebs.io/nodegroup"="storage-node"# kubectl label node <node-name> "openebs.io/nodegroup"="storage-node"#nodeSelector:#  "openebs.io/nodegroup": "storage-node"serviceAccountName: openebs-maya-operatorhostNetwork: truecontainers:- name: node-disk-managerimage: openebs/node-disk-manager-amd64:v0.4.5imagePullPolicy: AlwayssecurityContext:privileged: truevolumeMounts:- name: configmountPath: /host/node-disk-manager.configsubPath: node-disk-manager.configreadOnly: true- name: udevmountPath: /run/udev- name: procmountmountPath: /host/procreadOnly: true- name: sparsepathmountPath: /var/openebs/sparseenv:# namespace in which NDM is installed will be passed to NDM Daemonset# as environment variable- name: NAMESPACEvalueFrom:fieldRef:fieldPath: metadata.namespace# pass hostname as env variable using downward API to the NDM container- name: NODE_NAMEvalueFrom:fieldRef:fieldPath: spec.nodeName# specify the directory where the sparse files need to be created.# if not specified, then sparse files will not be created.- name: SPARSE_FILE_DIRvalue: "/var/openebs/sparse"# Size(bytes) of the sparse file to be created.- name: SPARSE_FILE_SIZEvalue: "10737418240"# Specify the number of sparse files to be created- name: SPARSE_FILE_COUNTvalue: "0"livenessProbe:exec:command:- pgrep- ".*ndm"initialDelaySeconds: 30periodSeconds: 60volumes:- name: configconfigMap:name: openebs-ndm-config- name: udevhostPath:path: /run/udevtype: Directory# mount /proc (to access mount file of process 1 of host) inside container# to read mount-point of disks and partitions- name: procmounthostPath:path: /proctype: Directory- name: sparsepathhostPath:path: /var/openebs/sparse
---
apiVersion: apps/v1
kind: Deployment
metadata:name: openebs-ndm-operatornamespace: openebslabels:name: openebs-ndm-operatoropenebs.io/component-name: ndm-operatoropenebs.io/version: 1.5.0
spec:selector:matchLabels:name: openebs-ndm-operatoropenebs.io/component-name: ndm-operatorreplicas: 1strategy:type: Recreatetemplate:metadata:labels:name: openebs-ndm-operatoropenebs.io/component-name: ndm-operatoropenebs.io/version: 1.5.0spec:serviceAccountName: openebs-maya-operatorcontainers:- name: node-disk-operatorimage: openebs/node-disk-operator-amd64:v0.4.5imagePullPolicy: AlwaysreadinessProbe:exec:command:- stat- /tmp/operator-sdk-readyinitialDelaySeconds: 4periodSeconds: 10failureThreshold: 1env:- name: WATCH_NAMESPACEvalueFrom:fieldRef:fieldPath: metadata.namespace- name: POD_NAMEvalueFrom:fieldRef:fieldPath: metadata.name# the service account of the ndm-operator pod- name: SERVICE_ACCOUNTvalueFrom:fieldRef:fieldPath: spec.serviceAccountName- name: OPERATOR_NAMEvalue: "node-disk-operator"- name: CLEANUP_JOB_IMAGEvalue: "quay.io/openebs/linux-utils:1.5.0"
---
apiVersion: apps/v1
kind: Deployment
metadata:name: openebs-admission-servernamespace: openebslabels:app: admission-webhookopenebs.io/component-name: admission-webhookopenebs.io/version: 1.5.0
spec:replicas: 1strategy:type: RecreaterollingUpdate: nullselector:matchLabels:app: admission-webhooktemplate:metadata:labels:app: admission-webhookopenebs.io/component-name: admission-webhookopenebs.io/version: 1.5.0spec:serviceAccountName: openebs-maya-operatorcontainers:- name: admission-webhookimage: openebs/admission-server:1.5.0imagePullPolicy: IfNotPresentargs:- -alsologtostderr- -v=2- 2>&1env:- name: OPENEBS_NAMESPACEvalueFrom:fieldRef:fieldPath: metadata.namespace- name: ADMISSION_WEBHOOK_NAMEvalue: "openebs-admission-server"
---
apiVersion: apps/v1
kind: Deployment
metadata:name: openebs-localpv-provisionernamespace: openebslabels:name: openebs-localpv-provisioneropenebs.io/component-name: openebs-localpv-provisioneropenebs.io/version: 1.5.0
spec:selector:matchLabels:name: openebs-localpv-provisioneropenebs.io/component-name: openebs-localpv-provisionerreplicas: 1strategy:type: Recreatetemplate:metadata:labels:name: openebs-localpv-provisioneropenebs.io/component-name: openebs-localpv-provisioneropenebs.io/version: 1.5.0spec:serviceAccountName: openebs-maya-operatorcontainers:- name: openebs-provisioner-hostpathimagePullPolicy: Alwaysimage: openebs/provisioner-localpv:1.5.0env:# OPENEBS_IO_K8S_MASTER enables openebs provisioner to connect to K8s# based on this address. This is ignored if empty.# This is supported for openebs provisioner version 0.5.2 onwards#- name: OPENEBS_IO_K8S_MASTER#  value: "http://10.128.0.12:8080"# OPENEBS_IO_KUBE_CONFIG enables openebs provisioner to connect to K8s# based on this config. This is ignored if empty.# This is supported for openebs provisioner version 0.5.2 onwards#- name: OPENEBS_IO_KUBE_CONFIG#  value: "/home/ubuntu/.kube/config"- name: NODE_NAMEvalueFrom:fieldRef:fieldPath: spec.nodeName- name: OPENEBS_NAMESPACEvalueFrom:fieldRef:fieldPath: metadata.namespace# OPENEBS_SERVICE_ACCOUNT provides the service account of this pod as# environment variable- name: OPENEBS_SERVICE_ACCOUNTvalueFrom:fieldRef:fieldPath: spec.serviceAccountName- name: OPENEBS_IO_ENABLE_ANALYTICSvalue: "true"- name: OPENEBS_IO_INSTALLER_TYPEvalue: "openebs-operator"- name: OPENEBS_IO_HELPER_IMAGEvalue: "quay.io/openebs/linux-utils:1.5.0"livenessProbe:exec:command:- pgrep- ".*localpv"initialDelaySeconds: 30periodSeconds: 60
---

With that done, install with kubectl apply:

[root@k8snode1 software]# kubectl apply -f openebs-operator-1.5.0.yaml
namespace/openebs created
serviceaccount/openebs-maya-operator created
clusterrole.rbac.authorization.k8s.io/openebs-maya-operator created
clusterrolebinding.rbac.authorization.k8s.io/openebs-maya-operator created
deployment.apps/maya-apiserver created
service/maya-apiserver-service created
deployment.apps/openebs-provisioner created
deployment.apps/openebs-snapshot-operator created
configmap/openebs-ndm-config created
daemonset.apps/openebs-ndm created
deployment.apps/openebs-ndm-operator created
deployment.apps/openebs-admission-server created
deployment.apps/openebs-localpv-provisioner created

However, kubectl get pods --all-namespaces still showed the pods failing with ImagePullBackOff or ErrImagePull.

For the reason, see the article 《k8s所在的节点docker有镜像 为什么还要从远程拉取》 (why a Kubernetes node still pulls from the remote registry even though its local Docker already has the image).

In my case I had only pulled the images on the master node, not on the worker nodes. Pulling the same set of images on every worker node fixed it.
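
If the worker nodes cannot reach Docker Hub at all, another option is to export the images from the master with docker save and load them on each worker (node1 below is a placeholder hostname):

docker save -o openebs-images.tar \
  openebs/m-apiserver:1.5.0 \
  openebs/openebs-k8s-provisioner:1.5.0 \
  openebs/snapshot-controller:1.5.0 \
  openebs/snapshot-provisioner:1.5.0 \
  openebs/node-disk-manager-amd64:v0.4.5 \
  openebs/node-disk-operator-amd64:v0.4.5 \
  openebs/admission-server:1.5.0 \
  openebs/provisioner-localpv:1.5.0
scp openebs-images.tar root@node1:/tmp/
ssh root@node1 'docker load -i /tmp/openebs-images.tar'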


An alternative approach:

Yesterday even Docker Hub was unreachable for me. When docker pull does not work at all, you can find the corresponding image tar files online and load them locally with docker load.

Example:

docker load -i openebs-k8s-provisioner_1.5.0.tar

docker load takes only the tar file; the image name and tag (openebs-k8s-provisioner:1.5.0) are restored from the archive itself.

You can refer to this article for the details of that approach.


Final result: running kubectl get pods --all-namespaces again, all of the OpenEBS pods come up and reach the Running state.


Finally, you can also push the images you pulled to your own Alibaba Cloud Container Registry for future use; see the Alibaba Cloud documentation for the steps.
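
A minimal sketch of pushing one of the local images to Alibaba Cloud Container Registry (the account, region endpoint, and namespace below are placeholders - use the values from your own ACR console):

docker login --username=your-account registry.cn-hangzhou.aliyuncs.com
docker tag openebs/m-apiserver:1.5.0 registry.cn-hangzhou.aliyuncs.com/your-namespace/m-apiserver:1.5.0
docker push registry.cn-hangzhou.aliyuncs.com/your-namespace/m-apiserver:1.5.0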

That wraps up how to fix the ImagePullBackOff / ErrImagePull errors when installing OpenEBS - hopefully it saves you some time.


