k8s CephFS (dynamic PVC)

2024-05-29 06:28
Tags: dynamic, cloud-native, k8s, pvc, cephfs

This article describes how to set up dynamic CephFS PVC provisioning on k8s with Ceph-CSI, and is intended as a practical reference for developers tackling the same task.

Official reference documentation:
GitHub - ceph/ceph-csi at v3.9.0

Tested versions

Ceph Version    Ceph CSI Version    Container Orchestrator Name    Version Tested
v17.2.7         v3.9.0              Kubernetes                     v1.25.6

Install Ceph-CSI

Step 1 Download ceph-csi v3.9.0 (GitHub - ceph/ceph-csi at v3.9.0)

root@sd-k8s-master-1:~# wget https://github.com/ceph/ceph-csi/archive/refs/tags/v3.9.0.zip
root@sd-k8s-master-1:~# unzip v3.9.0.zip
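If the archive unpacked cleanly, all manifests used in the following steps sit under the deploy/cephfs/kubernetes directory of the extracted tree (the same path that appears in the shell prompts below); a quick listing confirms it:

ls ceph-csi-3.9.0/deploy/cephfs/kubernetes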

Step 2 Create the CSIDriver object

root@sd-k8s-master-1:~/ceph-csi-3.9.0/deploy/cephfs/kubernetes# cat csidriver.yaml 
#
# /!\ DO NOT MODIFY THIS FILE
#
# This file has been automatically generated by Ceph-CSI yamlgen.
# The source for the contents can be found in the api/deploy directory, make
# your modifications there.
#
---
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:name: "cephfs.csi.ceph.com"namespace: cephfs
spec:attachRequired: falsepodInfoOnMount: falsefsGroupPolicy: FileseLinuxMount: true
root@sd-k8s-master-1:~/ceph-csi-3.9.0/deploy/cephfs/kubernetes# kubectl create -f csidriver.yaml
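To confirm the driver object registered, listing it by name is enough (the exact columns depend on the kubectl version):

kubectl get csidriver cephfs.csi.ceph.com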

Step 3 Deploy RBAC for the sidecar containers and the node plugin:

root@sd-k8s-master-1:~# cd /root/ceph-csi-3.9.0/deploy/cephfs/kubernetes
root@sd-k8s-master-1:~/ceph-csi-3.9.0/deploy/cephfs/kubernetes# cat csi-provisioner-rbac.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cephfs-csi-provisioner
  namespace: cephfs
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cephfs-external-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "update", "delete", "patch"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims/status"]
    verbs: ["update", "patch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["snapshot.storage.k8s.io"]
    resources: ["volumesnapshots"]
    verbs: ["get", "list"]
  - apiGroups: ["snapshot.storage.k8s.io"]
    resources: ["volumesnapshots/status"]
    verbs: ["get", "list", "patch"]
  - apiGroups: ["snapshot.storage.k8s.io"]
    resources: ["volumesnapshotcontents"]
    verbs: ["get", "list", "watch", "update", "patch"]
  - apiGroups: ["snapshot.storage.k8s.io"]
    resources: ["volumesnapshotclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["csinodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["snapshot.storage.k8s.io"]
    resources: ["volumesnapshotcontents/status"]
    verbs: ["update", "patch"]
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get"]
  - apiGroups: [""]
    resources: ["serviceaccounts"]
    verbs: ["get"]
  - apiGroups: [""]
    resources: ["serviceaccounts/token"]
    verbs: ["create"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cephfs-csi-provisioner-role
subjects:
  - kind: ServiceAccount
    name: cephfs-csi-provisioner
    namespace: cephfs
roleRef:
  kind: ClusterRole
  name: cephfs-external-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  # replace with non-cephfs namespace name
  namespace: cephfs
  name: cephfs-external-provisioner-cfg
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["coordination.k8s.io"]
    resources: ["leases"]
    verbs: ["get", "watch", "list", "delete", "update", "create"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cephfs-csi-provisioner-role-cfg
  # replace with non-cephfs namespace name
  namespace: cephfs
subjects:
  - kind: ServiceAccount
    name: cephfs-csi-provisioner
    # replace with non-cephfs namespace name
    namespace: cephfs
roleRef:
  kind: Role
  name: cephfs-external-provisioner-cfg
  apiGroup: rbac.authorization.k8s.io
root@sd-k8s-master-1:~/ceph-csi-3.9.0/deploy/cephfs/kubernetes# cat csi-nodeplugin-rbac.yaml 
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cephfs-csi-nodeplugin
  namespace: cephfs
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cephfs-csi-nodeplugin
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get"]
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get"]
  - apiGroups: [""]
    resources: ["serviceaccounts"]
    verbs: ["get"]
  - apiGroups: [""]
    resources: ["serviceaccounts/token"]
    verbs: ["create"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cephfs-csi-nodeplugin
subjects:
  - kind: ServiceAccount
    name: cephfs-csi-nodeplugin
    # replace with non-cephfs namespace name
    namespace: cephfs
roleRef:
  kind: ClusterRole
  name: cephfs-csi-nodeplugin
  apiGroup: rbac.authorization.k8s.io
root@sd-k8s-master-1:~/ceph-csi-3.9.0/deploy/cephfs/kubernetes# kubectl create -f csi-provisioner-rbac.yaml
root@sd-k8s-master-1:~/ceph-csi-3.9.0/deploy/cephfs/kubernetes# kubectl create -f csi-nodeplugin-rbac.yaml
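A quick sanity check that the ServiceAccounts and bindings landed where expected (object names taken from the two RBAC manifests above):

kubectl -n cephfs get serviceaccount cephfs-csi-provisioner cephfs-csi-nodeplugin
kubectl get clusterrolebinding cephfs-csi-provisioner-role cephfs-csi-nodeplugin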

Step 4 Edit the ConfigMap files

root@sd-k8s-master-1:~/ceph-csi-3.9.0/deploy/cephfs/kubernetes# cat csi-config-map.yaml
apiVersion: v1
kind: ConfigMap
data:
  config.json: |-
    [
      {
        "clusterID": "92ab6c78-7edc-11ee-aec4-5e807f521aec",
        "monitors": [
          "10.220.9.13:6789",
          "10.220.9.14:6789",
          "10.220.9.15:6789"
        ]
      }
    ]
metadata:
  name: ceph-csi-config
  namespace: cephfs
root@sd-k8s-master-1:~/ceph-csi-3.9.0/deploy/cephfs/kubernetes# cat ceph-config.yaml 
apiVersion: v1
kind: ConfigMap
data:
  ceph.conf: |
    [global]
    auth_cluster_required = cephx
    auth_service_required = cephx
    auth_client_required = cephx

    # Workaround for http://tracker.ceph.com/issues/23446
    fuse_set_user_groups = false

    # ceph-fuse which uses libfuse2 by default has write buffer size of 2KiB
    # adding 'fuse_big_writes = true' option by default to override this limit
    # see https://github.com/ceph/ceph-csi/issues/1928
    fuse_big_writes = true

  # keyring is a required key and its value should be empty
  keyring: |
metadata:
  name: ceph-config
  namespace: cephfs
root@sd-k8s-master-1:~/ceph-csi-3.9.0/deploy/cephfs/kubernetes# kubectl create -f csi-config-map.yaml
root@sd-k8s-master-1:~/ceph-csi-3.9.0/deploy/cephfs/kubernetes# kubectl create -f ceph-config.yaml
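The clusterID and monitor list in csi-config-map.yaml are not arbitrary; they must match the Ceph cluster. Assuming admin access on one of the Ceph nodes, both values can be read directly with:

ceph fsid
ceph mon dump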

Step 5 Deploy the CSI sidecar containers:
# The ceph-csi-encryption-kms-config volume (and its mount) is commented out in both manifests below,
# and the registry.k8s.io images are rewritten to the k8s.dockerproxy.com mirror.

root@sd-k8s-master-1:~/ceph-csi-3.9.0/deploy/cephfs/kubernetes# sed -i 's/registry.k8s.io/k8s.dockerproxy.com/g' csi-cephfsplugin-provisioner.yaml
root@sd-k8s-master-1:~/ceph-csi-3.9.0/deploy/cephfs/kubernetes# sed -i 's/registry.k8s.io/k8s.dockerproxy.com/g' csi-cephfsplugin.yaml
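The sed only rewrites the registry prefix; a quick grep (a sanity check, not part of the upstream procedure) shows whether every sig-storage image now points at the mirror:

grep -n 'image:' csi-cephfsplugin-provisioner.yaml csi-cephfsplugin.yaml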
root@sd-k8s-master-1:~/ceph-csi-3.9.0/deploy/cephfs/kubernetes# cat csi-cephfsplugin-provisioner.yaml
---
kind: Service
apiVersion: v1
metadata:
  name: csi-cephfsplugin-provisioner
  namespace: cephfs
  labels:
    app: csi-metrics
spec:
  selector:
    app: csi-cephfsplugin-provisioner
  ports:
    - name: http-metrics
      port: 8080
      protocol: TCP
      targetPort: 8681
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: csi-cephfsplugin-provisioner
  namespace: cephfs
spec:
  selector:
    matchLabels:
      app: csi-cephfsplugin-provisioner
  replicas: 3
  template:
    metadata:
      labels:
        app: csi-cephfsplugin-provisioner
    spec:
#      affinity:
#        podAntiAffinity:
#          requiredDuringSchedulingIgnoredDuringExecution:
#            - labelSelector:
#                matchExpressions:
#                  - key: app
#                    operator: In
#                    values:
#                      - csi-cephfsplugin-provisioner
#              topologyKey: "kubernetes.io/hostname"
      serviceAccountName: cephfs-csi-provisioner
      priorityClassName: system-cluster-critical
      containers:
        - name: csi-provisioner
          image: k8s.dockerproxy.com/sig-storage/csi-provisioner:v3.5.0
          args:
            - "--csi-address=$(ADDRESS)"
            - "--v=1"
            - "--timeout=150s"
            - "--leader-election=true"
            - "--retry-interval-start=500ms"
            - "--feature-gates=Topology=false"
            - "--feature-gates=HonorPVReclaimPolicy=true"
            - "--prevent-volume-mode-conversion=true"
            - "--extra-create-metadata=true"
          env:
            - name: ADDRESS
              value: unix:///csi/csi-provisioner.sock
          imagePullPolicy: "IfNotPresent"
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
        - name: csi-resizer
          image: k8s.dockerproxy.com/sig-storage/csi-resizer:v1.8.0
          args:
            - "--csi-address=$(ADDRESS)"
            - "--v=1"
            - "--timeout=150s"
            - "--leader-election"
            - "--retry-interval-start=500ms"
            - "--handle-volume-inuse-error=false"
            - "--feature-gates=RecoverVolumeExpansionFailure=true"
          env:
            - name: ADDRESS
              value: unix:///csi/csi-provisioner.sock
          imagePullPolicy: "IfNotPresent"
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
        - name: csi-snapshotter
          image: k8s.dockerproxy.com/sig-storage/csi-snapshotter:v6.2.2
          args:
            - "--csi-address=$(ADDRESS)"
            - "--v=1"
            - "--timeout=150s"
            - "--leader-election=true"
            - "--extra-create-metadata=true"
          env:
            - name: ADDRESS
              value: unix:///csi/csi-provisioner.sock
          imagePullPolicy: "IfNotPresent"
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
        - name: csi-cephfsplugin
          image: quay.io/cephcsi/cephcsi:v3.9.0
          args:
            - "--nodeid=$(NODE_ID)"
            - "--type=cephfs"
            - "--controllerserver=true"
            - "--endpoint=$(CSI_ENDPOINT)"
            - "--v=5"
            - "--drivername=cephfs.csi.ceph.com"
            - "--pidlimit=-1"
            - "--enableprofiling=false"
            - "--setmetadata=true"
          env:
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: NODE_ID
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: CSI_ENDPOINT
              value: unix:///csi/csi-provisioner.sock
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            # - name: KMS_CONFIGMAP_NAME
            #   value: encryptionConfig
          imagePullPolicy: "IfNotPresent"
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
            - name: host-sys
              mountPath: /sys
            - name: lib-modules
              mountPath: /lib/modules
              readOnly: true
            - name: host-dev
              mountPath: /dev
            - name: ceph-config
              mountPath: /etc/ceph/
            - name: ceph-csi-config
              mountPath: /etc/ceph-csi-config/
            - name: keys-tmp-dir
              mountPath: /tmp/csi/keys
            # - name: ceph-csi-encryption-kms-config
            #   mountPath: /etc/ceph-csi-encryption-kms-config/
        - name: liveness-prometheus
          image: quay.io/cephcsi/cephcsi:v3.9.0
          args:
            - "--type=liveness"
            - "--endpoint=$(CSI_ENDPOINT)"
            - "--metricsport=8681"
            - "--metricspath=/metrics"
            - "--polltime=60s"
            - "--timeout=3s"
          env:
            - name: CSI_ENDPOINT
              value: unix:///csi/csi-provisioner.sock
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
          imagePullPolicy: "IfNotPresent"
      volumes:
        - name: socket-dir
          emptyDir: {medium: "Memory"}
        - name: host-sys
          hostPath:
            path: /sys
        - name: lib-modules
          hostPath:
            path: /lib/modules
        - name: host-dev
          hostPath:
            path: /dev
        - name: ceph-config
          configMap:
            name: ceph-config
        - name: ceph-csi-config
          configMap:
            name: ceph-csi-config
        - name: keys-tmp-dir
          emptyDir: {medium: "Memory"}
        # - name: ceph-csi-encryption-kms-config
        #   configMap:
        #     name: ceph-csi-encryption-kms-config
root@sd-k8s-master-1:~/ceph-csi-3.9.0/deploy/cephfs/kubernetes# cat csi-cephfsplugin.yaml
---
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: csi-cephfsplugin
  namespace: cephfs
spec:
  selector:
    matchLabels:
      app: csi-cephfsplugin
  template:
    metadata:
      labels:
        app: csi-cephfsplugin
    spec:
      serviceAccountName: cephfs-csi-nodeplugin
      priorityClassName: system-node-critical
      hostNetwork: true
      hostPID: true
      # to use e.g. Rook orchestrated cluster, and mons' FQDN is
      # resolved through k8s service, set dns policy to cluster first
      dnsPolicy: ClusterFirstWithHostNet
      containers:
        - name: driver-registrar
          # This is necessary only for systems with SELinux, where
          # non-privileged sidecar containers cannot access unix domain socket
          # created by privileged CSI driver container.
          securityContext:
            privileged: true
            allowPrivilegeEscalation: true
          image: k8s.dockerproxy.com/sig-storage/csi-node-driver-registrar:v2.8.0
          args:
            - "--v=1"
            - "--csi-address=/csi/csi.sock"
            - "--kubelet-registration-path=/var/lib/kubelet/plugins/cephfs.csi.ceph.com/csi.sock"
          env:
            - name: KUBE_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
            - name: registration-dir
              mountPath: /registration
        - name: csi-cephfsplugin
          securityContext:
            privileged: true
            capabilities:
              add: ["SYS_ADMIN"]
            allowPrivilegeEscalation: true
          image: quay.io/cephcsi/cephcsi:v3.9.0
          args:
            - "--nodeid=$(NODE_ID)"
            - "--type=cephfs"
            - "--nodeserver=true"
            - "--endpoint=$(CSI_ENDPOINT)"
            - "--v=5"
            - "--drivername=cephfs.csi.ceph.com"
            - "--enableprofiling=false"
            # If topology based provisioning is desired, configure required
            # node labels representing the nodes topology domain
            # and pass the label names below, for CSI to consume and advertise
            # its equivalent topology domain
            # - "--domainlabels=failure-domain/region,failure-domain/zone"
          env:
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: NODE_ID
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: CSI_ENDPOINT
              value: unix:///csi/csi.sock
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            # - name: KMS_CONFIGMAP_NAME
            #   value: encryptionConfig
          imagePullPolicy: "IfNotPresent"
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
            - name: mountpoint-dir
              mountPath: /var/lib/kubelet/pods
              mountPropagation: Bidirectional
            - name: plugin-dir
              mountPath: /var/lib/kubelet/plugins
              mountPropagation: "Bidirectional"
            - name: host-sys
              mountPath: /sys
            - name: etc-selinux
              mountPath: /etc/selinux
              readOnly: true
            - name: lib-modules
              mountPath: /lib/modules
              readOnly: true
            - name: host-dev
              mountPath: /dev
            - name: host-mount
              mountPath: /run/mount
            - name: ceph-config
              mountPath: /etc/ceph/
            - name: ceph-csi-config
              mountPath: /etc/ceph-csi-config/
            - name: keys-tmp-dir
              mountPath: /tmp/csi/keys
            - name: ceph-csi-mountinfo
              mountPath: /csi/mountinfo
            # - name: ceph-csi-encryption-kms-config
            #   mountPath: /etc/ceph-csi-encryption-kms-config/
        - name: liveness-prometheus
          securityContext:
            privileged: true
            allowPrivilegeEscalation: true
          image: quay.io/cephcsi/cephcsi:v3.9.0
          args:
            - "--type=liveness"
            - "--endpoint=$(CSI_ENDPOINT)"
            - "--metricsport=8681"
            - "--metricspath=/metrics"
            - "--polltime=60s"
            - "--timeout=3s"
          env:
            - name: CSI_ENDPOINT
              value: unix:///csi/csi.sock
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
          imagePullPolicy: "IfNotPresent"
      volumes:
        - name: socket-dir
          hostPath:
            path: /var/lib/kubelet/plugins/cephfs.csi.ceph.com/
            type: DirectoryOrCreate
        - name: registration-dir
          hostPath:
            path: /var/lib/kubelet/plugins_registry/
            type: Directory
        - name: mountpoint-dir
          hostPath:
            path: /var/lib/kubelet/pods
            type: DirectoryOrCreate
        - name: plugin-dir
          hostPath:
            path: /var/lib/kubelet/plugins
            type: Directory
        - name: host-sys
          hostPath:
            path: /sys
        - name: etc-selinux
          hostPath:
            path: /etc/selinux
        - name: lib-modules
          hostPath:
            path: /lib/modules
        - name: host-dev
          hostPath:
            path: /dev
        - name: host-mount
          hostPath:
            path: /run/mount
        - name: ceph-config
          configMap:
            name: ceph-config
        - name: ceph-csi-config
          configMap:
            name: ceph-csi-config
        - name: keys-tmp-dir
          emptyDir: {medium: "Memory"}
        - name: ceph-csi-mountinfo
          hostPath:
            path: /var/lib/kubelet/plugins/cephfs.csi.ceph.com/mountinfo
            type: DirectoryOrCreate
        # - name: ceph-csi-encryption-kms-config
        #   configMap:
        #     name: ceph-csi-encryption-kms-config
---
# This is a service to expose the liveness metrics
apiVersion: v1
kind: Service
metadata:
  name: csi-metrics-cephfsplugin
  namespace: cephfs
  labels:
    app: csi-metrics
spec:
  ports:
    - name: http-metrics
      port: 8080
      protocol: TCP
      targetPort: 8681
  selector:
    app: csi-cephfsplugin
root@sd-k8s-master-1:~/ceph-csi-3.9.0/deploy/cephfs/kubernetes# kubectl create -f csi-cephfsplugin-provisioner.yaml
root@sd-k8s-master-1:~/ceph-csi-3.9.0/deploy/cephfs/kubernetes# kubectl create -f csi-cephfsplugin.yaml
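Before provisioning anything, it is worth waiting until the provisioner Deployment and the node-plugin DaemonSet report ready; something along these lines should eventually show all pods Running:

kubectl -n cephfs get deployment csi-cephfsplugin-provisioner
kubectl -n cephfs get daemonset csi-cephfsplugin
kubectl -n cephfs get pods -o wide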

Step 6 Create the CephFS filesystem on the Ceph cluster

[root@ceph01 ~]# ceph osd pool cephfs
[root@ceph01 ~]# ceph auth get-key client.admin
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
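If the filesystem and its pools do not exist yet, one typical way on Ceph v17 to end up with the fsName (cephfs) and data pool (cephfs_data) that the StorageClass below expects is the classic pool-plus-filesystem sequence; the pool names match what the StorageClass references, while the PG counts are placeholders to size for your cluster:

ceph osd pool create cephfs_metadata 32
ceph osd pool create cephfs_data 64
ceph fs new cephfs cephfs_metadata cephfs_data
ceph fs ls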

Step 7 Create the Ceph secret

root@sd-k8s-master-1:~/ceph-csi-3.9.0/deploy/cephfs/kubernetes# cat ceph-sc.yaml 
apiVersion: v1
kind: Secret
metadata:
  name: csi-cephfs-secret
  namespace: cephfs
stringData:
  userID: admin
  userKey: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
  adminID: admin
  adminKey: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
root@sd-k8s-master-1:~/ceph-csi-3.9.0/deploy/cephfs/kubernetes# kubectl create -f ceph-sc.yaml
root@sd-k8s-master-1:~/ceph-csi-3.9.0/deploy/cephfs/kubernetes# kubectl get secrets -n cephfs
NAME                TYPE     DATA   AGE
csi-cephfs-secret   Opaque   4      49m
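If keeping the key in a YAML file is undesirable, an equivalent imperative form of the same Secret (names taken from the manifest above, key value elided) would be:

kubectl -n cephfs create secret generic csi-cephfs-secret \
  --from-literal=userID=admin \
  --from-literal=userKey=<admin-key> \
  --from-literal=adminID=admin \
  --from-literal=adminKey=<admin-key>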

Step 8 Create the StorageClass

root@sd-k8s-master-1:~/ceph-csi-3.9.0/deploy/cephfs/kubernetes# cat ceph-storage-class.yaml 
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-cephfs-sc
provisioner: cephfs.csi.ceph.com
parameters:
  clusterID: 92ab6c78-7edc-11ee-aec4-5e807f521aec
  fsName: cephfs                 # name of the CephFS filesystem created above
  pool: cephfs_data              # CephFS data pool name
  # mounter: fuse                # mount method
  csi.storage.k8s.io/provisioner-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/provisioner-secret-namespace: cephfs
  csi.storage.k8s.io/controller-expand-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/controller-expand-secret-namespace: cephfs
  csi.storage.k8s.io/node-stage-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/node-stage-secret-namespace: cephfs
reclaimPolicy: Delete
allowVolumeExpansion: true
# mountOptions:
#   - discard
root@sd-k8s-master-1:~/ceph-csi-3.9.0/deploy/cephfs/kubernetes# kubectl create -f ceph-storage-class.yaml
root@sd-k8s-master-1:~/ceph-csi-3.9.0/deploy/cephfs/kubernetes# kubectl get storageclass
NAME                   PROVISIONER                                     RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
csi-cephfs-sc          cephfs.csi.ceph.com                             Delete          Immediate           true                   138m
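To double-check that the clusterID, fsName, pool and the three per-operation secret references were picked up as written, describing the class is enough:

kubectl describe storageclass csi-cephfs-sc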

Step 9 Create a test PVC

root@sd-k8s-master-1:~/ceph-csi-3.9.0/deploy/cephfs/kubernetes# cat test-pvc.yaml 
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-test-pvc
  namespace: cephfs
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-cephfs-sc
root@sd-k8s-master-1:~/ceph-csi-3.9.0/deploy/cephfs/kubernetes# kubectl create -f test-pvc.yaml
root@sd-k8s-master-1:~/ceph-csi-3.9.0/deploy/cephfs/kubernetes# kubectl get pvc -n cephfs
NAME              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS    AGE
cephfs-test-pvc   Bound    pvc-94236c61-a5b7-494d-89ad-e1e18eaad175   1Gi        RWX            csi-cephfs-sc   49m
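On the Ceph side each dynamically provisioned PVC corresponds to a CephFS subvolume; assuming ceph-csi's default subvolume group name csi, the backing subvolume can be listed from a Ceph node with:

ceph fs subvolumegroup ls cephfs
ceph fs subvolume ls cephfs csi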

Step 10 Create a test Pod

root@sd-k8s-master-1:~/ceph-csi-3.9.0/deploy/cephfs/kubernetes# cat test-pod.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
  namespace: cephfs
spec:
  terminationGracePeriodSeconds: 0
  containers:
    - image: harbor.zetyun.cn/gcp/nginx:1.25.3
      name: test-container
      volumeMounts:
        - mountPath: /cache
          name: cache-volume
  volumes:
    - name: cache-volume
      persistentVolumeClaim:
        claimName: cephfs-test-pvc  # name of the PVC created above (must be in the same namespace as the Pod)
root@sd-k8s-master-1:~/ceph-csi-3.9.0/deploy/cephfs/kubernetes# kubectl create -f test-pod.yaml
root@sd-k8s-master-1:~/ceph-csi-3.9.0/deploy/cephfs/kubernetes# kubectl get po test-pd -n cephfs
NAME      READY   STATUS    RESTARTS   AGE
test-pd   1/1     Running   0          6m49s
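As a last check that the mount is actually backed by CephFS and writable, a small write and read through /cache inside the pod is enough (the file name is arbitrary):

kubectl -n cephfs exec test-pd -- sh -c 'echo cephfs-ok > /cache/testfile && cat /cache/testfile'
kubectl -n cephfs exec test-pd -- df -h /cache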

That concludes this walkthrough of dynamic CephFS PVCs on k8s; hopefully the steps above are useful to other engineers.

