Learning Ceph Together 01: Installing Ceph Nautilus

2023-10-27 22:10

This post walks through installing a Ceph Nautilus cluster on CentOS 7; hopefully it serves as a useful reference for anyone setting one up.

ceph install


Environment preparation

  1. Two network interfaces per node
  2. Two data disks per node

hosts

192.168.126.101 ceph01
192.168.126.102 ceph02
192.168.126.103 ceph03
192.168.126.104 ceph04
192.168.126.105 ceph-admin
192.168.48.11 ceph01
192.168.48.12 ceph02
192.168.48.13 ceph03
192.168.48.14 ceph04
192.168.48.15 ceph-admin
### Upstream requires a kernel of 4.10 or later on all nodes
uname -r
5.2.2-1.el7.elrepo.x86_64
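The ten hosts entries follow a regular pattern, so they can be generated rather than typed by hand. Below is a minimal sketch; the octet offsets (100+i and 10+i) are taken from the address plan above, so adjust them if your addressing differs. Append its output to /etc/hosts on every node.

```shell
#!/bin/sh
# Generate the hosts entries for both networks from one node list.
# Offsets (100+i / 10+i) match the address plan above.
i=1
for host in ceph01 ceph02 ceph03 ceph04 ceph-admin; do
    echo "192.168.126.$((100 + i)) $host"   # cluster network
    echo "192.168.48.$((10 + i)) $host"     # public network
    i=$((i + 1))
done
```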

Time synchronization

 yum -y install  chrony

Synchronize all nodes against ceph-admin:

[root@ceph-admin ~]# vim /etc/chrony.conf 
....
#allow 192.168.0.0/16
allow 192.168.48.0/24
[root@ceph-admin ~]# systemctl enable chronyd
[root@ceph-admin ~]# systemctl start chronyd

On ceph01, ceph02, ceph03, and ceph04, delete the other server lines in chrony.conf and keep a single server entry:

vim /etc/chrony.conf
...
server 192.168.48.15 iburst
systemctl enable chronyd
systemctl start chronyd
[root@ceph01 ~]# chronyc sources -v
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^* ceph-admin                    3   6    17    12   +100us[ +136us] +/-   52ms
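Repeating the chrony edit on four nodes can be scripted. The sketch below is a dry run that only prints the per-node commands; remove the leading echo to execute them over SSH (it assumes root SSH access to the nodes, which this guide does not set up).

```shell
#!/bin/sh
# Dry run: print the per-node commands to point chrony at ceph-admin.
for node in ceph01 ceph02 ceph03 ceph04; do
    echo "ssh root@$node \"sed -i '/^server /d' /etc/chrony.conf\""
    echo "ssh root@$node \"echo 'server 192.168.48.15 iburst' >> /etc/chrony.conf\""
    echo "ssh root@$node 'systemctl enable --now chronyd'"
done
```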

Network layout

cluster network
192.168.126.101 ceph01
192.168.126.102 ceph02
192.168.126.103 ceph03
192.168.126.104 ceph04
192.168.126.105 ceph-admin
public network
192.168.48.11 ceph01
192.168.48.12 ceph02
192.168.48.13 ceph03
192.168.48.14 ceph04
192.168.48.15 ceph-admin

Disk layout

Each node has two 10 GB data disks, sdb and sdc:

NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0  100G  0 disk 
├─sda1            8:1    0    1G  0 part /boot
└─sda2            8:2    0   99G  0 part 
  ├─centos-root 253:0    0   50G  0 lvm  /
  ├─centos-swap 253:1    0    2G  0 lvm  [SWAP]
  └─centos-home 253:2    0   47G  0 lvm  /home
sdb               8:16   0   10G  0 disk 
sdc               8:32   0   10G  0 disk 

Prepare the Ceph yum repository

vim /etc/yum.repos.d/ceph.repo
[Ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/$basearch
enabled=1
gpgcheck=0
type=rpm-md
[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/noarch
enabled=1
gpgcheck=0
type=rpm-md
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/SRPMS
enabled=1
gpgcheck=0
type=rpm-md

Prepare the EPEL yum repository

cat > /etc/yum.repos.d/epel.repo << 'EOF'
[epel]
name=Extra Packages for Enterprise Linux 7 - $basearch
baseurl=https://mirrors.tuna.tsinghua.edu.cn/epel/7/$basearch
#mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-7&arch=$basearch
failovermethod=priority
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
[epel-debuginfo]
name=Extra Packages for Enterprise Linux 7 - $basearch - Debug
baseurl=https://mirrors.tuna.tsinghua.edu.cn/epel/7/$basearch/debug
#mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-debug-7&arch=$basearch
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
gpgcheck=1
[epel-source]
name=Extra Packages for Enterprise Linux 7 - $basearch - Source
baseurl=https://mirrors.tuna.tsinghua.edu.cn/epel/7/SRPMS
#mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-source-7&arch=$basearch
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
gpgcheck=1
EOF

Create a dedicated user account, cephadm (it must exist on every node):

useradd cephadm
echo "ceph" | passwd --stdin cephadm

Passwordless sudo

vim /etc/sudoers.d/cephadm
cephadm  ALL=(root)  NOPASSWD: ALL

Set up SSH key authentication

su - cephadm
ssh-keygen 
ssh-copy-id cephadm@ceph-admin
ssh-copy-id cephadm@ceph01
ssh-copy-id cephadm@ceph02
ssh-copy-id cephadm@ceph03
ssh-copy-id cephadm@ceph04
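The five ssh-copy-id calls above collapse into one loop. Shown here as a dry run that echoes each command; remove the echo to run it (each invocation prompts once for cephadm's password on the target node).

```shell
#!/bin/sh
# Dry run: distribute the cephadm SSH key to all nodes.
for node in ceph-admin ceph01 ceph02 ceph03 ceph04; do
    echo "ssh-copy-id cephadm@$node"
done
```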

Install on the ceph-admin node

[root@ceph-admin ~]# yum install ceph-deploy python-setuptools python2-subprocess32 ceph-common

Install ceph and ceph-radosgw on every node except ceph-admin:

yum -y install ceph ceph-radosgw

RADOS cluster

Run the following on the ceph-admin node as the cephadm user.

Create a working directory:

[cephadm@ceph-admin ~]$ mkdir ceph-cluster
[cephadm@ceph-admin ~]$ cd ceph-cluster/

Install the Ceph packages onto the cluster nodes:

[cephadm@ceph-admin ceph-cluster]$ ceph-deploy install  ceph01 ceph02 ceph03 ceph04  --no-adjust-repos

Bootstrap the cluster with three monitor nodes:

[cephadm@ceph-admin ceph-cluster]$ ceph-deploy new  --cluster-network 192.168.126.0/24 --public-network 192.168.48.0/24  ceph01 ceph02 ceph03
[cephadm@ceph-admin ceph-cluster]$ ls
ceph.conf  ceph-deploy-ceph.log  ceph.mon.keyring
[cephadm@ceph-admin ceph-cluster]$ cat ceph.conf 
[global]
fsid = a384da5c-a9ae-464a-8a92-e23042e5d267
public_network = 192.168.48.0/24
cluster_network = 192.168.126.0/24
mon_initial_members = ceph01, ceph02, ceph03
mon_host = 192.168.48.11,192.168.48.12,192.168.48.13
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

Initialize the monitors:

[cephadm@ceph-admin ceph-cluster]$ ceph-deploy mon create-initial

To add more monitor nodes later:

[cephadm@ceph-admin ceph-cluster]$ ceph-deploy mon add <hostname>

Distribute the keyring and configuration file to the nodes:

[cephadm@ceph-admin ceph-cluster]$ ceph-deploy admin ceph01 ceph02 ceph03 ceph04 ceph-admin

On every node, grant cephadm access to the admin keyring:

setfacl -m u:cephadm:rw /etc/ceph/ceph.client.admin.keyring

Deploy a Manager node

[cephadm@ceph-admin ceph-cluster]$ ceph-deploy mgr create ceph04

Add a standby Manager node

[cephadm@ceph-admin ceph-cluster]$ ceph-deploy mgr create ceph03

Check the cluster's health

[cephadm@ceph-admin ceph-cluster]$ ceph -s
  cluster:
    id:     8a83b874-efa4-4655-b070-704e63553839
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph01,ceph02,ceph03 (age 29m)
    mgr: ceph04(active, since 30s), standbys: ceph03
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:

Wipe the disks

[cephadm@ceph-admin ceph-cluster]$ ceph-deploy disk zap ceph01 /dev/sdb
[cephadm@ceph-admin ceph-cluster]$ ceph-deploy disk zap ceph02 /dev/sdb
[cephadm@ceph-admin ceph-cluster]$ ceph-deploy disk zap ceph03 /dev/sdb
[cephadm@ceph-admin ceph-cluster]$ ceph-deploy disk zap ceph04 /dev/sdb
[cephadm@ceph-admin ceph-cluster]$ ceph-deploy disk zap ceph01 /dev/sdc
[cephadm@ceph-admin ceph-cluster]$ ceph-deploy disk zap ceph02 /dev/sdc
[cephadm@ceph-admin ceph-cluster]$ ceph-deploy disk zap ceph03 /dev/sdc
[cephadm@ceph-admin ceph-cluster]$ ceph-deploy disk zap ceph04 /dev/sdc

Add OSDs

[cephadm@ceph-admin ceph-cluster]$ ceph-deploy osd create ceph01 --data /dev/sdb
[cephadm@ceph-admin ceph-cluster]$ ceph-deploy osd create ceph02 --data /dev/sdb
[cephadm@ceph-admin ceph-cluster]$ ceph-deploy osd create ceph03 --data /dev/sdb
[cephadm@ceph-admin ceph-cluster]$ ceph-deploy osd create ceph04 --data /dev/sdb
[cephadm@ceph-admin ceph-cluster]$ ceph-deploy osd create ceph01 --data /dev/sdc
[cephadm@ceph-admin ceph-cluster]$ ceph-deploy osd create ceph02 --data /dev/sdc
[cephadm@ceph-admin ceph-cluster]$ ceph-deploy osd create ceph03 --data /dev/sdc
[cephadm@ceph-admin ceph-cluster]$ ceph-deploy osd create ceph04 --data /dev/sdc

You can then list the OSDs on a given node with "ceph-deploy osd list":

[cephadm@ceph-admin ceph-cluster]$ ceph-deploy osd list ceph01
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadm/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /bin/ceph-deploy osd list ceph01
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : list
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fef80f60ea8>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  host                          : ['ceph01']
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0x7fef813b1de8>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph01][DEBUG ] connection detected need for sudo
[ceph01][DEBUG ] connected to host: ceph01 
[ceph01][DEBUG ] detect platform information from remote host
[ceph01][DEBUG ] detect machine type
[ceph01][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.6.1810 Core
[ceph_deploy.osd][DEBUG ] Listing disks on ceph01...
[ceph01][DEBUG ] find the location of an executable
[ceph01][INFO  ] Running command: sudo /usr/sbin/ceph-volume lvm list
[ceph01][DEBUG ] 
[ceph01][DEBUG ] 
[ceph01][DEBUG ] ====== osd.0 =======
[ceph01][DEBUG ] 
[ceph01][DEBUG ]   [block]       /dev/ceph-25b4e0c5-0297-41c4-8c84-3166cf46e5a6/osd-block-d9349281-5ae9-49d7-8c8c-ca3774320fbd
[ceph01][DEBUG ] 
[ceph01][DEBUG ]       block device              /dev/ceph-25b4e0c5-0297-41c4-8c84-3166cf46e5a6/osd-block-d9349281-5ae9-49d7-8c8c-ca3774320fbd
[ceph01][DEBUG ]       block uuid                sKPwP3-o1L3-xbBu-az3d-N0MB-XOq9-0psakY
[ceph01][DEBUG ]       cephx lockbox secret      
[ceph01][DEBUG ]       cluster fsid              8a83b874-efa4-4655-b070-704e63553839
[ceph01][DEBUG ]       cluster name              ceph
[ceph01][DEBUG ]       crush device class        None
[ceph01][DEBUG ]       encrypted                 0
[ceph01][DEBUG ]       osd fsid                  d9349281-5ae9-49d7-8c8c-ca3774320fbd
[ceph01][DEBUG ]       osd id                    0
[ceph01][DEBUG ]       type                      block
[ceph01][DEBUG ]       vdo                       0
[ceph01][DEBUG ]       devices                   /dev/sdb
[ceph01][DEBUG ] 
[ceph01][DEBUG ] ====== osd.4 =======
[ceph01][DEBUG ] 
[ceph01][DEBUG ]   [block]       /dev/ceph-f8d33be2-c8c2-4e7f-97ed-892cbe14487c/osd-block-e3102a32-dfb3-42c7-8d6f-617c030808f7
[ceph01][DEBUG ] 
[ceph01][DEBUG ]       block device              /dev/ceph-f8d33be2-c8c2-4e7f-97ed-892cbe14487c/osd-block-e3102a32-dfb3-42c7-8d6f-617c030808f7
[ceph01][DEBUG ]       block uuid                1vdMB5-bjal-IKY2-PBzw-S0c1-48kV-4Hfszq
[ceph01][DEBUG ]       cephx lockbox secret      
[ceph01][DEBUG ]       cluster fsid              8a83b874-efa4-4655-b070-704e63553839
[ceph01][DEBUG ]       cluster name              ceph
[ceph01][DEBUG ]       crush device class        None
[ceph01][DEBUG ]       encrypted                 0
[ceph01][DEBUG ]       osd fsid                  e3102a32-dfb3-42c7-8d6f-617c030808f7
[ceph01][DEBUG ]       osd id                    4
[ceph01][DEBUG ]       type                      block
[ceph01][DEBUG ]       vdo                       0
[ceph01][DEBUG ]       devices                   /dev/sdc

Administrators can also inspect OSD information with the ceph command:

[cephadm@ceph-admin ceph-cluster]$ ceph osd stat
8 osds: 8 up (since 58s), 8 in (since 58s); epoch: e33

Or use the following commands:

[cephadm@ceph-admin ceph-cluster]$ ceph osd ls
0
1
2
3
4
5
6
7
[cephadm@ceph-admin ceph-cluster]$ ceph -s
  cluster:
    id:     8a83b874-efa4-4655-b070-704e63553839
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph01,ceph02,ceph03 (age 37m)
    mgr: ceph04(active, since 7m), standbys: ceph03
    osd: 8 osds: 8 up (since 115s), 8 in (since 115s)

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   8.0 GiB used, 64 GiB / 72 GiB avail
    pgs:

[cephadm@ceph-admin ceph-cluster]$ ceph osd tree
ID CLASS WEIGHT  TYPE NAME       STATUS REWEIGHT PRI-AFF 
-1       0.07031 root default                            
-3       0.01758     host ceph01                         
 0   hdd 0.00879         osd.0       up  1.00000 1.00000 
 4   hdd 0.00879         osd.4       up  1.00000 1.00000 
-5       0.01758     host ceph02                         
 1   hdd 0.00879         osd.1       up  1.00000 1.00000 
 5   hdd 0.00879         osd.5       up  1.00000 1.00000 
-7       0.01758     host ceph03                         
 2   hdd 0.00879         osd.2       up  1.00000 1.00000 
 6   hdd 0.00879         osd.6       up  1.00000 1.00000 
-9       0.01758     host ceph04                         
 3   hdd 0.00879         osd.3       up  1.00000 1.00000 
 7   hdd 0.00879         osd.7       up  1.00000 1.00000 
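A quick sanity check on the numbers: ceph -s reports 72 GiB of raw space (8 OSDs, each contributing roughly 9 GiB of its 10 GB disk after BlueStore's per-OSD overhead). With the default replicated pool size of 3, usable capacity is about a third of the raw total:

```shell
#!/bin/sh
# Usable capacity ~= raw capacity / replica count (default pool size is 3).
raw_gib=72
replicas=3
echo "usable ~ $((raw_gib / replicas)) GiB"
```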

That concludes this walkthrough of installing Ceph Nautilus; hopefully it is of some help.


