Learning Ceph Together 01: Installing Ceph Nautilus

2024-03-14 14:32

This article walks through installing the Nautilus release of Ceph, part 01 of the "Learning Ceph Together" series. Hopefully it serves as a useful reference for anyone setting up a cluster.

ceph install


Environment preparation

  1. two network interfaces per node
  2. two data disks per node

hosts

192.168.126.101 ceph01
192.168.126.102 ceph02
192.168.126.103 ceph03
192.168.126.104 ceph04
192.168.126.105 ceph-admin
192.168.48.11 ceph01
192.168.48.12 ceph02
192.168.48.13 ceph03
192.168.48.14 ceph04
192.168.48.15 ceph-admin
### upstream requires a kernel of 4.10 or later on all nodes
uname -r
5.2.2-1.el7.elrepo.x86_64
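If a node is still on an older kernel, one option is to install a mainline kernel from ELRepo. The following is a hedged sketch, assuming internet access on every node; the release-package URL is an assumption and may change:

# Sketch: upgrade to a mainline kernel via ELRepo (run on every node)
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
yum -y install https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm
yum -y --enablerepo=elrepo-kernel install kernel-ml
grub2-set-default 0    # boot the newly installed kernel first
reboot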

Time synchronization

 yum -y install  chrony

All nodes synchronize their clocks against ceph-admin:

[root@ceph-admin ~]# vim /etc/chrony.conf 
....
#allow 192.168.0.0/16
allow 192.168.48.0/24
[root@ceph-admin ~]# systemctl enable chronyd
[root@ceph-admin ~]# systemctl start chronyd

On ceph01, ceph02, ceph03, and ceph04, remove the other server entries from chrony.conf so that only one remains:

vim /etc/chrony.conf
...
server 192.168.48.15 iburst
systemctl enable chronyd
systemctl start chronyd
[root@ceph01 ~]# chronyc sources -v
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^* ceph-admin                    3   6    17    12   +100us[ +136us] +/-   52ms

The leading ^* marks ceph-admin as the selected, synchronized time source.

Network layout

Cluster network:
192.168.126.101 ceph01
192.168.126.102 ceph02
192.168.126.103 ceph03
192.168.126.104 ceph04
192.168.126.105 ceph-admin
Public network:
192.168.48.11 ceph01
192.168.48.12 ceph02
192.168.48.13 ceph03
192.168.48.14 ceph04
192.168.48.15 ceph-admin
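On CentOS 7, the second NIC for the cluster network can be configured with an ifcfg file. A minimal sketch for ceph01, assuming the cluster-side device is named eth1 (adjust the device name and address per node):

cat > /etc/sysconfig/network-scripts/ifcfg-eth1 << 'EOF'
DEVICE=eth1
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.126.101
PREFIX=24
EOF
systemctl restart network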

Disk layout

Each node has two 10 GB data disks, sdb and sdc:

NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0  100G  0 disk 
├─sda1            8:1    0    1G  0 part /boot
└─sda2            8:2    0   99G  0 part 
  ├─centos-root 253:0    0   50G  0 lvm  /
  ├─centos-swap 253:1    0    2G  0 lvm  
  └─centos-home 253:2    0   47G  0 lvm  /home
sdb               8:16   0   10G  0 disk 
sdc               8:32   0   10G  0 disk 

Prepare the Ceph yum repository

vim /etc/yum.repos.d/ceph.repo
[Ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/$basearch
enabled=1
gpgcheck=0
type=rpm-md

[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/noarch
enabled=1
gpgcheck=0
type=rpm-md

[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/SRPMS
enabled=1
gpgcheck=0
type=rpm-md
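The repo file has to exist on every node, not just one. A quick way to distribute it from ceph-admin (a sketch, assuming root SSH access to the other nodes):

for host in ceph0{1..4}; do
  scp /etc/yum.repos.d/ceph.repo root@$host:/etc/yum.repos.d/
done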

Prepare the EPEL yum repository

cat > /etc/yum.repos.d/epel.repo << 'EOF'
[epel]
name=Extra Packages for Enterprise Linux 7 - $basearch
baseurl=https://mirrors.tuna.tsinghua.edu.cn/epel/7/$basearch
#mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-7&arch=$basearch
failovermethod=priority
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7

[epel-debuginfo]
name=Extra Packages for Enterprise Linux 7 - $basearch - Debug
baseurl=https://mirrors.tuna.tsinghua.edu.cn/epel/7/$basearch/debug
#mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-debug-7&arch=$basearch
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
gpgcheck=1

[epel-source]
name=Extra Packages for Enterprise Linux 7 - $basearch - Source
baseurl=https://mirrors.tuna.tsinghua.edu.cn/epel/7/SRPMS
#mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-source-7&arch=$basearch
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
gpgcheck=1
EOF
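With both repo files in place on a node, refresh the yum metadata and confirm the repositories are visible:

yum clean all
yum makecache
yum repolist | grep -Ei 'ceph|epel'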

Create a dedicated user account, cephadm, for Ceph

useradd cephadm
echo "ceph" | passwd --stdin cephadm
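The cephadm account must exist on every node, not just one. A sketch to create it everywhere from ceph-admin, assuming root SSH access:

for host in ceph0{1..4}; do
  ssh root@$host 'useradd cephadm && echo "ceph" | passwd --stdin cephadm'
done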

Passwordless sudo

vim /etc/sudoers.d/cephadm
cephadm  ALL=(root)  NOPASSWD: ALL
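A non-interactive equivalent that also sets the permissions sudo expects (0440 is the convention for sudoers drop-in files):

echo "cephadm ALL=(root) NOPASSWD: ALL" > /etc/sudoers.d/cephadm
chmod 0440 /etc/sudoers.d/cephadm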

Set up SSH key authentication

su - cephadm
ssh-keygen 
ssh-copy-id cephadm@ceph-admin
ssh-copy-id cephadm@ceph01
ssh-copy-id cephadm@ceph02
ssh-copy-id cephadm@ceph03
ssh-copy-id cephadm@ceph04
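The same key distribution can be written as a loop (each copy still prompts for the cephadm password until the key is in place):

for host in ceph-admin ceph0{1..4}; do
  ssh-copy-id cephadm@$host
done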

Install packages on the ceph-admin node

[root@ceph-admin ~]# yum install ceph-deploy python-setuptools python2-subprocess32 ceph-common

On every node except ceph-admin, install ceph and ceph-radosgw:

yum -y install ceph ceph-radosgw

RADOS cluster

Run the following operations on the ceph-admin node as the cephadm user.

Create a Ceph working directory

[cephadm@ceph-admin ~]$ mkdir ceph-cluster
[cephadm@ceph-admin ~]$ cd ceph-cluster/

Install the Ceph packages on the cluster nodes

[cephadm@ceph-admin ceph-cluster]$ ceph-deploy install  ceph01 ceph02 ceph03 ceph04  --no-adjust-repos

Create a mon cluster

[cephadm@ceph-admin ceph-cluster]$ ceph-deploy new  --cluster-network 192.168.126.0/24 --public-network 192.168.48.0/24  ceph01 ceph02 ceph03
[cephadm@ceph-admin ceph-cluster]$ ls
ceph.conf  ceph-deploy-ceph.log  ceph.mon.keyring
[cephadm@ceph-admin ceph-cluster]$ cat ceph.conf 
[global]
fsid = a384da5c-a9ae-464a-8a92-e23042e5d267
public_network = 192.168.48.0/24
cluster_network = 192.168.126.0/24
mon_initial_members = ceph01, ceph02, ceph03
mon_host = 192.168.48.11,192.168.48.12,192.168.48.13
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

Initialize the monitors

[cephadm@ceph-admin ceph-cluster]$ ceph-deploy mon create-initial
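If the initialization succeeds, ceph-deploy gathers the bootstrap keyrings into the working directory; listing it should show something like this (a sketch of the expected files):

[cephadm@ceph-admin ceph-cluster]$ ls *.keyring
ceph.bootstrap-mds.keyring  ceph.bootstrap-osd.keyring  ceph.client.admin.keyring
ceph.bootstrap-mgr.keyring  ceph.bootstrap-rgw.keyring  ceph.mon.keyring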

To add more monitor nodes later:

[cephadm@ceph-admin ceph-cluster]$ ceph-deploy mon add <hostname>

Distribute the keyring and configuration file to the nodes

[cephadm@ceph-admin ceph-cluster]$ ceph-deploy admin ceph01 ceph02 ceph03 ceph04 ceph-admin

On every node, grant cephadm access to the admin keyring:

setfacl -m u:cephadm:rw /etc/ceph/ceph.client.admin.keyring
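Rather than logging in to each node, the ACL can also be applied from ceph-admin in one pass (a sketch, relying on the passwordless sudo configured earlier):

for host in ceph-admin ceph0{1..4}; do
  ssh cephadm@$host 'sudo setfacl -m u:cephadm:rw /etc/ceph/ceph.client.admin.keyring'
done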

Configure a Manager node

[cephadm@ceph-admin ceph-cluster]$ ceph-deploy mgr create ceph04

Add a standby Manager node

[cephadm@ceph-admin ceph-cluster]$ ceph-deploy mgr create ceph03

Check the cluster's health

[cephadm@ceph-admin ceph-cluster]$ ceph -s
  cluster:
    id:     8a83b874-efa4-4655-b070-704e63553839
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph01,ceph02,ceph03 (age 29m)
    mgr: ceph04(active, since 30s), standbys: ceph03
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:     

Wipe the disks

[cephadm@ceph-admin ceph-cluster]$ ceph-deploy disk zap ceph01 /dev/sdb
[cephadm@ceph-admin ceph-cluster]$ ceph-deploy disk zap ceph02 /dev/sdb
[cephadm@ceph-admin ceph-cluster]$ ceph-deploy disk zap ceph03 /dev/sdb
[cephadm@ceph-admin ceph-cluster]$ ceph-deploy disk zap ceph04 /dev/sdb
[cephadm@ceph-admin ceph-cluster]$ ceph-deploy disk zap ceph01 /dev/sdc
[cephadm@ceph-admin ceph-cluster]$ ceph-deploy disk zap ceph02 /dev/sdc
[cephadm@ceph-admin ceph-cluster]$ ceph-deploy disk zap ceph03 /dev/sdc
[cephadm@ceph-admin ceph-cluster]$ ceph-deploy disk zap ceph04 /dev/sdc
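The eight zap commands can be collapsed into a loop (a sketch; double-check the device names first, since zapping destroys data):

for host in ceph0{1..4}; do
  for dev in sdb sdc; do
    ceph-deploy disk zap $host /dev/$dev
  done
done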

Add OSDs

[cephadm@ceph-admin ceph-cluster]$ ceph-deploy osd create ceph01 --data /dev/sdb
[cephadm@ceph-admin ceph-cluster]$ ceph-deploy osd create ceph02 --data /dev/sdb
[cephadm@ceph-admin ceph-cluster]$ ceph-deploy osd create ceph03 --data /dev/sdb
[cephadm@ceph-admin ceph-cluster]$ ceph-deploy osd create ceph04 --data /dev/sdb
[cephadm@ceph-admin ceph-cluster]$ ceph-deploy osd create ceph01 --data /dev/sdc
[cephadm@ceph-admin ceph-cluster]$ ceph-deploy osd create ceph02 --data /dev/sdc
[cephadm@ceph-admin ceph-cluster]$ ceph-deploy osd create ceph03 --data /dev/sdc
[cephadm@ceph-admin ceph-cluster]$ ceph-deploy osd create ceph04 --data /dev/sdc
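Likewise, OSD creation can be scripted (a sketch; each iteration creates one OSD, bluestore by default, on the given device):

for host in ceph0{1..4}; do
  for dev in sdb sdc; do
    ceph-deploy osd create $host --data /dev/$dev
  done
done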

Afterwards, the "ceph-deploy osd list" command can be used to list the OSDs on a given node:

[cephadm@ceph-admin ceph-cluster]$ ceph-deploy osd list ceph01
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadm/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /bin/ceph-deploy osd list ceph01
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : list
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fef80f60ea8>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  host                          : ['ceph01']
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0x7fef813b1de8>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph01][DEBUG ] connection detected need for sudo
[ceph01][DEBUG ] connected to host: ceph01 
[ceph01][DEBUG ] detect platform information from remote host
[ceph01][DEBUG ] detect machine type
[ceph01][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.6.1810 Core
[ceph_deploy.osd][DEBUG ] Listing disks on ceph01...
[ceph01][DEBUG ] find the location of an executable
[ceph01][INFO  ] Running command: sudo /usr/sbin/ceph-volume lvm list
[ceph01][DEBUG ] 
[ceph01][DEBUG ] 
[ceph01][DEBUG ] ====== osd.0 =======
[ceph01][DEBUG ] 
[ceph01][DEBUG ]   [block]       /dev/ceph-25b4e0c5-0297-41c4-8c84-3166cf46e5a6/osd-block-d9349281-5ae9-49d7-8c8c-ca3774320fbd
[ceph01][DEBUG ] 
[ceph01][DEBUG ]       block device              /dev/ceph-25b4e0c5-0297-41c4-8c84-3166cf46e5a6/osd-block-d9349281-5ae9-49d7-8c8c-ca3774320fbd
[ceph01][DEBUG ]       block uuid                sKPwP3-o1L3-xbBu-az3d-N0MB-XOq9-0psakY
[ceph01][DEBUG ]       cephx lockbox secret      
[ceph01][DEBUG ]       cluster fsid              8a83b874-efa4-4655-b070-704e63553839
[ceph01][DEBUG ]       cluster name              ceph
[ceph01][DEBUG ]       crush device class        None
[ceph01][DEBUG ]       encrypted                 0
[ceph01][DEBUG ]       osd fsid                  d9349281-5ae9-49d7-8c8c-ca3774320fbd
[ceph01][DEBUG ]       osd id                    0
[ceph01][DEBUG ]       type                      block
[ceph01][DEBUG ]       vdo                       0
[ceph01][DEBUG ]       devices                   /dev/sdb
[ceph01][DEBUG ] 
[ceph01][DEBUG ] ====== osd.4 =======
[ceph01][DEBUG ] 
[ceph01][DEBUG ]   [block]       /dev/ceph-f8d33be2-c8c2-4e7f-97ed-892cbe14487c/osd-block-e3102a32-dfb3-42c7-8d6f-617c030808f7
[ceph01][DEBUG ] 
[ceph01][DEBUG ]       block device              /dev/ceph-f8d33be2-c8c2-4e7f-97ed-892cbe14487c/osd-block-e3102a32-dfb3-42c7-8d6f-617c030808f7
[ceph01][DEBUG ]       block uuid                1vdMB5-bjal-IKY2-PBzw-S0c1-48kV-4Hfszq
[ceph01][DEBUG ]       cephx lockbox secret      
[ceph01][DEBUG ]       cluster fsid              8a83b874-efa4-4655-b070-704e63553839
[ceph01][DEBUG ]       cluster name              ceph
[ceph01][DEBUG ]       crush device class        None
[ceph01][DEBUG ]       encrypted                 0
[ceph01][DEBUG ]       osd fsid                  e3102a32-dfb3-42c7-8d6f-617c030808f7
[ceph01][DEBUG ]       osd id                    4
[ceph01][DEBUG ]       type                      block
[ceph01][DEBUG ]       vdo                       0
[ceph01][DEBUG ]       devices                   /dev/sdc

In fact, administrators can also inspect OSD information with the ceph command directly:

[cephadm@ceph-admin ceph-cluster]$ ceph osd stat
8 osds: 8 up (since 58s), 8 in (since 58s); epoch: e33

Or use the following commands for more detail:

[cephadm@ceph-admin ceph-cluster]$ ceph osd ls
0
1
2
3
4
5
6
7
[cephadm@ceph-admin ceph-cluster]$ ceph -s
  cluster:
    id:     8a83b874-efa4-4655-b070-704e63553839
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph01,ceph02,ceph03 (age 37m)
    mgr: ceph04(active, since 7m), standbys: ceph03
    osd: 8 osds: 8 up (since 115s), 8 in (since 115s)

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   8.0 GiB used, 64 GiB / 72 GiB avail
    pgs:     

[cephadm@ceph-admin ceph-cluster]$ ceph osd tree
ID CLASS WEIGHT  TYPE NAME       STATUS REWEIGHT PRI-AFF 
-1       0.07031 root default                            
-3       0.01758     host ceph01                         
 0   hdd 0.00879         osd.0       up  1.00000 1.00000 
 4   hdd 0.00879         osd.4       up  1.00000 1.00000 
-5       0.01758     host ceph02                         
 1   hdd 0.00879         osd.1       up  1.00000 1.00000 
 5   hdd 0.00879         osd.5       up  1.00000 1.00000 
-7       0.01758     host ceph03                         
 2   hdd 0.00879         osd.2       up  1.00000 1.00000 
 6   hdd 0.00879         osd.6       up  1.00000 1.00000 
-9       0.01758     host ceph04                         
 3   hdd 0.00879         osd.3       up  1.00000 1.00000 
 7   hdd 0.00879         osd.7       up  1.00000 1.00000 

That wraps up this article on installing Ceph Nautilus; hopefully it is a helpful reference.


