Kicking a storage node's OSD out of the cluster, formatting it, and rejoining it to the cluster (the same steps apply when adding a brand-new OSD)

Overview

We are working on ceph-50, an OSD on storage node stor07.
This OSD needs to be rebuilt and rejoined to the cluster.

Recording the OSD information

Pay attention to the comments after each command.

[root@stor07 ~]# df -h
Filesystem                    Size  Used Avail Use% Mounted on
/dev/mapper/rhel_stor07-root  500G  9.2G  491G   2% /
devtmpfs                       30G     0   30G   0% /dev
tmpfs                          31G     0   31G   0% /dev/shm
tmpfs                          31G   61M   31G   1% /run
tmpfs                          31G     0   31G   0% /sys/fs/cgroup
/dev/sdh1                     5.5T  4.2T  1.4T  76% /var/lib/ceph/osd/ceph-48
/dev/sdi1                     5.5T  4.0T  1.6T  73% /var/lib/ceph/osd/ceph-49
/dev/sdf1                     5.5T  4.0T  1.6T  72% /var/lib/ceph/osd/ceph-55
/dev/sda1                     5.5T  3.7T  1.8T  68% /var/lib/ceph/osd/ceph-50 [note the data disk device: /dev/sda1]
/dev/sdc1                     5.5T  3.7T  1.8T  68% /var/lib/ceph/osd/ceph-52
/dev/sdd1                     5.5T  3.8T  1.8T  69% /var/lib/ceph/osd/ceph-53
/dev/sde1                     5.5T  3.5T  2.1T  63% /var/lib/ceph/osd/ceph-54
/dev/sdb1                     5.5T  4.1T  1.4T  75% /var/lib/ceph/osd/ceph-51
/dev/sdj2                    1014M  169M  846M  17% /boot
/dev/mapper/rhel_stor07-home   50G   33M   50G   1% /home
tmpfs                         6.2G     0  6.2G   0% /run/user/0
[root@stor07 ~]# cd /var/lib/ceph/osd/ceph-50 [enter the directory of the OSD being rebuilt]
[root@stor07 ceph-50]# ls -l
total 80
-rw-r--r--.   1 root root   776 Jan 29  2018 activate.monmap
-rw-r--r--.   1 root root     3 Jan 29  2018 active
-rw-r--r--.   1 root root    37 Jan 29  2018 ceph_fsid
drwxr-xr-x. 508 root root 24576 Mar 28 13:49 current
-rw-r--r--.   1 root root    37 Jan 29  2018 fsid
lrwxrwxrwx    1 root root     9 Dec 29  2019 journal -> /dev/sdg3 [note the journal partition: /dev/sdg3]
-rw-------.   1 root root    57 Jan 29  2018 keyring
-rw-r--r--.   1 root root    21 Jan 29  2018 magic
-rw-r--r--.   1 root root     6 Jan 29  2018 ready
-rw-r--r--.   1 root root     4 Jan 29  2018 store_version
-rw-r--r--.   1 root root    53 Jan 29  2018 superblock
-rw-r--r--.   1 root root     0 Jan 29  2018 sysvinit
-rw-r--r--.   1 root root     3 Jan 29  2018 whoami
[root@stor07 ceph-50]# 

Rebuilding the OSD

The following rebuilds osd.50.

1. Run on the storage node that owns the OSD [put the OSD into maintenance mode]

  • [root@stor07 ceph-50]# ceph osd set noout [keep CRUSH from automatically rebalancing while the OSD is down for maintenance]

  • [root@stor07 ceph-50]# ceph osd set nodeep-scrub [scrubbing can hurt performance while the cluster is recovering; set this together with noscrub to stop scrubbing]

  • [root@stor07 ceph-50]# ceph osd tree [check the current state]

  • Stop the OSD service [skip if it is already stopped; "already stopped" here means this OSD had already failed]; an alternative stop command and a quick check are sketched just below
    [root@stor07 ceph-50]# ceph osd stop osd.50
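
If the `ceph osd stop` command is not available in your Ceph release, the daemon can also be stopped directly on stor07 through the sysvinit service script (the same mechanism the ceph-deploy activate log later uses to start osd.0). A minimal sketch; the OSD ID 50 and cluster name ceph come from this example, so verify them on your own node:

service ceph --cluster ceph stop osd.50   # stop the osd.50 daemon via sysvinit
ceph -s | grep -i flags                   # confirm noout,nodeep-scrub are set
ceph osd tree | grep -w 'osd\.50'         # osd.50 should now show as down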

2. Run on any mon node [kick the OSD out of the cluster]

If you are not sure which nodes are mon nodes, see this post: how to find the mon nodes in OpenStack.

  • Kick the failed osd.50 out of the cluster
[root@stor02 ~]# ceph osd out osd.50
[root@stor02 ~]# ceph osd crush remove osd.50
[root@stor02 ~]# ceph auth del osd.50
[root@stor02 ~]# ceph osd rm osd.50

[Even if the OSD is down, the commands above still trigger data rebalancing.] A quick verification is sketched below.
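
A small sketch to confirm the removal took effect and that recovery has started, using only commands that already appear in this procedure:

ceph osd tree | grep -w 'osd\.50'   # should print nothing once osd.50 has been removed
ceph -s                             # pg states such as backfilling/remapped show that data is moving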

3. Run on the node that owns the OSD

[root@stor07 ceph-50]# umount /var/lib/ceph/osd/ceph-50 [unmount osd.50]
[root@stor07 ceph-50]# ceph -s [check]
[root@stor07 ceph-50]# ceph osd tree [check]


Partitioning the journal disk [skip this; do not run it]

This is only a demonstration of what to do if the journal disk has been wiped by mistake.
Note also that the journal device name can change; if it does, the OSD will not come up. See this post for details: https://cuichongxin.blog.csdn.net/article/details/111516678

Run on the node that owns the OSD:
lsblk [record the journal partition layout]
(screenshot of the lsblk output omitted)

dd if=/dev/zero of=/dev/sdg bs=1M count=10 oflag=sync [wipes the disk for rebuilding; if the journal disk still works, do not run this]
parted -s /dev/sdg mklabel gpt [write a new GPT partition table]
parted -s /dev/sdg mkpart primary 2048s 20G [start partitioning; each journal partition is 20G]
parted -s /dev/sdg mkpart primary 20G 40G [run this once per data disk; each partition on sdg is the journal for one data disk, and after creation lsblk shows them as sdg1, sdg2, ...; see the loop sketch below]
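
Because one 20G journal partition is needed per data disk, the repeated mkpart calls can be scripted. A minimal sketch, assuming 10 data disks and journal partitions laid out back to back on /dev/sdg (again, do not run this against a journal disk that is still in use):

DEV=/dev/sdg
parted -s $DEV mklabel gpt                 # new GPT label, wipes the old table
parted -s $DEV mkpart primary 2048s 20G    # first 20G journal partition
for i in $(seq 1 9); do                    # nine more 20G partitions, back to back
    parted -s $DEV mkpart primary $((i*20))G $(((i+1)*20))G
done
lsblk $DEV                                 # partitions appear as sdg1 ... sdg10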


Set the partition type code on each of the 10 journal partitions in turn; sdg1 and sdg2 are shown as examples [skip]. A loop over all 10 partitions is sketched after this block.
sgdisk --typecode=1:45b0969e-9b03-4f30-b4c6-b4b80ceff106 /dev/sdg [only the partition number after typecode= and the device at the end change; the rest is fixed]
sgdisk --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 /dev/sdg [only the partition number after typecode= and the device at the end change; the rest is fixed]
[After setting the type codes, run 'partprobe <device>' to refresh the partition table, or reboot the system.]
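
Since the type-code GUID is the same for every journal partition, the ten sgdisk calls can be looped; a sketch assuming partitions 1 through 10 on /dev/sdg:

for n in $(seq 1 10); do
    sgdisk --typecode=${n}:45b0969e-9b03-4f30-b4c6-b4b80ceff106 /dev/sdg
done
partprobe /dev/sdg    # refresh the kernel's view of the partition table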

Deleting a journal partition record and recovering it

Normally nobody deletes these. If a journal partition is deleted by mistake, work out its start and end sectors from the neighbouring partitions and recreate it with the command below.
For example, if the sdg3 journal partition was deleted, its start sector is the end sector of sdg2 plus 1, and its end sector is the start sector of sdg4 minus 1.

[If you intend to delete a journal partition, first use fdisk to record its start and end sectors, then recreate it with the command below.] [Run on the storage node that owns the OSD.]

parted -s /dev/sdg mkpart primary 206176256s 374865919s [recreate the journal partition at the recorded position and size; a worked example of the sector arithmetic follows]
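
A worked example of the sector arithmetic described above, assuming parted reports sdg2 ending at sector 206176255 and sdg4 starting at sector 374865920 (illustrative values; read the real boundaries from your own disk):

parted /dev/sdg unit s print    # list partitions with their start/end sectors
# suppose the output shows: sdg2 ends at 206176255s, sdg4 starts at 374865920s
# the missing sdg3 therefore spans:
#   start = 206176255 + 1 = 206176256
#   end   = 374865920 - 1 = 374865919
parted -s /dev/sdg mkpart primary 206176256s 374865919s
partprobe /dev/sdg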


4. Comment out osd.50's old partition entry in /etc/fstab

Run on the storage node that owns the OSD.

[root@stor07 ~]# vi /etc/fstab
#
# /etc/fstab
# Created by anaconda on Wed Jan 10 20:18:28 2018
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/rhel_stor07-root /                       xfs     defaults        0 0
UUID=397e7ef7-6de7-4058-860f-353109622220 /boot                   xfs     defaults        0 0
/dev/mapper/rhel_stor07-home /home                   xfs     defaults        0 0
/dev/mapper/rhel_stor07-swap swap                    swap    defaults        0 0
UUID=203285cc-6edc-4f96-9784-728a6dc701e7 /var/lib/ceph/osd/ceph-48 xfs defaults 0 0
UUID=1afa842a-fca3-42f3-8ad0-f89c4e77a998 /var/lib/ceph/osd/ceph-49 xfs defaults 0 0
#UUID=43be7d9d-7484-4fa9-b15a-cb55211e1222 /var/lib/ceph/osd/ceph-50 xfs defaults 0 0 [commented out]

5. Partition the osd.50 disk on the node

Run on the storage node that owns the OSD.

[root@stor07 ~]# lsblk
[root@stor07 ~]# dd if=/dev/zero of=/dev/sda bs=1M count=10 oflag=sync [destroys the existing partition table; make sure /dev/sda really is the disk to be reformatted, do not wipe the wrong disk]
[root@stor07 ~]# parted /dev/sda mklabel gpt [write a new GPT partition table]
[root@stor07 ~]# parted /dev/sda mkpart primary 2048s 100% [create one partition spanning the whole disk]
[root@stor07 ~]# mkfs.xfs /dev/sda1 [create the XFS filesystem]
[root@stor07 ~]# blkid /dev/sda1 [look up the new UUID]
/dev/sda1: UUID="8ab9c12a-363b-4d98-9202-cfd64d52abc8" TYPE="xfs" PARTLABEL="primary" PARTUUID="ebf17b0a-eb90-479b-af0d-f97ac405e4e4" 
[root@stor07 ~]# vi /etc/fstab [update the entry commented out earlier]
UUID=8ab9c12a-363b-4d98-9202-cfd64d52abc8 /var/lib/ceph/osd/ceph-50 xfs defaults 0 0 [replace the old UUID in /etc/fstab with the new one]
[root@stor07 ~]# sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d /dev/sda [set the OSD data partition type code; only the partition number after typecode= (1 means the first partition) and the device change, the rest is fixed]
The operation has completed successfully.
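
Before handing the disk to ceph-deploy it is worth confirming that the kernel sees the new partition table and that the UUID written to /etc/fstab matches the disk. A minimal check, using only tools already used above:

partprobe /dev/sda       # re-read the partition table (or reboot)
lsblk /dev/sda           # sda1 should span the whole disk
blkid /dev/sda1          # the UUID here must match the new /etc/fstab entry
grep ceph-50 /etc/fstab  # the updated mount entry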

6. Run on a storage mon node [add the OSD back into the cluster]

Note: we are re-adding what used to be osd.50 on stor07. After it rejoins the cluster its ID changes: Ceph hands out the lowest unused ID starting from 0, so the rebuilt OSD may come back as osd.0 or osd.1. This is normal; an OSD's ID stays fixed unless it is removed and re-added.

[root@stor02 ~]# pwd
/root
[root@stor02 ~]# ceph-de [then press Tab to confirm the command exists]
ceph-debugpack  ceph-dencoder   ceph-deploy
[root@stor02 ~]# ceph-deploy --overwrite-conf osd prepare stor07:/dev/sda1:/dev/sdg3 [prepare the OSD; everything before the last argument is fixed, and the last argument is host:data:journal, i.e. the hostname of the storage node that owns the OSD (use its IP if name resolution is not set up), the reformatted data partition, and the journal partition]
[root@stor02 ~]# ceph-deploy --overwrite-conf osd activate stor07:/dev/sda1:/dev/sdg3 [activate the OSD; same argument format as above]
[root@stor02 ~]# ceph -s [check that backfilling has started]

7. Unset the cluster flags [only after the resync has finished]

This can be run on any node in the cluster [including mon nodes]. A quick verification follows the commands.

[root@stor02 ~]# ceph osd unset noout
[root@stor02 ~]# ceph osd unset nodeep-scrub
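
A quick check that the flags are really gone and the cluster is settling:

ceph -s | grep -i flags    # noout,nodeep-scrub should no longer be listed
ceph health                # should return to HEALTH_OK once backfilling completes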

Full command output of step 6 above

[Run on the mon node] [step 6]

[root@controller01 ~]# ssh [mon node IP]
root@10.'s password: 
Last login: Mon Mar 29 21:06:43 2021 from controller01
 Authorized users only. All activity may be monitored and reported 
[root@stor02 ~]# pwd
/root
[root@stor02 ~]# ceph-deploy --overwrite-conf osd prepare stor07:/dev/sda1:/dev/sdg3
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.31): /usr/bin/ceph-deploy --overwrite-conf osd prepare stor07:/dev/sda1:/dev/sdg3
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  disk                          : [('stor07', '/dev/sda1', '/dev/sdg3')]
[ceph_deploy.cli][INFO  ]  dmcrypt                       : False
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : True
[ceph_deploy.cli][INFO  ]  subcommand                    : prepare
[ceph_deploy.cli][INFO  ]  dmcrypt_key_dir               : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x3fff8442e5f0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  fs_type                       : xfs
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0x3fff84424cf8>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  zap_disk                      : False
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks stor07:/dev/sda1:/dev/sdg3
 Authorized users only. All activity may be monitored and reported 
 Authorized users only. All activity may be monitored and reported 
[stor07][DEBUG ] connected to host: stor07 
[stor07][DEBUG ] detect platform information from remote host
[stor07][DEBUG ] detect machine type
[stor07][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: Red Hat Enterprise Linux Server 7.3 Maipo
[ceph_deploy.osd][DEBUG ] Deploying osd to stor07
[stor07][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.osd][DEBUG ] Preparing host stor07 disk /dev/sda1 journal /dev/sdg3 activate False
[stor07][INFO  ] Running command: ceph-disk -v prepare --cluster ceph --fs-type xfs -- /dev/sda1 /dev/sdg3
[stor07][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[stor07][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
[stor07][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[stor07][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
[stor07][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_cryptsetup_parameters
[stor07][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_dmcrypt_key_size
[stor07][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_dmcrypt_type
[stor07][WARNIN] DEBUG:ceph-disk:Journal /dev/sdg3 is a partition
[stor07][WARNIN] WARNING:ceph-disk:OSD will not be hot-swappable if journal is not the same device as the osd data
[stor07][WARNIN] INFO:ceph-disk:Running command: /usr/sbin/blkid -p -o udev /dev/sdg3
[stor07][WARNIN] WARNING:ceph-disk:Journal /dev/sdg3 was not prepared with ceph-disk. Symlinking directly.
[stor07][WARNIN] DEBUG:ceph-disk:OSD data device /dev/sda1 is a partition
[stor07][WARNIN] DEBUG:ceph-disk:Creating xfs fs on /dev/sda1
[stor07][WARNIN] INFO:ceph-disk:Running command: /usr/sbin/mkfs -t xfs -f -f -- /dev/sda1
[stor07][DEBUG ] meta-data=/dev/sda1              isize=512    agcount=32, agsize=45780928 blks
[stor07][DEBUG ]          =                       sectsz=4096  attr=2, projid32bit=1
[stor07][DEBUG ]          =                       crc=1        finobt=0, sparse=0
[stor07][DEBUG ] data     =                       bsize=4096   blocks=1464989696, imaxpct=5
[stor07][DEBUG ]          =                       sunit=64     swidth=64 blks
[stor07][DEBUG ] naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
[stor07][DEBUG ] log      =internal log           bsize=4096   blocks=521728, version=2
[stor07][DEBUG ]          =                       sectsz=4096  sunit=1 blks, lazy-count=1
[stor07][DEBUG ] realtime =none                   extsz=4096   blocks=0, rtextents=0
[stor07][WARNIN] DEBUG:ceph-disk:Mounting /dev/sda1 on /var/lib/ceph/tmp/mnt.N0i3Ne with options rw,noexec,nodev,noatime,nodiratime,nobarrier
[stor07][WARNIN] INFO:ceph-disk:Running command: /usr/bin/mount -t xfs -o rw,noexec,nodev,noatime,nodiratime,nobarrier -- /dev/sda1 /var/lib/ceph/tmp/mnt.N0i3Ne
[stor07][WARNIN] DEBUG:ceph-disk:Preparing osd data dir /var/lib/ceph/tmp/mnt.N0i3Ne
[stor07][WARNIN] DEBUG:ceph-disk:Creating symlink /var/lib/ceph/tmp/mnt.N0i3Ne/journal -> /dev/sdg3
[stor07][WARNIN] DEBUG:ceph-disk:Unmounting /var/lib/ceph/tmp/mnt.N0i3Ne
[stor07][WARNIN] INFO:ceph-disk:Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.N0i3Ne
[stor07][WARNIN] INFO:ceph-disk:calling partx on prepared device /dev/sda1
[stor07][WARNIN] INFO:ceph-disk:re-reading known partitions will display errors
[stor07][WARNIN] INFO:ceph-disk:Running command: /usr/sbin/partx -a /dev/sda1
[stor07][WARNIN] partx: /dev/sda: error adding partition 1
[stor07][INFO  ] checking OSD status...
[stor07][INFO  ] Running command: ceph --cluster=ceph osd stat --format=json
[stor07][WARNIN] there are 10 OSDs down
[stor07][WARNIN] there are 10 OSDs out
[ceph_deploy.osd][DEBUG ] Host stor07 is now ready for osd use.
[root@stor02 ~]# ceph-deploy --overwrite-conf osd activate stor07:/dev/sda1:/dev/sdg3
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.31): /usr/bin/ceph-deploy --overwrite-conf osd activate stor07:/dev/sda1:/dev/sdg3
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : True
[ceph_deploy.cli][INFO  ]  subcommand                    : activate
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x3fff9c60e5f0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0x3fff9c604cf8>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  disk                          : [('stor07', '/dev/sda1', '/dev/sdg3')]
[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks stor07:/dev/sda1:/dev/sdg3
 Authorized users only. All activity may be monitored and reported 
 Authorized users only. All activity may be monitored and reported 
[stor07][DEBUG ] connected to host: stor07 
[stor07][DEBUG ] detect platform information from remote host
[stor07][DEBUG ] detect machine type
[stor07][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: Red Hat Enterprise Linux Server 7.3 Maipo
[ceph_deploy.osd][DEBUG ] activating host stor07 disk /dev/sda1
[ceph_deploy.osd][DEBUG ] will use init type: sysvinit
[stor07][INFO  ] Running command: ceph-disk -v activate --mark-init sysvinit --mount /dev/sda1
[stor07][WARNIN] INFO:ceph-disk:Running command: /sbin/blkid -p -s TYPE -ovalue -- /dev/sda1
[stor07][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[stor07][WARNIN] DEBUG:ceph-disk:Mounting /dev/sda1 on /var/lib/ceph/tmp/mnt.o3FDsc with options rw,noexec,nodev,noatime,nodiratime,nobarrier
[stor07][WARNIN] INFO:ceph-disk:Running command: /usr/bin/mount -t xfs -o rw,noexec,nodev,noatime,nodiratime,nobarrier -- /dev/sda1 /var/lib/ceph/tmp/mnt.o3FDsc
[stor07][WARNIN] DEBUG:ceph-disk:Cluster uuid is f5bf95c8-94ee-4a95-8e18-1e7f4a1db07a
[stor07][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[stor07][WARNIN] DEBUG:ceph-disk:Cluster name is ceph
[stor07][WARNIN] DEBUG:ceph-disk:OSD uuid is 24ba59e8-b124-4769-8365-10b54d9fc559
[stor07][WARNIN] DEBUG:ceph-disk:Allocating OSD id...
[stor07][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd create --concise 24ba59e8-b124-4769-8365-10b54d9fc559
[stor07][WARNIN] DEBUG:ceph-disk:OSD id is 0
[stor07][WARNIN] DEBUG:ceph-disk:Initializing OSD...
[stor07][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/tmp/mnt.o3FDsc/activate.monmap
[stor07][WARNIN] got monmap epoch 3
[stor07][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /var/lib/ceph/tmp/mnt.o3FDsc/activate.monmap --osd-data /var/lib/ceph/tmp/mnt.o3FDsc --osd-journal /var/lib/ceph/tmp/mnt.o3FDsc/journal --osd-uuid 24ba59e8-b124-4769-8365-10b54d9fc559 --keyring /var/lib/ceph/tmp/mnt.o3FDsc/keyring
[stor07][WARNIN] SG_IO: bad/missing sense data, sb[]:  70 00 05 00 00 00 00 0d 00 00 00 00 20 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[stor07][WARNIN] 2021-03-29 21:21:37.064642 3fffb7a8b130 -1 journal check: ondisk fsid e471d76e-5421-41b7-bcdc-ea1d7f45b22d doesn't match expected 24ba59e8-b124-4769-8365-10b54d9fc559, invalid (someone else's?) journal
[stor07][WARNIN] SG_IO: bad/missing sense data, sb[]:  70 00 05 00 00 00 00 0d 00 00 00 00 20 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[stor07][WARNIN] SG_IO: bad/missing sense data, sb[]:  70 00 05 00 00 00 00 0d 00 00 00 00 20 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[stor07][WARNIN] SG_IO: bad/missing sense data, sb[]:  70 00 05 00 00 00 00 0d 00 00 00 00 20 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[stor07][WARNIN] 2021-03-29 21:21:37.818211 3fffb7a8b130 -1 filestore(/var/lib/ceph/tmp/mnt.o3FDsc) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory
[stor07][WARNIN] 2021-03-29 21:21:38.747692 3fffb7a8b130 -1 created object store /var/lib/ceph/tmp/mnt.o3FDsc journal /var/lib/ceph/tmp/mnt.o3FDsc/journal for osd.0 fsid f5bf95c8-94ee-4a95-8e18-1e7f4a1db07a
[stor07][WARNIN] 2021-03-29 21:21:38.747744 3fffb7a8b130 -1 auth: error reading file: /var/lib/ceph/tmp/mnt.o3FDsc/keyring: can't open /var/lib/ceph/tmp/mnt.o3FDsc/keyring: (2) No such file or directory
[stor07][WARNIN] 2021-03-29 21:21:38.747878 3fffb7a8b130 -1 created new key in keyring /var/lib/ceph/tmp/mnt.o3FDsc/keyring
[stor07][WARNIN] DEBUG:ceph-disk:Marking with init system sysvinit
[stor07][WARNIN] DEBUG:ceph-disk:Authorizing OSD key...
[stor07][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring auth add osd.0 -i /var/lib/ceph/tmp/mnt.o3FDsc/keyring osd allow * mon allow profile osd
[stor07][WARNIN] added key for osd.0
[stor07][WARNIN] DEBUG:ceph-disk:ceph osd.0 data dir is ready at /var/lib/ceph/tmp/mnt.o3FDsc
[stor07][WARNIN] DEBUG:ceph-disk:Moving mount to final location...
[stor07][WARNIN] INFO:ceph-disk:Running command: /bin/mount -o rw,noexec,nodev,noatime,nodiratime,nobarrier -- /dev/sda1 /var/lib/ceph/osd/ceph-0
[stor07][WARNIN] INFO:ceph-disk:Running command: /bin/umount -l -- /var/lib/ceph/tmp/mnt.o3FDsc
[stor07][WARNIN] DEBUG:ceph-disk:Starting ceph osd.0...
[stor07][WARNIN] INFO:ceph-disk:Running command: /usr/sbin/service ceph --cluster ceph start osd.0
[stor07][DEBUG ] === osd.0 === 
[stor07][WARNIN] create-or-move updating item name 'osd.0' weight 5.46 at location {host=stor07,root=default} to crush map
[stor07][DEBUG ] Starting Ceph osd.0 on stor07...
[stor07][WARNIN] Running as unit ceph-osd.0.1617024100.005826609.service.
[stor07][INFO  ] checking OSD status...
[stor07][INFO  ] Running command: ceph --cluster=ceph osd stat --format=json
[stor07][WARNIN] there are 11 OSDs down
[stor07][WARNIN] there are 11 OSDs out
[stor07][INFO  ] Running command: systemctl enable ceph
[stor07][WARNIN] ceph.service is not a native service, redirecting to /sbin/chkconfig.
[stor07][WARNIN] Executing /sbin/chkconfig ceph on
[root@stor02 ~]# ceph -s
    cluster f5bf95c8-94ee-4a95-8e18-1e7f4a1db07a
     health HEALTH_WARN
            113 pgs backfill
            1 pgs backfill_toofull
            143 pgs backfilling
            1 pgs degraded
            146 pgs peering
            104 pgs stuck inactive
            417 pgs stuck unclean
            recovery 2/148340327 objects degraded (0.000%)
            recovery 3454292/148340327 objects misplaced (2.329%)
            1 near full osd(s)
            noout,nodeep-scrub flag(s) set
     monmap e3: 5 mons at {stor02=10.:6789/0,stor03=10.:6789/0,stor04=.204:6789/0,stor05=.205:6789/0,stor06=10:6789/0}
            election epoch 13278, quorum 0,1,2,3,4 stor02,stor03,stor04,stor05,stor06
     osdmap e56053: 160 osds: 149 up, 149 in; 362 remapped pgs
            flags noout,nodeep-scrub
      pgmap v57715095: 14848 pgs, 5 pools, 190 TB data, 47631 kobjects
            571 TB used, 241 TB / 812 TB avail
            2/148340327 objects degraded (0.000%)
            3454292/148340327 objects misplaced (2.329%)
               14361 active+clean
                 143 active+remapped+backfilling
                 118 peering
                 112 active+remapped+wait_backfill
                  42 activating
                  28 remapped+peering
                  25 activating+remapped
                  10 inactive
                   3 active+clean+scrubbing
                   2 active+clean+scrubbing+deep
                   2 remapped
                   1 activating+degraded
                   1 active+remapped+wait_backfill+backfill_toofull
[root@stor02 ~]# ceph osd tree |more
ID  WEIGHT    TYPE NAME       UP/DOWN REWEIGHT PRIMARY-AFFINITY 
 -1 873.59863 root default                                      
 -2  43.67993     host stor01                                   
  2   5.45999         osd.2        up  1.00000          1.00000 
  3   5.45999         osd.3      down        0          1.00000 
  4   5.45999         osd.4        up  1.00000          1.00000 
  5   5.45999         osd.5      down        0          1.00000 
  6   5.45999         osd.6      down        0          1.00000 
  7   5.45999         osd.7      down        0          1.00000 
160   5.45999         osd.160      up  1.00000          1.00000 
161   5.45999         osd.161    down        0          1.00000 
 -3  43.67993     host stor02                                   
  8   5.45999         osd.8        up  1.00000          1.00000 
  9   5.45999         osd.9        up  1.00000          1.00000 
 10   5.45999         osd.10       up  1.00000          1.00000 
 11   5.45999         osd.11       up  1.00000          1.00000 
 12   5.45999         osd.12       up  1.00000          1.00000 
 13   5.45999         osd.13       up  1.00000          1.00000 
 14   5.45999         osd.14       up  1.00000          1.00000 
 15   5.45999         osd.15       up  1.00000          1.00000 
 -4  43.67993     host stor03                                   
 16   5.45999         osd.16       up  1.00000          1.00000 
 17   5.45999         osd.17     down        0          1.00000 
 18   5.45999         osd.18       up  1.00000          1.00000 
 19   5.45999         osd.19       up  1.00000          1.00000 
 20   5.45999         osd.20       up  1.00000          1.00000 
 21   5.45999         osd.21       up  1.00000          1.00000 
 22   5.45999         osd.22       up  1.00000          1.00000 
 23   5.45999         osd.23       up  1.00000          1.00000 
 -5  43.67993     host stor04                                   
 24   5.45999         osd.24       up  1.00000          1.00000 
 25   5.45999         osd.25       up  1.00000          1.00000 
 26   5.45999         osd.26       up  1.00000          1.00000 
 27   5.45999         osd.27       up  1.00000          1.00000 
 28   5.45999         osd.28       up  1.00000          1.00000 
 29   5.45999         osd.29       up  1.00000          1.00000 
 30   5.45999         osd.30       up  1.00000          1.00000 
 31   5.45999         osd.31       up  1.00000          1.00000 
 -6  43.67993     host stor05                                   
 32   5.45999         osd.32       up  1.00000          1.00000 
 33   5.45999         osd.33       up  1.00000          1.00000 
 34   5.45999         osd.34       up  1.00000          1.00000 
 35   5.45999         osd.35       up  1.00000          1.00000 
 36   5.45999         osd.36       up  1.00000          1.00000 
 37   5.45999         osd.37       up  1.00000          1.00000 
 38   5.45999         osd.38       up  1.00000          1.00000 
 39   5.45999         osd.39       up  1.00000          1.00000 
 -7  43.67993     host stor06                                   
 40   5.45999         osd.40       up  1.00000          1.00000 
 41   5.45999         osd.41       up  1.00000          1.00000 
 42   5.45999         osd.42       up  1.00000          1.00000 
 43   5.45999         osd.43       up  1.00000          1.00000 
 44   5.45999         osd.44       up  1.00000          1.00000 
 45   5.45999         osd.45       up  1.00000          1.00000 
 46   5.45999         osd.46       up  1.00000          1.00000 
 47   5.45999         osd.47       up  1.00000          1.00000 
 -8  43.67993     host stor07                                   
 48   5.45999         osd.48       up  1.00000          1.00000 
 49   5.45999         osd.49       up  1.00000          1.00000 
 51   5.45999         osd.51       up  1.00000          1.00000 
 52   5.45999         osd.52       up  1.00000          1.00000 
 53   5.45999         osd.53       up  1.00000          1.00000 
 54   5.45999         osd.54       up  1.00000          1.00000 
 55   5.45999         osd.55       up  1.00000          1.00000 
  0   5.45999         osd.0      down        0          1.00000 
 -9  43.67993     host stor08                                   
 56   5.45999         osd.56       up  1.00000          1.00000 
 57   5.45999         osd.57       up  1.00000          1.00000 
 58   5.45999         osd.58       up  1.00000          1.00000 
 59   5.45999         osd.59       up  1.00000          1.00000 
 60   5.45999         osd.60       up  1.00000          1.00000 
 61   5.45999         osd.61       up  1.00000          1.00000 
 62   5.45999         osd.62       up  1.00000          1.00000 
 63   5.45999         osd.63       up  1.00000          1.00000 
-10  43.67993     host stor09                                   
 64   5.45999         osd.64       up  1.00000          1.00000 
 65   5.45999         osd.65       up  1.00000          1.00000 
 66   5.45999         osd.66       up  1.00000          1.00000 
 67   5.45999         osd.67       up  1.00000          1.00000 
 68   5.45999         osd.68       up  1.00000          1.00000 
 69   5.45999         osd.69       up  1.00000          1.00000 
 70   5.45999         osd.70       up  1.00000          1.00000 
[root@stor02 ~]# ceph osd tree |more
ID  WEIGHT    TYPE NAME       UP/DOWN REWEIGHT PRIMARY-AFFINITY 
 -1 873.59863 root default                                      
 -2  43.67993     host stor01                                   
  2   5.45999         osd.2        up  1.00000          1.00000 
  3   5.45999         osd.3      down        0          1.00000 
  4   5.45999         osd.4        up  1.00000          1.00000 
  5   5.45999         osd.5      down        0          1.00000 
  6   5.45999         osd.6      down        0          1.00000 
  7   5.45999         osd.7      down        0          1.00000 
160   5.45999         osd.160      up  1.00000          1.00000 
161   5.45999         osd.161    down        0          1.00000 
 -3  43.67993     host stor02                                   
  8   5.45999         osd.8        up  1.00000          1.00000 
  9   5.45999         osd.9        up  1.00000          1.00000 
 10   5.45999         osd.10       up  1.00000          1.00000 
 11   5.45999         osd.11       up  1.00000          1.00000 
 12   5.45999         osd.12       up  1.00000          1.00000 
 13   5.45999         osd.13       up  1.00000          1.00000 
 14   5.45999         osd.14       up  1.00000          1.00000 
 15   5.45999         osd.15       up  1.00000          1.00000 
 -4  43.67993     host stor03                                   
 16   5.45999         osd.16       up  1.00000          1.00000 
 17   5.45999         osd.17     down        0          1.00000 
 18   5.45999         osd.18       up  1.00000          1.00000 
 19   5.45999         osd.19       up  1.00000          1.00000 
 20   5.45999         osd.20       up  1.00000          1.00000 
 21   5.45999         osd.21       up  1.00000          1.00000 
 22   5.45999         osd.22       up  1.00000          1.00000 
 23   5.45999         osd.23       up  1.00000          1.00000 
 -5  43.67993     host stor04                                   
 24   5.45999         osd.24       up  1.00000          1.00000 
 25   5.45999         osd.25       up  1.00000          1.00000 
 26   5.45999         osd.26       up  1.00000          1.00000 
 27   5.45999         osd.27       up  1.00000          1.00000 
 28   5.45999         osd.28       up  1.00000          1.00000 
 29   5.45999         osd.29       up  1.00000          1.00000 
 30   5.45999         osd.30       up  1.00000          1.00000 
 31   5.45999         osd.31       up  1.00000          1.00000 
 -6  43.67993     host stor05                                   
 32   5.45999         osd.32       up  1.00000          1.00000 
 33   5.45999         osd.33       up  1.00000          1.00000 
 34   5.45999         osd.34       up  1.00000          1.00000 
 35   5.45999         osd.35       up  1.00000          1.00000 
 36   5.45999         osd.36       up  1.00000          1.00000 
 37   5.45999         osd.37       up  1.00000          1.00000 
 38   5.45999         osd.38       up  1.00000          1.00000 
 39   5.45999         osd.39       up  1.00000          1.00000 
 -7  43.67993     host stor06                                   
 40   5.45999         osd.40       up  1.00000          1.00000 
 41   5.45999         osd.41       up  1.00000          1.00000 
 42   5.45999         osd.42       up  1.00000          1.00000 
 43   5.45999         osd.43       up  1.00000          1.00000 
 44   5.45999         osd.44       up  1.00000          1.00000 
 45   5.45999         osd.45       up  1.00000          1.00000 
 46   5.45999         osd.46       up  1.00000          1.00000 
 47   5.45999         osd.47       up  1.00000          1.00000 
 -8  43.67993     host stor07                                   
 48   5.45999         osd.48       up  1.00000          1.00000 
 49   5.45999         osd.49       up  1.00000          1.00000 
 51   5.45999         osd.51       up  1.00000          1.00000 
 52   5.45999         osd.52       up  1.00000          1.00000 
 53   5.45999         osd.53       up  1.00000          1.00000 
 54   5.45999         osd.54       up  1.00000          1.00000 
 55   5.45999         osd.55       up  1.00000          1.00000 
  0   5.45999         osd.0        up  1.00000          1.00000 
 -9  43.67993     host stor08                                   
 56   5.45999         osd.56       up  1.00000          1.00000 
 57   5.45999         osd.57       up  1.00000          1.00000 
 58   5.45999         osd.58       up  1.00000          1.00000 
 59   5.45999         osd.59       up  1.00000          1.00000 
 60   5.45999         osd.60       up  1.00000          1.00000 
 61   5.45999         osd.61       up  1.00000          1.00000 
 62   5.45999         osd.62       up  1.00000          1.00000 
 63   5.45999         osd.63       up  1.00000          1.00000 
-10  43.67993     host stor09                                   
 64   5.45999         osd.64       up  1.00000          1.00000 
 65   5.45999         osd.65       up  1.00000          1.00000 
 66   5.45999         osd.66       up  1.00000          1.00000 
 67   5.45999         osd.67       up  1.00000          1.00000 
 68   5.45999         osd.68       up  1.00000          1.00000 
 69   5.45999         osd.69       up  1.00000          1.00000 
 70   5.45999         osd.70       up  1.00000          1.00000 
 71   5.45999         osd.71       up  1.00000          1.00000 
-11  43.67993     host stor10                                   
 72   5.45999         osd.72       up  1.00000          1.00000 
 73   5.45999         osd.73       up  1.00000          1.00000 
 74   5.45999         osd.74       up  1.00000          1.00000 
 75   5.45999         osd.75       up  1.00000          1.00000 
 76   5.45999         osd.76       up  1.00000          1.00000 
 77   5.45999         osd.77       up  1.00000          1.00000 
 78   5.45999         osd.78     down        0          1.00000 
 79   5.45999         osd.79       up  1.00000          1.00000 
-12  43.67993     host stor11                                   
 80   5.45999         osd.80       up  1.00000          1.00000 
 81   5.45999         osd.81       up  1.00000          1.00000 
 82   5.45999         osd.82       up  1.00000          1.00000 
 83   5.45999         osd.83       up  1.00000          1.00000 
 84   5.45999         osd.84       up  1.00000          1.00000 
 85   5.45999         osd.85       up  1.00000          1.00000 
 86   5.45999         osd.86       up  1.00000          1.00000 
 87   5.45999         osd.87       up  1.00000          1.00000 
-13  43.67993     host stor12                                   
 88   5.45999         osd.88       up  1.00000          1.00000 
 89   5.45999         osd.89       up  1.00000          1.00000 
 90   5.45999         osd.90       up  1.00000          1.00000 
 91   5.45999         osd.91       up  1.00000          1.00000 
 92   5.45999         osd.92       up  1.00000          1.00000 
 93   5.45999         osd.93       up  1.00000          1.00000 
 94   5.45999         osd.94       up  1.00000          1.00000 
 95   5.45999         osd.95       up  1.00000          1.00000 
-14  43.67993     host stor13                                   
 96   5.45999         osd.96       up  1.00000          1.00000 
 97   5.45999         osd.97       up  1.00000          1.00000 
 98   5.45999         osd.98       up  1.00000          1.00000 
 99   5.45999         osd.99       up  1.00000          1.00000 
100   5.45999         osd.100      up  1.00000          1.00000 
101   5.45999         osd.101      up  1.00000          1.00000 
102   5.45999         osd.102    down        0          1.00000 
103   5.45999         osd.103      up  1.00000          1.00000 
-15  43.67993     host stor14                                   
104   5.45999         osd.104      up  1.00000          1.00000 
105   5.45999         osd.105      up  1.00000          1.00000 
106   5.45999         osd.106      up  1.00000          1.00000 
[root@stor02 ~]#