ceph - changing the ceph journal location


Purpose

Separate the ceph data and journal locations

Environment

Refer to the ceph layout below

[root@ceph-gw-209214 ~]# ceph osd tree
# id    weight  type name       up/down reweight
-1      12      root default
-2      3               host ceph-gw-209214
0       1                       osd.0   up      1
1       1                       osd.1   up      1
2       1                       osd.2   up      1
-4      3               host ceph-gw-209216
6       1                       osd.6   up      1
7       1                       osd.7   up      1
8       1                       osd.8   up      1
-5      3               host ceph-gw-209217
9       1                       osd.9   up      1
10      1                       osd.10  up      1
11      1                       osd.11  up      1
-6      3               host ceph-gw-209219
3       1                       osd.3   up      1
4       1                       osd.4   up      1
5       1                       osd.5   up      1

The corresponding disk partitions are as follows

192.168.209.214
/dev/sdb1                 50G  1.1G   49G   3% /var/lib/ceph/osd/ceph-0
/dev/sdc1                 50G  1.1G   49G   3% /var/lib/ceph/osd/ceph-1
/dev/sdd1                 50G  1.1G   49G   3% /var/lib/ceph/osd/ceph-2
192.168.209.219
/dev/sdb1                 50G  1.1G   49G   3% /var/lib/ceph/osd/ceph-3
/dev/sdc1                 50G  1.1G   49G   3% /var/lib/ceph/osd/ceph-4
/dev/sdd1                 50G  1.1G   49G   3% /var/lib/ceph/osd/ceph-5
192.168.209.216
/dev/sdc1                 50G  1.1G   49G   3% /var/lib/ceph/osd/ceph-7
/dev/sdd1                 50G  1.1G   49G   3% /var/lib/ceph/osd/ceph-8
/dev/sdb1                 50G  1.1G   49G   3% /var/lib/ceph/osd/ceph-6
192.168.209.217
/dev/sdc1                 50G  1.1G   49G   3% /var/lib/ceph/osd/ceph-10
/dev/sdb1                 50G  1.1G   49G   3% /var/lib/ceph/osd/ceph-9
/dev/sdd1                 50G  1.1G   49G   3% /var/lib/ceph/osd/ceph-11

Query the current journal location

[root@ceph-gw-209214 ~]# ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show | grep osd_journal
  "osd_journal": "\/var\/lib\/ceph\/osd\/ceph-0\/journal",

[root@ceph-gw-209214 ~]# ceph --admin-daemon /var/run/ceph/ceph-osd.1.asok config show | grep osd_journal
  "osd_journal": "\/var\/lib\/ceph\/osd\/ceph-1\/journal",

By default, each OSD's ceph journal is stored at /var/lib/ceph/osd/ceph-$id/journal, i.e. inside the OSD data directory itself, as the output above shows.
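
A quick way to confirm this on any OSD host is to look at the journal entry inside the OSD data directory; with this default layout it is normally a plain file (or a symlink when a dedicated journal device is used):

[root@ceph-gw-209214 ~]# ls -l /var/lib/ceph/osd/ceph-0/journal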

How to change the journal location:

Create the directories

Script

#!/bin/bash
# Create one journal directory per OSD data partition on every OSD host.
# The host order below matches the order in which the OSD ids were allocated
# (see the "ceph osd tree" output above), so /var/log/ceph-$num maps to osd.$num.
export LANG=en_US
ips="192.168.209.214 192.168.209.219 192.168.209.216 192.168.209.217"
num=0
for ip in $ips
do
        # non-sda Linux partitions on the remote host, one per OSD
        diskpart=`ssh $ip "fdisk -l | grep Linux | grep -v sda" | awk '{print $1}' | sort`
        for partition in $diskpart
        do
                ssh $ip "mkdir -p /var/log/ceph-$num"
                let num++
        done
done
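
Note that the script relies on two assumptions: every non-sda Linux partition on a host backs exactly one OSD, and the hosts are visited in the same order in which the OSD ids were assigned, so that /var/log/ceph-$num lines up with osd.$num. Check the mapping against the ceph osd tree output before running it.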

The resulting directories:

192.168.209.214 
drwxr-xr-x 2 root root 6 Dec 24 11:04 /var/log/ceph-0
drwxr-xr-x 2 root root 6 Dec 24 11:04 /var/log/ceph-1
drwxr-xr-x 2 root root 6 Dec 24 11:04 /var/log/ceph-2
192.168.209.219
drwxr-xr-x 2 root root 6 Dec 24 11:04 /var/log/ceph-3
drwxr-xr-x 2 root root 6 Dec 24 11:04 /var/log/ceph-4
drwxr-xr-x 2 root root 6 Dec 24 11:04 /var/log/ceph-5
192.168.209.216
drwxr-xr-x 2 root root 6 Dec 24 11:04 /var/log/ceph-6
drwxr-xr-x 2 root root 6 Dec 24 11:04 /var/log/ceph-7
drwxr-xr-x 2 root root 6 Dec 24 11:04 /var/log/ceph-8
192.168.209.217
drwxr-xr-x 2 root root 6 Dec 24 11:04 /var/log/ceph-10
drwxr-xr-x 2 root root 6 Dec 24 11:04 /var/log/ceph-11
drwxr-xr-x 2 root root 6 Dec 24 11:04 /var/log/ceph-9

Modify the configuration file

Add the following to /etc/ceph/ceph.conf:

[osd]
osd journal = /var/log/$cluster-$id/journal
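
The $cluster and $id variables are expanded per daemon, so for osd.1 in the default cluster named ceph this resolves to /var/log/ceph-1/journal, exactly the directory created above. The setting only takes effect on hosts where the config file contains it, so it has to be pushed to every OSD host; a minimal sketch, assuming root ssh access and the host list from the environment section:

for ip in 192.168.209.214 192.168.209.219 192.168.209.216 192.168.209.217
do
        # distribute the updated cluster configuration to every OSD host
        scp /etc/ceph/ceph.conf $ip:/etc/ceph/ceph.conf
done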

Per-OSD switchover steps

Verify the current journal location

[root@ceph-gw-209214 ceph]# ceph --admin-daemon /var/run/ceph/ceph-osd.1.asok config show | grep osd_journal
  "osd_journal": "\/var\/lib\/ceph\/osd\/ceph-1\/journal",

Set noout and stop the osd

[root@ceph-gw-209214 ceph]# ceph osd set noout
set noout
[root@ceph-gw-209214 ceph]# /etc/init.d/ceph  stop osd.1
=== osd.1 ===
Stopping Ceph osd.1 on ceph-gw-209214...kill 2744...kill 2744...done

Move the journal file manually

[root@ceph-gw-209214 ceph]# mv /var/lib/ceph/osd/ceph-1/journal  /var/log/ceph-1/

Start the osd and unset noout

[root@ceph-gw-209214 ceph]# /etc/init.d/ceph  start osd.1
=== osd.1 ===
create-or-move updated item name 'osd.1' weight 0.05 at location {host=ceph-gw-209214,root=default} to crush map
Starting Ceph osd.1 on ceph-gw-209214...
Running as unit run-13260.service.
[root@ceph-gw-209214 ceph]# ceph osd unset noout
unset noout
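
The same sequence is repeated for every OSD. A minimal sketch that loops over the OSDs hosted on one node (here osd.0 through osd.2 on ceph-gw-209214, as listed in the osd tree above); it assumes the journal directories and the ceph.conf change from the previous steps are already in place:

ceph osd set noout
for id in 0 1 2
do
        /etc/init.d/ceph stop osd.$id
        # relocate the journal file into the new per-OSD directory
        mv /var/lib/ceph/osd/ceph-$id/journal /var/log/ceph-$id/
        /etc/init.d/ceph start osd.$id
done
ceph osd unset noout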

Verify

[root@ceph-gw-209214 ceph]# ceph --admin-daemon /var/run/ceph/ceph-osd.1.asok config show | grep osd_journal
  "osd_journal": "\/var\/log\/ceph-1\/journal",
  "osd_journal_size": "1024",

[root@ceph-gw-209214 ceph]# ceph -s
    cluster 1237dd6a-a4f6-43e0-8fed-d9bcc8084bf1
     health HEALTH_OK
     monmap e1: 2 mons at {ceph-gw-209214=192.168.209.214:6789/0,ceph-gw-209216=192.168.209.216:6789/0}, election epoch 8, quorum 0,1 ceph-gw-209214,ceph-gw-209216
     osdmap e189: 12 osds: 12 up, 12 in
      pgmap v6474: 4560 pgs, 10 pools, 1755 bytes data, 51 objects
            10725 MB used, 589 GB / 599 GB avail
                4560 active+clean
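
Once all OSDs on a host have been switched over, the journal path can be checked for every daemon in one pass through the admin sockets; a minimal sketch, run locally on each OSD host:

for sock in /var/run/ceph/ceph-osd.*.asok
do
        echo "$sock"
        ceph --admin-daemon $sock config show | grep osd_journal
done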