ceph-objectstore-tool
A tool that can operate on Ceph's lowest-level data, down to the PG and object level. It can read and modify PG- and object-level data, and perform simple repairs on problematic PGs and objects. Use it with caution (back up the data before any modifying operation) to avoid data loss caused by mistakes.
The tool mainly targets objects and PGs, so its usage falls into the following two groups.
PG operations:
info            # show the PG's info
log             # show the PG's log metadata
remove          # remove the PG from the current OSD
mkfs            # re-initialize the OSD
fsck            # check BlueFS data consistency
export          # export the PG's data and metadata
import          # import PG data and metadata
list            # list the objects inside a PG, or all objects on the current OSD
fix-lost        # fix the PG's lost objects
list-pgs        # list all PGs on the current OSD
rm-past-intervals
dump-journal    # FileStore only: dump the journal
dump-super      # dump the OSD's superblock
meta-list       # list the metadata objects
get-osdmap      # get the osdmap
set-osdmap      # set the osdmap
get-inc-osdmap  # get the incremental osdmap from the current OSD
set-inc-osdmap  # set the incremental osdmap on the current OSD
mark-complete   # mark the PG complete, so the PG is considered able to elect an authoritative log
Object operations (only the common ones are listed here):
list-attrs       # list the object's attributes
list-omap        # list the object's omap entries
remove|removeall # remove the object, or all objects
dump             # dump the object's metadata
Basic command usage is as follows:
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-8/ --type bluestore xxx
The first half of the command is generally as above: specify the OSD data path and the storage backend (bluestore or filestore).
PS: the OSD being operated on must be stopped first, otherwise the tool reports an error.
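For example, on a systemd deployment (a hedged sketch; osd.8 matches the examples below), stop the daemon first and bring it back when finished:
systemctl stop ceph-osd@8     # stop the OSD before running ceph-objectstore-tool against it
systemctl start ceph-osd@8    # restart the OSD once the offline operations are done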
Check whether BlueStore's BlueFS file system is corrupted, and attempt a repair:
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-8/ --type bluestore --op fsck
List all objects on the current OSD:
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-8/ --type bluestore --op list
Get the osdmap from the current OSD, writing it to the given output file:
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-8/ --type bluestore --op get-osdmap --file 1.txt
[root@node1 ceph]# ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-8/ --type bluestore --op get-osdmap --file 1.txt
osdmap#1457 exported.
Import an osdmap into the current OSD:
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-8/ --type bluestore --op set-osdmap --file 1.txt
[root@node1 ceph]# ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-8/ --type bluestore --op set-osdmap --file 1.txt
Wrote osdmap.145
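A typical use of the get-osdmap/set-osdmap pair is to copy a known-good map into an OSD whose local copy is damaged. A minimal sketch under two assumptions: osd.7 is a hypothetical healthy OSD, and your build supports --epoch to select a specific map version:
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-7/ --type bluestore --op get-osdmap --epoch 1457 --file osdmap.1457   # export from the healthy OSD
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-8/ --type bluestore --op set-osdmap --file osdmap.1457               # inject it into the damaged OSD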
Get the current OSD's superblock:
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-8/ --type bluestore --op dump-super
The output is as follows:
{
"cluster_fsid": "fa27f041-0ce9-4df1-a4bd-5e37678834bd",
"osd_fsid": "c03c2fdb-cfd4-42bd-8e32-61e359281078",
"whoami": 8,
"current_epoch": 1457,
"oldest_map": 784,
"newest_map": 1457,
"weight": 0.000000,
"compat": { "compat": { },"ro_compat": { },"incompat": { "feature_1": "initial feature set(~v.18)","feature_2": "pginfo object","feature_3": "object locator","feature_4": "last_epoch_clean","feature_5": "categories","feature_6": "hobjectpool","feature_7": "biginfo","feature_8": "leveldbinfo","feature_9": "leveldblog","feature_10": "snapmapper","feature_11": "sharded objects","feature_12": "transaction hints","feature_13": "pg meta object","feature_14": "explicit missing set","feature_15": "fastinfo pg attr","feature_16": "deletes in missing set"}
},
"clean_thru": 1457,
"last_epoch_mounted": 1456
}
List all PGs on the current OSD:
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-8/ --type bluestore --op list-pgs
Here --op specifies the operation to perform against the current OSD or PG.
17.es1
17.as0
17.8s1
17.4s2
17.3s1
17.3fs0
17.3ds0
17.3cs2
17.23s2
17.2es0
View a single PG's info:
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-8/ --type bluestore --pgid 17.es1 --op info
The output is as follows:
{
"pgid": "17.es1",
"last_update": "1380'5116",
"last_complete": "1380'5116",
"log_tail": "1376'4854",
"last_user_version": 5118,
"last_backfill": "MAX",
"last_backfill_bitwise": 0,
"purged_snaps": [],
"history": { "epoch_created": 1290,"epoch_pool_created": 1290,"last_epoch_started": 1457,"last_interval_started": 1456,"last_epoch_clean": 1350,"last_interval_clean": 1349,"last_epoch_split": 0,"last_epoch_marked_full": 0,"same_up_since": 1456,"same_interval_since": 1456,"same_primary_since": 1290,"last_scrub": "0'0","last_scrub_stamp": "2019-08-12 11:14:04.515869","last_deep_scrub": "0'0","last_deep_scrub_stamp": "2019-08-12 11:14:04.515869","last_clean_scrub_stamp": "2019-08-12 11:14:04.515869"...
View a PG's log:
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-8/ --type bluestore --pgid 17.es1 --op log
The output shows the pg epoch and the corresponding PG minor version:
"pg_log_t": {
    "head": "1380'5116",
    "tail": "1376'4854",
    "log": [
        {
            "op": "modify",
            "object": "17:713bbf48:::rbd_data.18.cf83c74b0dc51.0000000000023ba0:head",
            "version": "1376'4855",
            "prior_version": "0'0",
            "reqid": "client.852676.0:164001",
            "extra_reqids": [],
            "mtime": "2019-08-12 14:40:08.571459",
            "return_code": 0,
            "mod_desc": {
                "object_mod_desc": {
                    "can_local_rollback": true,
                    "rollback_info_completed": true,
                    "ops": [{ "code": "CREATE"}]
                }
            }
        },
        ...
List the OSD's metadata objects:
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-8/ --type bluestore --pgid 17.es1 --op meta-list
["meta",{ "oid":"osdmap.1372","key":"","snapid":0,"hash":168878088,"max":0,"pool":-1,"namespace":"","max":0}]
["meta",{ "oid":"osdmap.1053","key":"","snapid":0,"hash":168945672,"max":0,"pool":-1,"namespace":"","max":0}]
["meta",{ "oid":"osdmap.1266","key":"","snapid":0,"hash":168892424,"max":0,"pool":-1,"namespace":"","max":0}]
["meta",{ "oid":"osdmap.1101","key":"","snapid":0,"hash":168920072,"max":0,"pool":-1,"namespace":"","max":0}]
...
Export a PG (data and metadata) to a file:
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-8/ --type bluestore --pgid 17.es1 --op export --file /mnt/test.obj
Remove the PG from the current OSD:
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-8/ --type bluestore --pgid 17.es1 --op remove
Import the PG back from the previously exported file:
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-8/ --type bluestore --pgid 17.es1 --op import --file /mnt/test.obj
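Chained together, export and import can also move a PG between OSDs. A minimal hedged sketch, where osd.3 as the target is hypothetical and both OSDs are stopped first:
systemctl stop ceph-osd@8      # source OSD offline
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-8/ --type bluestore --pgid 17.es1 --op export --file /mnt/test.obj
systemctl stop ceph-osd@3      # hypothetical target OSD, also offline
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-3/ --type bluestore --pgid 17.es1 --op import --file /mnt/test.obj
systemctl start ceph-osd@3
systemctl start ceph-osd@8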
Mark an incomplete PG as complete. Depending on the cluster's replica count, the PG has to be marked on all of the related OSDs:
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-8/ --type bluestore --pgid 17.es1 --op mark-complete
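Since every OSD holding the PG needs the same treatment, here is a hedged sketch for a PG spread over osd.3, osd.5 and osd.8 (hypothetical ids; for an EC pool the shard suffix in the pgid differs per OSD, while a replicated pool uses the same pgid everywhere):
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-3/ --type bluestore --pgid 17.es0 --op mark-complete   # hypothetical shard s0 on osd.3
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-5/ --type bluestore --pgid 17.es2 --op mark-complete   # hypothetical shard s2 on osd.5
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-8/ --type bluestore --pgid 17.es1 --op mark-complete   # shard s1 on osd.8, as above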
List the objects inside a PG:
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-8/ --type bluestore --pgid 17.es1 --op list
["17.es1",{ "oid":"rbd_data.18.cf83c74b0dc51.000000000001bda5","key":"","snapid":-2,"hash":1725693966,"max":0,"pool":17,"namespace":"","shard_id":1,"max":0}]
["17.es1",{ "oid":"rbd_data.18.cf83c74b0dc51.0000000000033137","key":"","snapid":-2,"hash":1782448142,"max":0,"pool":17,"namespace":"","shard_id":1,"max":0}]
["17.es1",{ "oid":"rbd_data.18.cf83c74b0dc51.000000000003f128","key":"","snapid":-2,"hash":1246789646,"max":0,"pool":17,"namespace":"","shard_id":1,"max":0
...
Dump an object's metadata:
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-8/ --type bluestore rbd_data.18.cf83c74b0dc51.000000000003925a dump
{
"id": {
    "oid": "rbd_data.18.cf83c74b0dc51.000000000003925a",
    "key": "",
    "snapid": -2,
    "hash": 1962672078,
    "max": 0,
    "pool": 17,
    "namespace": "",
    "shard_id": 1,
    "max": 0
},
"info": {
    "oid": {
        "oid": "rbd_data.18.cf83c74b0dc51.000000000003925a",
        "key": "",
        "snapid": -2,
        "hash": 1962672078,
        "max": 0,
        "pool": 17,
        "namespace": ""
    },
    ...
List an object's attributes:
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-8/ --type bluestore rbd_data.18.cf83c74b0dc51.000000000003925a list-attrs
[root@node1 ceph]# ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-8/ --type bluestore rbd_data.18.cf83c74b0dc51.000000000003925a list-attrs
_
hinfo_key
snapset
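To read the raw value of one of these attributes, the tool also offers a get-attr op in the same positional style; a hedged example, piped through hexdump since attribute values are usually binary:
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-8/ --type bluestore rbd_data.18.cf83c74b0dc51.000000000003925a get-attr hinfo_key | hexdump -C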
Remove an object:
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-8/ --type bluestore rbd_data.18.cf83c74b0dc51.000000000003925a remove
Test this carefully; only try it if you have a backup of the PG:
[root@node1 ceph]# ceph-objectstore-tool-bak --data-path /var/lib/ceph/osd/ceph-8/ --type bluestore rbd_data.18.cf83c74b0dc51.000000000003925a remove
remove 1#17:73ffdf2e:::rbd_data.18.cf83c74b0dc51.000000000003925a:head#
Check whether the object is still present in the PG:
ceph-objectstore-tool-bak --data-path /var/lib/ceph/osd/ceph-8/ --type bluestore --pgid 17.es1 --op list|grep rbd_data.18.cf83c74b0dc51.000000000003925a
There is no output, so the object is gone. Since this tool operates directly on Ceph's low-level data and metadata, use it with caution and back up at every step. If you are interested, you can read the tool's source code under the src/tools/ceph_objectstore_tool.cc path; after modifying the code, the rebuilt ceph-objectstore-tool binary and libceph-common.so.0 both need to be copied to the test machine before it will work.
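A minimal hedged sketch of that rebuild-and-copy cycle, assuming an upstream cmake build tree and that the library is looked up under /usr/lib64/ceph/ on the target node (paths vary by distro):
cd ceph/build
make -j8 ceph-objectstore-tool                         # rebuild only the tool target
scp bin/ceph-objectstore-tool node1:/usr/bin/          # copy the rebuilt binary
scp lib/libceph-common.so.0 node1:/usr/lib64/ceph/     # copy the common library alongside it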