Distributed Storage Systems: Ceph Cluster Storage Pool Operations

In the previous post we went over Ceph storage pools, PGs, CRUSH, the basic client I/O workflow, and the steps a Ceph client takes to compute the PG_ID; for a refresher see https://www.cnblogs.com/qiuhom-1874/p/16733806.html. Today we look at the commands for working with storage pools on Ceph and how to use them.
Working with storage pools boils down to listing, creating, renaming, and deleting them. The common tools are all subcommands of "ceph osd pool": ls, create, rename, rm, and so on.
1. Creating a storage pool
Command format for creating a replicated pool
ceph osd pool create <pool-name> <pg-num> [pgp-num] [replicated] [crush-rule-name] [expected-num-objects]
Tip: the only required arguments for a replicated pool are the pool name and the number of PGs; pgp-num and replicated, which set the pool's PGP count and type, can be omitted. When no pool type is specified, a replicated pool is created by default.
Command format for creating an erasure-coded pool
ceph osd pool create <pool-name> <pg-num> <pgp-num> erasure [erasure-code-profile] [crush-rule-name] [expected-num-objects]
Tip: creating an erasure-coded pool requires the pool name, the PG count, the PGP count, and the explicit pool type erasure. A word on PGP: PGP (Placement Group for Placement purpose) is the number of PGs used for placement; its value should equal the PG count. crush-rule-name specifies the name of the CRUSH rule set this pool uses; the referenced rule set must already exist.
The erasure-code-profile parameter specifies the erasure-code profile for the pool. If no profile is given, the create command automatically creates one and uses it when building the associated CRUSH rule set. The default profile defines k=2 and m=1, which means Ceph spreads object data across three OSDs and can lose one of them without losing data. In terms of redundancy this is equivalent to a replicated pool of size 2, but its storage efficiency is 2/3 rather than 1/2.
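If you want something other than the default k=2/m=1, you can define a profile yourself before creating the pool. A minimal sketch (these commands need a running Ceph cluster and an admin keyring; the profile name "myprofile" and the parameter values are illustrative, not from the original post):

```shell
# Define an erasure-code profile with k data chunks and m coding chunks.
# crush-failure-domain=host places each chunk on a different host.
ceph osd erasure-code-profile set myprofile k=2 m=1 crush-failure-domain=host

# Inspect the profile and list all profiles on the cluster.
ceph osd erasure-code-profile get myprofile
ceph osd erasure-code-profile ls
```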
Example: create a replicated pool
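The original post shows this step as a screenshot; a minimal sketch of the command, using the pool name reppool that appears in the listings below (assumes a running cluster):

```shell
# Create a replicated pool with 32 PGs and 32 PGPs.
# No type argument is given, so a replicated pool is created by default.
ceph osd pool create reppool 32 32

# Confirm the pool exists and check its parameters.
ceph osd pool ls detail | grep reppool
```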
Example: create an erasure-coded pool
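This step is also a screenshot in the original post; a minimal sketch, using the pool name erasurepool from the listings below (assumes a running cluster):

```shell
# Create an erasure-coded pool; the type must be given explicitly.
# With no profile named, the default (k=2, m=1) is used, so the pool
# shows up as "erasure size 3" in "ceph osd pool ls detail".
ceph osd pool create erasurepool 32 32 erasure

# Confirm the pool exists and check its parameters.
ceph osd pool ls detail | grep erasurepool
```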
2. Getting information about storage pools
List pools: ceph osd pool ls [detail]
[cephadm@ceph-admin ~]$ ceph osd pool ls
testpool
rbdpool
.rgw.root
default.rgw.control
default.rgw.meta
default.rgw.log
cephfs-metadatpool
cephfs-datapool
reppool
erasurepool
[cephadm@ceph-admin ~]$ ceph osd pool ls detail
pool 1 'testpool' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 16 pgp_num 16 last_change 42 flags hashpspool stripe_width 0
pool 2 'rbdpool' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 81 flags hashpspool,selfmanaged_snaps stripe_width 0 application rbd
        removed_snaps [1~3]
pool 3 '.rgw.root' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 84 owner 18446744073709551615 flags hashpspool stripe_width 0 application rgw
pool 4 'default.rgw.control' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 87 owner 18446744073709551615 flags hashpspool stripe_width 0 application rgw
pool 5 'default.rgw.meta' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 89 owner 18446744073709551615 flags hashpspool stripe_width 0 application rgw
pool 6 'default.rgw.log' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 91 owner 18446744073709551615 flags hashpspool stripe_width 0 application rgw
pool 7 'cephfs-metadatpool' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 99 flags hashpspool stripe_width 0 application cephfs
pool 8 'cephfs-datapool' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 128 pgp_num 128 last_change 99 flags hashpspool stripe_width 0 application cephfs
pool 9 'reppool' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 last_change 126 flags hashpspool stripe_width 0
pool 10 'erasurepool' erasure size 3 min_size 2 crush_rule 1 object_hash rjenkins pg_num 32 pgp_num 32 last_change 130 flags hashpspool stripe_width 8192
[cephadm@ceph-admin ~]$
Tip: appending detail lists each pool's detailed information.
Get pool statistics: ceph osd pool stats [pool-name]
[cephadm@ceph-admin ~]$ ceph osd pool stats reppool
pool reppool id 9
  nothing is going on

[cephadm@ceph-admin ~]$ ceph osd pool stats
pool testpool id 1
  nothing is going on

pool rbdpool id 2
  nothing is going on

pool .rgw.root id 3
  nothing is going on

pool default.rgw.control id 4
  nothing is going on

pool default.rgw.meta id 5
  nothing is going on

pool default.rgw.log id 6
  nothing is going on

pool cephfs-metadatpool id 7
  nothing is going on

pool cephfs-datapool id 8
  nothing is going on

pool reppool id 9
  nothing is going on

pool erasurepool id 10
  nothing is going on

[cephadm@ceph-admin ~]$