
How-to deal with Ceph Monitor DB compaction?

License: Attribution-NonCommercial-ShareAlike 4.0 International

This article is from Suzf Blog. Unless otherwise noted, all content is original to SUZF.NET.

When reposting, please credit: http://suzf.net/post/503

Issue

The cluster health check reports that the monitor stores are getting too big:
mon.ceph1 store is getting too big! 48031 MB >= 15360 MB -- 62% avail
mon.ceph2 store is getting too big! 47424 MB >= 15360 MB -- 63% avail
mon.ceph3 store is getting too big! 46524 MB >= 15360 MB -- 63% avail
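
The 15360 MB threshold in these messages is the monitor's mon_data_size_warn option, which defaults to 15 GB. The same warnings show up in the detailed health output:

sudo ceph health detail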

On each of the three monitor nodes, store.db is close to 50 GB:
du -sch /var/lib/ceph/mon/ceph-ceph1/store.db/
47G     /var/lib/ceph/mon/ceph-ceph1/store.db/
47G     total
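
To compare all three nodes at once, a quick sketch, assuming the hostnames match the monitor names above and SSH access is available:

for h in ceph1 ceph2 ceph3; do ssh $h du -sh /var/lib/ceph/mon/ceph-$h/store.db/; done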

We've set the following in our ceph.conf:
[mon]
mon compact on start = true
Then we restarted one of the monitors to trigger the compaction process.
We noticed that the size of store.db actually grew (and is still growing), when it should have shrunk.
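
For reference, the restart step as a minimal sketch, assuming a systemd deployment where the monitor unit is named ceph-mon@<id> (adjust for your init system):

# restart the monitor so it compacts its store on startup
sudo systemctl restart ceph-mon@ceph1
# watch the store size while the compaction runs
watch -n 10 du -sh /var/lib/ceph/mon/ceph-ceph1/store.db/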

 

However

If mon compact on start is set to true:

The larger the database, the longer the compaction takes, thereby increasing the time for a node to join the cluster and form quorum. (In production, I restarted one mon service and it took more than an hour. That is far too long!)

This probably needs a review alongside any other existing cluster-level heartbeat/failover processes for safety, if this approach is selected.
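
If you do compact the monitors one at a time, confirm that quorum has re-formed before moving on to the next node, for example:

sudo ceph quorum_status --format json-pretty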

Clearly we don't want this on by default, but having the option to turn it on via auto-manage-soft might be nice.

Note that you can also tell a monitor to run compaction on the fly with:

sudo ceph tell mon.{id} compact
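
For example, against the first monitor from the warnings above, then checking the effect:

sudo ceph tell mon.ceph1 compact
du -sch /var/lib/ceph/mon/ceph-ceph1/store.db/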

 
