Cheat Sheet
ceph config
ceph config set {who} {option} {value} : sets a configuration option in the monitor configuration database (for example ceph config set osd.0 debug_ms 20)
ceph config show {who} : shows runtime settings for a running daemon (to see all settings use: ceph config show-with-defaults)
ceph config assimilate-conf -i {input_file} -o {output_file} : ingests a configuration file from the input file and moves any valid options into the monitor configuration database
ceph config help {option} (-f json-pretty) : gets help for a particular option (the option argument is not optional)
ceph tell {who} config set {option} {value} : temporarily sets other settings (for example ceph tell osd.123 config set debug_osd 20). You can also specify wildcards: osd.* (to change settings for all OSDs)
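A minimal sketch of how these fit together when debugging a single daemon (osd.0 and debug_ms are just example values):

    # raise message debug logging for one OSD persistently (stored in the mon config database)
    ceph config set osd.0 debug_ms 20
    # confirm what the running daemon actually uses
    ceph config show osd.0 | grep debug_ms
    # or change it only at runtime, without touching the stored config
    ceph tell osd.0 config set debug_ms 20
    # revert the stored setting when done
    ceph config rm osd.0 debug_ms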
ceph is part of Ceph, a massively scalable, open-source, distributed storage system. Please refer to the Ceph documentation at https://docs.ceph.com for more information.
ceph mon
ceph mon dump {int0-} : dumps formatted monmap; if an integer is given, you get the monmap from epoch {integer}
ceph mon add {name} {IPaddr[:port]} : adds new monitor named {name} at {addr}
ceph mon getmap {int0-} : gets monmap (from specified epoch)
ceph mon remove {name} : removes monitor named {name}
ceph mon stat : summarizes monitor status

ceph mgr
ceph mgr dump : dumps latest MgrMap, which describes the active & standby manager daemons
ceph mgr fail {name} : will mark a manager daemon as failed, removing it from the manager map
ceph mgr module ls : will list currently enabled manager modules (plugins)
ceph mgr module enable {module} : will enable a manager module. Available modules are included in MgrMap and visible via mgr dump
ceph mgr module disable {module} : will disable an active manager module
ceph mgr metadata {name} : will report metadata about all manager daemons or, if the name is specified, a single manager daemon
ceph mgr versions : will report a count of running daemon versions
ceph mgr count-metadata {field} : will report a count of any daemon metadata field

Miscellaneous
ceph tell mon.<id> quorum enter|exit : causes a specific MON to enter or exit quorum.
ceph quorum_status : reports status of monitor quorum.
ceph report {<tags> [<tags>...]} : reports full status of cluster, optional title tag strings.
ceph status : shows cluster status.
ceph -s / --status : see actual ceph status
ceph tell <name (type.id)> <command> [options...] : sends a command to a specific daemon.
ceph tell <name (type.id)> help : lists all available commands.
ceph version : shows mon daemon version
ceph fs dump : gets MDS map
ceph df detail : shows data usage in raw storage and pools
ceph-volume lvm list or ceph device ls : shows all disks

ceph balancer
ceph balancer off / on : disables / enables the ceph balancer
ceph balancer mode crush-compat or upmap : sets balancer mode to crush-compat or upmap (default)
ceph balancer status : gets status of ceph balancer

ceph crash
ceph crash ls / ls-new : shows all mgr module crash dumps (or only list new crash dumps with ls-new)
ceph crash info {crashid} : shows exact information for crash dump with specific crashid
ceph crash stat : lists the timestamp/uuid crashids for all new crash info.
ceph crash archive {crashid} : archives a crash report so that it is no longer considered for the RECENT_CRASH health check and does not appear in the crash ls-new output (it will still appear in the crash ls output).
ceph crash archive-all : archives all crash dumps
ceph crash rm {crashid} : removes crash dump with specific id
ceph crash prune {keep} : removes saved crashes older than 'keep' days. {keep} must be an integer.
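A possible triage sequence (the crash id below is a made-up placeholder for illustration):

    # list only crashes that have not been archived yet
    ceph crash ls-new
    # inspect one of them in detail
    ceph crash info 2024-01-01T00:00:00.000000Z_xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
    # archive it once reviewed, so the RECENT_CRASH warning clears
    ceph crash archive 2024-01-01T00:00:00.000000Z_xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
    # or archive everything at once and prune entries older than 30 days
    ceph crash archive-all
    ceph crash prune 30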
ceph pg
ceph pg stat : shows placement group status.
ceph pg dump {all|summary|sum|delta|pools|osds|pgs|pgs_brief} [{all|summary|sum|delta|pools|osds|pgs|pgs_brief...}] : shows human-readable versions of the pg map (only 'all' valid with plain).
ceph pg dump_json {all|summary|sum|delta|pools|osds|pgs|pgs_brief} [{all|summary|sum|delta|pools|osds|pgs|pgs_brief...}] : shows human-readable version of the pg map in json only.
ceph pg dump_pools_json : shows pg pools info in json only.
ceph pg dump_stuck {inactive|unclean|stale|undersized|degraded [inactive|unclean|stale|undersized|degraded...]} {<int>} : shows information about stuck pgs.
ceph pg getmap : gets binary pg map to -o/stdout.
ceph pg ls {<int>} {<pg-state> [<pg-state>...]} : lists pgs with specific pool, osd, state
ceph pg ls-by-osd <osdname (id|osd.id)> {<int>} {<pg-state> [<pg-state>...]} : lists pgs on osd [osd]
ceph pg ls-by-pool <poolstr> {<int>} {<pg-state> [<pg-state>...]} : lists pgs with pool = [poolname]
ceph pg ls-by-primary <osdname (id|osd.id)> {<int>} {<pg-state> [<pg-state>...]} : lists pgs with primary = [osd]
ceph pg map <pgid> : shows mapping of pg to osds.
ceph pg debug unfound_objects_exist|degraded_pgs_exist : shows debug info about pgs.
ceph pg scrub <pgid> : starts scrub on <pgid>.
ceph pg deep-scrub <pgid> : starts deep-scrub on <pgid>.
ceph pg repair <pgid> : starts repair on <pgid>.

https://docs.ceph.com/en/quincy/rados/operations/pg-states/
https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-pg/

crushtool
crushtool -d {compiled-crushmap-file} -o {output_decomp-crushmap-file} : decompiles a crushmap (obtained with ceph osd getcrushmap -o {file}) to readable format. You can then open it with any common text editor (vim, nano, vi) or read it with cat / less
crushtool -c {modified-crushmap-filename} -o {modified-compiled-crushmap-file} : recompiles the crushmap after modifying it, to the output file (-o)
ceph osd setcrushmap -i {modified-compiled-crushmap-file} : sets new crushmap from file
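A sketch of the full edit cycle these three commands form (file names are arbitrary examples):

    # export the current crush map and decompile it to plain text
    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt
    # edit crushmap.txt with any editor, then recompile and inject it
    crushtool -c crushmap.txt -o crushmap-new.bin
    ceph osd setcrushmap -i crushmap-new.bin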
ceph osd
ceph osd blocklist add {EntityAddr} {<float[0.0-]>} : adds {addr} to blocklist
ceph osd blocklist ls : shows blocklisted clients
ceph osd blocklist rm {EntityAddr} : removes {addr} from blocklist
ceph osd blocked-by : prints a histogram of which OSDs are blocking their peers
ceph osd new {<uuid>} {<id>} -i {<params.json>} : to create a new OSD or recreate a previously destroyed OSD with a specific id. Please look up the documentation if you're planning to use this command.
ceph osd crush add <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...] : adds or updates crushmap position and weight for <name> with <weight> and location <args>.
ceph osd crush add-bucket <name> <type> : adds a no-parent (probably root) crush bucket <name> of type <type>.
ceph osd crush create-or-move <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...] : creates entry or moves existing entry for <name> <weight> at/to location <args>.
ceph osd crush dump : dumps crush map.
ceph osd crush link <name> <args> [<args>...] : links existing entry for <name> under location <args>.
ceph osd crush move <name> <args> [<args>...] : moves existing entry for <name> to location <args>.
ceph osd crush remove <name> {<ancestor>} : removes <name> from crush map (everywhere, or just at <ancestor>).
ceph osd crush rename-bucket <srcname> <dstname> : renames bucket <srcname> to <dstname>
ceph osd crush reweight <name> <float[0.0-]> : changes <name>'s weight to <weight> in crush map.
ceph osd crush reweight-all : recalculates the weights for the tree to ensure they sum correctly
ceph osd crush reweight-subtree <name> <weight> : changes all leaf items beneath <name> to <weight> in crush map
ceph osd crush rm <name> {<ancestor>} : removes <name> from crush map (everywhere, or just at <ancestor>).
ceph osd crush rule create-erasure <name> {<profile>} : creates crush rule <name> for erasure coded pool created with <profile> (default default).
ceph osd crush rule create-simple <name> <root> <type> {firstn|indep} : creates crush rule <name> to start from <root>, replicate across buckets of type <type>, using a choose mode of <firstn|indep> (default firstn; indep best for erasure pools).
ceph osd crush rule dump {<name>} : dumps crush rule <name> (default all).
ceph osd crush rule ls : lists crush rules.
ceph osd crush rule rm <name> : removes crush rule <name>
ceph osd crush set <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...] : updates crushmap position and weight for <name> (given as osdname/osd.id) to <weight> with location <args>.
ceph osd crush show-tunables : shows current crush tunables.
ceph osd crush tree : shows the crush buckets and items in a tree view.
ceph osd crush unlink <name> {<ancestor>} : unlinks <name> from crush map (everywhere, or just at <ancestor>).
ceph osd deep-scrub <who> : initiates deep scrub on specified osd.
ceph osd df {plain|tree} : shows OSD utilization
ceph osd down <ids> [<ids>...] : sets osd(s) <id> [<id>...] down.
ceph osd dump : prints summary of OSD map.
ceph osd find <int[0-]> : finds osd <id> in the CRUSH map and shows its location.
ceph osd getcrushmap : gets CRUSH map.
ceph osd getmap : gets OSD map.
ceph osd getmaxosd : shows largest OSD id
ceph osd in <ids> [<ids>...] : sets osd(s) <id> [<id>...] in.
ceph osd lost <int[0-]> {--yes-i-really-mean-it} : marks osd as permanently lost. THIS DESTROYS DATA IF NO MORE REPLICAS EXIST, BE CAREFUL.
ceph osd ls : shows all OSD ids.
ceph osd lspools : lists pools
ceph osd map <poolname> <objectname> : finds pg for <object> in <pool>.
ceph osd metadata {int[0-]} (default all) : fetches metadata for osd <id>.
ceph osd out <ids> [<ids>...] : sets osd(s) <id> [<id>...] out.
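For instance, a drive replacement often starts by taking a single OSD out and watching data migrate away (osd id 3 is an arbitrary example):

    # mark the OSD out so data starts migrating off it
    ceph osd out 3
    # locate it in the CRUSH hierarchy and watch utilization
    ceph osd find 3
    ceph osd df tree
    # if the change was a mistake, bring it back in
    ceph osd in 3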
ceph osd ok-to-stop <id> [<ids>...] [--max <num>] : checks whether the list of OSD(s) can be stopped without immediately making data unavailable. That is, all data should remain readable and writeable, although data redundancy may be reduced as some PGs may end up in a degraded (but active) state. It will return a success code if it is okay to stop the OSD(s), or an error code and informative message if it is not or if no conclusion can be drawn at the current time.
ceph osd pause : pauses osd.
ceph osd perf : prints dump of OSD perf summary stats.
ceph osd force-create-pg <pgid> : forces creation of pg <pgid>.
ceph osd repair <who> : initiates repair on a specified osd.
ceph osd pool create <poolname> {<int[0-]>} {<int[0-]>} {replicated|erasure} {<erasure_code_profile>} {<rule>} {<int>} {--autoscale-mode=<on,off,warn>} : creates pool.
ceph osd pool delete <poolname> {<poolname>} {--yes-i-really-really-mean-it} : deletes pool. (DATA LOSS, BE CAREFUL!)
ceph osd pool get <poolname> size|min_size|pg_num|pgp_num|crush_rule|write_fadvise_dontneed : gets pool parameter <var>
ceph osd pool get <poolname> all : gets all pool parameters that apply to the pool's type
ceph osd pool get-quota <poolname> : obtains object or byte limits for pool.
ceph osd pool ls {detail} : lists pools
ceph osd pool mksnap <poolname> <snap> : makes snapshot <snap> in <pool>.
ceph osd pool rename <poolname> <poolname> : renames <srcpool> to <destpool>.
ceph osd pool rmsnap <poolname> <snap> : removes snapshot <snap> from <pool>.
ceph osd pool set <poolname> size|min_size|pg_num|pgp_num|crush_rule|hashpspool|nodelete|nopgchange|nosizechange|hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|debug_fake_ec_pool|target_max_bytes|target_max_objects|cache_target_dirty_ratio|cache_target_dirty_high_ratio|cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|min_read_recency_for_promote|write_fadvise_dontneed|hit_set_grade_decay_rate|hit_set_search_last_n <val> {--yes-i-really-mean-it} : sets pool parameter <var> to <val>.
ceph osd pool set-quota <poolname> max_objects|max_bytes <val> : sets object or byte limit on pool.
ceph osd pool stats {<name>} : obtains stats from all pools, or from specified pool.
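Putting the pool commands together, a hypothetical test pool could be created, tuned and quota-limited like this (the name, PG counts and sizes are example values):

    # create a replicated pool with 32 placement groups
    ceph osd pool create testpool 32 32 replicated
    # tune replication: 3 copies, at least 2 required to serve I/O
    ceph osd pool set testpool size 3
    ceph osd pool set testpool min_size 2
    # cap the pool at roughly 100 GiB and check the result
    ceph osd pool set-quota testpool max_bytes 107374182400
    ceph osd pool get testpool all
    ceph osd pool get-quota testpool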
ceph osd destroy <id> {--yes-i-really-mean-it} : marks OSD id as destroyed, removing its cephx entity's keys and all of its dm-crypt and daemon-private config key entries. This command will not remove the OSD from crush, nor will it remove the OSD from the OSD map. Instead, once the command successfully completes, the OSD will show as destroyed. In order to mark an OSD as destroyed, the OSD must first be marked as lost.
ceph osd purge <id> {--yes-i-really-mean-it} : performs a combination of osd destroy, osd rm and osd crush remove.
ceph osd reweight-by-pg {<int[100-]>} {<poolname> [<poolname...]} {--no-increasing} : reweights OSDs by PG distribution [overload-percentage-for-consideration, default 120].
ceph osd reweight-by-utilization {<int[100-]> {<float[0.0-]> {<int[0-]>}}} {--no-increasing} : reweights OSDs by utilization. It only reweights outlier OSDs whose utilization exceeds the average, e.g. the default 120% limits reweight to those OSDs that are more than 20% over the average. [overload-threshold, default 120 [max_weight_change, default 0.05 [max_osds_to_adjust, default 4]]]
ceph osd rm <ids> [<ids>...] : removes osd(s) <id> [<id>...] from the OSD map.
ceph osd safe-to-destroy <id> [<ids>...] : checks whether it is safe to remove or destroy an OSD without reducing overall data redundancy or durability. It will return a success code if it is definitely safe, or an error code and informative message if it is not or if no conclusion can be drawn at the current time.
ceph osd scrub <who> : initiates scrub on specified osd.
ceph osd set pause|noup|nodown|noout|noin|nobackfill|norebalance|norecover|noscrub|nodeep-scrub|notieragent : sets cluster-wide <flag> by updating OSD map. The full flag is not honored anymore since the Mimic release, and ceph osd set full is not supported in the Octopus release.
ceph osd setcrushmap : sets crush map from input file.
ceph osd setmaxosd <int[0-]> : sets new maximum osd value.
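As a closing sketch, a cautious decommissioning of a failed OSD might chain several of the checks above (osd id 7 is an example, and the destructive steps assume its data has already re-replicated):

    # verify the OSD can be stopped and destroyed without risking data
    ceph osd ok-to-stop 7
    ceph osd safe-to-destroy 7
    # take it out of service and wait for backfill to finish (watch ceph -s)
    ceph osd out 7
    # once the cluster is healthy again, remove it completely
    ceph osd purge 7 --yes-i-really-mean-it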