
Ceph is not picking up LVs on NixOS #16841

@daniel-naegele

Description

Is this a bug report or feature request?

  • Bug Report

Deviation from expected behavior:
The osd-prepare pod does not recognize the LV and does not create an OSD on it.

How to reproduce it (minimal and precise):
Create a NixOS node with LVM and an LV intended for Ceph. The LV is not picked up. The disk layout is created declaratively from a Nix configuration using disko. lsblk output:

sda              8:0    0 76.3G  0 disk  
├─sda1           8:1    0    2M  0 part  
├─sda2           8:2    0    1G  0 part  /boot
└─sda3           8:3    0 75.3G  0 part  
  └─crypted    254:0    0 75.3G  0 crypt 
    ├─pool-osd 254:1    0   30G  0 lvm   
    └─pool-os  254:2    0 45.3G  0 lvm   /

File(s) to submit:

Storage configuration from Ceph Cluster Spec:

      storage:
        useAllNodes: true
        nodes:
          - name: de-fsn1-01
            devices:
              - name: /dev/disk/by-id/dm-name-pool-osd

Logs to submit:
Logs from the osd-prepare pod:

2025-12-12 19:16:48.273069 D | cephosd: &{Name:dm-1 Parent: HasChildren:false DevLinks:/dev/disk/by-id/dm-name-pool-osd /dev/disk/by-id/dm-uuid-LVM-x9VPTFchLmNnEn7C4F5jCl8rWZ1Y7ANPAuHmF0cCk7T6MwCBkejdJbeMZIPmxZtZ /dev/mapper/pool-osd /dev/pool/osd /dev/disk/by-diskseq/5 Size:32212254720 UUID: Serial: Type:lvm Rotational:false Readonly:false Partitions:[] Filesystem: Mountpoint: Vendor: Model: WWN: WWNVendorExtension: Empty:false CephVolumeData: RealPath:/dev/mapper/pool-osd KernelName:dm-1 Encrypted:false}
...
2025-12-12 19:16:48.891309 D | exec: Running command: lsblk /dev/mapper/pool-osd --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE
2025-12-12 19:16:48.895333 D | sys: lsblk output: "SIZE=\"32212254720\" ROTA=\"0\" RO=\"0\" TYPE=\"lvm\" PKNAME=\"\" NAME=\"/dev/mapper/pool-osd\" KNAME=\"/dev/dm-1\" MOUNTPOINT=\"\" FSTYPE=\"\""
2025-12-12 19:16:48.895357 D | exec: Running command: dmsetup info -c --noheadings -o name /dev/mapper/pool-osd
2025-12-12 19:16:48.896687 D | exec: Running command: dmsetup splitname --noheadings pool-osd
2025-12-12 19:16:48.897944 D | exec: Running command: ceph-volume lvm list --format json pool/osd
2025-12-12 19:16:49.107797 I | cephosd: skipping device "dm-1": failed to check if the device "dm-1" is available. failed to determine if the device was available. failed to determine if the device "/dev/mapper/pool-osd" is available. failed to execute ceph-volume lvm list on LV "pool/osd". exit status 1.
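For context on the `dmsetup splitname` step in the log: device-mapper encodes the VG/LV pair in a single name joined by `-`, escaping literal hyphens inside either name as `--`. A minimal Python sketch of that decoding (my own illustration, not Rook's actual Go code):

```python
def split_dm_name(name):
    """Split a device-mapper name like 'pool-osd' into (vg, lv).

    A single '-' separates VG from LV; '--' is an escaped literal
    hyphen within a name (standard device-mapper convention).
    """
    vg = []
    i = 0
    while i < len(name):
        if name[i] == '-':
            if i + 1 < len(name) and name[i + 1] == '-':
                vg.append('-')   # escaped hyphen, part of the VG name
                i += 2
                continue
            # single hyphen: the VG/LV separator
            lv = name[i + 1:].replace('--', '-')
            return ''.join(vg), lv
        vg.append(name[i])
        i += 1
    return ''.join(vg), ''
```

Applied to the name in the log, `pool-osd` splits into `pool` and `osd`, matching the `pool/osd` argument Rook then passes to ceph-volume, so the name handling up to that point looks correct; the failure is in executing ceph-volume itself.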

The ceph-volume lvm list command fails inside the pod; however, I can run the same command on the host without problems:

[nix-shell:~]$ sudo ceph-volume lvm list --format json pool/osd
{}
[nix-shell:~]$ echo $?
0
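As an aside, the `KEY="value"` pairs that `lsblk --pairs` prints (quoted in the debug log above) follow shell-style quoting, so they can be decoded with a few lines of Python when inspecting these logs. This is a hypothetical helper for debugging, not part of Rook:

```python
import shlex

def parse_lsblk_pairs(line):
    """Decode one line of `lsblk --pairs` output into a dict.

    shlex.split strips the double quotes; each remaining token
    has the form KEY=value.
    """
    return dict(tok.split('=', 1) for tok in shlex.split(line))
```

For the line in the log this yields, among others, `TYPE` = `lvm` and an empty `FSTYPE`, i.e. lsblk itself sees the LV as a blank device.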

Environment:

  • OS: NixOS 25.05
  • Kernel (e.g. uname -a): 6.12.48
  • Rook version: v1.18.8
  • Storage backend version: 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)
  • Kubernetes version: v1.32.7+k3s1
  • Kubernetes cluster type: K3s
