rook-ceph Reset OSD
#11861
Replies: 2 comments
- Talos doesn't mark disks for Ceph in any way. You can use … Also moving …
- I think this is a rook issue. I've found a workaround:

  ```shell
  # Debug pod
  $ kubectl -n kube-system debug node/myworker1 --image ubuntu --profile sysadmin -it

  # Prerequisites: install ceph-volume + ceph-bluestore-tool

  # Wipe disk
  $ ceph-volume lvm zap /host/dev/sda
   stderr: Unknown device "/host/dev/sda": No such device
  --> Zapping: /host/dev/sda
  --> --destroy was not specified, but zapping a whole device will remove the partition table
  --> Removing all BlueStore signature on /host/dev/sda if any...
  Running command: /usr/bin/ceph-bluestore-tool zap-device --dev /host/dev/sda --yes-i-really-really-mean-it
  Running command: /usr/bin/dd if=/dev/zero of=/host/dev/sda bs=1M count=10 conv=fsync
   stderr: 10+0 records in
  10+0 records out
   stderr: 10485760 bytes (10 MB, 10 MiB) copied, 0.0680949 s, 154 MB/s
  --> Zapping successful for: <Raw Device: /host/dev/sda>

  # Restart operator
  $ kubectl rollout restart deploy/rook-ceph-operator
  ```

  Perhaps this is specific to SAS disks? Or is it an incorrect deletion of the Ceph cluster before recreating it?
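The steps in the workaround above can be wrapped in one small script. This is a hedged sketch, not a confirmed fix: it assumes it runs inside the privileged debug pod with `ceph-volume` already installed, `DISK` is a placeholder, the `--destroy` flag is an addition the original zap did not use, and the `rook-ceph` operator namespace is an assumption.

```shell
#!/bin/sh
# Sketch only: automates the zap + operator restart from the workaround above.
# Assumptions: runs inside the privileged debug pod, ceph-volume is installed,
# DISK is the device path as seen from the pod, operator lives in rook-ceph.
set -eu

DISK="${DISK:-/host/dev/sda}"
DRY_RUN="${DRY_RUN:-1}"   # default: print commands instead of running them

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

# Zap LVM + BlueStore signatures; --destroy also tears down LVM volumes
# on the device (the original workaround omitted this flag)
run ceph-volume lvm zap --destroy "$DISK"

# Restart the operator so it re-detects the now-clean disk
run kubectl -n rook-ceph rollout restart deploy/rook-ceph-operator
```

Run it with `DRY_RUN=1` (the default) first to review the commands, then `DRY_RUN=0 DISK=/host/dev/sdX sh wipe.sh` to execute.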
- I can't wipe my SAS disks (non-RAID devices of type megaraid_sas) after deleting and reinstalling rook-ceph. I don't know if it's a Talos problem or just rook-ceph. NVMe disks, on the other hand, are wiped fine by this sanitization procedure.

  Procedure:

  Some metadata resists this sanitization procedure on SAS disks:

  I've commented on this issue, but maybe it's Talos, or maybe a rook issue: rook/rook#15937

  I'm on bare metal, so obviously I can't just throw away the disks...
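  When BlueStore or LVM metadata survives a plain zap, a deeper wipe sometimes helps. The sketch below is loosely based on Rook's general cluster-teardown guidance, not on anything confirmed in this thread; the device path is a placeholder, the `/dev/mapper/ceph-*` pattern is an assumption, and `sgdisk`/`wipefs` must be installed in the debug pod.

  ```shell
  #!/bin/sh
  # Sketch only: deeper wipe for disks where Ceph metadata survives
  # `ceph-volume lvm zap`. Loosely follows Rook's teardown guidance;
  # DISK is a placeholder, the dm name pattern is an assumption.
  set -eu

  DISK="${DISK:-/dev/sdX}"
  DRY_RUN="${DRY_RUN:-1}"   # default: print commands instead of running them

  run() {
      if [ "$DRY_RUN" = "1" ]; then
          echo "would run: $*"
      else
          "$@"
      fi
  }

  run sgdisk --zap-all "$DISK"    # destroy GPT and MBR partition tables
  run wipefs --all "$DISK"        # clear remaining filesystem/LVM signatures
  run dd if=/dev/zero of="$DISK" bs=1M count=100 conv=fsync   # overwrite first 100 MiB

  # Remove leftover ceph device-mapper entries created by ceph-volume's LVM layer
  for dm in /dev/mapper/ceph-*; do
      if [ -e "$dm" ]; then
          run dmsetup remove "$dm"
      fi
  done
  ```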
Thanks for your help :)