'ceph-deploy osd prepare' fails on re-deploy if xfs filesystem is not zapped
Bug #1317296 reported by
Dmitry Borodaenko
This bug affects 1 person
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Fuel for OpenStack | Fix Committed | High | Ryan Moe |
Bug Description
Environment: CentOS, Ceph; re-deploying ceph-osd on the same node using the same partition size.
Based on a comment on an older related bug:
https:/
Symptoms are almost identical, but the root cause is different: because we only zap the beginning of the disk, the XFS filesystem remains in place. When a partition is created at the same offset during provisioning, the Ceph automount udev rule picks it up and mounts it, which then causes ceph-deploy to fail.
The fix is to zap the partitions before wiping the partition table.
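The ordering above can be sketched as a small shell function. This is not the actual Fuel patch; the function name, block size, and zap length are assumptions for illustration. The key point is that each partition's start (where the XFS superblock lives) is zeroed before the disk's own partition table is destroyed, so a re-created partition at the same offset cannot expose a stale filesystem to the udev rule.

```shell
# Hypothetical sketch of the fix ordering; zap_disk, bs, and count are
# illustrative choices, not values taken from the real patch.
zap_disk() {
    local disk="$1"
    local part
    # 1. Zero the beginning of every partition first, destroying any
    #    leftover XFS superblock that the Ceph udev automount rule
    #    could otherwise pick up after re-provisioning.
    for part in "${disk}"?*; do
        [ -e "$part" ] || continue
        dd if=/dev/zero of="$part" bs=1M count=10 conv=notrunc 2>/dev/null
    done
    # 2. Only then wipe the start of the disk itself, which removes
    #    the partition table.
    dd if=/dev/zero of="$disk" bs=1M count=10 conv=notrunc 2>/dev/null
}
```

Zapping only the disk header (step 2 alone) reproduces the bug: the partition table is gone, but the filesystem signatures inside the old partitions survive and are re-detected once identical partitions are created.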
tags: added: backports-4.1.1
Changed in fuel:
milestone: 5.0 → 4.1.1
status: Fix Committed → In Progress
Changed in fuel:
milestone: 4.1.1 → 5.0
tags: removed: backports-4.1.1
Changed in fuel:
status: In Progress → Fix Committed
Fix proposed to branch: master
Review: https://review.openstack.org/92737