Comment 2 for bug 1323343

Aleksandr Didenko (adidenko) wrote : Re: ceph-deploy osd prepare node-5:/dev/vdb2 returned 1 instead of one of [0]

Reproduced several times on bare-metal (Dell) hardware.

{
    "api": "1.0",
    "astute_sha": "a7eac46348dc77fc2723c6fcc3dbc66cc1a83152",
    "build_id": "2014-05-26_18-06-28",
    "build_number": "24",
    "fuellib_sha": "2f79c0415159651fc1978d99bd791079d1ae4a06",
    "fuelmain_sha": "d7f86968880a484d51f99a9fc439ef21139ea0b0",
    "mirantis": "yes",
    "nailgun_sha": "bd09f89ef56176f64ad5decd4128933c96cb20f4",
    "ostf_sha": "89bbddb78132e2997d82adc5ae5db9dcb7a35bcd",
    "production": "docker",
    "release": "5.0"
}

Environment:
multinode, 1 controller+ceph-osd, 1 compute+ceph-osd, 1 mongodb.
        "volumes_lvm": False,
        "volumes_ceph": True,
        "images_ceph": True,
        "murano": True,
        "sahara": True,
        "ceilometer": True,
        "net_provider": 'neutron',
        "net_segment_type": 'gre',
        "libvirt_type": "kvm"

Sporadic errors during deployments:
2014-05-27T10:14:01.414383+00:00 err: ceph-deploy osd prepare node-2:/dev/sda5 returned 1 instead of one of [0]

More detailed logs:
2014-05-27T10:14:01.398761+00:00 notice: (/Stage[main]/Ceph::Osd/Exec[ceph-deploy osd prepare]/returns) [node-2][INFO ] Running command: ceph-disk-prepare --fs-type xfs --cluster ceph -- /dev/sda5
2014-05-27T10:14:01.399020+00:00 notice: (/Stage[main]/Ceph::Osd/Exec[ceph-deploy osd prepare]/returns) [node-2][ERROR ] Traceback (most recent call last):
2014-05-27T10:14:01.399728+00:00 notice: (/Stage[main]/Ceph::Osd/Exec[ceph-deploy osd prepare]/returns) [node-2][ERROR ] File "/usr/lib/python2.7/dist-packages/ceph_deploy/osd.py", line 126, in prepare_disk
2014-05-27T10:14:01.400017+00:00 notice: (/Stage[main]/Ceph::Osd/Exec[ceph-deploy osd prepare]/returns) [node-2][ERROR ] File "/usr/lib/python2.7/dist-packages/ceph_deploy/util/decorators.py", line 10, in inner
2014-05-27T10:14:01.400017+00:00 notice: (/Stage[main]/Ceph::Osd/Exec[ceph-deploy osd prepare]/returns) [node-2][ERROR ] def inner(*args, **kwargs):
2014-05-27T10:14:01.400265+00:00 notice: (/Stage[main]/Ceph::Osd/Exec[ceph-deploy osd prepare]/returns) [node-2][ERROR ] File "/usr/lib/python2.7/dist-packages/ceph_deploy/util/wrappers.py", line 6, in remote_call
2014-05-27T10:14:01.401206+00:00 notice: (/Stage[main]/Ceph::Osd/Exec[ceph-deploy osd prepare]/returns) [node-2][ERROR ] This allows us to only remote-execute the actual calls, not whole functions.
2014-05-27T10:14:01.401420+00:00 notice: (/Stage[main]/Ceph::Osd/Exec[ceph-deploy osd prepare]/returns) [node-2][ERROR ] File "/usr/lib/python2.7/subprocess.py", line 511, in check_call
2014-05-27T10:14:01.401420+00:00 notice: (/Stage[main]/Ceph::Osd/Exec[ceph-deploy osd prepare]/returns) [node-2][ERROR ] raise CalledProcessError(retcode, cmd)
2014-05-27T10:14:01.401519+00:00 notice: (/Stage[main]/Ceph::Osd/Exec[ceph-deploy osd prepare]/returns) [node-2][ERROR ] CalledProcessError: Command '['ceph-disk-prepare', '--fs-type', 'xfs', '--cluster', 'ceph', '--', '/dev/sda5']' returned non-zero exit status 1
2014-05-27T10:14:01.402453+00:00 notice: (/Stage[main]/Ceph::Osd/Exec[ceph-deploy osd prepare]/returns) [node-2][ERROR ] Traceback (most recent call last):
2014-05-27T10:14:01.402676+00:00 notice: (/Stage[main]/Ceph::Osd/Exec[ceph-deploy osd prepare]/returns) [node-2][ERROR ] File "/usr/sbin/ceph-disk", line 2579, in <module>
2014-05-27T10:14:01.402910+00:00 notice: (/Stage[main]/Ceph::Osd/Exec[ceph-deploy osd prepare]/returns) [node-2][ERROR ] main()
2014-05-27T10:14:01.403715+00:00 notice: (/Stage[main]/Ceph::Osd/Exec[ceph-deploy osd prepare]/returns) [node-2][ERROR ] File "/usr/sbin/ceph-disk", line 2557, in main
2014-05-27T10:14:01.403905+00:00 notice: (/Stage[main]/Ceph::Osd/Exec[ceph-deploy osd prepare]/returns) [node-2][ERROR ] args.func(args)
2014-05-27T10:14:01.403905+00:00 notice: (/Stage[main]/Ceph::Osd/Exec[ceph-deploy osd prepare]/returns) [node-2][ERROR ] File "/usr/sbin/ceph-disk", line 1290, in main_prepare
2014-05-27T10:14:01.404096+00:00 notice: (/Stage[main]/Ceph::Osd/Exec[ceph-deploy osd prepare]/returns) [node-2][ERROR ] verify_not_in_use(args.data)
2014-05-27T10:14:01.405038+00:00 notice: (/Stage[main]/Ceph::Osd/Exec[ceph-deploy osd prepare]/returns) [node-2][ERROR ] File "/usr/sbin/ceph-disk", line 491, in verify_not_in_use
2014-05-27T10:14:01.405228+00:00 notice: (/Stage[main]/Ceph::Osd/Exec[ceph-deploy osd prepare]/returns) [node-2][ERROR ] raise Error('Device is in use by a device-mapper mapping (dm-crypt?)' % dev, ','.join(holders))
2014-05-27T10:14:01.405228+00:00 notice: (/Stage[main]/Ceph::Osd/Exec[ceph-deploy osd prepare]/returns) [node-2][ERROR ] TypeError: not all arguments converted during string formatting
2014-05-27T10:14:01.405228+00:00 notice: (/Stage[main]/Ceph::Osd/Exec[ceph-deploy osd prepare]/returns) [ceph_deploy.osd][ERROR ] Failed to execute command: ceph-disk-prepare --fs-type xfs --cluster ceph -- /dev/sda5
2014-05-27T10:14:01.406363+00:00 notice: (/Stage[main]/Ceph::Osd/Exec[ceph-deploy osd prepare]/returns) [ceph_deploy][ERROR ] GenericError: Failed to create 1 OSDs
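
For reference: judging by the traceback, the underlying refusal comes from ceph-disk's verify_not_in_use() finding a device-mapper holder on /dev/sda5, but the trailing TypeError looks like a secondary bug in ceph-disk's error reporting. The message string passed to Error() contains no %s placeholder, so applying "% dev" to it fails and masks the intended "device is in use" message. A minimal Python sketch of that behaviour (the dev/holders values are invented for illustration, and Error is a stand-in for the exception class in /usr/sbin/ceph-disk):

# Minimal sketch reproducing the trailing TypeError from the traceback above.
class Error(Exception):
    pass

dev = '/dev/sda5'      # illustrative value
holders = ['dm-0']     # illustrative value

try:
    # The format string has no %s placeholder, so evaluating "... % dev"
    # raises TypeError before Error() is even constructed, hiding the
    # intended "device is in use" message.
    raise Error('Device is in use by a device-mapper mapping (dm-crypt?)' % dev,
                ','.join(holders))
except TypeError as exc:
    print(exc)  # prints: not all arguments converted during string formatting

# Presumably the message was meant to include placeholders, e.g.:
#   raise Error('Device %s is in use by a device-mapper mapping (dm-crypt?): %s'
#               % (dev, ','.join(holders)))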