> [500] Error running RPC method granular_deploy: 83e35741-5ea2-4381-8e76-8cb51055aedb: MCollective call failed in agent 'puppetd', method 'runonce', failed nodes:
> ID: 1 - Reason: Lock file and PID file exist; puppet is running.
The error has nothing to do with Ceph; it looks like an Astute (or Nailgun) bug.
> Controller+CephOSD+Ironic
> Controller+CephOSD+Ironic
> Controller+CephOSD+Ironic
> ceph osd out 2
In such a small cluster it's better to reweight the OSD being removed to zero before marking it `out':
ceph osd crush reweight osd.2 0
See the official documentation [1] for more details.

[1] http://docs.ceph.com/docs/master/rados/operations/add-or-rm-osds/#take-the-osd-out-of-the-cluster
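For reference, the full removal sequence could look roughly like this. This is a sketch only: it assumes osd.2 (as in the example above), and the exact service name for stopping the daemon may differ between distributions and Ceph releases.

```shell
# Drain the OSD first so data migrates off it while it is still serving reads
ceph osd crush reweight osd.2 0

# Wait for rebalancing to finish before proceeding
ceph -s    # repeat until all PGs are active+clean

# Now it is safe to take the OSD out and remove it from the cluster
ceph osd out 2
systemctl stop ceph-osd@2    # on the node hosting osd.2; service name may differ
ceph osd crush remove osd.2
ceph auth del osd.2
ceph osd rm 2
```

Reweighting first avoids the double rebalance you can get when a small cluster marks an OSD out abruptly.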