Mcollective is launched more than once
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
mcollective (Ubuntu) | Invalid | Medium | Marc Cluet |
Bug Description
Using mcollective 1.2.1+dfsg-2ubuntu1 in Ubuntu Precise:
root@wrk4 (staging) (wrk-c2c):~# ps axuww | grep mco
root 9753 0.0 0.0 11380 928 pts/0 S+ 13:42 0:00 grep mco
root@wrk4 (staging) (wrk-c2c):~# service mcollective start
mcollective start/running, process 9759
root@wrk4 (staging) (wrk-c2c):~# ps axuww | grep mco
root 9762 4.4 0.7 109916 29528 ? Sl 13:42 0:00 ruby /usr/sbin/
root 9766 4.8 0.7 109920 29532 ? Sl 13:42 0:00 ruby /usr/sbin/
root 9771 5.0 0.7 109960 29532 ? Sl 13:42 0:00 ruby /usr/sbin/
root 9789 4.8 0.7 109916 29524 ? Sl 13:42 0:00 ruby /usr/sbin/
root 9812 4.8 0.7 109960 29532 ? Sl 13:42 0:00 ruby /usr/sbin/
root 9837 4.6 0.7 109960 29528 ? Sl 13:42 0:00 ruby /usr/sbin/
root 9859 6.2 0.7 109956 29528 ? Sl 13:42 0:00 ruby /usr/sbin/
root 9878 5.5 0.7 109920 29536 ? Sl 13:42 0:00 ruby /usr/sbin/
root 9890 6.5 0.7 109964 29532 ? Sl 13:42 0:00 ruby /usr/sbin/
root 9913 6.0 0.7 109924 29540 ? Sl 13:42 0:00 ruby /usr/sbin/
root 9921 5.2 0.7 109960 29532 ? Sl 13:42 0:00 ruby /usr/sbin/
root 9969 0.0 0.0 11380 928 pts/0 S+ 13:42 0:00 grep mco
The init script comes from the package:
root@wrk4 (staging) (wrk-c2c):~# cat /etc/init/
description "mcollective daemon"
author "Marc Cluet <email address hidden>"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
env RUBYLIB=
exec /usr/sbin/
Launching the exec line manually only starts one instance of mcollective.
Upstart is version 1.5-0ubuntu1.
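For context: Upstart can supervise a forking daemon, but only if the job file declares how many times the process forks via an `expect` stanza; without it, Upstart follows the original PID, sees it exit after the fork, and `respawn` starts a new copy. A sketch of a hypothetical job file for a daemonizing mcollective (not the packaged one; the binary path and config flag are assumptions):

```
# /etc/init/mcollective.conf -- hypothetical variant, NOT the shipped job
description "mcollective daemon"

start on runlevel [2345]
stop on runlevel [!2345]
respawn

# Tell Upstart the process double-forks, so it tracks the final
# daemon PID instead of respawning when the parent exits.
expect daemon

exec /usr/sbin/mcollectived --config /etc/mcollective/server.cfg
```

The shipped job instead omits `expect` and relies on the daemon staying in the foreground, which is why the packaged configuration disables daemonizing.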
Changed in mcollective (Ubuntu):
status: New → Invalid
assignee: nobody → Marc Cluet (lynxman)
Hi Raphael,
This is due to a misconfiguration in mcollective's server.cfg.
Upstart has difficulty tracking processes that fork themselves, as mcollective does when daemonized: once the process forks, Upstart loses track of it.
This is why the default server.cfg shipped in the package sets daemonize=0, so Upstart can track the process properly.
The behaviour shown in this bug means the user has set daemonize=1 in server.cfg, which the Upstart script does not support: the exec'd process forks and exits, Upstart considers the job dead, and the respawn stanza launches it again, producing multiple instances.
Marking as invalid.
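The corresponding setting in mcollective's server.cfg looks like this (an excerpt only; other keys omitted):

```
# /etc/mcollective/server.cfg (excerpt)
# Keep daemonize=0 so mcollectived stays in the foreground and the
# packaged Upstart job can supervise it directly.
daemonize = 0
```

Setting this back to 0 (the packaged default) resolves the duplicate-process behaviour described above.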