update-status hook failed with "su: Cannot fork user shell" - number of collect_rabbitmq_stats.sh processes increases over time

Bug #1716854 reported by Nobuto Murata
This bug affects 3 people
Affects: OpenStack RabbitMQ Server Charm
Status: Fix Released
Importance: Undecided
Assigned to: Hua Zhang
Milestone: 17.11

Bug Description

collect_rabbitmq_stats.sh taking some time, or even a long time, should be OK; that is just a monitoring issue. However, collect_rabbitmq_stats.sh somehow ignores the lock mechanism, and cron spawns multiple collect_rabbitmq_stats.sh processes over time. That then has a catastrophic impact on the running OpenStack system, such as RabbitMQ connection errors caused by running out of resources.

How to (forcibly) reproduce:

$ juju deploy rabbitmq-server
$ juju deploy nrpe
$ juju deploy nagios

$ juju add-relation nrpe:nrpe-external-master rabbitmq-server:nrpe-external-master
$ juju add-relation nagios:monitors nrpe:monitors

Then, replace the "/usr/sbin/rabbitmqctl -q list_queues" line in the /usr/local/bin/collect_rabbitmq_stats.sh file with "sleep 30m".

Edit /etc/cron.d/rabbitmq-stats to make the interval one minute instead of five:
*/1 * * * * root /usr/local/bin/collect_rabbitmq_stats.sh

17:20 0:00 \_ /usr/sbin/CRON -f
17:20 0:00 | \_ /bin/sh -c /usr/local/bin/collect_rabbitmq_stats.sh
17:20 0:00 | \_ /bin/bash /usr/local/bin/collect_rabbitmq_stats.sh
17:20 0:00 | \_ /bin/bash /usr/local/bin/collect_rabbitmq_stats.sh
17:20 0:00 | \_ sleep 30m
17:25 0:00 \_ /usr/sbin/CRON -f
17:25 0:00 | \_ /bin/sh -c /usr/local/bin/collect_rabbitmq_stats.sh
17:25 0:00 | \_ /bin/bash /usr/local/bin/collect_rabbitmq_stats.sh
17:25 0:00 | \_ /bin/bash /usr/local/bin/collect_rabbitmq_stats.sh
17:25 0:00 | \_ sleep 30m
17:30 0:00 \_ /usr/sbin/CRON -f
17:30 0:00 | \_ /bin/sh -c /usr/local/bin/collect_rabbitmq_stats.sh
17:30 0:00 | \_ /bin/bash /usr/local/bin/collect_rabbitmq_stats.sh
17:30 0:00 | \_ /bin/bash /usr/local/bin/collect_rabbitmq_stats.sh
17:30 0:00 | \_ sleep 30m
17:35 0:00 \_ /usr/sbin/CRON -f
17:35 0:00 | \_ /bin/sh -c /usr/local/bin/collect_rabbitmq_stats.sh
17:35 0:00 | \_ /bin/bash /usr/local/bin/collect_rabbitmq_stats.sh
17:35 0:00 | \_ /bin/bash /usr/local/bin/collect_rabbitmq_stats.sh
17:35 0:00 | \_ sleep 30m
17:37 0:00 \_ /usr/sbin/CRON -f
17:37 0:00 \_ /bin/sh -c /usr/local/bin/collect_rabbitmq_stats.sh
17:37 0:00 \_ /bin/bash /usr/local/bin/collect_rabbitmq_stats.sh
17:37 0:00 \_ lockfile-create -r2 --lock-name /var/lock/rabbitmq-gather-metrics.lock

Revision history for this message
Nobuto Murata (nobuto) wrote :

Here is the lsof output right now; not sure about the exact cause yet.

$ sudo lsof -n | grep -c rabbitmq
lsof: WARNING: can't stat() tracefs file system /sys/kernel/debug/tracing
      Output information may be incomplete.
427029

Revision history for this message
Nobuto Murata (nobuto) wrote :

RabbitMQ service itself is ok.

# systemctl status rabbitmq-server.service
● rabbitmq-server.service - RabbitMQ Messaging Server
   Loaded: loaded (/lib/systemd/system/rabbitmq-server.service; enabled; vendor preset: enabled)
   Active: active (running) since Tue 2017-09-12 13:27:06 UTC; 24h ago
 Main PID: 15353 (rabbitmq-server)
    Tasks: 130
   Memory: 3.4G
      CPU: 8h 30min 38.876s
   CGroup: /system.slice/rabbitmq-server.service
...

Looks like the Nagios/nrpe jobs are the culprits here.

# pgrep -af /usr/local/bin/collect_rabbitmq_stats.sh | wc -l
489

Only one instance of it should be running.

Revision history for this message
Nobuto Murata (nobuto) wrote :

Hmm, /usr/local/bin/collect_rabbitmq_stats.sh takes time. That should be OK, but what I don't understand is why 163 collect_rabbitmq_stats.sh processes kicked off by /usr/sbin/CRON are running. Even if one job takes a long time, the next job should exit because it is unable to obtain the lock...

# pgrep -af '/bin/sh -c /usr/local/bin/collect_rabbitmq_stats.sh' | wc -l
163

[kicking by hand]
# bash -x /usr/local/bin/collect_rabbitmq_stats.sh
+ LOCK=/var/lock/rabbitmq-gather-metrics.lock
+ lockfile-create -r2 --lock-name /var/lock/rabbitmq-gather-metrics.lock
+ '[' 0 -ne 0 ']'
+ trap 'rm -f /var/lock/rabbitmq-gather-metrics.lock > /dev/null 2>&1' exit
+ export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin:/snap/bin:/sbin/
+ PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin:/snap/bin:/sbin/
+ '[' -f /var/lib/rabbitmq/pids ']'
+ '[' -f /var/run/rabbitmq/pid ']'
+ '[' -f /<email address hidden> ']'
++ cat /<email address hidden>
+ RABBIT_PID=15360
+ DATA_DIR=/var/lib/rabbitmq/data
++ hostname -s
+ DATA_FILE=/var/lib/rabbitmq/data/juju-a32313-0-lxd-6_queue_stats.dat
+ LOG_DIR=/var/lib/rabbitmq/logs
++ hostname -s
+ RABBIT_STATS_DATA_FILE=/var/lib/rabbitmq/data/juju-a32313-0-lxd-6_general_stats.dat
++ date +%s
+ NOW=1505310965
++ hostname -s
+ HOSTNAME=juju-a32313-0-lxd-6
++ du -sm /var/lib/rabbitmq/mnesia
++ cut -f1
+ MNESIA_DB_SIZE=6
++ ps -p 15360 -o rss=
+ RABBIT_RSS=' 1712'
+ '[' '!' -d /var/lib/rabbitmq/data ']'
+ '[' '!' -d /var/lib/rabbitmq/logs ']'
+ echo '#Vhost Name Messages_ready Messages_unacknowledged Messages Consumers Memory Time'
+ /usr/sbin/rabbitmqctl -q list_vhosts
+ read VHOST
+ /usr/sbin/rabbitmqctl -q list_queues -p / name messages_ready messages_unacknowledged messages consumers memory
++ date +%s
+ awk '{print "/ " $0 " 1505310966 "}'
+ read VHOST
+ /usr/sbin/rabbitmqctl -q list_queues -p nagios-rabbitmq-server-0 name messages_ready messages_unacknowledged messages consumers memory
++ date +%s
+ awk '{print "nagios-rabbitmq-server-0 " $0 " 1505310966 "}'
+ read VHOST
+ /usr/sbin/rabbitmqctl -q list_queues -p nagios-rabbitmq-server-1 name messages_ready messages_unacknowledged messages consumers memory
++ date +%s
+ awk '{print "nagios-rabbitmq-server-1 " $0 " 1505310967 "}'

<<< stuck here

[kick the second one by hand]
# bash -x /usr/local/bin/collect_rabbitmq_stats.sh
+ LOCK=/var/lock/rabbitmq-gather-metrics.lock
+ lockfile-create -r2 --lock-name /var/lock/rabbitmq-gather-metrics.lock
+ '[' 4 -ne 0 ']'
+ echo 'Failed to create lockfile: /var/lock/rabbitmq-gather-metrics.lock.'
Failed to create lockfile: /var/lock/rabbitmq-gather-metrics.lock.
+ exit 1
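
For reference, the locking logic implied by the bash -x traces above looks roughly like this (a reconstruction from the trace output, not the charm's exact source):

    LOCK=/var/lock/rabbitmq-gather-metrics.lock
    lockfile-create -r2 --lock-name "$LOCK"
    if [ $? -ne 0 ]; then
        echo "Failed to create lockfile: $LOCK."
        exit 1
    fi
    # remove the lock when the script exits
    trap 'rm -f "$LOCK" > /dev/null 2>&1' exit
    # ... rabbitmqctl list_vhosts / list_queues collection follows ...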

Revision history for this message
Nobuto Murata (nobuto) wrote :

Hmm, somehow the lock mechanism didn't work.

root 235697 0.0 0.0 47280 2864 ? S Sep12 0:00 \_ /usr/sbin/CRON -f
root 235698 0.0 0.0 4508 800 ? Ss Sep12 0:00 | \_ /bin/sh -c /usr/local/bin/collect_rabbitmq_stats.sh
root 235699 0.0 0.0 11244 3100 ? S Sep12 0:00 | \_ /bin/bash /usr/local/bin/collect_rabbitmq_stats.sh
root 235711 0.0 0.0 11244 2020 ? S Sep12 0:00 | \_ /bin/bash /usr/local/bin/collect_rabbitmq_stats.sh
root 236762 0.0 0.0 4508 1640 ? S Sep12 0:00 | \_ /bin/sh /usr/sbin/rabbitmqctl -q list_queues -p openstack name messages_ready messages_unacknowledged messages consumers memory
root 236781 0.0 0.0 51004 3428 ? S Sep12 0:00 | | \_ su rabbitmq -s /bin/sh -c /usr/lib/rabbitmq/bin/rabbitmqctl "-q" "list_queues" "-p" "openstack" "name" "messages_ready" "messages_unacknowledged" "messages" "consumers" "memory"
rabbitmq 236782 0.0 0.0 4508 716 ? Ss Sep12 0:00 | | \_ sh -c /usr/lib/rabbitmq/bin/rabbitmqctl "-q" "list_queues" "-p" "openstack" "name" "messages_ready" "messages_unacknowledged" "messages" "consumers" "memory"
rabbitmq 236783 0.0 0.0 1084196 242700 ? Sl Sep12 0:02 | | \_ /usr/lib/erlang/erts-7.3/bin/beam.smp -- -root /usr/lib/erlang -progname erl -- -home /var/lib/rabbitmq -- -pa /usr/lib/rabbitmq/lib/rabbitmq_server-3.5.7/sbin/../ebin -noshell -noinput -hidden -boot start_clean -sasl errlog_type error -mnesia dir "/var/lib/rabbitmq/mnesia/rabbit@juju-a32313-0-lxd-6" -s rabbit_control_main -nodename rabbit@juju-a32313-0-lxd-6 -extra -q list_queues -p openstack name messages_ready messages_unacknowledged messages consumers memory
rabbitmq 236972 0.0 0.0 7504 1004 ? Ss Sep12 0:00 | | \_ inet_gethost 4
rabbitmq 236973 0.0 0.0 9624 1648 ? S Sep12 0:00 | | \_ inet_gethost 4
root 236763 0.0 0.0 23476 1444 ? S Sep12 0:00 | \_ awk {print "openstack " $0 " 1505224803 "}
root 246281 0.0 0.0 47280 2864 ? S Sep12 0:00 \_ /usr/sbin/CRON -f
root 246282 0.0 0.0 4508 712 ? Ss Sep12 0:00 | \_ /bin/sh -c /usr/local/bin/collect_rabbitmq_stats.sh
root 246283 0.0 0.0 11244 2964 ? S Sep12 0:00 | \_ /bin/bash /usr/local/bin/collect_rabbitmq_stats.sh
root 246295 0.0 0.0 11244 2024 ? S Sep12 0:00 | \_ /bin/bash /usr/local/bin/collect_rabbitmq_stats.sh
root 247345 0.0 0.0 4508 1604 ? S Sep12 0:00 | \_ /bin/sh /usr/sbin/rabbitmqctl -q list_queues -p openstack name messages_ready messages_unacknowledged messages consumers memory
root 247364 0.0 0.0 51004 3488 ? S Sep12 0:00 | | \_ su rabbitmq -s /bin/sh -c /usr/lib/rabbitmq/bin/rabbitmqctl "-q" "list_queues" "-p" "openstack" "name" "messages_ready" "messages_unacknowledged" "messages" "consumers" "memory"
rabbitmq 247365 0.0 0.0 4508 ...


Nobuto Murata (nobuto)
summary: - update-status hook failed with "su: Cannot fork user shell"
+ update-status hook failed with "su: Cannot fork user shell" - number of
+ collect_rabbitmq_stats.sh processes increases over time
description: updated
Revision history for this message
Nobuto Murata (nobuto) wrote :

I suspect the lock mechanism, and maybe we can reproduce the issue by replacing "/usr/sbin/rabbitmqctl -q list_queues" in the /usr/local/bin/collect_rabbitmq_stats.sh file with something like "sleep 30m".

tags: added: onsite
Revision history for this message
Ryan Beisner (1chb1n) wrote :

Can you please add the juju status and ideally the sanitized bundle which was used to deploy this? Also, the juju unit logs from the affected units. We'll need to know things like unit series, charm rev, charm config. Thank you.

Changed in charm-rabbitmq-server:
status: New → Incomplete
Revision history for this message
Nobuto Murata (nobuto) wrote :

My gut feeling is that we might be able to reproduce it with the following steps.

$ juju deploy rabbitmq-server
$ juju deploy nrpe
$ juju deploy nagios

$ juju add-relation nrpe-lxd:nrpe-external-master rabbitmq-server:nrpe-external-master
$ juju add-relation nagios:monitors nrpe-lxd:monitors

Then, replace the "/usr/sbin/rabbitmqctl -q list_queues" line in the /usr/local/bin/collect_rabbitmq_stats.sh file with "sleep 30m".

Since I have already redeployed the environment, logs are not available, but I am putting some info here as it might be helpful.

  rabbitmq-server:
    annotations:
      gui-x: '500'
      gui-y: '250'
    charm: cs:rabbitmq-server
    bindings:
      '': *oam-space
      amqp: *internal-space
      cluster: *internal-space
    options:
      min-cluster-size: 3
      cluster-partition-handling: pause_minority
    num_units: 3
    to:
    - lxd:neutron-gateway/0
    - lxd:neutron-gateway/1
    - lxd:neutron-gateway/2

  nrpe-lxd:
    annotations:
      gui-x: '250'
      gui-y: '750'
    charm: cs:nrpe
    bindings:
      '': *oam-space
      monitors: *oam-space
    options:
      nagios_host_context: boostack-XYZ
      nagios_hostname_type: unit
      sub_postfix: ''

  nagios:
    annotations:
      gui-x: '0'
      gui-y: '750'
    charm: cs:nagios
    num_units: 1
    constraints: tags=nagios
    bindings:
      '': *oam-space
    options:
      enable_livestatus: true
      load_monitor: '50!40!30!100!60!40'
      nagios_host_context: bootstack-XYZ

- - nagios:monitors
  - nrpe-lxd:monitors

- - nrpe-lxd:nrpe-external-master
  - rabbitmq-server:nrpe-external-master

Revision history for this message
Nobuto Murata (nobuto) wrote :

Model Controller Cloud/Region Version SLA
openstack maas-controller maas 2.2.3 unsupported

App Version Status Scale Charm Store Rev OS Notes
nrpe-lxd active 3 nrpe jujucharms 30 ubuntu
rabbitmq-server 3.5.7 active 3 rabbitmq-server jujucharms 65 ubuntu

Unit Workload Agent Machine Public address Ports Message
rabbitmq-server/0* active idle 0/lxd/6 10.10.21.111 5672/tcp Unit is ready and clustered
  nrpe-lxd/7 active idle 10.10.21.111 ready
rabbitmq-server/1 active idle 1/lxd/6 10.10.21.50 5672/tcp Unit is ready and clustered
  nrpe-lxd/23 active idle 10.10.21.50 ready
rabbitmq-server/2 active idle 2/lxd/6 10.10.21.42 5672/tcp Unit is ready and clustered
  nrpe-lxd/8 active idle 10.10.21.42 ready

Revision history for this message
Nobuto Murata (nobuto) wrote :

Looks like "sleep 30m" works as a reproducer.

$ juju deploy rabbitmq-server
$ juju deploy nrpe
$ juju deploy nagios

$ juju add-relation nrpe:nrpe-external-master rabbitmq-server:nrpe-external-master
$ juju add-relation nagios:monitors nrpe:monitors

Then, replace the "/usr/sbin/rabbitmqctl -q list_queues" line in the /usr/local/bin/collect_rabbitmq_stats.sh file with "sleep 30m".

Edit /etc/cron.d/rabbitmq-stats to make the interval one minute instead of five:
*/1 * * * * root /usr/local/bin/collect_rabbitmq_stats.sh

So now I can see two "sleep 30m" processes, which should not happen with the lock file in place.

root 319 0.0 0.0 26068 2480 ? Ss 17:00 0:00 /usr/sbin/cron -f
root 28737 0.0 0.0 47280 2856 ? S 17:20 0:00 \_ /usr/sbin/CRON -f
root 28738 0.0 0.0 4508 744 ? Ss 17:20 0:00 | \_ /bin/sh -c /usr/local/bin/collect_rabbitmq_stats.sh
root 28739 0.0 0.0 11244 3080 ? S 17:20 0:00 | \_ /bin/bash /usr/local/bin/collect_rabbitmq_stats.sh
root 28751 0.0 0.0 11244 2004 ? S 17:20 0:00 | \_ /bin/bash /usr/local/bin/collect_rabbitmq_stats.sh
root 28849 0.0 0.0 6012 676 ? S 17:20 0:00 | \_ sleep 30m
root 29743 0.0 0.0 47280 2856 ? S 17:25 0:00 \_ /usr/sbin/CRON -f
root 29744 0.0 0.0 4508 760 ? Ss 17:25 0:00 \_ /bin/sh -c /usr/local/bin/collect_rabbitmq_stats.sh
root 29745 0.0 0.0 11244 3072 ? S 17:25 0:00 \_ /bin/bash /usr/local/bin/collect_rabbitmq_stats.sh
root 29757 0.0 0.0 11244 2008 ? S 17:25 0:00 \_ /bin/bash /usr/local/bin/collect_rabbitmq_stats.sh
root 29855 0.0 0.0 6012 672 ? S 17:25 0:00 \_ sleep 30m

Changed in charm-rabbitmq-server:
status: Incomplete → New
Revision history for this message
Nobuto Murata (nobuto) wrote :

Somehow the lock gets ignored at 5-minute intervals even though the cron interval is one minute (see the note after the process tree below).

17:20 0:00 \_ /usr/sbin/CRON -f
17:20 0:00 | \_ /bin/sh -c /usr/local/bin/collect_rabbitmq_stats.sh
17:20 0:00 | \_ /bin/bash /usr/local/bin/collect_rabbitmq_stats.sh
17:20 0:00 | \_ /bin/bash /usr/local/bin/collect_rabbitmq_stats.sh
17:20 0:00 | \_ sleep 30m
17:25 0:00 \_ /usr/sbin/CRON -f
17:25 0:00 | \_ /bin/sh -c /usr/local/bin/collect_rabbitmq_stats.sh
17:25 0:00 | \_ /bin/bash /usr/local/bin/collect_rabbitmq_stats.sh
17:25 0:00 | \_ /bin/bash /usr/local/bin/collect_rabbitmq_stats.sh
17:25 0:00 | \_ sleep 30m
17:30 0:00 \_ /usr/sbin/CRON -f
17:30 0:00 | \_ /bin/sh -c /usr/local/bin/collect_rabbitmq_stats.sh
17:30 0:00 | \_ /bin/bash /usr/local/bin/collect_rabbitmq_stats.sh
17:30 0:00 | \_ /bin/bash /usr/local/bin/collect_rabbitmq_stats.sh
17:30 0:00 | \_ sleep 30m
17:35 0:00 \_ /usr/sbin/CRON -f
17:35 0:00 | \_ /bin/sh -c /usr/local/bin/collect_rabbitmq_stats.sh
17:35 0:00 | \_ /bin/bash /usr/local/bin/collect_rabbitmq_stats.sh
17:35 0:00 | \_ /bin/bash /usr/local/bin/collect_rabbitmq_stats.sh
17:35 0:00 | \_ sleep 30m
17:37 0:00 \_ /usr/sbin/CRON -f
17:37 0:00 \_ /bin/sh -c /usr/local/bin/collect_rabbitmq_stats.sh
17:37 0:00 \_ /bin/bash /usr/local/bin/collect_rabbitmq_stats.sh
17:37 0:00 \_ lockfile-create -r2 --lock-name /var/lock/rabbitmq-gather-metrics.lock
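
One possible explanation for the 5-minute pattern (an assumption on my side, not verified here): lockfile-create from lockfile-progs treats a lock file that has not been refreshed for a few minutes as stale and removes it, so a holder that runs longer than that is expected to keep the lock fresh with lockfile-touch. A hypothetical hardening of the script along those lines (a sketch only, not the fix that was merged):

    LOCK=/var/lock/rabbitmq-gather-metrics.lock
    lockfile-create -r2 --lock-name "$LOCK" || exit 1
    # keep refreshing the lock in the background so it never looks stale
    lockfile-touch --lock-name "$LOCK" &
    TOUCH_PID=$!
    trap 'kill "$TOUCH_PID" 2>/dev/null; lockfile-remove --lock-name "$LOCK"' exit
    # ... long-running collection here ...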

description: updated
Nobuto Murata (nobuto)
description: updated
Xav Paice (xavpaice)
tags: added: canonical-bootstack
tags: added: cpec
Revision history for this message
Nobuto Murata (nobuto) wrote :

By default, the "rabbitmqctl list_queues" call used in the cron job has an infinite timeout. Setting a timeout of less than 5 minutes should suffice as a workaround (see the example after the usage excerpt below), because a "rabbitmqctl list_queues" run that takes 5 minutes already indicates something is wrong.

====
rabbitmqctl [-n <node>] [-t <timeout>] [-q] <command> [<command options>]

Options:
    -n node
    -q
    -t timeout

...

Operation timeout in seconds. Only applicable to "list" commands. Default is
"infinity".
====
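
For example (a sketch of this workaround, not the merged fix; the 240-second value is illustrative, and $VHOST is the variable name implied by the trace above), the list_queues calls in collect_rabbitmq_stats.sh could be changed to something like:

    /usr/sbin/rabbitmqctl -q -t 240 list_queues -p "$VHOST" name messages_ready messages_unacknowledged messages consumers memory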

Chris Gregan (cgregan)
tags: added: cpe-onsite
removed: onsite
Revision history for this message
Edward Hope-Morley (hopem) wrote :

I've had a go at trying to repro this but have been unsuccessful, i.e. I clearly see that if subsequent runs start while the first is still running, they bomb out as a result of the lockfile already being taken. Perhaps it would be sensible to allow for an optionally configurable timeout for the entire job, e.g.

*/5 * * * * root timeout -2 300 /usr/local/bin/collect_rabbitmq_stats.sh

and while we're at it we could also fix logging so that all output goes to syslog by adding:

  ... | logger -p local0.notice

In terms of a charm change, we could either add a new config option that takes a timeout value (no value == no timeout) or just automatically set the timeout to equal the cron interval.
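
Something along these lines, for instance (a sketch assuming GNU coreutils timeout; the SIGINT signal and the 300-second value are illustrative):

    */5 * * * * root timeout -s SIGINT 300 /usr/local/bin/collect_rabbitmq_stats.sh 2>&1 | logger -p local0.notice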

Revision history for this message
Hua Zhang (zhhuabj) wrote :

Like Ed, I can't reproduce the problem by hand, but I have reproduced it by using cron to run it continuously for a longer time.

And just now I also saw the same problem even when I used timeout and flock to run the test:

*/1 * * * * root timeout -s SIGINT 300 flock -xn /var/lock/collect_rabbitmq_stats.flock -c '/usr/local/bin/collect_rabbitmq_stats.sh >> /tmp/cron.log'

Revision history for this message
Hua Zhang (zhhuabj) wrote :
Felipe Reyes (freyes)
tags: added: backport-potential sts
Changed in charm-rabbitmq-server:
assignee: nobody → Hua Zhang (zhhuabj)
Changed in charm-rabbitmq-server:
milestone: none → 17.11
Changed in charm-rabbitmq-server:
status: New → In Progress
Ante Karamatić (ivoks)
tags: removed: cpec
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to charm-rabbitmq-server (master)

Reviewed: https://review.openstack.org/506097
Committed: https://git.openstack.org/cgit/openstack/charm-rabbitmq-server/commit/?id=ff4da882a21233c3c67d7cf74f3f028c78328e14
Submitter: Jenkins
Branch: master

commit ff4da882a21233c3c67d7cf74f3f028c78328e14
Author: Zhang Hua <email address hidden>
Date: Tue Sep 26 14:45:24 2017 +0800

    Support timeout for stats capture cron job

    In this charm we run a cron job to check rabbitmq status and it is
    possible that the commands run could fail or hang if e.g. rabbit
    is not healthy. Currently the cron will never timeout and could
    hang forever so we add a new timeout config option 'cron-timeout'
    which, when set, will result in the a SIGINT being sent to the
    application and if that fails to exit within 10s a SIGKILL is sent.
    We also fix logging so that all output goes to syslog local0.notice.

    Change-Id: I0bb8780c5cc64a24384648f00c8068d5d666d28c
    Closes-Bug: 1716854
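
With this fix in place, the timeout should be settable through the new charm config option, for example (a usage sketch; I am assuming the value is interpreted as seconds, and 300 is only an example):

    juju config rabbitmq-server cron-timeout=300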

Changed in charm-rabbitmq-server:
status: In Progress → Fix Committed
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to charm-rabbitmq-server (stable/17.08)

Fix proposed to branch: stable/17.08
Review: https://review.openstack.org/511512

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to charm-rabbitmq-server (stable/17.08)

Reviewed: https://review.openstack.org/511512
Committed: https://git.openstack.org/cgit/openstack/charm-rabbitmq-server/commit/?id=c4619e04965f1cee12388c9dde31a531369522b7
Submitter: Zuul
Branch: stable/17.08

commit c4619e04965f1cee12388c9dde31a531369522b7
Author: Zhang Hua <email address hidden>
Date: Tue Sep 26 14:45:24 2017 +0800

    Support timeout for stats capture cron job

    In this charm we run a cron job to check rabbitmq status and it is
    possible that the commands run could fail or hang if e.g. rabbit
    is not healthy. Currently the cron will never timeout and could
    hang forever so we add a new timeout config option 'cron-timeout'
    which, when set, will result in the a SIGINT being sent to the
    application and if that fails to exit within 10s a SIGKILL is sent.
    We also fix logging so that all output goes to syslog local0.notice.

    Change-Id: I0bb8780c5cc64a24384648f00c8068d5d666d28c
    Closes-Bug: 1716854
    (cherry picked from commit ff4da882a21233c3c67d7cf74f3f028c78328e14)

James Page (james-page)
Changed in charm-rabbitmq-server:
status: Fix Committed → Fix Released