keepalived fails to restart cleanly due to the wrong systemd settings

Bug #1644530 reported by Yutani
This bug affects 3 people
Affects                Status        Importance  Assigned to          Milestone
keepalived (Debian)    Fix Released  Unknown
keepalived (Ubuntu)    Fix Released  Undecided   Unassigned
  Xenial               Fix Released  Medium      Christian Ehrhardt
systemd (Ubuntu)       Opinion       Undecided   Unassigned
  Xenial               Opinion       Undecided   Unassigned

Bug Description

[Impact]

 * Restarts of keepalived can leave stale processes with the old configuration around.

 * systemd's detection of the MainPID is suboptimal; combined with not waiting for signals to be handled, this can make a second restart kill the still-remaining processes of the first start.

 * Upstream's service file has a PIDFile statement, which has proven to avoid the issue in systemd's MainPID guessing code. A sketch of the change is shown right below.
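
 A minimal sketch of that change, based on the upstream unit quoted further down in this description (the actual patch attachment may differ in detail); the [Service] section of /lib/systemd/system/keepalived.service gains one line:

   [Service]
   Type=forking
   KillMode=process
   # Tell systemd the authoritative main PID instead of letting it guess:
   PIDFile=/var/run/keepalived.pid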

[Test Case]

 * Set up keepalived; the more complex the config, the "bigger" the race window. Below in the description is a trivial sample config that works well.

 * As a test, run the following loop, restarting the service head-to-head while staying under the max-restart limit:
$ for j in $(seq 1 20); do sleep 11s; time for i in $(seq 1 5); do sudo systemctl restart keepalived; sudo systemctl status keepalived | egrep 'Main.*exited'; done; done

 Expectation: no output other than the timing information.
 Without the fix: sometimes the MainPID no longer exists; in those cases the child processes are the "old" ones from the last execution, still running with the old config. (A manual cross-check is sketched below.)
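
 To cross-check a run by hand, one can compare systemd's notion of the main PID with the processes actually running (illustrative commands, not part of the original test case):

   $ systemctl show -p MainPID keepalived
   $ systemctl status -n0 keepalived | grep -A3 CGroup
   $ ps -o pid,ppid,args -C keepalived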

[Regression Potential]

 * Low because
   * A PIDFile statement is recommended by systemd for type=forking services anyway.
   * Upstream keepalived has this statement in their service file
   * Given the kind of change, it should have no functional impact on the service other than on systemd's PID detection.

 * Yet regression potential is never zero. There might be unlikely cases that were considered working before only because a new config was not properly picked up. After the fix these will behave correctly, and a bad config might then surface as an apparent regression (a false positive).

[Other Info]

 * Usually a fix has to be in at least the latest development release before SRUing it. But as I outline below, in releases later than Xenial systemd seems to have improved, making this change unnecessary there. We haven't identified the exact bits responsible (there is a systemd bug task here), and they might well be very complex. I think it is correct to fix Xenial for now with the simple change to the service file.

 * To eventually converge, I created a Debian bug task asking for the inclusion of the PIDFile statement so it can slowly trickle back down into newer Ubuntu releases. Debian users also run backports more often, where the issue might occur on older systemd versions (just as it does for us on Xenial).

---

Because "PIDFile=" directive is missing in the systemd unit file, keepalived sometimes fails to kill all old processes. The old processes remain with old settings and cause unexpected behaviors. The detail of this bug is described in this ticket in upstream: https://github.com/acassen/keepalived/issues/443.

An official systemd unit file has been available upstream since version 1.2.24, added by this commit:

https://github.com/acassen/keepalived/commit/635ab69afb44cd8573663e62f292c6bb84b44f15

This includes "PIDFile" directive correctly:

PIDFile=/var/run/keepalived.pid

We should go the same way.

I am using Ubuntu 16.04.1, kernel 4.4.0-45-generic.

Package: keepalived
Version: 1.2.19-1

=======================================================================

How to reproduce:

I used the two instances of Ubuntu 16.04.2 on DigitalOcean:

Configurations
--------------

MASTER server's /etc/keepalived/keepalived.conf:

  vrrp_script chk_nothing {
     script "/bin/true"
     interval 2
  }

  vrrp_instance G1 {
    interface eth1
    state BACKUP
    priority 100

    virtual_router_id 123
    unicast_src_ip <primary IP>
    unicast_peer {
      <secondary IP>
    }
    track_script {
      chk_nothing
    }
  }

BACKUP server's /etc/keepalived/keepalived.conf:

  vrrp_script chk_nothing {
     script "/bin/true"
     interval 2
  }

  vrrp_instance G1 {
    interface eth1
    state MASTER
    priority 200

    virtual_router_id 123
    unicast_src_ip <secondary IP>
    unicast_peer {
      <primary IP>
    }
    track_script {
      chk_nothing
    }
  }

Loop based probing for the Error to exist:
------------------------------------------
After the setup above start keepalived on both servers:
    $ sudo systemctl start keepalived.service
Then run the following loop
    $ for j in $(seq 1 20); do sleep 11s; time for i in $(seq 1 5); do sudo systemctl restart keepalived; sudo systemctl status keepalived | egrep 'Main.*exited'; done; done

Expected: no error, only time reports
Error case: the status output shows the Main PID as exited; details below

Step by Step Procedures
-----------------------

1) Start keepalived on both servers

  $ sudo systemctl start keepalived.service

2) Restart keepalived on either one

  $ sudo systemctl restart keepalived.service

3) Check status and PID

  $ systemctl status -n0 keepalived.service

Result
------

0) Before restart

Process 3402 (the ExecStart parent) has exited after forking; the Main PID is 3403 and the child processes are 3405 and 3406. So far so good.

  root@ubuntu-2gb-sgp1-01:~# systemctl status -n0 keepalived
  ● keepalived.service - Keepalive Daemon (LVS and VRRP)
     Loaded: loaded (/lib/systemd/system/keepalived.service; enabled; vendor preset: enabled)
     Active: active (running) since Sat 2017-03-04 01:37:12 UTC; 14min ago
    Process: 3402 ExecStart=/usr/sbin/keepalived $DAEMON_ARGS (code=exited, status=0/SUCCESS)
   Main PID: 3403 (keepalived)
      Tasks: 3
     Memory: 1.7M
        CPU: 1.900s
     CGroup: /system.slice/keepalived.service
             ├─3403 /usr/sbin/keepalived
             ├─3405 /usr/sbin/keepalived
             └─3406 /usr/sbin/keepalived

1) First restart

The Main PID is still reported as 3403 (the main process of the previous run), which has actually exited. Something is wrong. Yet the previous processes are all gone, so we are unlikely to see weird behavior here.

  root@ubuntu-2gb-sgp1-01:~# systemctl restart keepalived
  root@ubuntu-2gb-sgp1-01:~# systemctl status -n0 keepalived
  ● keepalived.service - Keepalive Daemon (LVS and VRRP)
     Loaded: loaded (/lib/systemd/system/keepalived.service; enabled; vendor preset: enabled)
     Active: active (running) since Sat 2017-03-04 01:51:45 UTC; 1s ago
    Process: 4782 ExecStart=/usr/sbin/keepalived $DAEMON_ARGS (code=exited, status=0/SUCCESS)
   Main PID: 3403 (code=exited, status=0/SUCCESS)
      Tasks: 3
     Memory: 1.7M
        CPU: 11ms
     CGroup: /system.slice/keepalived.service
             ├─4783 /usr/sbin/keepalived
             ├─4784 /usr/sbin/keepalived
             └─4785 /usr/sbin/keepalived

2) Second restart

Now the Main PID is 4783, and the processes in the cgroup are still 4783-4785. This is problematic: these are the old processes from the previous start, which should have exited before the new ones arose. Therefore keepalived keeps running with the old settings while users believe it uses the new ones.

  root@ubuntu-2gb-sgp1-01:~# systemctl restart keepalived
  root@ubuntu-2gb-sgp1-01:~# systemctl status -n0 keepalived
  ● keepalived.service - Keepalive Daemon (LVS and VRRP)
     Loaded: loaded (/lib/systemd/system/keepalived.service; enabled; vendor preset: enabled)
     Active: active (running) since Sat 2017-03-04 01:51:49 UTC; 1s ago
    Process: 4796 ExecStart=/usr/sbin/keepalived $DAEMON_ARGS (code=exited, status=0/SUCCESS)
   Main PID: 4783 (keepalived)
      Tasks: 3
     Memory: 1.7M
        CPU: 6ms
     CGroup: /system.slice/keepalived.service
             ├─4783 /usr/sbin/keepalived
             ├─4784 /usr/sbin/keepalived
             └─4785 /usr/sbin/keepalived

Yutani (hiroaki-yutani)
description: updated
Revision history for this message
Yutani (hiroaki-yutani) wrote :

This patch should fix the issue.

Revision history for this message
Launchpad Janitor (janitor) wrote :

Status changed to 'Confirmed' because the bug affects multiple users.

Changed in keepalived (Ubuntu):
status: New → Confirmed
Revision history for this message
Yutani (hiroaki-yutani) wrote :

Sorry, I repost my patch.

Revision history for this message
Ubuntu Foundations Team Bug Bot (crichton) wrote :

The attachment "add PIDFile= directive in systemd unit file" seems to be a patch. If it isn't, please remove the "patch" flag from the attachment, remove the "patch" tag, and if you are a member of the ~ubuntu-reviewers, unsubscribe the team.

[This is an automated message performed by a Launchpad user owned by ~brian-murray, for any issues please contact him.]

tags: added: patch
Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

Thank you for your report and your help to make Ubuntu better.
1.2.24 is already in the coming Ubuntu release, so that is fixed there.

To consider an SRU [1] there should be some clear steps to reproduce this issue.
Did you find, during your debugging, a reliable way to trigger this issue so it can be tested and verified?

1: https://wiki.ubuntu.com/StableReleaseUpdates

Changed in keepalived (Ubuntu):
status: Confirmed → Fix Released
Changed in keepalived (Ubuntu Xenial):
status: New → Confirmed
importance: Undecided → Medium
Revision history for this message
Yutani (hiroaki-yutani) wrote :

Thank you for your comment!

> 1.2.24 is already in the coming Ubuntu release, so that is fixed there.

I'm afraid not. The issue still exists, since the package continues to use its own systemd unit file. I've submitted the same patch to the packaging upstream:

https://github.com/formorer/pkg-keepalived/pull/3

> Did you find on your debugging a reliable way to trigger this issue so it can be tested and verified?

Yes, in our development environment the issue was reproduced many times and was fixed by this patch.

Yutani (hiroaki-yutani)
description: updated
Revision history for this message
Yutani (hiroaki-yutani) wrote :

Hi,

As I have pointed out above, this problem is NOT solved! Even the newest version of the keepalived package (1.3.2) still lacks the "PIDFile=" directive.

$ wget http://launchpadlibrarian.net/296728051/keepalived_1.3.2-1_amd64.deb
$ dpkg-deb --extract keepalived_1.3.2-1_amd64.deb keepalived
$ cat keepalived/lib/systemd/system/keepalived.service

[Unit]
Description=Keepalive Daemon (LVS and VRRP)
After=network-online.target
Wants=network-online.target
# Only start if there is a configuration file
ConditionFileNotEmpty=/etc/keepalived/keepalived.conf

[Service]
Type=forking
KillMode=process
# Read configuration variable file if it is present
EnvironmentFile=-/etc/default/keepalived
ExecStart=/usr/sbin/keepalived $DAEMON_ARGS
ExecReload=/bin/kill -HUP $MAINPID

[Install]
WantedBy=multi-user.target

Revision history for this message
Yutani (hiroaki-yutani) wrote :

I've provided a minimal reproducible example in the bug description (sorry for the delay). Can you confirm?

Yutani (hiroaki-yutani)
description: updated
Yutani (hiroaki-yutani)
Changed in keepalived (Ubuntu):
status: Fix Released → Confirmed
Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

Hi Yutani,
thank you so much.
No need to apologize for adding the repro late; I, on the other hand, missed subscribing back then and thereby didn't see your reply on the upstream PR in time.

Anyway - let's see what we can do to get your great work and analysis to help the whole Ubuntu community. I'll start reproducing now and let you know about the next steps ...

Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

On Zesty (still without the pidfile in the service) it seems to work fine (Version 1:1.3.2-1).

Test:
2x zesty KVM guests
sudo apt-get install keepalived
Set up as above, but with ens3 (virtual ethernet device).

I think the BACKUP/MASTER names are interchanged in the example configs above, but it works fine.

0) pre restart
  Process: 2173 ExecStart=/usr/sbin/keepalived $DAEMON_ARGS
 Main PID: 2183 (keepalived)
           ├─2183 /usr/sbin/keepalived
           ├─2184 /usr/sbin/keepalived
           └─2185 /usr/sbin/keepalived

1) First restart
  Process: 2393 ExecStart=/usr/sbin/keepalived $DAEMON_ARGS
 Main PID: 2404 (keepalived)
           ├─2404 /usr/sbin/keepalived
           ├─2408 /usr/sbin/keepalived
           └─2409 /usr/sbin/keepalived

I see the backup server properly detect the restart, transition to take over, and go back to backup.
I was restarting on the master above; now trying on the backup.

0) pre restart
  Process: 2137 ExecStart=/usr/sbin/keepalived $DAEMON_ARGS
 Main PID: 2147 (keepalived)
           ├─2147 /usr/sbin/keepalived
           ├─2148 /usr/sbin/keepalived
           └─2149 /usr/sbin/keepalived

1) First restart
  Process: 2765 ExecStart=/usr/sbin/keepalived $DAEMON_ARGS
 Main PID: 2776 (keepalived)
           ├─2776 /usr/sbin/keepalived
           ├─2777 /usr/sbin/keepalived
           └─2778 /usr/sbin/keepalived

Note: since they did not add the PIDFile, we might want/need to understand what was done to fix it here (to backport that as an SRU)
Next: I'm trying to reproduce on Xenial as reported

Changed in keepalived (Ubuntu):
status: Confirmed → Fix Released
Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

Note: I compared the upstream and the Debian/Ubuntu service files.
Differences are:
1. Service Name (don't care)
2. After=syslog.target, which might be reasonable to pick
3. PIDFile=/var/run/keepalived.pid (which is not bad, and we thought it would be the fix here, but it seems not to be required, as my Zesty test worked)
4. EnvironmentFile: this uses different paths and option variables; transitioning it would be an unwanted conffile change

Ubuntu carries no delta to Debian here, so we should address these there.
I might file a bug for 2 & 3 there later on, but first I'll continue with my analysis.

Next: Repro on Xenial
TODO: Understand why Zesty no longer leaves stale PIDs (despite not having the PIDFile statement)

Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

Same setup as above in two Xenial VMs
Again I see the backup taking over and reverting to backup once the restart is complete.

Note: the restart is much faster now, as if it were just sending a signal, while on Zesty it felt like it was waiting for completion.

0) pre restart
  Process: 2416 ExecStart=/usr/sbin/keepalived $DAEMON_ARGS
 Main PID: 2418 (keepalived)
           ├─2418 /usr/sbin/keepalived
           ├─2419 /usr/sbin/keepalived
           └─2420 /usr/sbin/keepalived

1) First restart
  Process: 2559 ExecStart=/usr/sbin/keepalived $DAEMON_ARGS
 Main PID: 2560 (keepalived)
           ├─2560 /usr/sbin/keepalived
           ├─2561 /usr/sbin/keepalived
           └─2562 /usr/sbin/keepalived

WTF ??

Your repro seemed so complete that I thought I might have missed something.
So I experimented a bit, and I think I found something that relates to my note above (about newer versions waiting).

I was able to reproduce your case by restarting and then restarting again very soon after (i.e. faster than the wait on Zesty).
Now I got your case:
 Main PID: 2848 (code=exited, status=0/SUCCESS)
That is the old main PID, which of course is gone.
And after the next restart the children are the "old ones".

Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

That makes some sort of sense.
The action for reload is "just" a signal to $MAINPID:
ExecReload=/bin/kill -HUP $MAINPID

I think the failure is a second -HUP signal arriving before the former reload is fully complete. And on a more complex keepalived setup the time window to trigger this might be longer as well.

Yet the service files in Xenial and Zesty are identical, and I don't know of a way to wait for a signal to be handled.
It might even be a systemd fix in Zesty that now "waits" for such a signal to be processed.

So, lesson learned: I seem to be able to reliably trigger it with a head-to-head restart:
"sudo systemctl restart keepalived; sudo systemctl restart keepalived;"

Going to my Zesty env to confirm that there.

Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

The -HUP is only sent on reload; the issue is reported on restart.
Restart really kills the process AFAIK, which makes even more sense (PIDs not fully updated by the time of the next kill).

Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

A first check with the head-to-head restart confirms my former assumption; on Zesty it is:
1. not failing
2. taking longer each restart

I wanted to go further, but a trivial comparison test with this:
$ time for i in $(seq 1 200); do sudo systemctl restart keepalived; sudo systemctl status keepalived | egrep 'Main.*exited'; done
failed because the default start limit kicked in.

Well, let's start softer and go with reload first.
$ time for i in $(seq 1 200); do sudo systemctl reload keepalived; sudo systemctl status keepalived | egrep 'Main.*exited'; done

That works fine on both (but then the HUP just forces a reload and the PIDs stay). So by design this does not run into the same fault.

OK, so with that, let's obey the default start limit of 5 starts per 10 seconds and compare again.
Xenial:
$ time for i in $(seq 1 5); do sudo systemctl restart keepalived; sudo systemctl status keepalived | egrep 'Main.*exited'; done
 Main PID: 8800 (code=exited, status=0/SUCCESS)
 Main PID: 8836 (code=exited, status=0/SUCCESS)

real 0m0.156s
user 0m0.008s
sys 0m0.000s

Note: 2 hits are all we can get: restart #1 works, #2 comes too early and triggers the issue, #3 cleans up, #4 triggers again, #5 cleans up again. (How to inspect the limit is sketched at the end of this comment.)

Zesty:
$ time for i in $(seq 1 5); do sudo systemctl restart keepalived; sudo systemctl status keepalived | egrep 'Main.*exited'; done

real 0m2,258s
user 0m0,012s
sys 0m0,000s

So that seems to be the repro we need:
- head-to-head restarts with no time in between
- showing the error on Xenial
- showing it is not occurring on Zesty
- showing that something on Zesty makes it "wait", which is what avoids the issue
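
The start limit the loops have to obey can be inspected, or disabled for testing - a sketch, assuming the running systemd exposes the properties under these names:

  $ systemctl show keepalived -p StartLimitBurst -p StartLimitInterval

  # hypothetical drop-in /etc/systemd/system/keepalived.service.d/limits.conf
  # (run "systemctl daemon-reload" after creating it)
  [Service]
  StartLimitInterval=0   # disables restart rate-limiting; for testing only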

Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

Note: with my loop test above I could show that on Xenial this triggers "often" but not 100% of the time, more like 75%. So to be sure about a fix, one should run that loop multiple times with 10-second sleeps in between.

Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

What I think is happening in our case:
Since no ExecStop= was specified, systemd will send SIGTERM [...]
Details: https://www.freedesktop.org/software/systemd/man/systemd.kill.html#

KillMode is "process" in the service file.
That means "If set to process, only the main process itself is killed."

So in this case systemd relies on the TERM being forwarded to the child processes.
That takes time.
Without waiting for that to complete, the following restart sends the next SIGTERM, which eliminates the (already cleaning up) main process before it can distribute the TERM to its children/siblings. This is our error state.

In this broken state
 Main PID: 10600 (code=exited, status=0/SUCCESS)
Our KillMode=process might have special handling and kill all of them (since there is no main process left to kill). That is the cleanup which gets it back to working again.

Since the service files are the same in both cases (X/Z), I wonder if there is a systemd change that fixes this by some sort of waiting for the signal to be handled (e.g. waiting for the MainPID to go away on its own). An illustrative way to watch the race is sketched below.
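
One illustrative way to watch that window (a sketch only; exact timing and output will vary):

  $ sudo systemctl restart keepalived
  $ # immediately afterwards, list survivors and their parents:
  $ ps -o pid,ppid,args -C keepalived

In the broken state the old children show up reparented to PID 1, while systemd already reports the old MainPID as exited.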

Systemd versions:
Xenial: 229-4ubuntu16
Zesty: 232-18ubuntu1

description: updated
Changed in systemd (Ubuntu):
milestone: none → ubuntu-17.03
assignee: nobody → Dimitri John Ledkov (xnox)
Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

I can confirm that adding a PIDFile as suggested makes it survive the looped test.
We should still try to understand the changes behind it working on Zesty.

But sooner or later, adding the PIDFile might be the less invasive option to make it work in an SRU.

Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

@Yutani - would you mind reporting the two diffs against upstream to Debian and mentioning the bug number here?
- After=syslog.target, which might be reasonable to pick
- PIDFile=/var/run/keepalived.pid (which is not bad, and we thought it would be the fix here)

If you are unwilling or unable let me know, but it would be a great help if you could do so.

For the SRU this might get interesting later. Usually an SRU requires the fix to be in the latest release - but it is "fixed" in Zesty, just not via the code that we would SRU back (PIDFile).
If possible, I'll wait for the systemd info to show up here to get the bigger picture.

Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

While I'd like to understand what systemd changed, I still think adding the PIDFile is the correct fix for keepalived. Understanding systemd might just open up fixes for more than this one service.

Nevertheless, for Type=forking (which is the case here) PIDFile is recommended (https://www.freedesktop.org/software/systemd/man/systemd.service.html); various cases are listed in which the MainPID guessing might otherwise fail.
Also, it is proven to fix the issue, and it matches the upstream service file. (A quick post-fix sanity check is sketched below.)
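
With the PIDFile in place, systemd's MainPID and the pidfile should agree after a restart (illustrative check; the path is the one from the upstream unit):

  $ sudo systemctl restart keepalived
  $ cat /var/run/keepalived.pid
  $ systemctl show -p MainPID keepalived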

Next week I'll schedule a fix, and also file one with Debian so we don't keep that delta forever.

Changed in keepalived (Ubuntu Xenial):
status: Confirmed → Triaged
assignee: nobody → ChristianEhrhardt (paelzer)
Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

Reported the request to pick up the PIDFile statement to Debian.
Linking up the Debian bug here.

Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

I tested Yakkety to check where an SRU makes sense, and can confirm that there, with systemd 231-9ubuntu3, the loop restarts are already slow and work without the issue this bug is about.

Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

Added SRU Template for Xenial
- Yakkety/Zesty do not show the issue (likely due to the systemd version)
- Trusty is pre-systemd, and even the old version uses --pidfile in its call to start-stop-daemon (see the sketch below)
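
For reference, the kind of invocation meant here - a hypothetical excerpt, not copied from the actual Trusty init script:

  # stopping via the pidfile means stale processes cannot be misidentified
  start-stop-daemon --stop --quiet --oknodo --pidfile /var/run/keepalived.pid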

description: updated
Changed in keepalived (Debian):
status: Unknown → New
Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

Fix prepared via bileto at https://launchpad.net/~ci-train-ppa-service/+archive/ubuntu/2564/
I verified the fix from the PPA, and that we need no extra systemd daemon-reload.
In the meantime all dep8 tests have finished as well.

That said: SRU template ready, fix ready, tests OK => syncing into the SRU approval queue.

Revision history for this message
Brian Murray (brian-murray) wrote : Please test proposed package

Hello Yutani, or anyone else affected,

Accepted keepalived into xenial-proposed. The package will build now and be available at https://launchpad.net/ubuntu/+source/keepalived/1:1.2.19-1ubuntu0.2 in a few hours, and then in the -proposed repository.

Please help us by testing this new package. See https://wiki.ubuntu.com/Testing/EnableProposed for documentation on how to enable and use -proposed. Your feedback will aid us in getting this update out to other Ubuntu users.

If this package fixes the bug for you, please add a comment to this bug, mentioning the version of the package you tested, and change the tag from verification-needed to verification-done. If it does not fix the bug for you, please add a comment stating that, and change the tag to verification-failed. In either case, details of your testing will help us make a better decision.

Further information regarding the verification process can be found at https://wiki.ubuntu.com/QATeam/PerformingSRUVerification . Thank you in advance!

Changed in keepalived (Ubuntu Xenial):
status: Triaged → Fix Committed
tags: added: verification-needed
Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

Tested on Xenial
pre-Proposed: 15 hits
with-Proposed: 0 hits

Also, regarding general regressions, things seem to work normally.
Setting verification-done.

Yutani - if you could also verify -proposed, that would make it even better!

tags: added: verification-done
removed: verification-needed
Changed in systemd (Ubuntu):
assignee: Dimitri John Ledkov (xnox) → nobody
milestone: ubuntu-17.03 → none
Revision history for this message
Launchpad Janitor (janitor) wrote :

This bug was fixed in the package keepalived - 1:1.2.19-1ubuntu0.2

---------------
keepalived (1:1.2.19-1ubuntu0.2) xenial; urgency=medium

  * Add PIDFile to avoid misdetection of MainPID on restart (LP: #1644530).

 -- Christian Ehrhardt <email address hidden> Mon, 13 Mar 2017 13:23:47 +0100

Changed in keepalived (Ubuntu Xenial):
status: Fix Committed → Fix Released
Revision history for this message
Steve Langasek (vorlon) wrote : Update Released

The verification of the Stable Release Update for keepalived has completed successfully and the package has now been released to -updates. Subsequently, the Ubuntu Stable Release Updates Team is being unsubscribed and will not receive messages about this bug report. In the event that you encounter a regression using the package from -updates please report a new bug using ubuntu-bug and tag the bug report regression-update so we can easily find any regressions.

Revision history for this message
Dimitri John Ledkov (xnox) wrote :

Anything else needed on src:systemd side of things?

Changed in systemd (Ubuntu):
status: New → Incomplete
Changed in systemd (Ubuntu Xenial):
status: New → Incomplete
Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

> Anything else needed on src:systemd side of things?

Hi xnox,
Well, in a perfect world, and with a time machine to help, you could look at what in Zesty's systemd fixed this to work reliably even with the suboptimal service file.
But you have more important tasks all around you, and it is fixed where it was broken. Until then I think "Opinion" is more appropriate than "Incomplete" - but in whatever state, it is OK to just let this hang around for now.

If similar issues come up for other services we might reconsider.

Changed in systemd (Ubuntu):
status: Incomplete → Opinion
Changed in systemd (Ubuntu Xenial):
status: Incomplete → Opinion
Changed in keepalived (Debian):
status: New → Fix Released
Revision history for this message
xavier (xavier-o) wrote :

This affects 17.10 and 18.04 as well.

Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

Hi xavier,
while it is true that the PIDFile directive is not present in those newer releases (as outlined by Yutani in comment #6), the issue itself does not show up there; see comment #10.

I ran the test in Bionic again today and can confirm that it still does not trigger.
So I have to assume it is fixed by something else (a newer systemd?) there.

If you have a case that can reproduce this please outline the steps to do so and we can consider porting the fix to 17.10/18.04 as well.
