health ping timed out on controller

Bug #1845019 reported by Ashley Lai
This bug affects 1 person
Affects: Canonical Juju
Status: Expired
Importance: Medium
Assigned to: Unassigned
Milestone: none

Bug Description

On our Solutions QA run, creating a controller on an existing OpenStack gives the error below.

ERROR:root:ERROR health ping timed out after 30s
ERROR connection is shut down

It's not clear what Juju is trying to ping, and juju status shows the 3 nodes are active.

juju status:
model:
  name: controller
  type: iaas
  controller: foundation-openstack
  cloud: openstack_cloud
  region: RegionOne
  version: 2.6.8
  model-status:
    current: available
    since: 21 Sep 2019 16:53:33Z
  sla: unsupported
machines:
  "0":
    juju-status:
      current: started
      since: 21 Sep 2019 16:53:45Z
      version: 2.6.8
    dns-name: 10.244.32.227
    ip-addresses:
    - 10.244.32.227
    - 172.16.0.226
    - 252.226.0.1
    instance-id: 8dcbfa60-4103-4923-9e67-a8356de309c4
    machine-status:
      current: running
      message: ACTIVE
      since: 21 Sep 2019 16:53:53Z
    modification-status:
      current: idle
      since: 21 Sep 2019 16:53:35Z
    series: bionic
    network-interfaces:
      ens3:
        ip-addresses:
        - 172.16.0.226
        mac-address: fa:16:3e:93:19:de
        gateway: 172.16.0.1
        is-up: true
      fan-252:
        ip-addresses:
        - 252.226.0.1
        mac-address: f2:1d:8e:fd:17:fd
        is-up: true
    constraints: mem=3584M
    hardware: arch=amd64 cores=2 mem=4096M root-disk=40960M availability-zone=nova
    controller-member-status: has-vote
  "1":
    juju-status:
      current: started
      since: 21 Sep 2019 16:55:02Z
      version: 2.6.8
    dns-name: 10.244.32.53
    ip-addresses:
    - 10.244.32.53
    - 172.16.0.13
    - 252.13.0.1
    instance-id: 719ed928-4659-46f8-891e-536c5327f978
    machine-status:
      current: running
      message: ACTIVE
      since: 21 Sep 2019 16:54:29Z
    modification-status:
      current: idle
      since: 21 Sep 2019 16:53:48Z
    series: bionic
    network-interfaces:
      ens3:
        ip-addresses:
        - 172.16.0.13
        mac-address: fa:16:3e:ef:67:62
        gateway: 172.16.0.1
        is-up: true
      fan-252:
        ip-addresses:
        - 252.13.0.1
        mac-address: 8a:37:f3:9d:17:e5
        is-up: true
    hardware: arch=amd64 cores=1 mem=2048M root-disk=20480M availability-zone=nova
  "2":
    juju-status:
      current: started
      since: 21 Sep 2019 16:55:01Z
      version: 2.6.8
    dns-name: 10.244.32.48
    ip-addresses:
    - 10.244.32.48
    - 172.16.0.229
    - 252.229.0.1
    instance-id: 13ac4f9d-5280-42bb-b56d-b4072d5c4c20
    machine-status:
      current: running
      message: ACTIVE
      since: 21 Sep 2019 16:54:36Z
    modification-status:
      current: idle
      since: 21 Sep 2019 16:53:49Z
    series: bionic
    network-interfaces:
      ens3:
        ip-addresses:
        - 172.16.0.229
        mac-address: fa:16:3e:83:4e:cb
        gateway: 172.16.0.1
        is-up: true
      fan-252:
        ip-addresses:
        - 252.229.0.1
        mac-address: 6a:15:03:b7:ed:cb
        is-up: true
    hardware: arch=amd64 cores=1 mem=2048M root-disk=20480M availability-zone=nova
applications:
  juju-introspection-proxy:
    charm: cs:~juju-introspection-proxy-charmers/juju-introspection-proxy-2
    series: bionic
    os: ubuntu
    charm-origin: jujucharms
    charm-name: juju-introspection-proxy
    charm-rev: 2
    exposed: false
    application-status:
      current: active
      message: serving metrics
      since: 21 Sep 2019 16:55:26Z
    relations:
      container:
      - ubuntu
    subordinate-to:
    - ubuntu
    endpoint-bindings:
      container: ""
  ubuntu:
    charm: cs:ubuntu-12
    series: bionic
    os: ubuntu
    charm-origin: jujucharms
    charm-name: ubuntu
    charm-rev: 12
    exposed: false
    application-status:
      current: active
      message: ready
      since: 21 Sep 2019 16:54:44Z
    relations:
      juju-info:
      - juju-introspection-proxy
    units:
      ubuntu/0:
        workload-status:
          current: active
          message: ready
          since: 21 Sep 2019 16:54:44Z
        juju-status:
          current: idle
          since: 21 Sep 2019 16:54:45Z
          version: 2.6.8
        leader: true
        machine: "0"
        public-address: 10.244.32.227
        subordinates:
          juju-introspection-proxy/0:
            workload-status:
              current: active
              message: serving metrics
              since: 21 Sep 2019 16:55:26Z
            juju-status:
              current: idle
              since: 21 Sep 2019 16:55:30Z
              version: 2.6.8
            leader: true
            upgrading-from: cs:~juju-introspection-proxy-charmers/juju-introspection-proxy-2
            open-ports:
            - 19090/tcp
            public-address: 10.244.32.227
      ubuntu/1:
        workload-status:
          current: active
          message: ready
          since: 21 Sep 2019 16:55:51Z
        juju-status:
          current: idle
          since: 21 Sep 2019 16:55:52Z
          version: 2.6.8
        machine: "1"
        public-address: 10.244.32.53
        subordinates:
          juju-introspection-proxy/1:
            workload-status:
              current: active
              message: serving metrics
              since: 21 Sep 2019 16:56:29Z
            juju-status:
              current: idle
              since: 21 Sep 2019 16:56:33Z
              version: 2.6.8
            upgrading-from: cs:~juju-introspection-proxy-charmers/juju-introspection-proxy-2
            open-ports:
            - 19090/tcp
            public-address: 10.244.32.53
      ubuntu/2:
        workload-status:
          current: active
          message: ready
          since: 21 Sep 2019 16:55:52Z
        juju-status:
          current: idle
          since: 21 Sep 2019 16:55:54Z
          version: 2.6.8
        machine: "2"
        public-address: 10.244.32.48
        subordinates:
          juju-introspection-proxy/2:
            workload-status:
              current: active
              message: serving metrics
              since: 21 Sep 2019 16:56:27Z
            juju-status:
              current: idle
              since: 21 Sep 2019 16:56:31Z
              version: 2.6.8
            upgrading-from: cs:~juju-introspection-proxy-charmers/juju-introspection-proxy-2
            open-ports:
            - 19090/tcp
            public-address: 10.244.32.48
    version: "18.04"
storage: {}
controller:
  timestamp: 17:14:47Z

https://oil-jenkins.canonical.com/artifacts/cbd44df1-2693-4905-ae05-49790e352e12/index.html

Tags: cdo-qa
Revision history for this message
Ashley Lai (alai) wrote :
Revision history for this message
Richard Harding (rharding) wrote :

Sorry, maybe I'm missing it. What command was in flight during the ping timeout? You mention creating a controller on an OpenStack base, but the status output shows deployed applications, so it's not a fresh bootstrap?

Revision history for this message
Ashley Lai (alai) wrote :

It is a fresh bootstrap followed by deploying a bundle. It looks like Juju is doing the ping check before deploying the bundle, and the ping failed. It would be helpful to include the IP it is trying to ping in the error message.

Revision history for this message
Tim Penhey (thumper) wrote :

This is not a Juju error:

ERROR:root:ERROR health ping timed out after 30s

It probably comes from a Python script, by the look of the prefix.

Changed in juju:
status: New → Incomplete
Revision history for this message
Launchpad Janitor (janitor) wrote :

[Expired for juju because there has been no activity for 60 days.]

Changed in juju:
status: Incomplete → Expired
Revision history for this message
Jason Hobbs (jason-hobbs) wrote :

The Python script (juju-wait) is surfacing it, but the error comes from Juju:

api/monitor.go: logger.Errorf("health ping timed out after %s", m.pingTimeout)

Changed in juju:
status: Expired → New
Revision history for this message
Pen Gale (pengale) wrote :

I wonder if this is related to https://bugs.launchpad.net/juju/+bug/1899657

Revision history for this message
John A Meinel (jameinel) wrote :

        case <-m.clock.After(m.pingTimeout):
                logger.Errorf("health ping timed out after %s", m.pingTimeout)

This happens if the client sends a Ping request to the controller and the controller does not respond within the timeout period.
In this case, PingPeriod is 1 minute and PingTimeout is 30s.
If you are seeing this, the critical issue is not that the ping timed out; it is that something inside the controller is preventing it from responding within 30s.

Either the connection has been silently dropped, or the controller is so heavily loaded that it can't respond to a trivial request.
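
For readers following along, here is a minimal, self-contained Go sketch of the client-side behaviour described above (a ping sent every minute, abandoned after 30s without a reply). It is not Juju's actual api/monitor.go: the Pinger interface, the constants, and slowPinger are illustrative stand-ins.

package main

import (
    "context"
    "errors"
    "log"
    "time"
)

// Pinger abstracts "send one Ping request and wait for the reply".
type Pinger interface {
    Ping(ctx context.Context) error
}

// Illustrative values matching the timings described above.
const (
    pingPeriod  = 1 * time.Minute  // how often a ping is sent
    pingTimeout = 30 * time.Second // how long to wait for the reply
)

var errPingTimeout = errors.New("health ping timed out")

// monitor sends a ping once per period and gives up if any single ping
// is not answered within pingTimeout, mirroring the point at which the
// agent tears down its API connection.
func monitor(ctx context.Context, p Pinger) error {
    ticker := time.NewTicker(pingPeriod)
    defer ticker.Stop()
    for {
        select {
        case <-ctx.Done():
            return ctx.Err()
        case <-ticker.C:
            pingCtx, cancel := context.WithTimeout(ctx, pingTimeout)
            err := p.Ping(pingCtx)
            cancel()
            if errors.Is(err, context.DeadlineExceeded) {
                log.Printf("health ping timed out after %s", pingTimeout)
                return errPingTimeout
            }
            if err != nil {
                return err
            }
        }
    }
}

// slowPinger simulates a controller that takes too long to answer.
type slowPinger struct{ delay time.Duration }

func (s slowPinger) Ping(ctx context.Context) error {
    select {
    case <-time.After(s.delay):
        return nil
    case <-ctx.Done():
        return ctx.Err()
    }
}

func main() {
    // A controller that needs 45s to answer trips the 30s timeout on the
    // first ping, about 90 seconds after start.
    err := monitor(context.Background(), slowPinger{delay: 45 * time.Second})
    log.Printf("monitor exited: %v", err)
}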

Revision history for this message
John A Meinel (jameinel) wrote :

For something like https://bugs.launchpad.net/juju/+bug/1899657 I would expect it to manifest more as "peer closed the connection" (e.g. the apiserver actively calling Close() on the socket) rather than "just didn't respond".

Revision history for this message
Michael Skalka (mskalka) wrote :

Seen again during this test run: https://solutions.qa.canonical.com/testruns/testRun/399e6b20-63f0-4996-be44-9494b22ff611

Using Juju 2.8.5

This time it was during a hook execution:

var/log/juju/unit-hacluster-vault-2.log:
...
2020-10-20 15:45:16 DEBUG juju-log ha:144: Pacemaker status: True, Corosync status: True
2020-10-20 15:46:01 ERROR juju.api monitor.go:59 health ping timed out after 30s
2020-10-20 15:46:01 DEBUG juju.worker.dependency engine.go:598 "api-caller" manifold worker stopped: api connection broken unexpectedly
2020-10-20 15:46:01 ERROR juju.worker.dependency engine.go:671 "api-caller" manifold worker returned unexpected error: api connection broken unexpectedly
2020-10-20 15:46:01 DEBUG juju.worker.dependency engine.go:673 stack trace:
/workspace/_build/src/github.com/juju/juju/worker/apicaller/worker.go:73: api connection broken unexpectedly
...

Further down in the log the unit keeps attempting an API connection to the controller and fails:
...
2020-10-20 15:47:11 DEBUG juju.worker.apicaller connect.go:115 connecting with current password
2020-10-20 15:47:14 DEBUG juju.worker.apicaller connect.go:155 [378ecd] failed to connect
2020-10-20 15:47:14 DEBUG juju.worker.dependency engine.go:598 "api-caller" manifold worker stopped: [378ecd] "unit-hacluster-vault-2" cannot open api: unable to connect to API: dial tcp 10.246.64.201:17070: i/o timeout
...

Then it fails to process a hook, leadership election, and so on.

During this time the rest of the model was humming away fine.

Revision history for this message
John A Meinel (jameinel) wrote : Re: [Bug 1845019] Re: health ping timed out on controller

Do we have any other indication that that machine can connect to 10.246.64.201:17070? Were there other units on that machine that were running happily? Was that the correct address of the API server?

John A Meinel (jameinel)
Changed in juju:
importance: Undecided → High
status: New → Incomplete
Revision history for this message
Jason Hobbs (jason-hobbs) wrote :

It's not a unit machine that's failing; it's the control point where we're running 'juju' from. That is the correct IP, and we've been talking to it for a long time at this point.

Changed in juju:
status: Incomplete → New
Revision history for this message
Jason Hobbs (jason-hobbs) wrote :

Actually, we see it both from the Juju client that FCE is driving and from the machine agents.

Revision history for this message
Pen Gale (pengale) wrote :

Triaging as high and adding to 2.8.7 milestone, as we are focusing on eliminating Solutions QA hiccups that slow testing down for Juju and its sibling projects.

Changed in juju:
status: New → Triaged
milestone: none → 2.8.7
Pen Gale (pengale)
Changed in juju:
milestone: 2.8.7 → 3.0.0
milestone: 3.0.0 → 2.8.7
Ian Booth (wallyworld)
Changed in juju:
milestone: 2.8.7 → 2.9.1
Revision history for this message
Michael Skalka (mskalka) wrote :

Subscribing field-high; we have started to run into this during our HOV stable release testing. The latest failure, from a manifold worker failing on one of the hacluster-vault units, is here: https://solutions.qa.canonical.com/testruns/testRun/6ab8d55a-dacd-413f-ac69-d245f9d8895c

Revision history for this message
Ian Booth (wallyworld) wrote :

On the controller side, a ping responder is set up for each client connection.
If this responder has not heard a ping from the client for 3 minutes, it closes the connection with a logged debug message "closing connection due to ping timout" (sic).

I can see no evidence of this message in the logs, so it seems the controller is not closing the connection.

On the caller side, in this case the agent, a ping is sent every 1 minute and the agent waits up to 30 seconds for a response. It is this ping which is timing out and closing the agent connection, causing the agent workers to stop.

The ping is a very simple request/response API call. Either the connection is getting dropped outside of Juju, or the system is so busy it cannot reply within the 30-second timeout. 30 seconds does seem a generous amount of time to wait.
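
For illustration, here is a minimal, self-contained Go sketch of the controller-side responder described above. It is not Juju's apiserver code: pingResponder, maxPingInterval, and fakeConn are hypothetical names standing in for "close the connection if no ping is heard for 3 minutes".

package main

import (
    "io"
    "log"
    "time"
)

// Illustrative value matching the 3-minute window described above.
const maxPingInterval = 3 * time.Minute

// pingResponder watches a single client connection. Every received ping
// pushes the deadline out; if no ping arrives for maxPingInterval, the
// connection is closed and a message is logged.
func pingResponder(conn io.Closer, pings <-chan struct{}) {
    timer := time.NewTimer(maxPingInterval)
    defer timer.Stop()
    for {
        select {
        case _, ok := <-pings:
            if !ok {
                return // connection went away normally
            }
            // Heard a ping: reset the deadline.
            if !timer.Stop() {
                <-timer.C
            }
            timer.Reset(maxPingInterval)
        case <-timer.C:
            log.Printf("closing connection due to ping timeout")
            conn.Close()
            return
        }
    }
}

// fakeConn stands in for a real client connection in this sketch.
type fakeConn struct{}

func (fakeConn) Close() error {
    log.Printf("connection closed")
    return nil
}

func main() {
    pings := make(chan struct{})
    go pingResponder(fakeConn{}, pings)
    pings <- struct{}{} // one ping arrives...
    // ...then silence: after 3 minutes the responder closes the connection.
    time.Sleep(maxPingInterval + time.Second)
}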

Grepping the logs, it appears 4 units (plus the machine agent) all timed out at around the same time, all on machine 8:

8/baremetal/var/log/juju/unit-hacluster-vault-2.log:2020-11-26 04:42:09 ERROR juju.worker.dependency engine.go:671 "api-caller" manifold worker returned unexpected error: api connection broken unexpectedly
8/baremetal/var/log/juju/unit-vault-mysql-router-2.log:2020-11-26 04:42:17 ERROR juju.worker.dependency engine.go:671 "api-caller" manifold worker returned unexpected error: api connection broken unexpectedly
8/baremetal/var/log/juju/machine-8.log:2020-11-26 04:42:49 ERROR juju.worker.dependency engine.go:671 "api-caller" manifold worker returned unexpected error: api connection broken unexpectedly
8/baremetal/var/log/juju/unit-ntp-6.log:2020-11-26 04:43:01 ERROR juju.worker.dependency engine.go:671 "api-caller" manifold worker returned unexpected error: api connection broken unexpectedly
8/baremetal/var/log/juju/unit-vault-2.log:2020-11-26 04:42:03 ERROR juju.worker.dependency engine.go:671 "api-caller" manifold worker returned unexpected error: api connection broken unexpectedly

Is it possible to confirm (e.g. via the Juju dashboard) how busy the controllers or the various host machines are when the ping times out?

Revision history for this message
Jason Hobbs (jason-hobbs) wrote :

From a recent failure:
2021-01-26 07:46:33 DEBUG juju.worker.uniter.remotestate watcher.go:636 update status timer triggered for hacluster-vault/0
2021-01-26 07:46:49 ERROR juju.api monitor.go:59 health ping timed out after 30s
2021-01-26 07:46:49 DEBUG juju.worker.dependency engine.go:598 "api-caller" manifold worker stopped: api connection broken unexpectedly
2021-01-26 07:46:49 ERROR juju.worker.dependency engine.go:671 "api-caller" manifold worker returned unexpected error: api connection broken unexpectedly
2021-01-26 07:46:49 DEBUG juju.worker.dependency engine.go:673 stack trace:
/home/jenkins/workspace/BuildJuju-amd64/_build/src/github.com/juju/juju/worker/apicaller/worker.go:73: api connection broken unexpectedly
2021-01-26 07:46:49 ERROR juju.worker.uniter.metrics listener.go:52 failed to close the collect-metrics listener: close unix /var/lib/juju/agents/unit-hacluster-vault-0/333732716/s: use of closed network connection
2021-01-26 07:46:49 DEBUG juju.worker.dependency engine.go:598 "upgrader" manifold worker stopped: connection is shut down
2021-01-26 07:46:49 DEBUG juju.worker.dependency engine.go:598 "migration-inactive-flag" manifold worker stopped: connection is shut down
2021-01-26 07:46:49 DEBUG juju.worker.dependency engine.go:598 "metric-sender" manifold worker stopped: could not send metrics: connection is shut down
2021-01-26 07:46:49 DEBUG juju.worker.dependency engine.go:598 "leadership-tracker" manifold worker stopped: leadership failure: error making a leadership claim: connection is shut down
2021-01-26 07:46:49 DEBUG juju.worker.dependency engine.go:584 "log-sender" manifold worker completed successfully
2021-01-26 07:46:49 DEBUG juju.worker.dependency engine.go:598 "migration-minion" manifold worker stopped: connection is shut down
2021-01-26 07:46:49 DEBUG juju.worker.dependency engine.go:598 "hook-retry-strategy" manifold worker stopped: connection is shut down
2021-01-26 07:46:49 DEBUG juju.worker.dependency engine.go:598 "api-address-updater" manifold worker stopped: connection is shut down
2021-01-26 07:46:49 DEBUG juju.worker.dependency engine.go:598 "proxy-config-updater" manifold worker stopped: connection is shut down
2021-01-26 07:46:49 DEBUG juju.worker.dependency engine.go:598 "meter-status" manifold worker stopped: connection is shut down
2021-01-26 07:46:49 DEBUG juju.worker.dependency engine.go:584 "charm-dir" manifold worker completed successfully
2021-01-26 07:46:49 DEBUG juju.worker.dependency engine.go:584 "metric-spool" manifold worker completed successfully
2021-01-26 07:46:49 DEBUG juju.worker.dependency engine.go:584 "metric-collect" manifold worker completed successfully
2021-01-26 07:46:49 DEBUG juju.worker.uniter runlistener.go:129 juju-run listener stopping
2021-01-26 07:46:49 DEBUG juju.worker.uniter runlistener.go:148 juju-run listener stopped
2021-01-26 07:46:49 INFO juju.worker.logger logger.go:136 logger worker stopped
2021-01-26 07:46:49 DEBUG juju.worker.dependency engine.go:598 "logging-config-updater" manifold worker stopped: connection is shut down
2021-01-26 07:46:52 DEBUG juju.worker.apicaller connect.go:115 connecting with current password
2021-01-26 07:46:55 DEBUG juju.wor...

Ian Booth (wallyworld)
Changed in juju:
milestone: 2.9.1 → 2.9.2
Changed in juju:
milestone: 2.9.2 → 2.9.3
Revision history for this message
John A Meinel (jameinel) wrote :

So this particular health ping timeout indicates that the controller has stopped responding.
I'm not sure there is much more we can do without having access to a controller where this is happening, as there is likely some sort of load/corruption/database failure going on at the same time.

It may be worth digging into the logs to figure out if there is some other concurrent failure, but the actual failure appears to be the controller failing to keep up with activity.

Revision history for this message
John A Meinel (jameinel) wrote :

Unsubscribing Field High because we aren't actually treating this with 1-developer 5/8 focus.
If it is just that Juju is failing under load, that feels like a much bigger cycle issue that needs significant dedicated time.

This doesn't seem to have occurred since January, so I'm going to leave it triaged as Medium, but if it does recur we can bring the priority up again.

Changed in juju:
importance: High → Medium
milestone: 2.9.3 → none
Revision history for this message
Michael Skalka (mskalka) wrote :

We are continuing to see similar issues in our runs, most recently here: https://solutions.qa.canonical.com/testruns/testRun/89dcf2ea-99af-42a0-9275-4fb9928655c5

A unit (hacluster-vault/0) goes into an error state from a failed hook. The failure is not logged in the unit agent logs (which is a bug in and of itself), but in machine-7.log we get:

2021-07-25 19:56:06 ERROR juju.api monitor.go:59 health ping timed out after 30s
2021-07-25 19:56:06 DEBUG juju.worker.dependency engine.go:598 "api-caller" manifold worker stopped: api connection broken unexpectedly
2021-07-25 19:56:06 ERROR juju.worker.dependency engine.go:671 "api-caller" manifold worker returned unexpected error: api connection broken unexpectedly
2021-07-25 19:56:06 DEBUG juju.worker.dependency engine.go:673 stack trace:
/home/jenkins/workspace/BuildJuju-amd64/_build/src/github.com/juju/juju/worker/apicaller/worker.go:73: api connection broken unexpectedly
2021-07-25 19:56:06 DEBUG juju.cmd.jujud machine.go:1303 stopping so killing worker "7-container-watcher"
2021-07-25 19:56:06 INFO juju.worker.logger logger.go:136 logger worker stopped
2021-07-25 19:56:06 DEBUG juju.worker.dependency engine.go:598 "migration-inactive-flag" manifold worker stopped: connection is shut down
2021-07-25 19:56:06 DEBUG juju.worker.dependency engine.go:598 "valid-credential-flag" manifold worker stopped: connection is shut down
2021-07-25 19:56:06 DEBUG juju.worker.dependency engine.go:598 "upgrader" manifold worker stopped: connection is shut down
2021-07-25 19:56:06 DEBUG juju.worker.dependency engine.go:584 "log-sender" manifold worker completed successfully
2021-07-25 19:56:06 DEBUG juju.worker.dependency engine.go:584 "disk-manager" manifold worker completed successfully
2021-07-25 19:56:06 DEBUG juju.worker.dependency engine.go:598 "fan-configurer" manifold worker stopped: connection is shut down
2021-07-25 19:56:06 DEBUG juju.worker.dependency engine.go:598 "logging-config-updater" manifold worker stopped: connection is shut down
2021-07-25 19:56:06 DEBUG juju.worker.dependency engine.go:598 "unit-agent-deployer" manifold worker stopped: connection is shut down
2021-07-25 19:56:06 DEBUG juju.worker.dependency engine.go:584 "broker-tracker" manifold worker completed successfully
2021-07-25 19:56:06 DEBUG juju.worker.dependency engine.go:598 "api-address-updater" manifold worker stopped: connection is shut down
2021-07-25 19:56:06 DEBUG juju.worker.dependency engine.go:598 "ssh-authkeys-updater" manifold worker stopped: connection is shut down
2021-07-25 19:56:06 DEBUG juju.worker.dependency engine.go:598 "reboot-executor" manifold worker stopped: connection is shut down
2021-07-25 19:56:06 DEBUG juju.worker.dependency engine.go:598 "machine-action-runner" manifold worker stopped: connection is shut down
2021-07-25 19:56:06 DEBUG juju.worker.dependency engine.go:598 "instance-mutater" manifold worker stopped: connection is shut down
2021-07-25 19:56:06 DEBUG juju.worker.dependency engine.go:598 "upgrade-series" manifold worker stopped: connection is shut down
2021-07-25 19:56:06 DEBUG juju.worker.dependency engine.go:598 "proxy-config-updater" manifold worker stopped: connection is shut down
202...

Changed in juju:
status: Triaged → New
Revision history for this message
John A Meinel (jameinel) wrote :

Looking at https://solutions.qa.canonical.com/bugs/bugs, this hasn't triggered since July 2021, so we'll wait to see if it recurs before moving forward.

Changed in juju:
status: New → Incomplete
Revision history for this message
Launchpad Janitor (janitor) wrote :

[Expired for juju because there has been no activity for 60 days.]

Changed in juju:
status: Incomplete → Expired