alertmanager doesn't start in lxd container

Bug #1828777 reported by cristi1979
This bug affects 2 people
Affects: Prometheus Alertmanager Charm
Status: Won't Fix
Importance: Medium
Assigned to: Unassigned

Bug Description

Because of https://github.com/prometheus/alertmanager/issues/1434, it doesn't seem possible to start alertmanager in a container.

I think it considers the 252.0.0.0/8 Fan range to be public address space, so it cannot find a private IP to advertise on.

Error:
root@juju-ca59d4-0-lxd-13:~# /snap/prometheus-alertmanager/current/bin/alertmanager --web.listen-address :9093
level=info ts=2019-05-13T07:13:33.054959017Z caller=main.go:174 msg="Starting Alertmanager" version="(version=0.15.0-rc.2, branch=HEAD, revision=ec2cc57d28b66590b2429b03fdba068eae32b17a)"
level=info ts=2019-05-13T07:13:33.055077776Z caller=main.go:175 build_context="(go=go1.9, user=root@lp-xenial-amd64, date=20180611-15:17:11)"
level=info ts=2019-05-13T07:13:33.079230453Z caller=cluster.go:156 component=cluster msg="setting advertise address explicitly" addr=<nil> port=9094
level=error ts=2019-05-13T07:13:33.080026525Z caller=main.go:201 msg="Unable to initialize gossip mesh" err="create memberlist: Failed to get final advertise address: Failed to parse advertise address \"<nil>\""
root@juju-ca59d4-0-lxd-13:~# /snap/prometheus-alertmanager/current/bin/alertmanager --web.listen-address 0.0.0.0:9093
level=info ts=2019-05-13T07:13:44.556028139Z caller=main.go:174 msg="Starting Alertmanager" version="(version=0.15.0-rc.2, branch=HEAD, revision=ec2cc57d28b66590b2429b03fdba068eae32b17a)"
level=info ts=2019-05-13T07:13:44.556456028Z caller=main.go:175 build_context="(go=go1.9, user=root@lp-xenial-amd64, date=20180611-15:17:11)"
level=info ts=2019-05-13T07:13:44.57179716Z caller=cluster.go:156 component=cluster msg="setting advertise address explicitly" addr=<nil> port=9094
level=error ts=2019-05-13T07:13:44.572423434Z caller=main.go:201 msg="Unable to initialize gossip mesh" err="create memberlist: Failed to get final advertise address: Failed to parse advertise address \"<nil>\""

IPs in container:

root@juju-ca59d4-0-lxd-13:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: gre0@NONE: <NOARP> mtu 1476 qdisc noop state DOWN group default qlen 1000
    link/gre 0.0.0.0 brd 0.0.0.0
3: gretap0@NONE: <BROADCAST,MULTICAST> mtu 1462 qdisc noop state DOWN group default qlen 1000
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
4: erspan0@NONE: <BROADCAST,MULTICAST> mtu 1450 qdisc noop state DOWN group default qlen 1000
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
6674: eth0@if6675: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1410 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:52:e7:7b brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 252.0.215.49/8 brd 252.255.255.255 scope global dynamic eth0
       valid_lft 3214sec preferred_lft 3214sec
    inet6 fe80::216:3eff:fe52:e77b/64 scope link
       valid_lft forever preferred_lft forever
6676: eth1@if6677: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:0e:f5:6d brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::216:3eff:fe0e:f56d/64 scope link
       valid_lft forever preferred_lft forever
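
A possible workaround (just a sketch, untested here, and assuming the --cluster.advertise-address flag in this alertmanager build behaves as documented upstream) would be to pass the Fan address from eth0 explicitly, so memberlist does not have to auto-detect a private IP:

root@juju-ca59d4-0-lxd-13:~# /snap/prometheus-alertmanager/current/bin/alertmanager \
    --web.listen-address :9093 \
    --cluster.advertise-address 252.0.215.49:9094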

Tags: atos
Revision history for this message
Alvaro Uria (aluria) wrote:

This happens when a container is only attached to the Fan network (i.e. LXD running on an OpenStack environment managed by Juju).

If the container is attached to at least one additional (private) network, alertmanager starts without issues.

If the container is attached to a single (private) network, alertmanager also starts.

As mentioned in the description, the issue was resolved upstream in 2018, but the snap has not been updated since then. I've marked this bug as "Medium" importance because it was filed nine months ago without further follow-up.
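
If a rebuilt snap is ever published, a rough way to check for and pull it (a sketch; whether a fixed revision actually exists in any channel is an assumption on my part) would be:

snap info prometheus-alertmanager
sudo snap refresh prometheus-alertmanager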

Changed in prometheus-alertmanager-charm:
status: New → Confirmed
importance: Undecided → Medium
Revision history for this message
Eric Chen (eric-chen) wrote:

This charm is no longer being actively maintained. Please consider using the new Canonical Observability Stack instead. (https://charmhub.io/topics/canonical-observability-stack)
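
As a sketch only (assuming a Kubernetes-backed Juju model and that the cos-lite bundle name on Charmhub is current), the replacement stack can be deployed with:

juju deploy cos-lite --trust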

Changed in charm-prometheus-alertmanager:
status: Confirmed → Won't Fix