2022-06-01 15:50:56
Hernan Garcia
description
In a customer environment spanning two (2) datacenters we have deployed two (2) independent Ceph clusters, one in each datacenter, and we are trying to use Grafana to show the Ceph metrics.
In the "[juju-openstack] / [juju] OSD device details" dashboard we see the following error:
---
"found duplicate series for the match group {ceph_daemon=\"osd.0\"} on the right hand-side of the operation: [{ceph_daemon=\"osd.0\", dns_name=\"juju-08a910-24-lxd-1.domain\", group=\"promoagents-juju\", instance=\"172.24.12.136:9283\", job=\"remote-6958e464317040928aab296791a550e5\"}, {ceph_daemon=\"osd.0\", dns_name=\"juju-08a910-2-lxd-0..domain\", group=\"promoagents-juju\", instance=\"172.24.12.78:9283\", job=\"remote-2346cc18b30146f38b116257e4436213\"}];many-to-many matching not allowed: matching labels must be unique on one side"
---
The labels are not unique because OSD daemon names are repeated in both clusters, in this case osd.0.
A workaround would be to name the OSD devices differently in the two clusters, say osd.{0..n} in one cluster and osd.{n+1..m} in the other. But it would be better to fix this by making the labels unique in some way.
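On the query side, the many-to-many error could also be avoided by widening the match group so that the right-hand side is unique per cluster. A minimal sketch of the idea, assuming a typical Ceph dashboard join against ceph_osd_metadata (the exact dashboard query is not shown in this report, so the metric names here are illustrative):

```promql
# Matching on ceph_daemon alone collides across the two clusters,
# because both have an osd.0. Adding the per-scrape-job label to the
# match group makes each series on the right-hand side unique:
ceph_osd_op_w_latency_sum * on (ceph_daemon, job) group_left (device) ceph_osd_metadata
```

This only works if every dashboard query is updated consistently, which is why a label-based fix at scrape time would be cleaner.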
These are the relations added:
---
# ceph-osd
ceph-osd-atm-zone1:juju-info prometheus-grok-exporter:juju-info juju-info subordinate
ceph-osd-dic-zone1:juju-info prometheus-grok-exporter:juju-info juju-info subordinate
# prometheus
prometheus-libvirt-exporter:dashboards grafana:dashboards grafana-dashboard regular
prometheus-openstack-exporter:dashboards grafana:dashboards grafana-dashboard regular
# grafana
ceph-dashboard-atm-zone1:grafana-dashboard grafana:dashboards grafana-dashboard regular
ceph-dashboard-dic-zone1:grafana-dashboard grafana:dashboards grafana-dashboard regular
prometheus-libvirt-exporter:dashboards grafana:dashboards grafana-dashboard regular
prometheus-openstack-exporter:dashboards grafana:dashboards grafana-dashboard regular
---
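One way to make the labels unique without renaming OSDs would be to attach a distinguishing label to each cluster's metrics at scrape time. A hedged sketch of a Prometheus scrape configuration fragment (the job names and cluster label values below are hypothetical, not taken from the deployment above; only the exporter addresses appear in the error message):

```yaml
scrape_configs:
  - job_name: ceph-atm-zone1            # hypothetical job name
    static_configs:
      - targets: ['172.24.12.136:9283']
        labels:
          cluster: atm-zone1            # distinguishes this Ceph cluster's series
  - job_name: ceph-dic-zone1            # hypothetical job name
    static_configs:
      - targets: ['172.24.12.78:9283']
        labels:
          cluster: dic-zone1
```

With a `cluster` label present, dashboard variables or queries could filter on it, so that each join sees only one osd.0.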