Comment 0 for bug 2020408

Nicolas FERRAO (nicolasfrr) wrote:

Hello,

I have an issue deploying mysql-innodb-cluster on a fresh OpenStack Yoga installation, following this guide: https://docs.openstack.org/project-deploy-guide/charm-deployment-guide/yoga/install-openstack.html

Environment:
- 1 MAAS host (the Juju client is installed here)
- 1 Juju controller
- 3 bare-metal nodes

Juju version: 2.9.42-ubuntu-amd64
MAAS version: 3.3.3
Deployments happen behind the MAAS proxy.
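
(One detail that may matter: the charm fetches the mysql-shell snap as a resource, and behind a proxy that download can fail. The model's proxy settings can be inspected with, for example:

juju model-config | grep -i proxy

to confirm that juju-http-proxy/juju-https-proxy are set and that the internal subnet, here 172.16.22.0/24, is covered by juju-no-proxy. I am noting this as a possible factor, not a confirmed one.)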

I run: juju deploy -n 3 --to lxd:0,lxd:1,lxd:2 --series jammy --channel 8.0/stable mysql-innodb-cluster

juju status --relations then shows:
Model      Controller       Cloud/Region     Version  SLA          Timestamp
openstack  maas-controller  maas-dc/default  2.9.42   unsupported  05:49:13Z

App                   Version  Status   Scale  Charm                 Channel        Rev  Exposed  Message
ceph-osd              17.2.5   blocked  3      ceph-osd              quincy/stable  559  no       Missing relation: monitor
mysql-innodb-cluster  8.0.33   waiting  3      mysql-innodb-cluster  8.0/stable     43   no       Instance not yet in the cluster
nova-compute          25.1.0   blocked  3      nova-compute          yoga/stable    650  no       Missing relations: image, messaging

Unit                     Workload  Agent  Machine  Public address  Ports  Message
ceph-osd/0*              blocked   idle   0        172.16.22.2            Missing relation: monitor
ceph-osd/1               blocked   idle   1        172.16.22.1            Missing relation: monitor
ceph-osd/2               blocked   idle   2        172.16.22.3            Missing relation: monitor
mysql-innodb-cluster/9*  waiting   idle   0/lxd/1  172.16.22.6            Not all instances clustered
mysql-innodb-cluster/10  waiting   idle   1/lxd/1  172.16.22.4            Instance not yet in the cluster
mysql-innodb-cluster/11  waiting   idle   2/lxd/2  172.16.22.7            Not all instances clustered
nova-compute/0*          blocked   idle   0        172.16.22.2            Missing relations: image, messaging
nova-compute/1           blocked   idle   1        172.16.22.1            Missing relations: messaging, image
nova-compute/2           blocked   idle   2        172.16.22.3            Missing relations: image, messaging

Machine  State    Address      Inst id              Series  AZ        Message
0        started  172.16.22.2  BM-3                 jammy   HUB-DC-T  Deployed
0/lxd/1  started  172.16.22.6  juju-e21d11-0-lxd-1  jammy   HUB-DC-T  Container started
1        started  172.16.22.1  BM-2                 jammy   HUB-DC-T  Deployed
1/lxd/1  started  172.16.22.4  juju-e21d11-1-lxd-1  jammy   HUB-DC-T  Container started
2        started  172.16.22.3  BM-4                 jammy   HUB-DC-T  Deployed
2/lxd/2  started  172.16.22.7  juju-e21d11-2-lxd-2  jammy   HUB-DC-T  Container started

Relation provider                 Requirer                          Interface             Type  Message
mysql-innodb-cluster:cluster      mysql-innodb-cluster:cluster      mysql-innodb-cluster  peer
mysql-innodb-cluster:coordinator  mysql-innodb-cluster:coordinator  coordinator           peer
nova-compute:compute-peer         nova-compute:compute-peer         nova                  peer
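
To get the cluster state as MySQL itself reports it (rather than the charm's workload message), the charm ships a cluster-status action; a quick check against the leader, assuming the action is available in the 8.0/stable channel:

juju run-action mysql-innodb-cluster/9 cluster-status --wait

I can attach its output if that is useful.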

Output of juju export-bundle:

series: jammy
applications:
  ceph-osd:
    charm: ceph-osd
    channel: quincy/stable
    revision: 559
    num_units: 3
    to:
    - "0"
    - "1"
    - "2"
    options:
      osd-devices: /dev/sda /dev/sdb /dev/sdc /dev/sdd
    constraints: arch=amd64 tags=compute
    storage:
      bluestore-db: loop,1024M
      bluestore-wal: loop,1024M
      cache-devices: loop,10240M
      osd-devices: loop,1024M
      osd-journals: loop,1024M
  mysql-innodb-cluster:
    charm: mysql-innodb-cluster
    channel: 8.0/stable
    revision: 43
    resources:
      mysql-shell: 0
    num_units: 3
    to:
    - lxd:1
    - lxd:2
    - lxd:0
    constraints: arch=amd64
  nova-compute:
    charm: nova-compute
    channel: yoga/stable
    revision: 650
    num_units: 3
    to:
    - "0"
    - "1"
    - "2"
    options:
      config-flags: default_ephemeral_format=ext4
      enable-live-migration: true
      enable-resize: true
      migration-auth-type: ssh
      openstack-origin: distro
      virt-type: qemu
    constraints: arch=amd64
    storage:
      ephemeral-device: loop,10240M
machines:
  "0":
    constraints: arch=amd64 tags=compute
  "1":
    constraints: arch=amd64 tags=compute
  "2":
    constraints: arch=amd64 tags=compute

When I check the unit logs on 1/lxd/1:
2023-05-23 05:45:05 INFO unit.mysql-innodb-cluster/10.juju-log server.go:316 Invoking reactive handler: reactive/layer_openstack.py:64:default_update_status
2023-05-23 05:45:05 INFO unit.mysql-innodb-cluster/10.juju-log server.go:316 Invoking reactive handler: reactive/layer_openstack.py:82:check_really_is_update_status
2023-05-23 05:45:05 INFO unit.mysql-innodb-cluster/10.juju-log server.go:316 Invoking reactive handler: reactive/layer_openstack.py:93:run_default_update_status
2023-05-23 05:45:05 INFO unit.mysql-innodb-cluster/10.juju-log server.go:316 Invoking reactive handler: reactive/mysql_innodb_cluster_handlers.py:138:check_quorum
2023-05-23 05:45:05 INFO unit.mysql-innodb-cluster/10.juju-log server.go:316 Invoking reactive handler: reactive/mysql_innodb_cluster_handlers.py:240:signal_clustered
2023-05-23 05:45:05 INFO unit.mysql-innodb-cluster/10.juju-log server.go:316 Invoking reactive handler: hooks/relations/mysql-innodb-cluster/peers.py:69:joined:cluster
2023-05-23 05:45:05 INFO unit.mysql-innodb-cluster/10.juju-log server.go:316 Invoking reactive handler: hooks/relations/mysql-innodb-cluster/peers.py:75:changed:cluster
2023-05-23 05:45:05 INFO unit.mysql-innodb-cluster/10.juju-log server.go:316 Invoking reactive handler: hooks/relations/tls-certificates/requires.py:109:broken:certificates
2023-05-23 05:45:05 INFO unit.mysql-innodb-cluster/10.juju-log server.go:316 coordinator.DelayedActionCoordinator Publishing state
2023-05-23 05:45:06 INFO juju.worker.uniter.operation runhook.go:159 ran "update-status" hook (via explicit, bespoke hook script)
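
Nothing in the charm log above looks like an error (these are just update-status cycles), so the join failure is presumably recorded in MySQL's own error log. Assuming the stock Ubuntu mysql-server log location, it can be read from the container with:

juju ssh mysql-innodb-cluster/10 sudo grep -i 'group replication' /var/log/mysql/error.log

I can attach that log as well if it helps.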