[2.1 beta4] LXD container assigned IP address on LXD IP range instead of IP from MAAS DHCP
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Canonical Juju | Incomplete | High | Unassigned |
Bug Description
I deployed openstack-base on 3 arm servers and an amd64 machine. On one of the arm nodes, one of the 3 LXD containers was configured with an IP address from the default LXD range. The other 2 containers were assigned MAAS DHCP addresses as expected, as were all other containers on the remaining hosts in the deployment:
$ juju status
Model Controller Cloud/Region Version
default mycontroller larry 2.1-beta4
App Version Status Scale Charm Store Rev OS Notes
ceph-mon 10.2.3 active 3 ceph-mon jujucharms 6 ubuntu
ceph-osd 10.2.3 active 3 ceph-osd jujucharms 238 ubuntu
ceph-radosgw 10.2.3 active 1 ceph-radosgw jujucharms 245 ubuntu
cinder 9.0.0 active 1 cinder jujucharms 257 ubuntu
cinder-ceph 9.0.0 active 1 cinder-ceph jujucharms 221 ubuntu
glance 13.0.0 active 1 glance jujucharms 253 ubuntu
keystone 10.0.0 active 1 keystone jujucharms 258 ubuntu
mysql 5.6.21-25.8 active 1 percona-cluster jujucharms 246 ubuntu
neutron-api 9.0.0 active 1 neutron-api jujucharms 246 ubuntu
neutron-gateway 9.0.0 active 1 neutron-gateway jujucharms 232 ubuntu
neutron-openvswitch 9.0.0 active 3 neutron-openvswitch jujucharms 238 ubuntu
nova-cloud-
nova-compute 14.0.1 active 3 nova-compute jujucharms 259 ubuntu
ntp unknown 4 ntp jujucharms 0 ubuntu
openstack-dashboard 10.0.0 active 1 openstack-dashboard jujucharms 243 ubuntu
rabbitmq-server 3.5.7 active 1 rabbitmq-server jujucharms 54 ubuntu
Unit Workload Agent Machine Public address Ports Message
ceph-mon/0* active idle 1/lxd/0 10.245.32.93 Unit is ready and clustered
ceph-mon/1 active idle 2/lxd/0 10.245.32.39 Unit is ready and clustered
ceph-mon/2 active idle 3/lxd/0 10.245.32.236 Unit is ready and clustered
ceph-osd/0* active idle 1 10.245.31.66 Unit is ready (1 OSD)
ceph-osd/1 active idle 2 10.245.33.153 Unit is ready (1 OSD)
ceph-osd/2 active idle 3 10.245.31.223 Unit is ready (1 OSD)
ceph-radosgw/0* active idle 0/lxd/0 10.245.33.112 80/tcp Unit is ready
cinder/0* active idle 1/lxd/1 10.245.32.46 8776/tcp Unit is ready
cinder-ceph/0* active idle 10.245.32.46 Unit is ready
glance/0* active idle 2/lxd/1 10.245.31.219 9292/tcp Unit is ready
keystone/0* active idle 3/lxd/1 10.245.32.173 5000/tcp Unit is ready
mysql/0* active idle 0/lxd/1 10.245.32.45 Unit is ready
neutron-api/0* active idle 1/lxd/2 10.245.32.230 9696/tcp Unit is ready
neutron-gateway/0* active idle 0 10.245.32.187 Unit is ready
ntp/0* unknown idle 10.245.32.187
nova-cloud-
nova-compute/0* active idle 1 10.245.31.66 Unit is ready
neutron-
ntp/1 unknown idle 10.245.31.66
nova-compute/1 active idle 2 10.245.33.153 Unit is ready
neutron-
ntp/2 unknown idle 10.245.33.153
nova-compute/2 active idle 3 10.245.31.223 Unit is ready
neutron-
ntp/3 unknown idle 10.245.31.223
openstack-
rabbitmq-server/0* active idle 0/lxd/2 10.245.32.74 5672/tcp Unit is ready
Machine State DNS Inst id Series AZ
0 started 10.245.32.187 84cb43 xenial Production
0/lxd/0 started 10.245.33.112 juju-fea521-0-lxd-0 xenial
0/lxd/1 started 10.245.32.45 juju-fea521-0-lxd-1 xenial
0/lxd/2 started 10.245.32.74 juju-fea521-0-lxd-2 xenial
1 started 10.245.31.66 ffppyr xenial Production
1/lxd/0 started 10.245.32.93 juju-fea521-1-lxd-0 xenial
1/lxd/1 started 10.245.32.46 juju-fea521-1-lxd-1 xenial
1/lxd/2 started 10.245.32.230 juju-fea521-1-lxd-2 xenial
2 started 10.245.33.153 deftrk xenial Production
2/lxd/0 started 10.245.32.39 juju-fea521-2-lxd-0 xenial
2/lxd/1 started 10.245.31.219 juju-fea521-2-lxd-1 xenial
2/lxd/2 started 10.0.0.9 juju-fea521-2-lxd-2 xenial
3 started 10.245.31.223 4y3xan xenial Production
3/lxd/0 started 10.245.32.236 juju-fea521-3-lxd-0 xenial
3/lxd/1 started 10.245.32.173 juju-fea521-3-lxd-1 xenial
3/lxd/2 started 10.245.32.194 juju-fea521-3-lxd-2 xenial
Relation Provides Consumes Type
mon ceph-mon ceph-mon peer
mon ceph-mon ceph-osd regular
mon ceph-mon ceph-radosgw regular
ceph ceph-mon cinder-ceph regular
ceph ceph-mon glance regular
ceph ceph-mon nova-compute regular
cluster ceph-radosgw ceph-radosgw peer
identity-service ceph-radosgw keystone regular
cluster cinder cinder peer
storage-backend cinder cinder-ceph subordinate
image-service cinder glance regular
identity-service cinder keystone regular
shared-db cinder mysql regular
cinder-
amqp cinder rabbitmq-server regular
cluster glance glance peer
identity-service glance keystone regular
shared-db glance mysql regular
image-service glance nova-cloud-
image-service glance nova-compute regular
amqp glance rabbitmq-server regular
cluster keystone keystone peer
shared-db keystone mysql regular
identity-service keystone neutron-api regular
identity-service keystone nova-cloud-
identity-service keystone openstack-dashboard regular
cluster mysql mysql peer
shared-db mysql neutron-api regular
shared-db mysql nova-cloud-
cluster neutron-api neutron-api peer
neutron-plugin-api neutron-api neutron-gateway regular
neutron-plugin-api neutron-api neutron-openvswitch regular
neutron-api neutron-api nova-cloud-
amqp neutron-api rabbitmq-server regular
cluster neutron-gateway neutron-gateway peer
quantum-
juju-info neutron-gateway ntp subordinate
amqp neutron-gateway rabbitmq-server regular
neutron-plugin neutron-openvswitch nova-compute regular
amqp neutron-openvswitch rabbitmq-server regular
cluster nova-cloud-
cloud-compute nova-cloud-
amqp nova-cloud-
neutron-plugin nova-compute neutron-openvswitch subordinate
compute-peer nova-compute nova-compute peer
juju-info nova-compute ntp subordinate
amqp nova-compute rabbitmq-server regular
ntp-peers ntp ntp peer
cluster openstack-dashboard openstack-dashboard peer
cluster rabbitmq-server rabbitmq-server peer
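In the status output above, machine 2/lxd/2 came up with 10.0.0.9 while every other container received a MAAS address in 10.245.0.0/16. A quick way to flag such stray addresses is a subnet-membership check; the sketch below is illustrative (the helper names are not part of Juju or MAAS tooling, and the /16 prefix is an assumption about the MAAS subnet here):

```shell
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
  old_ifs=$IFS; IFS=.
  set -- $1
  IFS=$old_ifs
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

# in_subnet <ip> <network> <prefix-len>: prints "yes" if <ip> falls
# inside <network>/<prefix-len>, "no" otherwise.
in_subnet() {
  ip=$(ip_to_int "$1"); net=$(ip_to_int "$2")
  mask=$(( (0xFFFFFFFF << (32 - $3)) & 0xFFFFFFFF ))
  if [ $(( ip & mask )) -eq $(( net & mask )) ]; then echo yes; else echo no; fi
}

in_subnet 10.245.32.39 10.245.0.0 16   # MAAS-assigned container address -> yes
in_subnet 10.0.0.9     10.245.0.0 16   # stray lxdbr0-style address -> no
```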
This is with Juju 2.1 beta4:
$ juju --version
2.1-beta4-
Attached are logs from both the container and the host. This is the list of logs included: https:/
Changed in juju:
status: New → Triaged
importance: Undecided → High
milestone: none → 2.1.0

Changed in juju:
status: New → Triaged
tags: added: lxd maas maas-provider
We are investigating similar bugs; it would be helpful if you could attach the Juju logs from the controller machine, and from the container and the host where the container came up on lxdbr0. For access to the container you'll probably have to go via `lxc exec juju-xxx-lxd-x bash` to get the logs off the machine. On top of that, could you please attach the MAAS logs from the MAAS server; all of /var/log/maas/*.log would be useful.
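A sketch of how that collection might look, assuming the affected container from the status output above (juju-fea521-2-lxd-2); exact paths and names may differ on your deployment:

```shell
# On the Juju client: capture the controller-side log history.
juju debug-log --replay > controller-debug.log

# On the LXD host: pull the agent logs out of the affected container.
lxc exec juju-fea521-2-lxd-2 -- tar czf - /var/log/juju > container-juju-logs.tgz

# Also grab the host's own Juju logs.
sudo tar czf host-juju-logs.tgz /var/log/juju

# On the MAAS server: collect everything under /var/log/maas/.
sudo tar czf maas-logs.tgz /var/log/maas/
```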