@jmeinel
(1 & 2) The lxd version seems to be consistent between an instance with working lxd and eth0 vs a non-working one with ens3: http://paste.ubuntu.com/24420624/. The kernel versions also look similar: http://paste.ubuntu.com/24420635/.
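For anyone who wants to repeat the comparison, the checks behind those pastes boil down to something like this (a sketch; the exact commands aren't in the pastebins, so treat them as assumptions). Run it on both a working (eth0) and an affected (ens3) instance and diff the output:

```shell
# Kernel version -- compare across the two instances.
uname -a

# lxd daemon and lxc client versions, skipped quietly if lxd isn't installed.
command -v lxd >/dev/null 2>&1 && lxd --version || true
command -v lxc >/dev/null 2>&1 && lxc --version || true
```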
(3) As far as credentials/accounts go ... I was getting successful lxd deploys on aws instances (with the eth0 interface name) all last week, and for all of time before yesterday. Now I can't get an instance to deploy with an ethX interface name at all, it seems.
(4) I can ssh into an affected instance with an ens3 device, run `lxc launch ubuntu:16.04 u1`, and have the container come up with addressability ... but it seems the lxd bridge auto-configures its subnet to a 172.x.x.x range instead of a 10.x.x.x one. Even after that, I still cannot `juju deploy` lxd containers to the affected machine.
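To see (and override) what subnet the bridge picked, something like the following should work. This is a minimal sketch assuming the `lxc network` subcommand (present in lxd >= 2.3; on older lxd 2.0.x packages the equivalent knob was `dpkg-reconfigure -p medium lxd`), and the 10.0.8.1/24 address is an example, not one taken from this bug. It is guarded so it's a no-op on machines without lxd installed:

```shell
# Only meaningful on a host that actually has the lxc client.
if command -v lxc >/dev/null 2>&1; then
    # Show the bridge config lxd auto-generated -- the ipv4.address key
    # reveals whether it landed in 10.x.x.x or 172.x.x.x.
    lxc network show lxdbr0

    # Force the bridge back onto a 10.x /24 (example address, hypothetical).
    lxc network set lxdbr0 ipv4.address 10.0.8.1/24
fi
```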