maas bridge script handles VLAN NICs incorrectly

Bug #1532167 reported by Dimiter Naydenov
This bug affects 6 people
Affects          Status        Importance  Assigned to       Milestone
Canonical Juju   Fix Released  High        Andrew McDermott  —
juju-core        Fix Released  High        Andrew McDermott  —
juju-core 1.25   Fix Released  High        Andrew McDermott  —

Bug Description

In Juju 1.25 and master (soon to be 2.0-alpha1), but specifically *not* in the maas-spaces branch, the add-juju-bridge.py script we use on the MAAS provider does not render the changes to /etc/network/interfaces correctly when multiple VLAN virtual NICs are configured on top of one or more physical NICs.

Here's an example of /e/n/i on a KVM node deployed with MAAS 1.9rc4 through Juju, before the bridge script changes it:

auto eth0
iface eth0 inet static
    gateway 10.20.19.2
    address 10.20.19.103/24
    mtu 1500

auto eth0.100
iface eth0.100 inet static
    address 10.100.19.103/24
    vlan-raw-device eth0
    mtu 1500
    vlan_id 100

auto eth0.250
iface eth0.250 inet static
    address 10.250.19.103/24
    vlan-raw-device eth0
    mtu 1500
    vlan_id 250

auto eth0.50
iface eth0.50 inet static
    address 10.50.19.103/24
    vlan-raw-device eth0
    mtu 1500
    vlan_id 50

dns-nameservers 10.10.19.2
dns-search maas-19

And here is how it looks after the script:

iface eth0 inet manual

auto juju-br0
iface juju-br0 inet static
    bridge_ports eth0
    gateway 10.20.19.2
    address 10.20.19.103/24
    mtu 1500

auto juju-br0.100
iface juju-br0.100 inet static
    address 10.100.19.103/24
    vlan-raw-device eth0
    mtu 1500
    vlan_id 100

auto juju-br0.250
iface juju-br0.250 inet static
    address 10.250.19.103/24
    vlan-raw-device eth0
    mtu 1500
    vlan_id 250

auto juju-br0.50
iface juju-br0.50 inet static
    address 10.50.19.103/24
    vlan-raw-device eth0
    mtu 1500
    vlan_id 50

dns-nameservers 10.10.19.2
dns-search maas-19

This causes ifup errors at boot when the script tries to bring up the modified /e/n/i (e.g. juju-br0.100 cannot be added with eth0 as its raw device, because eth0.100 already exists).
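
To illustrate the failure mode (a hypothetical sketch, not the actual add-juju-bridge.py code): if the rewrite substitutes the bridge name wherever a stanza header begins with the NIC name, the VLAN stanzas get renamed along with the physical NIC:

import re

def naive_rename(eni_text, nic="eth0", bridge="juju-br0"):
    # Replaces 'auto eth0' / 'iface eth0' at the start of a line. The
    # pattern also matches the *prefix* of 'auto eth0.100', which turns
    # it into 'auto juju-br0.100' -- the broken output shown above.
    return re.sub(r"^(auto|iface) %s" % re.escape(nic),
                  lambda m: m.group(1) + " " + bridge,
                  eni_text, flags=re.MULTILINE)

Anchoring the match to the whole device name (e.g. r"^(auto|iface) eth0(?=\s)") would leave the eth0.100, eth0.250 and eth0.50 stanzas alone.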

In comparison, here is the same /e/n/i after it got modified by the improved bridge script in the maas-spaces feature branch:

iface eth0 inet manual

auto br-eth0
iface br-eth0 inet static
    gateway 10.20.19.2
    address 10.20.19.103/24
    mtu 1500
    bridge_ports eth0

iface eth0.100 inet manual
    address 10.100.19.103/24
    vlan-raw-device eth0
    mtu 1500
    vlan_id 100

auto br-eth0.100
iface br-eth0.100 inet static
    address 10.100.19.103/24
    mtu 1500
    bridge_ports eth0.100

iface eth0.250 inet manual
    address 10.250.19.103/24
    vlan-raw-device eth0
    mtu 1500
    vlan_id 250

auto br-eth0.250
iface br-eth0.250 inet static
    address 10.250.19.103/24
    mtu 1500
    bridge_ports eth0.250

iface eth0.50 inet manual
    address 10.50.19.103/24
    vlan-raw-device eth0
    mtu 1500
    vlan_id 50

auto br-eth0.50
iface br-eth0.50 inet static
    address 10.50.19.103/24
    mtu 1500
    bridge_ports eth0.50

dns-nameservers 10.10.19.2
dns-search maas-19

The new script has an issue with multiple "dns-*" options, but I'll file a separate bug for it.
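
Conceptually, the maas-spaces script bridges each interface individually rather than renaming anything. A rough Python sketch of the per-stanza transformation (an assumed shape, not the actual implementation):

def bridge_stanza(name, options, prefix="br-"):
    # Demote the original device to 'inet manual', keeping its VLAN
    # bookkeeping (vlan-raw-device, vlan_id), then add a bridge stanza
    # that takes over the L3 options and enslaves the device.
    raw = ["iface %s inet manual" % name]
    raw += ["    %s" % o for o in options]
    br = ["auto %s%s" % (prefix, name),
          "iface %s%s inet static" % (prefix, name)]
    br += ["    %s" % o for o in options if not o.startswith("vlan")]
    br.append("    bridge_ports %s" % name)
    return "\n".join(raw + [""] + br)

For example, bridge_stanza("eth0.100", ["address 10.100.19.103/24", "vlan-raw-device eth0", "mtu 1500", "vlan_id 100"]) reproduces the eth0.100/br-eth0.100 pair shown above.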

no longer affects: juju-core/2.0
Changed in juju-core:
milestone: 1.25.3 → 2.0-alpha2
Dimiter Naydenov (dimitern) wrote :

I think the minimal fix we need for 1.25 is to have the following /e/n/i after the script:

iface eth0 inet manual

auto juju-br0
iface juju-br0 inet static
    gateway 10.20.19.2
    address 10.20.19.103/24
    mtu 1500
    bridge_ports eth0

auto eth0.100
iface eth0.100 inet static
    address 10.100.19.103/24
    vlan-raw-device eth0
    mtu 1500
    vlan_id 100

auto eth0.250
iface eth0.250 inet static
    address 10.250.19.103/24
    vlan-raw-device eth0
    mtu 1500
    vlan_id 250

auto eth0.50
iface eth0.50 inet static
    address 10.50.19.103/24
    vlan-raw-device eth0
    mtu 1500
    vlan_id 50

dns-nameservers 10.10.19.2
dns-search maas-19
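
Put differently, the minimal fix only needs to recognise VLAN stanzas and leave them untouched while bridging the physical NIC. A hypothetical predicate for that check:

def is_vlan_stanza(name, options):
    # VLAN virtual NICs are named like 'eth0.100' and/or carry
    # vlan-raw-device / vlan_id options; these must keep their
    # original device name when the bridge is introduced.
    return ("." in name or
            any(o.split()[0] in ("vlan-raw-device", "vlan_id")
                for o in options))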

Andrew McDermott (frobware) wrote :

If we use the script from maas-spaces, and the update in https://github.com/frobware/juju/tree/maas-space-bridge-party, you'll now get:

iface eth0 inet manual

auto br-eth0
iface br-eth0 inet static
    gateway 10.20.19.2
    address 10.20.19.103/24
    mtu 1500
    bridge_ports eth0

iface eth0.100 inet manual
    address 10.100.19.103/24
    vlan-raw-device eth0
    mtu 1500
    vlan_id 100

auto br-eth0.100
iface br-eth0.100 inet static
    address 10.100.19.103/24
    mtu 1500
    bridge_ports eth0.100

iface eth0.250 inet manual
    address 10.250.19.103/24
    vlan-raw-device eth0
    mtu 1500
    vlan_id 250

auto br-eth0.250
iface br-eth0.250 inet static
    address 10.250.19.103/24
    mtu 1500
    bridge_ports eth0.250

iface eth0.50 inet manual
    address 10.50.19.103/24
    vlan-raw-device eth0
    mtu 1500
    vlan_id 50
    dns-nameservers 10.10.19.2
    dns-search maas-19

auto br-eth0.50
iface br-eth0.50 inet static
    address 10.50.19.103/24
    mtu 1500
    dns-nameservers 10.10.19.2
    dns-search maas-19
    bridge_ports eth0.50

Andrew McDermott (frobware) wrote :

And obviously prefixed with juju-br for 1.25:

iface eth0 inet manual

auto juju-br-eth0
iface juju-br-eth0 inet static
    gateway 10.20.19.2
    address 10.20.19.103/24
    mtu 1500
    bridge_ports eth0

iface eth0.100 inet manual
    address 10.100.19.103/24
    vlan-raw-device eth0
    mtu 1500
    vlan_id 100

auto juju-br-eth0.100
iface juju-br-eth0.100 inet static
    address 10.100.19.103/24
    mtu 1500
    bridge_ports eth0.100

iface eth0.250 inet manual
    address 10.250.19.103/24
    vlan-raw-device eth0
    mtu 1500
    vlan_id 250

auto juju-br-eth0.250
iface juju-br-eth0.250 inet static
    address 10.250.19.103/24
    mtu 1500
    bridge_ports eth0.250

iface eth0.50 inet manual
    address 10.50.19.103/24
    vlan-raw-device eth0
    mtu 1500
    vlan_id 50
    dns-nameservers 10.10.19.2
    dns-search maas-19

auto juju-br-eth0.50
iface juju-br-eth0.50 inet static
    address 10.50.19.103/24
    mtu 1500
    dns-nameservers 10.10.19.2
    dns-search maas-19
    bridge_ports eth0.50

Mike Deats (mikedeats) wrote :

I am moving this comment from a different bug (https://bugs.launchpad.net/juju-core/+bug/1516891) because it seems to have a slightly different signature when using bonded interfaces with VLANs. It appears that Juju will create a bridge interface for every entry in /e/n/i that contains "bond0", but every bridge is named "juju-br0" (this is with MAAS 1.9 + Juju 1.25.2). This makes all the networks off of bond0 stop working, which causes the Juju charm deployment to fail. The same thing happens when attempting a "juju bootstrap".

Fortunately Juju seems to leave bond1 alone, so I can still SSH into the node through one of those interfaces and see what kind of mess it made.

I've attached my /e/n/i file with how it appears after Juju messes with it.

Mike Deats (mikedeats) wrote :

After numerous attempts, the only way I could get Juju to play nice with MAAS is to set "disable-network-management=true". Obviously this means that Juju cannot create LXC containers on any of the MAAS nodes, which makes it much harder to deploy complex charms like OpenStack when using more sophisticated network designs with bonding and VLANs.
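
For reference, that option goes into environments.yaml under the MAAS environment definition, e.g. (a sketch; the environment name is a placeholder):

environments:
  maas:
    type: maas
    disable-network-management: true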

Andrew McDermott (frobware) wrote :

@mike - from comment #4 is it possible to get the original /e/n/i file?

Juju copies the original before making any changes; the original should be in /etc/network. (I realise that it is a little late in the day for this request.)

Mike Deats (mikedeats) wrote :

@Andrew - No problem! Here is the "before" file. I didn't attach it before because...well I forgot. :-P

Andrew McDermott (frobware) wrote :

@mike - thanks. I'll add the originals to our unit tests to ensure we don't regress.

Andrew McDermott (frobware) wrote :

@mike - I ran your original through the script and attached the /e/n/i it would generate. I would appreciate it if you could cast an eye over it.

Andrew McDermott (frobware) wrote :

@mike - actually, to remain 100% compatible the bridge name would remain as 'juju-br0'.

Mike Deats (mikedeats) wrote :

@andrew - Yes, that file looks much better. But it probably shouldn't leave the IP/gateway attached to bond0 once it creates the bridge? Also, would this allow LXC containers to access the other VLANs? I'm not that familiar with LXC so I am unsure if you need a bridge for every interface you want the containers to have access to. I know you have to create a bridge interface for each VLAN interface to give normal KVM/QEMU VMs access to them. I would assume Juju would need to do something similar to create containers that can access the VLANs, though that might be a more complex feature than a bug.

Andrew McDermott (frobware) wrote :

@mike - your assertions are correct with regard to multiple bridges. However, that work is only happening in Juju 2.0 -- it's currently work in progress too so not something you can try "today".

Andrew McDermott (frobware) wrote :

@mike - if I were to build Juju binaries based on this change, would you be willing to test them? You would have to use --upload-tools when bootstrapping.

Andrew McDermott (frobware) wrote :

@mike - in my experiments not removing options from the interface (e.g., IP/gateway) doesn't seem to cause any problems.

Mike Deats (mikedeats) wrote :

@Andrew - I could definitely give it a try if you have a patched version of Juju. Might be a day or so until I can do a teardown to try a new bootstrap, but I'd be happy to give it a shot.

Andrew McDermott (frobware) wrote :

@mike - a pre-built binary for this fix:

  http://178.62.20.154/~aim/juju-1.25-lp1532167.tar.gz

Mike Deats (mikedeats) wrote :

@andrew - The new Juju tools seem to work fine for bootstrapping and deploying charms. I tested it using the same network environment I originally posted with my comment and it worked great!

Mike Deats (mikedeats) wrote :

@andrew - Well, spoke a little too soon. It does work for bootstrapping, and it does work for deploying charms directly, but any LXC containers Juju attempts to create don't seem to work. It looks like they fail trying to initialize the network interface.

Andrew McDermott (frobware) wrote :

@mike - please could you send me your /etc/network/interfaces (original). My sanity test with just bridging eth0 seems to work.

Andrew McDermott (frobware) wrote :

@mike - please could you also cut and paste the output from `juju status` for the case where a deployed container fails.

Andrew McDermott (frobware) wrote :

@mike - please could you try dropping 802.3ad from your bond configuration and using the default.

I believe I see a problem when using that mode; the clone of ubuntu-15.10-server-cloudimg-amd64-root.tar.gz seems to hang. And running a wget on the latest kernel tarball (from kernel.org) hangs part way through the download too.

Andrew McDermott (frobware) wrote :

Note against comment #22: the clone of the Ubuntu image is part of the LXC instantiation.

Mike Deats (mikedeats) wrote :

@andrew - Yes, I see that behavior as well. We were also having some network issues, which didn't help. But I removed all the VLANs and bonds, and I still get a hang at the same point, even when falling back to the 1.25.3 version of Juju. I'm doing a clean wipe and rebuild today to see if there is still an issue.

Andrew McDermott (frobware) wrote :

@mike - in the case where you have no VLANs or bonds, could you attach /e/n/interfaces? This case is very simple and I have tried and tested it multiple times. Once juju has bootstrapped, can you verify general network connectivity with:

 wget https://cdn.kernel.org/pub/linux/kernel/v3.x/linux-3.18.26.tar.xz

before you try adding a container.

Mike Deats (mikedeats) wrote :

@andrew - I finally got LXC containers working again using your patched Juju. The issue is not how the bonds are configured (802.3ad works fine). The problem is that when you are using VLAN tagging (like I am) you must attach the juju-br0 bridge to a VLAN interface (e.g. eth0.123 or bond0.456) so that both the LXC machine and the MAAS server (providing DHCP) are sending tagged traffic. Otherwise the LXC cannot pull DHCP, and the deploy fails.

I reset my environment to use only eth0 set to the default VLAN. Juju could bootstrap just fine, and created juju-br0 attached to eth0. However it still could not create any LXC containers. They would hang trying to get a DHCP address, and eventually give up.

However if I manually create a VLAN interface for the default VLAN and attach the juju-br0 bridge to it, LXC containers start up normally, both when using the bonded interface and plain eth0. So I rewrote the /e/n/i file to create a bond0.1, then attached the juju-br0 bridge to that. That started working.

Anyway, it's an interesting quirk of the networking regarding VLANs.
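
An illustrative /e/n/i fragment of that workaround (the address is a placeholder, and this mirrors what is described above rather than anything Juju writes itself):

auto bond0.1
iface bond0.1 inet manual
    vlan-raw-device bond0
    vlan_id 1

auto juju-br0
iface juju-br0 inet static
    address 10.20.19.103/24
    bridge_ports bond0.1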

Andrew McDermott (frobware) wrote :

@mike - can I assume from your first paragraph that everything is OK? I got confused when you said "however, it still could not create any LXC containers". I was trying to understand whether Juju is doing the right thing, given a valid VLAN setup.

Mike Deats (mikedeats) wrote :

@andrew - yes it appears to be working correctly, and it has nothing to do with bonds or the 802.3ad setting. The issue was stemming from a misconfiguration between the MAAS server and the LXC container Juju was trying to create. Seems the MAAS server was mis-tagging the VLAN traffic.

Changed in juju-core:
status: Triaged → In Progress
assignee: nobody → Andrew McDermott (frobware)
Curtis Hovey (sinzui)
Changed in juju-core:
milestone: 2.0-alpha2 → 2.0-beta1
Mark Brown (mstevenbrown) wrote :

This defect is blocking at least 2 OpenStack POCs on Power. Any way to increase the priority?

Cheryl Jennings (cherylj) wrote :

The fix has been tested and will be released with 1.25.4. In the meantime, I can give you a binary with the fix for testing if you'd like.

Changed in juju-core:
status: In Progress → Fix Committed
rory schramm (roryschramm) wrote :

Okay,

I installed the 1.25.4 fix I received from Doug Sikora and deployed using juju bootstrap --upload-tools. Juju now creates a bridge only for the MAAS PXE network, which fixes the bad config for the interfaces I was having. However, when I deploy the OpenStack bundle I've been using (which worked with Juju 1.24.7), the cluster fails to get deployed. I'm not sure if this is related to this fix or not. A lot of the LXC container agents are hanging on waiting for agent initialization to finish.

Below is output from juju status. Are there any other logs that you would like to see?

root@maas:~# juju status --format tabular
[Services]
NAME                   STATUS   EXPOSED  CHARM
ceilometer             waiting  false    cs:trusty/ceilometer-17
ceilometer-agent                false    cs:trusty/ceilometer-agent-13
ceph                   blocked  false    cs:trusty/ceph-43
ceph-osd               waiting  false    cs:trusty/ceph-osd-14
ceph-radosgw           blocked  false    cs:trusty/ceph-radosgw-19
cinder                 waiting  false    cs:trusty/cinder-34
cinder-ceph                     false    cs:trusty/cinder-ceph-16
glance                 waiting  false    cs:trusty/glance-30
hacluster-cinder                false    cs:trusty/hacluster-27
hacluster-glance                false    cs:trusty/hacluster-27
hacluster-horizon               false    cs:trusty/hacluster-27
hacluster-keystone              false    cs:trusty/hacluster-27
hacluster-neutron               false    cs:trusty/hacluster-27
hacluster-nova                  false    cs:trusty/hacluster-27
hacluster-pxc                   false    cs:trusty/hacluster-27
hacluster-radosgw               false    cs:trusty/hacluster-27
keystone               unknown  false    cs:trusty/keystone-33
memcached              unknown  false    cs:trusty/memcached-11
mongodb                unknown  false    cs:trusty/mongodb-33
neutron-api            waiting  false    cs:trusty/neutron-api-23
neutron-gateway        waiting  false    cs:trusty/neutron-gateway-9
neutron-openvswitch             false    cs:trusty/neutron-openvswitch-15
nodes-api              unknown  false    cs:trusty/ubuntu-5
nodes-compute          unknown  false    cs:trusty/ubuntu-5
nodes-syslog           unknown  false    cs:trusty/ubuntu-5
nova-cloud-controller  unknown  false    cs:trusty/nova-cloud-controller-66
nova-compute           waiting  false    cs:trusty/nova-compute-36
ntp                             false    cs:trusty/ntp-14
openstack-dashboard    unknown  false    cs:trusty/openstack-dashboard-21
percona-cluster        unknown  false    cs:trusty/percona-cluster-32
rabbitmq-server        unknown  false    cs:trusty/rabbitmq-server-43
rsyslog                unknown  false    cs:trusty/rsyslog-9
rsyslog-forwarder               false    cs:trusty/rsyslog-forwarder-1

[Units]
ID            WORKLOAD-STATE  AGENT-STATE  VERSION   MACHINE  PORTS     PUBLIC-ADDRESS  MESSAGE
ceilometer/0  waiting         idle         1.25.4.1  2/lxc/0  8777/tcp  10.7.53.57      Incomplete relations: messaging, identity
ceilometer/1  unknown         allocating             3/lxc/0                            ...

rory schramm (roryschramm) wrote :

Below is a link to a formatted version of the above juju status output:

http://paste.ubuntu.com/15027236/

Cheryl Jennings (cherylj) wrote :

@roryschramm - It could be related, but please open up a separate bug for it so we can track it.

Once you have the bug open, please include:
From at least one of the machines hosting units that are waiting:
- /var/log/juju/machine-<id>.log
- /var/log/juju/unit-<id>.log

From the bootstrap node:
- /var/log/juju/machine-0.log

rory schramm (roryschramm) wrote :

@Cheryl

That issue appears to be unrelated. I just redeployed OpenStack and it did not happen again.

rory schramm (roryschramm) wrote :

Was there a change in the verbosity of juju debug-log?

I'm not seeing any log entries when my service units are installing apt packages. Before I would see all the apt-get install output in the debug log and I'm not seeing that at all anymore. It doesn't appear to be just package installation either.

Rory


Dimiter Naydenov (dimitern) wrote :

@rory can we consider the bug fixed then? Please confirm.

Curtis Hovey (sinzui)
Changed in juju-core:
status: Fix Committed → Fix Released
rory schramm (roryschramm) wrote :

Hi Dimiter, that fixed the bug. However, what repo is the 1.25.4 release in? I'm having issues deploying to Power right now because it can't find the juju 1.25.4 packages in a repo. See https://bugs.launchpad.net/curtin/+bug/1523779 for details on the Power issues.

I've looked at both the devel and stable PPAs and the latest they have is 1.25.3.

Rory

Andrew McDermott (frobware) wrote :

@roryschramm 1.25.4 has not been released yet.

rory schramm (roryschramm) wrote :

How do I get juju to push the pre-release 1.25.4 code I'm using so that I can deploy to IBM Power? I can deploy to x86 just fine, since I deployed the juju environment via juju bootstrap --upload-tools, so the bootstrap nodes have the x86 code for 1.25.4. However, I have no way of getting juju to push 1.25.4 to the Power nodes.

rory schramm (roryschramm) wrote :

I saw that there was a release for juju-core 1.25.4. However, I can't seem to find it in any repos.

Is there a specific repo I need to use?

I'm currently using deb http://ppa.launchpad.net/juju/stable/ubuntu trusty main.

I also searched the devel repo but didn't see it there either.

Cheryl Jennings (cherylj) wrote :

@roryschramm - if you add the following in your environments.yaml, the environment will be bootstrapped with the 1.25.4 tools:

        agent-stream: "proposed"
        agent-version: "1.25.4"
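
For context, those keys sit alongside the rest of the environment definition, e.g. (a sketch; 'maas' is a placeholder environment name):

environments:
  maas:
    type: maas
    agent-stream: "proposed"
    agent-version: "1.25.4"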

affects: juju-core → juju
Changed in juju:
milestone: 2.0-beta1 → none
milestone: none → 2.0-beta1
Changed in juju-core:
assignee: nobody → Andrew McDermott (frobware)
importance: Undecided → High
status: New → Fix Released