Activity log for bug #1337873

Date Who What changed Old value New value Message
2014-07-04 14:11:00 Rafael David Tinoco bug added bug
2014-07-04 14:11:08 Rafael David Tinoco ifupdown (Ubuntu): status New In Progress
2014-07-04 14:11:12 Rafael David Tinoco ifupdown (Ubuntu): assignee Rafael David Tinoco (inaddy)
2014-07-04 14:16:34 Rafael David Tinoco attachment added lp1337873.sh https://bugs.launchpad.net/ubuntu/+source/ifupdown/+bug/1337873/+attachment/4145591/+files/lp1337873.sh
2014-07-04 14:23:15 Rafael David Tinoco summary bonding initialization problems caused by race condition Precise, Trusty, Utopic - bonding initialization problems caused by race condition
2014-07-04 14:25:31 Rafael David Tinoco bug added subscriber Keiichi KII
2014-07-04 14:26:08 Rafael David Tinoco bug added subscriber David Douglas
2014-07-04 14:26:22 Rafael David Tinoco bug added subscriber Tom Zhou
2014-07-04 17:30:32 Rafael David Tinoco summary Precise, Trusty, Utopic - bonding initialization problems caused by race condition Precise, Trusty, Utopic - ifupdown initialization problems caused by race condition
2014-07-04 18:04:58 Rafael David Tinoco bug watch added http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=753755
2014-07-04 18:04:58 Rafael David Tinoco attachment added ifupdown_0.7.47.2ubuntu4.2~lp1337873.diff https://bugs.launchpad.net/ubuntu/+source/ifupdown/+bug/1337873/+attachment/4145680/+files/ifupdown_0.7.47.2ubuntu4.2%7Elp1337873.diff
2014-07-04 20:20:50 Ubuntu Foundations Team Bug Bot tags patch
2014-07-04 20:20:58 Ubuntu Foundations Team Bug Bot bug added subscriber Ubuntu Sponsors Team
2014-07-09 10:51:32 Esa Peuha bug task added ifupdown (Debian)
2014-07-09 22:23:54 Bug Watch Updater ifupdown (Debian): status Unknown New
2014-07-30 14:55:43 Rafael David Tinoco attachment removed ifupdown_0.7.47.2ubuntu4.2~lp1337873.diff https://bugs.launchpad.net/ubuntu/+source/ifupdown/+bug/1337873/+attachment/4145680/+files/ifupdown_0.7.47.2ubuntu4.2%7Elp1337873.diff
2014-09-09 13:04:26 Rafael David Tinoco description It was brought to my attention (by others) that ifupdown runs into race conditions on some specific cases. [Impact] When trying to deploy many servers at once (higher chances of happening) or from time-to-time, like any other intermittent race-condition. Interfaces are not brought up like they should and this has a big impact for servers that cannot rely on network start scripts. The problem is caused by a race condition when init(upstart) starts up network interfaces in parallel. [Test Case] Use attached script to reproduce the error (it might take some hours, in a single virtual machine, for the error to occur). (example 1) *** sequence to trigger race-condition *** (a) ifup eth0 (b) ifup -a for eth0 ----------------------------------------------------------------- 1-1. Lock ifstate.lock file.                                   1-1. Wait for locking ifstate.lock                                       file. 1-2. Read ifstate file to check      the target NIC. 1-3. close(=release) ifstate.lock      file. 1-4. Judge that the target NIC      isn't processed.                                   1-2. Read ifstate file to check                                        the target NIC.                                   1-3. close(=release) ifstate.lock                                        file.                                   1-4. Judge that the target NIC                                        isn't processed. 2. Lock and update ifstate file.    Release the lock.                                   2. Lock and update ifstate file.                                      Release the lock. (example 2) Bonding device using eth0. ifenslave for eth0 is also executed in parallel, eth0 remains down. *** sequence to trigger race-condition *** (a) ifenslave of eth0 (b) ifenslave of eth0 ------------------------------------------------------------------ 3. Execute ifenslave of eth0. 3. Execute ifenslave of eth0. 4. Link down the target NIC. 5. Write NIC id to    /sys/class/net/bond0/bonding /slaves then NIC gets up                                   4. Link down the target NIC.                                   5. Fails to write NIC id to                                      /sys/class/net/bond0/bonding/ slaves it is already written. (example 3) bonding is not set to active-backup as defined in config file: When the init(upstart) executes "if-pre-up.d/ifenslave" script and "if-pre-up.d/vlan" script for bond0 device in parallel, the "if-pre-up.d/ifenslave" script fails to change the bonding mode with a error message, "bonding: unable to update mode of bond0 because interface is up.". *** sequence to trigger race-condition *** (a)ifup bond0 (b)ifup -a ----------------------------------------------------------------------- 1. Update statefile about bond0.                                   1. Does nothing about bond0                                      because statefile is already                                      updated about it. 2. ifenslave::setup_master()    sysfs_change_down mode 1    and link down bond0.                                   2. Link up bond0 by the vlan                                      script on the processing                                      for linking up bond0.201(*1). 3. "echo 1 > .../mode" fails. [ /etc/network/if-pre-up.d/vlan ] 46 if [ -n "$IF_VLAN_RAW_DEVICE" ] && [ ! -d /sys/class/net/$IFACE ]; then 47 if [ ! -x /sbin/vconfig ]; then 48 exit 0 49 fi 50 if ! 
ip link show dev "$IF_VLAN_RAW_DEVICE" > /dev/null; then 51 echo "$IF_VLAN_RAW_DEVICE does not exist, unable to create $IFACE" 52 exit 1 53 fi 54 ip link set up dev $IF_VLAN_RAW_DEVICE <-- (*1). 55 vconfig add $IF_VLAN_RAW_DEVICE $VLANID 56 fi [Regression Potential] * Attaching proposed patch (for upstream as well) and describing potential later on today. [Other Info] Example: [ /etc/network/interfaces ] auto lo iface lo inet loopback auto eth0 iface eth0 inet manual  bond-master bond0 auto eth1 iface eth1 inet manual  bond-master bond0 auto bond0 iface bond0 inet dhcp  bond-slaves eth0 eth1  hwaddress 11:22:33:44:55:66  bond-primary eth0  bond-mode 1  bond-miimon 100  bond-updelay 200  bond-downdelay 200 auto bond0.201 iface bond0.201 inet dhcp  hwaddress 11:22:33:44:55:66  vlan-raw-device bond0 ... auto bond0.205 iface bond0.205 inet dhcp  hwaddress 11:22:33:44:55:66  vlan-raw-device bond0 It was brought to my attention (by others) that ifupdown runs into race conditions on some specific cases. [Impact] When trying to deploy many servers at once (higher chances of happening) or from time-to-time, like any other intermittent race-condition. Interfaces are not brought up like they should and this has a big impact for servers that cannot rely on network start scripts. The problem is caused by a race condition when init(upstart) starts up network interfaces in parallel. [Test Case] Use attached script to reproduce the error (it might take some hours, in a single virtual machine, for the error to occur). * please consider my bonding examples are using eth1 and eth2 as slave interfaces. ifupdown some race conditions explained bellow: !!!! case 1) (a) ifup eth0 (b) ifup -a for eth0 ----------------------------------------------------------------- 1-1. Lock ifstate.lock file. 1-1. Wait for locking ifstate.lock file. 1-2. Read ifstate file to check the target NIC. 1-3. close(=release) ifstate.lock file. 1-4. Judge that the target NIC isn't processed. 1-2. Read ifstate file to check the target NIC. 1-3. close(=release) ifstate.lock file. 1-4. Judge that the target NIC isn't processed. 2. Lock and update ifstate file. Release the lock. 2. Lock and update ifstate file. Release the lock. !!! !!! case 2) (a) ifenslave of eth0 (b) ifenslave of eth0 ------------------------------------------------------------------ 3. Execute ifenslave of eth0. 3. Execute ifenslave of eth0. 4. Link down the target NIC. 5. Write NIC id to /sys/class/net/bond0/bonding /slaves then NIC gets up 4. Link down the target NIC. 5. Fails to write NIC id to /sys/class/net/bond0/bonding/ slaves it is already written. !!!
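For illustration of case 3 above: the bonding driver refuses a mode change while bond0 is up, which is exactly what the quoted vlan hook provokes by running "ip link set up dev $IF_VLAN_RAW_DEVICE" in parallel. A minimal sketch (not part of ifupdown or ifenslave; the bond name and message wording are assumed for the example) of the state check that would avoid the kernel error:
"""
# sketch only: the kernel rejects a bonding mode change while the
# bond is up, so check the state before writing to sysfs
BOND=bond0
if [ "$(cat /sys/class/net/$BOND/operstate 2>/dev/null)" = "up" ]; then
    echo "refusing to change mode: $BOND is up" >&2
    exit 1
fi
echo 1 > "/sys/class/net/$BOND/bonding/mode"   # 1 = active-backup
"""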
2014-09-09 13:04:43 Rafael David Tinoco description It was brought to my attention (by others) that ifupdown runs into race conditions on some specific cases. [Impact] When trying to deploy many servers at once (higher chances of happening) or from time-to-time, like any other intermittent race-condition. Interfaces are not brought up like they should and this has a big impact for servers that cannot rely on network start scripts. The problem is caused by a race condition when init(upstart) starts up network interfaces in parallel. [Test Case] Use attached script to reproduce the error (it might take some hours, in a single virtual machine, for the error to occur). * please consider my bonding examples are using eth1 and eth2 as slave interfaces. ifupdown some race conditions explained bellow: !!!! case 1) (a) ifup eth0 (b) ifup -a for eth0 ----------------------------------------------------------------- 1-1. Lock ifstate.lock file. 1-1. Wait for locking ifstate.lock file. 1-2. Read ifstate file to check the target NIC. 1-3. close(=release) ifstate.lock file. 1-4. Judge that the target NIC isn't processed. 1-2. Read ifstate file to check the target NIC. 1-3. close(=release) ifstate.lock file. 1-4. Judge that the target NIC isn't processed. 2. Lock and update ifstate file. Release the lock. 2. Lock and update ifstate file. Release the lock. !!! !!! case 2) (a) ifenslave of eth0 (b) ifenslave of eth0 ------------------------------------------------------------------ 3. Execute ifenslave of eth0. 3. Execute ifenslave of eth0. 4. Link down the target NIC. 5. Write NIC id to /sys/class/net/bond0/bonding /slaves then NIC gets up 4. Link down the target NIC. 5. Fails to write NIC id to /sys/class/net/bond0/bonding/ slaves it is already written. !!! It was brought to my attention (by others) that ifupdown runs into race conditions on some specific cases. [Impact] When trying to deploy many servers at once (higher chances of happening) or from time-to-time, like any other intermittent race-condition. Interfaces are not brought up like they should and this has a big impact for servers that cannot rely on network start scripts. The problem is caused by a race condition when init(upstart) starts up network interfaces in parallel. [Test Case] Use attached script to reproduce the error (it might take some hours, in a single virtual machine, for the error to occur). * please consider my bonding examples are using eth1 and eth2 as slave  interfaces. ifupdown some race conditions explained bellow: !!!! case 1) (a) ifup eth0 (b) ifup -a for eth0 ----------------------------------------------------------------- 1-1. Lock ifstate.lock file.                                   1-1. Wait for locking ifstate.lock                                       file. 1-2. Read ifstate file to check      the target NIC. 1-3. close(=release) ifstate.lock      file. 1-4. Judge that the target NIC      isn't processed.                                   1-2. Read ifstate file to check                                        the target NIC.                                   1-3. close(=release) ifstate.lock                                        file.                                   1-4. Judge that the target NIC                                        isn't processed. 2. Lock and update ifstate file.    Release the lock.                                   2. Lock and update ifstate file.                                      Release the lock. !!! !!! 
case 2) (a) ifenslave of eth0 (b) ifenslave of eth0 ------------------------------------------------------------------ 3. Execute ifenslave of eth0. 3. Execute ifenslave of eth0. 4. Link down the target NIC. 5. Write NIC id to    /sys/class/net/bond0/bonding    /slaves, then the NIC gets up.                                   4. Link down the target NIC.                                   5. Fails to write NIC id to                                      /sys/class/net/bond0/bonding/                                      slaves because it is already                                      written. !!!
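The root cause of case 1 is that ifstate.lock is released between the check and the update, so both callers conclude the NIC is unconfigured. As a minimal sketch of the fix direction only (not the actual ifupdown patch attached to this bug; the paths and the flock(1) usage are assumptions), holding one exclusive lock across the whole read-check-update closes the window:
"""
# sketch: serialize the check and the update under a single lock
IFACE="$1"
STATEDIR=/run/network                      # path assumed for the sketch
exec 9> "$STATEDIR/ifstate.lock"
flock -x 9                                 # held until fd 9 is closed
if grep -qx "$IFACE=$IFACE" "$STATEDIR/ifstate" 2>/dev/null; then
    exit 0                                 # already configured, nothing to do
fi
# ... configure the interface here, still under the lock ...
echo "$IFACE=$IFACE" >> "$STATEDIR/ifstate"
"""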
2014-09-09 20:25:29 Rafael David Tinoco description It was brought to my attention (by others) that ifupdown runs into race conditions on some specific cases. [Impact] When trying to deploy many servers at once (higher chances of happening) or from time-to-time, like any other intermittent race-condition. Interfaces are not brought up like they should and this has a big impact for servers that cannot rely on network start scripts. The problem is caused by a race condition when init(upstart) starts up network interfaces in parallel. [Test Case] Use attached script to reproduce the error (it might take some hours, in a single virtual machine, for the error to occur). * please consider my bonding examples are using eth1 and eth2 as slave  interfaces. ifupdown some race conditions explained bellow: !!!! case 1) (a) ifup eth0 (b) ifup -a for eth0 ----------------------------------------------------------------- 1-1. Lock ifstate.lock file.                                   1-1. Wait for locking ifstate.lock                                       file. 1-2. Read ifstate file to check      the target NIC. 1-3. close(=release) ifstate.lock      file. 1-4. Judge that the target NIC      isn't processed.                                   1-2. Read ifstate file to check                                        the target NIC.                                   1-3. close(=release) ifstate.lock                                        file.                                   1-4. Judge that the target NIC                                        isn't processed. 2. Lock and update ifstate file.    Release the lock.                                   2. Lock and update ifstate file.                                      Release the lock. !!! !!! case 2) (a) ifenslave of eth0 (b) ifenslave of eth0 ------------------------------------------------------------------ 3. Execute ifenslave of eth0. 3. Execute ifenslave of eth0. 4. Link down the target NIC. 5. Write NIC id to    /sys/class/net/bond0/bonding    /slaves then NIC gets up                                   4. Link down the target NIC.                                   5. Fails to write NIC id to                                      /sys/class/net/bond0/bonding/                                      slaves it is already written. !!! * please consider my bonding examples are using eth1 and eth2 as slave interfaces. ifupdown some race conditions explained bellow. ifenslave does not behave well with sysv networking and upstart network-interface scripts running together. !!!! case 1) (a) ifup eth0 (b) ifup -a for eth0 ----------------------------------------------------------------- 1-1. Lock ifstate.lock file. 1-1. Wait for locking ifstate.lock file. 1-2. Read ifstate file to check the target NIC. 1-3. close(=release) ifstate.lock file. 1-4. Judge that the target NIC isn't processed. 1-2. Read ifstate file to check the target NIC. 1-3. close(=release) ifstate.lock file. 1-4. Judge that the target NIC isn't processed. 2. Lock and update ifstate file. Release the lock. 2. Lock and update ifstate file. Release the lock. !!! to be explained !!! case 2) (a) ifenslave of eth0 (b) ifenslave of eth0 ------------------------------------------------------------------ 3. Execute ifenslave of eth0. 3. Execute ifenslave of eth0. 4. Link down the target NIC. 5. Write NIC id to /sys/class/net/bond0/bonding /slaves then NIC gets up 4. Link down the target NIC. 5. Fails to write NIC id to /sys/class/net/bond0/bonding/ slaves it is already written. !!! 
##################################################################### #### My setup: root@provisioned:~# cat /etc/modprobe.d/bonding.conf alias bond0 bonding options bonding mode=1 arp_interval=2000 Both, /etc/init.d/networking and upstart network-interface begin enabled. #### Beginning: root@provisioned:~# cat /etc/network/interfaces # /etc/network/interfaces auto lo iface lo inet loopback auto eth0 iface eth0 inet dhcp I'm able to boot with both scripts (networking and network-interface enabled) with no problem. I can also boot with only "networking" script enabled: --- root@provisioned:~# initctl list | grep network network-interface stop/waiting network-interface-security (networking) start/running networking start/running network-interface-container stop/waiting --- OR only the script "network-interface" enabled: --- root@provisioned:~# initctl list | grep network network-interface (eth2) start/running network-interface (lo) start/running network-interface (eth0) start/running network-interface (eth1) start/running networking start/running network-interface-container stop/waiting --- #### Enabling bonding: Following ifenslave configuration example (/usr/share/doc/ifenslave/ examples/two_hotplug_ethernet), my /etc/network/interfaces has to look like this: --- auto eth1 iface eth1 inet manual bond-master bond0 auto eth2 iface eth2 inet manual bond-master bond0 auto bond0 iface bond0 inet static bond-mode 1 bond-miimon 100 bond-primary eth1 eth2 address 192.168.169.1 netmask 255.255.255.0 broadcast 192.168.169.255 --- Having both scripts running does not make any difference since we are missing "bond-slaves" keyword on slave interfaces, for ifenslave to work, and they are set to "manual". Ifenslave code: """ for slave in $BOND_SLAVES ; do ... # Ensure $slave is down. ip link set "$slave" down 2>/dev/null if ! sysfs_add slaves "$slave" 2>/dev/null ; then echo "Failed to enslave $slave to $BOND_MASTER. Is $BOND_MASTER ready and a bonding interface ?" >&2 else # Bring up slave if it is the target of an allow-bondX stanza. # This is usefull to bring up slaves that need extra setup. if [ -z "$(which ifquery)" ] || ifquery --allow \"$BOND_MASTER\" --list | grep -q $slave; then ifup $v --allow "$BOND_MASTER" "$slave" fi """ Without the keyword "bond-slaves" on the master interface declaration, ifenslave will NOT bring any slave interface up on the "master" interface ifup invocation. *********** Part 1 So, having networking sysv init script AND upstart network-interface script running together... the following example works: --- root@provisioned:~# cat /etc/network/interfaces # /etc/network/interfaces auto lo iface lo inet loopback auto eth0 iface eth0 inet dhcp auto eth1 iface eth1 inet manual bond-master bond0 auto eth2 iface eth2 inet manual bond-master bond0 auto bond0 iface bond0 inet static bond-mode 1 bond-miimon 100 bond-primary eth1 bond-slaves eth1 eth2 address 192.168.169.1 netmask 255.255.255.0 broadcast 192.168.169.255 --- Ifenslave script sets link down to all slave interfaces, declared by "bond-slaves" keyword, and assigns them to correct bonding. Ifenslave script ONLY tries to make a reentrant call to ifupdown if the slave interfaces have "allow-bondX" stanza (not our case). So this should not work, since when the master bonding interface (bond0) is called, ifenslave does not configure slaves without "allow-bondX" stanza. What is happening, why is it working ? If we disable upstart "network-interface" script.. our bonding stops to work on the boot. 
This is because upstart was the one setting the slave interfaces up (with the configuration above) and not sysv networking scripts. It is clear that ifenslave from sysv script invocation can set the slave interface down anytime (even during upstart script execution) so it might work and might not: """ ip link set "$slave" down 2>/dev/null """ root@provisioned:~# initctl list | grep network-interface network-interface (eth2) start/running network-interface (lo) start/running network-interface (bond0) start/running network-interface (eth0) start/running network-interface (eth1) start/running Since having the interface down is a requirement to slave it, running both scripts together (upstart and sysv) could create a situation where upstart puts slave interface online but ifenslave from sysv script puts it down and never bring it up again (because it does not have "allow-bondX" stanza). *********** Part 2 What if I disable upstart "network-interface", stay only with the sysv script but introduce the "allow-bondX" stanza to slave interfaces ? The funny part begins... without upstart, the ifupdown tool calls ifenslave, for bond0 interface, and ifenslave calls this line: """ for slave in $BOND_SLAVES ; do ... if [ -z "$(which ifquery)" ] || ifquery --allow \"$BOND_MASTER\" --list | grep -q $slave; then ifup $v --allow "$BOND_MASTER" "$slave" fi """ But ifenslave stays waiting for the bond0 interface to be online forever. We do have a chicken egg situation now: * ifupdown trys to put bond0 interface online. * we are not running upstart network-interface script. * ifupdown for bond0 calls ifenslave. * ifenslave tries to find interfaces with "allow-bondX" stanza * ifenslave tries to ifup slave interfaces with that stanza * slave interfaces keep forever waiting for the master * master is waiting for the slave interface * slave interface is waiting for the master interface ... :D And we have an infinite loop for ifenslave: """ # Wait for the master to be ready [ ! -f /run/network/ifenslave.$BOND_MASTER ] && echo "Waiting for bond master $BOND_MASTER to be ready" while :; do if [ -f /run/network/ifenslave.$BOND_MASTER ]; then break fi sleep 0.1 done """ *********** Conclusion That can be achieved if correct triggers are set (like the ones I just showed). Not having ifupdown parallel executions (sysv and upstart, for example) can make an infinite loop to happen during the boot. Having parallel ifupdown executions can trigger race conditions between: 1) ifupdown itself (case a on the bug description). 2) ifupdown and ifenslave script (case b on the bug description).
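The enslaving behaviour described above hinges on the ifquery check quoted from the ifenslave hook: only slaves declared with an allow-<master> stanza are brought back up. A small sketch of that decision in isolation (interface names assumed from the examples in this bug):
"""
# sketch of the per-slave decision the quoted ifenslave hook makes
BOND_MASTER=bond0
for slave in eth1 eth2; do
    if ifquery --allow "$BOND_MASTER" --list | grep -qx "$slave"; then
        echo "$slave is listed under allow-$BOND_MASTER: ifenslave would run ifup --allow $BOND_MASTER $slave"
    else
        echo "$slave has no allow-$BOND_MASTER stanza: ifenslave leaves it down"
    fi
done
"""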
2014-09-09 20:32:21 Rafael David Tinoco description * please consider my bonding examples are using eth1 and eth2 as slave interfaces. ifupdown some race conditions explained bellow. ifenslave does not behave well with sysv networking and upstart network-interface scripts running together. !!!! case 1) (a) ifup eth0 (b) ifup -a for eth0 ----------------------------------------------------------------- 1-1. Lock ifstate.lock file. 1-1. Wait for locking ifstate.lock file. 1-2. Read ifstate file to check the target NIC. 1-3. close(=release) ifstate.lock file. 1-4. Judge that the target NIC isn't processed. 1-2. Read ifstate file to check the target NIC. 1-3. close(=release) ifstate.lock file. 1-4. Judge that the target NIC isn't processed. 2. Lock and update ifstate file. Release the lock. 2. Lock and update ifstate file. Release the lock. !!! to be explained !!! case 2) (a) ifenslave of eth0 (b) ifenslave of eth0 ------------------------------------------------------------------ 3. Execute ifenslave of eth0. 3. Execute ifenslave of eth0. 4. Link down the target NIC. 5. Write NIC id to /sys/class/net/bond0/bonding /slaves then NIC gets up 4. Link down the target NIC. 5. Fails to write NIC id to /sys/class/net/bond0/bonding/ slaves it is already written. !!! ##################################################################### #### My setup: root@provisioned:~# cat /etc/modprobe.d/bonding.conf alias bond0 bonding options bonding mode=1 arp_interval=2000 Both, /etc/init.d/networking and upstart network-interface begin enabled. #### Beginning: root@provisioned:~# cat /etc/network/interfaces # /etc/network/interfaces auto lo iface lo inet loopback auto eth0 iface eth0 inet dhcp I'm able to boot with both scripts (networking and network-interface enabled) with no problem. I can also boot with only "networking" script enabled: --- root@provisioned:~# initctl list | grep network network-interface stop/waiting network-interface-security (networking) start/running networking start/running network-interface-container stop/waiting --- OR only the script "network-interface" enabled: --- root@provisioned:~# initctl list | grep network network-interface (eth2) start/running network-interface (lo) start/running network-interface (eth0) start/running network-interface (eth1) start/running networking start/running network-interface-container stop/waiting --- #### Enabling bonding: Following ifenslave configuration example (/usr/share/doc/ifenslave/ examples/two_hotplug_ethernet), my /etc/network/interfaces has to look like this: --- auto eth1 iface eth1 inet manual bond-master bond0 auto eth2 iface eth2 inet manual bond-master bond0 auto bond0 iface bond0 inet static bond-mode 1 bond-miimon 100 bond-primary eth1 eth2 address 192.168.169.1 netmask 255.255.255.0 broadcast 192.168.169.255 --- Having both scripts running does not make any difference since we are missing "bond-slaves" keyword on slave interfaces, for ifenslave to work, and they are set to "manual". Ifenslave code: """ for slave in $BOND_SLAVES ; do ... # Ensure $slave is down. ip link set "$slave" down 2>/dev/null if ! sysfs_add slaves "$slave" 2>/dev/null ; then echo "Failed to enslave $slave to $BOND_MASTER. Is $BOND_MASTER ready and a bonding interface ?" >&2 else # Bring up slave if it is the target of an allow-bondX stanza. # This is usefull to bring up slaves that need extra setup. 
if [ -z "$(which ifquery)" ] || ifquery --allow \"$BOND_MASTER\" --list | grep -q $slave; then ifup $v --allow "$BOND_MASTER" "$slave" fi """ Without the keyword "bond-slaves" on the master interface declaration, ifenslave will NOT bring any slave interface up on the "master" interface ifup invocation. *********** Part 1 So, having networking sysv init script AND upstart network-interface script running together... the following example works: --- root@provisioned:~# cat /etc/network/interfaces # /etc/network/interfaces auto lo iface lo inet loopback auto eth0 iface eth0 inet dhcp auto eth1 iface eth1 inet manual bond-master bond0 auto eth2 iface eth2 inet manual bond-master bond0 auto bond0 iface bond0 inet static bond-mode 1 bond-miimon 100 bond-primary eth1 bond-slaves eth1 eth2 address 192.168.169.1 netmask 255.255.255.0 broadcast 192.168.169.255 --- Ifenslave script sets link down to all slave interfaces, declared by "bond-slaves" keyword, and assigns them to correct bonding. Ifenslave script ONLY tries to make a reentrant call to ifupdown if the slave interfaces have "allow-bondX" stanza (not our case). So this should not work, since when the master bonding interface (bond0) is called, ifenslave does not configure slaves without "allow-bondX" stanza. What is happening, why is it working ? If we disable upstart "network-interface" script.. our bonding stops to work on the boot. This is because upstart was the one setting the slave interfaces up (with the configuration above) and not sysv networking scripts. It is clear that ifenslave from sysv script invocation can set the slave interface down anytime (even during upstart script execution) so it might work and might not: """ ip link set "$slave" down 2>/dev/null """ root@provisioned:~# initctl list | grep network-interface network-interface (eth2) start/running network-interface (lo) start/running network-interface (bond0) start/running network-interface (eth0) start/running network-interface (eth1) start/running Since having the interface down is a requirement to slave it, running both scripts together (upstart and sysv) could create a situation where upstart puts slave interface online but ifenslave from sysv script puts it down and never bring it up again (because it does not have "allow-bondX" stanza). *********** Part 2 What if I disable upstart "network-interface", stay only with the sysv script but introduce the "allow-bondX" stanza to slave interfaces ? The funny part begins... without upstart, the ifupdown tool calls ifenslave, for bond0 interface, and ifenslave calls this line: """ for slave in $BOND_SLAVES ; do ... if [ -z "$(which ifquery)" ] || ifquery --allow \"$BOND_MASTER\" --list | grep -q $slave; then ifup $v --allow "$BOND_MASTER" "$slave" fi """ But ifenslave stays waiting for the bond0 interface to be online forever. We do have a chicken egg situation now: * ifupdown trys to put bond0 interface online. * we are not running upstart network-interface script. * ifupdown for bond0 calls ifenslave. * ifenslave tries to find interfaces with "allow-bondX" stanza * ifenslave tries to ifup slave interfaces with that stanza * slave interfaces keep forever waiting for the master * master is waiting for the slave interface * slave interface is waiting for the master interface ... :D And we have an infinite loop for ifenslave: """ # Wait for the master to be ready [ ! 
-f /run/network/ifenslave.$BOND_MASTER ] && echo "Waiting for bond master $BOND_MASTER to be ready" while :; do if [ -f /run/network/ifenslave.$BOND_MASTER ]; then break fi sleep 0.1 done """ *********** Conclusion That can be achieved if correct triggers are set (like the ones I just showed). Not having ifupdown parallel executions (sysv and upstart, for example) can make an infinite loop to happen during the boot. Having parallel ifupdown executions can trigger race conditions between: 1) ifupdown itself (case a on the bug description). 2) ifupdown and ifenslave script (case b on the bug description). * please consider my bonding examples are using eth1 and eth2 as slave interfaces. ifupdown some race conditions explained bellow. ifenslave does not behave well with sysv networking and upstart network-interface scripts running together. !!!! case 1) (a) ifup eth0 (b) ifup -a for eth0 ----------------------------------------------------------------- 1-1. Lock ifstate.lock file. 1-1. Wait for locking ifstate.lock file. 1-2. Read ifstate file to check the target NIC. 1-3. close(=release) ifstate.lock file. 1-4. Judge that the target NIC isn't processed. 1-2. Read ifstate file to check the target NIC. 1-3. close(=release) ifstate.lock file. 1-4. Judge that the target NIC isn't processed. 2. Lock and update ifstate file. Release the lock. 2. Lock and update ifstate file. Release the lock. !!! to be explained !!! case 2) (a) ifenslave of eth0 (b) ifenslave of eth0 ------------------------------------------------------------------ 3. Execute ifenslave of eth0. 3. Execute ifenslave of eth0. 4. Link down the target NIC. 5. Write NIC id to /sys/class/net/bond0/bonding /slaves then NIC gets up 4. Link down the target NIC. 5. Fails to write NIC id to /sys/class/net/bond0/bonding/ slaves it is already written. !!! ##################################################################### #### My setup: root@provisioned:~# cat /etc/modprobe.d/bonding.conf alias bond0 bonding options bonding mode=1 arp_interval=2000 Both, /etc/init.d/networking and upstart network-interface begin enabled. #### Beginning: root@provisioned:~# cat /etc/network/interfaces # /etc/network/interfaces auto lo iface lo inet loopback auto eth0 iface eth0 inet dhcp I'm able to boot with both scripts (networking and network-interface enabled) with no problem. I can also boot with only "networking" script enabled: --- root@provisioned:~# initctl list | grep network network-interface stop/waiting networking start/running --- OR only the script "network-interface" enabled: --- root@provisioned:~# initctl list | grep network network-interface (eth2) start/running network-interface (lo) start/running network-interface (eth0) start/running network-interface (eth1) start/running --- #### Enabling bonding: Following ifenslave configuration example (/usr/share/doc/ifenslave/ examples/two_hotplug_ethernet), my /etc/network/interfaces has to look like this: --- auto eth1 iface eth1 inet manual bond-master bond0 auto eth2 iface eth2 inet manual bond-master bond0 auto bond0 iface bond0 inet static bond-mode 1 bond-miimon 100 bond-primary eth1 eth2 address 192.168.169.1 netmask 255.255.255.0 broadcast 192.168.169.255 --- Having both scripts running does not make any difference since we are missing "bond-slaves" keyword on slave interfaces, for ifenslave to work, and they are set to "manual". Ifenslave code: """ for slave in $BOND_SLAVES ; do ... # Ensure $slave is down. ip link set "$slave" down 2>/dev/null if ! 
sysfs_add slaves "$slave" 2>/dev/null ; then echo "Failed to enslave $slave to $BOND_MASTER. Is $BOND_MASTER ready and a bonding interface ?" >&2 else # Bring up slave if it is the target of an allow-bondX stanza. # This is usefull to bring up slaves that need extra setup. if [ -z "$(which ifquery)" ] || ifquery --allow \"$BOND_MASTER\" --list | grep -q $slave; then ifup $v --allow "$BOND_MASTER" "$slave" fi """ Without the keyword "bond-slaves" on the master interface declaration, ifenslave will NOT bring any slave interface up on the "master" interface ifup invocation. *********** Part 1 So, having networking sysv init script AND upstart network-interface script running together... the following example works: --- root@provisioned:~# cat /etc/network/interfaces # /etc/network/interfaces auto lo iface lo inet loopback auto eth0 iface eth0 inet dhcp auto eth1 iface eth1 inet manual bond-master bond0 auto eth2 iface eth2 inet manual bond-master bond0 auto bond0 iface bond0 inet static bond-mode 1 bond-miimon 100 bond-primary eth1 bond-slaves eth1 eth2 address 192.168.169.1 netmask 255.255.255.0 broadcast 192.168.169.255 --- Ifenslave script sets link down to all slave interfaces, declared by "bond-slaves" keyword, and assigns them to correct bonding. Ifenslave script ONLY tries to make a reentrant call to ifupdown if the slave interfaces have "allow-bondX" stanza (not our case). So this should not work, since when the master bonding interface (bond0) is called, ifenslave does not configure slaves without "allow-bondX" stanza. What is happening, why is it working ? If we disable upstart "network-interface" script.. our bonding stops to work on the boot. This is because upstart was the one setting the slave interfaces up (with the configuration above) and not sysv networking scripts. It is clear that ifenslave from sysv script invocation can set the slave interface down anytime (even during upstart script execution) so it might work and might not: """ ip link set "$slave" down 2>/dev/null """ root@provisioned:~# initctl list | grep network-interface network-interface (eth2) start/running network-interface (lo) start/running network-interface (bond0) start/running network-interface (eth0) start/running network-interface (eth1) start/running Since having the interface down is a requirement to slave it, running both scripts together (upstart and sysv) could create a situation where upstart puts slave interface online but ifenslave from sysv script puts it down and never bring it up again (because it does not have "allow-bondX" stanza). *********** Part 2 What if I disable upstart "network-interface", stay only with the sysv script but introduce the "allow-bondX" stanza to slave interfaces ? The funny part begins... without upstart, the ifupdown tool calls ifenslave, for bond0 interface, and ifenslave calls this line: """ for slave in $BOND_SLAVES ; do ... if [ -z "$(which ifquery)" ] || ifquery --allow \"$BOND_MASTER\" --list | grep -q $slave; then ifup $v --allow "$BOND_MASTER" "$slave" fi """ But ifenslave stays waiting for the bond0 interface to be online forever. We do have a chicken egg situation now: * ifupdown trys to put bond0 interface online. * we are not running upstart network-interface script. * ifupdown for bond0 calls ifenslave. 
* ifenslave tries to find interfaces with "allow-bondX" stanza * ifenslave tries to ifup slave interfaces with that stanza * slave interfaces keep forever waiting for the master * master is waiting for the slave interface * slave interface is waiting for the master interface ... :D And we have an infinite loop for ifenslave: """ # Wait for the master to be ready [ ! -f /run/network/ifenslave.$BOND_MASTER ] && echo "Waiting for bond master $BOND_MASTER to be ready" while :; do if [ -f /run/network/ifenslave.$BOND_MASTER ]; then break fi sleep 0.1 done """ *********** Conclusion Both failure modes can be reproduced when the right triggers are in place (like the ones shown above). Without parallel ifupdown executions (sysv and upstart, for example), boot can hang in an infinite loop. With parallel ifupdown executions, race conditions can be triggered between: 1) ifupdown and itself (case 1 in the bug description). 2) ifupdown and the ifenslave script (case 2 in the bug description).
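The quoted wait loop in the ifenslave hook spins forever on /run/network/ifenslave.$BOND_MASTER. Purely as an illustration of how that hang could be bounded (this is not the shipped hook nor the proposed patch), the same loop with a timeout:
"""
# sketch: same wait as the quoted hook, but bounded (~30 seconds)
tries=300
while [ ! -f "/run/network/ifenslave.$BOND_MASTER" ]; do
    tries=$((tries - 1))
    if [ "$tries" -le 0 ]; then
        echo "Timed out waiting for bond master $BOND_MASTER" >&2
        exit 1
    fi
    sleep 0.1
done
"""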
2014-09-09 20:36:12 Jay Vosburgh bug added subscriber Jay Vosburgh
2014-09-10 12:56:27 Rafael David Tinoco attachment added trusty_ifupdown_0.7.47.2ubuntu4.2.debdiff https://bugs.launchpad.net/ubuntu/+source/ifupdown/+bug/1337873/+attachment/4200561/+files/trusty_ifupdown_0.7.47.2ubuntu4.2.debdiff
2014-09-10 12:57:38 Rafael David Tinoco attachment removed trusty_ifupdown_0.7.47.2ubuntu4.2.debdiff https://bugs.launchpad.net/ubuntu/+source/ifupdown/+bug/1337873/+attachment/4200561/+files/trusty_ifupdown_0.7.47.2ubuntu4.2.debdiff
2014-09-10 12:59:58 Rafael David Tinoco attachment added trusty_ifupdown_0.7.47.2ubuntu4.2.debdiff https://bugs.launchpad.net/ubuntu/+source/ifupdown/+bug/1337873/+attachment/4200562/+files/trusty_ifupdown_0.7.47.2ubuntu4.2.debdiff
2014-09-10 15:46:35 Rafael David Tinoco bug added subscriber Stéphane Graber
2014-09-11 01:07:57 Rafael David Tinoco bug added subscriber Masaki Tachibana
2014-09-15 14:42:07 Iain Lane removed subscriber Ubuntu Sponsors Team
2014-10-10 23:24:19 Jorge Niedbalski tags patch cts patch
2015-05-02 03:28:05 Rafael David Tinoco ifupdown (Ubuntu): assignee Rafael David Tinoco (inaddy)
2015-06-16 14:48:47 Nobuto Murata bug added subscriber Nobuto Murata
2015-08-06 09:53:59 Liam Macgillavry bug added subscriber Liam Macgillavry
2015-09-04 15:30:03 Dariusz Gadomski ifupdown (Ubuntu): assignee Dariusz Gadomski (dgadomski)
2015-09-28 15:48:36 Louis Bouchard nominated for series Ubuntu Vivid
2015-09-28 15:48:36 Louis Bouchard bug task added ifupdown (Ubuntu Vivid)
2015-09-28 15:49:08 Louis Bouchard nominated for series Ubuntu Trusty
2015-09-28 15:49:08 Louis Bouchard bug task added ifupdown (Ubuntu Trusty)
2015-09-28 15:49:08 Louis Bouchard nominated for series Ubuntu Precise
2015-09-28 15:49:08 Louis Bouchard bug task added ifupdown (Ubuntu Precise)
2015-10-01 10:37:22 Launchpad Janitor ifupdown (Ubuntu Precise): status New Confirmed
2015-10-01 10:37:22 Launchpad Janitor ifupdown (Ubuntu Trusty): status New Confirmed
2015-10-01 10:37:22 Launchpad Janitor ifupdown (Ubuntu Vivid): status New Confirmed
2015-10-16 00:29:30 Eric Desrochers bug added subscriber Eric Desrochers
2015-10-21 14:23:49 Dariusz Gadomski description * please consider my bonding examples are using eth1 and eth2 as slave interfaces. ifupdown some race conditions explained bellow. ifenslave does not behave well with sysv networking and upstart network-interface scripts running together. !!!! case 1) (a) ifup eth0 (b) ifup -a for eth0 ----------------------------------------------------------------- 1-1. Lock ifstate.lock file. 1-1. Wait for locking ifstate.lock file. 1-2. Read ifstate file to check the target NIC. 1-3. close(=release) ifstate.lock file. 1-4. Judge that the target NIC isn't processed. 1-2. Read ifstate file to check the target NIC. 1-3. close(=release) ifstate.lock file. 1-4. Judge that the target NIC isn't processed. 2. Lock and update ifstate file. Release the lock. 2. Lock and update ifstate file. Release the lock. !!! to be explained !!! case 2) (a) ifenslave of eth0 (b) ifenslave of eth0 ------------------------------------------------------------------ 3. Execute ifenslave of eth0. 3. Execute ifenslave of eth0. 4. Link down the target NIC. 5. Write NIC id to /sys/class/net/bond0/bonding /slaves then NIC gets up 4. Link down the target NIC. 5. Fails to write NIC id to /sys/class/net/bond0/bonding/ slaves it is already written. !!! ##################################################################### #### My setup: root@provisioned:~# cat /etc/modprobe.d/bonding.conf alias bond0 bonding options bonding mode=1 arp_interval=2000 Both, /etc/init.d/networking and upstart network-interface begin enabled. #### Beginning: root@provisioned:~# cat /etc/network/interfaces # /etc/network/interfaces auto lo iface lo inet loopback auto eth0 iface eth0 inet dhcp I'm able to boot with both scripts (networking and network-interface enabled) with no problem. I can also boot with only "networking" script enabled: --- root@provisioned:~# initctl list | grep network network-interface stop/waiting networking start/running --- OR only the script "network-interface" enabled: --- root@provisioned:~# initctl list | grep network network-interface (eth2) start/running network-interface (lo) start/running network-interface (eth0) start/running network-interface (eth1) start/running --- #### Enabling bonding: Following ifenslave configuration example (/usr/share/doc/ifenslave/ examples/two_hotplug_ethernet), my /etc/network/interfaces has to look like this: --- auto eth1 iface eth1 inet manual bond-master bond0 auto eth2 iface eth2 inet manual bond-master bond0 auto bond0 iface bond0 inet static bond-mode 1 bond-miimon 100 bond-primary eth1 eth2 address 192.168.169.1 netmask 255.255.255.0 broadcast 192.168.169.255 --- Having both scripts running does not make any difference since we are missing "bond-slaves" keyword on slave interfaces, for ifenslave to work, and they are set to "manual". Ifenslave code: """ for slave in $BOND_SLAVES ; do ... # Ensure $slave is down. ip link set "$slave" down 2>/dev/null if ! sysfs_add slaves "$slave" 2>/dev/null ; then echo "Failed to enslave $slave to $BOND_MASTER. Is $BOND_MASTER ready and a bonding interface ?" >&2 else # Bring up slave if it is the target of an allow-bondX stanza. # This is usefull to bring up slaves that need extra setup. if [ -z "$(which ifquery)" ] || ifquery --allow \"$BOND_MASTER\" --list | grep -q $slave; then ifup $v --allow "$BOND_MASTER" "$slave" fi """ Without the keyword "bond-slaves" on the master interface declaration, ifenslave will NOT bring any slave interface up on the "master" interface ifup invocation. 
*********** Part 1 So, having networking sysv init script AND upstart network-interface script running together... the following example works: --- root@provisioned:~# cat /etc/network/interfaces # /etc/network/interfaces auto lo iface lo inet loopback auto eth0 iface eth0 inet dhcp auto eth1 iface eth1 inet manual bond-master bond0 auto eth2 iface eth2 inet manual bond-master bond0 auto bond0 iface bond0 inet static bond-mode 1 bond-miimon 100 bond-primary eth1 bond-slaves eth1 eth2 address 192.168.169.1 netmask 255.255.255.0 broadcast 192.168.169.255 --- Ifenslave script sets link down to all slave interfaces, declared by "bond-slaves" keyword, and assigns them to correct bonding. Ifenslave script ONLY tries to make a reentrant call to ifupdown if the slave interfaces have "allow-bondX" stanza (not our case). So this should not work, since when the master bonding interface (bond0) is called, ifenslave does not configure slaves without "allow-bondX" stanza. What is happening, why is it working ? If we disable upstart "network-interface" script.. our bonding stops to work on the boot. This is because upstart was the one setting the slave interfaces up (with the configuration above) and not sysv networking scripts. It is clear that ifenslave from sysv script invocation can set the slave interface down anytime (even during upstart script execution) so it might work and might not: """ ip link set "$slave" down 2>/dev/null """ root@provisioned:~# initctl list | grep network-interface network-interface (eth2) start/running network-interface (lo) start/running network-interface (bond0) start/running network-interface (eth0) start/running network-interface (eth1) start/running Since having the interface down is a requirement to slave it, running both scripts together (upstart and sysv) could create a situation where upstart puts slave interface online but ifenslave from sysv script puts it down and never bring it up again (because it does not have "allow-bondX" stanza). *********** Part 2 What if I disable upstart "network-interface", stay only with the sysv script but introduce the "allow-bondX" stanza to slave interfaces ? The funny part begins... without upstart, the ifupdown tool calls ifenslave, for bond0 interface, and ifenslave calls this line: """ for slave in $BOND_SLAVES ; do ... if [ -z "$(which ifquery)" ] || ifquery --allow \"$BOND_MASTER\" --list | grep -q $slave; then ifup $v --allow "$BOND_MASTER" "$slave" fi """ But ifenslave stays waiting for the bond0 interface to be online forever. We do have a chicken egg situation now: * ifupdown trys to put bond0 interface online. * we are not running upstart network-interface script. * ifupdown for bond0 calls ifenslave. * ifenslave tries to find interfaces with "allow-bondX" stanza * ifenslave tries to ifup slave interfaces with that stanza * slave interfaces keep forever waiting for the master * master is waiting for the slave interface * slave interface is waiting for the master interface ... :D And we have an infinite loop for ifenslave: """ # Wait for the master to be ready [ ! -f /run/network/ifenslave.$BOND_MASTER ] && echo "Waiting for bond master $BOND_MASTER to be ready" while :; do if [ -f /run/network/ifenslave.$BOND_MASTER ]; then break fi sleep 0.1 done """ *********** Conclusion That can be achieved if correct triggers are set (like the ones I just showed). Not having ifupdown parallel executions (sysv and upstart, for example) can make an infinite loop to happen during the boot. 
Having parallel ifupdown executions can trigger race conditions between: 1) ifupdown itself (case a on the bug description). 2) ifupdown and ifenslave script (case b on the bug description). [Impact] * A lack of proper synchronization in ifupdown causes a race condition resulting in occasional incorrect network interface initialization (e.g. in bonding case - wrong bonding settings, network unavailable because slave<->master interfaces initialization order was wrong * This is very annoying in case of large deployments (e.g. when bringing up 1000 machines it is almost guaranteed that at least a few of them will end up with network down). * It has been fixed by introducing hierarchical and per-interface locking mechanism ensuring the right order (along with the correct order in the /e/n/interfaces file) of initialization [Test Case] 1. Create a VM with bonding configured with at least 2 slave interfaces. 2. Reboot. 3. If all interfaces are up - go to 2. [Regression Potential] * This change has been introduced upstream in Debian. * It does not require any config changes to existing installations. [Other Info] Original bug description: * please consider my bonding examples are using eth1 and eth2 as slave  interfaces. ifupdown some race conditions explained bellow. ifenslave does not behave well with sysv networking and upstart network-interface scripts running together. !!!! case 1) (a) ifup eth0 (b) ifup -a for eth0 ----------------------------------------------------------------- 1-1. Lock ifstate.lock file.                                   1-1. Wait for locking ifstate.lock                                       file. 1-2. Read ifstate file to check      the target NIC. 1-3. close(=release) ifstate.lock      file. 1-4. Judge that the target NIC      isn't processed.                                   1-2. Read ifstate file to check                                        the target NIC.                                   1-3. close(=release) ifstate.lock                                        file.                                   1-4. Judge that the target NIC                                        isn't processed. 2. Lock and update ifstate file.    Release the lock.                                   2. Lock and update ifstate file.                                      Release the lock. !!! to be explained !!! case 2) (a) ifenslave of eth0 (b) ifenslave of eth0 ------------------------------------------------------------------ 3. Execute ifenslave of eth0. 3. Execute ifenslave of eth0. 4. Link down the target NIC. 5. Write NIC id to    /sys/class/net/bond0/bonding    /slaves then NIC gets up                                   4. Link down the target NIC.                                   5. Fails to write NIC id to                                      /sys/class/net/bond0/bonding/                                      slaves it is already written. !!! ##################################################################### #### My setup: root@provisioned:~# cat /etc/modprobe.d/bonding.conf alias bond0 bonding options bonding mode=1 arp_interval=2000 Both, /etc/init.d/networking and upstart network-interface begin enabled. #### Beginning: root@provisioned:~# cat /etc/network/interfaces # /etc/network/interfaces auto lo iface lo inet loopback auto eth0 iface eth0 inet dhcp I'm able to boot with both scripts (networking and network-interface enabled) with no problem. 
I can also boot with only "networking" script enabled: --- root@provisioned:~# initctl list | grep network network-interface stop/waiting networking start/running --- OR only the script "network-interface" enabled: --- root@provisioned:~# initctl list | grep network network-interface (eth2) start/running network-interface (lo) start/running network-interface (eth0) start/running network-interface (eth1) start/running --- #### Enabling bonding: Following ifenslave configuration example (/usr/share/doc/ifenslave/ examples/two_hotplug_ethernet), my /etc/network/interfaces has to look like this: --- auto eth1 iface eth1 inet manual     bond-master bond0 auto eth2 iface eth2 inet manual     bond-master bond0 auto bond0 iface bond0 inet static     bond-mode 1     bond-miimon 100     bond-primary eth1 eth2  address 192.168.169.1  netmask 255.255.255.0  broadcast 192.168.169.255 --- Having both scripts running does not make any difference since we are missing "bond-slaves" keyword on slave interfaces, for ifenslave to work, and they are set to "manual". Ifenslave code: """ for slave in $BOND_SLAVES ; do ... # Ensure $slave is down. ip link set "$slave" down 2>/dev/null if ! sysfs_add slaves "$slave" 2>/dev/null ; then  echo "Failed to enslave $slave to $BOND_MASTER. Is $BOND_MASTER    ready and a bonding interface ?" >&2 else  # Bring up slave if it is the target of an allow-bondX stanza.  # This is usefull to bring up slaves that need extra setup.  if [ -z "$(which ifquery)" ] || ifquery --allow \"$BOND_MASTER\"   --list | grep -q $slave; then   ifup $v --allow "$BOND_MASTER" "$slave"  fi """ Without the keyword "bond-slaves" on the master interface declaration, ifenslave will NOT bring any slave interface up on the "master" interface ifup invocation. *********** Part 1 So, having networking sysv init script AND upstart network-interface script running together... the following example works: --- root@provisioned:~# cat /etc/network/interfaces # /etc/network/interfaces auto lo iface lo inet loopback auto eth0 iface eth0 inet dhcp auto eth1 iface eth1 inet manual     bond-master bond0 auto eth2 iface eth2 inet manual     bond-master bond0 auto bond0 iface bond0 inet static     bond-mode 1     bond-miimon 100     bond-primary eth1     bond-slaves eth1 eth2     address 192.168.169.1     netmask 255.255.255.0     broadcast 192.168.169.255 --- Ifenslave script sets link down to all slave interfaces, declared by "bond-slaves" keyword, and assigns them to correct bonding. Ifenslave script ONLY tries to make a reentrant call to ifupdown if the slave interfaces have "allow-bondX" stanza (not our case). So this should not work, since when the master bonding interface (bond0) is called, ifenslave does not configure slaves without "allow-bondX" stanza. What is happening, why is it working ? If we disable upstart "network-interface" script.. our bonding stops to work on the boot. This is because upstart was the one setting the slave interfaces up (with the configuration above) and not sysv networking scripts. 
It is clear that ifenslave from sysv script invocation can set the slave interface down anytime (even during upstart script execution) so it might work and might not: """ ip link set "$slave" down 2>/dev/null """ root@provisioned:~# initctl list | grep network-interface network-interface (eth2) start/running network-interface (lo) start/running network-interface (bond0) start/running network-interface (eth0) start/running network-interface (eth1) start/running Since having the interface down is a requirement to slave it, running both scripts together (upstart and sysv) could create a situation where upstart puts slave interface online but ifenslave from sysv script puts it down and never bring it up again (because it does not have "allow-bondX" stanza). *********** Part 2 What if I disable upstart "network-interface", stay only with the sysv script but introduce the "allow-bondX" stanza to slave interfaces ? The funny part begins... without upstart, the ifupdown tool calls ifenslave, for bond0 interface, and ifenslave calls this line: """ for slave in $BOND_SLAVES ; do ...  if [ -z "$(which ifquery)" ] || ifquery --allow \"$BOND_MASTER\"   --list | grep -q $slave; then   ifup $v --allow "$BOND_MASTER" "$slave"  fi """ But ifenslave stays waiting for the bond0 interface to be online forever. We do have a chicken egg situation now: * ifupdown trys to put bond0 interface online. * we are not running upstart network-interface script. * ifupdown for bond0 calls ifenslave. * ifenslave tries to find interfaces with "allow-bondX" stanza * ifenslave tries to ifup slave interfaces with that stanza * slave interfaces keep forever waiting for the master * master is waiting for the slave interface * slave interface is waiting for the master interface ... :D And we have an infinite loop for ifenslave: """ # Wait for the master to be ready [ ! -f /run/network/ifenslave.$BOND_MASTER ] &&  echo "Waiting for bond master $BOND_MASTER to be ready" while :; do     if [ -f /run/network/ifenslave.$BOND_MASTER ]; then         break     fi     sleep 0.1 done """ *********** Conclusion That can be achieved if correct triggers are set (like the ones I just showed). Not having ifupdown parallel executions (sysv and upstart, for example) can make an infinite loop to happen during the boot. Having parallel ifupdown executions can trigger race conditions between: 1) ifupdown itself (case a on the bug description). 2) ifupdown and ifenslave script (case b on the bug description).
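Case 2 above boils down to two ifenslave runs racing on the same sysfs write: the second write to /sys/class/net/bond0/bonding/slaves fails because the slave is already there, and the NIC is left down. A sketch of making that single step idempotent (illustration only, with example interface names; the actual fix in this bug is the locking described under [Impact]):
"""
# sketch: skip the sysfs write if the kernel already lists the slave
slave=eth1
BOND_MASTER=bond0
slaves_file="/sys/class/net/$BOND_MASTER/bonding/slaves"
if grep -qw "$slave" "$slaves_file"; then
    echo "$slave already enslaved to $BOND_MASTER" >&2
else
    ip link set "$slave" down
    echo "+$slave" > "$slaves_file"
fi
"""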
2015-10-21 14:25:20 Dariusz Gadomski attachment added wily_ifupdown_0.7.54ubuntu2.debdiff https://bugs.launchpad.net/ubuntu/+source/ifupdown/+bug/1337873/+attachment/4501797/+files/wily_ifupdown_0.7.54ubuntu2.debdiff
2015-10-21 14:25:50 Dariusz Gadomski attachment added vivid_ifupdown_0.7.48.1ubuntu11.debdiff https://bugs.launchpad.net/ubuntu/+source/ifupdown/+bug/1337873/+attachment/4501798/+files/vivid_ifupdown_0.7.48.1ubuntu11.debdiff
2015-10-21 14:26:39 Dariusz Gadomski attachment added trusty_ifupdown_0.7.47.2ubuntu4.2.debdiff https://bugs.launchpad.net/ubuntu/+source/ifupdown/+bug/1337873/+attachment/4501799/+files/trusty_ifupdown_0.7.47.2ubuntu4.2.debdiff
2015-10-21 14:27:25 Dariusz Gadomski attachment added trusty_ifenslave_2.4ubuntu1.2.debdiff https://bugs.launchpad.net/ubuntu/+source/ifupdown/+bug/1337873/+attachment/4501800/+files/trusty_ifenslave_2.4ubuntu1.2.debdiff
2015-10-21 14:27:47 Dariusz Gadomski bug added subscriber Ubuntu Sponsors Team
2015-10-21 14:27:57 Dariusz Gadomski bug added subscriber Ubuntu Stable Release Updates Team
2015-10-21 15:50:25 Dariusz Gadomski description [Impact] * A lack of proper synchronization in ifupdown causes a race condition resulting in occasional incorrect network interface initialization (e.g. in bonding case - wrong bonding settings, network unavailable because slave<->master interfaces initialization order was wrong * This is very annoying in case of large deployments (e.g. when bringing up 1000 machines it is almost guaranteed that at least a few of them will end up with network down). * It has been fixed by introducing hierarchical and per-interface locking mechanism ensuring the right order (along with the correct order in the /e/n/interfaces file) of initialization [Test Case] 1. Create a VM with bonding configured with at least 2 slave interfaces. 2. Reboot. 3. If all interfaces are up - go to 2. [Regression Potential] * This change has been introduced upstream in Debian. * It does not require any config changes to existing installations. [Other Info] Original bug description: * please consider my bonding examples are using eth1 and eth2 as slave  interfaces. ifupdown some race conditions explained bellow. ifenslave does not behave well with sysv networking and upstart network-interface scripts running together. !!!! case 1) (a) ifup eth0 (b) ifup -a for eth0 ----------------------------------------------------------------- 1-1. Lock ifstate.lock file.                                   1-1. Wait for locking ifstate.lock                                       file. 1-2. Read ifstate file to check      the target NIC. 1-3. close(=release) ifstate.lock      file. 1-4. Judge that the target NIC      isn't processed.                                   1-2. Read ifstate file to check                                        the target NIC.                                   1-3. close(=release) ifstate.lock                                        file.                                   1-4. Judge that the target NIC                                        isn't processed. 2. Lock and update ifstate file.    Release the lock.                                   2. Lock and update ifstate file.                                      Release the lock. !!! to be explained !!! case 2) (a) ifenslave of eth0 (b) ifenslave of eth0 ------------------------------------------------------------------ 3. Execute ifenslave of eth0. 3. Execute ifenslave of eth0. 4. Link down the target NIC. 5. Write NIC id to    /sys/class/net/bond0/bonding    /slaves then NIC gets up                                   4. Link down the target NIC.                                   5. Fails to write NIC id to                                      /sys/class/net/bond0/bonding/                                      slaves it is already written. !!! ##################################################################### #### My setup: root@provisioned:~# cat /etc/modprobe.d/bonding.conf alias bond0 bonding options bonding mode=1 arp_interval=2000 Both, /etc/init.d/networking and upstart network-interface begin enabled. #### Beginning: root@provisioned:~# cat /etc/network/interfaces # /etc/network/interfaces auto lo iface lo inet loopback auto eth0 iface eth0 inet dhcp I'm able to boot with both scripts (networking and network-interface enabled) with no problem. 
I can also boot with only the "networking" script enabled:
---
root@provisioned:~# initctl list | grep network
network-interface stop/waiting
networking start/running
---
OR with only the "network-interface" job enabled:
---
root@provisioned:~# initctl list | grep network
network-interface (eth2) start/running
network-interface (lo) start/running
network-interface (eth0) start/running
network-interface (eth1) start/running
---

#### Enabling bonding:

Following the ifenslave configuration example (/usr/share/doc/ifenslave/examples/two_hotplug_ethernet), my /etc/network/interfaces has to look like this:
---
auto eth1
iface eth1 inet manual
    bond-master bond0

auto eth2
iface eth2 inet manual
    bond-master bond0

auto bond0
iface bond0 inet static
    bond-mode 1
    bond-miimon 100
    bond-primary eth1 eth2
    address 192.168.169.1
    netmask 255.255.255.0
    broadcast 192.168.169.255
---
Having both scripts running makes no difference here, because the "bond-slaves" keyword listing the slave interfaces (which ifenslave needs in order to bring them up) is missing, and the slaves are set to "manual". Ifenslave code:
"""
for slave in $BOND_SLAVES ; do
...
# Ensure $slave is down.
ip link set "$slave" down 2>/dev/null
if ! sysfs_add slaves "$slave" 2>/dev/null ; then
 echo "Failed to enslave $slave to $BOND_MASTER. Is $BOND_MASTER ready and a bonding interface ?" >&2
else
 # Bring up slave if it is the target of an allow-bondX stanza.
 # This is usefull to bring up slaves that need extra setup.
 if [ -z "$(which ifquery)" ] || ifquery --allow "$BOND_MASTER" --list | grep -q $slave; then
  ifup $v --allow "$BOND_MASTER" "$slave"
 fi
"""
Without the "bond-slaves" keyword on the master interface declaration, ifenslave will NOT bring any slave interface up when the "master" interface is brought up with ifup.

*********** Part 1

So, with the networking sysv init script AND the upstart network-interface job running together, the following example works:
---
root@provisioned:~# cat /etc/network/interfaces
# /etc/network/interfaces
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet dhcp

auto eth1
iface eth1 inet manual
    bond-master bond0

auto eth2
iface eth2 inet manual
    bond-master bond0

auto bond0
iface bond0 inet static
    bond-mode 1
    bond-miimon 100
    bond-primary eth1
    bond-slaves eth1 eth2
    address 192.168.169.1
    netmask 255.255.255.0
    broadcast 192.168.169.255
---
The ifenslave script sets the link down on every slave interface declared by the "bond-slaves" keyword and assigns it to the correct bonding master. It ONLY makes a reentrant call to ifupdown if the slave interface has an "allow-bondX" stanza (not our case). So this should not work: when the master bonding interface (bond0) is brought up, ifenslave does not configure slaves that lack an "allow-bondX" stanza. So what is going on, and why does it work? If we disable the upstart "network-interface" job, our bonding stops working at boot. That is because upstart, not the sysv networking script, was the one setting the slave interfaces up (with the configuration above).
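When chasing this, it helps to compare what ifupdown has recorded as configured with what the kernel actually has up. The commands below are just an illustrative checklist, not part of the reproducer; ifquery --state, the ifstate file and /proc/net/bonding/bond0 are the usual places to look, though exact paths can differ per release.
"""
# What ifupdown thinks is configured:
ifquery --state
cat /run/network/ifstate

# What the kernel actually has:
ip -o link show
cat /proc/net/bonding/bond0

# Which interface jobs upstart started:
initctl list | grep network
"""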
It is clear that ifenslave, when invoked from the sysv script, can set the slave interface down at any time (even while the upstart job is running), so it might work and it might not:
"""
ip link set "$slave" down 2>/dev/null
"""
root@provisioned:~# initctl list | grep network-interface
network-interface (eth2) start/running
network-interface (lo) start/running
network-interface (bond0) start/running
network-interface (eth0) start/running
network-interface (eth1) start/running

Since having the interface down is a requirement for enslaving it, running both scripts together (upstart and sysv) can create a situation where upstart brings a slave interface online but ifenslave, called from the sysv script, puts it down and never brings it up again (because it has no "allow-bondX" stanza).

*********** Part 2

What if I disable the upstart "network-interface" job, stay with only the sysv script, but add the "allow-bondX" stanza to the slave interfaces? This is where it gets interesting: without upstart, ifupdown calls ifenslave for the bond0 interface, and ifenslave runs this line:
"""
for slave in $BOND_SLAVES ; do
...
 if [ -z "$(which ifquery)" ] || ifquery --allow "$BOND_MASTER" --list | grep -q $slave; then
  ifup $v --allow "$BOND_MASTER" "$slave"
 fi
"""
But ifenslave then waits forever for the bond0 interface to come online. We now have a chicken-and-egg situation:
* ifupdown tries to put the bond0 interface online.
* We are not running the upstart network-interface job.
* ifupdown for bond0 calls ifenslave.
* ifenslave looks for interfaces with an "allow-bondX" stanza.
* ifenslave tries to ifup the slave interfaces with that stanza.
* The slave interfaces wait forever for the master.
* The master is waiting for the slave interfaces.
* The slave interfaces are waiting for the master.
And so we get an infinite loop in ifenslave:
"""
# Wait for the master to be ready
[ ! -f /run/network/ifenslave.$BOND_MASTER ] &&
    echo "Waiting for bond master $BOND_MASTER to be ready"
while :; do
    if [ -f /run/network/ifenslave.$BOND_MASTER ]; then
        break
    fi
    sleep 0.1
done
"""

*********** Conclusion

Both failure modes can be triggered when the right conditions are in place (like the ones shown above). Running ifupdown without parallel executions (for example, sysv only) can produce an infinite loop during boot. Running parallel ifupdown executions can trigger race conditions between:
1) ifupdown and itself (case 1 in the bug description).
2) ifupdown and the ifenslave script (case 2 in the bug description).
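As a rough sketch of the per-interface locking idea mentioned in [Impact] (the lock path, the 60-second timeout and the surrounding flow are assumptions made for illustration, not the code that actually landed in ifupdown or ifenslave), the whole bring-up of one interface can be serialized behind its own lock file:
"""
#!/bin/sh
# Illustrative only -- not the applied patch.
IFACE="$1"
mkdir -p /run/network

# One lock per interface, held for the whole bring-up (state check,
# if-pre-up.d hooks such as ifenslave/vlan, and the ifstate update), so a
# concurrent "ifup -a" and "ifup $IFACE" serialize instead of racing.
exec 9>"/run/network/ifup-$IFACE.lock"
flock -w 60 -x 9 || { echo "$IFACE is still being configured elsewhere" >&2; exit 1; }

# ... check ifstate, run hooks, mark $IFACE configured ...
# fd 9 (and with it the lock) is released automatically when this script exits.
"""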
2015-10-30 16:38:24 Mathew Hodson ifupdown (Ubuntu): importance Undecided Medium
2015-10-30 16:38:26 Mathew Hodson ifupdown (Ubuntu Precise): importance Undecided Medium
2015-10-30 16:38:29 Mathew Hodson ifupdown (Ubuntu Trusty): importance Undecided Medium
2015-10-30 16:38:32 Mathew Hodson ifupdown (Ubuntu Vivid): importance Undecided Medium
2015-11-09 19:52:07 Stéphane Graber removed subscriber Stéphane Graber
2015-11-10 10:32:06 Dariusz Gadomski attachment added xenial_ifupdown_0.7.54ubuntu2.debdiff https://bugs.launchpad.net/ubuntu/+source/ifupdown/+bug/1337873/+attachment/4515989/+files/xenial_ifupdown_0.7.54ubuntu2.debdiff
2015-11-10 15:08:19 Martin Pitt ifupdown (Ubuntu): status In Progress Fix Committed
2015-11-10 15:59:28 Sebastien Bacher removed subscriber Ubuntu Sponsors Team
2015-11-10 20:17:31 Launchpad Janitor ifupdown (Ubuntu): status Fix Committed Fix Released
2015-11-18 15:30:07 Andrew McDermott bug added subscriber Andrew McDermott
2016-01-05 14:12:22 Martin Pitt nominated for series Ubuntu Wily
2016-01-05 14:12:22 Martin Pitt bug task added ifupdown (Ubuntu Wily)
2016-01-05 14:13:33 Martin Pitt ifupdown (Ubuntu Vivid): status Confirmed Won't Fix
2016-01-05 14:16:38 Martin Pitt ifupdown (Ubuntu Wily): status New In Progress
2016-01-05 14:16:49 Martin Pitt ifupdown (Ubuntu Trusty): status Confirmed In Progress
2016-01-05 14:17:07 Martin Pitt bug task added ifenslave (Ubuntu)
2016-01-05 14:17:21 Martin Pitt ifenslave (Ubuntu Wily): status New Fix Released
2016-01-05 14:17:32 Martin Pitt ifenslave (Ubuntu Vivid): status New Fix Released
2016-01-05 14:19:02 Martin Pitt ifenslave (Ubuntu Trusty): status New In Progress
2016-01-05 14:19:39 Martin Pitt ifenslave (Ubuntu Precise): status New Won't Fix
2016-01-05 14:19:51 Martin Pitt ifupdown (Ubuntu Precise): status Confirmed Won't Fix
2016-01-05 14:20:04 Martin Pitt ifenslave (Ubuntu): status New Fix Released
2016-01-07 21:27:33 Brian Murray ifupdown (Ubuntu Trusty): status In Progress Fix Committed
2016-01-07 21:27:40 Brian Murray bug added subscriber SRU Verification
2016-01-07 21:27:52 Brian Murray tags cts patch cts patch verification-needed
2016-01-07 21:33:46 Brian Murray ifenslave (Ubuntu Trusty): status In Progress Fix Committed
2016-01-08 16:13:07 Brian Murray ifupdown (Ubuntu Wily): status In Progress Fix Committed
2016-01-10 05:39:45 Mathew Hodson ifupdown (Ubuntu Wily): importance Undecided Medium
2016-01-10 05:39:58 Mathew Hodson ifenslave (Ubuntu): importance Undecided Medium
2016-01-10 05:40:08 Mathew Hodson ifenslave (Ubuntu Precise): importance Undecided Medium
2016-01-10 05:40:22 Mathew Hodson ifenslave (Ubuntu Trusty): importance Undecided Medium
2016-01-10 05:40:33 Mathew Hodson ifenslave (Ubuntu Vivid): importance Undecided Medium
2016-01-10 05:40:45 Mathew Hodson ifenslave (Ubuntu Wily): importance Undecided Medium
2016-01-11 13:34:18 Dariusz Gadomski tags cts patch verification-needed patch sts verification-done
2016-01-12 06:15:54 Martin Pitt tags patch sts verification-done patch sts verification-failed
2016-01-12 09:39:45 Dariusz Gadomski attachment added trusty_ifupdown_0.7.47.2ubuntu4.3.debdiff https://bugs.launchpad.net/ubuntu/+source/ifupdown/+bug/1337873/+attachment/4548422/+files/trusty_ifupdown_0.7.47.2ubuntu4.3.debdiff
2016-01-12 09:40:08 Dariusz Gadomski attachment added wily_ifupdown_0.7.54ubuntu3.debdiff https://bugs.launchpad.net/ubuntu/+source/ifupdown/+bug/1337873/+attachment/4548423/+files/wily_ifupdown_0.7.54ubuntu3.debdiff
2016-01-12 10:01:59 Martin Pitt summary Precise, Trusty, Utopic - ifupdown initialization problems caused by race condition ifupdown initialization problems caused by race condition
2016-01-12 10:04:08 Dariusz Gadomski attachment removed wily_ifupdown_0.7.54ubuntu3.debdiff https://bugs.launchpad.net/ubuntu/+source/ifupdown/+bug/1337873/+attachment/4548423/+files/wily_ifupdown_0.7.54ubuntu3.debdiff
2016-01-12 10:04:22 Dariusz Gadomski attachment removed trusty_ifupdown_0.7.47.2ubuntu4.3.debdiff https://bugs.launchpad.net/ubuntu/+source/ifupdown/+bug/1337873/+attachment/4548422/+files/trusty_ifupdown_0.7.47.2ubuntu4.3.debdiff
2016-01-12 10:13:38 Dariusz Gadomski attachment added trusty_ifupdown_0.7.47.2ubuntu4.3.debdiff https://bugs.launchpad.net/ubuntu/+source/ifupdown/+bug/1337873/+attachment/4548430/+files/trusty_ifupdown_0.7.47.2ubuntu4.3.debdiff
2016-01-12 10:14:11 Dariusz Gadomski attachment added wily_ifupdown_0.7.54ubuntu1.2.debdiff https://bugs.launchpad.net/ubuntu/+source/ifupdown/+bug/1337873/+attachment/4548431/+files/wily_ifupdown_0.7.54ubuntu1.2.debdiff
2016-01-12 10:21:02 Aristarkh Zagorodnikov bug added subscriber Aristarkh Zagorodnikov
2016-01-12 11:15:46 Martin Pitt ifupdown (Ubuntu Trusty): status Fix Committed In Progress
2016-01-12 11:15:55 Martin Pitt ifupdown (Ubuntu Wily): status Fix Committed In Progress
2016-01-12 20:05:35 Adam Conrad ifupdown (Ubuntu Trusty): status In Progress Fix Committed
2016-01-12 20:05:59 Adam Conrad ifupdown (Ubuntu Wily): status In Progress Fix Committed
2016-01-12 20:06:36 Adam Conrad tags patch sts verification-failed patch sts verification-needed
2016-01-18 09:25:27 Dariusz Gadomski tags patch sts verification-needed patch sts verification-done
2016-01-21 21:33:55 Brian Murray removed subscriber Ubuntu Stable Release Updates Team
2016-01-21 21:34:14 Launchpad Janitor ifenslave (Ubuntu Trusty): status Fix Committed Fix Released
2016-02-01 14:16:23 Launchpad Janitor ifupdown (Ubuntu Trusty): status Fix Committed Fix Released
2016-02-01 14:17:41 Martin Pitt ifupdown (Ubuntu Wily): status Fix Committed Fix Released
2016-02-22 23:59:09 Swe W Aung attachment added sosreport-SweAung.1337873-20160223104913.tar.gz https://bugs.launchpad.net/ubuntu/+source/ifupdown/+bug/1337873/+attachment/4578447/+files/sosreport-SweAung.1337873-20160223104913.tar.gz
2016-09-29 22:55:11 Dan Streetman bug added subscriber Dan Streetman
2016-12-07 16:01:24 Bug Watch Updater ifupdown (Debian): status New Fix Released