VLAN created on bond fails auto-activation when the parent network bond is updated

Bug #1790098 reported by bugproxy
Affects / Importance / Assigned to:
Ubuntu on IBM z Systems: High, Canonical Foundations Team
network-manager (Ubuntu): High, Skipper Bug Screeners
Bionic: High, Unassigned

Bug Description

Auto-activation of a VLAN created over a network bond fails if the bond is deactivated and reactivated.

Contact Information = Abhiram Kulkarni(<email address hidden>), Mandar Deshpande(<email address hidden>)

---uname output---
Linux S36MANDAR 4.15.0-20-generic #21-Ubuntu SMP Tue Apr 24 06:14:23 UTC 2018 s390x s390x s390x GNU/Linux

Machine Type = s390x

---Debugger---
A debugger is not configured

---Steps to Reproduce---
 1. Created a network bond with static IP address (and no IPv6 address); active backup mode; ARP polling; single slave.

2. Created a VLAN using said network bond with static IPv4 address (and no IPv6 address).

3. Can ping from the appliance to a target on both links (the parent bond and the VLAN).

4. Switched to another slave for the created bond.

5. Can still ping from the appliance to a target via the parent bond; however, cannot ping to the target via the VLAN.

=============================================================================
Detailed steps:

1. Initial setup:
============
root@S36MANDAR:~# nmcli c s
NAME UUID TYPE DEVICE
enc1a80 c3a2037d-60a3-3cb4-9234-45aed55f7093 ethernet enc1a80
enc8f00 2423add6-1464-3765-877c-a214dc497492 ethernet enc8f00
enc1d40 ff2d70f8-130e-3dc6-ab24-1dba07563605 ethernet --

2. Create network bond with one slave:
================================
root@S36MANDAR:~# nmcli c add type bond con-name mybond1 ifname mybond1 ipv4.method disabled ipv6.method ignore
Connection 'mybond1' (4b918a65-43a6-4ec3-b3c4-388ed52b116d) successfully added.

root@S36MANDAR:~# nmcli con add type ethernet ifname enc1d40 master mybond1
Connection 'bond-slave-enc1d40' (cfe4b245-3dda-4f45-b7b4-6d40d144a02f) successfully added.
root@S36MANDAR:~# nmcli con up bond-slave-enc1d40
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/18)
root@S36MANDAR:~# nmcli c s
NAME UUID TYPE DEVICE
bond-slave-enc1d40 cfe4b245-3dda-4f45-b7b4-6d40d144a02f ethernet enc1d40
enc1a80 c3a2037d-60a3-3cb4-9234-45aed55f7093 ethernet enc1a80
enc8f00 2423add6-1464-3765-877c-a214dc497492 ethernet enc8f00
mybond1 4b918a65-43a6-4ec3-b3c4-388ed52b116d bond mybond1
enc1d40 ff2d70f8-130e-3dc6-ab24-1dba07563605 ethernet --

3. Create vlan over mybond1:
=======================
root@S36MANDAR:~# nmcli con add type vlan con-name vlanbond.100 ifname vlanbond.100 dev mybond1 id 100 ipv4.method disabled ipv6.method ignore
Connection 'vlanbond.100' (e054df42-97a0-492b-b2c9-b9571077493e) successfully added.
root@S36MANDAR:~# nmcli c s
NAME UUID TYPE DEVICE
bond-slave-enc1d40 cfe4b245-3dda-4f45-b7b4-6d40d144a02f ethernet enc1d40
enc1a80 c3a2037d-60a3-3cb4-9234-45aed55f7093 ethernet enc1a80
enc8f00 2423add6-1464-3765-877c-a214dc497492 ethernet enc8f00
mybond1 4b918a65-43a6-4ec3-b3c4-388ed52b116d bond mybond1
vlanbond.100 e054df42-97a0-492b-b2c9-b9571077493e vlan vlanbond.100
enc1d40 ff2d70f8-130e-3dc6-ab24-1dba07563605 ethernet --

4. Reactivate bond:
=================
root@S36MANDAR:~# nmcli con up mybond1
Connection successfully activated (master waiting for slaves) (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/30)

root@S36MANDAR:~# nmcli c s
NAME UUID TYPE DEVICE
enc1a80 c3a2037d-60a3-3cb4-9234-45aed55f7093 ethernet enc1a80
enc1d40 ff2d70f8-130e-3dc6-ab24-1dba07563605 ethernet enc1d40
enc8f00 2423add6-1464-3765-877c-a214dc497492 ethernet enc8f00
mybond1 4b918a65-43a6-4ec3-b3c4-388ed52b116d bond mybond1
bond-slave-enc1d40 cfe4b245-3dda-4f45-b7b4-6d40d144a02f ethernet --
encw1810 4ddeb38e-d5f7-3814-abb7-be50b8da874e ethernet --
vlanbond.100 e054df42-97a0-492b-b2c9-b9571077493e vlan --

As seen above, the VLAN (vlanbond.100) did not get activated.

Now, if I manually bring the VLAN connection up, it gets activated:

root@S36MANDAR:~# nmcli con up vlanbond.100
NAME UUID TYPE DEVICE
bond-slave-enc1d40 cfe4b245-3dda-4f45-b7b4-6d40d144a02f ethernet enc1d40
enc1a80 c3a2037d-60a3-3cb4-9234-45aed55f7093 ethernet enc1a80
enc8f00 2423add6-1464-3765-877c-a214dc497492 ethernet enc8f00
mybond1 4b918a65-43a6-4ec3-b3c4-388ed52b116d bond mybond1
vlanbond.100 e054df42-97a0-492b-b2c9-b9571077493e vlan vlanbond.100
enc1d40 ff2d70f8-130e-3dc6-ab24-1dba07563605 ethernet --
encw1810 4ddeb38e-d5f7-3814-abb7-be50b8da874e ethernet --

bugproxy (bugproxy)
tags: added: architecture-s39064 bugnameltc-170894 severity-high targetmilestone-inin1804
Changed in ubuntu:
assignee: nobody → Skipper Bug Screeners (skipper-screen-team)
affects: ubuntu → linux (Ubuntu)
Frank Heimes (fheimes)
Changed in ubuntu-z-systems:
importance: Undecided → High
Frank Heimes (fheimes)
Changed in ubuntu-z-systems:
assignee: nobody → Canonical Foundations Team (canonical-foundations)
affects: linux (Ubuntu) → network-manager (Ubuntu)
Revision history for this message
Steve Langasek (vorlon) wrote :

This output shows that you are using nmcli to manage network connections. Network-manager is not part of the Ubuntu Server platform, it is not included in the images Ubuntu provides for s390x, and it is not the recommended tool for configuring vlans and bonds in Ubuntu.

To show that this is a problem with the Ubuntu Server network stack, please reproduce this problem with netplan(5).

Frank Heimes (fheimes)
Changed in network-manager (Ubuntu):
status: New → Incomplete
Changed in ubuntu-z-systems:
status: New → Incomplete
Revision history for this message
bugproxy (bugproxy) wrote : Comment bridged from LTC Bugzilla

------- Comment From <email address hidden> 2018-09-07 05:39 EDT-------
I agree that we need to switch to netplan, but that is currently not a feasible approach for us.
We will keep you updated once we have anything concrete on it.

We also expect nmcli to work, since we install the package from the Ubuntu repository.

While investigating the case below on RHEL and Ubuntu, I noticed the following behaviour.

Test case:
1) Create bond.
2) Create VLAN.
3) Deactivate bond. # Routes for both the bond and the VLAN get deleted, hence the VLAN also gets deactivated.
4) Activate bond. # Routes get created for the bond but not for the VLAN, hence the VLAN stays deactivated.

In the case of RHEL, the route for the VLAN was also created in step 4, so the VLAN ended up activated.
I monitored the system logs during the above execution and did not notice nmcli logging any explicit activation/deactivation of the VLAN when the bond was deactivated or activated.

So I suspect it is the underlying kernel taking care of some sort of notification events, which is a black box to me.

Could you please help us investigate along these lines and review the case?
What is the expected behaviour with netplan for the above test case?

Revision history for this message
Steve Langasek (vorlon) wrote :

The description for reproducing this says:

> 1. Created a network bond with static IP address (and no IPv6 address); active backup mode; ARP polling; single slave.

> 2. Created a VLAN using said network bond with static IPv4 address (and no IPv6 address).

However, the commands you list out afterwards specify 'ipv4.method disabled'.

Where are you configuring your IP addresses?

Why are you configuring IP addresses on both the bond and the vlan in this case?

Revision history for this message
Steve Langasek (vorlon) wrote :

Here is an example netplan yaml that emulates this configuration in a test container.

network:
    version: 2
    ethernets:
        eth0:
            dhcp4: false
    bonds:
        bond0:
            interfaces: [eth0]
            parameters:
                mode: active-backup
                arp-interval: 10
    vlans:
        bond0.100:
            id: 100
            link: bond0
            addresses: [10.44.49.39/24]
            routes:
              - to: 0.0.0.0/0
                via: 10.44.49.1

Setting 'arp-interval' causes the link to drop after a short period because the lxd bridge for the container isn't configured compatibly; so I've removed that line for testing here.

However, I haven't understood why you have addresses assigned on both the bond and the vlan, or what you're intending to achieve by reconfiguring the bond with nmcli. Here, the links appear to work as expected when managed directly with iproute2.

# ip -4 addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
3: bond0.100@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    inet 10.44.49.39/24 brd 10.44.49.255 scope global bond0.100
       valid_lft forever preferred_lft forever
# ip link set dev bond0 down
# ip -4 addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
# ip link set dev bond0 up
# ip -4 addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
3: bond0.100@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    inet 10.44.49.39/24 brd 10.44.49.255 scope global bond0.100
       valid_lft forever preferred_lft forever
#

I don't see anything here to indicate the kernel is mishandling these interfaces.

Revision history for this message
Steve Langasek (vorlon) wrote :

Note that the above test was with Ubuntu 18.04 on x86. With a preliminary test of Ubuntu 18.10 on s390x, the eth0 interface fails to configure at all. I will investigate further on Monday.

Revision history for this message
Steve Langasek (vorlon) wrote :

Test repeated on an Ubuntu 18.04 s390x container (Ubuntu 18.04 host):

# ip -4 addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
6: bond0.100@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    inet 10.242.92.157/24 brd 10.242.92.255 scope global bond0.100
       valid_lft forever preferred_lft forever
# ip link set dev bond0 down
# ip -4 addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
# ip link set dev bond0 up
# ip -4 addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
6: bond0.100@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    inet 10.242.92.157/24 brd 10.242.92.255 scope global bond0.100
       valid_lft forever preferred_lft forever
# uname -a
Linux enhanced-warthog 4.17.0-9-generic #10-Ubuntu SMP Wed Aug 22 13:31:35 UTC 2018 s390x s390x s390x GNU/Linux
#

No indication of behavior difference on Ubuntu 18.04 s390x.

Revision history for this message
Steve Langasek (vorlon) wrote :

Sorry, that was with the Ubuntu 18.10 host kernel by mistake.

Revision history for this message
Steve Langasek (vorlon) wrote :

# uname -a
Linux rare-stork 4.15.0-33-generic #36-Ubuntu SMP Wed Aug 15 13:42:17 UTC 2018 s390x s390x s390x GNU/Linux
# ip -4 addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
3: bond0.100@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    inet 10.132.106.86/24 brd 10.132.106.255 scope global bond0.100
       valid_lft forever preferred_lft forever
# ip link set dev bond0 down
# ip -4 addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
# ip link set dev bond0 up
# ip -4 addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
3: bond0.100@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    inet 10.132.106.86/24 brd 10.132.106.255 scope global bond0.100
       valid_lft forever preferred_lft forever
#

Same results on a lxd container on a bionic kernel.

Revision history for this message
bugproxy (bugproxy) wrote :

------- Comment From <email address hidden> 2018-09-10 05:16 EDT-------
Vorlon,

In the detailed steps, the commands were simplified.
Let me lay out a simple use case to avoid confusion.

1) Create a bond using the nmcli utility with the following attributes set:
one slave (eth0), mode = active_backup, IPv4 assigned to the bond, say 30.1.2.100/24.

2) Create a VLAN on top of the bond created in step 1, with IPv4 assigned to the VLAN, say 30.1.3.100/24.

3) The ping operation under test works for both 30.1.2.1 and 30.1.3.1.

4) Deactivate the bond. The VLAN also gets deactivated; both 30.1.2.1 and 30.1.3.1 become unreachable, because the IP routes no longer exist once the bond is deactivated. # This is expected, as per my understanding.

5) Activate the bond. THE VLAN DOES NOT GET ACTIVATED AUTOMATICALLY; only 30.1.2.1 is reachable, and the route for the VLAN is not created. # This worked automatically with nmcli on other distros, where the end user does not need to do anything with respect to the VLAN.

6) With nmcli on Ubuntu, we are forced to manually activate the VLAN; the route is then created for the VLAN and 30.1.3.1 becomes reachable.

The above steps can be repeated:
1) without assigning an IP to the bond as well;
2) whenever we want to add or delete a slave from the bond, or change any bond attribute (say primary, primary_reselect, etc.): the bond is deactivated, its attributes are modified, and it is reactivated. In this process the VLAN gets deactivated automatically but does not get auto-activated, resulting in loss of traffic.

So I suspect one of two issues here:
1) nmcli was internally handling this on other distros, i.e. deactivating the VLAN when the bond is deactivated and activating the VLAN when the bond is activated.
OR
2) The underlying kernel networking module takes care of these notifications between bond and VLAN. I suspected this because I did not see any nmcli log entries for automatically deactivating and activating the VLAN.
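One way to help separate these two hypotheses is to watch NetworkManager's own event stream and journal while toggling the bond: if the VLAN state changes show up there, NetworkManager is driving them rather than the kernel. A minimal sketch, assuming a root shell and the connection name mybond1 from this report:

```shell
# Watch NetworkManager events while deactivating/reactivating the bond.
nmcli monitor &                  # prints connection/device state changes
MON=$!
journalctl -fu NetworkManager &  # follow NetworkManager's own journal
LOG=$!

nmcli connection down mybond1    # the VLAN should be reported going down too
sleep 5
nmcli connection up mybond1      # look for (or note the absence of) a VLAN activation event

kill "$MON" "$LOG"
```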

Revision history for this message
Steve Langasek (vorlon) wrote : Re: [Bug 1790098] Comment bridged from LTC Bugzilla

On Mon, Sep 10, 2018 at 09:19:41AM -0000, bugproxy wrote:
> So either i suspect one of the two issues here

> 1)nmcli was internally handling this in case of other distros. i.e
> deactivated the vlan, when bond is deactivated and activate the vlan ,
> when bond is activated.

> OR

> 2)network kernel underlying module takes care of these notifications
> between bond and vlan. I suspected this because, as i did not see any log
> updates with respect to nmcli tool for automatically deactivating and
> activating VLAN

It would be good for us to be able to isolate which of these two cases it
is, since if there is a bug with the kernel driver, that is definitely in
scope for us to ensure we fix.

To establish that this is a kernel bug, we would need to see a reproducer
for it in your existing environment that does not depend on nmcli, but is
instead reproducible using low-level iproute2 commands.
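An iproute2-only reproducer along these lines might look like the following; this is a sketch, assuming root, the slave name enc1d40, and the addresses quoted elsewhere in this report:

```shell
# Build the bond + VLAN with iproute2 only, then toggle the bond
# and check whether the VLAN interface keeps its address and state.
ip link add mybond1 type bond mode active-backup
ip link set enc1d40 down                 # slave must be down before enslaving
ip link set enc1d40 master mybond1
ip link set mybond1 up
ip addr add 30.1.2.100/24 dev mybond1

ip link add link mybond1 name mybond1.100 type vlan id 100
ip link set mybond1.100 up
ip addr add 30.1.3.100/24 dev mybond1.100

ip -4 addr show dev mybond1.100          # record the pre-toggle state
ip link set mybond1 down                 # VLAN drops to LOWERLAYERDOWN
ip link set mybond1 up
ip -4 addr show dev mybond1.100          # compare: does the address/state come back?
```

If the VLAN recovers here but not under NetworkManager, the kernel is not at fault.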

Revision history for this message
bugproxy (bugproxy) wrote : Comment bridged from LTC Bugzilla

------- Comment From <email address hidden> 2018-09-12 05:05 EDT-------
Vorlon,

On investigation, comparing the steps and behaviour between Ubuntu and other distros, I am adding the data below.

Other distros, nmcli behaviour
Deactivation of bond steps via our script/code:
+++++++++++++++++++++++++++++++++++++++++++++++
1) Deactivate slaves [nmcli c down <slave_uuid>]
2) Deactivate bond [nmcli c down <bond_uuid>]
Result: All slaves and the bond get deactivated. VLANs created on top of the bond are automatically deactivated.

Activation of bond steps via our script/code:
+++++++++++++++++++++++++++++++++++++++++++++++
1) Activate slaves [nmcli c up <slave_uuid>]
2) Activate bond [nmcli c up <bond_uuid>]
Result: All slaves and the bond get activated. VLANs created on top of the bond are automatically activated.

On Ubuntu + nmcli:
++++++++++++++++++++
Deactivation of bond:
1) Deactivate slaves [nmcli c down <slave_uuid>]
2) Deactivate bond [nmcli c down <bond_uuid>]
Result: All slaves and the bond get deactivated. VLANs created on top of the bond are automatically deactivated.

Activation of bond:
1) Activate slaves [nmcli c up <slave_uuid>]
2) Activate bond [nmcli c up <bond_uuid>]
Result: All slaves and the bond get activated. VLANs created on top of the bond ##FAIL## to activate.

A workaround to make auto-activation of the VLAN work is as follows:
Deactivation of bond:
1) Deactivate bond [nmcli c down <bond_uuid>]
Result: The bond gets deactivated. Slaves that are part of the bond and VLANs created on top of it are automatically deactivated.

Activation of bond:
1) Activate bond [nmcli c up <bond_uuid>]
Result: The bond gets activated. Slaves that are part of the bond and VLANs created on top of it are automatically activated.

But this workaround will not work on other distros, which explicitly require the slaves to be activated.

I am not sure how I could simulate the above behaviour with another utility such as the ip commands. Any suggestions there?
I thought of using "ip link set bond_device down" as the equivalent of "nmcli c down <bond_uuid>", but they differ: "ip link set bond_device down" only brings the bond link down, whereas "nmcli c down <bond_uuid>" brings the bond device down and also detaches the slaves from the bond.
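For what it's worth, the closest iproute2 approximation of nmcli's bond down/up would therefore have to detach and re-attach the slaves in addition to toggling the link. A sketch, using the device names from this report:

```shell
# Approximate 'nmcli c down mybond1' with iproute2:
ip link set enc1d40 nomaster      # detach the slave from the bond
ip link set mybond1 down          # bring the bond device itself down

# Approximate 'nmcli c up mybond1':
ip link set enc1d40 down              # slave must be down before enslaving
ip link set enc1d40 master mybond1    # re-attach the slave
ip link set mybond1 up
```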

I also notice that the "nmcli c up <bond_device>" behaviour differs between Ubuntu and other distros.
On Ubuntu, it apparently treats all slaves attached to the bond, the bond device itself, and the VLANs created on top of the bond as one entity and activates them all.
On other distros, it treats the slaves as separate, so only the bond device becomes active; slaves and VLANs remain inactive.

So this is probably a bug in one of the distros, or perhaps the expected behaviour of the nmcli component simply differs between distros. Any suggestions/inputs here?

Revision history for this message
Sebastien Bacher (seb128) wrote :

The issue looks like it could be what is fixed by https://gitlab.freedesktop.org/NetworkManager/NetworkManager/commit/34ca1680 (the change is included in 1.10.8, which is not yet in bionic but is currently being SRUed).

Revision history for this message
Frank Heimes (fheimes) wrote :

Since our focus is on netplan only, I'm changing the status to Won't Fix.

Changed in ubuntu-z-systems:
status: Incomplete → Won't Fix
Manoj Iyer (manjo)
Changed in network-manager (Ubuntu):
status: Incomplete → Won't Fix
Revision history for this message
bugproxy (bugproxy) wrote :

------- Comment From <email address hidden> 2019-06-19 03:47 EDT-------
IBM Bugzilla status -> closed, Will not be fixed by Canonical

Revision history for this message
Timo Aaltonen (tjaalton) wrote : Please test proposed package

Hello bugproxy, or anyone else affected,

Accepted network-manager into bionic-proposed. The package will build now and be available at https://launchpad.net/ubuntu/+source/network-manager/1.10.6-2ubuntu1.2 in a few hours, and then in the -proposed repository.

Please help us by testing this new package. See https://wiki.ubuntu.com/Testing/EnableProposed for documentation on how to enable and use -proposed. Your feedback will aid us getting this update out to other Ubuntu users.

If this package fixes the bug for you, please add a comment to this bug, mentioning the version of the package you tested and change the tag from verification-needed-bionic to verification-done-bionic. If it does not fix the bug for you, please add a comment stating that, and change the tag to verification-failed-bionic. In either case, without details of your testing we will not be able to proceed.

Further information regarding the verification process can be found at https://wiki.ubuntu.com/QATeam/PerformingSRUVerification . Thank you in advance for helping!

N.B. The updated package will be released to -updates after the bug(s) fixed by this package have been verified and the package has been in -proposed for a minimum of 7 days.
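For reference, one way to pull just this package from -proposed on bionic looks roughly like this (a sketch; on s390x the archive host is ports.ubuntu.com rather than archive.ubuntu.com, and the wiki page linked above describes safer pinning in more detail):

```shell
# Enable bionic-proposed and install only network-manager from it.
echo 'deb http://archive.ubuntu.com/ubuntu bionic-proposed main universe' \
    | sudo tee /etc/apt/sources.list.d/bionic-proposed.list
sudo apt update
sudo apt install network-manager/bionic-proposed
apt policy network-manager    # verify the -proposed version is now installed
```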

Changed in network-manager (Ubuntu Bionic):
status: New → Fix Committed
tags: added: verification-needed verification-needed-bionic
Revision history for this message
Ubuntu SRU Bot (ubuntu-sru-bot) wrote : Autopkgtest regression report (network-manager/1.10.6-2ubuntu1.2)

All autopkgtests for the newly accepted network-manager (1.10.6-2ubuntu1.2) for bionic have finished running.
The following regressions have been reported in tests triggered by the package:

network-manager/1.10.6-2ubuntu1.2 (amd64, arm64, i386, ppc64el)
systemd/237-3ubuntu10.31 (i386, ppc64el)
netplan.io/0.98-0ubuntu1~18.04.1 (arm64)

Please visit the excuses page listed below and investigate the failures, proceeding afterwards as per the StableReleaseUpdates policy regarding autopkgtest regressions [1].

https://people.canonical.com/~ubuntu-archive/proposed-migration/bionic/update_excuses.html#network-manager

[1] https://wiki.ubuntu.com/StableReleaseUpdates#Autopkgtest_Regressions

Thank you!

Mathew Hodson (mhodson)
Changed in network-manager (Ubuntu):
status: Won't Fix → Fix Released
Revision history for this message
bugproxy (bugproxy) wrote : Comment bridged from LTC Bugzilla

------- Comment From <email address hidden> 2019-10-29 03:59 EDT-------
Tested the below scenario:

1)Create a bond network device
2)Create a vlan on top of above bond network device
3)Deactivate the bond
4)VLAN DEVICE ALSO GOES INTO DEACTIVATED state automatically
5)Activate bond
6)With the fix in, VLAN ALSO NOW GETS ACTIVATED AUTOMATICALLY

This defect fix is verified.

Dan Streetman (ddstreet)
tags: added: verification-done verification-done-bionic
removed: verification-needed verification-needed-bionic
Frank Heimes (fheimes)
Changed in ubuntu-z-systems:
status: Won't Fix → Fix Committed
Mathew Hodson (mhodson)
Changed in network-manager (Ubuntu Bionic):
importance: Undecided → High
Changed in network-manager (Ubuntu):
importance: Undecided → High
Revision history for this message
Launchpad Janitor (janitor) wrote :

This bug was fixed in the package network-manager - 1.10.6-2ubuntu1.2

---------------
network-manager (1.10.6-2ubuntu1.2) bionic; urgency=medium

  [ Till Kamppeter ]
  * debian/tests/nm: Add gi.require_version() calls for NetworkManager
    and NMClient to avoid stderr output which fails the test. (LP: #1825946)

  [ Dariusz Gadomski ]
  * d/p/fix-dns-leak-lp1754671.patch: backport of DNS leak fix. (LP: #1754671)
  * d/p/lp1790098.patch: retry activating devices when the parent becomes
    managed. (LP: #1790098)

 -- Dariusz Gadomski <email address hidden> Sat, 07 Sep 2019 16:10:59 +0200

Changed in network-manager (Ubuntu Bionic):
status: Fix Committed → Fix Released
Revision history for this message
Łukasz Zemczak (sil2100) wrote : Update Released

The verification of the Stable Release Update for network-manager has completed successfully and the package is now being released to -updates. Subsequently, the Ubuntu Stable Release Updates Team is being unsubscribed and will not receive messages about this bug report. In the event that you encounter a regression using the package from -updates please report a new bug using ubuntu-bug and tag the bug report regression-update so we can easily find any regressions.

Frank Heimes (fheimes)
Changed in ubuntu-z-systems:
status: Fix Committed → Fix Released
Revision history for this message
bugproxy (bugproxy) wrote : Comment bridged from LTC Bugzilla

------- Comment From <email address hidden> 2020-02-11 04:54 EDT-------
IBM Bugzilla status -> closed, Fix Released with bionic
