2022-09-16 16:22:08 |
Narendra K |
bug |
|
|
added bug |
2022-09-16 16:22:28 |
Narendra K |
information type |
Public |
Private |
|
2022-09-16 16:23:02 |
Narendra K |
bug |
|
|
added subscriber Michael Reed |
2022-09-19 18:05:52 |
Michael Reed |
bug |
|
|
added subscriber Adrian Lane |
2022-09-19 18:06:32 |
Michael Reed |
bug |
|
|
added subscriber Rod Smith |
2022-09-19 18:10:57 |
Michael Reed |
bug |
|
|
added subscriber Jeff Lane |
2022-09-21 13:49:43 |
Michael Reed |
bug |
|
|
added subscriber Vinay HM |
2022-09-21 13:49:55 |
Michael Reed |
bug |
|
|
added subscriber Gordon Bookless |
2022-09-21 13:50:09 |
Michael Reed |
linux (Ubuntu): assignee |
|
Michael Reed (mreed8855) |
|
2022-09-23 20:57:39 |
Michael Reed |
description |
An Ubuntu 22.04 host fails to reconnect to the NVMe TCP target after a link-down event if the number of queues has changed while the link was down.
The following upstream patch set addresses the issue.
1. nvmet: Expose max queues to configfs
https://git.infradead.org/nvme.git/commit/2c4282742d049e2a5ab874e2b359a2421b9377c2
2. nvme-tcp: Handle number of queue changes
https://git.infradead.org/nvme.git/commit/516204e486a19d03962c2757ef49782e6c1cacf4
3. nvme-rdma: Handle number of queue changes
https://git.infradead.org/nvme.git/commit/e800278c1dc97518eab1970f8f58a5aad52b0f86
The patch in point 2 above addresses the failure to reconnect in the NVMe TCP scenario.
In addition, the following patch addresses an error-code parsing issue in the reconnect sequence:
nvme-fabrics: parse nvme connect Linux error codes
https://git.infradead.org/nvme.git/commit/ec9e96b5230148294c7abcaf3a4c592d3720b62d |
An Ubuntu 22.04 host fails to reconnect to the NVMe TCP target after a link-down event if the number of queues has changed while the link was down.
The following upstream patch set addresses the issue.
1. nvmet: Expose max queues to configfs
https://git.infradead.org/nvme.git/commit/2c4282742d049e2a5ab874e2b359a2421b9377c2
2. nvme-tcp: Handle number of queue changes
https://git.infradead.org/nvme.git/commit/516204e486a19d03962c2757ef49782e6c1cacf4
3. nvme-rdma: Handle number of queue changes
https://git.infradead.org/nvme.git/commit/e800278c1dc97518eab1970f8f58a5aad52b0f86
The patch in point 2 above addresses the failure to reconnect in the NVMe TCP scenario.
In addition, the following patch addresses an error-code parsing issue in the reconnect sequence:
nvme-fabrics: parse nvme connect Linux error codes
https://git.infradead.org/nvme.git/commit/ec9e96b5230148294c7abcaf3a4c592d3720b62d
Test Kernel Source
https://code.launchpad.net/~mreed8855/ubuntu/+source/linux/+git/jammy/+ref/lp_1989990_nvme_tcp |
|
2022-09-29 19:26:11 |
Michael Reed |
description |
An Ubuntu 22.04 host fails to reconnect to the NVMe TCP target after a link-down event if the number of queues has changed while the link was down.
The following upstream patch set addresses the issue.
1. nvmet: Expose max queues to configfs
https://git.infradead.org/nvme.git/commit/2c4282742d049e2a5ab874e2b359a2421b9377c2
2. nvme-tcp: Handle number of queue changes
https://git.infradead.org/nvme.git/commit/516204e486a19d03962c2757ef49782e6c1cacf4
3. nvme-rdma: Handle number of queue changes
https://git.infradead.org/nvme.git/commit/e800278c1dc97518eab1970f8f58a5aad52b0f86
The patch in point 2 above addresses the failure to reconnect in the NVMe TCP scenario.
In addition, the following patch addresses an error-code parsing issue in the reconnect sequence:
nvme-fabrics: parse nvme connect Linux error codes
https://git.infradead.org/nvme.git/commit/ec9e96b5230148294c7abcaf3a4c592d3720b62d
Test Kernel Source
https://code.launchpad.net/~mreed8855/ubuntu/+source/linux/+git/jammy/+ref/lp_1989990_nvme_tcp |
[Impact]
An Ubuntu 22.04 host fails to reconnect to the NVMe TCP target after a link-down event if the number of queues has changed while the link was down.
[Fix]
The following upstream patch set addresses the issue.
1. nvmet: Expose max queues to configfs
https://git.infradead.org/nvme.git/commit/2c4282742d049e2a5ab874e2b359a2421b9377c2
2. nvme-tcp: Handle number of queue changes
https://git.infradead.org/nvme.git/commit/516204e486a19d03962c2757ef49782e6c1cacf4
3. nvme-rdma: Handle number of queue changes
https://git.infradead.org/nvme.git/commit/e800278c1dc97518eab1970f8f58a5aad52b0f86
The patch in point 2 above addresses the failure to reconnect in the NVMe TCP scenario.
In addition, the following patch addresses an error-code parsing issue in the reconnect sequence:
nvme-fabrics: parse nvme connect Linux error codes
https://git.infradead.org/nvme.git/commit/ec9e96b5230148294c7abcaf3a4c592d3720b62d
[Test Plan]
[Where problems could occur]
[Other Info]
Test Kernel Source
https://code.launchpad.net/~mreed8855/ubuntu/+source/linux/+git/jammy/+ref/lp_1989990_nvme_tcp |
|
2022-09-29 19:26:25 |
Michael Reed |
summary |
Ubuntu 22.04 - NVMe TCP - Host fails to reconnect to target after link down/link up sequence |
[SRU]Ubuntu 22.04 - NVMe TCP - Host fails to reconnect to target after link down/link up sequence |
|
2022-09-29 19:26:44 |
Michael Reed |
summary |
[SRU]Ubuntu 22.04 - NVMe TCP - Host fails to reconnect to target after link down/link up sequence |
[SRU] Ubuntu 22.04 - NVMe TCP - Host fails to reconnect to target after link down/link up sequence |
|
2022-09-29 19:26:51 |
Michael Reed |
nominated for series |
|
Ubuntu Jammy |
|
2022-09-29 19:26:51 |
Michael Reed |
bug task added |
|
linux (Ubuntu Jammy) |
|
2022-09-29 19:26:59 |
Michael Reed |
linux (Ubuntu Jammy): assignee |
|
Michael Reed (mreed8855) |
|
2022-09-29 19:27:04 |
Michael Reed |
linux (Ubuntu Jammy): status |
New |
In Progress |
|
2022-09-29 19:27:07 |
Michael Reed |
linux (Ubuntu): status |
New |
In Progress |
|
2022-11-14 11:29:18 |
Narendra K |
description |
[Impact]
An Ubuntu 22.04 host fails to reconnect to the NVMe TCP target after a link-down event if the number of queues has changed while the link was down.
[Fix]
The following upstream patch set addresses the issue.
1. nvmet: Expose max queues to configfs
https://git.infradead.org/nvme.git/commit/2c4282742d049e2a5ab874e2b359a2421b9377c2
2. nvme-tcp: Handle number of queue changes
https://git.infradead.org/nvme.git/commit/516204e486a19d03962c2757ef49782e6c1cacf4
3. nvme-rdma: Handle number of queue changes
https://git.infradead.org/nvme.git/commit/e800278c1dc97518eab1970f8f58a5aad52b0f86
The patch in point 2 above addresses the failure to reconnect in the NVMe TCP scenario.
In addition, the following patch addresses an error-code parsing issue in the reconnect sequence:
nvme-fabrics: parse nvme connect Linux error codes
https://git.infradead.org/nvme.git/commit/ec9e96b5230148294c7abcaf3a4c592d3720b62d
[Test Plan]
[Where problems could occur]
[Other Info]
Test Kernel Source
https://code.launchpad.net/~mreed8855/ubuntu/+source/linux/+git/jammy/+ref/lp_1989990_nvme_tcp |
[Impact]
An Ubuntu 22.04 host fails to reconnect to the NVMe TCP target after a link-down event if the number of queues has changed while the link was down.
[Fix]
The following upstream patch set addresses the issue.
1. nvmet: Expose max queues to configfs
https://git.infradead.org/nvme.git/commit/2c4282742d049e2a5ab874e2b359a2421b9377c2
2. nvme-tcp: Handle number of queue changes
https://git.infradead.org/nvme.git/commit/516204e486a19d03962c2757ef49782e6c1cacf4
3. nvme-rdma: Handle number of queue changes
https://git.infradead.org/nvme.git/commit/e800278c1dc97518eab1970f8f58a5aad52b0f86
The patch in point 2 above addresses the failure to reconnect in the NVMe TCP scenario.
In addition, the following patch addresses an error-code parsing issue in the reconnect sequence:
nvme-fabrics: parse nvme connect Linux error codes
https://git.infradead.org/nvme.git/commit/ec9e96b5230148294c7abcaf3a4c592d3720b62d
[Test Plan]
1. Boot into an Ubuntu 22.04 kernel without the fix.
2. Establish connections to the PowerStore target and create more than 70 NVMe controllers (> 64 controllers):
nvme connect -t tcp -a <target address> -n <target nqn> -D
Observe that NVMe controllers beyond 64 are assigned 8 queues.
3. Delete a few controllers so that the total number of controllers drops below 64. This makes a higher number of queues available to the remaining NVMe controllers:
nvme disconnect -d <nvme controller>
4. Toggle the NIC link, bringing it back up after 10 seconds.
5. Observe that the connection to the target is lost and that, after the link comes up, the host controller tries to re-establish the connection.
6. With the patch, reconnection succeeds with the higher number of queues.
[Where problems could occur]
Regression risk is low.
[Other Info]
Test Kernel Source
https://code.launchpad.net/~mreed8855/ubuntu/+source/linux/+git/jammy/+ref/lp_1989990_nvme_tcp |
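The controller scale-up/scale-down portion of the test plan above can be sketched as a POSIX shell helper. The target address, NQN, controller count, and device names are hypothetical placeholders, and the `run` helper only prints each command instead of executing it, so the sketch is safe to run outside a real test bed.

```shell
#!/bin/sh
# Hypothetical placeholders -- substitute values from the real test bed.
TARGET_ADDR="${TARGET_ADDR:-10.0.0.5}"
TARGET_NQN="${TARGET_NQN:-nqn.2014-08.org.example:subsys1}"
NUM_CTRL="${NUM_CTRL:-70}"      # step 2 asks for more than 64 controllers

run() { echo "+ $*"; }          # print each command rather than executing it

# Step 2: repeat the connect so the host ends up with > 64 controllers to
# the same subsystem (-D/--duplicate-connect permits duplicate connections).
i=1
while [ "$i" -le "$NUM_CTRL" ]; do
    run nvme connect -t tcp -a "$TARGET_ADDR" -n "$TARGET_NQN" -D
    i=$((i + 1))
done
# Check the per-controller queue counts, e.g. in the kernel log
# (the nvme-tcp driver reports the I/O queues it creates).
run dmesg

# Step 3: disconnect a few controllers so the total drops below 64 and more
# queues become available to the rest (device names are examples only).
for ctrl in /dev/nvme64 /dev/nvme65; do
    run nvme disconnect -d "$ctrl"
done
```

Removing the `run` wrapper turns the sketch into the real sequence, at which point it requires root privileges and an actual NVMe TCP target.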
|
2022-11-14 11:29:57 |
Narendra K |
description |
[Impact]
An Ubuntu 22.04 host fails to reconnect to the NVMe TCP target after a link-down event if the number of queues has changed while the link was down.
[Fix]
The following upstream patch set addresses the issue.
1. nvmet: Expose max queues to configfs
https://git.infradead.org/nvme.git/commit/2c4282742d049e2a5ab874e2b359a2421b9377c2
2. nvme-tcp: Handle number of queue changes
https://git.infradead.org/nvme.git/commit/516204e486a19d03962c2757ef49782e6c1cacf4
3. nvme-rdma: Handle number of queue changes
https://git.infradead.org/nvme.git/commit/e800278c1dc97518eab1970f8f58a5aad52b0f86
The patch in point 2 above addresses the failure to reconnect in the NVMe TCP scenario.
In addition, the following patch addresses an error-code parsing issue in the reconnect sequence:
nvme-fabrics: parse nvme connect Linux error codes
https://git.infradead.org/nvme.git/commit/ec9e96b5230148294c7abcaf3a4c592d3720b62d
[Test Plan]
1. Boot into an Ubuntu 22.04 kernel without the fix.
2. Establish connections to the PowerStore target and create more than 70 NVMe controllers (> 64 controllers):
nvme connect -t tcp -a <target address> -n <target nqn> -D
Observe that NVMe controllers beyond 64 are assigned 8 queues.
3. Delete a few controllers so that the total number of controllers drops below 64. This makes a higher number of queues available to the remaining NVMe controllers:
nvme disconnect -d <nvme controller>
4. Toggle the NIC link, bringing it back up after 10 seconds.
5. Observe that the connection to the target is lost and that, after the link comes up, the host controller tries to re-establish the connection.
6. With the patch, reconnection succeeds with the higher number of queues.
[Where problems could occur]
Regression risk is low.
[Other Info]
Test Kernel Source
https://code.launchpad.net/~mreed8855/ubuntu/+source/linux/+git/jammy/+ref/lp_1989990_nvme_tcp |
[Impact]
An Ubuntu 22.04 host fails to reconnect to the NVMe TCP target after a link-down event if the number of queues has changed while the link was down.
[Fix]
The following upstream patch set addresses the issue.
1. nvmet: Expose max queues to configfs
https://git.infradead.org/nvme.git/commit/2c4282742d049e2a5ab874e2b359a2421b9377c2
2. nvme-tcp: Handle number of queue changes
https://git.infradead.org/nvme.git/commit/516204e486a19d03962c2757ef49782e6c1cacf4
3. nvme-rdma: Handle number of queue changes
https://git.infradead.org/nvme.git/commit/e800278c1dc97518eab1970f8f58a5aad52b0f86
The patch in point 2 above addresses the failure to reconnect in the NVMe TCP scenario.
In addition, the following patch addresses an error-code parsing issue in the reconnect sequence:
nvme-fabrics: parse nvme connect Linux error codes
https://git.infradead.org/nvme.git/commit/ec9e96b5230148294c7abcaf3a4c592d3720b62d
[Test Plan]
1. Boot into an Ubuntu 22.04 kernel without the fix.
2. Establish connections to the PowerStore target and create more than 70 NVMe controllers (> 64 controllers):
nvme connect -t tcp -a <target address> -n <target nqn> -D
Observe that NVMe controllers beyond 64 are assigned 8 queues.
3. Delete a few controllers so that the total number of controllers drops below 64. This makes a higher number of queues available to the remaining NVMe controllers:
nvme disconnect -d <nvme controller>
4. Toggle the NIC link, bringing it back up after 10 seconds.
5. Observe that the connection to the target is lost and that, after the link comes up, the host controller tries to re-establish the connection.
6. With the patch, reconnection succeeds with the higher number of queues.
[Where problems could occur]
Regression risk is low to medium.
[Other Info]
Test Kernel Source
https://code.launchpad.net/~mreed8855/ubuntu/+source/linux/+git/jammy/+ref/lp_1989990_nvme_tcp |
|
2022-11-15 03:38:37 |
Michael Reed |
description |
[Impact]
An Ubuntu 22.04 host fails to reconnect to the NVMe TCP target after a link-down event if the number of queues has changed while the link was down.
[Fix]
The following upstream patch set addresses the issue.
1. nvmet: Expose max queues to configfs
https://git.infradead.org/nvme.git/commit/2c4282742d049e2a5ab874e2b359a2421b9377c2
2. nvme-tcp: Handle number of queue changes
https://git.infradead.org/nvme.git/commit/516204e486a19d03962c2757ef49782e6c1cacf4
3. nvme-rdma: Handle number of queue changes
https://git.infradead.org/nvme.git/commit/e800278c1dc97518eab1970f8f58a5aad52b0f86
The patch in point 2 above addresses the failure to reconnect in the NVMe TCP scenario.
In addition, the following patch addresses an error-code parsing issue in the reconnect sequence:
nvme-fabrics: parse nvme connect Linux error codes
https://git.infradead.org/nvme.git/commit/ec9e96b5230148294c7abcaf3a4c592d3720b62d
[Test Plan]
1. Boot into an Ubuntu 22.04 kernel without the fix.
2. Establish connections to the PowerStore target and create more than 70 NVMe controllers (> 64 controllers):
nvme connect -t tcp -a <target address> -n <target nqn> -D
Observe that NVMe controllers beyond 64 are assigned 8 queues.
3. Delete a few controllers so that the total number of controllers drops below 64. This makes a higher number of queues available to the remaining NVMe controllers:
nvme disconnect -d <nvme controller>
4. Toggle the NIC link, bringing it back up after 10 seconds.
5. Observe that the connection to the target is lost and that, after the link comes up, the host controller tries to re-establish the connection.
6. With the patch, reconnection succeeds with the higher number of queues.
[Where problems could occur]
Regression risk is low to medium.
[Other Info]
Test Kernel Source
https://code.launchpad.net/~mreed8855/ubuntu/+source/linux/+git/jammy/+ref/lp_1989990_nvme_tcp |
[Impact]
An Ubuntu 22.04 host fails to reconnect to the NVMe TCP target after a link-down event if the number of queues has changed while the link was down.
[Fix]
The following upstream patch set addresses the issue.
1. nvmet: Expose max queues to configfs
https://git.infradead.org/nvme.git/commit/2c4282742d049e2a5ab874e2b359a2421b9377c2
2. nvme-tcp: Handle number of queue changes
https://git.infradead.org/nvme.git/commit/516204e486a19d03962c2757ef49782e6c1cacf4
3. nvme-rdma: Handle number of queue changes
https://git.infradead.org/nvme.git/commit/e800278c1dc97518eab1970f8f58a5aad52b0f86
The patch in point 2 above addresses the failure to reconnect in the NVMe TCP scenario.
In addition, the following patch addresses an error-code parsing issue in the reconnect sequence:
nvme-fabrics: parse nvme connect Linux error codes
https://git.infradead.org/nvme.git/commit/ec9e96b5230148294c7abcaf3a4c592d3720b62d
[Test Plan]
1. Boot into an Ubuntu 22.04 kernel without the fix.
2. Establish a connection to the NVMe TCP target.
3. Toggle the NIC link, bringing it back up after 10 seconds. While the link is down, increase the number of queues assigned to the controller on the target.
4. Observe that the connection to the target is lost and that, after the link comes up, the host controller tries to re-establish the connection.
5. With the patch, reconnection succeeds with the higher number of queues.
[Where problems could occur]
Regression risk is low to medium.
[Other Info]
Test Kernel Source
https://code.launchpad.net/~mreed8855/ubuntu/+source/linux/+git/jammy/+ref/lp_1989990_nvme_tcp |
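The revised link-toggle test plan can likewise be sketched in POSIX shell. The NIC name, target address, and NQN are hypothetical placeholders; the target-side queue-count change happens through the array's own management interface and is not shown, and the `run` helper prints each command instead of executing it.

```shell
#!/bin/sh
# Hypothetical placeholders -- substitute values from the real test bed.
TARGET_ADDR="${TARGET_ADDR:-10.0.0.5}"
TARGET_NQN="${TARGET_NQN:-nqn.2014-08.org.example:subsys1}"
NIC="${NIC:-eth1}"              # host NIC carrying the NVMe TCP traffic
LINK_DOWN_SECS=10

run() { echo "+ $*"; }          # print each command rather than executing it

# Step 2: connect to the NVMe TCP target.
run nvme connect -t tcp -a "$TARGET_ADDR" -n "$TARGET_NQN"

# Step 3: take the link down for 10 seconds; while it is down, the number
# of queues assigned to the controller is changed on the target side.
run ip link set "$NIC" down
run sleep "$LINK_DOWN_SECS"
run ip link set "$NIC" up

# Steps 4-5: confirm the host re-establishes the connection; controller
# state is visible via nvme-cli, and the reconnect attempts (and the new
# I/O queue count) appear in the kernel log.
run nvme list-subsys
run dmesg
```

With the `run` wrapper removed, the sequence needs root privileges, a real target, and a NIC that can safely be taken down for the test window.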
|
2022-11-15 03:38:58 |
Michael Reed |
information type |
Private |
Public |
|
2022-11-16 09:56:48 |
Stefan Bader |
linux (Ubuntu Jammy): importance |
Undecided |
Medium |
|
2022-11-16 09:57:02 |
Stefan Bader |
linux (Ubuntu): status |
In Progress |
Invalid |
|
2023-01-10 01:31:36 |
Ubuntu Kernel Bot |
tags |
|
kernel-spammed-jammy-linux verification-needed-jammy |
|
2023-01-16 11:23:50 |
Narendra K |
tags |
kernel-spammed-jammy-linux verification-needed-jammy |
kernel-spammed-jammy-linux verification-done-jammy |
|
2023-02-08 17:37:38 |
Launchpad Janitor |
linux (Ubuntu Jammy): status |
In Progress |
Fix Released |
|
2023-02-08 17:37:38 |
Launchpad Janitor |
cve linked |
|
2022-47940 |
|
2023-02-11 23:12:35 |
Ubuntu Kernel Bot |
tags |
kernel-spammed-jammy-linux verification-done-jammy |
kernel-spammed-jammy-linux kernel-spammed-jammy-linux-azure verification-needed-jammy |
|
2023-02-13 17:14:14 |
Ubuntu Kernel Bot |
tags |
kernel-spammed-jammy-linux kernel-spammed-jammy-linux-azure verification-needed-jammy |
kernel-spammed-jammy-linux kernel-spammed-jammy-linux-aws kernel-spammed-jammy-linux-azure verification-needed-jammy |
|
2023-02-22 11:38:41 |
Ubuntu Kernel Bot |
tags |
kernel-spammed-jammy-linux kernel-spammed-jammy-linux-aws kernel-spammed-jammy-linux-azure verification-needed-jammy |
kernel-spammed-jammy-linux kernel-spammed-jammy-linux-aws kernel-spammed-jammy-linux-azure kernel-spammed-jammy-linux-realtime verification-needed-jammy |
|
2023-09-09 14:17:04 |
Ubuntu Kernel Bot |
tags |
kernel-spammed-jammy-linux kernel-spammed-jammy-linux-aws kernel-spammed-jammy-linux-azure kernel-spammed-jammy-linux-realtime verification-needed-jammy |
kernel-spammed-focal-linux-aws-5.15-v2 kernel-spammed-jammy-linux kernel-spammed-jammy-linux-aws kernel-spammed-jammy-linux-azure kernel-spammed-jammy-linux-realtime verification-needed-focal-linux-aws-5.15 verification-needed-jammy |
|
2024-03-01 06:17:25 |
Ubuntu Kernel Bot |
tags |
kernel-spammed-focal-linux-aws-5.15-v2 kernel-spammed-jammy-linux kernel-spammed-jammy-linux-aws kernel-spammed-jammy-linux-azure kernel-spammed-jammy-linux-realtime verification-needed-focal-linux-aws-5.15 verification-needed-jammy |
kernel-spammed-focal-linux-aws-5.15-v2 kernel-spammed-jammy-linux kernel-spammed-jammy-linux-aws kernel-spammed-jammy-linux-azure kernel-spammed-jammy-linux-mtk-v2 kernel-spammed-jammy-linux-realtime verification-needed-focal-linux-aws-5.15 verification-needed-jammy verification-needed-jammy-linux-mtk |
|