Activity log for bug #1633120

Date Who What changed Old value New value Message
2016-10-13 15:37:09 Chinmaya Dwibedy bug added bug
2016-10-14 06:53:37 Prateek Arora bug added subscriber Prateek Arora
2016-11-04 23:37:19 Jon Proulx nova: status New Confirmed
2016-11-18 20:32:02 Maciej Szankin tags pci
2017-05-29 05:34:55 Dominique Poulain bug added subscriber Dominique Poulain
2017-06-02 11:58:44 Frode Nordahl summary Nova scheduler tries to assign an already-in-use SRIOV QAT VF to a new instance (openstack-mitaka) Nova scheduler tries to assign an already-in-use SRIOV QAT VF to a new instance
2017-06-27 15:59:10 Sean Dague tags pci openstack-version.mitaka pci
2018-12-19 15:48:29 Matt Riedemann nova: importance Undecided High
2018-12-19 15:48:36 Matt Riedemann nova: assignee sean mooney (sean-k-mooney)
2018-12-19 15:48:47 Matt Riedemann nominated for series nova/queens
2018-12-19 15:48:47 Matt Riedemann bug task added nova/queens
2018-12-19 15:48:47 Matt Riedemann nominated for series nova/rocky
2018-12-19 15:48:47 Matt Riedemann bug task added nova/rocky
2018-12-19 15:48:47 Matt Riedemann nominated for series nova/ocata
2018-12-19 15:48:47 Matt Riedemann bug task added nova/ocata
2018-12-19 15:48:47 Matt Riedemann nominated for series nova/pike
2018-12-19 15:48:47 Matt Riedemann bug task added nova/pike
2018-12-19 15:48:56 Matt Riedemann nova/ocata: status New Triaged
2018-12-19 15:48:58 Matt Riedemann nova: status Confirmed Triaged
2018-12-19 15:49:01 Matt Riedemann nova/pike: status New Triaged
2018-12-19 15:49:03 Matt Riedemann nova/queens: status New Triaged
2018-12-19 15:49:05 Matt Riedemann nova/rocky: status New Triaged
2018-12-19 15:49:08 Matt Riedemann nova/ocata: importance Undecided Medium
2018-12-19 15:49:11 Matt Riedemann nova/queens: importance Undecided Medium
2018-12-19 15:49:14 Matt Riedemann nova: importance High Medium
2018-12-19 15:49:18 Matt Riedemann nova/pike: importance Undecided Medium
2018-12-19 15:49:22 Matt Riedemann nova/rocky: importance Undecided Medium
2018-12-19 19:58:49 OpenStack Infra nova: status Triaged In Progress
2019-02-05 23:17:51 OpenStack Infra nova: status In Progress Fix Released
2019-02-05 23:29:47 OpenStack Infra nova/rocky: status Triaged In Progress
2019-02-05 23:29:47 OpenStack Infra nova/rocky: assignee sean mooney (sean-k-mooney)
2019-02-05 23:30:05 OpenStack Infra nova/queens: status Triaged In Progress
2019-02-05 23:30:05 OpenStack Infra nova/queens: assignee sean mooney (sean-k-mooney)
2019-02-05 23:38:41 OpenStack Infra nova/pike: status Triaged In Progress
2019-02-05 23:38:41 OpenStack Infra nova/pike: assignee sean mooney (sean-k-mooney)
2019-02-05 23:39:11 OpenStack Infra nova/ocata: status Triaged In Progress
2019-02-05 23:39:11 OpenStack Infra nova/ocata: assignee sean mooney (sean-k-mooney)
2019-02-11 17:17:27 OpenStack Infra nova/rocky: status In Progress Fix Committed
2019-02-23 00:43:50 OpenStack Infra nova/queens: status In Progress Fix Committed
2019-03-13 14:57:44 OpenStack Infra nova/pike: status In Progress Fix Committed
2019-04-06 09:55:58 OpenStack Infra nova/ocata: status In Progress Fix Committed
2019-05-03 01:36:20 Mike Joseph bug added subscriber Mike Joseph
2019-07-04 15:31:55 Edward Hope-Morley summary Nova scheduler tries to assign an already-in-use SRIOV QAT VF to a new instance [SRU] Nova scheduler tries to assign an already-in-use SRIOV QAT VF to a new instance
2019-07-04 15:36:27 Edward Hope-Morley description updated: SRU sections ([Impact], [Test Case], [Regression Potential]) prepended; the original report (the old value) is retained unchanged below the separator. New value:

[Impact]
This patch is required to prevent nova from accidentally marking pci_device allocations as deleted when it incorrectly reads the passthrough whitelist

[Test Case]
 * deploy openstack (any version that supports sriov)
 * single compute configured for sriov with at least one device in pci_passthrough_whitelist
 * create a vm and attach sriov port
 * remove device from pci_passthrough_whitelist and restart nova-compute
 * check that pci_devices allocations have not been marked as deleted

[Regression Potential]
None anticipated

----------------------------------------------------------------------------

Upon trying to create a VM instance (say A) with one QAT VF, it fails with the following error: "Requested operation is not valid: PCI device 0000:88:04.7 is in use by driver QEMU, domain instance-00000081". Note that PCI device 0000:88:04.7 is already assigned to another VM (say B).

We have installed the OpenStack Mitaka release on a CentOS 7 system. It has two Intel QAT devices, with 32 VF devices available per QAT/DH895xCC device. Of the 64 VFs, only 8 are allocated to VM instances; the rest should be available. But the nova scheduler tries to assign an already-in-use SR-IOV VF to a new instance, and the instance fails. It appears that the nova database is not tracking which VFs have already been taken. If I shut down instance B, then instance A boots up, and vice versa; the two instances cannot run simultaneously because of this issue. We should always be able to create as many instances with the requested PCI devices as there are available VFs. Please feel free to let me know if additional information is needed. Can anyone suggest why nova tries to assign a PCI device that has already been assigned? Is there any way to resolve this issue? Thank you in advance for your support and help.

[root@localhost ~(keystone_admin)]# lspci -d:435
83:00.0 Co-processor: Intel Corporation DH895XCC Series QAT
88:00.0 Co-processor: Intel Corporation DH895XCC Series QAT
[root@localhost ~(keystone_admin)]# lspci -d:443 | grep "QAT Virtual Function" | wc -l
64
[root@localhost ~(keystone_admin)]# mysql -u root nova -e "SELECT hypervisor_hostname, address, instance_uuid, status FROM pci_devices JOIN compute_nodes on compute_nodes.id=compute_node_id" | grep 0000:88:04.7
localhost                  0000:88:04.7 e10a76f3-e58e-4071-a4dd-7a545e8000de allocated
localhost                  0000:88:04.7 c3dbac90-198d-4150-ba0f-a80b912d8021 allocated
localhost                  0000:88:04.7 c7f6adad-83f0-4881-b68f-6d154d565ce3 allocated
localhost.nfv.benunets.com 0000:88:04.7 0c3c11a5-f9a4-4f0d-b120-40e4dde843d4 allocated
[root@localhost ~(keystone_admin)]# grep -r e10a76f3-e58e-4071-a4dd-7a545e8000de /etc/libvirt/qemu
/etc/libvirt/qemu/instance-00000081.xml: <uuid>e10a76f3-e58e-4071-a4dd-7a545e8000de</uuid>
/etc/libvirt/qemu/instance-00000081.xml: <entry name='uuid'>e10a76f3-e58e-4071-a4dd-7a545e8000de</entry>
/etc/libvirt/qemu/instance-00000081.xml: <source file='/var/lib/nova/instances/e10a76f3-e58e-4071-a4dd-7a545e8000de/disk'/>
/etc/libvirt/qemu/instance-00000081.xml: <source path='/var/lib/nova/instances/e10a76f3-e58e-4071-a4dd-7a545e8000de/console.log'/>
/etc/libvirt/qemu/instance-00000081.xml: <source path='/var/lib/nova/instances/e10a76f3-e58e-4071-a4dd-7a545e8000de/console.log'/>
[root@localhost ~(keystone_admin)]# grep -r 0c3c11a5-f9a4-4f0d-b120-40e4dde843d4 /etc/libvirt/qemu
/etc/libvirt/qemu/instance-000000ab.xml: <uuid>0c3c11a5-f9a4-4f0d-b120-40e4dde843d4</uuid>
/etc/libvirt/qemu/instance-000000ab.xml: <entry name='uuid'>0c3c11a5-f9a4-4f0d-b120-40e4dde843d4</entry>
/etc/libvirt/qemu/instance-000000ab.xml: <source file='/var/lib/nova/instances/0c3c11a5-f9a4-4f0d-b120-40e4dde843d4/disk'/>
/etc/libvirt/qemu/instance-000000ab.xml: <source path='/var/lib/nova/instances/0c3c11a5-f9a4-4f0d-b120-40e4dde843d4/console.log'/>
/etc/libvirt/qemu/instance-000000ab.xml: <source path='/var/lib/nova/instances/0c3c11a5-f9a4-4f0d-b120-40e4dde843d4/console.log'/>

On the controller, it appears there are duplicate PCI device entries in the database:

MariaDB [nova]> select hypervisor_hostname, address, count(*) from pci_devices JOIN compute_nodes on compute_nodes.id=compute_node_id group by hypervisor_hostname, address having count(*) > 1;
+---------------------+--------------+----------+
| hypervisor_hostname | address      | count(*) |
+---------------------+--------------+----------+
| localhost           | 0000:05:00.0 |        3 |
| localhost           | 0000:05:00.1 |        3 |
| localhost           | 0000:83:01.0 |        3 |
| localhost           | 0000:83:01.1 |        3 |
| localhost           | 0000:83:01.2 |        3 |
| localhost           | 0000:83:01.3 |        3 |
| localhost           | 0000:83:01.4 |        3 |
| localhost           | 0000:83:01.5 |        3 |
| localhost           | 0000:83:01.6 |        3 |
| localhost           | 0000:83:01.7 |        3 |
| localhost           | 0000:83:02.0 |        3 |
| localhost           | 0000:83:02.1 |        3 |
| localhost           | 0000:83:02.2 |        3 |
| localhost           | 0000:83:02.3 |        3 |
| localhost           | 0000:83:02.4 |        3 |
| localhost           | 0000:83:02.5 |        3 |
| localhost           | 0000:83:02.6 |        3 |
| localhost           | 0000:83:02.7 |        3 |
| localhost           | 0000:83:03.0 |        3 |
| localhost           | 0000:83:03.1 |        3 |
| localhost           | 0000:83:03.2 |        3 |
| localhost           | 0000:83:03.3 |        3 |
| localhost           | 0000:83:03.4 |        3 |
| localhost           | 0000:83:03.5 |        3 |
| localhost           | 0000:83:03.6 |        3 |
| localhost           | 0000:83:03.7 |        3 |
| localhost           | 0000:83:04.0 |        3 |
| localhost           | 0000:83:04.1 |        3 |
| localhost           | 0000:83:04.2 |        3 |
| localhost           | 0000:83:04.3 |        3 |
| localhost           | 0000:83:04.4 |        3 |
| localhost           | 0000:83:04.5 |        3 |
| localhost           | 0000:83:04.6 |        3 |
| localhost           | 0000:83:04.7 |        3 |
| localhost           | 0000:88:01.0 |        3 |
| localhost           | 0000:88:01.1 |        3 |
| localhost           | 0000:88:01.2 |        3 |
| localhost           | 0000:88:01.3 |        3 |
| localhost           | 0000:88:01.4 |        3 |
| localhost           | 0000:88:01.5 |        3 |
| localhost           | 0000:88:01.6 |        3 |
| localhost           | 0000:88:01.7 |        3 |
| localhost           | 0000:88:02.0 |        3 |
| localhost           | 0000:88:02.1 |        3 |
| localhost           | 0000:88:02.2 |        3 |
| localhost           | 0000:88:02.3 |        3 |
| localhost           | 0000:88:02.4 |        3 |
| localhost           | 0000:88:02.5 |        3 |
| localhost           | 0000:88:02.6 |        3 |
| localhost           | 0000:88:02.7 |        3 |
| localhost           | 0000:88:03.0 |        3 |
| localhost           | 0000:88:03.1 |        3 |
| localhost           | 0000:88:03.2 |        3 |
| localhost           | 0000:88:03.3 |        3 |
| localhost           | 0000:88:03.4 |        3 |
| localhost           | 0000:88:03.5 |        3 |
| localhost           | 0000:88:03.6 |        3 |
| localhost           | 0000:88:03.7 |        3 |
| localhost           | 0000:88:04.0 |        3 |
| localhost           | 0000:88:04.1 |        3 |
| localhost           | 0000:88:04.2 |        3 |
| localhost           | 0000:88:04.3 |        3 |
| localhost           | 0000:88:04.4 |        3 |
| localhost           | 0000:88:04.5 |        3 |
| localhost           | 0000:88:04.6 |        3 |
| localhost           | 0000:88:04.7 |        3 |
+---------------------+--------------+----------+
66 rows in set (0.00 sec)
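The [Impact] and [Test Case] sections above describe the failure mode: when a device disappears from pci_passthrough_whitelist, nova wrongly marked still-allocated pci_devices rows as deleted. The sketch below illustrates the corrected reconciliation rule; it is not nova's actual code, and the names (PciDevice, reconcile_whitelist) are hypothetical.

```python
# Illustrative sketch, NOT nova's implementation: a device dropped from the
# whitelist is only marked deleted if no instance still holds it.
from dataclasses import dataclass


@dataclass
class PciDevice:
    address: str
    status: str  # 'available', 'allocated', or 'deleted'


def reconcile_whitelist(tracked: list, whitelist: set) -> None:
    """Mark devices removed from the whitelist as deleted, preserving
    allocated ones (the bug was deleting those too)."""
    for dev in tracked:
        if dev.address not in whitelist:
            if dev.status == 'allocated':
                # Keep the allocation; the device only leaves the pool
                # once the owning instance releases it.
                continue
            dev.status = 'deleted'


devs = [PciDevice('0000:88:04.7', 'allocated'),
        PciDevice('0000:88:04.6', 'available')]
reconcile_whitelist(devs, whitelist=set())
# The allocated VF keeps its allocation; only the free VF is marked deleted.
```

Under this rule, the [Test Case] steps (remove the device from the whitelist, restart nova-compute) leave existing allocations intact.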
2019-07-04 15:36:41 Edward Hope-Morley tags openstack-version.mitaka pci openstack-version.mitaka pci sts-sru-needed
2019-07-04 15:36:51 Edward Hope-Morley bug task added nova (Ubuntu)
2019-07-04 15:37:04 Edward Hope-Morley bug task added cloud-archive
2019-07-04 15:37:27 Edward Hope-Morley nominated for series cloud-archive/mitaka
2019-07-04 15:37:27 Edward Hope-Morley bug task added cloud-archive/mitaka
2019-07-04 15:37:27 Edward Hope-Morley nominated for series cloud-archive/rocky
2019-07-04 15:37:27 Edward Hope-Morley bug task added cloud-archive/rocky
2019-07-04 15:37:27 Edward Hope-Morley nominated for series cloud-archive/ocata
2019-07-04 15:37:27 Edward Hope-Morley bug task added cloud-archive/ocata
2019-07-04 15:37:27 Edward Hope-Morley nominated for series cloud-archive/stein
2019-07-04 15:37:27 Edward Hope-Morley bug task added cloud-archive/stein
2019-07-04 15:37:27 Edward Hope-Morley nominated for series cloud-archive/queens
2019-07-04 15:37:27 Edward Hope-Morley bug task added cloud-archive/queens
2019-07-04 15:38:48 Edward Hope-Morley nominated for series Ubuntu Bionic
2019-07-04 15:38:48 Edward Hope-Morley bug task added nova (Ubuntu Bionic)
2019-07-04 15:38:48 Edward Hope-Morley nominated for series Ubuntu Cosmic
2019-07-04 15:38:48 Edward Hope-Morley bug task added nova (Ubuntu Cosmic)
2019-07-04 15:38:48 Edward Hope-Morley nominated for series Ubuntu Xenial
2019-07-04 15:38:48 Edward Hope-Morley bug task added nova (Ubuntu Xenial)
2019-07-04 15:38:48 Edward Hope-Morley nominated for series Ubuntu Eoan
2019-07-04 15:38:48 Edward Hope-Morley bug task added nova (Ubuntu Eoan)
2019-07-04 15:38:48 Edward Hope-Morley nominated for series Ubuntu Disco
2019-07-04 15:38:48 Edward Hope-Morley bug task added nova (Ubuntu Disco)
2019-07-08 19:00:19 Corey Bryant nova (Ubuntu Cosmic): status New Won't Fix
2019-07-08 19:27:09 Corey Bryant nova (Ubuntu Eoan): status New Fix Released
2019-07-08 19:27:33 Corey Bryant nova (Ubuntu Disco): status New Fix Released
2019-07-08 19:27:52 Corey Bryant nova (Ubuntu Cosmic): status Won't Fix Fix Released
2019-07-08 19:28:35 Corey Bryant nova (Ubuntu Bionic): status New Fix Released
2019-07-08 19:29:59 Corey Bryant nova (Ubuntu Xenial): importance Undecided High
2019-07-08 19:29:59 Corey Bryant nova (Ubuntu Xenial): status New Triaged
2019-07-08 19:33:45 Corey Bryant cloud-archive/stein: status New Fix Released
2019-07-08 19:34:00 Corey Bryant cloud-archive/rocky: status New Fix Released
2019-07-08 19:34:11 Corey Bryant cloud-archive/queens: status New Fix Released
2019-07-08 19:34:28 Corey Bryant cloud-archive/ocata: importance Undecided High
2019-07-08 19:34:28 Corey Bryant cloud-archive/ocata: status New Triaged
2019-07-08 19:34:41 Corey Bryant cloud-archive/mitaka: importance Undecided High
2019-07-08 19:34:41 Corey Bryant cloud-archive/mitaka: status New Triaged
2019-07-15 17:59:20 Corey Bryant cloud-archive/ocata: status Triaged Fix Committed
2019-07-15 17:59:22 Corey Bryant tags openstack-version.mitaka pci sts-sru-needed openstack-version.mitaka pci sts-sru-needed verification-ocata-needed
2019-07-23 13:52:52 Edward Hope-Morley tags openstack-version.mitaka pci sts-sru-needed verification-ocata-needed openstack-version.mitaka pci sts-sru-needed verification-ocata-done
2019-08-01 09:42:20 Edward Hope-Morley cloud-archive/mitaka: status Triaged Won't Fix
2019-08-01 09:42:35 Edward Hope-Morley nova (Ubuntu Xenial): status Triaged Won't Fix
2019-08-02 16:42:05 Corey Bryant cloud-archive/ocata: status Fix Committed Fix Released
2019-09-09 13:24:41 Edward Hope-Morley tags openstack-version.mitaka pci sts-sru-needed verification-ocata-done openstack-version.mitaka pci sts-sru-done verification-ocata-done