Activity log for bug #2013228

Date Who What changed Old value New value Message
2023-03-29 16:14:36 John Garbutt bug added bug
2023-03-29 16:17:28 John Garbutt tags rfe
2023-03-29 16:19:08 John Garbutt description
Old value:
Currently only admin users can create SR-IOV direct ports that are marked as switch_dev, i.e. the ones that do OVS hardware offload. From an end user's point of view, a direct NIC looks the same whether or not it is hardware offloaded and whether it is handled by ovs ml2 or ovn; what happens underneath is an operator-internal setup detail. Why do we expose this to the end user in this way?

Example operator deployments we work with:
* all SR-IOV direct ports are offloaded using ovn
* all SR-IOV direct ports are offloaded using ovs ml2
* all SR-IOV direct ports created using legacy SR-IOV
* (not a use case I have, but theoretically it's possible) some mix of the above

Ideally the user doesn't care which of these is in use, and the neutron API is able to abstract away the implementation details of getting a direct type of vnic.

New value:
Currently only admin users can create SR-IOV direct ports that are marked as switch_dev, i.e. the ones that do OVS hardware offload. From an end user's point of view, a direct NIC looks the same whether or not it is hardware offloaded and whether it is handled by ovs ml2 or ovn; what happens underneath is an operator-internal setup detail. Why do we expose this to the end user in this way?

Example operator deployments we work with:
* all SR-IOV direct ports are offloaded using ovn
* all SR-IOV direct ports are offloaded using ovs ml2
* all SR-IOV direct ports created using the legacy SR-IOV driver
* (not a use case I have, but theoretically it's possible) some mix of the above

Ideally the user doesn't care which of these is in use, and the neutron API is able to abstract away the implementation details of getting a direct type of vnic.

Moreover, we don't want users to have to know how their operator has configured their system, be it ovn or ovs or sriov. They just want the direct NIC that gets them RDMA within their VM for the requested nova flavor of their instance.

This could be fixed by having the ovn and ovs ml2 drivers respect a configuration option like this, for the non-mixed case:
bind_direct_nics_as_switch_dev = True/False
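A minimal sketch of how an option like the one proposed above could be declared with oslo.config. Only the option name bind_direct_nics_as_switch_dev comes from the bug description; the group name, default, and help text are assumptions, not anything that exists in neutron today.

    # Hypothetical sketch: declare the proposed option with oslo.config.
    # Only the option name comes from the bug description; the
    # "ovs_driver" group, default and help text are assumptions.
    from oslo_config import cfg

    proposed_opts = [
        cfg.BoolOpt(
            'bind_direct_nics_as_switch_dev',
            default=False,
            help='Treat every SR-IOV direct port bound by this driver as '
                 'an OVS hardware-offloaded (switchdev) port, so end '
                 'users never need to touch the binding profile.'),
    ]

    cfg.CONF.register_opts(proposed_opts, group='ovs_driver')

    if __name__ == '__main__':
        cfg.CONF([])  # no config files; use defaults
        print(cfg.CONF.ovs_driver.bind_direct_nics_as_switch_dev)  # False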
2023-03-29 16:21:11 John Garbutt description
Old value: the previous description (see the 16:19:08 entry).
New value:
Currently only admin users can create SR-IOV direct ports that are marked as switch_dev, i.e. the ones that do OVS hardware offload. If we use Neutron RBAC to allow access to the binding profile, users can create big problems by modifying it after port binding, when nova has stored private details in there such as the specific PCI device that has been bound.

From an end user's point of view, a direct NIC looks the same whether or not it is hardware offloaded and whether it is handled by ovs ml2 or ovn; what happens underneath is an operator-internal setup detail. Why do we expose this to the end user in this way?

Example operator deployments:
* all SR-IOV direct ports are offloaded using ovn
* all SR-IOV direct ports are offloaded using ovs ml2
* all SR-IOV direct ports created using the legacy SR-IOV driver
* (not a use case I have, but theoretically it's possible) some mix of the above

Ideally the user doesn't care which of these is in use, and the neutron API is able to abstract away the implementation details of getting a direct type of vnic.

Moreover, we don't want users to have to know how their operator has configured their system, be it ovn or ovs or sriov. They just want the direct NIC that gets them RDMA within their VM for the requested nova flavor of their instance.

This could be fixed by having the ovn and ovs ml2 drivers respect a configuration option like this, for the non-mixed case:
bind_direct_nics_as_switch_dev = True/False
2023-03-29 16:22:49 John Garbutt description
Old value: the previous description (see the 16:21:11 entry).
New value:
Currently only admin users can create SR-IOV direct ports that are marked as switch_dev, i.e. the ones that do OVS hardware offload. If we use Neutron RBAC to allow access to the binding profile, users can create big problems by modifying it after port binding, when nova has stored private details in there such as the specific PCI device that has been bound.

Example operator deployments:
* all SR-IOV direct ports are offloaded using ovn
* all SR-IOV direct ports are offloaded using ovs ml2
* all SR-IOV direct ports created using the legacy SR-IOV driver
* (not a use case I have, but theoretically it's possible) some mix of the above

From an end user's point of view, a direct NIC looks the same whether or not it is hardware offloaded and whether it is handled by ovs ml2 or ovn; what happens underneath is an operator-internal setup detail. Why do we expose this to the end user in this way?

Ideally the user doesn't care which of these is in use, and the neutron API is able to abstract away the implementation details of getting a direct type of vnic.

Moreover, we don't want users to have to know how their operator has configured their system, be it ovn or ovs or sriov. They just want the direct NIC that gets them RDMA within their VM for the requested nova flavor of their instance.

This could be fixed by having the ovn and ovs ml2 drivers respect a configuration option like this, for the non-mixed case:
bind_direct_nics_as_switch_dev = True/False

At the PTG it was mentioned that this would be configuration that changes what the API does. But I don't understand how using ovn vs ovs ml2 is any different from hardware-offloaded vs non-hardware-offloaded direct NICs.
2023-03-29 16:23:12 John Garbutt description
Old value: the previous description (see the 16:22:49 entry).
New value:
Currently only admin users can create SR-IOV direct ports that are marked as switch_dev, i.e. the ones that do OVS hardware offload. If we use Neutron RBAC to allow access to the binding profile, users can create big problems by modifying it after port binding, when nova has stored private details in there such as the specific PCI device that has been bound.

Example operator deployments:
* all SR-IOV direct ports are hardware offloaded with switch_dev using ovn
* all SR-IOV direct ports are hardware offloaded with switch_dev using ovs ml2
* all SR-IOV direct ports created using the legacy SR-IOV driver
* (not a use case I have, but theoretically it's possible) some mix of the above

From an end user's point of view, a direct NIC looks the same whether or not it is hardware offloaded and whether it is handled by ovs ml2 or ovn; what happens underneath is an operator-internal setup detail. Why do we expose this to the end user in this way?

Ideally the user doesn't care which of these is in use, and the neutron API is able to abstract away the implementation details of getting a direct type of vnic.

Moreover, we don't want users to have to know how their operator has configured their system, be it ovn or ovs or sriov. They just want the direct NIC that gets them RDMA within their VM for the requested nova flavor of their instance.

This could be fixed by having the ovn and ovs ml2 drivers respect a configuration option like this, for the non-mixed case:
bind_direct_nics_as_switch_dev = True/False

At the PTG it was mentioned that this would be configuration that changes what the API does. But I don't understand how using ovn vs ovs ml2 is any different from hardware-offloaded vs non-hardware-offloaded direct NICs.
2023-03-29 16:32:11 John Garbutt description
Old value: the previous description (see the 16:23:12 entry).
New value:
Currently only admin users can create SR-IOV direct ports that are marked as switch_dev, i.e. the ones that do OVS hardware offload. If we use Neutron RBAC to allow access to the binding profile, users can create big problems by modifying it after port binding, when nova has stored private details in there such as the specific PCI device that has been bound.

Example operator deployments:
* all SR-IOV direct ports are hardware offloaded with switch_dev using ovn
* all SR-IOV direct ports are hardware offloaded with switch_dev using ovs ml2
* all SR-IOV direct ports created using the legacy SR-IOV driver
* (not a use case I have, but theoretically it's possible) some mix of the above

From an end user's point of view, a direct NIC looks the same whether or not it is hardware offloaded and whether it is handled by ovs ml2 or ovn; what happens underneath is an operator-internal setup detail. Why do we expose this to the end user in this way?

Ideally the user doesn't care which of these is in use, and the neutron API is able to abstract away the implementation details of getting a direct type of vnic.

Moreover, we don't want users to have to know how their operator has configured their system, be it ovn or ovs or sriov. They just want the direct NIC that gets them RDMA within their VM for the requested nova flavor of their instance.

This could be fixed by having the ovn and ovs ml2 drivers respect a configuration option like this, for the non-mixed case:
bind_direct_nics_as_switch_dev = True/False

At the PTG it was mentioned that this would be configuration that changes what the API does. But I don't understand how using ovn vs ovs ml2 is any different from hardware-offloaded vs non-hardware-offloaded direct NICs.

I have a local workaround, for ovs ml2 only: I comment out this line and just use direct ports:
https://github.com/openstack/neutron/blob/b73399fa746d951a99fdf29950a1c0a801e941a2/neutron/plugins/ml2/drivers/openvswitch/mech_driver/mech_openvswitch.py#L128

Note: if we add a new API for this (rather than just a new VNIC type), we would need to add support for it to all of the following before we can use it with customers, once we have upgraded to the release the API change lands in:
* terraform
* k8s cluster api provider openstack
* openstack CLI
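The workaround in the entry above amounts to skipping the driver-side check that only binds direct ports whose binding profile advertises the switchdev capability. Below is a self-contained sketch of that gating logic and of how a deployment-wide option could replace the manual source edit; it is not the actual neutron code, and the function name and flag are illustrative assumptions.

    # Illustrative sketch only -- not neutron's real mechanism driver code.
    VNIC_DIRECT = 'direct'

    def should_bind_direct_port(vnic_type, binding_profile,
                                bind_direct_nics_as_switch_dev=False):
        """Decide whether an OVS/OVN-style driver should bind this port."""
        if vnic_type != VNIC_DIRECT:
            return True  # non-direct ports are unaffected
        capabilities = binding_profile.get('capabilities', [])
        # Today: binding only proceeds when 'switchdev' is in the binding
        # profile, which in practice only admins are allowed to set.
        if 'switchdev' in capabilities:
            return True
        # Proposed: an operator-wide flag saying "direct ports on this
        # driver are hardware offloaded", so end users never need to know
        # about switchdev at all.
        return bind_direct_nics_as_switch_dev

    # A regular user's direct port with an empty binding profile:
    print(should_bind_direct_port('direct', {}))                # False today
    print(should_bind_direct_port('direct', {},
                                  bind_direct_nics_as_switch_dev=True))  # True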
2023-03-29 16:35:43 Bartosz Bezak bug added subscriber Bartosz Bezak
2023-03-30 09:47:40 Lajos Katona summary "Non-admin users unable to create SR-IOV switch dev ports" "[rfe] Non-admin users unable to create SR-IOV switch dev ports"
2023-03-30 13:58:31 Rodolfo Alonso bug added subscriber Rodolfo Alonso
2023-04-04 23:23:34 Miguel Lavalle neutron: status New Confirmed
2023-04-04 23:23:40 Miguel Lavalle neutron: importance Undecided Wishlist
2023-04-21 14:57:22 Rodolfo Alonso tags rfe rfe-approved
2023-05-03 14:54:46 Rodolfo Alonso neutron: assignee Rodolfo Alonso (rodolfo-alonso-hernandez)
2023-05-10 14:23:23 OpenStack Infra neutron: status Confirmed In Progress