dev name in the pci whitelist is not honored for SRIOV

Bug #1572826 reported by Preethi Dsilva on 2016-04-21
This bug affects 2 people
Affects: OpenStack Compute (nova)
Importance: Undecided
Assigned to: Unassigned

Bug Description

dev name in the pci whitelist is not honored for SRIOV

steps to reproduce:
================
1. Enable SR-IOV in the BIOS. In my case I have a Mellanox card with a dual-port NIC, which shows up in the OS as eth4 and eth5.
2. Provide a PCI whitelist in nova.conf:
pci_passthrough_whitelist = {"devname":"eth4","physical_network":"physnet1"}
3. The mlx4_core options file is set as: options mlx4_core port_type_array=2,2 num_vfs=6 probe_vf=6 enable_64b_cqe_eqe=0 log_num_mgm_entry_size=-1
However, the behavior seen is that irrespective of the devname specified, the tenant VM may get a VF from either eth4 or eth5.

Tested the issue with Mitaka code. I am attaching the nova logs and local.conf for your reference.
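For context on what a devname whitelist entry has to be turned into: the interface name must be resolved to its physical function's PCI address via sysfs before any matching can happen. A minimal sketch of that lookup (a hypothetical helper, not nova's actual code; the PCI address in the example is made up):

```python
import os

def devname_to_pci_address(devname, sys_root="/sys/class/net"):
    """Resolve a netdev name to its PF's PCI address via sysfs.

    Hypothetical sketch: /sys/class/net/<dev>/device is a symlink whose
    target ends in the PCI address, e.g. ../../../0000:03:00.0, so the
    basename of the link target is the address.
    """
    link = os.readlink(os.path.join(sys_root, devname, "device"))
    return os.path.basename(link)
```

On a dual-port ConnectX-3, eth4 and eth5 both resolve to the same PCI function, which is why a devname-based whitelist cannot tell the two ports apart (see the discussion below in this report).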

Preethi Dsilva (prdsilva) wrote :
affects: neutron → nova
Preethi Dsilva (prdsilva) wrote :
  • Local.conf (13.6 KiB, application/vnd.openxmlformats-officedocument.wordprocessingml.document)
Itzik Brown (itzikb1) wrote :

1. When attaching configuration files, please attach them as text files, not as Word documents.
2. Can you supply your /etc/nova/nova.conf from the compute node? There is a mismatch between local.conf and the logs.

Changed in nova:
status: New → Incomplete
Preethi Dsilva (prdsilva) wrote :

Hi,

You may find a difference because I have modified the whitelist from address to devname multiple times.

Preethi Dsilva (prdsilva) wrote :

nova.conf
=========

[DEFAULT]
#pci_passthrough_whitelist = {"address":"*:.*","physical_network":"physnet1"}
pci_passthrough_whitelist = {"devname":"eth4","physical_network":"physnet1"}
scheduler_available_filters = nova.scheduler.filters.all_filters
scheduler_available_filters = nova.scheduler.filters.pci_passthrough_filter.PciPassthroughFilter
vif_plugging_timeout = 300
vif_plugging_is_fatal = True
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
compute_driver = libvirt.LibvirtDriver
default_ephemeral_format = ext4
dhcpbridge_flagfile = /etc/nova/nova-dhcpbridge.conf
graceful_shutdown_timeout = 5
metadata_workers = 2
osapi_compute_workers = 2
rpc_backend = rabbit
logging_exception_prefix = %(color)s%(asctime)s.%(msecs)03d TRACE %(name)s ^[[01;35m%(instance)s^[[00m
logging_debug_format_suffix = ^[[00;33mfrom (pid=%(process)d) %(funcName)s %(pathname)s:%(lineno)d^[[00m
logging_default_format_string = %(asctime)s.%(msecs)03d %(color)s%(levelname)s %(name)s [^[[00;36m-%(color)s] ^[[01;35m%(instance)s%(color)s%(message)s^[[00m
logging_context_format_string = %(asctime)s.%(msecs)03d %(color)s%(levelname)s %(name)s [^[[01;36m%(request_id)s ^[[00;36m%(user_name)s %(project_name)s%(color)s] ^[[01;35m%(instance)s%(color)s%(message)s^[[00m
force_config_drive = True
send_arp_for_ha = True
multi_host = True
instances_path = /opt/stack/data/nova/instances
state_path = /opt/stack/data/nova
s3_listen = 0.0.0.0
metadata_listen = 0.0.0.0
osapi_compute_listen = 0.0.0.0
instance_name_template = instance-%08x
my_ip = 10.10.100.63
s3_port = 3333
s3_host = 10.10.100.60
default_floating_pool = public
force_dhcp_release = True
scheduler_default_filters = AvailabilityZoneFilter,RetryFilter,ComputeFilter,DiskFilter,RamFilter,ImagePropertiesFilter,ServerGroupAffinityFilter,ServerGroupAntiAffinityFilter,ComputeCapabilitiesFilter,NUMATopologyFilter,PciPassthroughFilter

Paul Murray (pmurray) wrote :

It looks like the device name is not used in the match for PCI device specifications.

From the PciDeviceSpec class in nova.pci.devspec:

    def match(self, dev_dict):
        conditions = [
            self.vendor_id in (ANY, dev_dict['vendor_id']),
            self.product_id in (ANY, dev_dict['product_id']),
            self.address.match(dev_dict['address'],
                dev_dict.get('parent_addr'))
            ]
        return all(conditions)
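To make the gap concrete, here is a simplified stand-in for that match logic (not nova's actual class; the vendor/product IDs and PCI addresses below are made up). Since only vendor_id, product_id, and address participate in the match, a spec expanded from "devname: eth4" on a dual-port ConnectX-3 matches VFs hanging off either port, because both ports share one PF address:

```python
ANY = '*'

class SimplePciSpec:
    """Simplified sketch of PciDeviceSpec.match(): devname never appears."""

    def __init__(self, vendor_id=ANY, product_id=ANY, address=ANY):
        self.vendor_id = vendor_id
        self.product_id = product_id
        self.address = address

    def match(self, dev_dict):
        conditions = [
            self.vendor_id in (ANY, dev_dict['vendor_id']),
            self.product_id in (ANY, dev_dict['product_id']),
            # the address is compared against the VF's parent PF address
            self.address in (ANY, dev_dict['parent_addr']),
        ]
        return all(conditions)

# Hypothetical VFs, one per port, both children of the same PF address.
vf_port1 = {'vendor_id': '15b3', 'product_id': '1004',
            'parent_addr': '0000:03:00.0'}
vf_port2 = {'vendor_id': '15b3', 'product_id': '1004',
            'parent_addr': '0000:03:00.0'}

# What a "devname: eth4" entry would expand to on such a card:
spec = SimplePciSpec(address='0000:03:00.0')
```

Both VFs satisfy `spec.match()`, so the whitelist cannot restrict allocation to a single port.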

Augustina Ragwitz (auggy) wrote :

OP provided the requested info and it looks like someone has looked at it, but there's no info on whether anyone was able to reproduce this. Marking this as New so someone can review and decide whether this is Confirmed or Invalid, or whether we still need more info.

Changed in nova:
status: Incomplete → New
tags: added: network
Moshe Levi (moshele) wrote :

According to the driver configuration I see that you are using a Mellanox ConnectX-3, which works a bit differently, so for that card you will need to use the address instead of the devname.
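For example, an address-based whitelist entry for the PF would look like this (the PCI address here is hypothetical; substitute the one reported by lspci on your host):

```
pci_passthrough_whitelist = {"address":"0000:03:00.0","physical_network":"physnet1"}
```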

The ConnectX-3 Pro dual-port NIC works a little differently. It exposes a single PCI device for the whole NIC, and when you do ip link show
you will see all the VFs of port 1 and all the VFs of port 2 under each PF.
For example if my driver configuration is as follows:
options mlx4_core port_type_array=2,2 num_vfs=4,4,0 probe_vf=4,4,0

I will have 4 VFs on port 1 and 4 VFs on port 2, but in ip link show you will see each PF with 8 VFs.
The first group of VFs belongs to port 1 and the second group to port 2, respectively.
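The split described above can be sketched as follows (an illustration only, not driver code; it assumes, per the comment, that VF indices are allocated to port 1 first and then port 2):

```python
def vf_port_map(vfs_port1, vfs_port2):
    """Map a VF index (as listed under a PF by `ip link show`) to its port.

    Assumption: the first vfs_port1 indices serve port 1 and the next
    vfs_port2 indices serve port 2.
    """
    mapping = {}
    for i in range(vfs_port1 + vfs_port2):
        mapping[i] = 1 if i < vfs_port1 else 2
    return mapping
```

With num_vfs=4,4,0 this gives VFs 0-3 on port 1 and VFs 4-7 on port 2, even though all eight appear under each PF.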

Can you post your pci_devices table from the nova database?

Changed in nova:
assignee: nobody → Moshe Levi (moshele)
status: New → Incomplete
Sean Dague (sdague) on 2017-06-23
Changed in nova:
assignee: Moshe Levi (moshele) → nobody
Sean Dague (sdague) wrote :

Automatically discovered version mitaka in description. If this is incorrect, please update the description to include 'nova version: ...'

tags: added: openstack-version.mitaka
Launchpad Janitor (janitor) wrote :

[Expired for OpenStack Compute (nova) because there has been no activity for 60 days.]

Changed in nova:
status: Incomplete → Expired