VM boot with pci-sriov interface is not reachable

Bug #1790191 reported by Peng Peng
Affects: StarlingX
Status: Invalid
Importance: High
Assigned to: Steven Webster

Bug Description

Brief Description
-----------------
Boot up a VM with a pci-sriov interface; pinging the VM over the mgmt and internal networks fails.

Severity
--------
Major

Steps to Reproduce
------------------
1. Get a network that supports pci-sriov for booting VMs
2. Create a flavor with a dedicated CPU policy
3. Boot a base VM with the above flavor and virtio NICs
4. Create a second flavor with a dedicated CPU policy
5. Set the required extra-specs on that flavor
6. Boot a VM with the pci-sriov vif model on the internal net
7. Ping the VM over the mgmt and internal nets from the base VM
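
The steps above can be sketched with the OpenStack/Nova CLI. This is a minimal sketch, not the reporter's exact commands: the flavor name, sizes, and placeholder UUIDs/IPs are illustrative, while the `vif-model` NIC attribute and `--poll` usage match the `nova boot` invocation shown in the log below.

```shell
# Create a flavor with a dedicated CPU policy (name and sizes are illustrative)
openstack flavor create --vcpus 2 --ram 2048 --disk 2 flavor-dedicated
openstack flavor set flavor-dedicated --property hw:cpu_policy=dedicated

# Boot the base VM with virtio NICs on the mgmt and internal networks
nova boot --flavor flavor-dedicated \
    --nic net-id=<mgmt-net-uuid>,vif-model=virtio \
    --nic net-id=<internal-net-uuid>,vif-model=virtio \
    base-vm --poll

# Boot the test VM with a pci-sriov vif model on the internal network
nova boot --flavor flavor-dedicated \
    --nic net-id=<mgmt-net-uuid>,vif-model=virtio \
    --nic net-id=<internal-net-uuid>,vif-model=pci-sriov \
    tenant2-pci-sriov-8 --poll

# From inside the base VM, ping the test VM over both networks
ping -c 3 <test-vm-mgmt-ip>
ping -c 3 <test-vm-internal-ip>
```

Note that `vif-model` is a StarlingX-specific `--nic` attribute; upstream Nova selects SR-IOV via a Neutron port with `vnic_type=direct` instead.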

Expected Behavior
------------------
Ping succeeds

Actual Behavior
----------------
Ping fails

Reproducibility
---------------
100% Reproducible

System Configuration
--------------------
Regular Multi-node system

Branch/Pull Time/Commit
-----------------------
master as of 2018-08-22_20-18-00

Timestamp/Logs
--------------
[2018-08-23 09:20:19,489] 264 DEBUG MainThread ssh.send :: Send 'nova --os-username 'tenant2' --os-password 'Li69nux*' --os-project-name tenant2 --os-auth-url http://192.168.204.2:5000/v3 --os-user-domain-name Default --os-project-domain-name Default --os-region-name RegionOne boot --boot-volume 72becd2f-0f59-4589-9111-e7e7c206338a --key-name keypair-tenant2 --flavor f2111098-becd-41ce-8b45-eff2fd6e62ec --nic net-id=65e4cab9-afd5-48aa-bc31-3a433740b0b3,vif-model=virtio --nic net-id=24470e89-bb76-40e3-8b22-2d78364b477c,vif-model=pci-sriov tenant2-pci-sriov-8 --poll'

[2018-08-23 09:23:04,535] 264 DEBUG MainThread ssh.send :: Send 'ip addr'
[2018-08-23 09:23:04,641] 391 DEBUG MainThread ssh.expect :: Output:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether fa:16:3e:79:12:ac brd ff:ff:ff:ff:ff:ff
    inet 192.168.185.68/27 brd 192.168.185.95 scope global dynamic eth0
       valid_lft 86251sec preferred_lft 86251sec
    inet6 fe80::f816:3eff:fe79:12ac/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether fa:16:3e:64:0f:f6 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::f816:3eff:fe64:ff6/64 scope link
       valid_lft forever preferred_lft forever

Revision history for this message
Ghada Khalil (gkhalil) wrote :

Marked as gating for stx.2018.10, as this is expected to work in StarlingX.

Changed in starlingx:
importance: Undecided → High
status: New → Triaged
assignee: nobody → Steven Webster (swebster-wr)
tags: added: stx.2018.10 stx.networking
Steven Webster (swebster-wr) wrote :

Triage notes:

I was able to examine the reporter's lab. The problem turned out to be an issue with the hardware switch connected to the system, combined with how the lab was set up to use this physical switch in the current configuration.

Resetting the switch and configuring the lab to match the switch configuration allows traffic between VMs on both the SR-IOV-enabled ports and the management network.

Ghada Khalil (gkhalil) wrote :

Marking as invalid since this is a lab configuration issue.

Changed in starlingx:
status: Triaged → Invalid
Ken Young (kenyis)
tags: added: stx.1.0
removed: stx.2018.10