vhost0 cannot communicate with the controller when DPDK is enabled on the master branch

Bug #1766194 reported by Mark Zhu on 2018-04-23
This bug affects 2 people
Affects: Juniper Openstack (status tracked in Trunk)
  Series   Status    Importance   Assigned to
  R5.0     Invalid   High         Jeya ganesh babu J
  Trunk    Invalid   Medium       Jeya ganesh babu J

Bug Description

My testbed consists of 1 OpenStack controller node, 1 Contrail controller node, and 2 compute nodes.
OS: Red Hat 7.5, kernel: 3.10.0-693.21.1.el7.x86_64
1. Built the Contrail Docker images by following https://github.com/Juniper/contrail-dev-env
2. Deployed Contrail by following https://github.com/Juniper/contrail-ansible-deployer
3. After the Ansible installation finished, we found that rte_kni.ko could not be loaded:
INFO: load uio kernel module
insmod /lib/modules/3.10.0-693.21.1.el7.x86_64/kernel/drivers/uio/uio.ko.xz
INFO: load uio_pci_generic kernel module
insmod /lib/modules/3.10.0-693.21.1.el7.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz
INFO: load rte_kni kernel module
modprobe: FATAL: Module rte_kni not found.
ERROR: failed to load rte_kni driver
WARNING: rte_ini kernel module is unavailable. Please install/insert it for Ubuntu 14.04 manually.
INFO: Probe vhost0... tries left 1
cat: /sys/class/net/vhost0/address: No such file or directory
4. We copied the rte_kni driver from the build server to the compute nodes and inserted it into the kernel successfully.
5. However, vhost0 still would not come up. After running 'ifup vhost0' manually, vhost0 comes up but cannot communicate with the other nodes.
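
For reference, the manual workaround in steps 4 and 5 amounts to roughly the following (a sketch; the module path and the kthread_mode option come from the container log further below, and the exact location the module was copied to is an assumption):

  # insert the rte_kni module that was copied from the build server
  insmod /lib/modules/$(uname -r)/updates/rte_kni.ko kthread_mode=multiple
  # bring up the vhost0 interface created by the agent container
  ifup vhost0
  # vhost0 reports UP, but pings to the controller (192.168.10.20) still fail
  ping -c 3 192.168.10.20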

Some logs:
[root@Contrailcp1 ~]# contrail-status
Pod Service Original Name State Status
vrouter agent contrail-vrouter-agent running Up 30 minutes
vrouter agent-dpdk contrail-vrouter-agent-dpdk running Up 30 minutes
vrouter nodemgr contrail-nodemgr running Up 30 minutes

vrouter DPDK module is PRESENT
== Contrail vrouter ==
nodemgr: initializing (Collector connection down)
agent: initializing (XMPP:control-node:192.168.10.20, XMPP:dns-server:192.168.10.20, Collector connection down, No Configuration for self)

[root@Contrailcp1 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d7a5e524fc60 10.240.133.121:6666/contrail-vrouter-agent:newton-dev "/entrypoint.sh /u..." 4 days ago Up 30 minutes vrouter_vrouter-agent_1
a4f92bf9fddb 10.240.133.121:6666/contrail-vrouter-agent-dpdk:newton-dev "/entrypoint.sh /u..." 4 days ago Up 30 minutes vrouter_vrouter-agent-dpdk_1
7793fedd4f89 10.240.133.121:6666/contrail-nodemgr:newton-dev "/entrypoint.sh /b..." 4 days ago Up 30 minutes vrouter_nodemgr_1
[root@Contrailcp1 ~]#
[root@Contrailcp1 ~]# docker logs vrouter_vrouter-agent-dpdk_1
INFO: dpdk started
vm.nr_hugepages = 1024
vm.max_map_count = 128960
net.ipv4.tcp_keepalive_time = 5
net.ipv4.tcp_keepalive_probes = 5
net.ipv4.tcp_keepalive_intvl = 1
net.core.wmem_max = 9160000
INFO: load uio kernel module
insmod /lib/modules/3.10.0-693.21.1.el7.x86_64/kernel/drivers/uio/uio.ko.xz
INFO: load uio_pci_generic kernel module
insmod /lib/modules/3.10.0-693.21.1.el7.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz
INFO: load rte_kni kernel module
insmod /lib/modules/3.10.0-693.21.1.el7.x86_64/updates/rte_kni.ko kthread_mode=multiple
INFO: Probe vhost0... tries left 1
cat: /sys/class/net/vhost0/address: No such file or directory
INFO: detecting phys interface parameters... 0/10
cat: /sys/class/net/vhost0/address: No such file or directory
INFO: phys_int=ens3f0 phys_int_mac=a0:36:9f:e1:e6:18, pci=0000:06:00.0, addrs=[fe80::a236:9fff:fee1:e618/64], gateway=192.168.10.1
INFO: Binding device 0000:06:00.0 to driver uio_pci_generic ...
INFO: Add lspci data to /var/run/vrouter/0000:06:00.0
Slot: 06:00.0
Class: Ethernet controller
Vendor: Intel Corporation
Device: Ethernet Controller 10-Gigabit X540-AT2
SVendor: Intel Corporation
SDevice: Ethernet Converged Network Adapter X540-T2
PhySlot: 3
Rev: 01
Driver: ixgbe
Module: ixgbe
NUMANode: 0
INFO: Waiting device 0000:06:00.0 for driver uio_pci_generic ... 1
INFO: Physical interface: ens3f0, mac=a0:36:9f:e1:e6:18, pci=0000:06:00.0
INFO: Waiting device 0000:06:00.0 for driver uio_pci_generic ... 1
INFO: exec '/bin/taskset 0x01 /usr/bin/contrail-vrouter-dpdk --no-daemon --socket-mem 1024'
2018-04-23 05:05:26,182 VROUTER: vRouter version: {"build-info": [{"build-time": "2018-04-17 06:38:40.465930", "build-hostname": "462fc55b297d", "build-user": "root", "build-version": "5.1.0"}]}
2018-04-23 05:05:26,182 VROUTER: DPDK version: DPDK 17.11.0
2018-04-23 05:05:26,229 VROUTER: Bridge Table limit: 262144
2018-04-23 05:05:26,229 VROUTER: Bridge Table overflow limit: 53248
2018-04-23 05:05:26,229 VROUTER: Flow Table limit: 524288
2018-04-23 05:05:26,229 VROUTER: Flow Table overflow limit: 105472
2018-04-23 05:05:26,229 VROUTER: MPLS labels limit: 5120
2018-04-23 05:05:26,229 VROUTER: Nexthops limit: 65536
2018-04-23 05:05:26,229 VROUTER: VRF tables limit: 4096
2018-04-23 05:05:26,229 VROUTER: Packet pool size: 16383
2018-04-23 05:05:26,229 VROUTER: Maximum packet size: 9216
2018-04-23 05:05:26,229 VROUTER: EAL arguments:
2018-04-23 05:05:26,229 VROUTER: -n "4"
2018-04-23 05:05:26,229 VROUTER: --socket-mem "1024"
2018-04-23 05:05:26,229 VROUTER: --lcores "(0-2)@(0-39),(8-9)@(0-39),10@0"
2018-04-23 05:05:26,233 EAL: Detected 40 lcore(s)
2018-04-23 05:05:26,281 EAL: No free hugepages reported in hugepages-1048576kB
2018-04-23 05:05:26,282 EAL: Probing VFIO support...
2018-04-23 05:05:26,844 EAL: PCI device 0000:06:00.0 on NUMA socket 0
2018-04-23 05:05:26,844 EAL: probe driver: 8086:1528 net_ixgbe
2018-04-23 05:05:26,995 EAL: PCI device 0000:06:00.1 on NUMA socket 0
2018-04-23 05:05:26,995 EAL: probe driver: 8086:1528 net_ixgbe
2018-04-23 05:05:27,008 VROUTER: Found 1 eth device(s)
2018-04-23 05:05:27,008 VROUTER: Using 1 forwarding lcore(s)
2018-04-23 05:05:27,008 VROUTER: Using 0 IO lcore(s)
2018-04-23 05:05:27,008 VROUTER: Using 5 service lcores
2018-04-23 05:05:27,008 VROUTER: Max HOLD flow entries set to 1000
2018-04-23 05:05:27,008 VROUTER: set fd limit to 4864 (prev 65536, max 65536)
2018-04-23 05:05:27,024 VROUTER: Starting NetLink...
2018-04-23 05:05:27,025 VROUTER: Lcore 10: distributing MPLSoGRE packets to []
2018-04-23 05:05:27,025 UVHOST: Starting uvhost server...
2018-04-23 05:05:27,025 UVHOST: server event FD is 31
2018-04-23 05:05:27,025 UVHOST: server socket FD is 32
2018-04-23 05:05:27,025 USOCK: usock_alloc[7fb9789fb700]: new socket FD 33
2018-04-23 05:05:27,026 USOCK: usock_alloc[7fb9789fb700]: setting socket FD 33 send buff size.
Buffer size set to 18320000 (requested 9216000)
2018-04-23 05:05:27,026 VROUTER: NetLink TCP socket FD is 33
2018-04-23 05:05:27,026 VROUTER: uvhost Unix socket FD is 34
2018-04-23 05:05:27,026 UVHOST: Handling connection FD 32...
2018-04-23 05:05:27,026 UVHOST: FD 32 accepted new NetLink connection FD 35
2018-04-23 05:05:28,204 VROUTER: Adding vif 0 (gen. 1) eth device 0 (PMD) MAC a0:36:9f:e1:e6:18 (vif MAC a0:36:9f:e1:e6:18)
2018-04-23 05:05:28,204 VROUTER: Using 64 TX queues, 1 RX queues
2018-04-23 05:05:28,204 VROUTER: setup 1 RSS queue(s) and 0 filtering queue(s)
2018-04-23 05:05:28,364 PMD: ixgbe_dev_link_status_print(): Port 0: Link Down
2018-04-23 05:05:28,364 VROUTER: lcore 10 TX to HW queue 0
2018-04-23 05:05:28,364 VROUTER: lcore 8 TX to HW queue 1
2018-04-23 05:05:28,364 VROUTER: lcore 9 TX to HW queue 2
2018-04-23 05:05:28,364 VROUTER: lcore 10 RX from HW queue 0
2018-04-23 05:05:28,380 VROUTER: Adding vif 1 (gen. 2) device vhost0 at eth device 0 MAC a0:36:9f:e1:e6:18 (vif MAC a0:36:9f:e1:e6:18)
2018-04-23 05:05:28,380 VROUTER: initializing KNI with 16 maximum interfaces
2018-04-23 05:05:28,381 VROUTER: bind KNI kernel thread to CPU 1
2018-04-23 05:05:28,381 KNI: pci: 06:00:00 8086:1528
2018-04-23 05:05:28,381 VROUTER: lcore 10 TX to HW queue 0
2018-04-23 05:05:28,381 VROUTER: lcore 8 TX queue 0 to SW ring in lcore 10
2018-04-23 05:05:28,381 VROUTER: lcore 10 now has 1 ring(s) to push
2018-04-23 05:05:28,381 VROUTER: lcore 9 TX queue 0 to SW ring in lcore 10
2018-04-23 05:05:28,381 VROUTER: lcore 10 now has 2 ring(s) to push
2018-04-23 05:05:28,381 VROUTER: lcore 10 RX from HW queue 0
2018-04-23 05:05:28,383 VROUTER: Configuring eth device 0 UP
2018-04-23 05:05:28,755 VROUTER: Configuring eth device 0 DOWN
2018-04-23 05:05:29,022 VROUTER: Configuring eth device 0 UP
2018-04-23 05:05:29,178 PMD: ixgbe_dev_link_status_print(): Port 0: Link Down
2018-04-23 05:05:41,906 DPCORE: vrouter soft reset start
2018-04-23 05:05:41,907 VROUTER: Deleting vif 0 eth device
2018-04-23 05:05:41,907 VROUTER: releasing lcore 8 TX queue 0
2018-04-23 05:05:41,907 VROUTER: releasing lcore 9 TX queue 0
2018-04-23 05:05:41,907 VROUTER: releasing lcore 10 RX queue
2018-04-23 05:05:41,907 VROUTER: releasing lcore 10 TX queue 0
2018-04-23 05:05:42,034 VROUTER: Deleting vif 1 device
2018-04-23 05:05:42,034 VROUTER: releasing lcore 8 TX queue 0
2018-04-23 05:05:42,035 VROUTER: releasing lcore 9 TX queue 0
2018-04-23 05:05:42,035 VROUTER: releasing lcore 10 RX queue
2018-04-23 05:05:42,035 VROUTER: releasing lcore 10 TX queue 0
2018-04-23 05:05:42,035 VROUTER: releasing vif 1 KNI device
2018-04-23 05:05:45,097 DPCORE: vrouter soft reset done (0)
2018-04-23 05:05:45,153 VROUTER: Adding vif 0 (gen. 3) eth device 0 PCI 0000:06:00.0 MAC a0:36:9f:e1:e6:18 (vif MAC a0:36:9f:e1:e6:18)
2018-04-23 05:05:45,153 VROUTER: Using 64 TX queues, 1 RX queues
2018-04-23 05:05:45,153 VROUTER: setup 1 RSS queue(s) and 0 filtering queue(s)
2018-04-23 05:05:45,310 PMD: ixgbe_dev_link_status_print(): Port 0: Link Down
2018-04-23 05:05:45,310 VROUTER: lcore 10 TX to HW queue 0
2018-04-23 05:05:45,310 VROUTER: lcore 8 TX to HW queue 1
2018-04-23 05:05:45,310 VROUTER: lcore 9 TX to HW queue 2
2018-04-23 05:05:45,310 VROUTER: lcore 10 RX from HW queue 0
2018-04-23 05:05:45,310 VROUTER: Adding vif 1 (gen. 4) device vhost0 at eth device 0 MAC a0:36:9f:e1:e6:18 (vif MAC a0:36:9f:e1:e6:18)
2018-04-23 05:05:45,311 VROUTER: bind KNI kernel thread to CPU 1
2018-04-23 05:05:45,311 KNI: pci: 06:00:00 8086:1528
2018-04-23 05:05:45,311 VROUTER: lcore 10 TX to HW queue 0
2018-04-23 05:05:45,311 VROUTER: lcore 8 TX queue 0 to SW ring in lcore 10
2018-04-23 05:05:45,311 VROUTER: lcore 10 now has 1 ring(s) to push
2018-04-23 05:05:45,311 VROUTER: lcore 9 TX queue 0 to SW ring in lcore 10
2018-04-23 05:05:45,311 VROUTER: lcore 10 now has 2 ring(s) to push
2018-04-23 05:05:45,311 VROUTER: lcore 10 RX from HW queue 0
2018-04-23 05:05:45,311 VROUTER: Adding vif 2 (gen. 5) packet device unix
2018-04-23 05:05:45,311 USOCK: usock_alloc[7fb9789fb700]: new socket FD 39
2018-04-23 05:05:45,311 USOCK: usock_alloc[7fb9789fb700]: setting socket FD 39 send buff size.
Buffer size set to 18320000 (requested 9216000)

[root@Contrailcp1 ~]# docker logs d7a5e524fc60
INFO: agent started in dpdk mode
Device "vhost0" does not exist.
INFO: wait DPDK agent to run... 1
INFO: wait DPDK agent to run... 2
INFO: creating vhost0 for dpdk mode: nic: ens3f0, mac=a0:36:9f:e1:e6:18
INFO: Creating ens3f0 interface with mac a0:36:9f:e1:e6:18 via vif utility...
INFO: Adding vhost0 interface with vif utility...
INFO: creating ifcfg-vhost0 and initialize it via ifup
/etc/sysconfig/network-scripts /
/
INFO: Physical interface: ens3f0, mac=a0:36:9f:e1:e6:18, pci=0000:06:00.0
INFO: vhost0 cidr 192.168.10.40/24, gateway 192.168.10.1
INFO: Preparing /etc/contrail/contrail-vrouter-agent.conf
INFO: /etc/contrail/contrail-vrouter-agent.conf
[CONTROL-NODE]
servers=192.168.10.20:5269

[DEFAULT]
collectors=192.168.10.20:8086
log_file=/var/log/contrail/contrail-vrouter-agent.log
log_level=SYS_NOTICE
log_local=1

xmpp_dns_auth_enable=False
xmpp_auth_enable=False

platform=dpdk
physical_interface_mac=a0:36:9f:e1:e6:18
physical_interface_address=0000:06:00.0
physical_uio_driver=uio_pci_generic

tsn_servers = []

[SANDESH]
introspect_ssl_enable=False
sandesh_ssl_enable=False

[NETWORKS]
control_network_ip=10.240.230.40

[DNS]
servers=192.168.10.20:53

[METADATA]
metadata_proxy_secret=contrail

[VIRTUAL-HOST-INTERFACE]
name=vhost0
ip=192.168.10.40/24
physical_interface=ens3f0
gateway=192.168.10.1
compute_node_address=192.168.10.40

[SERVICE-INSTANCE]
netns_command=/usr/bin/opencontrail-vrouter-netns
docker_command=/usr/bin/opencontrail-vrouter-docker

[HYPERVISOR]
type = kvm

[FLOWS]
fabric_snat_hash_table_size = 4096
INFO: Preparing /etc/contrail/contrail-lbaas-auth.conf
Config file </etc/contrail/contrail-vrouter-agent.conf> parsing completed.
log4cplus:ERROR No appenders could be found for logger (SANDESH).
log4cplus:ERROR Please initialize the log4cplus system properly.

### Interface "ens3f0" is used by vhost0.
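
The agent log above shows that the entrypoint creates ifcfg-vhost0 and brings it up via ifup. The generated file under /etc/sysconfig/network-scripts should look roughly like this (a sketch built from the addresses in the log; the exact set of keys the script writes is an assumption):

  # /etc/sysconfig/network-scripts/ifcfg-vhost0 (illustrative)
  DEVICE=vhost0
  ONBOOT=yes
  BOOTPROTO=static
  IPADDR=192.168.10.40
  NETMASK=255.255.255.0
  GATEWAY=192.168.10.1
  NM_CONTROLLED=no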

[root@Contrailcp2 bin]# ./dpdk_nic_bind.py -s

Network devices using DPDK-compatible driver
============================================
0000:06:00.0 'Ethernet Controller 10-Gigabit X540-AT2' drv=uio_pci_generic unused=ixgbe

Network devices using kernel driver
===================================
0000:06:00.1 'Ethernet Controller 10-Gigabit X540-AT2' if=ens3f1 drv=ixgbe unused=uio_pci_generic
0000:11:00.0 'NetXtreme BCM5719 Gigabit Ethernet PCIe' if=eno1 drv=tg3 unused=uio_pci_generic *Active*
0000:11:00.1 'NetXtreme BCM5719 Gigabit Ethernet PCIe' if=eno2 drv=tg3 unused=uio_pci_generic
0000:11:00.2 'NetXtreme BCM5719 Gigabit Ethernet PCIe' if=eno3 drv=tg3 unused=uio_pci_generic
0000:11:00.3 'NetXtreme BCM5719 Gigabit Ethernet PCIe' if=eno4 drv=tg3 unused=uio_pci_generic

Other network devices
=====================
<none>
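
If the port needs to be handed back to the kernel driver for debugging (for example, to verify basic connectivity through ens3f0 without DPDK), the same utility can rebind it; a sketch, assuming it is run from the same directory as above:

  ./dpdk_nic_bind.py --unbind 0000:06:00.0                 # detach from uio_pci_generic
  ./dpdk_nic_bind.py --bind=ixgbe 0000:06:00.0             # return the port to the kernel ixgbe driver
  ./dpdk_nic_bind.py --bind=uio_pci_generic 0000:06:00.0   # rebind for DPDK afterwards
  ./dpdk_nic_bind.py -s                                    # confirm the current bindings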
[root@Contrailcp2 bin]# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT qlen 1000
    link/ether 08:94:ef:64:12:26 brd ff:ff:ff:ff:ff:ff
3: eno2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
    link/ether 08:94:ef:64:12:27 brd ff:ff:ff:ff:ff:ff
4: eno3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
    link/ether 08:94:ef:64:12:28 brd ff:ff:ff:ff:ff:ff
5: eno4: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
    link/ether 08:94:ef:64:12:29 brd ff:ff:ff:ff:ff:ff
7: ens3f1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
    link/ether a0:36:9f:e1:ef:f2 brd ff:ff:ff:ff:ff:ff
8: enp0s20u13u5: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
    link/ether 0a:94:ef:64:12:2d brd ff:ff:ff:ff:ff:ff
9: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
    link/ether 26:63:32:c5:44:be brd ff:ff:ff:ff:ff:ff
10: vxlan_sys_4789: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65470 qdisc noqueue master ovs-system state UNKNOWN mode DEFAULT qlen 1000
    link/ether 0a:10:07:d6:92:59 brd ff:ff:ff:ff:ff:ff
11: br-tun: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
   link/ether 62:8a:01:63:5f:4c brd ff:ff:ff:ff:ff:ff
12: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
    link/ether 96:e3:77:a6:2e:47 brd ff:ff:ff:ff:ff:ff
13: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT
    link/ether 02:42:7b:e1:b5:31 brd ff:ff:ff:ff:ff:ff
15: vhost0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN mode DEFAULT qlen 1000
    link/ether ea:16:c4:01:21:36 brd ff:ff:ff:ff:ff:ff

Our instances.yaml is:
===========================================
CONTAINER_REGISTRY: 10.240.133.121:6666
REGISTRY_PRIVATE_INSECURE: True
provider_config:
  bms:
    ssh_pwd: blade123
    ssh_user: root
    ntpserver: 10.240.237.65
    domainsuffix: local
instances:
  centos1:
    provider: bms
    ip: 10.240.230.10
    roles:
      openstack:
  centos2:
    provider: bms
    ip: 10.240.230.20
    roles:
      config_database:
      config:
      control:
      analytics_database:
      analytics:
      webui:
  centos3:
    provider: bms
    ip: 10.240.230.40
    roles:
      vrouter:
        AGENT_MODE: dpdk
        #HUGE_PAGES: 512
        #DPDK_MEM_PER_SOCKET: 512
      openstack_compute:
  centos4:
    provider: bms
    ip: 10.240.230.60
    roles:
      vrouter:
        AGENT_MODE: dpdk
        #HUGE_PAGES: 512
        #DPDK_MEM_PER_SOCKET: 512
      openstack_compute:
contrail_configuration:
  CONTAINER_REGISTRY: 10.240.133.121:6666
  CONFIG_API_VIP: 192.168.10.42
  CONTRAIL_VERSION: dev
  OPENSTACK_VERSION: newton
  CLOUD_ORCHESTRATOR: openstack
  #UPGRADE_KERNEL: true
  CONTROL_DATA_NET_LIST: 192.168.10.0/24
  RABBITMQ_NODE_PORT: 5673
  VROUTER_GATEWAY: 192.168.10.1
  PHYSICAL_INTERFACE: ens3f0
  AUTH_MODE: keystone
  #SSL_ENABLE: true
  KEYSTONE_AUTH_HOST: 10.240.230.10
  #KEYSTONE_AUTH_PROTO: https
  KEYSTONE_AUTH_URL_VERSION: /v3
  KEYSTONE_AUTH_ADMIN_USER: admin
  KEYSTONE_AUTH_ADMIN_PASSWORD: blade123
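
(If the UIO driver needs to be pinned explicitly in the vrouter role instead of relying on KNI, the setting would sit next to AGENT_MODE; a sketch only, and the DPDK_UIO_DRIVER key name is an assumption that should be checked against the deployer documentation:)

      vrouter:
        AGENT_MODE: dpdk
        DPDK_UIO_DRIVER: uio_pci_generic   # key name is an assumption; vfio-pci is the other option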

Mark Zhu (markzhu) on 2018-04-23
information type: Proprietary → Public
Jeba Paulaiyan (jebap) on 2018-04-23
tags: added: dpdk provisioning
Sachchidanand Vaidya (vaidyasd) wrote :

The bug happened on the master branch; R5.0 is GA and no such issue is seen there.

Jeya ganesh babu J (jjeya) wrote :

KNI is not supported on any platform for 5.0 and master. Either use uio_pci_generic or vfio-pci. Marking the bug as invalid.
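
(For reference, switching to one of the supported drivers amounts to something like the following; a sketch, with vfio-pci assuming an IOMMU-capable host and using the same dpdk_nic_bind.py utility shown earlier:)

  modprobe vfio-pci                                  # or: modprobe uio_pci_generic
  ./dpdk_nic_bind.py --bind=vfio-pci 0000:06:00.0    # PCI address from the first report
  ./dpdk_nic_bind.py -s                              # verify the binding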

Distel FOGUE (10tel) wrote :

Hello, I'm building a setup on RHOSP13 GA and Contrail 5.0.1 (5.0.1-0.214-rhel-queens) with 2 regular compute nodes and 1 DPDK compute node.
The DPDK compute node is not able to ping the other compute/controller nodes.

I'm using uio_pci_generic driver for contrail vrouter.

On the dpdk compute node:
[heat-admin@vdmblppck1 ~]$ sudo docker exec -it 93682462b27b bash -c "cat /etc/contrail/contrail-vrouter-agent.conf"
[CONTROL-NODE]
servers=10.164.228.91:5269 10.164.228.92:5269 10.164.228.93:5269

[DEFAULT]
collectors=10.164.227.96:8086 10.164.227.97:8086 10.164.227.98:8086
log_file=/var/log/contrail/contrail-vrouter-agent.log
log_level=SYS_NOTICE
log_local=1

xmpp_dns_auth_enable=False
xmpp_auth_enable=False

platform=dpdk
physical_interface_mac=d0:43:1e:aa:d6:c6
physical_interface_address=0000:01:00.1,0000:04:00.0
physical_uio_driver=uio_pci_generic

tsn_servers = []

[SANDESH]
introspect_ssl_enable=False
sandesh_ssl_enable=False

[NETWORKS]
control_network_ip=10.164.228.126

[DNS]
servers=10.164.228.91:53 10.164.228.92:53 10.164.228.93:53

[METADATA]
metadata_proxy_secret=Nn7KarbJA7FUgCzE7H2pFp387

[VIRTUAL-HOST-INTERFACE]
name=vhost0
ip=10.164.228.126/24
physical_interface=bond0
gateway=10.164.228.1
compute_node_address=10.164.228.126

[SERVICE-INSTANCE]
netns_command=/usr/bin/opencontrail-vrouter-netns
docker_command=/usr/bin/opencontrail-vrouter-docker

[HYPERVISOR]
type = kvm

[FLOWS]
fabric_snat_hash_table_size = 4096

[SESSION_DESTINATION]
slo_destination = collector
sample_destination = collector

====vif --list on the dpdk compute node====================
On the same compute node, I can only see the vif0/2 interface:
Vrouter Interface Table

Flags: P=Policy, X=Cross Connect, S=Service Chain, Mr=Receive Mirror
       Mt=Transmit Mirror, Tc=Transmit Checksum Offload, L3=Layer 3, L2=Layer 2
       D=DHCP, Vp=Vhost Physical, Pr=Promiscuous, Vnt=Native Vlan Tagged
       Mnp=No MAC Proxy, Dpdk=DPDK PMD Interface, Rfl=Receive Filtering Offload, Mon=Interface is Monitored
       Uuf=Unknown Unicast Flood, Vof=VLAN insert/strip offload, Df=Drop New Flows, L=MAC Learning Enabled
       Proxy=MAC Requests Proxied Always, Er=Etree Root, Mn=Mirror without Vlan Tag, Ig=Igmp Trap Enabled

vif0/2 Socket: unix
            Type:Agent HWaddr:00:00:5e:00:01:00 IPaddr:0.0.0.0
            Vrf:65535 Mcast Vrf:65535 Flags:L3Er QOS:-1 Ref:3
            RX port packets:2903 errors:0
            RX queue errors to lcore 0 0 0 0 0 0 0 0 0 0 0
            RX packets:2903 bytes:249658 errors:2903
            TX packets:0 bytes:0 errors:0
            Drops:2903
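
(To check whether the physical and vhost interfaces were ever created, the missing vifs can be queried directly; a sketch, run wherever the vif utility is available, e.g. inside the agent container:)

  vif --list        # full listing; only vif0/2 shows up here
  vif --get 0       # vif0/0, the physical/bond port, absent in the output above
  vif --get 1       # vif0/1, vhost0, absent as well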

==========vrouter processes running==============

[root@vdmblppck1 ~]# ps -aef | grep vrouter
root 18464 18447 0 Sep07 ? 00:00:00 /bin/bash /entrypoint.sh /usr/bin/contrail-vrouter-dpdk
root 18654 18464 99 Sep07 ? 3-00:59:43 /usr/bin/contrail-vrouter-dpdk --no-daemon --socket-mem 1024 1024 --vlan_tci 1102 --vlan_fwd_intf_name bond0 --vdev eth_bond_bond0,mode=4,xmit_policy=l23,socket_id=0,mac=d0:43:1e:aa:d6:c6,slave=0000:01:00.1,slave=0000:04:00.0
root 669430 669413 0 10:02 ? 00:00:30 /usr/bin/python /usr/bin/contrail-nodemgr --nodetype=contrail-vrouter
root ...
