[DPDK] Instances do not have network connectivity if bond is configured for private network

Bug #1566358 reported by Artem Panchenko
Affects              Status       Importance   Assigned to
Fuel for OpenStack   Confirmed    High         Alexander Adamov
Mitaka               Incomplete   High         Alexander Adamov

Bug Description

Fuel version info (9.0 build #157): http://paste.openstack.org/show/493002/

Instances aren't reachable over the network if the private network is assigned to a bond (active-backup) and DPDK is enabled:

<<<<<##############################[ deploy_cluster_with_dpdk_bond ]##############################>>>>>
Deploy cluster with DPDK, active-backup bonding and Neutron VLAN

Failed 2 OSTF tests; should fail 0 tests. Names of failed tests:
  - Check network connectivity from instance via floating IP (failure) Instance is not reachable by IP. Please refer to OpenStack logs for more details.
  - Launch instance with file injection (failure) Execution command on Instance fails with unexpected result. Please refer to OpenStack logs for more details.

Steps to reproduce:

            1. Create cluster (VLAN, KVM)
            2. Add 1 node with controller role
            3. Add 1 node with compute role and 1 node with cinder role
            4. Set up bonding for all interfaces (including admin interface
               bonding)
            5. Enable DPDK for the bond carrying the private network and
               configure huge pages
            6. Run network verification
            7. Deploy the cluster
            8. Run network verification
            9. Run OSTF

Expected result: all health checks passed

Actual result: tests for network connectivity from the instance failed

Additional info (I booted an instance with a floating IP):

http://paste.openstack.org/show/493008/

Also, I ran ping on the controller (`ip netns exec qdhcp-d5089b5f-d300-4d8e-8347-927a15e2a0bb ping 192.168.111.9`) and sniffed traffic on the compute node, but saw no packets from the instance:

http://paste.openstack.org/show/493009/

Revision history for this message
Artem Panchenko (apanchenko-8) wrote :
description: updated
Revision history for this message
Atsuko Ito (yottatsa) wrote :

According to the instance cmdline from http://paste.openstack.org/show/493008/, the instance was launched with memory not backed by huge pages, so vhost-user does not work and OSTF failed.

The QA team should either implement OSTF support for DPDK, or adjust the flavor before running OSTF with `nova flavor-key ${FLAVOR} set hw:mem_page_size=2048`.
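
For example, a minimal sketch of that adjustment, assuming the OSTF flavor is named m1.micro (the flavor name here is an assumption):

    # request 2 MB huge pages so guest memory is compatible with vhost-user
    nova flavor-key m1.micro set hw:mem_page_size=2048
    # verify the extra spec landed on the flavor
    nova flavor-show m1.micro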

Changed in fuel:
assignee: nobody → Fuel QA Team (fuel-qa)
tags: added: team-qa
removed: team-network
tags: added: area-qa
Atsuko Ito (yottatsa)
tags: removed: team-qa
Changed in fuel:
assignee: Fuel QA Team (fuel-qa) → Artem Panchenko (apanchenko-8)
Revision history for this message
Artem Panchenko (apanchenko-8) wrote :

OK, I'm going to check how Nova schedules instances with common flavors when the cloud has both types of computes: DPDK-enabled and default. If such instances (without huge pages) are booted on DPDK-enabled computes, where they won't be accessible via the network, then the scheduling mechanism must be improved, not the tests. Also, since DPDK works with lots of modern NICs, maybe we should force users to enable or disable DPDK on *all* computes? Then we would be able to modify the default flavors at deployment stage so that they use huge pages.

Revision history for this message
Atsuko Ito (yottatsa) wrote :

> scheduling mechanism must be improved

The scheduling mechanism could be improved by:

1. Creating host aggregates and using their attributes (see the sketch below)
2. Forbidding non-huge-page instances in some way via Nova scheduler filters

> since DPDK works with lots of modern NICs, maybe we should force users to enable or disable DPDK on *all* computes

Even with this, DPDK currently does not support MTU > 1500 or security groups, so we couldn't do it.
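
A sketch of option 1, assuming hypothetical names (dpdk-hosts, node-2.domain.tld, m1.micro.dpdk) and that AggregateInstanceExtraSpecsFilter is enabled in the scheduler:

    # group the DPDK-enabled computes into a host aggregate and tag it
    nova aggregate-create dpdk-hosts
    nova aggregate-set-metadata dpdk-hosts dpdk=true
    nova aggregate-add-host dpdk-hosts node-2.domain.tld
    # tie a huge-page flavor to the aggregate; the filter then schedules it only onto tagged hosts
    nova flavor-key m1.micro.dpdk set aggregate_instance_extra_specs:dpdk=true hw:mem_page_size=2048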

Changed in fuel:
status: New → Confirmed
Revision history for this message
Ksenia Svechnikova (kdemina) wrote :

According to the HugePage spec http://specs.openstack.org/openstack/nova-specs/specs/juno/approved/virt-driver-large-pages.html:

"The scheduler will be enhanced to take account of the page size setting on the flavor and pick hosts which have sufficient large pages available when scheduling the instance. Conversely if large pages are not requested, then the scheduler needs to avoid placing the instance on a host which has pre-reserved large pages. The enhancements for the scheduler will be done as part of the new filter that is implemented as part of the NUMA topology blueprint. This involves altering the logic done in that blueprint, so that instead of just looking at free memory in each NUMA node, it instead looks at the free page count for the desired page size."

So mechanism #2 should be implemented with the new filters (AggregateInstanceExtraSpecsFilter, NUMATopologyFilter). These filters are added to nova.conf when we enable huge pages.
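
For reference, a minimal sketch of that nova.conf change for Mitaka-era Nova (the rest of the filter list is an assumption and should match the deployment's defaults):

    [DEFAULT]
    # append the huge-page-aware filters to the scheduler filter chain
    scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,DiskFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,AggregateInstanceExtraSpecsFilter,NUMATopologyFilter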

Revision history for this message
Artem Panchenko (apanchenko-8) wrote :

I tested the case where both types of computes (with and without DPDK) are present in the cluster, and the instance connectivity test still fails, because the instance is spawned on a DPDK-enabled compute node with the default flavor 'm1.micro':

AssertionError: Failed 1 OSTF tests; should fail 0 tests. Names of failed tests:
  - Check network connectivity from instance via floating IP (failure) Instance is not reachable by IP. Please refer to OpenStack logs for more details.

>So mechanism #2 should be implemented with the new filters (AggregateInstanceExtraSpecsFilter, NUMATopologyFilter). These filters are added to nova.conf when we enable huge pages.

Using Nova availability zones or some filtering mechanism to prevent incorrect scheduling of instances with default flavors sounds like a solution to me.
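
For illustration, a sketch of the availability-zone approach, with hypothetical names (dpdk-hosts, dpdk-zone, m1.micro.dpdk; <image> and <net-id> are placeholders):

    # expose the DPDK computes as a dedicated availability zone
    nova aggregate-create dpdk-hosts dpdk-zone
    nova aggregate-add-host dpdk-hosts node-2.domain.tld
    # boot huge-page instances explicitly into that zone
    nova boot --flavor m1.micro.dpdk --image <image> --nic net-id=<net-id> --availability-zone dpdk-zone test-vm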

Changed in fuel:
assignee: Artem Panchenko (apanchenko-8) → Fuel Library Team (fuel-library)
tags: added: area-library team-network
removed: area-qa
Revision history for this message
Artem Panchenko (apanchenko-8) wrote :

Created separate bugs for OSTF: bug #1567439 and bug #1567447

Changed in fuel:
milestone: 9.0 → 10.0
Revision history for this message
Ksenia Svechnikova (kdemina) wrote :

You could create the instance with aggregates, as is done in the CPU pinning tests:

 https://github.com/openstack/fuel-qa/blob/master/fuelweb_test/tests/test_cpu_pinning.py#L79-L108

Revision history for this message
Atsuko Ito (yottatsa) wrote :

There is no way we could create aggregates, extra specs for them, and flavors in order to use AggregateInstanceExtraSpecsFilter. It is not a Fuel job right now.

In the original scenario you posted there was only one compute, so you only need the flavor modification for huge pages.
If you want to make a mixed setup (with and without DPDK) in a system test, you can now set up the aggregates and extra specs from the system test code using the Nova API.

In 10.0 we'll try to address this UX.

Changed in fuel:
assignee: Fuel Library Team (fuel-library) → Fuel QA Team (fuel-qa)
tags: added: area-qa
removed: area-library team-network
Revision history for this message
Artem Panchenko (apanchenko-8) wrote :

Vladimir,

this issue is not about our tests (in the tests we already create flavors and use availability zones to schedule instances), it's only about bad UX: after deployment the user gets instances without network connectivity due to broken scheduling. If we don't plan to fix it in 9.0, please mark it as "Won't Fix" for Mitaka.

Also, please don't assign it to the QA team if you agree that VM scheduling could be improved in the product; we have separate bugs related to the tests: bug #1567439 and bug #1567447

tags: added: area-library team-network
removed: area-qa
Changed in fuel:
assignee: Fuel QA Team (fuel-qa) → Fuel Library Team (fuel-library)
Revision history for this message
Aleksandr Didenko (adidenko) wrote :

We should probably document this for 9.0.

Revision history for this message
Svetlana Karslioglu (skarslioglu) wrote :

Please provide a detailed description of what needs to be documented.

Changed in fuel:
assignee: Fuel Library (Deprecated) (fuel-library) → Alexander Adamov (aadamov)
Dmitry Pyzhov (dpyzhov)
tags: added: docs
Revision history for this message
Alexandr Kostrikov (akostrikov-mirantis) wrote :
tags: added: swarm-fail