Cannot ping a VM in one network from a VM in another network over IPv6

Bug #1642906 reported by Kristina Berezovskaia
This bug affects 1 person
Affects              Status   Importance  Assigned to
Mirantis OpenStack (status tracked in 10.0.x)
  10.0.x             Invalid  High        Kristina Berezovskaia
  9.x                Invalid  High        Kristina Berezovskaia

Bug Description

Detailed bug description:
A VM in one network cannot ping a VM in another network. Both networks have IPv6 subnets and are connected to each other by a router.

Steps to reproduce:
1) Create net06
neutron net-create net06
2) Create net07
neutron net-create net07
3) Create 2 new IPv6 address-scopes and subnetpools
neutron address-scope-create --shared address-scope-ip6_6 6
neutron subnetpool-create --address-scope address-scope-ip6_6 --pool-prefix 2001:db8:4321:46::/64 --default-prefixlen 64 public-pool_6

neutron address-scope-create --shared address-scope-ip6_7 6
neutron subnetpool-create --address-scope address-scope-ip6_7 --pool-prefix 2001:db8:4321:47::/64 --default-prefixlen 64 public-pool_7
4) Create 2 new IPv6 subnets for net06 and net07
neutron subnet-create net06 --name subnet__net06_ipv6 --ip_version 6 --ipv6_ra_mode dhcpv6-stateless --ipv6_address_mode dhcpv6-stateless --subnetpool public-pool_6
neutron subnet-create net07 --name subnet__net07_ipv6 --ip_version 6 --ipv6_ra_mode dhcpv6-stateless --ipv6_address_mode dhcpv6-stateless --subnetpool public-pool_7
5) Create router
neutron router-create router06_07
6) Set the gateway
neutron router-gateway-set router06_07 admin_floating_net
7) Add interfaces to the router for the two IPv6 subnets
neutron router-interface-add router06_07 subnet__net06_ipv6
neutron router-interface-add router06_07 subnet__net07_ipv6
8) Boot a VM in net06 and a VM in net07
nova boot vm_c_2 --flavor 1 --image TestVM --nic net-name=net07 --security-groups 5aa59a86-f6af-41c3-88c8-ad024344e666 --key-name vm_key
nova boot vm_c_1 --flavor 1 --image TestVM --nic net-name=net06 --security-groups 5aa59a86-f6af-41c3-88c8-ad024344e666 --key-name vm_key
9) SSH to one VM via its DHCP namespace
ip netns exec qdhcp-0c0b0467-4817-45a4-bc08-327245e2bd4c ssh -6 cirros@2001:db8:4321:46:f816:3eff:fed5:ad55
10) Ping the other VM (ping6 <IPv6 address>)
Expected result: ping succeeds
Actual result: ping fails

Description of the environment:
https://product-ci.infra.mirantis.net/job/9.x.snapshot/465/artifact/snapshots.params/*view*/
CUSTOM_VERSION=snapshot #465
SNAPSHOT_TIMESTAMP=1478088018
MAGNET_LINK=magnet:?xt=urn:btih:bfec808dd71ff42c5613a3527733d9012bb1fabc&dn=MirantisOpenStack-9.0.iso&tr=http%3A%2F%2Ftracker01-bud.infra.mirantis.net%3A8080%2Fannounce&tr=http%3A%2F%2Ftracker01-scc.infra.mirantis.net%3A8080%2Fannounce&tr=http%3A%2F%2Ftracker01-msk.infra.mirantis.net%3A8080%2Fannounce&ws=http%3A%2F%2Fvault.infra.mirantis.net%2FMirantisOpenStack-9.0.iso
FUEL_QA_COMMIT=11155b17cc383ddd6cac28b9a093db49a39cc964
UBUNTU_MIRROR_ID=ubuntu-2016-09-14-213640
CENTOS_MIRROR_ID=centos-7.2.1511-2016-05-31-083834
MOS_UBUNTU_MIRROR_ID=9.0-2016-11-02-104321
MOS_CENTOS_OS_MIRROR_ID=os-2016-06-23-135731
MOS_CENTOS_PROPOSED_MIRROR_ID=proposed-2016-11-02-092322
MOS_CENTOS_UPDATES_MIRROR_ID=updates-2016-10-12-105639
MOS_CENTOS_HOLDBACK_MIRROR_ID=holdback-2016-06-23-140047
MOS_CENTOS_HOTFIX_MIRROR_ID=hotfix-2016-09-23-124321
MOS_CENTOS_SECURITY_MIRROR_ID=security-2016-06-23-140002

Additional information:
The results are the same on a DPDK env and without DPDK. It looks like there are no routes to the other networks; if the VMs are in the same network, everything is OK.
I checked the following ipv6_ra_mode/ipv6_address_mode combinations, with the results below:
1) ipv6_ra_mode=not set; ipv6_address_mode=slaac - can't even log in to the VM
2) ipv6_ra_mode=not set; ipv6_address_mode=dhcpv6-stateful - can't even log in to the VM
3) ipv6_ra_mode=not set; ipv6_address_mode=dhcpv6-stateless - can't even log in to the VM
4) ipv6_ra_mode=slaac; ipv6_address_mode=not set - can't even log in to the VM
5) ipv6_ra_mode=dhcpv6-stateful; ipv6_address_mode=not set - can't even log in to the VM
6) ipv6_ra_mode=dhcpv6-stateless; ipv6_address_mode=not set - can log in to the VM, but can't ping the VM in the second net
7) ipv6_ra_mode=slaac; ipv6_address_mode=slaac - can log in to the VM, but can't ping the VM in the second net
8) ipv6_ra_mode=dhcpv6-stateful; ipv6_address_mode=dhcpv6-stateful - can't even log in to the VM
9) ipv6_ra_mode=dhcpv6-stateless; ipv6_address_mode=dhcpv6-stateless - can log in to the VM, but can't ping the VM in the second net

Tags: area-neutron
Changed in mos:
assignee: nobody → MOS Neutron (mos-neutron)
description: updated
Changed in mos:
importance: Undecided → High
Revision history for this message
Sean M. Collins (scollins) wrote :

Test scenarios #1,#2,#3 are to be expected. You did not set ipv6_ra_mode, and based on your configuration you will need to.

Same with #4,#5,#6 -

Please consult the networking guide about the details why -

Overall, please just test subnets with ipv6_address_mode and ipv6_ra_mode set - you will be saving yourself quite a bit of time.
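Following that advice, a subnet created with both attributes set might look like this (a sketch reusing the reporter's net06/public-pool_6 names; slaac/slaac is shown as one valid matching combination):

```shell
# Create the IPv6 subnet with BOTH ipv6_ra_mode and ipv6_address_mode set.
# The two attributes must be a valid pair (e.g. slaac/slaac,
# dhcpv6-stateless/dhcpv6-stateless, dhcpv6-stateful/dhcpv6-stateful).
neutron subnet-create net06 \
  --name subnet__net06_ipv6 \
  --ip_version 6 \
  --ipv6_ra_mode slaac \
  --ipv6_address_mode slaac \
  --subnetpool public-pool_6
```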

Revision history for this message
Sean M. Collins (scollins) wrote :

Also, do not use the 2001:db8 prefix - it is meant for documentation purposes only. Ideally, you should have IPv6 infrastructure, such as a routed prefix, available for both tenants to use.

Revision history for this message
Sean M. Collins (scollins) wrote :

Please dump the routing table for the namespaces that the l3 agent is managing
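A sketch of how to collect that, run on the node hosting the l3 agent (the router UUID placeholder is hypothetical):

```shell
# List the router namespaces managed by the l3 agent, then dump the
# IPv6 routing table of each qrouter namespace.
ip netns list | grep qrouter
ip netns exec qrouter-<router-uuid> ip -6 route show
```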

Revision history for this message
Kevin Benton (kevinbenton) wrote :

This is expected behavior. When you have two subnets that are part of different address scopes, that's a way of saying they are part of different routing domains and there may be overlapping IPs between them etc.

A neutron router won't route between internal subnets that are members of different address scopes. You can observe this same behavior with ipv4 address scopes as well.
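Given that, one way to get routing between the two subnets (a sketch, untested here, using hypothetical names) is to allocate both prefixes from a pool in a single shared address scope instead of two separate scopes:

```shell
# One shared address scope; both prefixes in one subnet pool, so the
# router treats the two subnets as the same routing domain.
neutron address-scope-create --shared address-scope-ip6 6
neutron subnetpool-create --address-scope address-scope-ip6 \
  --pool-prefix 2001:db8:4321:46::/64 --pool-prefix 2001:db8:4321:47::/64 \
  --default-prefixlen 64 shared-pool_6
neutron subnet-create net06 --name subnet__net06_ipv6 --ip_version 6 \
  --ipv6_ra_mode dhcpv6-stateless --ipv6_address_mode dhcpv6-stateless \
  --subnetpool shared-pool_6
neutron subnet-create net07 --name subnet__net07_ipv6 --ip_version 6 \
  --ipv6_ra_mode dhcpv6-stateless --ipv6_address_mode dhcpv6-stateless \
  --subnetpool shared-pool_6
```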

Revision history for this message
Kristina Berezovskaia (kkuznetsova) wrote :

Re-checked the preparation steps using stateful mode and the other modes. It works now. Marking the bug as invalid.
