Impossible to ping a VM in one net from a VM in another net over IPv6
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Mirantis OpenStack | Status tracked in 10.0.x | | |
10.0.x | Invalid | High | Kristina Berezovskaia |
9.x | Invalid | High | Kristina Berezovskaia |
Bug Description
Detailed bug description:
Can't ping a VM in one net from a VM in another net. Both nets have IPv6 subnets and are connected to each other by a router.
Steps to reproduce:
1) Create net06
2) Create net07
neutron net-create net06
neutron net-create net07
3) Create 2 new IPv6 address-scopes and subnetpools (the commands are truncated here; see the sketch after the steps)
neutron address-
neutron subnetpool-create --address-scope address-scope-ip6_6 --pool-prefix 2001:db8:
neutron address-
neutron subnetpool-create --address-scope address-scope-ip6_7 --pool-prefix 2001:db8:
4) Create 2 new IPv6 subnets for net06 and net07
neutron subnet-create net06 --name subnet__net06_ipv6 --ip-version 6 --ipv6-ra-mode dhcpv6-stateless --ipv6-address-mode dhcpv6-stateless --subnetpool public-pool_6
neutron subnet-create net07 --name subnet__net07_ipv6 --ip-version 6 --ipv6-ra-mode dhcpv6-stateless --ipv6-address-mode dhcpv6-stateless --subnetpool public-pool_7
5) Create router
neutron router-create router06_07
6) Set gateway
neutron router-gateway-set router06_07 admin_floating_net
7) Add interfaces to the router for subnet__net06_ipv6 and subnet__net07_ipv6 (commands truncated; see the sketch after the steps)
neutron router-
neutron router-
8) Boot a VM in net06 and a VM in net07
nova boot vm_c_2 --flavor 1 --image TestVM --nic net-name=net07 --security-groups 5aa59a86-
nova boot vm_c_1 --flavor 1 --image TestVM --nic net-name=net06 --security-groups 5aa59a86-
9) Go to one VM through the dhcp namespace (see the sketch below)
ip netns exec qdhcp-0c0b0467-
10) Ping the other VM (ping6 <IPv6 address>)
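Step 3's commands are truncated above. A minimal sketch of what they presumably looked like, following the usual address-scope workflow; the pool prefixes (2001:db8:6::/48, 2001:db8:7::/48) and the --default-prefixlen value are assumptions, while the scope and pool names come from steps 3 and 4:

neutron address-scope-create address-scope-ip6_6 6
neutron subnetpool-create --address-scope address-scope-ip6_6 --pool-prefix 2001:db8:6::/48 --default-prefixlen 64 public-pool_6   # prefix assumed
neutron address-scope-create address-scope-ip6_7 6
neutron subnetpool-create --address-scope address-scope-ip6_7 --pool-prefix 2001:db8:7::/48 --default-prefixlen 64 public-pool_7   # prefix assumed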
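Step 7's commands are also truncated; assuming the router and subnet names used above, they would be:

neutron router-interface-add router06_07 subnet__net06_ipv6
neutron router-interface-add router06_07 subnet__net07_ipv6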
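For steps 9 and 10, a sketch of the whole check; the namespace suffix, the VM addresses, and the cirros login (the usual user of the CirrOS-based TestVM image) are placeholders or assumptions:

nova list                                        # look up the IPv6 addresses assigned to vm_c_1 and vm_c_2
ip netns exec qdhcp-<net06-id> ssh cirros@<vm_c_1 IPv6 address>
ping6 <vm_c_2 IPv6 address>                      # run inside vm_c_1; this is the ping that fails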
Expected results: ping succeeds
Actual results: ping fails
Description of the environment:
https:/
CUSTOM_
SNAPSHOT_
MAGNET_
FUEL_QA_
UBUNTU_
CENTOS_
MOS_UBUNTU_
MOS_CENTOS_
Additional information:
The results are the same with and without DPDK. It looks as though the VMs have no routes to the other nets; when both VMs are in the same net, ping works.
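One way to confirm the missing-routes theory is to compare the router's and a VM's IPv6 routing tables (a sketch; the qrouter namespace suffix is a placeholder):

ip netns exec qrouter-<router06_07-id> ip -6 route   # the router should show connected routes for both subnets
ip -6 route                                          # inside a VM: there should be a default route via the router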
I checked several combinations of ipv6_ra_mode and ipv6_address_mode; the results are below:
1) ipv6_ra_mode=not set; ipv6-address-
2) ipv6_ra_mode=not set; ipv6-address-
3) ipv6_ra_mode=not set; ipv6-address-
4) ipv6_ra_mode=slaac; ipv6-address-
5) ipv6_ra_
6) ipv6_ra_
7) ipv6_ra_mode=slaac; ipv6-address-
8) ipv6_ra_
9) ipv6_ra_
Changed in mos:
assignee: nobody → MOS Neutron (mos-neutron)
description: updated
Changed in mos:
importance: Undecided → High
Test scenarios #1, #2 and #3 are to be expected: you did not set ipv6_ra_mode, and with your configuration you will need to.
Same with #4, #5 and #6.
Please consult the networking guide for the details of why.
Overall, please just test subnets with both ipv6_address_mode and ipv6_ra_mode set - you will be saving yourself quite a bit of time.
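For reference, a subnet created with both attributes set, reusing the names from this report (a sketch; slaac is shown, dhcpv6-stateless works the same way):

neutron subnet-create net06 --name subnet__net06_ipv6 --ip-version 6 --ipv6-ra-mode slaac --ipv6-address-mode slaac --subnetpool public-pool_6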