The bug breaks any reasonable use of ndppd if the initial description is correct; it certainly breaks the setup I'm currently evaluating. I'm using a VM to test a networking setup for an unrelated project, so it's more complicated than it would be on real hardware.
My configuration uses ndppd to establish multiple IPv6 subnets in place of the usual /64 ISP assignment for LXD containers, even though the ISP doesn't actually delegate the network to my host.
The VirtualBox VM environment is set up as follows:
- 1 ethernet adapter configured as a bridge with my physical NIC; there's a DHCP server on the same network
- 1 ethernet adapter on a host-only network, IP fc00:1234::ffff:ffff:ffff:ffff/64
The VM itself is configured as:
- Ubuntu 16.04 server x64
- The bridged nic (enp0s3) is configured to use dhcp and practically only used for apt
- The host-only NIC (enp0s8) is simulating the actual network of interest
- The nic veth1234 is connecting to the LXD container (p2p, set up by LXD)
- net.ipv6.conf.all.forwarding=1 in /etc/sysctl.conf
- LXD, bridging disabled
- ndppd installed
/etc/network/interfaces:
auto lo
iface lo inet loopback
auto enp0s3
iface enp0s3 inet dhcp
auto enp0s8
iface enp0s8 inet6 static
address fc00:1234::1/127
post-up /sbin/ip -6 route add fc00:1234::ffff:ffff:ffff:ffff dev enp0s8
pre-down /sbin/ip -6 route del fc00:1234::ffff:ffff:ffff:ffff dev enp0s8
allow-hotplug veth1234
iface veth1234 inet6 static
address fc00:1234::5:ffff/112
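To spell out the addressing plan behind these stanzas, here is a small sanity check using only Python's standard ipaddress module (the addresses and interface names are the ones from my configuration above):

```python
import ipaddress

# Prefix that ndppd is asked to proxy on enp0s8 (the whole host-only /64).
proxied = ipaddress.ip_network("fc00:1234::/64")

# Point-to-point link between the VM and the container (veth1234 <-> veth0).
container_net = ipaddress.ip_network("fc00:1234::5:0/112")
vm_side = ipaddress.ip_address("fc00:1234::5:ffff")    # veth1234 on the VM
container = ipaddress.ip_address("fc00:1234::5:1")     # veth0 in the container
host_only_peer = ipaddress.ip_address("fc00:1234::ffff:ffff:ffff:ffff")

# The container subnet is carved out of the proxied /64 ...
assert container_net.subnet_of(proxied)
# ... and both ends of the p2p link sit inside it,
assert vm_side in container_net and container in container_net
# while the VirtualBox host address is in the /64 but outside the /112.
assert host_only_peer in proxied and host_only_peer not in container_net
print("address plan is consistent")
```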
/etc/ndppd.conf:
route-ttl 30000
proxy enp0s8 {
   router yes
   timeout 500
   ttl 30000
   rule fc00:1234::1/64 {
      auto
   }
}
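As I understand the `auto` rule, ndppd consults the kernel routing table and answers a neighbor solicitation on enp0s8 only when the solicited address matches the rule prefix and is routed out through a different interface. A toy model of that decision (my own sketch, not ndppd's actual code; the routing entries mirror the configuration above):

```python
import ipaddress

RULE_PREFIX = ipaddress.ip_network("fc00:1234::/64")   # rule in ndppd.conf
PROXY_IFACE = "enp0s8"                                 # proxy in ndppd.conf

# Simplified routing table: (prefix, outgoing interface); longest prefix wins.
ROUTES = [
    (ipaddress.ip_network("fc00:1234::/127"), "enp0s3_dummy" if False else "enp0s8"),
    (ipaddress.ip_network("fc00:1234::5:0/112"), "veth1234"),
]

def should_answer_ns(target: str) -> bool:
    """Should an NS for `target` seen on PROXY_IFACE be answered? (model of rule `auto`)"""
    addr = ipaddress.ip_address(target)
    if addr not in RULE_PREFIX:
        return False
    matches = [(net, dev) for net, dev in ROUTES if addr in net]
    if not matches:
        return False
    _, dev = max(matches, key=lambda m: m[0].prefixlen)  # longest prefix match
    return dev != PROXY_IFACE      # only proxy for addresses behind other interfaces

print(should_answer_ns("fc00:1234::5:1"))   # True  - routed via veth1234
print(should_answer_ns("fc00:1234::1"))     # False - on enp0s8 itself
print(should_answer_ns("fc00:9999::1"))     # False - outside the rule prefix
```

This is exactly the check that 0.2.4 appears to never get to, because the solicitations are not picked up in the first place.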
The LXD container is configured as:
- name test
- Image source images:ubuntu/xenial/amd64
- default nic device removed
- new nic device: lxc config device add test veth0 nic nictype=p2p name=veth0 host_name=veth1234
/etc/network/interfaces:
auto lo
iface lo inet loopback
auto veth0
iface veth0 inet6 static
address fc00:1234::5:1/112
gateway fc00:1234::5:ffff
dns-nameserver 2001:4860:4860::8888 2001:4860:4860::8844
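For completeness: the `gateway` line works because the VM end of the p2p link is on-link for the container's /112, while everything else (for example the Google DNS servers above) is off-link and reached through the VM. A quick check with Python's standard ipaddress module:

```python
import ipaddress

veth0 = ipaddress.ip_interface("fc00:1234::5:1/112")   # container address
gateway = ipaddress.ip_address("fc00:1234::5:ffff")    # VM end of the link
dns = ipaddress.ip_address("2001:4860:4860::8888")

assert gateway in veth0.network   # gateway is on-link, so the route is accepted
assert dns not in veth0.network   # DNS is off-link: traffic must go via the VM
print("container routing assumptions hold")
```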
With Xenial's ndppd 0.2.4 it isn't possible to ping the LXD container from the host (only from the VM or the LXD container itself):
$ ping6 fc00:1234::5:1
PING fc00:1234::5:1(fc00:1234::5:1) 56 data bytes
From fc00:1234::ffff:ffff:ffff:ffff icmp_seq=1 Destination unreachable: Address unreachable
Installing the ndppd 0.2.5 package from the Yakkety repo on the VM fixes it:
$ ping6 fc00:1234::5:1
PING fc00:1234::5:1(fc00:1234::5:1) 56 data bytes
64 bytes from fc00:1234::5:1: icmp_seq=1 ttl=63 time=0.815 ms
Observing the traffic in Wireshark shows that the VM's host-only NIC receives the neighbor solicitations, but nothing responds - ndppd 0.2.4 doesn't pick them up.
The regression risk should be zero, as the 16.04 package appears to be unusable in its current form.