failed to get ip from ironic_dnsmasq container

Bug #1687819 reported by MarginHu
This bug affects 1 person

Affects: kolla | Status: Expired | Importance: Undecided | Assigned to: Unassigned

Bug Description

Hi Guys,
I found that an Ironic node failed to get an IP address from the ironic_dnsmasq container when PXE booting. However, if I run the dnsmasq service directly on the host instead of in a Docker container, the issue does not appear.

My host "kode4" is a KVM VM; its details are as follows:

[root@kode4 ironic-dnsmasq]# docker ps | grep dnsmasq
279b394f3bab 192.168.102.16:5000/bgi/centos-binary-dnsmasq:4.0.0.3 "kolla_start" 14 hours ago Up 6 seconds ironic_dnsmasq
[root@kode4 ironic-dnsmasq]# cat dnsmasq.conf
port=0
interface=provision
dhcp-range=192.168.103.110,192.168.103.200,12h
dhcp-sequential-ip
dhcp-match=ipxe,175

# Client is running iPXE; move to next stage of chainloading
dhcp-boot=tag:ipxe,http://192.168.103.254:8088/inspector.ipxe

I captured traffic on host "kode4" and found no response packets from the DHCP server at all.
[root@kode4 ~]# tcpdump -i any port 67 or port 68 -enn
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL (Linux cooked), capture size 65535 bytes
07:36:09.808533 B 52:54:00:19:3e:49 ethertype IPv4 (0x0800), length 440: 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from 52:54:00:19:3e:49, length 396
07:36:09.808804 B 52:54:00:19:3e:49 ethertype IPv4 (0x0800), length 440: 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from 52:54:00:19:3e:49, length 396
07:36:13.716886 B 52:54:00:19:3e:49 ethertype IPv4 (0x0800), length 440: 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from 52:54:00:19:3e:49, length 396
07:36:13.716886 B 52:54:00:19:3e:49 ethertype IPv4 (0x0800), length 440: 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from 52:54:00:19:3e:49, length 396
07:36:21.626177 B 52:54:00:19:3e:49 ethertype IPv4 (0x0800), length 440: 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from 52:54:00:19:3e:49, length 396
07:36:21.626177 B 52:54:00:19:3e:49 ethertype IPv4 (0x0800), length 440: 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from 52:54:00:19:3e:49, length 396
07:36:37.444903 B 52:54:00:19:3e:49 ethertype IPv4 (0x0800), length 440: 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from 52:54:00:19:3e:49, length 396
07:36:37.445257 B 52:54:00:19:3e:49 ethertype IPv4 (0x0800), length 440: 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from 52:54:00:19:3e:49, length 396

But the container's log shows that it did send out DHCP response packets.

[root@kode4 ironic-dnsmasq]# docker logs -f ironic_dnsmasq
INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
INFO:__main__:Validating config file
INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
INFO:__main__:Copying service configuration files
INFO:__main__:Deleting file /etc/dnsmasq.conf
INFO:__main__:Coping file from /var/lib/kolla/config_files/dnsmasq.conf to /etc/dnsmasq.conf
INFO:__main__:Setting file /etc/dnsmasq.conf owner to root:root
INFO:__main__:Setting file /etc/dnsmasq.conf permission to 0600
INFO:__main__:Writing out command to execute
Running command: 'dnsmasq --no-daemon --conf-file=/etc/dnsmasq.conf'
dnsmasq: started, version 2.66 DNS disabled
dnsmasq: compile time options: IPv6 GNU-getopt DBus no-i18n IDN DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth
dnsmasq-dhcp: DHCP, IP range 192.168.103.110 -- 192.168.103.200, lease time 12h
dnsmasq-dhcp: DHCPDISCOVER(provision) 52:54:00:19:3e:49
dnsmasq-dhcp: DHCPOFFER(provision) 192.168.103.110 52:54:00:19:3e:49
dnsmasq-dhcp: DHCPDISCOVER(provision) 52:54:00:19:3e:49
dnsmasq-dhcp: DHCPOFFER(provision) 192.168.103.110 52:54:00:19:3e:49
dnsmasq-dhcp: DHCPDISCOVER(provision) 52:54:00:19:3e:49
dnsmasq-dhcp: DHCPOFFER(provision) 192.168.103.110 52:54:00:19:3e:49
dnsmasq-dhcp: DHCPDISCOVER(provision) 52:54:00:19:3e:49
dnsmasq-dhcp: DHCPOFFER(provision) 192.168.103.110 52:54:00:19:3e:49
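The host-side capture and the container log contradict each other: dnsmasq believes it sent DHCPOFFERs, yet none appear on the wire. One thing worth checking is whether the offers leave the "provision" interface with bad UDP checksums, which some virtual NICs silently discard; this is a hedged diagnostic sketch, not a confirmed cause:

```shell
# Capture server->client DHCP traffic on the "provision" interface;
# with -vv, tcpdump prints "bad udp cksum" when a checksum is wrong.
tcpdump -i provision -vv -enn 'udp and port 68'

# If offers do show up here with bad checksums, disabling TX checksum
# offload on the interface is a common workaround on virtio/veth setups.
ethtool -K provision tx off
```

Both commands need root and only make sense while a PXE boot is in progress.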

I'm sure ironic_dnsmasq is listening on port 67.
[root@kode4 ~]# netstat -anlp | grep :67
udp 0 0 0.0.0.0:67 0.0.0.0:* 11260/dnsmasq
[root@kode4 ~]# docker inspect ironic_dnsmasq | grep -i pid
            "Pid": 11248,
            "PidMode": "",
            "PidsLimit": 0,
[root@kode4 ~]# ps -elf |grep 11248
4 S root 11248 11233 0 80 0 - 48 sigtim 07:34 pts/10 00:00:00 /usr/local/bin/dumb-init /bin/bash /usr/local/bin/kolla_start
4 S root 11260 11248 0 80 0 - 3880 poll_s 07:34 ? 00:00:00 dnsmasq --no-daemon --conf-file=/etc/dnsmasq.conf
0 S root 11860 9641 0 80 0 - 28163 pipe_w 07:50 pts/5 00:00:00 grep --color=auto 11248

I suspected the firewall, so I flushed all iptables rules, but the issue still persists.

[root@kode4 ~]# iptables -nvL
Chain INPUT (policy ACCEPT 133K packets, 19M bytes)
 pkts bytes target prot opt in out source destination

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target prot opt in out source destination

Chain OUTPUT (policy ACCEPT 129K packets, 17M bytes)
 pkts bytes target prot opt in out source destination

Chain DOCKER (0 references)
 pkts bytes target prot opt in out source destination

Chain DOCKER-ISOLATION (0 references)
 pkts bytes target prot opt in out source destination

Chain ironic-inspector (0 references)
 pkts bytes target prot opt in out source destination

Chain neutron-filter-top (0 references)
 pkts bytes target prot opt in out source destination

Chain neutron-openvswi-FORWARD (0 references)
 pkts bytes target prot opt in out source destination

Chain neutron-openvswi-INPUT (0 references)
 pkts bytes target prot opt in out source destination

Chain neutron-openvswi-OUTPUT (0 references)
 pkts bytes target prot opt in out source destination

Chain neutron-openvswi-i2940a7ab-6 (0 references)
 pkts bytes target prot opt in out source destination

Chain neutron-openvswi-local (0 references)
 pkts bytes target prot opt in out source destination

Chain neutron-openvswi-o2940a7ab-6 (0 references)
 pkts bytes target prot opt in out source destination

Chain neutron-openvswi-s2940a7ab-6 (0 references)
 pkts bytes target prot opt in out source destination

Chain neutron-openvswi-sg-chain (0 references)
 pkts bytes target prot opt in out source destination

Chain neutron-openvswi-sg-fallback (0 references)
 pkts bytes target prot opt in out source destination
[root@kode4 ~]#
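Flushing rules only empties the filter table shown above; the nat, mangle, and raw tables are untouched, and Docker re-creates its chains over time. To rule out iptables completely, the other tables and their packet counters can be checked as well (a sketch):

```shell
# DHCP traffic can also be affected by rules outside the filter table.
iptables -t nat -nvL
iptables -t mangle -nvL
iptables -t raw -nvL

# Zero all counters, retry the PXE boot, then list again: counters that
# stay at 0 confirm iptables never matched (or dropped) the packets.
for t in filter nat mangle raw; do iptables -t "$t" -Z; done
```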

I also suspected the "provision" bridge, but found no packets dropped by the OVS flow rules, because "n_packets" in the first (drop) flow rule has not increased.

[root@kode4 ironic-dnsmasq]# docker exec -it openvswitch_db bash
(openvswitch-db)[root@kode4 /]# ovs-vsctl show
38f5750e-7852-47f3-a6c6-46d04a3472dc
    Bridge br-ex
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port br-ex
            Interface br-ex
                type: internal
        Port "eth5"
            Interface "eth5"
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
    Bridge provision
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port phy-provision
            Interface phy-provision
                type: patch
                options: {peer=int-provision}
        Port "eth0"
            Interface "eth0"
        Port provision
            Interface provision
                type: internal
    Bridge br-tun
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
        Port "vxlan-c0a86a17"
            Interface "vxlan-c0a86a17"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="192.168.106.24", out_key=flow, remote_ip="192.168.106.23"}
    Bridge br-int
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "qvo2940a7ab-60"
            tag: 1
            Interface "qvo2940a7ab-60"
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port int-provision
            Interface int-provision
                type: patch
                options: {peer=phy-provision}
        Port br-int
            Interface br-int
                type: internal
(openvswitch-db)[root@kode4 /]#

(openvswitch-db)[root@kode4 /]# ovs-ofctl dump-flows provision
NXST_FLOW reply (xid=0x4):
 cookie=0xb8b4946d3a9bed3d, duration=52585.251s, table=0, n_packets=15, n_bytes=1638, idle_age=42849, priority=2,in_port=2 actions=drop
 cookie=0xb8b4946d3a9bed3d, duration=52585.508s, table=0, n_packets=642283, n_bytes=44953445, idle_age=0, priority=0 actions=NORMAL
(openvswitch-db)[root@kode4 /]#
(openvswitch-db)[root@kode4 /]# ovs-ofctl show provision
OFPT_FEATURES_REPLY (xid=0x2): dpid:000002c480857242
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan mod_dl_src mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src mod_tp_dst
 1(eth0): addr:52:54:00:59:be:d9
     config: 0
     state: 0
     speed: 0 Mbps now, 0 Mbps max
 2(phy-provision): addr:46:73:57:07:bb:7c
     config: 0
     state: 0
     speed: 0 Mbps now, 0 Mbps max
 LOCAL(provision): addr:02:c4:80:85:72:42
     config: 0
     state: 0
     speed: 0 Mbps now, 0 Mbps max
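
Beyond watching n_packets, Open vSwitch can trace a synthetic packet through a bridge and report exactly which flow rule handles it. Below is a sketch using `ovs-appctl ofproto/trace`; the MAC and IP addresses are taken from the outputs above, and the DHCPOFFER is assumed to enter the bridge on the LOCAL ("provision") internal port where dnsmasq sends it:

```shell
# Trace a DHCP offer from the provision interface toward the PXE client.
# The trace output ends with the final action list (output ports or drop).
docker exec openvswitch_db ovs-appctl ofproto/trace provision \
  'in_port=LOCAL,dl_src=02:c4:80:85:72:42,dl_dst=52:54:00:19:3e:49,udp,nw_src=192.168.103.254,nw_dst=192.168.103.110,tp_src=67,tp_dst=68'
```

If the trace ends in a drop, the flow table is discarding the reply; if it outputs to port 1 (eth0), the packet should be reaching the wire.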

Revision history for this message
Michal Nasiadka (mnasiadka) wrote :

Is this still a problem? If yes - please reproduce with kolla queens/rocky and provide logs.

Changed in kolla:
status: New → Incomplete
Revision history for this message
Launchpad Janitor (janitor) wrote :

[Expired for kolla because there has been no activity for 60 days.]

Changed in kolla:
status: Incomplete → Expired