Excessive dnsmasq-dhcp log entries

Bug #1456792 reported by Matt Thompson
This bug affects 4 people
Affects: OpenStack-Ansible
Status: Invalid
Importance: Medium
Assigned to: Unassigned
Milestone: (none)

Bug Description

I'm seeing a heap of the following in /var/log/syslog on my controller nodes:

May 19 15:56:22 jenk-heat-247-node1 dnsmasq-dhcp[28643]: not giving name jenk-heat-247-node1_keystone_container-23342668 to the DHCP lease of 10.0.3.153 because the name exists in /etc/hosts with address 172.29.237.189
May 19 15:56:22 jenk-heat-247-node1 dnsmasq-dhcp[28643]: not giving name jenk-heat-247-node1_horizon_container-03ea2807 to the DHCP lease of 10.0.3.192 because the name exists in /etc/hosts with address 172.29.239.184
May 19 15:56:22 jenk-heat-247-node1 dnsmasq-dhcp[28643]: not giving name jenk-heat-247-node1_cinder_api_container-f2301af2 to the DHCP lease of 10.0.3.111 because the name exists in /etc/hosts with address 172.29.239.216
May 19 15:56:22 jenk-heat-247-node1 dnsmasq-dhcp[28643]: not giving name jenk-heat-247-node1_galera_container-874a3e04 to the DHCP lease of 10.0.3.85 because the name exists in /etc/hosts with address 172.29.237.245
May 19 15:56:22 jenk-heat-247-node1 dnsmasq-dhcp[28643]: not giving name jenk-heat-247-node1_nova_conductor_container-832856c3 to the DHCP lease of 10.0.3.145 because the name exists in /etc/hosts with address 172.29.238.175
May 19 15:56:22 jenk-heat-247-node1 dnsmasq-dhcp[28643]: not giving name jenk-heat-247-node1_nova_scheduler_container-70ba2ef0 to the DHCP lease of 10.0.3.200 because the name exists in /etc/hosts with address 172.29.238.236

We could pass --no-hosts to LXC's dnsmasq, which would prevent dnsmasq from reading /etc/hosts.
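
For illustration, a minimal sketch of the dnsmasq invocation with the flag added. This is based on the stock lxc-net defaults; the exact arguments, addresses, and paths used by lxc-system-manage may differ:

dnsmasq --no-hosts --strict-order --bind-interfaces \
    --pid-file=/run/lxc/dnsmasq.pid --listen-address 10.0.3.1 \
    --dhcp-range 10.0.3.2,10.0.3.254 --dhcp-lease-max=253 \
    --dhcp-no-override --except-interface=lo --interface=lxcbr0 \
    --dhcp-leasefile=/var/lib/misc/dnsmasq.lxcbr0.leases \
    --dhcp-authoritative

With --no-hosts, dnsmasq stops consulting /etc/hosts when assigning DHCP names, so the conflicting 172.29.x.x entries no longer block the 10.0.3.x leases.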

Revision history for this message
Serge van Ginderachter (svg) wrote :

On a side note, why do we need this network? The br-mgmt network should suffice, IMHO.

no longer affects: openstack-ansible/trunk
Changed in openstack-ansible:
status: New → Invalid
status: Invalid → Triaged
importance: Undecided → Medium
Revision history for this message
Kevin Carter (kevin-carter) wrote :

The fix needs to be added to the lxc-system-manage tool, which is used to bring the dnsmasq instance online for use within LXC.

Master:
https://github.com/stackforge/os-ansible-deployment/blob/master/playbooks/roles/lxc_hosts/templates/lxc-system-manage.j2#L141-L158

The same thing can be done in Juno too:
https://github.com/stackforge/os-ansible-deployment/blob/juno/rpc_deployment/roles/lxc_common/files/lxc-system-manage#L134-L150

@Serge
As for the lxcbr0 interface, this is the device that all of the traffic leaving the LXC containers goes through. In master / Kilo this device can be overridden or changed using the `lxc_net_bridge` variable. There are several options available for setting up and defining the LXC network interface, and they will need to be configured for a successful deployment when using physical devices. These options can all be seen here: https://github.com/stackforge/os-ansible-deployment/blob/master/playbooks/roles/lxc_hosts/defaults/main.yml#L16-L31
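
For example, a deployer could override the bridge name through the user configuration; the file path below is assumed, and the bridge name br-lxc is purely hypothetical:

# append an override to the deployer's user variables (path assumed)
echo 'lxc_net_bridge: br-lxc' >> /etc/openstack_deploy/user_variables.yml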

Changed in openstack-ansible:
status: Triaged → Invalid
no longer affects: openstack-ansible/juno
Revision history for this message
Kamil Szczygiel (kamil-szczygiel) wrote :

I've encountered this issue - dnsmasq was not handing out addresses to the containers, since entries for those containers already existed in /etc/hosts. This led to a lack of Internet connectivity inside the containers. After adding --no-hosts to lxc-system-manage as Kevin suggested, the issue is no longer present.
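
One quick way to verify the change took effect (container name taken from the log above; the interface name inside the container is assumed):

# confirm the LXC dnsmasq instance is now running with --no-hosts
ps -eo args | grep '[d]nsmasq' | grep -e '--no-hosts'

# check that a container holds a 10.0.3.x lease and can reach the outside
lxc-attach -n jenk-heat-247-node1_keystone_container-23342668 -- ip addr show eth0
lxc-attach -n jenk-heat-247-node1_keystone_container-23342668 -- ping -c 3 8.8.8.8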
