avahi-daemon reports "Received response with invalid source port # on interface 'eth0.0'" all the time

Bug #447442 reported by avlas
This bug affects 13 people
Affects            Status        Importance  Assigned to  Milestone
libvirt            Fix Released  High
libvirt (Ubuntu)   Incomplete    Medium      Unassigned

Bug Description

In both my syslog and daemon.log, these messages are repeated again and again:

Received response with invalid source port 21604 on interface 'eth0.0'
Invalid response packet.
Invalid legacy unicast query packet.

The port number changes, but the messages never stop.

Description: Ubuntu 9.04
Release: 9.04

avahi-daemon:

  Installed: 0.6.23-4ubuntu4
  Candidate: 0.6.23-4ubuntu4
  version table:
 *** 0.6.23-4ubuntu4 0
        500 http://us.archive.ubuntu.com jaunty/main Packages
        100 /var/lib/dpkg/status

ProblemType: Bug
Architecture: amd64
DistroRelease: Ubuntu 9.04
NonfreeKernelModules: nvidia
Package: avahi-daemon 0.6.23-4ubuntu4
ProcEnviron:
 LANGUAGE=
 LANG=ca_ES.UTF-8
 SHELL=/bin/bash
SourcePackage: avahi
Uname: Linux 2.6.28-15-generic x86_64

Revision history for this message
avlas (avlas) wrote :
Franz (franz-scherf)
description: updated
Revision history for this message
Adam Gibbins (adamgibbins) wrote :

I'm also seeing a similar problem:
Nov 25 16:00:29 box1.example.com avahi-daemon[3310]: Received response with invalid source port 35558 on interface 'br0.0'
Nov 25 16:00:29 box1.example.com avahi-daemon[3310]: Invalid legacy unicast query packet.
Nov 25 16:00:29 box1.example.com avahi-daemon[3310]: Invalid legacy unicast query packet.
Nov 25 16:00:29 box2.example.com avahi-daemon[4616]: Invalid legacy unicast query packet.
Nov 25 16:00:29 box2.example.com avahi-daemon[4616]: Received response with invalid source port 35558 on interface 'eth0.0'
Nov 25 16:00:29 box2.example.com avahi-daemon[4616]: Invalid legacy unicast query packet.
Nov 25 16:00:29 box2.example.com avahi-daemon[4616]: Invalid legacy unicast query packet.
Nov 25 16:00:30 box1.example.com avahi-daemon[3310]: Received response with invalid source port 35558 on interface 'br0.0'
Nov 25 16:00:30 box1.example.com avahi-daemon[3310]: Received response with invalid source port 35558 on interface 'br0.0'
Nov 25 16:00:30 box2.example.com avahi-daemon[4616]: Received response with invalid source port 35558 on interface 'eth0.0'
Nov 25 16:00:30 box2.example.com avahi-daemon[4616]: Received response with invalid source port 35558 on interface 'eth0.0'
Nov 25 16:00:31 box1.example.com avahi-daemon[3310]: Received response with invalid source port 35558 on interface 'br0.0'
Nov 25 16:00:31 box2.example.com avahi-daemon[4616]: Received response with invalid source port 35558 on interface 'eth0.0'
Nov 25 16:00:32 box1.example.com avahi-daemon[3310]: Received response with invalid source port 35558 on interface 'br0.0'
Nov 25 16:00:32 box2.example.com avahi-daemon[4616]: Received response with invalid source port 35558 on interface 'eth0.0'
Nov 25 16:00:33 box1.example.com avahi-daemon[3310]: Received response with invalid source port 35558 on interface 'br0.0'
Nov 25 16:00:33 box2.example.com avahi-daemon[4616]: Received response with invalid source port 35558 on interface 'eth0.0'

box1:
Package: avahi-daemon
State: installed
Automatically installed: yes
Version: 0.6.23-4ubuntu4

box2:
Package: avahi-daemon
State: installed
Automatically installed: yes
Version: 0.6.23-2ubuntu2.1

Please let me know if I can provide any additional information that may be of use.

Revision history for this message
starslights (starslights) wrote :

Hello,

I run Kubuntu Karmic 9.10 on x86_64 and I have the same problem; sometimes I get this warning in my log:

 avahi-daemon[1095] Received response from host xxxxxxxxxxxx with invalid source port 42076 on interface 'eth1.0'
  kernel [ 7746.017432] [UFW BLOCK] IN=eth1 OUT= MAC=xxxxxxxxxxxxxxxxxxx SRC=xxxxxxxxxx DST=xxxxxxxxxxx LEN=40 TOS=0x00 PREC=0x00 TTL=94 ID=31337 DF PROTO=TCP SPT=58858 DPT=9090 WINDOW=65535 RES=0x00 ACK FIN URGP=0
  kernel [ 7748.860188] [UFW BLOCK] IN=eth1 OUT= MAC=xxxxxxxxxxxxxxxxxxxxxxxxxSRC=xxxxxxxxxxx DST=xxxxxxxxxxxx LEN=40 TOS=0x00 PREC=0x00 TTL=94 ID=31348 DF PROTO=TCP SPT=59387 DPT=9090 WINDOW=65535 RES=0x00 ACK FIN URGP=0
 moon kernel [ 7754.328035] [UFW BLOCK] IN=eth1 OUT= MAC=0xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx SRC=xxxxxxxxxxxxxxx DST=xxxxxxxxxxxxxxxxxx LEN=40 TOS=0x00 PREC=0x00 TTL=94 ID=31367 DF PROTO=TCP SPT=59387 DPT=9090 WINDOW=65535 RES=0x00 ACK FIN URGP=0
  avahi-daemon[1095] Invalid legacy unicast query packet.
  avahi-daemon[1095] Invalid legacy unicast query packet.
  avahi-daemon[1095] Received response from host xxxxxxxxxxxxxxx with invalid source port 42076 on interface 'eth1.0'

I am not sure, but it seems the bug appears when I boot and run a VM (VirtualBox OSE); this is the message I get right when I start the VM:

2010-02-24 11:40:09 avahi-daemon[1095] Invalid legacy unicast query packet.
2010-02-24 11:40:09 avahi-daemon[1095] Received response from host xxxxxxxxxxxx with invalid source port 34534 on interface 'eth1.0'
2010-02-24 11:40:09 avahi-daemon[1095] Invalid legacy unicast query packet.

Linux moon 2.6.31-20-generic #57-Ubuntu SMP Mon Feb 8 09:02:26 UTC 2010 x86_64 GNU/Linux

Revision history for this message
Ante Karamatić (ivoks) wrote :

Do you guys, by any chance, have multiple machines (including the one with this problem) with the same hostname? Because if you have two or more machines with the same hostname, avahi will go crazy.

Changed in avahi (Ubuntu):
status: New → Incomplete
importance: Undecided → Medium
Revision history for this message
avlas (avlas) wrote :

No, this is not my case. Let me know if you want me to try something :)

Revision history for this message
Ante Karamatić (ivoks) wrote :

Salva, just to make sure, could you configure /etc/avahi/avahi-daemon.conf to include:

host-name=randomhostname

and then restart avahi-daemon:

service avahi-daemon-restart (lucid)
/etc/init.d/avahi-daemon restart (pre lucid)
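
For reference, the suggested change amounts to setting the host-name key in the [server] section of /etc/avahi/avahi-daemon.conf and then restarting the daemon. A minimal sketch ("randomhostname" is only a placeholder):

[server]
host-name=randomhostname

sudo service avahi-daemon restart        # Lucid and later
sudo /etc/init.d/avahi-daemon restart    # pre-Lucid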

Revision history for this message
avlas (avlas) wrote :

I uncommented the host-name line in /etc/avahi/avahi-daemon.conf, section [server], and gave it a random name as you asked (different from the real hostname).

Then I restarted the avahi-daemon with sudo service avahi-daemon restart (I use Lucid; I tried service avahi-daemon-restart first, but it complained).

The problem persists afterwards (there are two IPs, #IP1 and #IP2; neither is the one assigned to my computer):

Aug 25 09:54:09 sis avahi-daemon[2731]: Received response from host #IP with invalid source port 40420 on interface 'eth0.0'
Aug 25 09:54:18 sis wpa_supplicant[1022]: WPS-AP-AVAILABLE
Aug 25 09:54:26 sis avahi-daemon[2731]: Received response from host #IP with invalid source port 9918 on interface 'eth0.0'
Aug 25 09:54:39 sis avahi-daemon[2731]: Invalid legacy unicast query packet.
Aug 25 09:54:39 sis avahi-daemon[2731]: Received response from host #IP with invalid source port 24240 on interface 'eth0.0'
Aug 25 09:54:39 sis avahi-daemon[2731]: Invalid legacy unicast query packet.
Aug 25 09:54:40 sis avahi-daemon[2731]: Invalid legacy unicast query packet.
Aug 25 09:54:40 sis avahi-daemon[2731]: Received response from host #IP with invalid source port 24240 on interface 'eth0.0'
Aug 25 09:54:40 sis avahi-daemon[2731]: Invalid legacy unicast query packet.
Aug 25 09:54:41 sis avahi-daemon[2731]: Invalid legacy unicast query packet.
Aug 25 09:54:41 sis avahi-daemon[2731]: Received response from host #IP with invalid source port 24240 on interface 'eth0.0'
Aug 25 09:55:13 sis avahi-daemon[2731]: last message repeated 7 times
Aug 25 09:55:13 sis avahi-daemon[2731]: Received response from host #IP with invalid source port 40420 on interface 'eth0.0'
Aug 25 09:55:18 sis wpa_supplicant[1022]: WPS-AP-AVAILABLE
Aug 25 09:55:23 sis avahi-daemon[2731]: Invalid legacy unicast query packet.
Aug 25 09:55:27 sis avahi-daemon[2731]: last message repeated 5 times
Aug 25 09:55:27 sis avahi-daemon[2731]: Received response from host #IP with invalid source port 9918 on interface 'eth0.0'
Aug 25 09:55:28 sis avahi-daemon[2731]: Received response from host #IP2 with invalid source port 46459 on interface 'eth0.0'
Aug 25 09:55:28 sis avahi-daemon[2731]: Received response from host #IP2 with invalid source port 45543 on interface 'eth0.0'

and continues all the time...

Revision history for this message
Ante Karamatić (ivoks) wrote : Re: [Bug 447442] Re: avahi-daemon reports "Received response with invalid source port # on interface 'eth0.0'" all the time

On 25.08.2010 16:01, Salva wrote:

> The problem persists afterwards (there are two IPs, #IP1 and #IP2;
> neither is the one assigned to my computer):

Do you know what OS is on IP1 and IP2?

Revision history for this message
avlas (avlas) wrote :

I'm sorry, I have no idea. What I can tell you is that I use my laptop at the university; what exactly those machines are, I can't say.

Revision history for this message
starslights (starslights) wrote :

Hi,

The bug is still present on Lucid 10.04.1 x86_64, but it only happens with the VM running.

I tried editing avahi-daemon.conf with "randomhost" and it didn't fix the problem...

With the actual name of the computer (in this case "ubuntu"), it seems to work and doesn't complain.

Here is the log after reloading the configuration:

2010-08-25 19:12:14 moonlights avahi-daemon[17333] Got SIGTERM, quitting.
2010-08-25 19:12:14 moonlights avahi-daemon[17333] Leaving mDNS multicast group on interface eth1.IPv4 with address xxxxxxxxxxxxxxx
2010-08-25 19:12:15 moonlights init avahi-daemon main process (17333) terminated with status 255
2010-08-25 19:12:15 moonlights avahi-daemon[17434] Found user 'avahi' (UID 105) and group 'avahi' (GID 111).
2010-08-25 19:12:15 moonlights avahi-daemon[17434] Successfully dropped root privileges.
2010-08-25 19:12:15 moonlights avahi-daemon[17434] avahi-daemon 0.6.25 starting up.
2010-08-25 19:12:15 moonlights avahi-daemon[17434] Successfully called chroot().
2010-08-25 19:12:15 moonlights avahi-daemon[17434] Successfully dropped remaining capabilities.
2010-08-25 19:12:15 moonlights avahi-daemon[17434] No service file found in /etc/avahi/services.
2010-08-25 19:12:15 moonlights avahi-daemon[17434] Joining mDNS multicast group on interface eth1.IPv4 with address xxxxxxxxxxxxxxxxx.
2010-08-25 19:12:15 moonlights avahi-daemon[17434] New relevant interface eth1.IPv4 for mDNS.
2010-08-25 19:12:15 moonlights avahi-daemon[17434] Network interface enumeration completed.
2010-08-25 19:12:15 moonlights avahi-daemon[17434] Registering new address record for xxxxxxxxxxxxxxxxxx on eth1.*.
2010-08-25 19:12:15 moonlights avahi-daemon[17434] Registering new address record for xxxxxxxxxxxx on eth1.IPv4.
2010-08-25 19:12:15 moonlights avahi-daemon[17434] Registering HINFO record with values 'X86_64'/'LINUX'.
2010-08-25 19:12:16 moonlights avahi-daemon[17434] Server startup complete. Host name is ubuntu.local. Local service cookie is xxxxxxxxxxxxxxx

Anyway, this should be fixed so that avahi can recognize the domain used by the VM, because it creates some reachability trouble with the VM. By the way, my system has both eth0 and eth1 by default with my Ethernet card.

If I get new warnings, I'll report back.

Best Regards

Revision history for this message
starslights (starslights) wrote :

No, I still get the warning when I shut down the VM:

2010-08-25 20:16:38 moonlights avahi-daemon[17434] Received response from host 192.168.0.199 with invalid source port 49109 on interface 'eth1.0'

Revision history for this message
avlas (avlas) wrote :

This is still happening to me in Kubuntu Lucid 10.04.1 x86_64 (avahi deb version 0.6.25-1ubuntu6) as mentioned above. Did starslights upgrade to Lucid from Karmic or install it from scratch? I'm wondering if it could be happening because of some previous configuration.

I also tried using the real hostname of my computer, but that didn't work either.

Revision history for this message
starslights (starslights) wrote :

Salva,

My system comes from a fresh install of Kubuntu Lucid 10.04 LTS (with LVM); 10.04.1 comes just from the recent updates, not from an upgrade or a new install.

Revision history for this message
In , Arnaud (arnaud-redhat-bugs) wrote :

Description of problem:

Starting libvirtd with its default configuration creates a bridge interface virbr0 with IP 192.168.122.1. It also adds iptables rules to the nat table to allow VMs connected to this bridge to access the external network. These rules catch any incoming packet whose destination is not on the 192.168.122.0/24 subnet, even multicast packets.

As a result, the host sees mDNS datagrams from its guests coming from its own IP address with a (more or less) random source port, whereas avahi expects them to come from port 5353.

The obvious workaround (add a static nat rule like "iptables -t nat -A POSTROUTING -d 224.0.0.0/4 -j RETURN" to /etc/sysconfig/iptables) does not work, as libvirt inserts its rules before the existing ones.
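
A variant of that workaround which does survive libvirt's rule ordering is to insert the exemption at the head of the chain after libvirtd has started. A sketch based on the rules discussed later in this thread (it has to be re-added whenever libvirt reloads its rules):

iptables -t nat -I POSTROUTING 1 -d 224.0.0.0/4 -j RETURN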

Version-Release number of selected component (if applicable):

libvirt-0.8.2-1.fc13.x86_64

How reproducible:

Always.

Steps to Reproduce:
1. service libvirtd start
2. virsh start myguest
      (here myguest is a guest VM with avahi-daemon enabled)
3. getent hosts myguest.local

Actual results:

The last command times out. Here is the relevant line from /var/log/messages:

Nov 28 12:11:51 carrosse avahi-daemon[22764]: Received response from host 192.168.122.1 with invalid source port 1025 on interface 'virbr0.0'

Expected results:

% getent hosts myguest.local
192.168.122.157 myguest.local

Revision history for this message
Launchpad Janitor (janitor) wrote :

[Expired for avahi (Ubuntu) because there has been no activity for 60 days.]

Changed in avahi (Ubuntu):
status: Incomplete → Expired
Revision history for this message
In , Frank (frank-redhat-bugs) wrote :

If one is willing to use hand-written iptables rules to accomplish NAT,
is there a way to make libvirtd not run iptables at all?
I don't see documentation for the /etc/libvirt/libvirtd.conf file,
in case that is where such a switch would live.

Revision history for this message
In , Bug (bug-redhat-bugs) wrote :

This message is a reminder that Fedora 13 is nearing its end of life.
Approximately 30 (thirty) days from now Fedora will stop maintaining
and issuing updates for Fedora 13. It is Fedora's policy to close all
bug reports from releases that are no longer maintained. At that time
this bug will be closed as WONTFIX if it remains open with a Fedora
'version' of '13'.

Package Maintainer: If you wish for this bug to remain open because you
plan to fix it in a currently maintained version, simply change the 'version'
to a later Fedora version prior to Fedora 13's end of life.

Bug Reporter: Thank you for reporting this issue and we are sorry that
we may not be able to fix it before Fedora 13 is end of life. If you
would still like to see this bug fixed and are able to reproduce it
against a later version of Fedora please change the 'version' of this
bug to the applicable version. If you are unable to change the version,
please add a comment here and someone will do it for you.

Although we aim to fix as many bugs as possible during every release's
lifetime, sometimes those efforts are overtaken by events. Often a
more recent Fedora release includes newer upstream software that fixes
bugs or makes them obsolete.

The process we are following is described here:
http://fedoraproject.org/wiki/BugZappers/HouseKeeping

Revision history for this message
In , Cole (cole-redhat-bugs) wrote :

Sounds like this is something libvirt could handle itself? Moving to rawhide

Revision history for this message
In , Fedora (fedora-redhat-bugs) wrote :

This package has changed ownership in the Fedora Package Database. Reassigning to the new owner of this component.

Revision history for this message
In , Eric (eric-redhat-bugs) wrote :

Stefan Berger has been working on related code in the nwfilter components, with several improvements for 0.9.8. I have no idea if those improvements meet your need, or if there is still more to go to the point that you want; I suggest bringing this topic up on <email address hidden> (Stefan is more likely to see this on-list than by finding this BZ).

Revision history for this message
In , Stefan (stefan-redhat-bugs) wrote :

With nwfilter we may be able to prevent a VM from sending multicast traffic, but cannot influence what happens to it in case of NATing.

We could add the above mentioned rule

iptables -t nat -A POSTROUTING -d 224.0.0.0/4 -j RETURN

via utils/iptables.c to the list of 3 rules that libvirt automatically creates in the iptables nat POSTROUTING chain.

HOWEVER:
Typically the multicast traffic would have to go onto the wire to get as many responses as possible. In this case I don't see it going onto the wire at all: I see the packets on the VM's tap interface, but no longer on the physical interface. And while pinging from the VM into the network works and shows a counter increase on the respective masquerading rule, I don't see any counter increase for the above rule when it is first in the list of rules. Maybe some logic already keeps multicast traffic from entering the iptables nat table? Adding the rule there at least doesn't make sense considering what I am seeing.

Revision history for this message
In , Arnaud (arnaud-redhat-bugs) wrote :

For my use case, mDNS is link-local anyway, so if the host acts as a router, the guest won't get to talk with the outside world and it is perfectly fine. OTOH the host should definitely get to see mDNS announcements from the guest.

Note multicast routing is quite complicated anyway (needs dedicated protocols, on a LAN this probably means PIM), so I wouldn't expect it to work out of the box with a Linux host as a router.

Revision history for this message
In , Stefan (stefan-redhat-bugs) wrote :

Following my observations in comment 12 about the above iptables rule applied to the nat table, I don't think any changes to the host's nat table setup are required. However, I cannot confirm whether an application running on the host receives the packets; I can only say that I did not see them go out on the wire in this network setup.

Revision history for this message
In , Arnaud (arnaud-redhat-bugs) wrote :

As I wrote earlier, the issue is that the packets get rewritten even if they are destined to the host, so the host sees them coming from the wrong UDP port (mDNS relies on source port).

I guess my point is the iptables setup should be configurable, not hardcoded in libvirt.

Revision history for this message
In , Stefan (stefan-redhat-bugs) wrote :

I had done the following test:

iptables -t nat -I POSTROUTING 1 -d 224.0.0.0/4 -s 129.168.122.0/24 -j ACCEPT

This adds a rule that presumably intercepts all multicast traffic from VMs and simply accepts it.

A VM is started and receives the IP address 192.168.122.239, which is in the above source subnet 129.168.122.0/24.

This VM now starts pinging for example 8.8.8.8.

iptables -t nat -L POSTROUTING -n -v

Chain POSTROUTING (policy ACCEPT 64 packets, 4057 bytes)
 pkts bytes target prot opt in out source destination
    0 0 ACCEPT all -- * * 129.168.122.0/24 224.0.0.0/4
    0 0 MASQUERADE tcp -- * * 192.168.122.0/24 !192.168.122.0/24 masq ports: 1024-65535
    0 0 MASQUERADE udp -- * * 192.168.122.0/24 !192.168.122.0/24 masq ports: 1024-65535
    1 84 MASQUERADE all -- * * 192.168.122.0/24 !192.168.122.0/24

Above shows a counter of '1' indicating the masquerading of the ICMP traffic. Stopping and restarting the ping puts the counter to '2'. (due to connection tracking the rule is only needed once per ICMP stream)

I started a 'tcpdump -i vnet0 -n' to monitor traffic from that VM.

Inside the VM I did

service avahi-daemon restart

which then shows a flood of multicast messages on vnet0. One would think that that traffic now gets 'ACCEPT'ed due to the 1st rule above, but this is not the case. The output of

iptables -t nat -L POSTROUTING -n -v

still shows the same as above -- no counter change in the 1st rule. The kernel was 2.6.35.14-97.fc14.x86_64.
Changing the above first rule to '-j MASQUERADE' or '-j RETURN' doesn't change anything. The counter remains at '0'.

My conclusion is that adding a rule here (for this kernel version at least) for multicast traffic makes no sense since it doesn't get invoked. Also see comment 12.

Revision history for this message
nils (internationils) wrote :

https://forums.virtualbox.org/viewtopic.php?f=7&t=48040

Here is a possible explanation: VMs running avahi have the same IP as the host (NATed), and this is where the error seems to come from. The recommended solution is to run the VMs in bridged mode, so they get a real, unique address via DHCP or similar. HTH...
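
If the guest is a VirtualBox VM, switching its network adapter to bridged mode can also be done from the host's command line. A sketch ("myvm" and "eth0" are placeholders for the VM name and the host interface; the VM must be powered off):

VBoxManage modifyvm "myvm" --nic1 bridged --bridgeadapter1 eth0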

Revision history for this message
Kevin Stone (kevin-stone) wrote :

In my case:

- Running several VMs under libvirt.
- Host and VMs have avahi and mDNS setup.
- VMs are in NAT'd network (192.168.122.0/24)
- /var/log/syslog contains

avahi-daemon[17165]: Received response from host 192.168.122.1 with invalid source port 1049 on interface 'virbr0.0'

What's happening here is NAT translation. This is because the destination address for mDNS is 224.0.0.251 and the default iptables setup is:

# iptables -t nat -L POSTROUTING -n
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
MASQUERADE tcp -- 192.168.122.0/24 !192.168.122.0/24 masq ports: 1024-65535
MASQUERADE udp -- 192.168.122.0/24 !192.168.122.0/24 masq ports: 1024-65535
MASQUERADE all -- 192.168.122.0/24 !192.168.122.0/24

One solution is to not masquerade mDNS packets:

iptables -t nat -I POSTROUTING 1 -m udp -p udp --sport 5353 --dport 5353 -j ACCEPT

A more general fix for multicast and broadcast would probably be better.

Here's tcpdump output from the virbr0 interface showing a mDNS request / response:

13:19:55.618332 IP 192.168.122.114.5353 > 224.0.0.251.5353: 0 A (QM)? test.local. (32)
13:19:55.619205 IP 192.168.122.6.5353 > 224.0.0.251.5353: 0*- [0q] 1/0/0 (Cache flush) A 192.168.122.6 (42)
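
A more general variant of this workaround, in the spirit of the broader fix suggested above, would exempt all local multicast and broadcast traffic from masquerading. A sketch combining the rules discussed in this thread (192.168.122.0/24 is libvirt's default NAT subnet; the rules need to be re-added whenever libvirt reinstalls its own):

iptables -t nat -I POSTROUTING 1 -s 192.168.122.0/24 -d 224.0.0.0/4 -j RETURN
iptables -t nat -I POSTROUTING 1 -s 192.168.122.0/24 -d 255.255.255.255/32 -j RETURN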

Revision history for this message
Alex Vorona (alex-vorona) wrote :

>One solution is to not masquerade mDNS packets:
I had this fix working.

Revision history for this message
In , Cole (cole-redhat-bugs) wrote :

Moving this to the upstream tracker, since there's been some discussion and it's not really fedora specific (or a high impact bug)

Revision history for this message
Gionn (giovanni.toraldo) wrote :

The workaround works for me too; this was driving me crazy.

Changed in avahi (Ubuntu):
status: Expired → Confirmed
Revision history for this message
In , Brian (brian-redhat-bugs) wrote :

What is the upstream tracker so that we can follow that?

Revision history for this message
In , Dave (dave-redhat-bugs) wrote :

(In reply to comment #18)
> What is the upstream tracker so that we can follow that?

This BZ is the upstream tracker; Cole is just pointing out that he changed the Product field from Fedora to 'Virtualization Tools'.

Revision history for this message
In , Brian (brian-redhat-bugs) wrote :

(In reply to comment #19)
>
> This BZ is the upstream tracker; Cole is just pointing out that he changed
> the Product field from Fedora to 'Virtualization Tools'.

Ahhh. Got it.

Any action on this item? It seems like pretty low hanging fruit. I think I even had a libvirt that had a hack in place to do this, since I am seeing this issue resurfacing after upgrading to FC18.

Revision history for this message
In , Brian (brian-redhat-bugs) wrote :

(In reply to comment #16)
>
> iptables -t nat -I POSTROUTING 1 -d 224.0.0.0/4 -s 129.168.122.0/24 -j ACCEPT

Surely you wanted to do s/129/192/, didn't you?

> iptables -t nat -L POSTROUTING -n -v
>
> Chain POSTROUTING (policy ACCEPT 64 packets, 4057 bytes)
> pkts bytes target prot opt in out source
> destination
> 0 0 ACCEPT all -- * * 129.168.122.0/24 224.0.0.0/4
> 0 0 MASQUERADE tcp -- * * 192.168.122.0/24 !192.168.122.0/24 masq ports: 1024-65535
> 0 0 MASQUERADE udp -- * * 192.168.122.0/24 !192.168.122.0/24 masq ports: 1024-65535
> 1 84 MASQUERADE all -- * * 192.168.122.0/24 !192.168.122.0/24
>
...
> My conclusion is that adding a rule here (for this kernel version at least)
> for multicast traffic makes no sense since it doesn't get invoked. Also see
> comment 12.

It probably would if the address in the rule were correct. It works here.

It should be noted that you probably want to add 226.0.0.0/8 to your ACCEPT list.

Revision history for this message
In , Brian (brian-redhat-bugs) wrote :

Created attachment 661710
Exempt multicast networks (224.0.0.0/4) from NATting

How about this patch? It compiles here and has worked in previous versions of libvirt. I don't have occasion to test it right at this moment (too much other work with VMs still in progress), but I'm fairly confident that it should still work.

Revision history for this message
In , Eric (eric-redhat-bugs) wrote :

Can you please also post this patch upstream to <email address hidden>? You'll get a faster review, as not all list readers pay attention to BZ attachments.

Revision history for this message
Alan Jenkins (aj504) wrote :

Upstream bug is <https://bugzilla.redhat.com/show_bug.cgi?id=657918>. I wasn't able to use the launchpad bug link thingy because _this_ (downstream) bug is filed against the wrong package.

no longer affects: avahi
Revision history for this message
Ted (tedks) wrote :

This is actually a bug in libvirt's iptables rules.

affects: avahi (Ubuntu) → libvirt (Ubuntu)
Revision history for this message
Serge Hallyn (serge-hallyn) wrote :

The following patch

https://bugzilla.redhat.com/attachment.cgi?id=661710&action=diff

is from the bugzilla bug. The patch author was asked to send it to the list but I don't believe that has happened yet. Perhaps we should test and shepherd it.

Revision history for this message
Chris J Arges (arges) wrote :

Please test with the latest Wily release to see if this issue still persists. If so please attach any relevant information and mark the bug back to the New state.
Thanks!

Changed in libvirt (Ubuntu):
status: Confirmed → Incomplete
Revision history for this message
Rockhorse Park (2-bail-g) wrote :
Download full text (4.7 KiB)

I have this error in Ubuntu 14.04 32-bit.
New to both Ubuntu and launchpad, so bear with me.
Here's the log of it.
Oct 26 13:47:30 rockhorse-AOA110 avahi-daemon[471]: message repeated 2 times: [ server.c: Packet too short or invalid while reading response record. (Maybe a UTF-8 problem?)]
Oct 26 14:13:57 rockhorse-AOA110 avahi-daemon[471]: Invalid legacy unicast query packet.
Oct 26 14:13:58 rockhorse-AOA110 avahi-daemon[471]: message repeated 2 times: [ Invalid legacy unicast query packet.]
Oct 26 14:13:58 rockhorse-AOA110 avahi-daemon[471]: Received response from host ipA.DD.rE.SS with invalid source port 63992 on interface 'eth0.0'
Oct 26 14:13:59 rockhorse-AOA110 avahi-daemon[471]: Received response from host ipA.DD.rE.SS with invalid source port 63992 on interface 'eth0.0'
Oct 26 14:14:04 rockhorse-AOA110 avahi-daemon[471]: Invalid legacy unicast query packet.
Oct 26 14:14:04 rockhorse-AOA110 avahi-daemon[471]: message repeated 2 times: [ Invalid legacy unicast query packet.]
Oct 26 14:14:04 rockhorse-AOA110 avahi-daemon[471]: Received response from host ipA.DD.rE.SS with invalid source port 64934 on interface 'eth0.0'
Oct 26 14:17:09 rockhorse-AOA110 CRON[5542]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
Oct 26 14:16:11 rockhorse-AOA110 avahi-daemon[471]: message repeated 8 times: [ Received response from host ipA.DD.rE.SS with invalid source port 64934 on interface 'eth0.0']
Oct 26 14:17:30 rockhorse-AOA110 avahi-daemon[471]: server.c: Packet too short or invalid while reading response record. (Maybe a UTF-8 problem?)
Oct 26 14:17:39 rockhorse-AOA110 avahi-daemon[471]: Invalid legacy unicast query packet.
Oct 26 14:17:40 rockhorse-AOA110 avahi-daemon[471]: message repeated 2 times: [ Invalid legacy unicast query packet.]
Oct 26 14:17:40 rockhorse-AOA110 avahi-daemon[471]: Received response from host ipA.DD.rE.SS with invalid source port 62285 on interface 'eth0.0'
Oct 26 14:18:43 rockhorse-AOA110 avahi-daemon[471]: message repeated 7 times: [ Received response from host ipA.DD.rE.SS with invalid source port 62285 on interface 'eth0.0']
Oct 26 14:19:42 rockhorse-AOA110 avahi-daemon[471]: Invalid legacy unicast query packet.
Oct 26 14:19:43 rockhorse-AOA110 avahi-daemon[471]: Invalid legacy unicast query packet.
Oct 26 14:19:43 rockhorse-AOA110 avahi-daemon[471]: Received response from host ipA.DD.rE.SS with invalid source port 62285 on interface 'eth0.0'
Oct 26 14:19:43 rockhorse-AOA110 avahi-daemon[471]: Invalid legacy unicast query packet.
Oct 26 14:19:43 rockhorse-AOA110 avahi-daemon[471]: Received response from host ipA.DD.rE.SS with invalid source port 62285 on interface 'eth0.0'
Oct 26 14:21:50 rockhorse-AOA110 avahi-daemon[471]: message repeated 7 times: [ Received response from:

Read more...

Revision history for this message
Rockhorse Park (2-bail-g) wrote :

I found a possibly related solution (?) at http://lime-technology.com/forum/index.php?topic=29841.0 but am uncertain how to use that information as my EdgeMax PoE router firmware is up to date.

Revision history for this message
In , Cole (cole-redhat-bugs) wrote :

I think this was eventually fixed by:

commit 51e184e9821c3740ac9b52055860d683f27b0ab6
Author: Laszlo Ersek <email address hidden>
Date: Wed Sep 25 12:45:26 2013 +0200

    bridge driver: don't masquerade local subnet broadcast/multicast packets
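
Based on that commit summary, the fixed libvirt should insert RETURN rules ahead of the MASQUERADE rules for its NAT network, so that link-local multicast and broadcast traffic from guests is no longer rewritten. A sketch of what the resulting rules roughly look like (the exact prefixes are an assumption from the commit message, not copied from the patch):

iptables -t nat -A POSTROUTING -s 192.168.122.0/24 -d 224.0.0.0/24 -j RETURN
iptables -t nat -A POSTROUTING -s 192.168.122.0/24 -d 255.255.255.255/32 -j RETURN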

Changed in libvirt:
importance: Unknown → High
status: Unknown → Fix Released