| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| NetworkManager | Expired | Medium | | |
| network-manager (Ubuntu) | Fix Released | High | Unassigned | |
| Xenial | Won't Fix | Undecided | Unassigned | |
| Bionic | Fix Released | High | Till Kamppeter | |
| Cosmic | Fix Released | High | Unassigned | |
| systemd (Ubuntu) | Fix Released | High | Unassigned | |
| Xenial | Won't Fix | Undecided | Unassigned | |
| Bionic | Fix Released | High | Dan Streetman | |
| Cosmic | Fix Released | High | Dan Streetman | |
Bug Description
[Impact]
When using a VPN, DNS requests might still be sent to a DNS server outside the VPN when they should not be.
[Test case]
1) Set up a VPN with split tunneling:
a) Configure VPN normally (set up remote host, any ports and options needed for the VPN to work)
b) Under the IPv4 tab: enable "Use this connection only for the resources on its network".
c) Under the IPv6 tab: enable "Use this connection only for the resources on its network".
2) Connect to the VPN.
3) Run 'systemd-resolve --status'; note the DNS servers configured:
a) For the VPN; under a separate link (probably tun0), note down the IP of the DNS server(s). Also note the name of the interface (link).
b) For the "main" connection; under the link for your ethernet or wireless devices (wl*, en*, whatever it may be), note down the IP of the DNS server(s). Also note the name of the interface (link).
4) In a separate terminal, run 'sudo tcpdump -ni <the main interface> port 53'; let it run.
5) In a separate terminal, run 'sudo tcpdump -ni <the VPN interface> port 53'; let it run.
6) In yet another terminal, issue name resolution requests using dig:
a) For a name known to be reachable via the public network:
'dig www.yahoo.com'
b) For a name known to be reachable only via the VPN:
'dig <some DNS behind the VPN>'
7) Check the output of each terminal running tcpdump. When requesting the public name, traffic can go through either. When requesting the "private" name (behind the VPN), traffic should only be going through the interface for the VPN. Additionally, ensure the IP receiving the requests for the VPN name is indeed the IP address noted above for the VPN's DNS server.
If you see no traffic showing in tcpdump output when requesting a name, it may be because it is cached by systemd-resolved. Use a different name you have not tried before.
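A condensed sketch of the procedure above, as a shell session (the interface names and the internal hostname are placeholders; substitute the ones you noted in step 3):

```shell
# Terminal 1: watch DNS traffic on the main interface (example name)
sudo tcpdump -ni enp0s3 port 53

# Terminal 2: watch DNS traffic on the VPN interface (example name)
sudo tcpdump -ni tun0 port 53

# Terminal 3: issue the lookups
dig www.yahoo.com            # public name: may use either interface
dig internal.example.corp    # VPN-only name: must appear only on tun0,
                             # destined for the VPN's DNS server IP
```

This is a procedure fragment, not a script: run each terminal's command separately and compare the tcpdump output against the DNS server IPs noted earlier.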
[Regression potential]
The code changes the handling of DNS servers when using a VPN; we should check that name resolution still works when using a VPN in different configurations.
-----------------
In 16.04 the NetworkManager package used to carry this patch:
http://
It fixed the DNS setup so that when I'm on the VPN, I am not sending unencrypted DNS queries to the (potentially hostile) local nameservers.
This patch disappeared in an update. I think it was present in 1.2.2-0ubuntu0.
This security bug exists upstream too: https:/
It's not a *regression* there though, as they didn't fix it yet (unfortunately!)
CVE References
In bugzilla.gnome.org/ #746422, Psimerda (psimerda) wrote : | #66 |
In my opinion it is useful to use a split DNS view in all cases and only use the never-default setting to decide the global DNS.
Rationale: There is no such thing as sending all traffic across the VPN, only default-route traffic, i.e. traffic for which there's no specific route over a specific interface. As specific routes (as found in the routing table) are still used even with the default route over the VPN, I believe that specific zones (as found in per-connection lists of domains) should be maintained as well.
In bugzilla.gnome.org/ #746422, warthog9 (warthog9-eaglescrag) wrote : | #67 |
Pavel, I'll admit to not 100% following what you've suggested, so please excuse me if I've horribly misunderstood. I disagree with the assertion that "there is no such thing as sending all traffic across VPN". The parent interface's adapter will have a local route, mainly so you can get to the gateway, as well as a route for the VPN endpoint you need to push traffic at. However, there are some mitigating circumstances where forcing split-DNS, so that the DNS on the VPN only serves the search spaces pushed, is actually exactly the opposite of what a user likely wants and/or causes some rather broken behavior.
- VPNs can, and often do, have IP space overlap issues. So if the parent interface's network you are on happens to be in the 10.0.0.
- DNS is not equal at all locations, which I think your split-DNS assumption presumes. DNS zones mean that something that resolves externally one way may resolve completely differently (and potentially). example.com, to an external resolver, may go to a coloed and public instance, while the same DNS entry from an internal DNS server may not. Assuming the VPN only pushes a search of internal.
- In a more casual environment, let's say a hotel, part of the reason to use a VPN is because the user EXPLICITLY doesn't trust anything about the network they are on, up to and including the DNS servers. The user, if they are routing everything, is pretty likely to trust the DNS servers on the far side of the VPN. There are actually plenty of security concerns there, and I for one would have assumed that if I'm routing all my traffic over the VPN, my DNS traffic was as well (meaning I wouldn't have been relying on, or trusting, DNS servers I don't trust).
These are the three issues that most concern me about keeping split-DNS as the default without choice. There are situations where I want split-DNS; it's great and it's a fantastic feature. But in the case of a VPN that's intending to route all traffic, I'd argue the expected case is that all traffic, including all DNS queries, goes over the VPN and is not split.
In bugzilla.gnome.org/ #746422, Alexander E. Patrakov (patrakov-gmail) wrote : | #68 |
There is one more interesting use case currently unsupported by NetworkManager.
SafeDNS Inc. (a company that provides a filtered DNS server for parental control, with configurable filter settings) currently runs private beta testing of their VPN service, based on OpenVPN, with mobile phones and tablets being the primary target.
Their VPN is a split-tunnel setup, i.e. the pushed config explicitly routes their subnet (which contains their DNS server and various types of the "site is blocked" page) through the VPN, and nothing else. Their DNS server uses the fact that the queries come through the VPN in order to reliably identify the user and to apply his/her personal filtering settings.
For the filter to work, it is crucial that there are no other DNS servers used in the system. That's how OpenVPN Connect on both Android and iOS behaves, but, unfortunately, not how NetworkManager works.
So, in summary, they absolutely need all other DNS servers to be forgotten by the client, even though they don't want to route all traffic through their VPN servers.
For the record, here is what their ovpn file looks like:
client
remote vpn.safedns.com 1194 udp
nobind
dev tun
persist-tun
persist-key
verify-x509-name vpn.safedns.com name
remote-cert-tls server
cipher AES-128-CBC
<ca>
-----BEGIN CERTIFICATE-----
...snip...
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
...snip...
-----END CERTIFICATE-----
</ca>
<cert>
...snip...
-----END CERTIFICATE-----
</cert>
<key>
-----BEGIN PRIVATE KEY-----
...snip...
-----END PRIVATE KEY-----
</key>
They push routes to 195.46.39.0/29 and 195.46.39.39, and push 195.46.39.39 as the DNS server.
In bugzilla.gnome.org/ #746422, Dcbw-y (dcbw-y) wrote : | #69 |
Thanks for the data point Alexander. We probably need another option on the IP4 and IP6 settings for all connections indicating whether or not split DNS should be used if it is available for that connection. There would be a default value that would preserve existing behavior.
For Pavel's case (with a VPN grabbing all default-route traffic, but where some routes/devices are not routed over the VPN) that property would indicate that split DNS should be used.
For the SafeDNS case (where the VPN does not grab all default-route traffic, but no other DNS should be used) that property would indicate that no split DNS should be used, and the DNS manager would merge/ignore DNS based on connection priority (since we will very soon support multiple VPN connections).
If the property was left as default, then ipX.never-default would control whether split DNS was used or not.
In bugzilla.gnome.org/ #746422, Thomas Haller (thaller-1) wrote : | #70 |
(In reply to Alexander E. Patrakov from comment #3)
> So, in summary, they absolutely need all other DNS servers to be forgotten
> by the client, even though they don't want to route all traffic through
> their VPN servers.
For the record, this is now possible by bug 758772
https:/
In bugzilla.gnome.org/ #746422, dwmw2 (dwmw2) wrote : | #71 |
*** Bug 780913 has been marked as a duplicate of this bug. ***
In bugzilla.gnome.org/ #746422, dwmw2 (dwmw2) wrote : | #72 |
If a VPN connection is set to take all traffic then we should definitely be using its DNS servers for all lookups. We might not even be able to *reach* the DNS server advertised by the "local" network. We *might* if it's physically on the local subnet, but almost certainly not further afield.
Note also that if you *wanted* to do split DNS, you have no idea which domains to do split DNS *FOR*. You have a list of default search domains, but that is a DIFFERENT THING. A search domain of example.com means "if the user looks up foo and it doesn't exist, then also look for foo.example.com before failing". It doesn't mean any more than that. In particular, there can be domains which exist in a VPN (such as example.internal) which are *not* added to the search domains.
If you want to add an option for doing split-DNS, it can't be a boolean and abuse the search domains. It has to be an explicit list of the domains for which you want to use that DNS service. Unless we have a separate list of "domains which exist here" for each connection, which is *distinct* from the search domains?
In bugzilla.gnome.org/ #746422, Bgalvani (bgalvani) wrote : | #73 |
(In reply to David Woodhouse from comment #7)
> Note also that if you *wanted* to do split DNS, you have no idea which
> domains to do split DNS *FOR*. You have a list of default search domains,
> but that is a DIFFERENT THING. A search domain of example.com means "if the
> user looks up foo and it doesn't exist, then also look for foo.example.com
> before failing". It doesn't mean any more than that.
It also means that foo.example.com should be queried over the
interface that has it in the search list. I don't think it makes any
sense that interface X specifies domain D in the search list but we
try to resolve it on interface Y.
> In particular, there
> can be domains which exist in a VPN (such as example.internal) which are
> *not* added to the search domains.
Yes, basically the search list is only a subset of the domains for
which we want to do split DNS.
> If you want to add an option for doing split-DNS, it can't be a boolean and
> abuse the search domains. It has to be an explicit list of the domains for
> which you want to use that DNS service. Unless we have a separate list of
> "domains which exist here" for each connection, which is *distinct* from the
> search domains?
I looked at systemd-resolved documentation and it introduces the
concept of "routing-only" domains, which are those domains only used
for deciding the output interface, as opposed to "search" domains that
are used for both completing unqualified names and for routing:
https:/
and I think we could do something similar. For example, interpret
domains starting with a tilde in ipvX.dns-search as routing-only. All
other domains automatically obtained from DHCP/VPN/... would still be
considered as standard search domains.
When the DNS backend supports split DNS (i.e. dnsmasq or
systemd-resolved) I think we should always use split-DNS for the
domains in the search list; and with always I mean also for non-VPN
connections. In this way, queries for a domain in the search list will
be forwarded only to the interface that specified it. Then, of course,
we need to add wildcard rules to forward non-matching queries to all
interfaces.
Borrowing another concept from systemd-resolved, we could support the
wildcard routing domain "~." that means "send all queries that
don't match specific search domains to this interface". To keep
backwards compatibility, if no connection provides a wildcard routing
domain, we would forward all non-matching queries to all interfaces,
except to VPNs that provide a search list. In this way:
- we still do split DNS for VPNs by default
- this https:/
don't push any domains should get all queries) keeps working as is
In case of a full-tunnel VPN, one could set ipv4.dns-search to "~*" on
the VPN connection to direct all to the VPN DNS server. Queries for a
domain provided by a local connection would still go on through local
interface.
Any opinions about this idea? I pushed a draft implementation to branch
bg/dns-
cases mentioned in this bz.
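For reference, the routing-only domains that systemd-resolved implements (and that this proposal borrows from) can be exercised directly with resolvectl in newer systemd releases. This is a sketch; the link name tun0 is an assumption:

```shell
# "~example.com": routing-only domain — queries under example.com are
# routed to this link's DNS servers, but the domain is NOT used to
# complete unqualified names.
resolvectl domain tun0 '~example.com'

# "~.": wildcard routing domain — all queries matching no other link's
# domains go to this link's servers (full-tunnel behaviour).
resolvectl domain tun0 '~.'

# Inspect the per-link DNS servers and domains.
resolvectl status tun0
```

These are runtime settings on a live link; they need an existing systemd-resolved setup and are reset when the link goes down.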
In bugzilla.gnome.org/ #746422, dwmw2 (dwmw2) wrote : | #74 |
(In reply to Beniamino Galvani from comment #8)
> - we still do split DNS for VPNs by default
> - this https:/
> don't push any domains should get all queries) keeps working as is
VPNs which don't push any *routing* domains should get all queries. So that's *all* existing VPN configs. From the automatic configuration of VPNs we only ever get *search* domains.
> In case of a full-tunnel VPN, one could set ipv4.dns-search to "~*" on
> the VPN connection to direct all to the VPN DNS server.
This needs to be the default, surely?
> Queries for a domain provided by a local connection would still go on
> through local interface.
Doesn't that leave me with the same problem, that it's trying to perform DNS queries to the "local" DNS server which is actually upstream (e.g. 4.2.2.1), and I can't even *route* to that IP address because all my traffic is going to the VPN?
At the very least, this logic would need to be based on whether the VPN takes the default route or not, wouldn't it? If a VPN takes the default route, it *definitely* needs all DNS traffic. If it doesn't, it probably still should unless explicitly configured otherwise.
In bugzilla.gnome.org/ #746422, Bgalvani (bgalvani) wrote : | #75 |
(In reply to David Woodhouse from comment #9)
> (In reply to Beniamino Galvani from comment #8)
> > - we still do split DNS for VPNs by default
> > - this https:/
> > don't push any domains should get all queries) keeps working as is
>
> VPNs which don't push any *routing* domains should get all queries. So
> that's *all* existing VPN configs. From the automatic configuration of VPNs
> we only ever get *search* domains.
I think a search domain should also be implicitly used for routing,
and thus VPNs do push routing domains.
IOW, if a connection provides search domain X, queries for names
ending in X should only go through that connection, no?
> > In case of a full-tunnel VPN, one could set ipv4.dns-search to "~*" on
> > the VPN connection to direct all to the VPN DNS server.
>
> This needs to be the default, surely?
See below.
> > Queries for a domain provided by a local connection would still go on
> > through local interface.
>
> Doesn't that leave me with the same problem, that it's trying to perform DNS
> queries to the "local" DNS server which is actually upstream (e.g. 4.2.2.1),
> and I can't even *route* to that IP address because all my traffic is going
> to the VPN?
The scenario I'm referring to is: I'm connected to a VPN getting the
default route. I configure "~." as search domain on it to perform all
queries through the VPN. At the same time, the DHCP server on LAN
network announces a local DNS server with domain "local.foobar.com". I
want every query ending in this domain to be resolved locally, not
using the VPN DNS server.
If the DNS server announced by DHCP is not on LAN, I don't expect any
search domain to be present for the LAN connection and so every DNS
query will go through the VPN.
> At the very least, this logic would need to be based on whether the VPN
> takes the default route or not, wouldn't it? If a VPN takes the default
> route, it *definitely* needs all DNS traffic.
First, if we start to decide DNS policy based on routing, this would
be a change in behavior and will possibly break users'
configurations. If we restrict the change only to VPNs with default
routes, probably it's less of a problem, and I think we can do it.
> If it doesn't, it probably still should unless explicitly configured
> otherwise.
I think many people using split tunnel VPNs would complain about this
change in behavior because suddenly (and possibly, without knowing it)
they would start sending all DNS queries to the VPN, which can have
bad privacy implications (e.g. when it's a corporate VPN).
In bugzilla.gnome.org/ #746422, Bgalvani (bgalvani) wrote : | #76 |
I've reworked the branch a bit. The main change is a new
ipv{4,6}
previous version.
I've also written a short description of the changes at:
https:/
Please have a look at branch bg/dns-
In bugzilla.gnome.org/ #746422, Thomas Haller (thaller-1) wrote : | #77 |
>> dns: introduce routing domains
+ if (domain[0] == '~' && search_only)
+ continue;
+ if (domain[0] == '~') {
+ if (search_only)
+ continue;
+ }
I don't mind which, but let's make it consistent.
>> libnm-core,cli: add support for DNS default property
+ * name servers specified by this connection. When set to %FALSE, such
+ * queries are sent through all connections, excluding VPNs that provide
+ * a search list.
this is not clear to me. Instead of "When set to %FALSE,...", should it be:
"when no currently active connections have this property set to %TRUE, ..."
>> dns: add 'default' attribute to exported DNS entries
+ /* Add default */
+ g_variant_
+ "{sv}",
+ "default",
+ g_variant_
maybe only add the "default" key, if the value is actually TRUE. A missing value already means FALSE. The reason would be, not to teach users to rely on "default" being present.
minor fixup commits pushed.
the rest lgtm
In bugzilla.gnome.org/ #746422, dwmw2 (dwmw2) wrote : | #78 |
(In reply to Beniamino Galvani from comment #11)
> I've reworked a bit the branch. The main change is a new
> ipv{4,6}
> previous version.
I'm still really unhappy with this. DNS is *not* a per-protocol thing. It doesn't make sense to have separate ipv4 vs. ipv6 dns defaults, any more than it does to have defaults for different IP ranges (i.e. DNS for 10.x.x.x vs. for 11.x.x.x). (Sure we have those for reverse-DNS but not like this.)
I understand that we have inherited this to a certain extent but I'd really like to avoid entrenching it further.
In bugzilla.gnome.org/ #746422, Thomas Haller (thaller-1) wrote : | #79 |
(In reply to David Woodhouse from comment #13)
> (In reply to Beniamino Galvani from comment #11)
> > I've reworked a bit the branch. The main change is a new
> > ipv{4,6}
> > previous version.
>
> I'm still really unhappy with this. DNS is *not* a per-protocol thing. It
> doesn't make sense to have separate ipv4 vs. ipv6 dns defaults, any more
> than it does to have defaults for different IP ranges (i.e. DNS for 10.x.x.x
> vs. for 11.x.x.x). (Sure we have those for reverse-DNS but not like this.)
>
> I understand that we have inherited this to a certain extent but I'd really
> like to avoid entrenching it further.
that is true.
but we cannot change the API, so all we could do is deprecate the current properties and add a new one. But the old one still has to be supported. It's not clear that this is cleaner in the result.
As for adding new DNS-related settings, it probably makes sense to instead add a new "ip" section and place it there. One day, we might deprecate ipv4.dns and ipv6.dns in favor of an ip.dns.
So let's do ^^?
In bugzilla.gnome.org/ #746422, Thomas Haller (thaller-1) wrote : | #80 |
Or
[network]
dns-default=...
like, [Network] in `man systemd.network`
In bugzilla.gnome.org/ #746422, Bgalvani (bgalvani) wrote : | #81 |
(In reply to Thomas Haller from comment #14)
> but we cannot change API, so, all we could do is deprecate the current
> properties and add a new one. But the old one still has to be supported.
> It's not clear that this is cleaner in the result.
>
> As for adding new DNS related settings, it probably makes sense to instead
> add a new "ip" section and place it there. One day, we might deprecated
> ipv4.dns and ipv6.dns in favor of a ip.dns.
Or maybe DNS parameters should have their own "DNS" setting as:
dns.servers
dns.domains
dns.options
dns.priority
dns.is-default
The old properties would be copied to the new setting when the connection is normalized, to provide backwards compatibility.
An "IP" setting would instead make sense if there are other protocol-agnostic IP properties we plan to add (but I can't imagine which ones now).
In bugzilla.gnome.org/ #746422, Thomas Haller (thaller-1) wrote : | #82 |
proxy...
Each setting instance has a large implementation overhead. We could instead have
[network]
proxy-*=
dns-*=
arguably,
`nmcli connection modify $NAME dns.*`
is much nicer than
`nmcli connection modify $NAME network.dns-*`
(while nmcli is not required to expose nm-settings exactly the same way as they are in libnm and on D-Bus, it makes sense to do that).
tl;dr: +1 for a "dns" section.
In bugzilla.gnome.org/ #746422, Bgalvani (bgalvani) wrote : | #83 |
(In reply to Thomas Haller from comment #17)
> tl;dr: +1 for a "dns" section.
Ok, I'll add the new 'default' property to a 'dns' setting and I'll also move the existing connection.mdns property there, since we haven't done any official release that includes it.
I pushed some preliminary patches to branch bg/dns-
In bugzilla.gnome.org/ #746422, Thomas Haller (thaller-1) wrote : | #84 |
(In reply to Beniamino Galvani from comment #18)
> (In reply to Thomas Haller from comment #17)
> > tl;dr: +1 for a "dns" section.
>
> Ok, I'll add the new 'default' property to a 'dns' setting and I'll also
> move the existing connection.mdns property there, since we haven't done any
> official release that includes it.
I agree.
> I pushed some preliminary patches to branch bg/dns-
> please review.
+ if (search_only && domain_
+ continue;
it's a bit confusing that the parameter is called "search_only", while it compares it with "routing_only". Could you rename "search_only"?
Also, the inverse naming is confusing to me:
add_dns_domains (array, ip_config, FALSE, FALSE);
has search_only=FALSE, the double-inverse will result in ~all~. Could we rename "search_only" to "with_routing_only" (and inverse meaning)?
+ if (!domain_is_valid (str, FALSE))
+ continue;
@str possibly has a leading tilde. Shouldn't you strip it with "nm_utils_
- /* If this link is never the default (e.g. only used for resources on this
- * network) add a routing domain. */
- route_only = addr_family == AF_INET
- ? !nm_ip4_
- : !nm_ip6_
-
this behaviour came originally from https:/
pushed one fixup.
In bugzilla.gnome.org/ #746422, Bgalvani (bgalvani) wrote : | #85 |
(In reply to Thomas Haller from comment #19)
> (In reply to Beniamino Galvani from comment #18)
> > (In reply to Thomas Haller from comment #17)
> > > tl;dr: +1 for a "dns" section.
> >
> > Ok, I'll add the new 'default' property to a 'dns' setting and I'll also
> > move the existing connection.mdns property there, since we haven't done any
> > official release that includes it.
>
> I agree.
>
> > I pushed some preliminary patches to branch bg/dns-
> > please review.
>
> + if (search_only && domain_
> + continue;
>
> it's a bit confusing that the parameter is called "search_only", while it
> compares it with "routing_only". Could you rename "search_only"?
> Also, the inverse naming is confusing to me:
>
> add_dns_domains (array, ip_config, FALSE, FALSE);
>
> has search_only=FALSE, the double-inverse will result in ~all~. Could we
> rename "search_only" to "with_routing_only" (and inverse meaning)?
Changed.
> + if (!domain_is_valid (str, FALSE))
> + continue;
>
> @str possibly has a leading tilde. Shouldn't you strip it with
> "nm_utils_
Good catch, fixed.
> - /* If this link is never the default (e.g. only used for resources on
> this
> - * network) add a routing domain. */
> - route_only = addr_family == AF_INET
> - ? !nm_ip4_
> (config))
> - : !nm_ip6_
> (config));
> -
>
> this behaviour came originally from
> https:/
> ?id=c4864ba63f4
> message, I don't understand why we would set routing-only if
> nm_ip4_
> more sense to me.
The original behavior didn't make much sense and caused trouble (as in bug 783024 and bug 782469). I think it's better if we add domains as search domains by default.
In bugzilla.gnome.org/ #746422, Thomas Haller (thaller-1) wrote : | #86 |
bg/dns-
In bugzilla.gnome.org/ #746422, Bgalvani (bgalvani) wrote : | #87 |
First part merged to master:
In bugzilla.gnome.org/ #746422, dwmw2 (dwmw2) wrote : | #88 |
So... a VPN configuration which wants *all* DNS queries to go to the VPN's nameservers would add '~.' to the dns search list? Or just '~'?
In bugzilla.gnome.org/ #746422, Bgalvani (bgalvani) wrote : | #89 |
(In reply to David Woodhouse from comment #23)
> So... a VPN configuration which wants *all* DNS queries to go to the VPN's
> nameservers would add '~.' to the dns search list? Or just '~'?
Yes, that is the plan ('~.') for part 2 (not implemented yet).
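As a sketch of how a user could opt in once part 2 lands (the connection name is a placeholder, and the '~.' syntax is the proposal discussed here, not yet released at the time of this comment):

```shell
# Route ALL DNS queries to the VPN's name servers by setting the
# wildcard routing domain "~." on the VPN connection.
nmcli connection modify "My VPN" ipv4.dns-search '~.'
nmcli connection modify "My VPN" ipv6.dns-search '~.'
nmcli connection up "My VPN"
```

This is a configuration fragment, not tested against any released NetworkManager version.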
In bugzilla.gnome.org/ #746422, dwmw2 (dwmw2) wrote : | #90 |
Per discussion on IRC, I really don't like this very much. There is too much confusion already.
We should have a *clear* separation of two entirely different fields.
• Search list
This is the list of domains that will be appended to failing lookups and tried again, before eventually failing. So if my search list contains "company.com, company.internal" and I attempt to look up "foo", then I get to wait while all these lookups are done in turn:
foo.
foo.company.com.
foo.company.internal.
Note that this is an *ordered* list. And it should be kept as short as possible because we don't *want* to wait for all those extra lookups. And it might not be set at all, because we might want people to use fully qualified domain names in hyperlinks and other places, without relying on this auto-completion.
• Lookup list
This is the list of domains for which this connection's nameserver should be used. For a large company which has merged with lots of others, there may be *many* domains which are only visible internally, or which have schizo-DNS presenting a different view to the internal network. This may contain all of the elements in the search list, but would *also* contain others like "some-company-
This is an unordered list. And for full-tunnel VPNs we should use the VPN DNS for *everything*, which is the subject of this bug. This is quite a serious security bug, as described in https:/
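In classic resolver terms, the "search list" described above corresponds to the `search` directive in resolv.conf; an illustrative fragment (domains and address are examples):

```
# /etc/resolv.conf (illustrative only)
search company.com company.internal
nameserver 10.8.0.1
# Looking up "foo" tries: foo., foo.company.com., foo.company.internal.
```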
In bugzilla.gnome.org/ #746422, dwmw2 (dwmw2) wrote : | #91 |
Some further notes:
• I can make NetworkManager-
'split dns domain' in separate config items. Right now we mash them all
into the search domains because we know NM abuses that field for both,
but we should fix that too.
• We might perhaps take this opportunity to kill the weirdness that we have
redundant DNS information in *both* IPv4 and IPv6 configs. That was always
wrong, and a clean break to make it sane might be better than letting it
persist for ever.
• For a split-tunnel VPN, it's OK to have the split-dns lookup list default
to the same as the search list (plus all the reverse domains which we do
already get right, I believe). It should be possible to *add* entries to
the split-dns lookup list that aren't in the search list, both manually
in the config, and automatically from the VPN plugin (see first point above).
• for a full-tunnel VPN, the split-dns lookup list should cover everything.
There *might* be an exception list — for example IF we allow routing to
the local network (cf. bug 767288 which asks for an option not to) then
the reverse lookup ranges in .ip6.arpa and the split-dns domain list of
the local network would be the only thing that's permitted *not* to go
to the VPN's nameservers.
In the general case, DNS lookups for reverse ranges in .ip6.arpa and .in-addr.arpa should go to the nameservers of the connection to which we are routing those addresses. We *mostly* get that right, except for full-tunnel VPN right now.
Forward DNS lookups by default should go to the DNS servers of the connection which has the default route. Again, we don't get that right for full-tunnel VPN right now.
Forward DNS lookups for domains which are listed in some connection's "split-dns domains list" (which I called the 'lookup list' above), could go to that connection's DNS server UNLESS there's a reason not to (like a full tunnel VPN wanting to disable *all* local access as discussed in bug 767288).
information type: | Private Security → Public Security |
Mathieu Trudel-Lapierre (cyphermox) wrote : | #1 |
Confirming this is broken. Dropping the patch 0001-dns-
Changed in network-manager (Ubuntu): | |
status: | New → Confirmed |
importance: | Undecided → High |
tags: | added: regression-update |
In bugzilla.gnome.org/ #746422, dwmw2 (dwmw2) wrote : | #92 |
Hm... since commit db9b7e10a for bug 707617, NMIP4Config already *has* separate fields for 'domains' vs. 'searches'.
Surely the "domains" list is already intended to be "routing domains", i.e. "domains for which we should do the DNS lookup to this connection's nameservers"?
While "searches" is purely the auto-completion one?
However, we seem to process them all the same. Perhaps we should stop doing that, then we don't need the '~' prefix hack...
dwmw2 (dwmw2) wrote : | #2 |
This is CVE-2018-1000135. For some reason the 'Link to CVE' option above doesn't seem to work.
https:/
In bugzilla.gnome.org/ #746422, Bgalvani (bgalvani) wrote : | #93 |
What is currently missing in my opinion is a flexible way to decide
which connections are used for default DNS queries (those not matching
any lookup domain).
A possible way to do this is to choose connections that have the
highest value of a new 'dns.default-
to have default values that work for most users, the default value of
the property would be 'auto' (0), which means:
* 1000 for full-tunnel VPNs
* 500 for non-VPN connections
* -1 for split-tunnel VPNs. -1 means that the connection is never
used for default DNS lookups
For example, if you have a full-tunnel VPN with search domain
'example.com' and a local connection with search domain 'local.com',
the following entries would be added to dnsmasq:
/example.
/local.
VPN-nameserver # default
But if the VPN is split-tunnel (doesn't get the default route):
/example.
/local.
local-nameserver # default
If you want that all queries go through the full-tunnel VPN with no
exceptions, also set ipvx.dns-priority -1 for the VPN and dnsmasq will
be configured with:
/example.
VPN-nameserver # default
BTW, for ipvx.dns-priority we consider lower values with higher
priority while for dns.default-
believe doing ipvx.dns-priority that way was a mistake because it is
counterintuitive.
Users can also set custom value for dns.default-
configuration to their needs.
What do you think? Any other ideas?
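For context, dnsmasq expresses this kind of per-domain routing with `server=` lines; a hand-written sketch of the full-tunnel example above (all addresses and domains are placeholders):

```
# Illustrative dnsmasq configuration for the full-tunnel case
server=/example.com/10.8.0.1     # VPN domain  -> VPN nameserver
server=/local.com/192.168.1.1    # LAN domain  -> LAN nameserver
server=10.8.0.1                  # everything else -> VPN nameserver
```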
In bugzilla.gnome.org/ #746422, Bgalvani (bgalvani) wrote : | #94 |
(In reply to David Woodhouse from comment #26)
> Some further notes:
>
> • I can make NetworkManager-
> 'split dns domain' in separate config items. Right now we mash them all
> into the search domains because we know NM abuses that field for both,
> but we should fix that too.
You can put them all in the NM_VPN_
> • We might perhaps take this opportunity to kill the weirdness that we have
> redundant DNS information in *both* IPv4 and IPv6 configs. That was always
> wrong, and a clean break to make it sane might be better than letting it
> persist for ever.
Changing this is a huge pain for users, and developers too (but this probably doesn't matter much). I can't think of a way to achieve a smooth transition from the separate DNS properties into a new common setting.
In bugzilla.gnome.org/ #746422, dwmw2 (dwmw2) wrote : | #95 |
> For example, if you have a full-tunnel VPN with search domain
> 'example.com' and a local connection with search domain 'local.com',
> the following entries would be added to dnsmasq:
Please let's not talk about search domains. Those are completely different things, not related to what we're talking about here.
Search domains are purely a way to enable users to be lazy. I can type 'intranet' into my browser and it gets autocompleted to "intranet.
They (should) have *nothing* to do with the choice of which DNS lookups get sent out on which connection. (Apart from the fact that we're doing this horrid thing with mixing them all together and prefixing one with ~, which is a technical detail.)
A full-tunnel VPN should end up with a *LOOKUP* domain of "" or "*" or "." or however you want to represent the default (currently there's no way for me to specify that even manually to work around this issue, I think).
I think that implementing the "~." support as suggested in comment 24 and then making full-tunnel VPNs automatically add that, would go a long way to dealing with this problem.
I'm not sure I understand the benefit of adding 'dns.default-
In bugzilla.gnome.org/ #746422, dwmw2 (dwmw2) wrote : | #96 |
(In reply to Beniamino Galvani from comment #29)
> > • We might perhaps take this opportunity to kill the weirdness that we have
> > redundant DNS information in *both* IPv4 and IPv6 configs. That was always
> > wrong, and a clean break to make it sane might be better than letting it
> > persist for ever.
>
> Changing this is a huge pain for users, and developers too (but this
> probably doesn't matter much). I can't think of a way to achieve a smooth
> transition from the separate DNS properties into a new common setting.
It's a short-term pain, which will eventually go away.
Can't we start by shadowing the ipv?.dns-* options into a generic dns.* option set so that they're identical? We can give people a long time to start using the new location, before eventually taking away the old ones.
We definitely shouldn't be making the problem *worse* by doing things like the ~ prefix hack or adding any *more* fields to ipv?.dns-*. Can't we at least add that as dns.lookup-domain even if it's all by itself in the "DNS" settings for now?
In bugzilla.gnome.org/ #746422, dwmw2 (dwmw2) wrote : | #97 |
I think this can work without a new default-priority field, and just a simple set of lookup domains per connection in conjunction with the existing dns-priority.
You have a set of { domain, connection, priority } tuples. (Where all lookup domains of a given connection have the *same* priority right now; there's no real need to make them individually configurable I think).
Where multiple lookup rules exist for a given domain, the highest priority (numerically lowest) one wins. And when a given rule is for a domain which is a *subdomain* of another domain with a negative dns-priority, that subdomain also loses (and is dropped).
So your first example in comment 28 would look like this:
{ "example.com", vpn0, 50 }
{ local.com, eth0, 100 }
{ ".", vpn0, 50 }
{ ".", eth0, 100 } # This one gets dropped due to the previous one
Your second example looks like this (because a split tunnel VPN doesn't add the "~." lookup domain):
{ "example.com", vpn0, 50 }
{ "local.com", eth0, 100 }
{ ".", eth0, 100 }
And your final example looks like this, because the user has set dns-priority=-1:
{ "example.com", vpn0, -1 }
{ local.com, eth0, 100 } # This one gets dropped due to the next one
{ ".", vpn0, -1 }
{ ".", eth0, 100 } # This one gets dropped due to the previous one
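The { domain, connection, priority } rule above can be sketched in Python (a minimal illustration of the proposed behaviour, not NetworkManager code; `resolve` and its helpers are invented names):

```python
# Sketch of the rule from this comment: per domain, the numerically lowest
# priority wins; a domain that is a subdomain of another connection's
# negative-priority domain is dropped. "." stands for the default domain.

def resolve(rules):
    # Step 1: per domain, keep only the rule with the lowest priority value.
    best = {}
    for domain, conn, prio in rules:
        if domain not in best or prio < best[domain][1]:
            best[domain] = (conn, prio)

    def is_subdomain(sub, parent):
        if parent == ".":
            return sub != "."      # everything is under the default domain
        return sub != parent and sub.endswith("." + parent)

    # Step 2: drop domains shadowed by a negative-priority parent elsewhere.
    result = {}
    for domain, (conn, prio) in best.items():
        shadowed = any(
            p_prio < 0 and p_conn != conn and is_subdomain(domain, p_domain)
            for p_domain, (p_conn, p_prio) in best.items()
        )
        if not shadowed:
            result[domain] = conn
    return result

# First example: full-tunnel VPN wins the default domain.
print(resolve([
    ("example.com", "vpn0", 50),
    ("local.com", "eth0", 100),
    (".", "vpn0", 50),
    (".", "eth0", 100),
]))  # → {'example.com': 'vpn0', 'local.com': 'eth0', '.': 'vpn0'}
```

With dns-priority=-1 on the VPN, `local.com` and the eth0 default rule are both dropped because they fall under the VPN's negative-priority "." rule, reproducing the third example.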
In bugzilla.gnome.org/ #746422, dwmw2 (dwmw2) wrote : | #98 |
The idea is that VPNs would automatically have the "~." default lookup domain added, according to the never-default flag.
In bugzilla.gnome.org/ #746422, Bgalvani (bgalvani) wrote : | #99 |
(In reply to David Woodhouse from comment #31)
> (In reply to Beniamino Galvani from comment #29)
> > > • We might perhaps take this opportunity to kill the weirdness that we have
> > > redundant DNS information in *both* IPv4 and IPv6 configs. That was always
> > > wrong, and a clean break to make it sane might be better than letting it
> > > persist for ever.
> >
> > Changing this is a huge pain for users, and developers too (but this
> > probably doesn't matter much). I can't think of a way to achieve a smooth
> > transition from the separate DNS properties into a new common setting.
>
> It's a short-term pain, which will eventually go away.
Maybe, but existing properties are part of the API and so they will never be
dropped because we don't break API between releases. We'll have to maintain
them, together with the new properties and the code to sync them forever.
> Can't we start by shadowing the ipv?.dns-* options into a generic dns.*
> option set so that they're identical? We can give people a long time to
> start using the new location, before eventually taking away the old ones.
If you mean that ipv4.dns-*, ipv6.dns-* and dns.* should be all identical,
that is a change in behavior and would badly break users.
If by shadowing you mean keeping them in sync (with the dns.* as the union
of ipv4.dns-* and ipv6.dns-*), that is possible but would create some other
problems in my opinion.
> We definitely shouldn't be making the problem *worse* by doing things like
> the ~ prefix hack or adding any *more* fields to ipv?.dns-*. Can't we at
> least add that as dns.lookup-domain even if it's all by itself in the "DNS"
> settings for now?
Ok, we aren't going to add any new properties to ipvx.dns-*. Yes, I think we
can add a dns.lookup-domain property.
In bugzilla.gnome.org/ #746422, dwmw2 (dwmw2) wrote : | #100 |
Right, the idea was the dns.* properties would be the union of the ipv4.dns-* and ipv6.dns-* (which right now are treated identically and just appended to each other, I believe?).
Note: it's "lookup domains" we should be using for the proxy setup we poke into PacRunner, not "search domains". Can we fix that too please?
tags: | added: incoming rs-bb- |
tags: |
added: rls-bb-incoming removed: incoming rs-bb- |
In bugzilla.gnome.org/ #746422, Bgalvani (bgalvani) wrote : | #101 |
I pushed branch bg/dns-bgo746422. I think it should solve the leaks in
case of full-tunnel VPNs when using dnsmasq or systemd-resolved. It
basically:
- allows users to specify that connections get the default lookup
domain through the special entry "~."
- automatically adds "~." to connections with the default route
- applies rules from comment 32 to decide which domains are used
based on DNS priority.
Other things that I didn't do, but can be done later (if necessary)
are:
- as noticed by Thomas, you can't override the NM decision of
automatically adding "~." to connections with default route.
Actually, you can add "~." to another connection with a lower
priority value, and it will override the "~." added by NM on the
first connection. Perhaps this is enough. Otherwise we could
introduce another special domain.
- I haven't added a new dns setting with duplicates of the existing
ipvx properties. I think it's really a different issue and should
be solved separately.
- Also I didn't add the dns.lookup-domain property, as the "~."
domain is sufficient and it is just another case of the
routing-only domains we already support.
What do you think?
In bugzilla.gnome.org/ #746422, Thomas Haller (thaller-1) wrote : | #102 |
> dns: dnsmasq: fix adding multiple domains
can you add a "Fixes" comment?
> core: allow '~.' domain in ip4 and ip6 configs
+ if (!nm_streq (search, "~.")) {
len = strlen (search);
if (search[len - 1] == '.')
search[len - 1] = 0;
+ }
this seems wrong, for example if "search" is "~..".
Either do a full normalization and drop such entries (or clean up the duplicate dots), or normalize "~." to "~" too. It's odd that the special domain is treated entirely differently regarding the trailing dot.
> dns: use dns-priority to provide a preprocessed domain list to plugins
_LOGD ("update-dns: updating plugin %s", plugin_name);
+ rebuild_
if (!nm_dns_
do we need to rebuild the list every time? We know when an update changes something. Can we cache the result, generate it when needed, and clear it when something changes?
+ struct {
+ const char **search;
+ char **reverse;
+ } domains;
} NMDnsIPConfigData;
I think these are leaked.
> dns: dnsmasq: honor dns-priority
+ int addr_family, i, j, num;
let's use guint for index variables of arrays? Also matches
num = nm_ip_config_
> dns: sd-resolved: honor dns-priority
- NMIPConfig *ip_config = elem->data;
+ NMDnsIPConfigData *data = elem->data;
+ NMIPConfig *ip_config = data->ip_config;
this seems wrong.
Overall, lgtm
Changed in network-manager (Ubuntu): | |
assignee: | nobody → Olivier Tilloy (osomon) |
tags: | removed: rls-bb-incoming |
Changed in network-manager: | |
importance: | Unknown → Medium |
status: | Unknown → Confirmed |
In bugzilla.gnome.org/ #746422, Bgalvani (bgalvani) wrote : | #103 |
(In reply to Thomas Haller from comment #37)
> > dns: dnsmasq: fix adding multiple domains
>
> can you add a "Fixes" comment?
Done.
> > core: allow '~.' domain in ip4 and ip6 configs
>
> + if (!nm_streq (search, "~.")) {
> len = strlen (search);
> if (search[len - 1] == '.')
> search[len - 1] = 0;
> + }
>
> this seems wrong, for example if "search" is "~..".
> Either, do a full normalization and drop such entires (or clean up the
> duplicate dots). Or: normalize "~." to "~" too. It's odd that the special
> domain is treated entirely different regardingt the trailing dot.
What do you mean by "full normalization"?
We can convert 'domain.' into 'domain' because it's equivalent, but
'domain..' or 'my..domain' are invalid and they should be dropped. I
think this would be the best approach.
If we normalize '~.' to '~', once we strip the ~ prefix the domain
would become empty, which is not desirable in my opinion.
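The normalization being discussed could look roughly like this (an assumed sketch, not the actual NM patch; `normalize_search_entry` is an invented name): strip one trailing dot, treat "~." as the special default domain, and drop entries containing empty labels as invalid.

```python
# Assumed sketch of the normalization rule under discussion: "~." is the
# special default domain, "domain." is equivalent to "domain", and entries
# with empty labels ("domain..", "my..domain", "~..") are invalid and dropped.

def normalize_search_entry(entry):
    routing_only = entry.startswith("~")       # "~" prefix: routing-only domain
    domain = entry[1:] if routing_only else entry
    if domain == ".":                          # the special default domain
        return (routing_only, ".")
    if domain.endswith("."):                   # strip a single trailing dot
        domain = domain[:-1]
    # Any remaining empty label makes the entry invalid.
    if not domain or "" in domain.split("."):
        return None
    return (routing_only, domain)

print(normalize_search_entry("example.com."))  # → (False, 'example.com')
print(normalize_search_entry("~."))            # → (True, '.')
print(normalize_search_entry("~.."))           # → None (not treated like "~.")
print(normalize_search_entry("my..domain"))    # → None
```

Note how "~.." is rejected rather than silently collapsing into the default domain, which is exactly the case raised in the review comment above.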
> > dns: use dns-priority to provide a preprocessed domain list to plugins
>
> _LOGD ("update-dns: updating plugin %s", plugin_name);
> + rebuild_
> if (!nm_dns_
>
> do we need to rebuild the list every time? We know when an update changes
> something. Can we cache the result, generate it when needed, and clear it
> when something changes?
If nothing changes, the DNS configuration hash will be the same and
update_dns() won't be called at all by nm_dns_
Ok, nm_dns_
the hash, but do we need another caching mechanism different from the
existing one just for this case?
> + struct {
> + const char **search;
> + char **reverse;
> + } domains;
> } NMDnsIPConfigData;
>
> I think these are leaked.
Oops, fixed.
> > dns: dnsmasq: honor dns-priority
>
> + int addr_family, i, j, num;
>
> let's use guint for index variables of arrays? Also matches
> num = nm_ip_config_
Ok.
> > dns: sd-resolved: honor dns-priority
>
> - NMIPConfig *ip_config = elem->data;
> + NMDnsIPConfigData *data = elem->data;
> + NMIPConfig *ip_config = data->ip_config;
>
> this seems wrong.
Why? Looks fine to me.
In bugzilla.gnome.org/ #746422, Thomas Haller (thaller-1) wrote : | #104 |
(In reply to Beniamino Galvani from comment #38)
> (In reply to Thomas Haller from comment #37)
> What do you mean by "full normalization"?
>
> We can convert 'domain.' into 'domain' because it's equivalent, but
> 'domain..' or 'my..domain' are invalid and they should be dropped. I
> think this would be the best approach.
I mean, to handle "my..domain". Either normalize the double . away, or verify it and drop it as invalid. If you don't do either, then "~.." will end up being treated like "~." which is wrong.
> If we normalize '~.' to '~', once we strip the ~ prefix the domain
> would become empty, which is not desirable in my opinion.
Yes, after dropping the default domain becomes "". It's not that you currently use the domain "." as-is. You do:
nm_streq (domain, ".") ? NULL : domain)
that could also be:
nm_streq (domain, "") ? NULL : domain)
or
#define DEFAULT_DOMAIN ""
nm_streq (domain, DEFAULT_DOMAIN) ? NULL : domain)
(maybe a define is in order either way).
> > > dns: use dns-priority to provide a preprocessed domain list to plugins
> >
> > _LOGD ("update-dns: updating plugin %s", plugin_name);
> > + rebuild_
> > if (!nm_dns_
> >
> > do we need to rebuild the list every time? We know when an update changes
> > something. Can we cache the result, generate it when needed, and clear it
> > when something changes?
>
> If nothing changes, the DNS configuration hash will be the same and
> update_dns() won't be called at all by nm_dns_
>
> Ok, nm_dns_
> the hash, but do we need another caching mechanism different from the
> existing one just for this case?
Maybe the hashing mechanism is ugly anyway, and should be dropped (in a future commit). Instead of implementing SHA-hashing of ~some~ parameters, implement a cmp() function. It's more efficient and easier to get right.
> > > dns: sd-resolved: honor dns-priority
> >
> > - NMIPConfig *ip_config = elem->data;
> > + NMDnsIPConfigData *data = elem->data;
> > + NMIPConfig *ip_config = data->ip_config;
> >
> > this seems wrong.
>
> Why? Looks fine to me.
you are right.
In bugzilla.gnome.org/ #746422, Bgalvani (bgalvani) wrote : | #105 |
(In reply to Thomas Haller from comment #39)
> (In reply to Beniamino Galvani from comment #38)
> > (In reply to Thomas Haller from comment #37)
>
> > What do you mean by "full normalization"?
> >
> > We can convert 'domain.' into 'domain' because it's equivalent, but
> > 'domain..' or 'my..domain' are invalid and they should be dropped. I
> > think this would be the best approach.
>
> I mean, to handle "my..domain". Either normalize the double . away, or
> verify it and drop it as invalid. If you don't do either, then "~.." will
> end up being treated like "~." which is wrong.
>
> > If we normalize '~.' to '~', once we strip the ~ prefix the domain
> > would become empty, which is not desirable in my opinion.
>
> Yes, after dropping the default domain becomes "". It's not that you
> currently use the domain "." as-is. You do:
>
> nm_streq (domain, ".") ? NULL : domain)
>
> that could also be:
>
> nm_streq (domain, "") ? NULL : domain)
>
> or
>
> #define DEFAULT_DOMAIN ""
> nm_streq (domain, DEFAULT_DOMAIN) ? NULL : domain)
>
> (maybe a define is in order either way).
Yeah, considering "" internally as the wildcard domain simplifies things a bit. I've updated the branch.
> Maybe the hashing mechanism is anyway ugly, and should be dropped (in a
> future commit). Instead of implementing a SHA-hashing of ~some~ parameters,
> implement a cmp() function. It's more efficient, and easier to get right.
I agree, this could be an improvement (for the future).
Olivier Tilloy (osomon) wrote : | #3 |
There's active work going on upstream (see https:/
https:/
Once in master, it would probably be doable to backport those changes (including https:/
In bugzilla.gnome.org/ #746422, Thomas Haller (thaller-1) wrote : | #106 |
> core: allow '~.' domain in ip4 and ip6 configs
commit bebafff7844228d
lgtm, but the commit message no longer seems to match, does it?
> dns: use dns-priority to provide a preprocessed domain list to plugins
+ c_list_
the list head is accessed on every iteration. Could we pre-cache the value like:
CList *ip_config_
ip_
c_list_
(and below again)
+ int i, n, n_domains = 0;
these variables are used to iterate over a number of guint values. Like
n = nm_ip_config_
where get_num_searches() returns a guint. Could we consistently use the correct type? (yes, I know that C might optimize signed for loops better, because it assumes that signed cannot overflow. But IMO consistent use of types is more important than what the compiler might do).
+ priority = nm_ip_config_
+ nm_assert (priority != 0);
this invariant is not enforced by NMIPxConfig, so, you basically assert that all callers that create an NMIPxConfig properly initialize priority to a non-zero value. It asserts something that is two layers of code away. That is non-obvious. Not sure what to do about this. Ok, fine as is :)
add the wilcard domain to all non-VPN
^^^^^^^
+ parent_priority = GPOINTER_TO_INT (g_hash_
+ if (parent_priority < 0 && parent_priority < priority) {
+ *out_parent = ".";
+ *out_parent_
is "." still right? Should this be "" to mean the wildcard domain?
could you add a
nm_assert (!g_hash_
to domain_
g_free (ip_data-
I always like avoiding unnecessary copies. But in this case, "search" array will point inside strings owned by NMIPxConfig. That seems quite fragile to me. Should we not clone them? In practice it's not necessary, but it feels fragile. What can we do about that?
/* Free the array and return NULL if the only element was the ending NULL */
strv = (char **) g_ptr_array_free (domains, (domains->len == 1));
return _nm_utils_
skip_repeated=TRUE likely won't help much, because duplicated domains are probably not sorted after each other. Maybe sort them first? Maybe by cherry-picking "shared: add nm_utils_
In bugzilla.gnome.org/ #746422, Michael Biebl (mbiebl) wrote : | #107 |
Will those changes be pulled into the nm-1-10 branch as well?
In bugzilla.gnome.org/ #746422, Bgalvani (bgalvani) wrote : | #108 |
(In reply to Thomas Haller from comment #41)
> > core: allow '~.' domain in ip4 and ip6 configs
>
> commit bebafff7844228d
> lgtm, but the commit message no longer seams to match, does it?
> the list head is accessed on every iteration. Could we pre-cache the value
> like:
> + int i, n, n_domains = 0;
>
> these variables are used to iterate over a number of guint values. Like
>
> n = nm_ip_config_
>
> where get_num_searches() returns a guint. Could we consistently use the
> correct type?
> add the wilcard domain to all non-VPN
> is "." still right? Should this be "" to mean the wildcard domain?
> could you add a
> nm_assert (!g_hash_
> to domain_
> calling domain_
Fixed all the above.
> I always like avoiding unnecessary copies. But in this case, "search" array
> will point inside strings owned by NMIPxConfig. That seems quite fragile to
> me. Should we not clone them? In practice it's not necessary, but it feels
> fragile. What can we do about that?
Now the list is cleared just after nm_dns_
that elements in the list don't become stale. What do you think?
> /* Free the array and return NULL if the only element was the ending NULL */
> strv = (char **) g_ptr_array_free (domains, (domains->len == 1));
>
> return _nm_utils_
>
> skip_repeated=TRUE likely won't help much, because duplicated domains are
> probably not sorted after each other. Maybe sort them first? Maybe by
> cherry-picking "shared: add nm_utils_
> https:/
I don't understand, duplicate domains don't have to be consecutive with
current _nm_utils_
In bugzilla.gnome.org/ #746422, Bgalvani (bgalvani) wrote : | #109 |
I've added some NM-CI tests to ensure the behavior is the one we expect:
https:/
In bugzilla.gnome.org/ #746422, Bgalvani (bgalvani) wrote : | #110 |
(In reply to Michael Biebl from comment #42)
> Will those changes be pulled into the nm-1-10 branch as well?
Let's wait until the branch is merged to master and see if there are
any issues. If there are no complaints, we can think about backporting
it to a stable branch.
In bugzilla.gnome.org/ #746422, Thomas Haller (thaller-1) wrote : | #111 |
> I don't understand, duplicate domains don't have to be consecutive with
> current _nm_utils_
How do you mean?
calling "_nm_utils_
so either: do not drop any duplicates and do skip_repeated=FALSE
or: drop all duplicates, by sorting the array first, followed by skip_repeated=TRUE.
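The skip_repeated point can be illustrated with a small sketch (illustrative Python, not the actual NM helper): deduplicating only consecutive entries misses non-adjacent duplicates unless the list is sorted first.

```python
# Illustration of the two options above: a skip-repeated pass only removes
# *consecutive* duplicates, so either keep all duplicates, or sort first so
# duplicates become adjacent. (Plain Python; not the _nm_utils_ helper.)

def dedup_consecutive(items):
    out = []
    for x in items:
        if not out or out[-1] != x:
            out.append(x)
    return out

domains = ["a.com", "b.com", "a.com"]

# On the unsorted list, the non-adjacent duplicate survives:
print(dedup_consecutive(domains))          # → ['a.com', 'b.com', 'a.com']

# Sorting first makes duplicates adjacent, so they are all dropped:
print(dedup_consecutive(sorted(domains)))  # → ['a.com', 'b.com']
```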
In bugzilla.gnome.org/ #746422, Lubomir Rintel (lkundrak) wrote : | #112 |
bg/dns-
In bugzilla.gnome.org/ #746422, Lubomir Rintel (lkundrak) wrote : | #113 |
That is, bg/dns-bgo746422 LGTM
In bugzilla.gnome.org/ #746422, dwmw2 (dwmw2) wrote : | #115 |
FWIW the original situation appears to be fairly broken regardless of the security situation. I've lost count of the number of times in my recent travels that the DNS servers of the airport/hotel/etc. in which I find myself are *not* directly on the local subnet. And thus aren't accessible as soon as I join the VPN.
If that situation is going to be allowed to persist in any form (especially if it's the default and we have to take special action to give the VPN nameservers top priority), then we should fix that by explicitly adding routes to them, as I've been having to do manually:
$ ip route
default dev vpn0 proto static scope link metric 50
default via 10.246.8.1 dev wlp2s0 proto static metric 600
8.8.4.4 via 10.246.8.1 dev wlp2s0
8.8.8.8 via 10.246.8.1 dev wlp2s0
Olivier Tilloy (osomon) wrote : | #4 |
A fix was merged to the upstream master branch: https:/
Changed in network-manager: | |
status: | Confirmed → Fix Released |
In bugzilla.gnome.org/ #746422, Olivier Tilloy (osomon) wrote : | #116 |
(In reply to Beniamino Galvani from comment #45)
> (In reply to Michael Biebl from comment #42)
> > Will those changes be pulled into the nm-1-10 branch as well?
>
> Let's wait until the branch is merged to master and see if there are
> any issues. If there are no complaints, we can think about backporting
> it to a stable branch.
Now that this has been in master and 1.12 for a few months, is this something you would consider backporting to 1.10 ?
In bugzilla.gnome.org/ #746422, Thomas Haller (thaller-1) wrote : | #117 |
(In reply to Olivier Tilloy from comment #51)
> (In reply to Beniamino Galvani from comment #45)
> > (In reply to Michael Biebl from comment #42)
> > > Will those changes be pulled into the nm-1-10 branch as well?
> >
> > Let's wait until the branch is merged to master and see if there are
> > any issues. If there are no complaints, we can think about backporting
> > it to a stable branch.
>
> Now that this has been in master and 1.12 for a few months, is this
> something you would consider backporting to 1.10 ?
Can you describe who would pick up the upstream patches if they get backported?
Would 1.10 branch be far enough?
Would you then rebase to a minor 1.10.z release, or would you just cherry-pick the patches?
In bugzilla.gnome.org/ #746422, dwmw2 (dwmw2) wrote : | #118 |
I'd like to see them in a Fedora release, which is preferably via a 1.10.z release rather than just cherry-picking the patches.
Also Ubuntu but they basically never fix anything so I suppose that's unlikely. I'd still like it in a 1.10.z release so they don't have any *excuse* though.
In bugzilla.gnome.org/ #746422, Sebastien Bacher (seb128) wrote : | #119 |
@David, I don't think that comment is either constructive or true. We have somewhat limited resources and sometimes struggle a bit to keep up with network-manager bugfixes/updates, but we would backport a fix for this bug to our stable series if one lands in the 1.10 vcs.
In bugzilla.gnome.org/ #746422, Bgalvani (bgalvani) wrote : | #120 |
The backport is quite complicated due to several changes that made the
DNS code in nm-1-12 diverge from nm-1-10. The th/policy-and-mdns
branch [1] alone changed:
src/dns/
src/dns/
src/dns/
so, unless we decide to also backport that branch, the backport of the
fix requires reimplementing several non-trivial patches on top of
1.10, with the high risk of breaking things.
If we also backport the th/policy-and-mdns branch, we would need to
backport a total of 30+ commits, including mDNS support and related
libnm API.
Any opinions? This latter approach seems doable, but it is a big
change for a minor release.
In bugzilla.gnome.org/ #746422, Thomas Haller (thaller-1) wrote : | #121 |
(In reply to Beniamino Galvani from comment #55)
>
> Any opinions? This latter approach seems doable, but it is a big
> change for a minor release.
Backporting th/policy-and-mdns too seems favourable to me. As long as the cherry-picks are trivial, the result will all be code which is on master and has been tested for a while already. I'd rather cherry-pick more patches just to make them apply than reimplement them. YMMV.
Gijs Molenaar (gijzelaar) wrote : | #5 |
Is it possible to upload a fixed package to bionic backports?
fessmage (fessmage) wrote : | #6 |
Same question: will it be backported to Ubuntu 18.04?
Olivier Tilloy (osomon) wrote : | #7 |
See the discussion in the upstream bug report. The fix is in the master branch and needs to be backported to the 1.10 series so that we can pick it up in bionic.
Olivier Tilloy (osomon) wrote : | #8 |
This is fixed in the 1.12 series of network-manager (1.12.0 release), so cosmic and dingo are not affected.
Changed in network-manager (Ubuntu): | |
status: | Confirmed → Fix Released |
assignee: | Olivier Tilloy (osomon) → nobody |
In bugzilla.gnome.org/ #746422, Olivier Tilloy (osomon) wrote : | #122 |
I confirm that the Ubuntu desktop team would be interested in picking up the update if this fix and related branches were to be backported to the 1.10 branch.
Is this something we can reasonably expect?
In bugzilla.gnome.org/ #746422, Thomas Haller (thaller-1) wrote : | #123 |
it's backported to nm-1-10 branch: https:/
In bugzilla.gnome.org/ #746422, Olivier Tilloy (osomon) wrote : | #124 |
Excellent, thank you Thomas!
Olivier Tilloy (osomon) wrote : | #9 |
The fix was backported to the upstream 1.10 series.
In bugzilla.gnome.org/ #746422, Bgalvani (bgalvani) wrote : | #125 |
The fix is included also in the 1.10.14 release.
In bugzilla.gnome.org/ #746422, Olivier Tilloy (osomon) wrote : | #126 |
That's handy. The backport to Ubuntu 18.04 is being tracked by https:/
Sebastien Bacher (seb128) wrote : | #10 |
I've updated the description for the SRU, but if someone has a better description of a test case, that would be welcome.
description: | updated |
Brian Murray (brian-murray) wrote : Please test proposed package | #11 |
Hello dwmw2, or anyone else affected,
Accepted network-manager into bionic-proposed. The package will build now and be available at https:/
Please help us by testing this new package. See https:/
If this package fixes the bug for you, please add a comment to this bug, mentioning the version of the package you tested and change the tag from verification-
Further information regarding the verification process can be found at https:/
N.B. The updated package will be released to -updates after the bug(s) fixed by this package have been verified and the package has been in -proposed for a minimum of 7 days.
Changed in network-manager (Ubuntu Bionic): | |
status: | Confirmed → Fix Committed |
tags: | added: verification-needed verification-needed-bionic |
Olivier Tilloy (osomon) wrote : | #12 |
Please test and share your feedback on this new version here, but refrain from changing the verification-
Steve Langasek (vorlon) wrote : | #13 |
How does this proposed change relate to LP: #1726124? Are users who are currently relying on correct split DNS handling by network-
fessmage (fessmage) wrote : | #14 |
I installed the network-manager 1.10.14-0ubuntu1 package from bionic-proposed, and can confirm that this version fixes the DNS leak: now when a VPN connection is established it gets `DNS Domain: ~.` in systemd-resolve automatically, so there is no longer any need to manually run `systemd-resolve -i tun0 --set-domain=~.`. The fix was verified with dnsleaktest.com.
Olivier Tilloy (osomon) wrote : | #15 |
@Steve (sorry for the late reply): not sure how that relates to bug #1726124, but in my limited understanding of the changes, they shouldn't regress the split-DNS use case.
Some relevant pointers to better understand the fixes and their context:
- https:/
- https:/
- https:/
dwmw2 (dwmw2) wrote : | #16 |
network-
dwmw2 (dwmw2) wrote : | #17 |
Hm, that didn't last long. Now it isn't looking up *anything* in the VPN domains. It's all going to the local DNS server. I don't know what changed.
dwmw2 (dwmw2) wrote : | #18 |
Not sure what happened there. It was looking up *some* names in the $COMPANY.com domain on the VPN, but others not, consistently. I couldn't see a pattern.
I have manually set ipv4.dns-
In bugzilla.gnome.org/ #746422, dwmw2 (dwmw2) wrote : | #127 |
I've finally managed to test the Ubuntu 18.04 backport; apologies for the delay.
At first it seemed to work. I tried an internal domain which isn't the main $COMPANY.com and isn't in the default search domains for the VPN, and it worked after the upgrade.
Some hours later, I was unable to get a Kerberos ticket because various other DNS lookups, even within the main $COMPANY.com domain, were being done on the local network and not the VPN.
I manually set ipv4.dns-
(Sebastian, apologies for the somewhat grumpy comment earlier. I am very happy to be proved wrong.)
In bugzilla.gnome.org/ #746422, dwmw2 (dwmw2) wrote : | #128 |
Gr, and apologies too for spelling your name wrong, Sebastien.
fessmage (fessmage) wrote : | #19 |
@dwmw2, as far as I understand, you should configure DNS through systemd-resolve only. Try removing your edits from `/etc/NetworkMa
```
Link 3 (tun0)
Current Scopes: DNS
LLMNR setting: yes
MulticastDNS setting: no
DNSSEC setting: no
DNSSEC supported: no
DNS Servers: xx.xx.xx.xx
DNS Domain: ~.
Link 2 (enp3s0)
Current Scopes: DNS
LLMNR setting: yes
MulticastDNS setting: no
DNSSEC setting: no
DNSSEC supported: no
DNS Servers: 192.168.1.1
DNS Domain: local.domain
```
Where local.domain was received from the DHCP server on the local network. In that case DNS requests in local.domain are sent to the local DNS server, and all other DNS requests over the VPN. That is the expected behaviour. If you get this but need to redirect DNS requests for some domain through another route (say, requests to local2.domain2, without the VPN), you can do this with the following command: `systemd-resolve -i enp3s0 --set-domain=
In bugzilla.gnome.org/ #746422, Bgalvani (bgalvani) wrote : | #129 |
Hi,
can you please attach the log file generated in this way:
# nmcli general logging level trace
# journalctl -u NetworkManager -f > dns-log.txt &
# killall -HUP NetworkManager
# kill %1
when the DNS is wrongly configured? Thanks.
In bugzilla.gnome.org/ #746422, dwmw2 (dwmw2) wrote : | #130 |
Before I do that, do we agree that I shouldn't have needed to manually set dns-priority and dns-search as described in comment 62, for a VPN connection which is full-tunnel?
In bugzilla.gnome.org/ #746422, Bgalvani (bgalvani) wrote : | #131 |
(In reply to David Woodhouse from comment #65)
> Before I do that, do we agree that I shouldn't have needed to manually set
> dns-priority and dns-search as described in comment 62, for a VPN connection
> which is full-tunnel?
Yes, the full-tunnel VPN should have higher priority.
We have an automated CI test to check this scenario:
https:/
and it seems to be working well.
In bugzilla.gnome.org/ #746422, dwmw2 (dwmw2) wrote : | #132 |
Ack, then I will remove those settings and test again. With dnsmasq, OK? This can never work with plain resolv.conf anyway.
Thanks.
In bugzilla.gnome.org/ #746422, Bgalvani (bgalvani) wrote : | #133 |
(In reply to David Woodhouse from comment #67)
> Ack, then I will remove those settings and test again. With dnsmasq, OK?
Yes, thanks.
Taylor Raack (track16) wrote : | #20 |
I can also confirm that the network-manager package version 1.10.14-0ubuntu1 from bionic-proposed fixes the issue.
dwmw2 (dwmw2) wrote : | #21 |
Is there a 16.04 package? This was a regression there caused by an earlier update.
I have users reporting the same bizarre behaviour I wasn't able to clearly describe before — essentially, DNS being sent out seemingly random interfaces (sometimes VPN, sometimes local). My advice to just install this package *and* manually set dns-priority=
And yes, when other things stop being on fire I need to undo those settings and try to work out what's going wrong. We aren't using systemd-resolve here because historically it also hasn't worked right while dnsmasq did.
Sebastien Bacher (seb128) wrote : | #22 |
@dwmw2, 'This was a regression there caused by an earlier update.' Could you give some details on that? You should probably open another report specifically about that if there was a regression in a xenial update.
dwmw2 (dwmw2) wrote : | #23 |
@seb128 please see "In 16.04 the NetworkManager package used to carry this patch..." in the bug description above.
Mathew Hodson (mhodson) wrote : | #24 |
Looking at the upstream bug, it looks like the fix relies on reworking large parts of the code and wouldn't be easy to SRU to Xenial.
tags: |
added: verification-done-bionic removed: verification-needed verification-needed-bionic |
Steve Langasek (vorlon) wrote : | #25 |
Based on comment #12 I am not sure that this is considered "verification-done" by the relevant developers and there was no comment given when the tags were changed. Resetting.
I also think there should be an affirmative test as part of this SRU that the use case I described in comment #13 has not been regressed.
tags: |
added: verification-needed verification-needed-bionic removed: verification-done-bionic |
description: | updated |
Till Kamppeter (till-kamppeter) wrote : | #26 |
I have now done the test under [Test Case] in the initial description of this bug report.
I have a completely updated (including -proposed) Bionic machine (real iron, a Lenovo X1 Carbon 2nd gen from 2015) with network-manager 1.10.14-0ubuntu1
I have configured the Canonical VPN, both UK and US. I have turned on only the UK one. It is configured to be used only for the internal destinations on both IPv4 and IPv6.
I have rebooted the system in this configuration to make sure that all processes, including the kernel, are using the newest software.
Then I have followed the instructions of the test case.
When running "dig <a Canonical-internal host name>" I immediately get an answer with exit code 0 ("echo $?"), so the request was successful.
When I look into the "tcpdump" terminals, the host name gets queried through both interfaces, but naturally the answer only comes from the DNS of the VPN.
So to my understanding the bug is not fixed, as the private host name also gets sent to the public DNS.
"systemd-resolve --status" lists the VPN DNS first, as link 4, and afterwards the public DNS as link 3.
Till Kamppeter (till-kamppeter) wrote : | #27 |
- systemd_237-3ubuntu10.21_237-3ubuntu10.22.debdiff (6.3 KiB, text/plain)
Good news, the network-manager SRU is not broken or wrong, but an additional SRU, on systemd, is needed to actually fix this bug.
I got a hint from Iain Lane (Laney, thank you very much) to the following fix in systemd upstream:
https:/
and backported it to Bionic's systemd package (debdiff attached). With the network-manager SRU from -proposed plus the patched systemd package installed, the problem goes away. If I repeat the test of [Test Case] (after a reboot), the DNS requests to any of the VPN's domains actually go only to the VPN's DNS.
Changed in systemd (Ubuntu): | |
status: | New → Fix Released |
Changed in systemd (Ubuntu Bionic): | |
status: | New → Triaged |
Changed in systemd (Ubuntu): | |
importance: | Undecided → High |
Changed in systemd (Ubuntu Bionic): | |
importance: | Undecided → High |
Timo Aaltonen (tjaalton) wrote : | #28 |
Hello dwmw2, or anyone else affected,
Accepted network-manager into bionic-proposed. The package will build now and be available at https:/
Please help us by testing this new package. See https:/
If this package fixes the bug for you, please add a comment to this bug, mentioning the version of the package you tested and change the tag from verification-
Further information regarding the verification process can be found at https:/
N.B. The updated package will be released to -updates after the bug(s) fixed by this package have been verified and the package has been in -proposed for a minimum of 7 days.
Łukasz Zemczak (sil2100) wrote : | #29 |
Will be releasing network-manager without the systemd part for now as it poses no threat to the user.
Launchpad Janitor (janitor) wrote : | #30 |
This bug was fixed in the package network-manager - 1.10.14-0ubuntu2
---------------
network-manager (1.10.14-0ubuntu2) bionic; urgency=medium
[ Till Kamppeter ]
* debian/tests/nm: Add gi.require_
and NMClient to avoid stderr output which fails the test.
[ Iain Lane ]
* debian/
too.
network-manager (1.10.14-0ubuntu1) bionic; urgency=medium
* New stable version (LP: #1809132), including:
- Support private keys encrypted with AES-{192,256}-CBC in libnm
(LP: #942856)
- Fix leak of DNS queries to local name servers when connecting to a
full-tunnel VPN (CVE-2018-1000135) (LP: #1754671)
* Dropped patch applied upstream:
- debian/
- debian/
* Refreshed patches:
- debian/
- debian/
- debian/
- debian/
- debian/
-- Till Kamppeter <email address hidden> Fri, 10 May 2019 13:34:00 +0200
Changed in network-manager (Ubuntu Bionic): | |
status: | Fix Committed → Fix Released |
Adam Conrad (adconrad) wrote : | #31 |
The original bug report was about a regression in 16.04 with the dnsmasq integration. While I'm glad this got the ball rolling on the bionic networkd integration, let's not forget that we broke xenial. I've added a xenial task for network-manager accordingly.
Changed in systemd (Ubuntu Xenial): | |
status: | New → Invalid |
dwmw2 (dwmw2) wrote : | #32 |
I am receiving reports that it isn't fixed in 18.04 either. Users are still seeing DNS lookups on the local network, until they manually edit the VPN config to include:
[ipv4]
dns-priority=-1
dns-search=~.;
I thought that wasn't going to be necessary?
Till Kamppeter (till-kamppeter) wrote : | #33 |
dwmw2, did you apply the systemd fix from comment #27? For this bug to be fixed you need BOTH the fixed packages of network-manager and systemd.
dwmw2 (dwmw2) wrote : | #34 |
These systems are using dnsmasq, not systemd-resolved. This was done for historical reasons; I'm not sure of the specific bug which caused that choice.
Till Kamppeter (till-kamppeter) wrote : | #35 |
Unfortunately, the SRU for systemd has not yet been processed. Therefore I have now uploaded this version of systemd to my PPA so that you can already test it and get your problem solved. Please report here whether it actually fixes the bug.
Here is my PPA:
https:/
Please follow this link, follow the instructions in the section "Adding this PPA to your system", then update your system with the command
sudo apt dist-upgrade
This will update only systemd as I did not upload any other package for Bionic to my PPA.
Also make sure you have the network-manager update (1.10.14-0ubuntu2) installed. Reboot and check whether everything works correctly now.
dwmw2 (dwmw2) wrote : | #36 |
We aren't using systemd-resolved for various historical reasons; we are using dnsmasq, which should be expected to work. It isn't, but we have manually added the dns-priority=
Till Kamppeter (till-kamppeter) wrote : | #37 |
dwmw2, the systemd fix was mainly meant for people with the standard configuration, where this fix is actually needed and solves the problem.
You are writing that adding "dns-priority=
dwmw2 (dwmw2) wrote : | #38 |
This is Bionic.
After last week's update to 1.10.14-0ubuntu2 all my VPN users (who are using dnsmasq) reported that DNS stopped working correctly for them while they were on the VPN. Some internal names were looked up correctly, others weren't.
I resolved it for them as follows:
$ sudo nmcli con modify "$COMPANY VPN" ipv4.dns-priority -1 ipv4.dns-search ~.
This matches the observations I made in comment #18 on 2019-02-04.
I believe that with 1.10.6 all $company.com DNS did get sent to the VPN and it was lookups outside the company search domains which were leaked. So it was mostly functional, but insecure. Since 1.10.14 it got worse and many (but not all) of the $company.com lookups are being leaked too. Which is a functional problem.
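The two failure modes described above come down to which links a split-DNS resolver considers for a given name. A toy model of that selection (illustrative Python, not NetworkManager or dnsmasq code; interface and domain names are invented for the scenario in this thread):

```python
def links_queried(name, links):
    """Return the links whose search domains match `name`.

    links: list of (link_name, domains, default_route) tuples, where
    a domain of '~.' marks a link that accepts queries for any name.
    """
    matched = [l for l, domains, _ in links
               if any(d == "~." or name == d or name.endswith("." + d)
                      for d in domains)]
    if matched:
        return matched
    # no search domain matched: fall back to links carrying the default route
    return [l for l, _, default in links if default]

# Scenario from this thread: the wired link has been forced (via a
# dhclient override) to claim the corporate AD domain too.
links = [
    ("tun0", ["dom.company.com"], False),  # VPN, split tunnel
    ("eth0", ["dom.company.com"], True),   # wired, overridden search domain
]

print(links_queried("host.dom.company.com", links))  # ['tun0', 'eth0']: leak
print(links_queried("www.yahoo.com", links))         # ['eth0'] only
```

When the wired link also claims the corporate domain, corporate lookups match both links and can leak to the local nameserver; names matching no search domain go only to the default-route link.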
(For Xenial, my advice to users has been the same since March 2018 when this ticket was first filed: tell apt to hold network-
Steve Langasek (vorlon) wrote : | #39 |
> These systems are using dnsmasq not systemd-resolver.
> This was done for historical reasons; I'm not sure of
> the specific bug which caused that choice.
NetworkManager in Ubuntu 16.04 and earlier defaulted to integrating with dnsmasq. But on 18.04 and later, this integration has been deliberately replaced with integration with systemd-resolved. If you are overriding this default integration to force the use of dnsmasq instead of systemd-resolved, that is likely not a supportable configuration.
In contrast, any bug in the systemd-resolved integration in 18.04 that would force you to work around it by switching to dnsmasq is almost certainly an SRUable bug. If you can find the information about why you switched to dnsmasq, please report this as a bug against systemd (with 'ubuntu-bug systemd') and provide a link to the bug here.
Steve Langasek (vorlon) wrote : | #40 |
Changed in network-manager (Ubuntu Bionic): | |
status: | Fix Released → In Progress |
tags: |
added: verification-failed verification-failed-bionic removed: verification-needed verification-needed-bionic |
dwmw2 (dwmw2) wrote : | #41 |
On the switch to using dnsmasq: that decision predates my tenure so I have limited visibility. I can try to get our IT team to expend effort in moving to systemd-resolved and see what breaks. It may even be completely unnecessary in xenial, and is merely inherited to make our bionic setups less different.
I completely agree with the general observation that they should be filing bugs upstream and not working around them. But if I tell them that, I suspect they're going to point at this security regression in Xenial that still isn't fixed 14 months later, and tell me that working around things locally is much more effective. Right now, I don't know that I can tell them they're wrong.
Let's show them the process works, *then* I'll tell them they have to use it :)
dwmw2 (dwmw2) wrote : | #42 |
Dammit, "completely unnecessary in bionic but inherited from xenial"...
dwmw2 (dwmw2) wrote : | #43 |
On the 1.10.14 regression.... simply making those dns-priority/
Till Kamppeter (till-kamppeter) wrote : | #44 |
Please create the following files (and directories if needed for them):
1. /etc/systemd/
RateLimitInterv
RateLimitBurst=0
2. /etc/NetworkMan
[logging]
level=TRACE
domains=ALL
Then restart journald:
sudo systemctl restart systemd-journald
and NetworkManager:
sudo systemctl restart network-manager
Then you get the full debug log of NetworkManager via
journalctl -u NetworkManager
After all that, reboot and/or connect to your VPN and do
journalctl -u NetworkManager > log.txt
and attach the log.txt file to this bug report. Do not compress the file and do not package it together with other files.
dwmw2 (dwmw2) wrote : | #45 |
Till, you want that for the case where dnsmasq is being used and is misbehaving?
Till Kamppeter (till-kamppeter) wrote : | #46 |
dwmw2, yes, exactly for this case.
dwmw2 (dwmw2) wrote : | #47 |
And (in case any of my colleagues are paying attention and inclined to do it before the next time I get to spend any real time in front of a computer, next week), without the dns-priority and dns-search settings that made it work again after the recent NM update.
Changed in systemd (Ubuntu Cosmic): | |
assignee: | nobody → Dan Streetman (ddstreet) |
Changed in systemd (Ubuntu Bionic): | |
assignee: | nobody → Dan Streetman (ddstreet) |
Changed in systemd (Ubuntu Xenial): | |
assignee: | nobody → Dan Streetman (ddstreet) |
assignee: | Dan Streetman (ddstreet) → nobody |
Changed in systemd (Ubuntu Cosmic): | |
importance: | Undecided → High |
status: | New → In Progress |
Changed in systemd (Ubuntu Bionic): | |
status: | Triaged → In Progress |
tags: | added: ddstreet-next |
Dan Streetman (ddstreet) wrote : | #48 |
Uploaded patched systemd to b/c queues.
Timo Aaltonen (tjaalton) wrote : | #49 |
systemd accepted to bionic/
tags: |
added: verification-needed verification-needed-bionic verification-needed-cosmic removed: verification-failed verification-failed-bionic |
Changed in systemd (Ubuntu Cosmic): | |
status: | In Progress → Fix Committed |
Changed in systemd (Ubuntu Bionic): | |
status: | In Progress → Fix Committed |
Paul Smith (psmith-gnu) wrote : | #50 |
Is this going to be fixed in disco?
Dan Streetman (ddstreet) wrote : | #51 |
> Is this going to be fixed in disco?
speaking for systemd only, the commit needed is a97a3b256cd6c56
https:/
that's included starting at v240, so is already in disco.
Sebastien Bacher (seb128) wrote : | #52 |
bug #1831261 is also described as a potential side effect of this change
Dan Streetman (ddstreet) wrote : | #53 |
@dwmw2 and/or @till-kamppeter, can you verify the systemd upload for this bug for b and c?
Sebastien Bacher (seb128) wrote : | #54 |
We are not going to do cosmic/n-m changes at this point; it is best to upgrade to Disco if you need this issue resolved.
Changed in network-manager (Ubuntu Bionic): | |
assignee: | Olivier Tilloy (osomon) → Till Kamppeter (till-kamppeter) |
Changed in network-manager (Ubuntu Cosmic): | |
status: | New → Won't Fix |
dwmw2 (dwmw2) wrote : | #55 |
@ddstreet We don't use systemd-resolved here. It's fairly trivial to set up a VPN service; the openconnect 'make check' uses ocserv automatically, for example. You shouldn't have difficulty reproducing this locally.
Till Kamppeter (till-kamppeter) wrote : | #56 |
I have checked again on Bionic, making sure that the installed systemd actually comes from the bionic-proposed repository, and that the behavior matches the test case in the initial description of this bug: DNS queries for destinations in the VPN go through the VPN's DNS, and DNS queries for public destinations go through the public DNS.
This works correctly, so the systemd update together with the network-manager update fixes the bug described here. I am therefore marking this bug as verified in Bionic.
tags: |
added: verification-done verification-done-bionic removed: verification-needed verification-needed-bionic verification-needed-cosmic |
Dan Streetman (ddstreet) wrote : | #57 |
This was fixed in systemd 237-3ubuntu10.22 for bionic, and 239-7ubuntu10.14 for cosmic. I missed a "#" in the changelog (sorry) so the tooling didn't automatically mark this bug as fix released.
Changed in systemd (Ubuntu Bionic): | |
status: | Fix Committed → Fix Released |
Changed in systemd (Ubuntu Cosmic): | |
status: | Fix Committed → Fix Released |
tags: | removed: ddstreet-next |
dwmw2 (dwmw2) wrote : | #58 |
Do we have any idea when this will be fixed? Most of my users used to get away with the DNS leakage; it was "only" a security problem, but stuff actually worked. Then the NM and other updates were shipped, we set ipv4.dns-
An ETA for having this properly working again would be very much appreciated.
Sebastien Bacher (seb128) wrote : | #59 |
> Then the NM update was pulled, and new installations aren't working at all, even if we don't set the DNS config as described.
That's weird, do you understand why? The update was deleted, so you should be back to the initial situation; we made no change to the previous package build.
Also, Till is still trying to understand what the reported regressions are about and what we should do about them.
Till Kamppeter (till-kamppeter) wrote : | #60 |
seb128, it seems that dwmw2 NEEDS this SRU; without it he does not get his environment working correctly, and with the SRU he at least gets it working by setting the parameters he mentioned. I asked the reporters of the regressions whether their situation is fixed when using this SRU, the systemd SRU, and dwmw2's settings, but no one answered.
dwmw2 (dwmw2) wrote : | #61 |
> That's weird, do you understand why? The update was deleted so you should be back to initial
> situation, we had no change to the previous package build
Other package changes? Certainly systemd-resolved, although we don't use that (because of a previous VPN DNS leak problem); we use dnsmasq.
My original thought was that it was the VPN config change that we'd made to cope with the new NM, but testing seems to show it isn't that.
Now we have a failure mode which some people had *occasionally* reported before, where even VPN lookups which *must* go to the VPN, for the company domain, do not. This was just occasional before; now it seems to happen all the time. I haven't done a thorough investigation, since just putting the updated NM back has been enough to fix it.
Launchpad Janitor (janitor) wrote : | #62 |
Status changed to 'Confirmed' because the bug affects multiple users.
Changed in network-manager (Ubuntu Xenial): | |
status: | New → Confirmed |
dwmw2 (dwmw2) wrote : | #63 |
Any word on when this CVE will be fixed?
In the meantime I have put the 1.10.14-0ubuntu2 package into an apt repository at http://
In the short term can someone please at least confirm that no new update will be shipped for Bionic which *doesn't* fix this, so that I don't have to play games with keeping a package in that repository "newer" than the latest in bionic-updates?
In bugzilla.gnome.org/ #746422, dwmw2 (dwmw2) wrote : | #134 |
Without the ipv4.dns-priority and ipv4.dns-search settings being manually set, I see the wired device still being used for its own search domain. This is what causes the problem.
It seems to work for me if I set *only* ipv4.dns-
My biggest practical problem here was that we had set ipv4.dns-priority and ipv4.dns-search options in an emergency deployment to our users after Ubuntu shipped the 1.10.14 update to 18.04 to fix this... and then when they *pulled* the update, new installations got the new config but older NM, and that didn't work correctly either.
I'm going to experiment with *just* setting ipv4.dns-
We should be setting ipv4.dns-priority=1 by default for full-tunnel VPNs, but at least if I can find a way to work around it that works for everyone, that'll be an improvement.
dwmw2 (dwmw2) wrote : | #64 |
I have worked out the problem with the new NetworkManager which required me to set ipv4.dns-
The new NM sets ipv4.dns-search=~. automatically for full-tunnel VPNs but it doesn't also set ipv4.dns-
This is wrong; NetworkManager should also set ipv4.dns-
The reason this was consistently problematic for our users is that we have set up /etc/dhcp/
This realisation does give me a way out of my current problem, until a newer version of NM correctly sets the priority automatically. Instead of manually configuring ipv4.dns-
In bugzilla.gnome.org/ #746422, dwmw2 (dwmw2) wrote : | #135 |
No, ipv4.dns-
As it happens, we configure our DHCP client to always override the discovered search domains to our corporate AD domain dom.company.com, so I can work around NM's failure to automatically set ipv4.dns-
In bugzilla.gnome.org/ #746422, Bgalvani (bgalvani) wrote : | #136 |
> My biggest practical problem here was that we had set
> ipv4.dns-priority and ipv4.dns-search options in an emergency
> deployment to our users after Ubuntu shipped the 1.10.14 update to
> 18.04 to fix this... and then when they *pulled* the update, new
> installations got the new config but older NM, and that didn't work
> correctly either.
> I'm going to experiment with *just* setting ipv4.dns-
> not ipv4.dns-search. Do we expect that to work for everyone, whether
> they have the updated package or not?
Yes, this should work the same on 1.10.14 and 1.12+ releases.
> We should be setting ipv4.dns-priority=1 by default for full-tunnel
> VPNs, but at least if I can find a way to work around it that works
> for everyone, that'll be an improvement.
Do you mean -1? Why? This will cause an opposite problem: local
queries leaked to the VPN, i.e. I look up nas.myhome and the query
goes through the VPN.
> No, ipv4.dns-
Which version is 'older'?
In bugzilla.gnome.org/ #746422, dwmw2 (dwmw2) wrote : | #137 |
In my case, 'older' is 1.10.6. At this point I'm just trying to get things working for my users on Ubuntu 18.04. So I'm comparing 1.10.6 (the current Ubuntu package) with 1.10.14 (the update, that was briefly shipped in 18.04 updates and then withdrawn).
When the 1.10.14 update went out, it broke our users because lookups for the dom.company.com AD domain were now always going to the local network (because of our dhclient config override). We immediately rolled out a manual setting of ipv4.dns-
Then the Ubuntu 1.10.14 update was withdrawn, and new installations got 1.10.6 which *doesn't* work with ipv4.dns-
(We don't care about local lookups not working. In fact we'd prefer local lookups *not* to work, which might be part of the reason we've overridden the search domain obtained by DHCP. If we could stop *routing* to the local network from working, we would. It's an open feature request, I believe.)
But *because* I know the search domain on the local physical network will always be just 'dom.company.com', adding that explicitly in the VPN config is enough and I don't need ipv4.dns-
Anyway, we now know that the reason my initial testing of the Ubuntu 1.10.14 backport was unsuccessful, was because of the search domain on the *local* network. I've been able to work around that, and things are now working OK.
Changed in network-manager: | |
status: | Fix Released → Confirmed |
Dariusz Gadomski (dgadomski) wrote : | #138 |
- bionic_network-manager_1.10.6-2ubuntu1.2.debdiff (133.0 KiB, text/plain)
I have backported what was listed as the nm-1-10 fix for this bug in the upstream bugzilla [1].
I have also applied fixes for bug #1825946 and bug #1790098 to it.
After testing this build for some time (available at ppa:dgadomski/
@Till I'd appreciate you having a look at it. Thanks!
Till Kamppeter (till-kamppeter) wrote : | #139 |
Great work, thank you very much!
It will need some testing, of which I can only cover the reproducer in the initial description of this bug report, not any regressions which the first attempt of upstream-
So I would say to take this as a new proposed SRU and also ask the reporters of the regressions whether this version does not cause them.
Mathew Hodson (mhodson) wrote : | #140 |
This fix was first included in upstream 1.12.0, so this was actually fixed in Cosmic with network-manager 1.12.2-0ubuntu3
Changed in network-manager (Ubuntu Cosmic): | |
importance: | Undecided → High |
status: | Won't Fix → Fix Released |
Till Kamppeter (till-kamppeter) wrote : | #141 |
Sorry for the late reply, I was at a conference last week.
I installed the PPA now and tested with the reproducer of the initial posting. This works for me. Also the machine in general seems to work OK with this version of network-manager.
Thank you very much Dariusz for packaging this version.
So now the 1.10.14 should be removed from -proposed (to avoid the need for an epoch), the version from the PPA of Dariusz should get uploaded into -proposed, and then the reporters of the regressions in the 1.10.14 SRU should be informed (by comments in their bug reports) so that the new SRU can be verified.
Could someone from the release team initiate the process by removing 1.10.14 from -proposed? Thanks.
Timo Aaltonen (tjaalton) wrote : Please test proposed package | #142 |
Hello dwmw2, or anyone else affected,
Accepted network-manager into bionic-proposed. The package will build now and be available at https:/
Please help us by testing this new package. See https:/
If this package fixes the bug for you, please add a comment to this bug, mentioning the version of the package you tested and change the tag from verification-
Further information regarding the verification process can be found at https:/
N.B. The updated package will be released to -updates after the bug(s) fixed by this package have been verified and the package has been in -proposed for a minimum of 7 days.
Changed in network-manager (Ubuntu Bionic): | |
status: | In Progress → Fix Committed |
tags: |
added: verification-needed verification-needed-bionic removed: verification-done verification-done-bionic |
Ubuntu SRU Bot (ubuntu-sru-bot) wrote : Autopkgtest regression report (network-manager/1.10.6-2ubuntu1.2) | #143 |
All autopkgtests for the newly accepted network-manager (1.10.6-2ubuntu1.2) for bionic have finished running.
The following regressions have been reported in tests triggered by the package:
network-
systemd/
netplan.
Please visit the excuses page listed below and investigate the failures, proceeding afterwards as per the StableReleaseUp
[1] https:/
Thank you!
Eric Desrochers (slashd) wrote : | #144 |
The netplan.io (arm64) autopkgtest failure (due to timeout) was retried today and it passed:
http://
No more failure reported in pending sru page.
- Eric
Till Kamppeter (till-kamppeter) wrote : | #145 |
Now network-manager is stuck on the following (all autopkgtests passed):
Not touching package due to block request by freeze (contact #ubuntu-release if update is needed)
Which freeze do we currently have on Bionic?
Till Kamppeter (till-kamppeter) wrote : | #146 |
No worries about my previous comment, it is solved.
Dariusz Gadomski (dgadomski) wrote : | #147 |
I have just run the test case from this bug description on the bionic-proposed version 1.10.6-2ubuntu1.2.
tcpdump does not show any leak of the VPN-specific queries. I have not observed other issues in my tests.
tags: |
added: verification-done verification-done-bionic removed: verification-needed verification-needed-bionic |
Launchpad Janitor (janitor) wrote : | #148 |
This bug was fixed in the package network-manager - 1.10.6-2ubuntu1.2
---------------
network-manager (1.10.6-2ubuntu1.2) bionic; urgency=medium
[ Till Kamppeter ]
* debian/tests/nm: Add gi.require_
and NMClient to avoid stderr output which fails the test. (LP: #1825946)
[ Dariusz Gadomski ]
* d/p/fix-
* d/p/lp1790098.
managed. (LP: #1790098)
-- Dariusz Gadomski <email address hidden> Sat, 07 Sep 2019 16:10:59 +0200
Changed in network-manager (Ubuntu Bionic): | |
status: | Fix Committed → Fix Released |
Łukasz Zemczak (sil2100) wrote : Update Released | #149 |
The verification of the Stable Release Update for network-manager has completed successfully and the package is now being released to -updates. Subsequently, the Ubuntu Stable Release Updates Team is being unsubscribed and will not receive messages about this bug report. In the event that you encounter a regression using the package from -updates please report a new bug using ubuntu-bug and tag the bug report regression-update so we can easily find any regressions.
Joe Hohertz (jhohertz) wrote : | #150 |
1.10.6-2ubuntu1.2 has caused a regression in functionality.
Anyone using a "split" VPN, where there is no default route, AND wishing to have the DNS servers supplied by the VPN honoured via the ipv4.dns-priority parameter, will find this broken. This is a bit of a sore point considering the hoops one has to jump through to make this work at all.
Reverting to previous version restores functionality.
In bugzilla.gnome.org/ #746422, Andre Klapper (a9016009) wrote : | #151 |
bugzilla.gnome.org is being shut down in favor of a GitLab instance.
We are closing all old bug reports and feature requests in GNOME Bugzilla which have not seen updates for a long time.
If you still use NetworkManager and if you still see this bug / want this feature in a recent and supported version of NetworkManager, then please feel free to report it at https:/
Thank you for creating this report and we are sorry it could not be implemented (workforce and time is unfortunately limited).
Changed in network-manager: | |
status: | Confirmed → Expired |
Changed in network-manager (Ubuntu Xenial): | |
status: | Confirmed → Won't Fix |
Changed in systemd (Ubuntu Xenial): | |
status: | Invalid → Won't Fix |
If the VPN routes all traffic (e.g., its ipv4.never-default=false), that usually indicates that the VPN's nameservers should be used instead of the parent interface's nameservers, since the parent interface's nameservers would be accessed over the VPN anyway (since it's routing all traffic).
But with dns=dnsmasq, the dnsmasq plugin always does split DNS regardless of the never-default value of the VPN's IPv4 config:
/* Use split DNS for VPN configs */
for (iter = (GSList *) vpn_configs; iter; iter = g_slist_next (iter)) {
    if (NM_IS_IP4_CONFIG (iter->data))
        add_ip4_config (conf, NM_IP4_CONFIG (iter->data), TRUE);
    else if (NM_IS_IP6_CONFIG (iter->data))
        add_ip6_config (conf, NM_IP6_CONFIG (iter->data), TRUE);
}
Instead, I think that each config should be added with split DNS only if ipv4.never-default=true for that config. That would ensure that when the VPN is routing all traffic, split DNS is not used, but when the VPN is not routing all traffic, split DNS is used.
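The proposed change can be sketched (in Python rather than the C above, with add_ip4_config() as a stand-in for the real dnsmasq-plugin helper) as deriving the split flag from each VPN's never-default setting instead of hard-coding TRUE:

```python
calls = []

def add_ip4_config(conf, config, split):
    # stand-in for the dnsmasq plugin helper; just records its arguments
    calls.append((config["name"], split))

def update_dns(conf, vpn_configs):
    for cfg in vpn_configs:
        # proposed: split DNS only when the VPN does NOT take the default
        # route (ipv4.never-default=true), i.e. a split-tunnel VPN
        add_ip4_config(conf, cfg, cfg["never_default"])

update_dns(None, [
    {"name": "corp-full-tunnel", "never_default": False},
    {"name": "home-split-tunnel", "never_default": True},
])
print(calls)  # [('corp-full-tunnel', False), ('home-split-tunnel', True)]
```

The full-tunnel VPN is added with split=False, so its nameservers take over from the parent link, while the split-tunnel VPN keeps per-domain split DNS.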
If the user really does want to use the parent interface's nameservers even though they will be contacted over the VPN, they can either add custom dnsmasq options to /etc/NetworkManager/dnsmasq.d or enter them manually for the connection.
ISTR that the behavior I'm suggesting was always intended, but apparently we changed that behavior a long time ago and possibly didn't realize it?