k8s charm sees no egress-subnets on its own relation

Bug #1830252 reported by Stuart Bishop
Affects: Canonical Juju
Status: Fix Released
Importance: High
Assigned to: Ian Booth

Bug Description

A k8s charm cannot determine its egress subnets by looking at its own end of the relation (tested with a cross-model relation).

If it checks with relation-get, it sees no juju-managed relation data such as egress-subnets or private-address. However, it will see any relation data that it set itself with relation-set.

If the charm checks the remote end of the relation with relation-get, it sees all expected relation data.
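
For illustration only, a minimal charm-helpers sketch of this check (not part of the original report), run from a relation hook of the k8s charm:

    from charmhelpers.core import hookenv

    # Our own end of the relation: only data we set ourselves with
    # relation-set is visible; no juju-managed keys such as
    # egress-subnets or private-address.
    local_data = hookenv.relation_get(unit=hookenv.local_unit())
    hookenv.log('local relation data: %s' % local_data)

    # The remote end shows all the expected juju-managed keys.
    for unit in hookenv.related_units():
        remote_data = hookenv.relation_get(unit=unit)
        hookenv.log('%s relation data: %s' % (unit, remote_data))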

This breaks interface:pgsql. The provider publishes the egress addresses that have been granted access to the database; it sees the client's egress address just fine and publishes it. However, the client cannot determine its own egress address, so it cannot tell whether it has been granted access or not.
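
As a hedged sketch of what the client side needs to do (the 'allowed-subnets' key name is an assumption for illustration, not taken from the report), the check amounts to comparing the unit's own egress subnets with what the provider has published:

    from charmhelpers.core import hookenv

    def access_granted(rid, provider_unit):
        # Our own egress subnets, as juju publishes them on our side of the
        # relation; this is exactly what is missing in this bug.
        ours = set(hookenv.egress_subnets(rid=rid, unit=hookenv.local_unit()))
        # 'allowed-subnets' is an assumed key name for what the provider
        # publishes once access has been granted.
        raw = hookenv.relation_get('allowed-subnets', unit=provider_unit, rid=rid) or ''
        allowed = set(s for s in raw.split(',') if s)
        return bool(ours) and ours <= allowed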

Tags: k8s
Changed in juju:
status: New → Triaged
importance: Undecided → High
assignee: nobody → Ian Booth (wallyworld)
Revision history for this message
Stuart Bishop (stub) wrote :

The charm-helpers methods charmhelpers.core.hookenv.ingress_address() and charmhelpers.core.hookenv.egress_subnets() do not work in k8s charms.
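
For reference, these are called roughly as below (a sketch, not taken from the report); in a k8s charm both come back empty because the juju-managed keys are missing from the local end of the relation:

    from charmhelpers.core import hookenv

    rid = hookenv.relation_id()      # current relation, from the hook context
    unit = hookenv.local_unit()

    # Both rely on juju-managed keys (ingress-address / private-address /
    # egress-subnets) being present in our own relation data.
    addr = hookenv.ingress_address(rid=rid, unit=unit)
    subnets = hookenv.egress_subnets(rid=rid, unit=unit)
    hookenv.log('ingress-address=%s egress-subnets=%s' % (addr, subnets))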

Ian Booth (wallyworld)
tags: added: k8s
Revision history for this message
Stuart Bishop (stub) wrote :

The issue is that IP address information is not published on the relation until after the pod is spun up. Charms used to be able to rely on this information being present on the relation during relation-joined and later hooks, but this is no longer the case with k8s charms.

Per discussion with @wallyworld, it may be possible to publish the egress information before the pod is spun up. This should be enough to allow the PostgreSQL charm and interface:pgsql to complete the relation handshaking.

Ian Booth (wallyworld)
Changed in juju:
assignee: Ian Booth (wallyworld) → Yang Kelvin Liu (kelvin.liu)
milestone: none → 2.6.3
Revision history for this message
Yang Kelvin Liu (kelvin.liu) wrote :

https://github.com/juju/juju/pull/10250 will be released in 2.6.3 to fix this bug.

Changed in juju:
status: Triaged → In Progress
status: In Progress → Fix Committed
Revision history for this message
Ian Booth (wallyworld) wrote :

The fix we tried for this doesn't work because the EnterScope() operation happens before the relation-joined hook is run, which means that for any charm waiting for that hook before it spins up its pods, the address information will not be available.

The only thing we appear to be able to do is have juju react to k8s service address changes and run the relation-changed hook, giving the charms a chance to call network-get and then update the relation data.
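
A hedged sketch of what that looks like from the charm's side (the 'db' endpoint name and the chosen relation keys are illustrative, not from the report):

    from charmhelpers.core import hookenv

    def publish_address_info(endpoint='db'):
        # Ask juju for the current address info for this endpoint; on k8s
        # this reflects the service address once it is known.
        for rid in hookenv.relation_ids(endpoint):
            info = hookenv.network_get(endpoint, relation_id=rid)
            ingress = (info.get('ingress-addresses') or [None])[0]
            egress = info.get('egress-subnets') or []
            if not ingress:
                continue  # nothing known yet; a later hook will retry
            hookenv.relation_set(relation_id=rid, relation_settings={
                'ingress-address': ingress,
                'egress-subnets': ','.join(egress),
            })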

Changed in juju:
status: Fix Committed → Triaged
Ian Booth (wallyworld)
Changed in juju:
milestone: 2.6.3 → 2.6.4
Revision history for this message
Ian Booth (wallyworld) wrote :

We'll make some tweaks with respect to what addresses are returned (more below), but the basic premise is:

- when a unit enters scope during relation-joined, not all address info may be available (eg no pods started yet). The relation data will be populated with what's known at the time.

- juju watches the k8s service address and will fire a config-changed hook; network-get can be used to fetch the address info and any necessary relation data can be updated.

Juju has been including the pod addresses in the ingress address list. From now on, it will just use the k8s service address. Further, for a cross-model relation, if the k8s service type is load balancer or external IP, Juju will poll for these addresses to allow the load balancer etc to start up (it does something similar for IAAS CMR public addresses).

Until now, Juju has been watching pod address changes as the trigger to fire config-changed. This will now be service address changes instead.

k8s charms cannot assume that all address info is available in the relation data bag or via network-get at the time of relation-joined. They need to handle the fact that address info may only become known later, when a config-changed hook is fired.
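
In charm terms, a pattern like the following (a sketch under assumed names, not from the report) handles that: call one idempotent update function from both relation-joined and config-changed, and treat missing addresses as "not yet":

    from charmhelpers.core import hookenv

    def address_ready(endpoint='db'):        # hypothetical endpoint name
        # True once juju can report an ingress address for this endpoint,
        # i.e. once the k8s service address is known.
        info = hookenv.network_get(endpoint)
        return bool(info.get('ingress-addresses'))

    # Called from relation-joined AND config-changed: the address may only
    # become available in a later config-changed hook.
    def update_relations(endpoint='db'):
        if not address_ready(endpoint):
            hookenv.log('address info not available yet; waiting for config-changed')
            return
        addr = hookenv.network_get(endpoint)['ingress-addresses'][0]
        for rid in hookenv.relation_ids(endpoint):
            hookenv.relation_set(relation_id=rid,
                                 relation_settings={'ingress-address': addr})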

Ian Booth (wallyworld)
Changed in juju:
assignee: Yang Kelvin Liu (kelvin.liu) → Ian Booth (wallyworld)
status: Triaged → Fix Committed
Changed in juju:
status: Fix Committed → Fix Released
Revision history for this message
Stuart Bishop (stub) wrote :

Per https://code.launchpad.net/~davigar15/interface-prometheus/+git/interface-prometheus/+merge/374282 , it seems that using network-get is now working, but the information on the relation is not being updated. charm-helpers is currently pulling the data from the relation, and hookenv.ingress_address() is failing.
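
As a hedged workaround sketch for charm code hitting this (names illustrative): prefer the juju-managed key on the relation, but fall back to network-get for the local unit's own address when the relation data has not been updated:

    from charmhelpers.core import hookenv

    def own_ingress_address(endpoint, rid):
        # Preferred: the juju-managed key on our own relation data.
        addr = hookenv.ingress_address(rid=rid, unit=hookenv.local_unit())
        if addr:
            return addr
        # Fallback: ask network-get directly, which does work per this report.
        info = hookenv.network_get(endpoint, relation_id=rid)
        addrs = info.get('ingress-addresses') or []
        return addrs[0] if addrs else None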

Stuart Bishop (stub)
Changed in juju:
status: Fix Released → New
Revision history for this message
Ian Booth (wallyworld) wrote :

I raised bug 1848628 to track this related issue. The core fix to ensure network-get works appears sound, and I think the egress subnets are also ok, so let's leave this as Fix Committed and continue the work in the new bug.

Changed in juju:
status: New → Fix Released