Update bindings on deployed applications

Bug #1796653 reported by Alvaro Uria
This bug affects 11 people
Affects: Canonical Juju
Status: Fix Released
Importance: High
Assigned to: Unassigned

Bug Description

Environment is Juju 2.4.3 (upgraded from Juju1), MAAS 1.9.5+bzr4599-0ubuntu1~14.04, and OpenStack cloud:trusty-mitaka.

[Goal]
Be able to modify the network space bound to a Juju interface without needing to redeploy an application.

[Issue]
An application deployed on a host with multiple addresses (privnet, pubnet, ...) ends up with a "wrong" ingress-address (and private-address). As a result, "network-get <binding_name>" returns the "pubnet" address when we want it to return the "privnet" one. "endpoint-bindings" shows that all bindings are "", and we want a certain binding (or all of them) to be on "privnet-space" (i.e. using the private network).

nrpe:monitors provides a public-network address to nagios:monitors, but the nagios container only has a leg on the private network, so routing fails.
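A quick way to see what a unit will advertise for a given binding is to run the network-get hook tool through juju run. A minimal sketch, assuming a nagios unit named nagios/0 and the monitors endpoint mentioned above (the --ingress-address flag is available on recent 2.x versions):

  juju run --unit nagios/0 'network-get monitors'
  juju run --unit nagios/0 'network-get monitors --ingress-address'

The first form prints the full binding information (bind-addresses, ingress-addresses, egress-subnets), which makes it easy to spot when the public address is being advertised instead of the private one.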

[Use cases]
1) On upgraded Juju environments (from Juju 1 to Juju 2), endpoint bindings are "", and we want to set them (the MAAS provider is still 1.9)

2) In general, modify a binding from space-public to space-private (or any other space)

[Expectation]
1) Be able to reconfigure a binding without the need to redeploy an application
2) Once a binding is reconfigured, the config-changed hook should be triggered

Revision history for this message
Richard Harding (rharding) wrote :

Thanks, this is definitely on our roadmap to-do list, and field requests like this are useful for us during planning sprints.

Changed in juju:
status: New → Triaged
importance: Undecided → Wishlist
Revision history for this message
Ryan Beisner (1chb1n) wrote :

We have a scenario which has surfaced as a release blocker for OpenStack Charms, with this bug at its core.

Here's the story and why this is blocking us.

1. Charms nova-cloud-controller and memcached already exist in a deployment.

2. At charm revision 18.11, the charms are not related to one another, and never had a reason to be.

3. At charm revision 19.04, the charms are required to be related to one another in order to enable token caching for the nova console in HA scenarios.

4. If network space bindings are not in play, the charm upgrade is successful, and the applications function as expected.

5. If network space bindings are necessary, the charm upgrade succeeds from the Juju perspective, but the user has no way to declare bindings on the relation, and the workload fails due to predictable connectivity issues.

6. We can't release OpenStack Charms 19.04 without addressing this.

Options?:

6a. Expect users to remove and re-add the application as a workaround (a rough sketch follows below), though it is a bit heavy-handed.

6b. Revert the new charm features so that the rest of the product can ship. I don't think this is a viable path.

6c. Open to other ideas.
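For option 6a, the remove-and-redeploy workaround might look roughly like the following; internal-space is a placeholder space name, the cache endpoint should be checked against the memcached charm's metadata.yaml, and any memcached config and other relations would need to be re-applied by hand:

  juju remove-application memcached
  juju deploy memcached --bind "cache=internal-space"
  juju add-relation nova-cloud-controller memcached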

Revision history for this message
Richard Harding (rharding) wrote :

Thanks for the updated information.

In a typical deployment, if you bind endpoints and deploy into containers, you end up needing to clarify any ambiguous endpoints. So in this case, these are new endpoints on the charms that were not bound at deploy time?

Can the memcached charm in this case be deployed alongside the previous install (e.g. in a new container?) and bound properly during this new deploy to align with the nova-cloud-controller endpoints?

Given that this is a caching layer, is a redeploy in a bound (complex network) case a burden beyond upgrade steps that already require manual scripting? I'd like to fully understand the impact of that path.

Revision history for this message
Richard Harding (rharding) wrote :

After a conversation, there are two potential use cases/issues at play here.

1) Mutability of an existing binding on a deployed unit. If the network changes, or if requirements change, having a controlled path to updating bindings (as long as Juju determines all units are capable of getting to the updated spaces/etc) would be useful.

2) There's also the use case of a charm adding a new endpoint that requires a binding to a space in a deployment. This again needs much the same sanity checks as #1, but would mean that during charm upgrade you would need to be able to specify updated binding specs for this new endpoint that the charm author has added.

Revision history for this message
John A Meinel (jameinel) wrote : Re: [Bug 1796653] Re: Update bindings on deployed applications

Note that if you have a default binding ("": foo), I believe all new endpoints currently inherit the default binding. So often it isn't that there is *no* binding, but it may well be the *wrong* binding.

Supporting changes to endpoint bindings that don't alter the set of spaces the application is bound to as a whole is likely pretty easy. We've just avoided it to avoid worrying about someone changing bindings in a way the existing machines can't satisfy (and about figuring out which events need to fire to signal that the bindings have changed).
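For context, the deploy-time way to express a default binding plus per-endpoint overrides is the --bind argument, where a bare space name sets the default. A sketch with made-up space names:

  juju deploy nrpe --bind "internal-space monitors=public-space"

Everything not listed explicitly is bound to internal-space, which is the inheritance behaviour described above for endpoints added later by a charm upgrade.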


Revision history for this message
Ryan Beisner (1chb1n) wrote :

This will become a more urgent issue: many production clouds today leverage network space bindings, and operators will need to be able to transform applications in those clouds, including adding new relations on previously-undeclared bindings.

Felipe Reyes (freyes)
tags: added: sts
tags: added: cpe-onsite
tags: added: cdo-qa foundations-engine
Revision history for this message
Alvaro Uria (aluria) wrote :

This bug is also related to bug 1827397. After an upgrade to OS Charms 19.04, nova-cloud-controller could not connect to the memcached application. "Connection refused" errors were not reported by charm-nova-cloud-controller or in the Nova API logs. In the end, we had to redeploy the memcached application with a matching binding, but it would have been less disruptive to update the "cache" binding.
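One quick check for this kind of mismatch is to compare the address memcached advertises for its cache endpoint with the addresses it actually listens on. A hedged sketch, assuming a unit named memcached/0 (memcached listens on TCP 11211 by default):

  juju run --unit memcached/0 'network-get cache --ingress-address'
  juju run --unit memcached/0 'ss -ltn | grep 11211'

If the advertised ingress address is not reachable from nova-cloud-controller's network (or memcached is not listening on it), the relation will come up but connections will fail, as described above.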

Revision history for this message
Alvaro Uria (aluria) wrote :

As a follow-up to comment #8, we also had gnocchi and designate related to memcached, but their bindings were different from nova-cloud-controller's. That sounds like a mistake, but the only ways forward to fix the problem were:
1) redeploy gnocchi and designate to match the nova-cloud-controller binding, or
2) deploy a new memcached application (i.e. memcached-internal) to work with gnocchi and designate (while leaving the existing memcached application related to n-c-c)
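Option 2 might look roughly like the following; memcached-internal is the name used above, internal-space is a placeholder, and Juju may ask for explicit endpoints if the relation is ambiguous:

  juju deploy memcached memcached-internal --bind "cache=internal-space"
  juju add-relation gnocchi memcached-internal
  juju add-relation designate memcached-internal

The existing memcached application and its relation to nova-cloud-controller are left untouched.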

Changed in juju:
milestone: none → 2.7-beta1
importance: Wishlist → High
Revision history for this message
Jorge Niedbalski (niedbalski) wrote :

For further reference, the mentioned issue with the memcached binding has been documented here: https://docs.openstack.org/charm-guide/latest/1904.html#adding-nova-cloud-controller-memcached-relation

Revision history for this message
Xav Paice (xavpaice) wrote :

Note, it's possible to edit the constraints on a deployed application - e.g. juju set-constraints cinder spaces=ceph-access-space. This doesn't affect the bindings at all, but can at least mean that the unit has an interface in the needed space.
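Note that constraints are applied when a machine is provisioned, so existing units will not pick up a new interface from this alone. A hedged sketch of combining it with unit replacement (unit numbers are illustrative, and replacing units of stateful applications needs care):

  juju set-constraints cinder spaces=ceph-access-space
  juju add-unit cinder
  juju remove-unit cinder/0

New units land on machines with an address in ceph-access-space, but as noted the endpoint bindings themselves remain unchanged.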

Revision history for this message
Alvaro Uria (aluria) wrote :

We have hit this bug with a different customer, where barbican-vault didn't have a correct binding for the secrets-storage relation to vault. When sending the secrets_relation_request to vault, it used the incorrect IP (i.e. an IP it does not use to communicate with the vault unit). The barbican-vault charm, in turn, assigned the incorrect IP to the CIDR bounds for the role and secret ID.

It seems that the only option is again redeploying barbican-vault, but it would be handy to allow updating bindings so hooks can use the new information.
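For troubleshooting cases like this, the addresses and egress subnets a unit will advertise for that endpoint can be inspected before the charm acts on them. A minimal sketch, assuming a unit named barbican-vault/0 and the secrets-storage endpoint mentioned above:

  juju run --unit barbican-vault/0 'network-get secrets-storage'

The addresses reported there are what the charm picks up when it requests the role, so if they sit on the wrong network it is the binding, not the charm, that needs fixing.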

Revision history for this message
Richard Harding (rharding) wrote :

Thanks for the update. We're actively working on this and have it going into the 2.7 release. We're glad that this will be useful/helpful.

Changed in juju:
milestone: 2.7-beta1 → 2.7-rc1
Revision history for this message
Richard Harding (rharding) wrote :

This is in 2.7-rc1, so marking it Fix Committed as part of the networking improvements in the 2.7 series.
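For anyone landing here later: the 2.7 networking work adds a command for changing bindings on a deployed application. The exact name and syntax should be confirmed with juju help on a 2.7+ client; this sketch assumes it is exposed as juju bind, with internal-space as a placeholder:

  juju bind memcached cache=internal-space

Per the expectations in the bug description, affected units should then see hooks fire (e.g. config-changed) so charms can re-read network-get and reconfigure themselves.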

Changed in juju:
status: Triaged → Fix Committed
Changed in juju:
status: Fix Committed → Fix Released