ERROR charm "octavia-diskimage-retrofit" does not support jammy, force not used

Bug #2008248 reported by Diko Parvanov
This bug affects 4 people
Affects: Canonical Juju | Status: Triaged | Importance: Undecided | Assigned to: Joseph Phillips
Affects: charm-octavia-diskimage-retrofit | Status: Invalid | Importance: Undecided | Assigned to: Unassigned

Bug Description

While doing a series upgrade from focal to jammy on yoga/stable charm revision 101, I got the following error:

ERROR charm "octavia-diskimage-retrofit" does not support jammy, force not used

Then the principal glance-simplestreams-sync and all subordinates, including octavia-diskimage-retrofit, went into error state with:

unit-glance-simplestreams-sync-0: 12:53:32 ERROR juju.worker.uniter.operation error updating workload status before pre-series-upgrade hook: upgrade series status "prepare running"
unit-glance-simplestreams-sync-0: 12:53:32 ERROR juju.worker.uniter resolver loop error: executing operation "run pre-series-upgrade hook" for glance-simplestreams-sync/0: upgrade series status "prepare running"
unit-glance-simplestreams-sync-0: 12:53:32 INFO juju.worker.uniter unit "glance-simplestreams-sync/0" shutting down: executing operation "run pre-series-upgrade hook" for glance-simplestreams-sync/0: upgrade series status "prepare running"
unit-glance-simplestreams-sync-0: 12:53:32 ERROR juju.worker.dependency "uniter" manifold worker returned unexpected error: executing operation "run pre-series-upgrade hook" for glance-simplestreams-sync/0: upgrade series status "prepare running"

Tags: bseng-1081
Revision history for this message
Felipe Reyes (freyes) wrote (last edit ):

> Doing series upgrade from focal to jammy on yoga/stable charm revision 101

Could you share the set of steps you are using? Effectively, revision 101 doesn't support jammy; it's a focal-only charm. However, the yoga/stable track also has the charm's revision 102 [0], which is jammy-only. I wonder if juju should be doing something different now that charms may support a single series per revision.

[0] $ charmcraft status octavia-diskimage-retrofit | grep stable
yoga ubuntu 20.04 (amd64) stable 1426553 101 core18 (r0), octavia-diskimage-retrofit (r0), snapd (r0)
          ubuntu 20.04 (arm64) stable 1426553 107 core18 (r0), octavia-diskimage-retrofit (r0), snapd (r0)
          ubuntu 20.04 (ppc64el) stable 1426553 104 core18 (r0), octavia-diskimage-retrofit (r0), snapd (r0)
          ubuntu 20.04 (s390x) stable 1426553 103 core18 (r0), octavia-diskimage-retrofit (r0), snapd (r0)
          ubuntu 22.04 (amd64) stable 1426553 102 core18 (r0), octavia-diskimage-retrofit (r0), snapd (r0)
          ubuntu 22.04 (arm64) stable 1426553 108 core18 (r0), octavia-diskimage-retrofit (r0), snapd (r0)
          ubuntu 22.04 (ppc64el) stable 1426553 105 core18 (r0), octavia-diskimage-retrofit (r0), snapd (r0)
          ubuntu 22.04 (s390x) stable 1426553 106 core18 (r0), octavia-diskimage-retrofit (r0), snapd (r0)
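The charmcraft output above can be read as a per-base revision map. A minimal shell sketch (amd64 rows only, values copied from the table above) of how a channel plus a base resolves to a single revision, which is why a focal application only ever sees revision 101 on yoga/stable:

```shell
# Per-base revision map for yoga/stable (amd64 rows from the charmcraft
# status output above); channel resolution picks the row matching the base.
table='20.04 101
22.04 102'
rev_for_base() { echo "$table" | awk -v b="$1" '$1==b {print $2}'; }
rev_for_base 20.04   # focal application -> revision 101
rev_for_base 22.04   # jammy application -> revision 102
```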

Revision history for this message
Juan M. Tirado (tiradojm) wrote :

I will set this as incomplete. We need the juju version and the steps to reproduce, as mentioned by freyes.

Changed in juju:
status: New → Incomplete
Revision history for this message
Billy Olsen (billy-olsen) wrote :

I'm marking the charm-octavia-diskimage-retrofit task as incomplete as well. I expect that this is a Juju issue with the binary revisions and not a charm issue, per the commentary in comment #1.

Changed in charm-octavia-diskimage-retrofit:
status: New → Incomplete
Revision history for this message
Diko Parvanov (dparv) wrote :

Upgrading to yoga/stable gives me this:

$ juju upgrade-charm octavia-diskimage-retrofit --switch ch:octavia-diskimage-retrofit --channel yoga/stable
Added charm-hub charm "octavia-diskimage-retrofit", revision 101 in channel yoga/stable, to the model
Leaving endpoints in "alpha": certificates, identity-credentials, juju-info

So the charm gets to revision 101, not 102 (or 108, which looks to be the latest for yoga/stable as of now).

This can be reproduced easily, e.g.:

juju deploy cs:octavia --series focal
juju deploy cs:octavia-diskimage-retrofit --series focal
juju add-relation octavia-diskimage-retrofit octavia

juju upgrade-charm octavia-diskimage-retrofit --switch ch:octavia-diskimage-retrofit --channel yoga/stable
juju upgrade-charm octavia --switch ch:octavia --channel yoga/stable
juju upgrade-series 49 prepare jammy
WARNING: This command will mark machine "49" as being upgraded to series "jammy".
This operation cannot be reverted or canceled once started.
Units running on the machine will also be upgraded. These units include:
  - octavia-diskimage-retrofit/0
  - octavia/0

Leadership for the following applications will be pinned and not
subject to change until the "complete" command is run:
  - octavia
  - octavia-diskimage-retrofit

Continue [y/N]?y
ERROR charm "octavia-diskimage-retrofit" does not support jammy, force not used

I can't get to a revision later than 101:
juju upgrade-charm octavia-diskimage-retrofit --channel yoga/stable
charm "octavia-diskimage-retrofit": already up-to-date

Changed in juju:
status: Incomplete → New
Changed in charm-octavia-diskimage-retrofit:
status: Incomplete → New
Revision history for this message
Diko Parvanov (dparv) wrote :

I have a workaround for this bug now:

juju upgrade-charm octavia-diskimage-retrofit --switch ch:amd64/focal/octavia-diskimage-retrofit-108 --channel yoga/stable

This is how I get to revision 108 (or anything 102+ that supports jammy).

I think this might be either a juju or a charmhub bug. Not sure which.
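The workaround pins an exact build by spelling out the fully qualified charm URL instead of relying on channel resolution. A minimal sketch of how that URL is assembled, using the same amd64/focal/108 values as the command above:

```shell
# The --switch workaround bypasses channel resolution by naming an exact
# revision: ch:<arch>/<series>/<charm>-<revision>.
pinned_url() { printf 'ch:%s/%s/%s-%s\n' "$1" "$2" "$3" "$4"; }
pinned_url amd64 focal octavia-diskimage-retrofit 108
# -> ch:amd64/focal/octavia-diskimage-retrofit-108
```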

Changed in charm-octavia-diskimage-retrofit:
status: New → Invalid
Revision history for this message
Heather Lanigan (hmlanigan) wrote (last edit ):

Given what I found and listed below, juju is doing what it's been told to

A few notes.

1. If I ask charmhub for bases supported by octavia-diskimage-retrofit in the yoga/stable channel, I get 22.04 and 18.04.

curl -s https://api.snapcraft.io/v2/charms/refresh -H 'Content-type: application/json' -d '{"context":[],"actions":[{"action":"install","instance-key":"28243d18-076a-4711-8b26-09c4f83e7eea","name":"octavia-diskimage-retrofit","channel":"yoga/stable","base":{"architecture":"amd64","name":"NA","channel":"NA"}}],"fields":["bases","id","metadata-yaml","name","revision","version"]}' | jq .

2. If I ask charmhub for octavia-diskimage-retrofit on channel yoga/stable with ubuntu@22.04, I get revision 102.

curl -s https://api.snapcraft.io/v2/charms/refresh -H 'Content-type: application/json' -d '{"context":[],"actions":[{"action":"install","instance-key":"28243d18-076a-4711-8b26-09c4f83e7eea","name":"octavia-diskimage-retrofit","channel":"yoga/stable","base":{"architecture":"amd64","name":"ubuntu","channel":"22.04"}}],"fields":["bases","id","metadata-yaml","name","revision","version"]}' | jq .

3. If I ask charmhub for octavia-diskimage-retrofit on channel yoga/stable with ubuntu@20.04, I get revision 101.

curl -s https://api.snapcraft.io/v2/charms/refresh -H 'Content-type: application/json' -d '{"context":[],"actions":[{"action":"install","instance-key":"28243d18-076a-4711-8b26-09c4f83e7eea","name":"octavia-diskimage-retrofit","channel":"yoga/stable","base":{"architecture":"amd64","name":"ubuntu","channel":"20.04"}}],"fields":["bases","id","metadata-yaml","name","revision","version"]}' | jq .

4. Asking for a specific revision by doing `--switch ch:amd64/focal/octavia-diskimage-retrofit-108` overrides the use of the channel supplied. Juju will supply the charm at the specified revision. A charm revision may be in different channels. We ask for a channel to know which channel to refresh from in the future.
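The three refresh queries above are identical except for the base `channel` field, and that field alone decides whether Charmhub answers with revision 101 (20.04) or 102 (22.04). A sketch that rebuilds the request body (trimmed to the fields that matter; the instance-key and fields list from the curl commands are omitted) to make the single varying field obvious:

```shell
# Builds the Charmhub refresh payload for octavia-diskimage-retrofit on
# yoga/stable; only the base channel ("$1") differs between the requests.
refresh_payload() {
  printf '{"actions":[{"action":"install","name":"octavia-diskimage-retrofit","channel":"yoga/stable","base":{"architecture":"amd64","name":"ubuntu","channel":"%s"}}]}\n' "$1"
}
refresh_payload 20.04 | grep -o '"channel":"20.04"'   # the only field that changes
```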

Revision history for this message
Diko Parvanov (dparv) wrote (last edit ):

This contradicts what's on charmhub (screenshot attached)

The following command should give you revision 108, not 102, right?

juju deploy octavia-diskimage-retrofit --channel yoga/stable
Located charm "octavia-diskimage-retrofit" in charm-hub, revision 102
Deploying "octavia-diskimage-retrofit" from charm-hub charm "octavia-diskimage-retrofit", revision 102 in channel yoga/stable on jammy

+ For the OpenStack team: focal/yoga is a valid option for a cloud, so the yoga/stable track should support focal as well; otherwise you can't juju upgrade-series from focal/yoga to jammy/yoga, for example.

Changed in charm-octavia-diskimage-retrofit:
status: Invalid → New
Revision history for this message
Joseph Phillips (manadart) wrote :

Machine base upgrades are still not completely congruent with the change to Charmhub.

If you upgrade the machine, then use the set-application-base to change octavia-diskimage-retrofit to ubuntu@22.04, I suspect the charm upgrade will work.

Can you confirm?

Will set this as incomplete pending the new info.

Changed in juju:
status: New → Incomplete
Andrea Ieri (aieri)
tags: added: bseng-1081
Revision history for this message
JamesLin (jneo8) wrote (last edit ):

> Machine base upgrades are still not completely congruent with the change to Charmhub.

> If you upgrade the machine, then use the set-application-base to change octavia-diskimage-retrofit to ubuntu@22.04, I suspect the charm upgrade will work.

> Can you confirm?

> Will set this as incomplete pending the new info.

octavia-diskimage-retrofit is a subordinate charm, which does not support set-application-base.

```
$ juju set-application-base octavia-diskimage-retrofit jammy
ERROR "octavia-diskimage-retrofit" is a subordinate application, update-series not supported
```

So even if I force-upgrade the machine to jammy, set-application-base can't be used in this case. Also, upgrade-charm is still using revision 115 for octavia and 101 for octavia-diskimage-retrofit.

Steps:

```
juju deploy cs:octavia --series focal
juju deploy cs:octavia-diskimage-retrofit --series focal
juju add-relation octavia-diskimage-retrofit octavia

juju upgrade-series 0 prepare jammy --force

juju upgrade-series 0 complete jammy

# This does not work because the charm is a subordinate
juju set-application-base octavia-diskimage-retrofit jammy

# Using revision 115
juju upgrade-charm octavia --switch ch:octavia --channel yoga/stable
# Using revision 101
juju upgrade-charm octavia-diskimage-retrofit --switch ch:octavia-diskimage-retrofit --channel yoga/stable
```

I still don't understand why `juju upgrade-charm octavia-diskimage-retrofit --switch ch:octavia-diskimage-retrofit --channel yoga/stable` uses revision 101. It should at least use revision 107.

If I try

```
# Gives me revision 107 on the yoga channel
$ juju info octavia-diskimage-retrofit --series focal | grep yoga/stable
  yoga/stable: 1426553 2023-01-23 (107) 10MB

# Only supports jammy
$ juju info octavia-diskimage-retrofit --channel yoga/stable | grep supports
supports: jammy

```

JamesLin (jneo8)
Changed in juju:
status: Incomplete → New
Revision history for this message
Joseph Phillips (manadart) wrote :

Sorry, I missed that this is a subordinate.

In that case, run the command against the principal application, which will update its subordinates.

Changed in juju:
status: New → Incomplete
Revision history for this message
JamesLin (jneo8) wrote (last edit ):

Sorry, I may have left out some information in my previous message.

Following these steps:

```
juju deploy cs:octavia --series focal
juju deploy cs:octavia-diskimage-retrofit --series focal
juju add-relation octavia-diskimage-retrofit octavia

juju upgrade-series 0 prepare jammy --force
# Fails on this step
juju upgrade-series 0 complete
```

octavia/0 and octavia-diskimage-retrofit/0 get into error status: hook failed: "post-series-upgrade"

The error message: [0]

Sorry, Joseph, just want to confirm: the steps you mentioned are like below, right?

```
juju deploy cs:octavia --series focal
juju deploy cs:octavia-diskimage-retrofit --series focal
juju add-relation octavia-diskimage-retrofit octavia

juju upgrade-series 0 prepare jammy --force
juju upgrade-series 0 complete

# Then

juju set-application-base octavia jammy
```

---

[0]: https://pastebin.ubuntu.com/p/6Hxn7gFJwR/

Changed in juju:
status: Incomplete → New
Revision history for this message
Joseph Phillips (manadart) wrote :

So the version of the charm on disk is probably not compatible between focal and jammy.

Resolve the hook error with --no-retry to get the upgrade to complete.

Then you should be able to use `juju set-application-base octavia ubuntu@22.04`, after which you upgrade the charms.

Changed in juju:
status: New → Triaged
assignee: nobody → Joseph Phillips (manadart)
Revision history for this message
Felipe Reyes (freyes) wrote : Re: [Bug 2008248] Re: ERROR charm "octavia-diskimage-retrofit" does not support jammy, force not used

On Thu, 2023-04-20 at 10:35 +0000, Joseph Phillips wrote:
> So the version of the charm on disk is probably not compatible between
> focal and jammy.

This is correct; we have two different charm builds, one for focal and another for jammy. The charm contains Python binary wheels.

Revision history for this message
Billy Olsen (billy-olsen) wrote :

This is not an octavia-diskimage-retrofit bug; marking as invalid.

Changed in charm-octavia-diskimage-retrofit:
status: New → Invalid
Revision history for this message
Trent Lloyd (lathiat) wrote :

This may have been fixed by this PR:
https://github.com/juju/juju/pull/15948

Needs confirmation

Revision history for this message
Aliaksandr Vasiuk (valexby) wrote (last edit ):

Hi,

I was affected by this bug with the `keystone-saml-mellon` and `grafana-agent-k8s` charms, which have different builds and revisions for 22.04 and 20.04, during an upgrade from Focal-Ussuri to Jammy-Yoga. There I'm using Juju 2.9.45.

I reproduced the issue with the `octavia-diskimage-retrofit` and `keystone-saml-mellon` charms. I'm using Juju controller 2.9.49:
```
juju deploy octavia --series focal --channel='yoga/stable' octavia2
juju deploy octavia-diskimage-retrofit --series focal --channel='yoga/stable' octavia-diskimage-retrofit2
juju add-relation octavia-diskimage-retrofit2 octavia2
juju upgrade-series 42 prepare jammy
WARNING: This command will mark machine "42" as being upgraded to series "jammy".
This operation cannot be reverted or canceled once started.
Units running on the machine will also be upgraded. These units include:
  - octavia-diskimage-retrofit2/0
  - octavia2/0

Leadership for the following applications will be pinned and not
subject to change until the "complete" command is run:
  - octavia-diskimage-retrofit2
  - octavia2

Continue [y/N]?y
ERROR charm "octavia-diskimage-retrofit" does not support jammy, force not used
```

```
juju deploy keystone-saml-mellon --channel="yoga/stable" --series="focal"
juju deploy keystone --series=focal --config openstack-origin=cloud:focal-yoga
juju add-relation keystone-saml-mellon keystone

juju upgrade-series 41 prepare jammy
WARNING: This command will mark machine "41" as being upgraded to series "jammy".
This operation cannot be reverted or canceled once started.
Units running on the machine will also be upgraded. These units include:
  - keystone-saml-mellon/0
  - keystone/0

Leadership for the following applications will be pinned and not
subject to change until the "complete" command is run:
  - keystone
  - keystone-saml-mellon

Continue [y/N]?y
ERROR charm "keystone-saml-mellon" does not support jammy, force not used
```

So it still reproduces. It is a bummer for series upgrades.

FTR, here is how I upgraded my keystone:
1. Had my keystone-saml-mellon on a revision that supports only Focal.
2. Ran `juju upgrade-series <machine> prepare jammy` with `--force` to ignore that the charm doesn't support Jammy.
3. Ran `do-release-upgrade` on the machine.
4. Ran `juju upgrade-series <machine> complete`.
5. `keystone-saml-mellon` went into an error state because of broken dependencies. Helped it along with `juju resolved --no-retry keystone-saml-mellon/XX`.
6. Upgrade completed.
7. After all keystone units were upgraded to jammy, finally refreshed `keystone-saml-mellon` to a revision that supports Jammy: `juju refresh keystone-saml-mellon --revision=98`.
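The steps above can be collected into one dry-run sketch. The machine number, unit name, and revision are the values from this reproduction (substitute your own); the function only prints the command sequence so the order is easy to review, and nothing is executed:

```shell
# Dry-run of the forced series-upgrade workaround described above.
# Machine 41, keystone-saml-mellon/0 and revision 98 are assumed example
# values from this thread; nothing here talks to a real controller.
plan() {
  cat <<'EOF'
juju upgrade-series 41 prepare jammy --force
# run do-release-upgrade on the machine itself, then:
juju upgrade-series 41 complete
juju resolved --no-retry keystone-saml-mellon/0
juju refresh keystone-saml-mellon --revision=98
EOF
}
plan
```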

Now my `keystone-saml-mellon` is in blocked state with "Ready for do-release-upgrade and reboot. Set complete when finished.", but I guess it is the lesser of evils.

Revision history for this message
Aliaksandr Vasiuk (valexby) wrote :

And then I reproduced the same with grafana-agent. This is the fastest reproducer of the three I posted, and also the issue that affects production clouds the most: there are usually only a handful of `octavia-diskimage-retrofit` and `keystone-saml-mellon` units in a cloud, but if COS is installed, this issue reproduces on every machine and LXD series upgrade.
```
juju deploy ubuntu --series=focal
juju deploy grafana-agent --channel="latest/beta" --series=focal
juju add-relation grafana-agent ubuntu

juju upgrade-series 43 prepare jammy
WARNING: This command will mark machine "43" as being upgraded to series "jammy".
This operation cannot be reverted or canceled once started.
Units running on the machine will also be upgraded. These units include:
  - grafana-agent/0
  - ubuntu/0

Leadership for the following applications will be pinned and not
subject to change until the "complete" command is run:
  - grafana-agent
  - ubuntu

Continue [y/N]?y
ERROR charm "grafana-agent" does not support jammy, force not used
```

Here is the bundle of my test setup for the revisions record: https://pastebin.ubuntu.com/p/qqwbY4yWfX/

The current workaround (or workthrough, even) is to do `juju upgrade-series <> prepare jammy --force` to allow the upgrade in any case. Then, when `juju upgrade-series <> complete` gets stuck on grafana-agent, do `juju resolved --no-retry grafana-agent` and leave it as is. All `grafana-agent` units on Jammy will go into error state while you are doing upgrades, but grafana-agent will keep working and you will keep observability over the cloud. Then, when all principals are on Jammy, refresh the `grafana-agent` charm, or even redeploy it.

Revision history for this message
Trent Lloyd (lathiat) wrote :

Tried to give this a quick test myself today, and even with juju 3.4.2 this issue persists. Same error.

I know we are trying to sunset 2.9, but if we fix this, it would really help the large base of Charmed OpenStack deployments to get this fix into 2.9 as well, so that we can get people onto Jammy-Yoga with 2.9 before moving to 3.x.

lathiat@zlab:~$ juju status octavia-diskimage-retrofit/0
Model Controller Cloud/Region Version SLA Timestamp
odi maas maas/default 3.4.3 unsupported 06:05:43Z

App Version Status Scale Charm Channel Rev Exposed Message
glance 24.2.1 active 1 glance yoga/stable 612 no Unit is ready
glance-mysql-router 8.0.37 active 0 mysql-router 8.0/stable 189 no Unit is ready
octavia-diskimage-retrofit 1.0.1+git4.g... active 1 octavia-diskimage-retrofit yoga/stable 101 no Unit is ready
telegraf active 0 telegraf latest/stable 75 yes Monitoring glance/0 (source version/commit 23.10)

Unit Workload Agent Machine Public address Ports Message
glance/0* active idle 1 172.16.0.105 9292/tcp Unit is ready
  glance-mysql-router/0* active idle 172.16.0.105 Unit is ready
  octavia-diskimage-retrofit/0* active idle 172.16.0.105 Unit is ready
  telegraf/0* active idle 172.16.0.105 9103/tcp Monitoring glance/0 (source version/commit 23.10)

Machine State Address Inst id Base AZ Message
1 started 172.16.0.105 neat-kite ubuntu@20.04 default Deployed
lathiat@zlab:~$ juju upgrade-machine 1 prepare ubuntu@22.04
WARNING: This command will mark machine "1" as being upgraded to "ubuntu@22.04".
This operation cannot be reverted or canceled once started.
Units running on the machine will also be upgraded. These units include:
  - glance-mysql-router/0
  - glance/0
  - octavia-diskimage-retrofit/0
  - telegraf/0

Leadership for the following applications will be pinned and not
subject to change until the "complete" command is run:
  - glance
  - glance-mysql-router
  - octavia-diskimage-retrofit
  - telegraf

Continue [y/N]? y
ERROR charm "octavia-diskimage-retrofit" does not support ubuntu@22.04/stable, force not used

Revision history for this message
Aliaksandr Vasiuk (valexby) wrote :

Hi,

Just supplying this thread with the workarounds I used during a production cloud upgrade. I faced this issue with the following charms:
* octavia-diskimage-retrofit, which is related to the glance-simplestreams-sync unit
* lldpd which is related to nova-compute
* grafana-agent which is related to every primary charm in OpenStack model
* keystone-saml-mellon which is related to keystone

1. Before the series upgrades, all charms were refreshed to their latest/stable or yoga/latest channels while machines and LXDs were still on Focal. lldpd I refreshed to `latest/candidate` because charmhub shows that as the one supported by Jammy.
2. During the series upgrade I had to run `juju upgrade-series <> prepare jammy --force -y` for all my principal charms.
3. After the series were upgraded, some charms broke during `juju upgrade-series <> complete`. The following were in error state, and I helped `upgrade-series` complete with `juju resolved --no-retry <>`: grafana-agent, octavia-diskimage-retrofit, keystone-saml-mellon. lldpd figured it out by itself.
4. After all series upgrades were done, I fixed the charms:
  * octavia-diskimage-retrofit I just redeployed, because that doesn't produce any downtime; just removed and re-added the relation with the principal.
  * grafana-agent I refreshed to revision 28 (!) and it just worked. I tried switching between channels, but nothing helped: `latest/stable` produces a missing `cryptography` dependency error, and `latest/edge` wants Juju 3. Just by blindly guessing I found that revision 28 works.
  * keystone-saml-mellon units still show "Ready for do-release-upgrade and reboot. Set complete when finished." to me. But that doesn't seem to affect charm behavior; it reacts to config changes. I don't want to redeploy it, so as not to cause any Keystone auth downtime. I tried running the `hooks/post-series-upgrade` hook manually and tried using `status-set`, but it still goes to "Blocked"; nothing obvious in the juju MongoDB.

Hope that helps somebody get from Focal to Jammy, and to Juju 3 eventually.

Revision history for this message
Alan Baghumian (alanbach) wrote :

I am not quite sure how I ended up in a similar situation, but I have a Yoga cloud that was upgraded from Focal to Jammy. Today, while adding a new nova-compute unit, I encountered this bug.

Upgrading my controller and models to 3.3.5 did not fix the issue, so I had to do a bit of a database fix.

Documenting the process here in case it is useful for someone else:

$ juju set-application-base nova-compute ubuntu@22.04
ERROR updating application base: base "ubuntu@22.04" not supported by charm, the charm supported bases are: ubuntu@20.04, ubuntu@18.04

$ juju add-unit nova-compute --to 120
ERROR acquiring machine to host unit "nova-compute/19": cannot assign unit "nova-compute/19" to machine 120: base does not match: unit has "ubuntu@20.04", machine has "ubuntu@22.04"

juju:PRIMARY> db.applications.find({"name":"nova-compute"});
{ "_id" : "98f30d5a-977c-4ce9-80ef-c97152b096f0:nova-compute", "name" : "nova-compute", "model-uuid" : "98f30d5a-977c-4ce9-80ef-c97152b096f0", "subordinate" : false, "charmurl" : "ch:amd64/focal/nova-compute-740", "charm-origin" : { "source" : "charm-hub", "type" : "charm", "id" : "ubPbvtErosR9P4NGEt24wak8LDsczRi4", "hash" : "cc089c4492344dc6301b40a1b8ec7c7fd3280f273053b31c3b18e945c87e2f15", "revision" : 740, "channel" : { "track" : "yoga", "risk" : "stable" }, "platform" : { "architecture" : "amd64", "os" : "ubuntu", "channel" : "20.04" } }, "charmmodifiedversion" : 11, "forcecharm" : false, "life" : 0, "unitcount" : 16, "relationcount" : 16, "minunits" : 0, "txn-revno" : NumberLong(19), "metric-credentials" : BinData(0,""), "exposed" : false, "scale" : 0, "passwordhash" : "", "provisioning-state" : null }

juju:PRIMARY> db.units.find({"name":"nova-compute/19"});
{ "_id" : "98f30d5a-977c-4ce9-80ef-c97152b096f0:nova-compute/19", "name" : "nova-compute/19", "model-uuid" : "98f30d5a-977c-4ce9-80ef-c97152b096f0", "base" : { "os" : "ubuntu", "channel" : "20.04/stable" }, "application" : "nova-compute", "charmurl" : null, "principal" : "", "subordinates" : [ ], "storageattachmentcount" : 0, "machineid" : "", "resolved" : "", "life" : 0, "passwordhash" : "", "txn-revno" : 2 }

(Find All)
juju:PRIMARY> db.units.find({ "name": { $regex: /^nova-compute\/.*/} });

(Override Focal > Jammy, keeping the charmurl intact)

juju:PRIMARY> db.applications.updateOne({"name":"nova-compute"}, { $set: { "name" : "nova-compute", "model-uuid" : "98f30d5a-977c-4ce9-80ef-c97152b096f0", "subordinate" : false, "charmurl" : "ch:amd64/focal/nova-compute-740", "charm-origin" : { "source" : "charm-hub", "type" : "charm", "id" : "ubPbvtErosR9P4NGEt24wak8LDsczRi4", "hash" : "cc089c4492344dc6301b40a1b8ec7c7fd3280f273053b31c3b18e945c87e2f15", "revision" : 740, "channel" : { "track" : "yoga", "risk" : "stable" }, "platform" : { "architecture" : "amd64", "os" : "ubuntu", "channel" : "22.04" } }, "charmmodifiedversion" : 11, "forcecharm" : false, "life" : 0, "unitcount" : 16, "relationcount" : 16, "minunits" : 0, "txn-revno" : NumberLong(19), "metric-credentials" : BinData(0,""), "exposed" : false, "scale" : 0, "passwordhash" : "", "provisioning-state" : null } } );
{ "acknowledged" : true, "matchedCount" : 1, "modifiedCount" : 1 }

juju:PRIMARY> db.units.u...

