invalid cmr macaroon when getting cmr secret

Bug #2065761 reported by Guillaume Boutry
This bug affects 4 people
Affects: Canonical Juju
Status: Triaged
Importance: High
Assigned to: Yang Kelvin Liu

Bug Description

This happened on a Sunbeam deployment after a few days:
Command '('/var/lib/juju/tools/unit-openstack-hypervisor-0/secret-get', 'secret://29fcc45b-0a2a-4d46-81c2-cf9b508daa3c/cou5laofd6dn9f3mocv0', '--format=json')' returned non-zero exit status 1.

ops.model.ModelError: ERROR invalid cmr macaroon

juju: 3.4.2
cloud: maas provider

All 3 units are failing to read the secrets and are in error state. Rebooting the controller fixed it.

Tags: cross-model
Revision history for this message
Ian Booth (wallyworld) wrote :

To help diagnose this, we really need a bit more information:
- logs from controller and affected models
- possibly a db dump of the application collection from the consuming model

Can we start by getting the logs and we can take a look and see if anything relevant reveals itself?

Changed in juju:
status: New → Incomplete
Revision history for this message
Guillaume Boutry (gboutry) wrote :

Here's the controller logs

Revision history for this message
Guillaume Boutry (gboutry) wrote :

Here's the affected model

Revision history for this message
Guillaume Boutry (gboutry) wrote :

Here's the model offering the CMR.

The secret is created by Keystone and sent over the CMR.

Revision history for this message
Ian Booth (wallyworld) wrote :

Unfortunately there's not enough in the logs to pinpoint the problem. If the problem were ongoing, or reproducible, we could increase the logging and get some extra diagnostics to look at. But as it's now fixed after a reboot, it's hard to say exactly what happened. One guess is clock skew, but it's just a guess.

Revision history for this message
Christopher Bartz (bartz) wrote :

I have this problem too. The consuming model is on juju 3.1 and the offering model is on juju 3.4. See https://pastebin.canonical.com/p/sncwXRrcmW/

Revision history for this message
Ian Booth (wallyworld) wrote :

Can we get debug logs from both controllers (if cross-controller cmr) with #cmr and #cmr-auth set to TRACE level logging? i.e. juju model-config -m controller logging-config="#cmr=TRACE; #cmr-auth=TRACE"

Did you check for clock skew across containers / machines?

Does a reboot / controller pod restart fix it?

Revision history for this message
Laurent Sesquès (sajoupa) wrote :

@Ian I implemented the suggested model-config, grabbed the controller logs and put them in your home dir on the private fileshare.

Revision history for this message
Ian Booth (wallyworld) wrote :

The logs don't contain the info I would expect to see for validating a macaroon used for cross model secrets. Cross model secrets can be read if the relation the grant is scoped to is accessible by the supplied macaroon. Macaroon checks result in logs like this:

check 1 macaroons with required attrs: map[offer-uuid:c8291190-3af0-4321-8149-89e1a1e0bf83 source-model-uuid:d39dcfa1-cae7-4a15-8324-deb667934b38]

(that's from the logs).

There are only two such lines, and neither contains a relation tag, which would be expected when checking secret access. So it seems those lines are for other cmr operations.

Can we get logs which correspond to the timestamps of when the charm errors for secret-get happened? And an indication of when the operation was attempted so we know where to look in the logs?

I should have asked the first time: can we please also turn on TRACE logging for #secrets?

Just to check - all other cmr aspects are working as expected? There's no error surfaced in status for any of the saas entries?

Revision history for this message
Christopher Bartz (bartz) wrote :

The SAAS entry in the consuming model shows an error

```
juju status

SAAS Status Store URL
mongodb error juju-34-controller admin/stg-github-runner-mq.mongodb
```

and

juju status mongodb --format yaml

gives
```
  mongodb:
    url: juju-34-controller:admin/stg-github-runner-mq.mongodb
    endpoints:
      database:
        interface: mongodb_client
        role: provider
    life: dying
    application-status:
      current: error
      message: 'cannot get discharge from "https://controller-address:17070/offeraccess":
        third party refused discharge: cannot discharge: permission denied'
      since: 15 Jul 2024 10:41:18Z
    relations:
      database:
      - github-runner-webhook-router
```

fwiw, I also get a permission error in the offer model for a juju grant command

stg-github-runner-mq@bastion:~$ juju grant -v stg-github-runner-mq consume juju-34-controller:admin/stg-github-runner-mq.mongodb
ERROR permission denied

So maybe the root cause is related to permissions.

Revision history for this message
Ian Booth (wallyworld) wrote :

juju list-offers --application foo --format yaml

etc

should show who has access to the offer so you can check permissions

Revision history for this message
Christopher Bartz (bartz) wrote :

Thanks. The permissions seem to be fine.

```
stg-github-runner-mq@bastion:~$ juju list-offers --application=mongodb --format yaml
mongodb:
  application: mongodb
  store: juju-34-controller
  charm: ch:amd64/jammy/mongodb-173
  offer-url: admin/stg-github-runner-mq.mongodb
  endpoints:
    database:
      interface: mongodb_client
      role: provider
  connections:
  - source-model-uuid: foo
    username: stg-github-runner-mq
    relation-id: 2
    endpoint: database
    status:
      current: joined
      since: "2024-07-15"
  users:
    admin:
      display-name: admin
      access: admin
    everyone@external:
      access: read
    stg-github-runner-mq:
      access: admin
```

Revision history for this message
Ian Booth (wallyworld) wrote :

Attempting to create a grant

stg-github-runner-mq@bastion:~$ juju grant -v stg-github-runner-mq consume juju-34-controller:admin/stg-github-runner-mq.mongodb
ERROR permission denied

Is the stg-github-runner-mq user a controller superuser? Or are they a model admin on the model hosting the offer? Or have they been granted admin access just to the offer?

Is it possible to get a json or yaml dump of the permissions, applicationOffers, users collections?

Can we get the logs with #secrets set to TRACE and corresponding to a time when the permissions errors are observed?

Revision history for this message
Laurent Sesquès (sajoupa) wrote :

I've added TRACE for #secrets, waited for an issue to be reported (done in: https://pastebin.canonical.com/p/7cjY6wxFht/ today), grabbed the logs, and put them in the same place @Ian.

Revision history for this message
Ian Booth (wallyworld) wrote :

The only logs I can see in controller-34-staging are for 16-7-2024 or earlier, eg

2024-07-16 13:18:27 DEBUG juju.apiserver.common.crossmodelauth auth.go:338 generating discharge macaroon because: invalid cmr macaroon

Can we get logs for the 18th when it happened?

The pastebin has a time but no date

unit-github-runner-webhook-router-0: 13:07:48 ERROR unit.github-runner-webhook-router/0.juju-log Uncaught exception while in charm code:

I assume that date is 18-7-2024?

Revision history for this message
Christopher Bartz (bartz) wrote :

Yes, these logs are from 18-7-2024.

The problem is ongoing, it appears on every run of config-changed in the consumer (the hook is in error state and gets re-executed), so logs for a particular day should suffice. Here are logs from right now (with date/time stamp): https://pastebin.canonical.com/p/p9RPkSkSMb/

I should also mention that the following problem (https://pastebin.canonical.com/p/pXCjJMVbPK/) happened at the beginning and is still happening now.

Revision history for this message
Ian Booth (wallyworld) wrote :

I'm confused. How can the controller-34-staging log file be from the 18th when the last line in the log file has a timestamp from the 16th?

2024-07-16 13:19:48 ERROR juju.kubernetes.provider.application application.go:159 Ensure StorageClass.storage.k8s.io "stg-netbox-microk8s-microk8s-hostpath" is invalid: provisioner: Required value

The logs for the controller-beta-ps6 file have timestamps from the 18th.

We need to understand why the permission checks are failing when the macaroon discharge endpoint is called. It could be the TTL caveat caused by clock skew between controllers - I assume they are in sync. It could be that juju permission checks are failing. It could be a macaroon decoding issue. The fact that the error occurred early on, when watching the offer status, indicates the issue is not secrets related but a general problem with cross-controller auth.
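
One quick way to check the clock-skew hypothesis (assuming Juju 3.x, where `juju run` was renamed `juju exec`; treat the flags as a sketch that may need adjusting for your setup) is to compare UTC time on all controller machines at once:

```
juju exec -m controller --all -- date -u
```

If the timestamps differ by more than a second or two, TTL caveats on discharge macaroons could plausibly expire early.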
There's not a lot to go on. Can we get the mongo collection info asked for in comment 13?

Revision history for this message
Christopher Bartz (bartz) wrote :

Sorry, I do not have access to the controllers, perhaps @Laurent can look into this. My comment was about https://pastebin.canonical.com/p/7cjY6wxFht/.

I also suspect a general problem with cross-controller auth.

Revision history for this message
Christopher Bartz (bartz) wrote :

I was able to remove the current integration, offer and respective applications (github-runner-webhook-router and mongodb). But even after a redeploy I still have problems consuming the offer: https://pastebin.canonical.com/p/JjpqPHXRMJ/ https://pastebin.canonical.com/p/2PF9btHJ63/

Revision history for this message
Ian Booth (wallyworld) wrote :

We'll need the TRACE logs (#cmr, #cmr-auth, #secrets) and collections dumps to start digging into this.
https://bugs.launchpad.net/juju/+bug/2065761/comments/13

Revision history for this message
Nikolaos Sakkos (nsakkos) wrote :

Hi Ian, concerning comment 13, I have gathered yaml dumps of the permissions, applicationOffers, and users for the two models mentioned.

The commands I used to gather the info were:
- juju show-model stg-github-runner-mq --format yaml
- juju offers --format yaml
- juju show-controller --format yaml
- juju users

If there's missing information, could you please provide what commands I should use?

I have also added the missing TRACE logs from juju-controller-34-staging-ps6 from the 18th and earlier that you mention in comment 15.

The above has been uploaded to your private-filesharing home directory as yaml_dumps.tar.gz .

Unfortunately logs for 2024-07-22 (regarding comment #19) are no longer available.

Revision history for this message
Ian Booth (wallyworld) wrote :

Thanks for the logs etc., but they're not what was asked for.

We need:
- TRACE logs (#cmr, #cmr-auth, #secrets)
- mongo collection dumps for permissions, applicationOffers, users collections from offering controller

While we're there, let's add another few collections from both offering and consuming controller (if offer and consumer apps are in different controllers, else just the one dump)
- remoteApplications, remoteEntities, bakeryStorageItems

You can get the collection dump from a juju backup as bson and convert those to json. Or you can use mongodump. Or you can use find().pretty() from a mongo shell.
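
For example, a sketch of the dump options just mentioned (the `juju` database name is typical for a Juju controller, but adjust paths and authentication for your deployment):

```
# from a mongo shell on the controller, using the juju database:
db.permissions.find().pretty()
db.applicationOffers.find().pretty()
db.users.find().pretty()

# or with mongodump, then convert the bson to json:
mongodump --db juju --collection permissions
bsondump dump/juju/permissions.bson > permissions.json
```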

The logs and yaml as provided just don't contain enough information.

Ideally we'd also have a juju status --format yaml for both offering and consuming models at that time as well - this will show the observed status of the offer from the consuming side and help show that it's a generic cmr issue, not secrets per se. If the issue was observed in a given hook, the time the hook ran would be good so we know where to start looking in the logs.

Revision history for this message
Paul Collins (pjdc) wrote :

Ian,

I've uploaded a couple of new files to your home directory with the requested mongodb collections and juju status --format yaml of the relevant models (bug-2065761-status.gz and bug-2065761-mongodb.gz)

The consuming model shows the following:

  mongodb:
    url: juju-controller-34-staging-ps6:admin/stg-github-runner-mq.mongodb
    endpoints:
      database:
        interface: mongodb_client
        role: provider
    application-status:
      current: error
      message: 'cannot get discharge from "https://juju-controller-34-staging-ps6.admin.canonical.com:17070/offeraccess":
        third party refused discharge: cannot discharge: permission denied'
      since: 01 Aug 2024 21:33:01Z
    relations:
      database:
      - github-runner-webhook-router

so based on the "since" field I fetched logs for that hour on all six controller units and uploaded them to bug2065761-controller-log.tar.gz.

I don't see any lines tagged "TRACE" in these files, but there seems to be plenty of activity related to CMR.

I double checked both controllers to confirm logging-config:

prod-is-controller-beta-ps6@is-bastion-ps6:~$ juju model-config -m controller logging-config
#cmr=TRACE; #cmr-auth=TRACE; #secrets=TRACE
prod-is-controller-beta-ps6@is-bastion-ps6:~$

juju-controller-34-staging-ps6@is-bastion-ps6:~$ juju model-config -m controller logging-config
#cmr=TRACE; #cmr-auth=TRACE; #secrets=TRACE
juju-controller-34-staging-ps6@is-bastion-ps6:~$ _

Revision history for this message
Ian Booth (wallyworld) wrote :

Thank you very much for the logs etc. This is what I can see.

The lack of TRACE logging is strange. I'd need to check in detail; it may be that the logging config needs to be set on the offering model, not just the controller model.

Anyway we can see some things. We have an incoming request to consume an offer

check macaroons with declared attrs: map[offer-uuid:3093e507-9cdf-482b-84f4-7cd151f776ba source-model-uuid:d39dcfa1-cae7-4a15-8324-deb667934b38 username:stg-github-runner-mq]
authorize cmr query ops check for bakery.Op{Entity:"", Action:""}: []bakery.Op{bakery.Op{Entity:"3093e507-9cdf-482b-84f4-7cd151f776ba", Action:"consume"}}

The user wanting to consume is stg-github-runner-mq
The offer uuid is 3093e507-9cdf-482b-84f4-7cd151f776ba which is the mongodb offer

At this point a permission check is needed so a discharge macaroon is generated

generating discharge macaroon because: invalid cmr macaroon

Looking at the permissions collection on the controller hosting the offer

{
        "_id" : "ao#3093e507-9cdf-482b-84f4-7cd151f776ba#us#admin",
        "object-global-key" : "ao#3093e507-9cdf-482b-84f4-7cd151f776ba",
        "subject-global-key" : "us#admin",
        "access" : "admin",
        "txn-revno" : 2
}
{
        "_id" : "ao#3093e507-9cdf-482b-84f4-7cd151f776ba#us#everyone@external",
        "object-global-key" : "ao#3093e507-9cdf-482b-84f4-7cd151f776ba",
        "subject-global-key" : "us#everyone@external",
        "access" : "read",
        "txn-revno" : 2
}

The stg-github-runner-mq user is not shown as having consume permission, hence the permission denied errors.

Revision history for this message
Paul Collins (pjdc) wrote :

> The stg-github-runner-mq is not shown as having consume permission hence the permission denied errors.

That seems to fit:

juju-controller-34-staging-ps6@is-bastion-ps6:~$ juju show-offer stg-github-runner-mq.mongodb --format yaml
juju-controller-34-staging-ps6:admin/stg-github-runner-mq.mongodb:
  description: |
    MongoDB is a general purpose distributed document database. This charm
    deploys and operates MongoDB.
  access: admin
  endpoints:
    database:
      interface: mongodb_client
      role: provider
  users:
    admin:
      display-name: admin
      access: admin
    everyone@external:
      access: read
juju-controller-34-staging-ps6@is-bastion-ps6:~$ _

So in this case the `stg-github-runner-mq` user should run something like `juju grant stg-github-runner-webhook-router consume admin/stg-github-runner-mq.mongodb`?

Revision history for this message
Paul Collins (pjdc) wrote (last edit ):

We have another similar-looking problem with another pair of models, stg-netbox and stg-netbox-k8s, although in this case the symptoms are a little different. `juju status` and offer-related outputs are here, and the controller logs I've already uploaded contain hits for the problem, but let me know if you need them refreshed.

https://pastebin.canonical.com/p/xCCzKDRXKw/

Unlike the pair of models above, in this case it seems the offer was successfully joined and something went wrong afterwards. Also unlike the previous case, both models are owned by the same user. I tried to grant consume, in case admin doesn't cover that (although it would seem strange for a user to have to grant itself explicit access to consume its own offers) and got the following:

stg-netbox@is-bastion-ps6:~$ juju grant stg-netbox consume admin/stg-netbox.postgresql
ERROR permission denied
stg-netbox@is-bastion-ps6:~$ _

(The role account has `admin` access to the model.)

But then I ran it as the admin and it worked:

juju-controller-34-staging-ps6@is-bastion-ps6:~$ juju grant stg-netbox consume admin/stg-netbox.postgresql
juju-controller-34-staging-ps6@is-bastion-ps6:~$ juju show-offer admin/stg-netbox.postgresql --format yaml
juju-controller-34-staging-ps6:admin/stg-netbox.postgresql:
  description: |
    Charm to operate the PostgreSQL database on machines.
  access: admin
  endpoints:
    database:
      interface: postgresql_client
      role: provider
  users:
    admin:
      display-name: admin
      access: admin
    everyone@external:
      access: read
    stg-netbox:
      access: consume
juju-controller-34-staging-ps6@is-bastion-ps6:~$ _

And now:

stg-netbox@is-bastion-ps6:~$ juju status -m admin/stg-netbox-k8s postgresql
Model Controller Cloud/Region Version SLA Timestamp
stg-netbox-k8s juju-controller-34-staging-ps6 stg-netbox-k8s/default 3.4.5 unsupported 02:40:06Z

SAAS Status Store URL
postgresql active local admin/stg-netbox.postgresql
stg-netbox@is-bastion-ps6:~$ _

Is this expected behaviour?

Revision history for this message
Paul Collins (pjdc) wrote :

Oddly, I'm also seeing different access levels depending on who views the offer:

stg-netbox@is-bastion-ps6:~$ juju show-offer postgresql --format yaml
juju-controller-34-staging-ps6:admin/stg-netbox.postgresql:
  description: |
    Charm to operate the PostgreSQL database on machines.
  access: admin
  endpoints:
    database:
      interface: postgresql_client
      role: provider
  users:
    admin:
      display-name: admin
      access: admin
    everyone@external:
      access: read
    stg-netbox:
      access: admin
stg-netbox@is-bastion-ps6:~$ _

juju-controller-34-staging-ps6@is-bastion-ps6:~$ juju show-offer admin/stg-netbox.postgresql --format yaml
juju-controller-34-staging-ps6:admin/stg-netbox.postgresql:
  description: |
    Charm to operate the PostgreSQL database on machines.
  access: admin
  endpoints:
    database:
      interface: postgresql_client
      role: provider
  users:
    admin:
      display-name: admin
      access: admin
    everyone@external:
      access: read
    stg-netbox:
      access: consume
juju-controller-34-staging-ps6@is-bastion-ps6:~$ _

Revision history for this message
Paul Collins (pjdc) wrote :

I've redumped the collections from earlier into bug-2065761-20240806-mongodb.gz, now in your homedir on private-fileshare.

Revision history for this message
James Simpson (jsimpso) wrote :

Just confirmed on a separate 3.4 model and controller: the offering user *thinks* it has admin access to the offer, but the controller superadmin doesn't show any access at all.

After explicitly granting the offering user "admin" access to the offer, they're able to successfully grant access as expected:

1) Confirming the user thinks it has admin access to the offer, and show the "permission denied" error:

prod-synapse-chat-canonical-db@is-bastion-ps6:~$ juju show-offer postgresql --format yaml
juju-controller-34-production-ps6:admin/prod-synapse-chat-canonical-db.postgresql:
  description: |
    Charm to operate the PostgreSQL database on machines.
  access: admin
  endpoints:
    database:
      interface: postgresql_client
      role: provider
  users:
    admin:
      display-name: admin
      access: admin
    everyone@external:
      access: read
    prod-synapse-chat-canonical-db:
      access: admin

prod-synapse-chat-canonical-db@is-bastion-ps6:~$ juju grant prod-synapse-chat-canonical-k8s consume admin/prod-synapse-chat-canonical-db.postgresql
ERROR permission denied

2) Confirm controller superadmin thinks that the previous user has no explicit access over the offer:

juju-controller-34-production-ps6@is-bastion-ps6:~$ juju show-offer admin/prod-synapse-chat-canonical-db.postgresql --format yaml
juju-controller-34-production-ps6:admin/prod-synapse-chat-canonical-db.postgresql:
  description: |
    Charm to operate the PostgreSQL database on machines.
  access: admin
  endpoints:
    database:
      interface: postgresql_client
      role: provider
  users:
    admin:
      display-name: admin
      access: admin
    everyone@external:
      access: read

3) Grant the offering user "admin" access to the offer:

juju-controller-34-production-ps6@is-bastion-ps6:~$ juju grant prod-synapse-chat-canonical-db admin admin/prod-synapse-chat-canonical-db.postgresql

juju-controller-34-production-ps6@is-bastion-ps6:~$ juju show-offer admin/prod-synapse-chat-canonical-db.postgresql --format yaml
juju-controller-34-production-ps6:admin/prod-synapse-chat-canonical-db.postgresql:
  description: |
    Charm to operate the PostgreSQL database on machines.
  access: admin
  endpoints:
    database:
      interface: postgresql_client
      role: provider
  users:
    admin:
      display-name: admin
      access: admin
    everyone@external:
      access: read
    prod-synapse-chat-canonical-db:
      access: admin

4) Successfully grant consume access as the original user

prod-synapse-chat-canonical-db@is-bastion-ps6:~$ juju grant prod-synapse-chat-canonical-k8s consume admin/prod-synapse-chat-canonical-db.postgresql
prod-synapse-chat-canonical-db@is-bastion-ps6:~$ juju show-offer admin/prod-synapse-chat-canonical-db.postgresql --format yaml
juju-controller-34-production-ps6:admin/prod-synapse-chat-canonical-db.postgresql:
  description: |
    Charm to operate the PostgreSQL database on machines.
  access: admin
  endpoints:
    database:
      interface: postgresql_client
      role: provider
  users:
    admin:
      display-name: admin
      access: admin
    everyone@external:
      access: read
    prod-synapse-chat-canonical-db:
      acce...


Revision history for this message
Paul Collins (pjdc) wrote :

Re James's comment above, I can confirm stg-github-runner-mq is in the same state (and per my pastebin from comment #26, so was stg-netbox before I ran `juju grant`):

stg-github-runner-mq@is-charms-bastion-ps6:~$ juju show-offer admin/stg-github-runner-mq.mongodb --format yaml
juju-controller-34-staging-ps6:admin/stg-github-runner-mq.mongodb:
  description: |
    MongoDB is a general purpose distributed document database. This charm
    deploys and operates MongoDB.
  access: admin
  endpoints:
    database:
      interface: mongodb_client
      role: provider
  users:
    admin:
      display-name: admin
      access: admin
    everyone@external:
      access: read
    stg-github-runner-mq:
      access: admin
stg-github-runner-mq@is-charms-bastion-ps6:~$ _

juju-controller-34-staging-ps6@is-bastion-ps6:~$ juju show-offer stg-github-runner-mq.mongodb --format yaml
juju-controller-34-staging-ps6:admin/stg-github-runner-mq.mongodb:
  description: |
    MongoDB is a general purpose distributed document database. This charm
    deploys and operates MongoDB.
  access: admin
  endpoints:
    database:
      interface: mongodb_client
      role: provider
  users:
    admin:
      display-name: admin
      access: admin
    everyone@external:
      access: read
juju-controller-34-staging-ps6@is-bastion-ps6:~$

Revision history for this message
Christopher Bartz (bartz) wrote :

In fact, my problems have gone away since Tom Haddon ran

juju-controller-34-staging-ps6@is-bastion-ps6:~$ juju grant stg-github-runner-mq consume admin/stg-github-runner-mq.mongodb

this morning.

Revision history for this message
Ian Booth (wallyworld) wrote :

TL;DR: it seems there's a bug checking offer access for users who are not controller superusers but are model admins, so for now explicit consume access needs to be granted for those users.

--

There are two postgresql offers in different models, so it's not 100% clear to me which model is the stg-netbox one hosting the offer "admin/stg-netbox.postgresql", but based on the comment that explicit consume access was granted...

The permissions for one of the postgresql offers shows

{
        "_id" : "ao#f207d2fb-f21c-49a7-8b68-6cd45c68ba6d#us#admin",
        "object-global-key" : "ao#f207d2fb-f21c-49a7-8b68-6cd45c68ba6d",
        "subject-global-key" : "us#admin",
        "access" : "admin",
        "txn-revno" : 2
}
{
        "_id" : "ao#f207d2fb-f21c-49a7-8b68-6cd45c68ba6d#us#everyone@external",
        "object-global-key" : "ao#f207d2fb-f21c-49a7-8b68-6cd45c68ba6d",
        "subject-global-key" : "us#everyone@external",
        "access" : "read",
        "txn-revno" : 2
}
{
        "_id" : "ao#f207d2fb-f21c-49a7-8b68-6cd45c68ba6d#us#stg-netbox",
        "object-global-key" : "ao#f207d2fb-f21c-49a7-8b68-6cd45c68ba6d",
        "subject-global-key" : "us#stg-netbox",
        "access" : "consume",
        "txn-revno" : 2
}

Hence the explicit consume permission granted to user "stg-netbox" would allow access.

The model hosting the offer is abd1188b-7883-4bfa-8f57-b71722bd78f7 and the stg-netbox user does have admin on that model

{
        "_id" : "e#abd1188b-7883-4bfa-8f57-b71722bd78f7#us#stg-netbox",
        "object-global-key" : "e#abd1188b-7883-4bfa-8f57-b71722bd78f7",
        "subject-global-key" : "us#stg-netbox",
        "access" : "admin",
        "txn-revno" : 2
}

This should have been enough to allow access to the offer without needing to grant consume access explicitly. But it seems there's a bug here because looking at the code, I think the check for model admin access during macaroon discharge is done on the controller model, not the model hosting the offer. That might explain why explicit consume access is required even for model admin users.
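
To illustrate the suspected mix-up, here is a hypothetical Python sketch (not Juju source code; names and the controller-model placeholder are invented) of a permission check that consults the wrong model's UUID during macaroon discharge:

```python
# Hypothetical sketch of the suspected bug. Keys mirror the permission
# documents above ("ao#<offer-uuid>#us#<user>", "e#<model-uuid>#us#<user>").
PERMISSIONS = {
    # no explicit offer grant (the state before the workaround was applied)
    "ao#f207d2fb-f21c-49a7-8b68-6cd45c68ba6d#us#stg-netbox": None,
    # model admin on the model hosting the offer
    "e#abd1188b-7883-4bfa-8f57-b71722bd78f7#us#stg-netbox": "admin",
    # no grant on the controller model (placeholder UUID)
    "e#controller-model-uuid#us#stg-netbox": None,
}

def can_consume(user, offer_uuid, model_uuid):
    """Allow if the user has an explicit consume/admin grant on the offer,
    or admin access on the model identified by model_uuid."""
    offer_access = PERMISSIONS.get(f"ao#{offer_uuid}#us#{user}")
    if offer_access in ("consume", "admin"):
        return True
    return PERMISSIONS.get(f"e#{model_uuid}#us#{user}") == "admin"

OFFER = "f207d2fb-f21c-49a7-8b68-6cd45c68ba6d"
# Suspected buggy behaviour: checking the controller model denies access
print(can_consume("stg-netbox", OFFER, "controller-model-uuid"))
# Intended behaviour: checking the model hosting the offer allows it
print(can_consume("stg-netbox", OFFER, "abd1188b-7883-4bfa-8f57-b71722bd78f7"))
```

Granting explicit consume access on the offer (the workaround) makes the first lookup succeed, which matches the observed behaviour.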

In terms of the show-offer output, the list of users and their permissions is influenced by the logged in user who is running show-offer. The code looks at explicit offer grants and also includes "admin" access if the logged in user is a model admin. Thus

stg-netbox@is-bastion-ps6:~$ juju show-offer postgresql --format yaml

will show access of the stg-netbox logged in user based on them being a model admin

    stg-netbox:
      access: admin

juju-controller-34-staging-ps6@is-bastion-ps6:~$ juju show-offer admin/stg-netbox.postgresql --format yaml

will show

    stg-netbox:
      access: consume

since this is showing explicit access grants for users other than the logged in user.

Changed in juju:
status: Incomplete → Triaged
importance: Undecided → High
tags: added: cross-model
Changed in juju:
milestone: none → 3.5.4
Changed in juju:
milestone: 3.5.4 → 3.5.5
Harry Pidcock (hpidcock)
Changed in juju:
assignee: nobody → Yang Kelvin Liu (kelvin.liu)