duplicated definition for Keystone_user[swift]

Bug #1759235 reported by Cédric Jeanneret

Affects: tripleo
Status: Invalid
Importance: Medium
Assigned to: Unassigned
Milestone: rocky-1

Bug Description

Hello,

While doing a migration test (pike BM -> pike Containers) with ceph+rados, I encountered the following error:

"Error: Evaluation Error: Error while evaluating a Resource Statement, Evaluation Error: Error while evaluating a Function Call, Duplicate declaration: Keystone_user[swift] is already declared in file /etc/puppet/modules/ceph/manifests/rgw/keystone/auth.pp:75; cannot redeclare at /etc/puppet/modules/keystone/manifests/resource/service_identity.pp:160 at /etc/puppet/modules/keystone/manifests/resource/service_identity.pp:160:5 at /etc/puppet/modules/swift/manifests/keystone/auth.pp:292 on node lab-controller-0.cloud.camptocamp.com",

Pretty clear to read - now the question is: why? :).

I include the following env files in my upgrade process:
ENVIRONMENTS=" \
  -e ./openstack-tripleo-heat-templates/environments/major-upgrade-composable-steps.yaml \
  -e ./upgrade-pike.yaml \
  -e ./openstack-tripleo-heat-templates/environments/network-isolation.yaml \
  -e ./openstack-tripleo-heat-templates/environments/ssl/tls-endpoints-public-dns.yaml \
  -e ./openstack-tripleo-heat-templates/environments/manila-cephfsnative-config.yaml \
  -e ./openstack-tripleo-heat-templates/environments/storage/enable-ceph.yaml \
  -e ./openstack-tripleo-heat-templates/environments/ceph-radosgw.yaml \
  -e ./${1}-specifics.yaml \
  -e ./openstack-tripleo-heat-templates/environments/docker.yaml \
  -e ./openstack-tripleo-heat-templates/environments/docker-ha.yaml \
  -e ./docker_registry.yaml \
  -e ./environment.yaml \
"

The upgrade-pike.yaml contains only a yum command, while the ${1}-specifics.yaml contains only network-related things, as I have two envs (lab and prod) using the same environment/recipes in order to validate each deploy.

For the record: my goal is to move from a Pike baremetal deploy to a Pike containerized deploy, and I was told to start with the common services before doing ceph (I have to move from puppet-ceph to ceph-ansible, in containers).

Feel free to ask for more details; I'll check on my side to see whether I can avoid such a duplicate definition.

Thank you for your concern, time and input :).

Cheers,

C.

Revision history for this message
Cédric Jeanneret (cjeanneret-c2c-deactivated) wrote :

Apparently it's a value-override issue:

I include docker.yaml at the end of my env listing, and there's a small conflict:

openstack-tripleo-heat-templates/environments/docker.yaml: OS::TripleO::Services::SwiftProxy: ../docker/services/swift-proxy.yaml
openstack-tripleo-heat-templates/environments/docker.yaml: OS::TripleO::Services::SwiftStorage: ../docker/services/swift-storage.yaml
openstack-tripleo-heat-templates/environments/docker.yaml: OS::TripleO::Services::SwiftRingBuilder: ../docker/services/swift-ringbuilder.yaml
openstack-tripleo-heat-templates/environments/docker.yaml: OS::TripleO::Services::SwiftDispersion: OS::Heat::None

while, in ceph parts:
openstack-tripleo-heat-templates/environments/ceph-radosgw.yaml: OS::TripleO::Services::SwiftProxy: OS::Heat::None
openstack-tripleo-heat-templates/environments/ceph-radosgw.yaml: OS::TripleO::Services::SwiftStorage: OS::Heat::None
openstack-tripleo-heat-templates/environments/ceph-radosgw.yaml: OS::TripleO::Services::SwiftRingBuilder: OS::Heat::None

I guess I'd rather push ceph-rgw *after* docker? Any cons?
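The conflict above comes down to ordering: resource_registry entries from later `-e` files override earlier ones, so docker.yaml re-maps SwiftProxy to a real service after ceph-radosgw.yaml had disabled it with OS::Heat::None, and both ceph-rgw and swift-proxy then declare Keystone_user[swift]. A minimal sketch of that last-wins merge (assuming a simple dict-style override, with the mapping values taken from the grep output above):

```python
def merge_environments(env_files):
    """Merge resource_registry mappings; later files win, like later -e flags."""
    registry = {}
    for env in env_files:
        registry.update(env)
    return registry

# resource_registry fragments from the two environment files quoted above
ceph_radosgw = {"OS::TripleO::Services::SwiftProxy": "OS::Heat::None"}
docker = {"OS::TripleO::Services::SwiftProxy": "../docker/services/swift-proxy.yaml"}

# Original order: docker.yaml comes last, so SwiftProxy is re-enabled
# alongside ceph-rgw -> duplicate Keystone_user[swift].
broken = merge_environments([ceph_radosgw, docker])

# Reordered: ceph-radosgw.yaml comes last, SwiftProxy stays disabled.
fixed = merge_environments([docker, ceph_radosgw])
```

With the reordered list, `fixed["OS::TripleO::Services::SwiftProxy"]` stays `OS::Heat::None`, which is why moving ceph-radosgw.yaml after docker.yaml avoids the duplicate declaration.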

Revision history for this message
Bogdan Dobrelya (bogdando) wrote :

That totally makes sense, @Cedric, as Ceph with RadosGW does not need those opinionated docker.yaml mappings.

Changed in tripleo:
status: New → Invalid
Revision history for this message
Bogdan Dobrelya (bogdando) wrote :

Marked as invalid (not a bug); feel free to re-open if there are any updates.

Changed in tripleo:
milestone: none → rocky-1
importance: Undecided → Medium