Ceph config for radosgw does not get updated and is not persistent for admin user

Bug #1884811 reported by Yatindra Shashi
Affects: StarlingX
Status: Won't Fix
Importance: Low
Assigned to: Unassigned

Bug Description

Brief Description
-----------------
 Enabled Swift object storage in OpenStack per the mailing list post "http://lists.starlingx.io/pipermail/starlingx-discuss/2019-December/007136.html". The procedure succeeded on a simplex setup but failed on duplex.

Steps to Reproduce
------------------
Repeat the steps suggested in "http://lists.starlingx.io/pipermail/starlingx-discuss/2019-December/007136.html"

Expected Behavior
------------------
Able to create a Swift container and upload files.

Actual Behavior
----------------
- stx-openstack application apply fails multiple times, but succeeded after deleting the osh-openstack-ceph-rgw helm chart
- Authorization fails when trying to access Swift object storage -> container from the GUI

Reproducibility
---------------
Yes.

System Configuration
--------------------
Duplex, STX 3.0

Timestamp/Logs
--------------

--- sysinv.log >>
2020-06-22 16:46:49.536 2459 DEBUG armada.handlers.wait [-] [chart=openstack-ceph-rgw]: job ceph-ks-service is ready! handle_resource /usr/local/lib/python3.6/dist-packages/armada/handlers/wait.py:258
2020-06-22 16:46:49.536 2459 DEBUG armada.handlers.wait [-] [chart=openstack-ceph-rgw]: job swift-ks-user not ready: Waiting for job swift-ks-user to be successfully completed... handle_resource /usr/local/lib/python3.6/dist-packages/armada/handlers/wait.py:260
2020-06-22 16:47:13.332 2459 DEBUG armada.handlers.lock [-] Updating lock update_lock /usr/local/lib/python3.6/dist-packages/armada/handlers/lock.py:176
2020-06-22 16:48:13.406 2459 DEBUG armada.handlers.lock [-] Updating lock update_lock /usr/local/lib/python3.6/dist-packages/armada/handlers/lock.py:176
2020-06-22 16:49:13.505 2459 DEBUG armada.handlers.lock [-] Updating lock update_lock /usr/local/lib/python3.6/dist-packages/armada/handlers/lock.py:176
2020-06-22 16:50:13.574 2459 DEBUG armada.handlers.lock [-] Updating lock update_lock /usr/local/lib/python3.6/dist-packages/armada/handlers/lock.py:176
2020-06-22 16:51:13.639 2459 DEBUG armada.handlers.lock [-] Updating lock update_lock /usr/local/lib/python3.6/dist-packages/armada/handlers/lock.py:176
2020-06-22 16:51:48.542 2459 ERROR armada.handlers.wait [-] [chart=openstack-ceph-rgw]: Timed out waiting for jobs (namespace=openstack, labels=(release_group=osh-openstack-ceph-rgw)). These jobs were not ready=['ceph-ks-endpoints', 'swift-ks-user']

>>> After removing the osh-openstack-ceph-rgw helm chart and re-applying, stx-openstack applied successfully, but the error below still appeared.

controller-0:/home/sysadmin# tail -f /var/log/keystone/keystone-all.log
2020-06-23 09:14:59.770 1188411 WARNING keystone.auth.plugins.core [req-e352368e-7c76-494e-affd-9fc7182cf17c - - - - -] Could not find user: swift.: UserNotFound: Could not find user: swift.
2020-06-23 09:14:59.816 1188425 WARNING keystone.auth.plugins.core [req-22d63c61-29c2-4995-b5e0-fc323e42cdca - - - - -] Could not find user: swift.: UserNotFound: Could not find user: swift.
2020-06-23 09:14:59.824 1188425 WARNING keystone.server.flask.application [req-22d63c61-29c2-4995-b5e0-fc323e42cdca - - - - -] Authorization failed. The request you have made requires authentication. from 192.168.204.2: Unauthorized: The request you have made requires authentication.
2020-06-23 09:14:59.829 1188411 WARNING keystone.server.flask.application [req-e352368e-7c76-494e-affd-9fc7182cf17c - - - - -] Authorization failed. The request you have made requires authentication. from 192.168.204.2: Unauthorized: The request you have made requires authentication.
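The UserNotFound errors above suggest the `swift` service user was never created in Keystone (consistent with the `swift-ks-user` job timing out earlier). A dry-run sketch of the checks and fixes one might run; the admin-credentials source path, the `service` project, and the `admin` role are assumptions from a typical OpenStack service-user setup, not taken from this report:

```shell
# Dry-run sketch: verify, and if necessary create, the Keystone user that
# radosgw's Swift endpoint authenticates as. Assumes admin credentials are
# sourced first (e.g. `source /etc/platform/openrc`). Commands are echoed
# here rather than executed; drop `run` to execute on a live controller.
run() { echo "+ $*"; }
run openstack user show swift --domain Default
run openstack user create --domain Default --project service --password-prompt swift
run openstack role add --project service --user swift admin
```

If `user show` already succeeds, only the role assignment would need checking.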

Workaround
----------
With the help of Austin, the /etc/ceph/ceph.conf file had to be changed as follows:

[client.radosgw.gateway]
rgw_gc_obj_min_wait = 600
rgw_gc_processor_period = 300
rgw_keystone_api_version = 3
rgw_keystone_token_cache_size = 0
rgw_keystone_admin_domain = Default
rgw_keystone_url = http://keystone-api.openstack.svc.cluster.local:5000
rgw_s3_auth_use_keystone = true
user = root
rgw_max_put_size = 53687091200
rgw_keystone_admin_password = xxxxxxxx
rgw_gc_max_objs = 977
rgw_keystone_admin_user = admin
rgw_frontends = civetweb port=192.168.204.1:7480
log_file = /var/log/radosgw/radosgw.log
rgw_keystone_admin_project = admin
host = controller-0
rgw_keystone_accepted_roles = admin,_member_
rgw_gc_processor_max_time = 300
keyring = /etc/ceph/ceph.client.radosgw.gateway.keyring
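The `rgw_keystone_*` keys above are what let radosgw validate Swift tokens against Keystone. A minimal, self-contained sanity check that the section carries them (a sample file stands in for /etc/ceph/ceph.conf here; on a live controller, point CONF at the real path):

```shell
# Sanity-check sketch: confirm the [client.radosgw.gateway] section carries
# the Keystone settings from the workaround. The here-doc is a sample copy
# of the relevant keys; replace CONF with /etc/ceph/ceph.conf on a system.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
[client.radosgw.gateway]
rgw_keystone_api_version = 3
rgw_keystone_url = http://keystone-api.openstack.svc.cluster.local:5000
rgw_keystone_admin_user = admin
rgw_keystone_admin_project = admin
rgw_keystone_admin_domain = Default
rgw_keystone_accepted_roles = admin,_member_
EOF
missing=0
for key in rgw_keystone_api_version rgw_keystone_url rgw_keystone_admin_user \
           rgw_keystone_admin_project rgw_keystone_admin_domain \
           rgw_keystone_accepted_roles; do
    grep -q "^${key}[[:space:]]*=" "$CONF" || { echo "missing: $key"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "all rgw_keystone_* keys present"
rm -f "$CONF"
```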

Revision history for this message
Yatindra Shashi (yshashi) wrote :

- The configuration changes made in /etc/ceph/ceph.conf revert to their previous state after a host lock/unlock, so the workaround does not persist (the platform regenerates the file).
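One way to confirm the reversion described above is to checksum the file before the lock/unlock cycle and compare afterwards. The sketch below is self-contained (temp files simulate the edited and regenerated states; on a real system, checksum /etc/ceph/ceph.conf at both points instead):

```shell
# Sketch: detect whether manual ceph.conf edits survived a lock/unlock cycle
# by comparing checksums. Sample content stands in for the real file states.
f=$(mktemp)
printf '[client.radosgw.gateway]\nrgw_keystone_api_version = 3\n' > "$f"
sum_before=$(sha256sum "$f" | cut -d' ' -f1)   # after manual edit
printf '[client.radosgw.gateway]\n' > "$f"     # file as regenerated on unlock
sum_after=$(sha256sum "$f" | cut -d' ' -f1)
if [ "$sum_before" != "$sum_after" ]; then
    echo "ceph.conf was regenerated; manual edits lost"
fi
rm -f "$f"
```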

Ghada Khalil (gkhalil)
tags: added: stx.storage
Revision history for this message
Ghada Khalil (gkhalil) wrote :

Adding the domain & release tag. This will require further investigation by the storage team.

tags: added: stx.3.0
Revision history for this message
Ghada Khalil (gkhalil) wrote :

Closing as stx.3.0 is EOL as of Dec 2020

Changed in starlingx:
importance: Undecided → Low
status: New → Won't Fix