Swift proxy gives 503 even though swift playbooks are executed

Bug #1643951 reported by SHASHANK TAVILDAR
This bug affects 1 person
Affects Status Importance Assigned to Milestone
kolla
Expired
Undecided
Unassigned

Bug Description

This is for AIO and multinode deployment using stable/newton branch

Steps I used to deploy swift:
1. Create partitions and label them as KOLLA_SWIFT_DATA.
2. Create Object, Container and Account rings for partitions labeled as KOLLA_SWIFT_DATA.
3. My /etc/kolla/swift/config has the required rings and .gz files.
4. All Kolla playbooks passed successfully.
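The steps above can be sketched roughly as follows. This is a hedged illustration only: the device name (/dev/sdb), storage IP (192.168.0.233), and ring parameters are assumptions, not taken from the report, and the exact Kolla image name and config path may differ between releases.

```shell
# Sketch of the swift prep steps, assuming a single data disk /dev/sdb.
# Device name, IP, and ring parameters below are illustrative assumptions.

# 1. Partition the disk and label the partition KOLLA_SWIFT_DATA so that
#    Kolla's bootstrap tasks can discover and mount it.
parted /dev/sdb -s -- mklabel gpt mkpart KOLLA_SWIFT_DATA 1 -1
mkfs.xfs -f -L KOLLA_SWIFT_DATA /dev/sdb1

# 2. Create the object, container, and account ring builder files
#    (part_power=10, replicas=3, min_part_hours=1 are example values).
for ring in object container account; do
  swift-ring-builder ${ring}.builder create 10 3 1
done
```

After creating the builders, each labeled device still has to be added to every ring (`swift-ring-builder <builder> add ...`) and the rings rebalanced so the resulting .ring.gz files can be placed in the Kolla swift config directory, as described in step 3.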

Validate swift:
-> swift stat
RESP STATUS: 503 Service Unavailable

Proxy container logs: (AIO)
-> docker logs swift-proxy-server
swift-proxy-server: Account HEAD returning 503 for [] (txn: tx119e5ab980d04a4baebe8-00582df536) (client_ip: 192.168.0.233)
swift-proxy-server: ERROR with Account server 127.0.0.1:6001/dq1 re: Trying to HEAD /v1/AUTH_2853d92119b34324a9f06819b4b2d62a: Connection refused (txn: tx22674890a7af4f22812cd-00582df537) (client_ip: 192.168.0.233)
swift-proxy-server: ERROR with Account server 127.0.0.1:6001/vdf1 re: Trying to HEAD /v1/AUTH_2853d92119b34324a9f06819b4b2d62a: Connection refused (txn: tx22674890a7af4f22812cd-00582df537) (client_ip: 192.168.0.233)
swift-proxy-server: ERROR with Account server 127.0.0.1:6001/vdb1 re: Trying to HEAD /v1/AUTH_2853d92119b34324a9f06819b4b2d62a: Connection refused (txn: tx22674890a7af4f22812cd-00582df537) (client_ip: 192.168.0.233)
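The errors above show the proxy resolving account devices (dq1, vdf1, vdb1) to 127.0.0.1:6001 and getting "Connection refused", which suggests either the ring points at the wrong IP/port or no account server is listening there. A rough way to check this (container names and paths follow the report's hyphenated style and are assumptions):

```shell
# Sketch of debugging the "Connection refused" errors; names/paths assumed.

# Show which IP:port/device entries the account ring actually contains;
# they must match where swift-account-server is really listening.
docker exec swift-proxy-server swift-ring-builder /etc/swift/account.builder

# Check whether anything is listening on the port the ring points at.
docker exec swift-account-server netstat -tlnp | grep 6001
```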

I'd ask everyone to take a look at this issue and post your thoughts here.

Revision history for this message
Paul Bourke (pauldbourke) wrote :

Can you check whether any containers are stuck in 'restarting' mode? Unfortunately the playbooks won't detect this currently.
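The check suggested above can be done along these lines (a sketch; the grep pattern assumes Kolla's swift containers all carry "swift" in their names):

```shell
# List any containers currently stuck in a restart loop.
docker ps -a --filter "status=restarting"

# Or show every swift container together with its current state.
docker ps -a --format "{{.Names}}\t{{.Status}}" | grep -i swift
```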

Changed in kolla:
status: New → Incomplete
Revision history for this message
SHASHANK TAVILDAR (shasha.tavil) wrote :

The containers are running and active. No containers in restarting/exit mode.

Changed in kolla:
status: Incomplete → Triaged
importance: Undecided → High
Changed in kolla:
milestone: none → ocata-3
Changed in kolla:
milestone: ocata-3 → ocata-rc1
Changed in kolla:
milestone: ocata-rc1 → pike-1
Changed in kolla:
milestone: pike-2 → pike-3
Changed in kolla:
milestone: pike-3 → pike-rc1
Changed in kolla:
milestone: pike-rc1 → queens-1
Changed in kolla:
milestone: queens-2 → queens-3
Changed in kolla:
milestone: queens-3 → queens-rc1
Revision history for this message
David Rabel (rabel-b1) wrote :

Is there a workaround for this?

Changed in kolla:
milestone: queens-rc1 → queens-rc2
Changed in kolla:
milestone: queens-rc2 → rocky-1
Changed in kolla:
milestone: rocky-2 → rocky-3
Revision history for this message
Jeffrey Zhang (jeffrey4l) wrote : Cleanup EOL bug report

This is an automated cleanup. This bug report has been closed because it
is older than 18 months and there is no open code change to fix this.
After this time it is unlikely that the circumstances which led to
the observed issue can be reproduced.

If you can reproduce the bug, please:
* reopen the bug report (set to status "New")
* AND add the detailed steps to reproduce the issue (if applicable)
* AND leave a comment "CONFIRMED FOR: <RELEASE_NAME>"
  Only still supported release names are valid (OCATA, PIKE, QUEENS, ROCKY).
  Valid example: CONFIRMED FOR: OCATA

Changed in kolla:
importance: High → Undecided
status: Triaged → Expired