When deleting a lot of volumes at once, some of the volumes stay in status “Deleting”
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| Fuel for OpenStack | Fix Committed | High | Oleksiy Molchanov | |
| 8.0.x | Won't Fix | High | Unassigned | |
| Mitaka | Fix Released | High | Oleksiy Molchanov | |
| Newton | Fix Released | High | Oleksiy Molchanov | |
| Ocata | Fix Committed | High | Oleksiy Molchanov | |
Bug Description
Precondition
The environment has 1000 volumes; Items Per Page is set to 500.
Steps:
1. Select (check) all volumes on the page.
2. Click the Delete button.
Actual result:
After some time an error appears: “Gateway Timeout: The gateway did not receive a timely response from the upstream server or application.”
Some of the volumes then stay in the “Deleting” status.
The stuck volumes can be reset to the “Available” status on the page Admin-System-
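The same recovery can be done from the CLI instead of the Horizon admin page; a hedged equivalent, assuming admin credentials are sourced (the volume ID is a placeholder):

```
# Reset a stuck volume back to “available” (admin-only operation)
cinder reset-state --state available <volume_id>
```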
Relevant cinder-volume logs:
2016-02-25 15:04:34.338 4925 ERROR cinder.service [req-506085d9-
2016-02-25 15:04:34.338 4925 ERROR cinder.service Traceback (most recent call last):
2016-02-25 15:04:34.338 4925 ERROR cinder.service File "/usr/lib/
2016-02-25 15:04:34.338 4925 ERROR cinder.service service_ref = objects.
2016-02-25 15:04:34.338 4925 ERROR cinder.service File "/usr/lib/
2016-02-25 15:04:34.338 4925 ERROR cinder.service result = fn(cls, context, *args, **kwargs)
2016-02-25 15:04:34.338 4925 ERROR cinder.service File "/usr/lib/
2016-02-25 15:04:34.338 4925 ERROR cinder.service db_service = db.service_
2016-02-25 15:04:34.338 4925 ERROR cinder.service File "/usr/lib/
2016-02-25 15:04:34.338 4925 ERROR cinder.service return IMPL.service_
…
…
2016-02-25 15:04:34.338 4925 ERROR cinder.service File "/usr/lib/
2016-02-25 15:04:34.338 4925 ERROR cinder.service (self.size(), self.overflow(), self._timeout))
2016-02-25 15:04:34.338 4925 ERROR cinder.service TimeoutError: QueuePool limit of size 5 overflow 5 reached, connection timed out, timeout 30
2016-02-25 15:04:34.338 4925 ERROR cinder.service
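The TimeoutError above shows the SQLAlchemy connection pool being exhausted (pool size 5, overflow 5, 30-second wait). A minimal sketch of the corresponding oslo.db tuning in cinder.conf; the option names are the standard [database] settings, but the values here are illustrative, not the ones used in the committed fix:

```
# /etc/cinder/cinder.conf — illustrative values only
[database]
# Base number of pooled DB connections (the traceback shows a pool of 5)
max_pool_size = 30
# Extra connections allowed on top of the pool under burst load
max_overflow = 60
# Seconds to wait for a free connection before raising the
# QueuePool TimeoutError seen in the traceback
pool_timeout = 30
```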
VERSION:
feature_groups:
- mirantis
production: "docker"
release: "8.0"
api: "1.0"
build_number: "569"
build_id: "569"
fuel-nailgun_sha: "558ca91a854cf2
python-
fuel-agent_sha: "658be72c4b42d3
fuel-
astute_sha: "b81577a5b7857c
fuel-library_sha: "33634ec27be77e
fuel-ostf_sha: "3bc76a63a9e7d1
fuel-mirror_sha: "fb45b80d7bee58
fuelmenu_sha: "78ffc73065a967
shotgun_sha: "63645dea384a37
network-
fuel-upgrade_sha: "616a7490ec7199
fuelmain_sha: "d605bcbabf3153
Easy steps to reproduce:
1. Deploy env with Ceph and 1 controller
2. Create 10 volumes
3. Delete all 10 volumes with one command: `cinder delete <vol1_id> <vol2_id> ... <vol10_id>` (a shell sketch of these steps follows)
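A minimal shell sketch of steps 2–3, assuming sourced admin credentials; the volume names, the 1 GB size, and the awk-based ID extraction are illustrative (older clients may need `--display-name` instead of `--name`):

```
# Create 10 one-gigabyte volumes named vol1..vol10
for i in $(seq 1 10); do
    cinder create 1 --name "vol$i"
done

# Collect their IDs and delete them all with a single command,
# which triggers the burst of concurrent delete requests
ids=$(cinder list | awk '/ vol[0-9]+ /{print $2}')
cinder delete $ids
```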
Activity log:
- tags: added area-cinder; removed cinder
- tags: removed horizon
- tags: added area-build
- tags: added release-notes-done; removed release-notes
- tags: removed area-build
- tags: added 10.0-reviewed
- Changed in mos: status Confirmed → Won't Fix
- tags: added release-notes
- Changed in mos: status Won't Fix → Confirmed; milestone 9.2 → 10.0
- no longer affects: mos/10.0.x, mos, mos/8.0.x, mos/9.x
- Changed in fuel: status New → Fix Committed; importance Undecided → High; assignee nobody → Oleksiy Molchanov (omolchanov)
- Changed in fuel: milestone none → 11.0
- tags: added on-verification
- tags: added customer-found
- tags: added on-verification
Here we actually see two problems:
1. Horizon gets a 504 from haproxy on long-running Cinder operations. The user impact of that particular issue is minimal.
2. The cinder-volume logs indicate that it is not properly configured to process such requests; setting the importance to High because of that.
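For the first problem, the 504 comes from haproxy's server-side timeout expiring before the Cinder API answers. A hedged sketch of the kind of haproxy tuning involved; the backend name and timeout value are assumptions, not the actual change that was committed:

```
# Illustrative only: backend name and values are assumptions
backend cinder-api
  # Allow long-running Cinder API calls to finish before haproxy
  # returns “504 Gateway Timeout” to Horizon
  timeout server 10m
```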