Excessive checks of unmounted disk in reconstructor

Bug #1491567 reported by Caleb Tennis
Affects: OpenStack Object Storage (swift)
Status: Confirmed
Importance: Low
Assigned to: Unassigned

Bug Description

When a disk goes unmounted in the system, the logs fill up with these:

Sep 2 19:45:46 localhost object-reconstructor: 172.30.3.45:6003/d17/424 policy#1 frag#10 responded as unmounted

This message comes from an early check in the "_get_suffixes_to_sync" method.

A handful of unmounted disks bloats the logs dramatically. It would also be more efficient if, once we learned that a disk was unmounted, we didn't continue to check it on subsequent jobs but simply accepted that state (at least for a period of time). This would speed up the jobs, since we wouldn't need to keep querying the remote server.
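The suggestion above could be implemented as a small TTL cache of nodes that responded 507 "unmounted". A minimal sketch follows; this is not actual Swift code, and the class name, TTL value, and node-dict keys are assumptions for illustration:

```python
import time

# Hypothetical sketch: remember which (ip, port, device) tuples recently
# responded 507 "unmounted" so subsequent reconstructor jobs can skip them.
UNMOUNTED_CACHE_TTL = 300  # seconds; assumed to be tunable


class UnmountedNodeCache:
    """Caches nodes that recently reported unmounted, for a limited time."""

    def __init__(self, ttl=UNMOUNTED_CACHE_TTL, clock=time.time):
        self.ttl = ttl
        self.clock = clock  # injectable clock for testing
        self._expires = {}  # (ip, port, device) -> expiry timestamp

    def mark_unmounted(self, node):
        """Record that this node's device just responded 507."""
        key = (node['ip'], node['port'], node['device'])
        self._expires[key] = self.clock() + self.ttl

    def is_unmounted(self, node):
        """True if the node reported unmounted within the last TTL seconds."""
        key = (node['ip'], node['port'], node['device'])
        expiry = self._expires.get(key)
        if expiry is None:
            return False
        if self.clock() >= expiry:
            # Entry expired: re-check the node on the next job.
            del self._expires[key]
            return False
        return True


# Usage inside a reconstructor-style job loop (sketch):
#
#   if cache.is_unmounted(node):
#       continue                      # skip the request entirely
#   resp = make_replicate_request(node)   # hypothetical helper
#   if resp.status == 507:
#       cache.mark_unmounted(node)
```

The TTL keeps the cache self-correcting: a disk that gets remounted is picked up again after at most TTL seconds, without any explicit invalidation path.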

Tags: ec
Changed in swift:
importance: Undecided → Low
Changed in swift:
status: New → Confirmed
Revision history for this message
Bill Huber (wbhuber) wrote :

I'm unable to recreate the issue at the moment. The log message from object-reconstructor appears to have been fixed by other patches landed between Sept 2 and today. I'm marking this as Incomplete until further notice.

Changed in swift:
status: Confirmed → Incomplete
Revision history for this message
clayg (clay-gerrard) wrote :

I can still see 'em:

object-6020: 127.0.0.1:6030/sdb3/267 policy#1 frag#1 responded as unmounted

I think my tree is current, and I'm not sure which change would have fixed it. I'm pretty sure I wouldn't have missed something that implemented the "caching 507s" suggestion...

Changed in swift:
status: Incomplete → Confirmed