Updater quarantines bad asyncs to wrong policy data dir

Bug #2032958 reported by clayg
This bug affects 1 person
Affects: OpenStack Object Storage (swift)
Status: New
Importance: Undecided
Assigned to: Unassigned

Bug Description

While working on an out-of-tree tool to inspect some unprocessable async updates, we borrowed some of the updater's quarantine code, and I noticed in review that it wasn't using the policy to build the quarantine directory, which seemed un-idiomatic. I think this behavior of the updater on master is wrong:

vagrant@saio:~$ echo 'test1' > test
vagrant@saio:~$ swift upload test test -H 'x-storage-policy: ec'
test
vagrant@saio:~$ swift-init stop container-server -c 1
Signal container-server pid: 36926 signal: Signals.SIGTERM
container-server (36926) appears to have stopped
vagrant@saio:~$ swift-init stop container-server -c 2
Signal container-server pid: 36927 signal: Signals.SIGTERM
container-server (36927) appears to have stopped
vagrant@saio:~$ echo 'test2' > test
vagrant@saio:~$ swift upload test test
test
vagrant@saio:~$ find /srv/node*/sdb*/async*
/srv/node2/sdb6/async_pending-1
/srv/node2/sdb6/async_pending-1/35a
/srv/node2/sdb6/async_pending-1/35a/a161102fba1710ef912af194b8d4635a-1692886411.39630
/srv/node3/sdb3/async_pending-1
/srv/node3/sdb3/async_pending-1/35a
/srv/node3/sdb3/async_pending-1/35a/a161102fba1710ef912af194b8d4635a-1692886411.39630
vagrant@saio:~$ : > /srv/node2/sdb6/async_pending-1/35a/a161102fba1710ef912af194b8d4635a-1692886411.39630
vagrant@saio:~$ swift-init container-server restart
Signal container-server pid: 36928 signal: Signals.SIGTERM
Signal container-server pid: 36929 signal: Signals.SIGTERM
container-server (36928) appears to have stopped
container-server (36929) appears to have stopped
WARNING: Unable to modify max process limit. Running as non-root?
Starting container-server...(/etc/swift/container-server/1.conf.d)
Starting container-server...(/etc/swift/container-server/2.conf.d)
Starting container-server...(/etc/swift/container-server/3.conf.d)
Starting container-server...(/etc/swift/container-server/4.conf.d)
vagrant@saio:~$ swift-init object-updater once -nv
WARNING: Unable to modify max process limit. Running as non-root?
Running object-updater once...(/etc/swift/object-server/1.conf.d)
Running object-updater once...(/etc/swift/object-server/2.conf.d)
Running object-updater once...(/etc/swift/object-server/3.conf.d)
Running object-updater once...(/etc/swift/object-server/4.conf.d)
...
object-6020: ERROR Pickle problem, quarantining /srv/node2/sdb6/async_pending-1/35a/a161102fba1710ef912af194b8d4635a-1692886411.39630:
Traceback (most recent call last):
  File "/vagrant/swift/swift/obj/updater.py", line 430, in _load_update
    return pickle.load(open(update_path, 'rb'))
EOFError: Ran out of input
...
vagrant@saio:~$ find /srv/node*/sdb*/quar*
/srv/node2/sdb6/quarantined
/srv/node2/sdb6/quarantined/objects
/srv/node2/sdb6/quarantined/objects/a161102fba1710ef912af194b8d4635a-1692886411.39630

I think I'd actually expect these files to get moved into a path under /quarantined that matches their original path on disk, i.e.

/srv/node2/sdb6/quarantined/async_pending-1/35a/a161102fba1710ef912af194b8d4635a-1692886411.39630
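A minimal sketch of that expectation, assuming a hypothetical helper (not Swift's actual quarantine_renamer): build the quarantine destination from the file's path relative to the device root, so the policy-suffixed async_pending-1 directory is preserved instead of everything being dumped under quarantined/objects:

```python
import os


def quarantine_dest(device_path, update_path):
    """Hypothetical helper: compute a quarantine destination that keeps
    the async file's original, policy-suffixed layout under the device.

    e.g. <device>/async_pending-1/35a/<hash>-<ts>
      -> <device>/quarantined/async_pending-1/35a/<hash>-<ts>
    """
    # Path of the bad async relative to the device root, e.g.
    # 'async_pending-1/35a/<hash>-<ts>'
    rel = os.path.relpath(update_path, device_path)
    # Re-root it under the device's quarantined/ directory.
    return os.path.join(device_path, 'quarantined', rel)
```

With this approach the quarantined file would still tell you which policy's async_pending directory it came from, which the current quarantined/objects destination loses.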
