So the problem is just that as the proxy is reading from the backend object node, once it gets to the end it doesn't really know that the EOF coming up out of the socket means the object-server closed the connection prematurely. I guess the ResumingGetter stuff only works if the failure from the backend is initiated by the proxy (i.e. a timeout); if the socket just dies, the proxy thinks that's the end of the object (even if it's < content-length).
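To illustrate the gap: a plain `read()` loop can't tell a clean EOF from a dead socket, but if the reader compares bytes received against Content-Length it can at least detect the truncation instead of passing it along as a complete object. This is only a minimal sketch of that check (the `read_body` helper and its error message are mine, not Swift's actual getter code):

```python
# Hedged sketch, not Swift's real implementation: detect a premature EOF
# by comparing bytes actually read against the advertised Content-Length.
import io


def read_body(sock_file, content_length, chunk_size=8):
    """Yield chunks up to content_length; raise if the stream ends early."""
    bytes_read = 0
    while bytes_read < content_length:
        chunk = sock_file.read(min(chunk_size, content_length - bytes_read))
        if not chunk:
            # EOF before content_length: indistinguishable from a normal
            # close at the socket level, but detectable with this check
            raise IOError('premature EOF: got %d of %d bytes'
                          % (bytes_read, content_length))
        bytes_read += len(chunk)
        yield chunk


# simulate a backend that was kill -9'd after sending 10 of 20 bytes
backend = io.BytesIO(b'0123456789')
try:
    b''.join(read_body(backend, 20))
except IOError as e:
    print(e)  # premature EOF: got 10 of 20 bytes
```

Without the length check, the loop above would simply stop at the first empty read and the caller would treat the short body as the whole object, which is exactly the behavior described here.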
here's a download of an object using my slow_download script where I kill -9'd the servicing object server:
Sep 1 21:56:24 saio proxy-server: 127.0.0.1 127.0.0.1 01/Sep/2016/21/56/24 GET /v1/AUTH_test/test/delete.me HTTP/1.0 200 - python-swiftclient-3.0.1.dev72 AUTH_tk4f571562b... - 88739429 - tx527790b89a084b478e5e7-0057c8a388 - 127.9797 - - 1472766856.030810118 1472766984.010467052 0
And here's the same download using swift command line client:
Sep 1 21:58:48 saio proxy-server: 127.0.0.1 127.0.0.1 01/Sep/2016/21/58/48 GET /v1/AUTH_test/test/delete.me HTTP/1.0 200 - python-swiftclient-3.0.1.dev72 AUTH_tk4f571562b... - 183797100 - tx4b89ab1148bf495ca4f99-0057c8a496 - 2.7687 - - 1472767126.060199022 1472767128.828860998 0
It's possible this all gets easier to fix now that we're tracking backend bytes [2].
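For example, once the proxy knows how many backend bytes it has relayed, it has enough information to resume the remainder from another replica with a Range request rather than ending the download short. A rough sketch of that idea (the `resume_headers` helper is hypothetical, not code from the patch; the byte counts are the transfer sizes from the two log lines above):

```python
# Hedged sketch: if the proxy tracks bytes already sent to the client,
# it can ask another replica for just the remainder via a Range header.
def resume_headers(bytes_already_sent, content_length):
    """Build headers to re-fetch the unsent tail of an object, or None."""
    if bytes_already_sent >= content_length:
        return None  # nothing left to fetch
    # HTTP Range is inclusive, so the last byte index is content_length - 1
    return {'Range': 'bytes=%d-%d' % (bytes_already_sent, content_length - 1)}


# 88739429 of 183797100 bytes made it through in the truncated transfer
print(resume_headers(88739429, 183797100))
```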
1. https://gist.github.com/clayg/2458a4a7e1451c75fbc5a63fcae11635
2. https://review.openstack.org/#/c/363316/