2014-03-24 20:39:35 |
Gilles Gaillard |
bug |
|
|
added bug |
2014-03-24 20:39:35 |
Gilles Gaillard |
attachment added |
|
swift_logs.log https://bugs.launchpad.net/bugs/1296941/+attachment/4041459/+files/swift_logs.log |
|
2014-03-24 20:41:16 |
Gilles Gaillard |
description |
This happens in the following case:
- range GET to an SLO made of inner SLOs
- HTTP connection is reused (proxy server sends a Keep-Alive)
The following test shows the issue. The target SLO in the test is container "test-1", name "1_GB_chunk_1MB_ID". It's made of
two inner SLOs, "xx-slo-000001" and "xx-slo-000002".
The first one has 1000 segments (000001 to 001000), the second has 24 (001001 to 001024). Detailed names are in the attached logs.
The test is single-threaded, uses connection reuse, and performs:
• range GET for bytes [0-1048575]
X-Trans-Id: tx2684d6e70e094d84a27af-0053301200
This worked fine.
• range GET for bytes [1048576-2097151]
X-Trans-Id: txd6c62bf3fb7f41d7b8c7d-0053301200
HTTP response and data are received.
• range GET for bytes [2097152-3145727]
no HTTP response is received;
instead, data for segment 001001 (i.e. the first segment in slo-000002) is received.
When I look at the Swift logs for the second call, I see that for the second transaction, the
second storage server (data02) performs a GET to segment 000002 (which is correct), BUT it is
immediately followed by a GET to slo-000002, then to segment 001001, which may explain why
the client receives it afterward.
I had a discussion with David G., who wrote:
So this is a bug, and a pretty weird one too as far as I can tell. I wrote a script to reproduce it:
https://gist.github.com/dpgoetz/9747591
That runs on a SAIO, but you have to lower your minimum segment size with the following in your proxy server conf:
[filter:slo]
use = egg:swift#slo
min_segment_size = 1
He also said that milestone-1.10.0-rc1 should not show this issue. |
This happens in the following case:
- range GET to an SLO made of inner SLOs
- HTTP connection is reused (proxy server sends a Keep-Alive)
The following test shows the issue. The target SLO in the test is container "test-1", name "1_GB_chunk_1MB_ID". It's made of
two inner SLOs, "xx-slo-000001" and "xx-slo-000002".
The first one has 1000 segments (000001 to 001000), the second has 24 (001001 to 001024). Detailed names are in the attached logs.
The test is single-threaded, uses connection reuse, and performs:
• range GET for bytes [0-1048575]
X-Trans-Id: tx2684d6e70e094d84a27af-0053301200
This worked fine.
• range GET for bytes [1048576-2097151]
X-Trans-Id: txd6c62bf3fb7f41d7b8c7d-0053301200
HTTP response and data are received.
• range GET for bytes [2097152-3145727]
no HTTP response is received;
instead, data for segment 001001 (i.e. the first segment in slo-000002) is received.
When I look at the Swift logs for the second call, I see that for the second transaction, the
second storage server (data02) performs a GET to segment 000002 (which is correct), BUT it is
immediately followed by a GET to slo-000002, then to segment 001001, which may explain why
the client receives it afterward.
I had a discussion with David G., who wrote:
So this is a bug, and a pretty weird one too as far as I can tell. I wrote a script to reproduce it:
https://gist.github.com/dpgoetz/9747591
That runs on a SAIO, but you have to lower your minimum segment size with the following in your proxy server conf:
[filter:slo]
use = egg:swift#slo
min_segment_size = 1
He also said that milestone-1.10.0-rc1 should not show this issue. |
|
2014-03-24 20:42:16 |
Gilles Gaillard |
description |
This happens in the following case:
- range GET to an SLO made of inner SLOs
- HTTP connection is reused (proxy server sends a Keep-Alive)
The following test shows the issue. The target SLO in the test is container "test-1", name "1_GB_chunk_1MB_ID". It's made of
two inner SLOs, "xx-slo-000001" and "xx-slo-000002".
The first one has 1000 segments (000001 to 001000), the second has 24 (001001 to 001024). Detailed names are in the attached logs.
The test is single-threaded, uses connection reuse, and performs:
• range GET for bytes [0-1048575]
X-Trans-Id: tx2684d6e70e094d84a27af-0053301200
This worked fine.
• range GET for bytes [1048576-2097151]
X-Trans-Id: txd6c62bf3fb7f41d7b8c7d-0053301200
HTTP response and data are received.
• range GET for bytes [2097152-3145727]
no HTTP response is received;
instead, data for segment 001001 (i.e. the first segment in slo-000002) is received.
When I look at the Swift logs for the second call, I see that for the second transaction, the
second storage server (data02) performs a GET to segment 000002 (which is correct), BUT it is
immediately followed by a GET to slo-000002, then to segment 001001, which may explain why
the client receives it afterward.
I had a discussion with David G., who wrote:
So this is a bug, and a pretty weird one too as far as I can tell. I wrote a script to reproduce it:
https://gist.github.com/dpgoetz/9747591
That runs on a SAIO, but you have to lower your minimum segment size with the following in your proxy server conf:
[filter:slo]
use = egg:swift#slo
min_segment_size = 1
He also said that milestone-1.10.0-rc1 should not show this issue. |
This happens in the following case:
- range GET to an SLO made of inner SLOs
- HTTP connection is reused (proxy server sends a Keep-Alive)
The following test shows the issue. The target SLO in the test is container "test-1", name "1_GB_chunk_1MB_ID". It's made of
two inner SLOs, "xx-slo-000001" and "xx-slo-000002".
The first one has 1000 segments (000001 to 001000), the second has 24 (001001 to 001024). Detailed names are in the attached logs.
The test is single-threaded, uses connection reuse, and performs:
• range GET for bytes [0-1048575]
X-Trans-Id: tx2684d6e70e094d84a27af-0053301200
This worked fine.
• range GET for bytes [1048576-2097151]
X-Trans-Id: txd6c62bf3fb7f41d7b8c7d-0053301200
HTTP response and data are received.
• range GET for bytes [2097152-3145727]
no HTTP response is received;
instead, data for segment 001001 (i.e. the first segment in slo-000002) is received.
When I look at the Swift logs for the second call, I see that for the second transaction, the second storage server (data02) performs a GET to segment 000002 (which is correct), BUT it is immediately followed by a GET to slo-000002, then to segment 001001, which may explain why the client receives it afterward.
I had a discussion with David G., who wrote:
So this is a bug, and a pretty weird one too as far as I can tell. I wrote a script to reproduce it:
https://gist.github.com/dpgoetz/9747591
That runs on a SAIO, but you have to lower your minimum segment size with the following in your proxy server conf:
[filter:slo]
use = egg:swift#slo
min_segment_size = 1
He also said that milestone-1.10.0-rc1 should not show this issue. |
|
2014-03-24 20:42:56 |
Gilles Gaillard |
description |
This happens in the following case:
- range GET to an SLO made of inner SLOs
- HTTP connection is reused (proxy server sends a Keep-Alive)
The following test shows the issue. The target SLO in the test is container "test-1", name "1_GB_chunk_1MB_ID". It's made of
two inner SLOs, "xx-slo-000001" and "xx-slo-000002".
The first one has 1000 segments (000001 to 001000), the second has 24 (001001 to 001024). Detailed names are in the attached logs.
The test is single-threaded, uses connection reuse, and performs:
• range GET for bytes [0-1048575]
X-Trans-Id: tx2684d6e70e094d84a27af-0053301200
This worked fine.
• range GET for bytes [1048576-2097151]
X-Trans-Id: txd6c62bf3fb7f41d7b8c7d-0053301200
HTTP response and data are received.
• range GET for bytes [2097152-3145727]
no HTTP response is received;
instead, data for segment 001001 (i.e. the first segment in slo-000002) is received.
When I look at the Swift logs for the second call, I see that for the second transaction, the second storage server (data02) performs a GET to segment 000002 (which is correct), BUT it is immediately followed by a GET to slo-000002, then to segment 001001, which may explain why the client receives it afterward.
I had a discussion with David G., who wrote:
So this is a bug, and a pretty weird one too as far as I can tell. I wrote a script to reproduce it:
https://gist.github.com/dpgoetz/9747591
That runs on a SAIO, but you have to lower your minimum segment size with the following in your proxy server conf:
[filter:slo]
use = egg:swift#slo
min_segment_size = 1
He also said that milestone-1.10.0-rc1 should not show this issue. |
This happens in the following case:
- range GET to an SLO made of inner SLOs
- HTTP connection is reused (proxy server sends a Keep-Alive)
The following test shows the issue. The target SLO in the test is container "test-1", name "1_GB_chunk_1MB_ID". It's made of
two inner SLOs, "xx-slo-000001" and "xx-slo-000002".
The first one has 1000 segments (000001 to 001000), the second has 24 (001001 to 001024). Detailed names are in the attached logs.
The test is single-threaded, uses connection reuse, and performs:
• range GET for bytes [0-1048575]
X-Trans-Id: tx2684d6e70e094d84a27af-0053301200
This worked fine.
• range GET for bytes [1048576-2097151]
X-Trans-Id: txd6c62bf3fb7f41d7b8c7d-0053301200
HTTP response and data are received.
• range GET for bytes [2097152-3145727]
no HTTP response is received;
instead, data for segment 001001 (i.e. the first segment in slo-000002) is received.
When I look at the Swift logs for the second call, I see that for the second transaction, the second storage server (data02) performs a GET to segment 000002 (which is correct), BUT it is immediately followed by a GET to slo-000002, then to segment 001001, which may explain why the client receives it afterward.
I had a discussion with David G., who wrote:
So this is a bug, and a pretty weird one too as far as I can tell. I wrote a script to reproduce it:
https://gist.github.com/dpgoetz/9747591
That runs on a SAIO, but you have to lower your minimum segment size with the following in your proxy server conf:
[filter:slo]
use = egg:swift#slo
min_segment_size = 1
He also said that milestone-1.10.0-rc1 should not show this issue. |
|
2014-03-24 20:45:36 |
Gilles Gaillard |
description |
This happens in the following case:
- range GET to an SLO made of inner SLOs
- HTTP connection is reused (proxy server sends a Keep-Alive)
The following test shows the issue. The target SLO in the test is container "test-1", name "1_GB_chunk_1MB_ID". It's made of
two inner SLOs, "xx-slo-000001" and "xx-slo-000002".
The first one has 1000 segments (000001 to 001000), the second has 24 (001001 to 001024). Detailed names are in the attached logs.
The test is single-threaded, uses connection reuse, and performs:
• range GET for bytes [0-1048575]
X-Trans-Id: tx2684d6e70e094d84a27af-0053301200
This worked fine.
• range GET for bytes [1048576-2097151]
X-Trans-Id: txd6c62bf3fb7f41d7b8c7d-0053301200
HTTP response and data are received.
• range GET for bytes [2097152-3145727]
no HTTP response is received;
instead, data for segment 001001 (i.e. the first segment in slo-000002) is received.
When I look at the Swift logs for the second call, I see that for the second transaction, the second storage server (data02) performs a GET to segment 000002 (which is correct), BUT it is immediately followed by a GET to slo-000002, then to segment 001001, which may explain why the client receives it afterward.
I had a discussion with David G., who wrote:
So this is a bug, and a pretty weird one too as far as I can tell. I wrote a script to reproduce it:
https://gist.github.com/dpgoetz/9747591
That runs on a SAIO, but you have to lower your minimum segment size with the following in your proxy server conf:
[filter:slo]
use = egg:swift#slo
min_segment_size = 1
He also said that milestone-1.10.0-rc1 should not show this issue. |
This happens in the following case:
- swift-1.12.0
- range GET to an SLO made of inner SLOs
- HTTP connection is reused (proxy server sends a Keep-Alive)
The following test shows the issue. The target SLO in the test is container "test-1", name "1_GB_chunk_1MB_ID". It's made of
two inner SLOs, "xx-slo-000001" and "xx-slo-000002".
The first one has 1000 segments (000001 to 001000), the second has 24 (001001 to 001024). Detailed names are in the attached logs.
The test is single-threaded, uses connection reuse, and performs:
• range GET for bytes [0-1048575]
X-Trans-Id: tx2684d6e70e094d84a27af-0053301200
This worked fine.
• range GET for bytes [1048576-2097151]
X-Trans-Id: txd6c62bf3fb7f41d7b8c7d-0053301200
HTTP response and data are received.
• range GET for bytes [2097152-3145727]
no HTTP response is received;
instead, data for segment 001001 (i.e. the first segment in slo-000002) is received.
When I look at the Swift logs for the second call, I see that for the second transaction, the second storage server (data02) performs a GET to segment 000002 (which is correct), BUT it is immediately followed by a GET to slo-000002, then to segment 001001, which may explain why the client receives it afterward.
I had a discussion with David G., who wrote:
So this is a bug, and a pretty weird one too as far as I can tell. I wrote a script to reproduce it:
https://gist.github.com/dpgoetz/9747591
That runs on a SAIO, but you have to lower your minimum segment size with the following in your proxy server conf:
[filter:slo]
use = egg:swift#slo
min_segment_size = 1
He also said that milestone-1.10.0-rc1 should not show this issue. |
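Editor's note (not part of the bug report): since the test SLO is built from 1 MiB segments, the three range GETs above each map to exactly one segment. The sketch below, which assumes only the 1 MiB segment size stated in the description, shows which segment each request should return, making it clear that segment 001001 for the third range is wrong.

```python
# Illustrative sketch: map each inclusive byte range from the
# reproduction onto 1-based 1 MiB segment numbers.

SEGMENT_SIZE = 1024 * 1024  # the test SLO uses 1 MiB segments


def segments_for_range(first_byte, last_byte, segment_size=SEGMENT_SIZE):
    """Return the 1-based segment numbers covered by bytes=first-last."""
    return list(range(first_byte // segment_size + 1,
                      last_byte // segment_size + 2))


# The three range GETs from the test, in order:
for first, last in [(0, 1048575), (1048576, 2097151), (2097152, 3145727)]:
    print(f"bytes={first}-{last} -> segments {segments_for_range(first, last)}")
# -> segments [1], [2], [3]
```

The third request covers only segment 3 (i.e. 000003 inside xx-slo-000001), so receiving segment 001001 from xx-slo-000002, as described above, is off by roughly a gigabyte.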
|
2014-03-25 17:34:12 |
John Dickinson |
swift: importance |
Undecided |
Critical |
|
2014-03-25 17:34:18 |
John Dickinson |
swift: milestone |
|
next-icehouse |
|
2014-03-25 18:16:18 |
John Dickinson |
swift: assignee |
|
Chuck Thier (cthier) |
|
2014-03-25 19:07:15 |
OpenStack Infra |
swift: status |
New |
In Progress |
|
2014-04-03 18:41:53 |
John Dickinson |
swift: status |
In Progress |
Fix Committed |
|
2014-04-04 08:09:42 |
Thierry Carrez |
swift: status |
Fix Committed |
Fix Released |
|
2014-04-17 07:40:57 |
Thierry Carrez |
swift: milestone |
1.13.1-rc1 |
1.13.1 |
|