2022-03-28 10:12:00
Giuseppe Petralia
Description:
Ceph version: 15.2.14-0ubuntu0.20.04.2
When we create a new bucket and insert an object on the Secondary, it is replicated to the Primary.
However, when we insert an object on the Primary, it is not replicated to the Secondary,
and sync status shows:
Secondary sync status:
# radosgw-admin sync status
realm 9db21932-5a36-4553-bb6b-526e4d704d45 (replicated)
zonegroup b94227a8-4f3b-4829-bc4e-e5325687b9a4 (myzonegroup)
zone 399fe045-b41a-41ea-97df-afefdba58523 (secondary)
metadata sync syncing
full sync: 0/64 shards
incremental sync: 64/64 shards
metadata is behind on 1 shards
behind shards: [23]
oldest incremental change not applied: 2022-03-25T16:36:48.631357+0000 [23]
data sync source: d0b0d796-4628-44f1-a10c-25e7198dd3af (pst-stg)
syncing
full sync: 0/128 shards
incremental sync: 128/128 shards
1 shards are recovering
recovering shards: [45]
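For monitoring, the `radosgw-admin sync status` text above can be scraped for the stuck shard IDs. A minimal sketch, assuming the Octopus-era output format shown above (the helper name is ours, not a radosgw-admin feature):

```python
import re

def stuck_shards(sync_status: str) -> dict:
    """Extract behind/recovering shard lists from `radosgw-admin sync status` text."""
    found = {}
    for kind in ("behind shards", "recovering shards"):
        m = re.search(rf"{kind}: \[([\d,\s]*)\]", sync_status)
        if m:
            found[kind] = [int(s) for s in m.group(1).split(",") if s.strip()]
    return found

# Trimmed copy of the output above.
status = """
metadata is behind on 1 shards
behind shards: [23]
1 shards are recovering
recovering shards: [45]
"""
print(stuck_shards(status))  # {'behind shards': [23], 'recovering shards': [45]}
```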
The Secondary zone's `radosgw-admin sync error list` is full of Permission denied errors, e.g.:
{
"shard_id": 10,
"entries": [
{
"id": "1_1648226529.700750_24427332.1",
"section": "data",
"name": "test-20220325-12:d0b0d796-4628-44f1-a10c-25e7198dd3af.224896.1:1",
"timestamp": "2022-03-25T16:42:09.700750Z",
"info": {
"source_zone": "d0b0d796-4628-44f1-a10c-25e7198dd3af",
"error_code": 13,
"message": "failed to sync bucket instance: (13) Permission denied"
}
}
]
},
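Since `sync error list` emits JSON, the Permission denied entries can be tallied per error code; a sketch, with the entry structure assumed from the excerpt above:

```python
import json
from collections import Counter

# One shard entry in the shape `radosgw-admin sync error list` prints above.
raw = '''[{"shard_id": 10, "entries": [
  {"id": "1_1648226529.700750_24427332.1", "section": "data",
   "name": "test-20220325-12:d0b0d796-4628-44f1-a10c-25e7198dd3af.224896.1:1",
   "timestamp": "2022-03-25T16:42:09.700750Z",
   "info": {"source_zone": "d0b0d796-4628-44f1-a10c-25e7198dd3af",
            "error_code": 13,
            "message": "failed to sync bucket instance: (13) Permission denied"}}]}]'''

errors = Counter()
for shard in json.loads(raw):
    for entry in shard["entries"]:
        errors[entry["info"]["error_code"]] += 1

# error_code 13 is EACCES, i.e. Permission denied
print(dict(errors))  # {13: 1}
```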
After increasing the log level on ceph-radosgw, we can see a signature mismatch error in the Primary's logs:
2022-03-24T13:15:56.812+0000 7f92f2ffd700 15 req 301160 0s :get_metadata server signature=AAAA$redacted # two different signatures here
2022-03-24T13:15:56.812+0000 7f92f2ffd700 15 req 301160 0s :get_metadata client signature=BBBB$redacted # two different signatures here
2022-03-24T13:15:56.812+0000 7f92f2ffd700 15 req 301160 0s :get_metadata compare=6
2022-03-24T13:15:56.812+0000 7f92f2ffd700 20 req 301160 0s :get_metadata rgw::auth::s3::LocalEngine denied with reason=-2027
2022-03-24T13:15:56.812+0000 7f92f2ffd700 20 req 301160 0s :get_metadata rgw::auth::s3::AWSAuthStrategy denied with reason=-2027
2022-03-24T13:15:56.812+0000 7f92f2ffd700 5 req 301160 0s :get_metadata Failed the auth strategy, reason=-2027
2022-03-24T13:15:56.812+0000 7f92f2ffd700 10 failed to authorize request
The requests above are rejected with HTTP 403 (Forbidden).
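The LocalEngine check in those logs compares an HMAC signature the server recomputes against the one the client sent; if the multisite system user ends up with different secret keys on each zone, the two can never match. A simplified model of that comparison (not RGW's actual code; the secrets are made up):

```python
import hashlib
import hmac

def sign(secret: str, string_to_sign: str) -> str:
    # SigV2-style HMAC over the canonical request string (simplified:
    # real SigV2 base64-encodes an HMAC-SHA1 digest).
    return hmac.new(secret.encode(), string_to_sign.encode(), hashlib.sha1).hexdigest()

string_to_sign = "GET\n\n\nThu, 24 Mar 2022 13:15:56 +0000\n/admin/metadata"

client_sig = sign("secret-on-secondary", string_to_sign)  # what the client sends
server_sig = sign("secret-on-primary", string_to_sign)    # what the server recomputes

# Mismatched secrets -> signatures differ -> LocalEngine denies -> HTTP 403.
print(client_sig == server_sig)  # False
```

In that light, comparing the system user's access/secret keys on both zones (and re-running `radosgw-admin period update --commit` after fixing them) would be a natural next check.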
We have tried removing the Secondary zone, cleaning its pools, and recreating the zone, but as soon as we create a new bucket on the Primary, a behind metadata shard appears on the Secondary, and once we create an object on the Primary, a recovering shard appears in the Secondary's data sync status output.