radosgw does not capture all usage information when rgw.none exists
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Ceilometer | New | Undecided | Unassigned |
Bug Description
For some reason, RGW includes an "rgw.none" section in the bucket stats for only some buckets.
When this section exists, Ceilometer does not poll usage data for the following meters, and no resource of type "ceph_account" is created in Gnocchi for the bucket:
- radosgw.objects
- radosgw.
- radosgw.
- radosgw.api.request
- radosgw.
- radosgw.
An example of the usage section in the "radosgw-admin bucket stats" output:
"usage": {
},
}
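As a way to illustrate the aggregation question above, here is a minimal Python sketch of summing per-bucket usage from `radosgw-admin bucket stats --format json` output while explicitly skipping the "rgw.none" category, so its presence cannot mask the "rgw.main" numbers. The function name and the sample stats are hypothetical, not Ceilometer's actual code.

```python
# Sketch: aggregate a bucket's usage across categories from
# "radosgw-admin bucket stats" JSON, ignoring the "rgw.none"
# bookkeeping category so the rgw.main data is still counted.

def aggregate_usage(usage):
    """Sum num_objects and size over all categories except rgw.none."""
    totals = {"num_objects": 0, "size": 0}
    for category, stats in usage.items():
        if category == "rgw.none":
            continue  # rgw.none carries no stored-object data
        totals["num_objects"] += stats.get("num_objects", 0)
        totals["size"] += stats.get("size", 0)
    return totals

# Hypothetical "usage" section, shaped like the bucket stats output:
sample = {
    "rgw.none": {"num_objects": 0, "size": 0},
    "rgw.main": {"num_objects": 42, "size": 1048576},
}
print(aggregate_usage(sample))  # -> {'num_objects': 42, 'size': 1048576}
```

If the poller instead bailed out on the first unexpected category, a bucket with an "rgw.none" entry would yield no samples at all, which would match the behaviour described above.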
Some projects include multiple buckets, where some buckets have rgw.none, and others don't. Those that have rgw.none result in nothing being stored in Gnocchi.
Any idea why rgw.none exists, and what the Ceilometer RGW code is doing that makes it ignore the rgw.main usage information?
Thanks!
Eric
While looking for a pattern, I noticed that "swift_account" resources exist in Gnocchi in place of the missing "ceph_account" resources, and that under those "swift_account" resources there are "radosgw.containers.objects" and "radosgw.containers.objects.size" metrics.
We had recently switched from native Swift to "Ceph + RGW + Swift API compatibility", and it looks like the Swift resources that had been stored in Gnocchi remained.
This is an example project that had the issue (project id = 5f6e2a6fcdbc427c81cb462f45b147d7):
gnocchi resource list --detail | grep 5f6e2a6fcdbc427c81cb462f45b147d7
| 5f6e2a6f-cdbc-427c-81cb-462f45b147d7 | swift_account | 5f6e2a6fcdbc427c81cb462f45b147d7 | None | 5f6e2a6fcdbc427c81cb462f45b147d7 | 2019-03-25T04:21:39.139673+00:00 | None | 2019-03-25T04:21:39.139694+00:00 | None | 39ef552637e74e32abdf25dd7454231f:f0a94de0d6184e10814660d470721c31 |
| 00a842f1-646d-5595-8350-670ec2f41c13 | swift_account | 5f6e2a6fcdbc427c81cb462f45b147d7 | None | 5f6e2a6fcdbc427c81cb462f45b147d7_ABC Backup | 2019-03-25T04:21:42.227916+00:00 | None | 2019-03-25T04:21:42.227935+00:00 | None | 39ef552637e74e32abdf25dd7454231f:f0a94de0d6184e10814660d470721c31 |
| 6858266a-0086-5c9b-8627-48ed48afb3db | swift_account | 5f6e2a6fcdbc427c81cb462f45b147d7 | 84a7b6e6bad442278f4c1630f5b4e0c7 | swift_v1_AUTH_5f6e2a6fcdbc427c81cb462f45b147d7 | 2019-03-31T16:36:35.613956+00:00 | None | 2019-03-31T16:36:35.613975+00:00 | None | 39ef552637e74e32abdf25dd7454231f:f0a94de0d6184e10814660d470721c31 |
| cd7c570d-98c7-5ea8-9c91-54d49961e056 | ceph_account | 5f6e2a6fcdbc427c81cb462f45b147d7 | None | 5f6e2a6fcdbc427c81cb462f45b147d7_ABC Backup_segments | 2019-04-01T10:17:18.168959+00:00 | None | 2019-04-01T10:17:18.168980+00:00 | None | 39ef552637e74e32abdf25dd7454231f:f0a94de0d6184e10814660d470721c31 |
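For anyone hitting the same thing, the check I did by eye above can be sketched in Python: group resource rows by project and flag projects that still carry "swift_account" resources left over from the Swift-to-RGW migration. The function name and the sample rows are hypothetical, for illustration only.

```python
# Sketch: find projects that still have leftover swift_account
# resources in Gnocchi (e.g. from a Swift -> Ceph+RGW migration),
# given (project_id, resource_type) pairs parsed from
# "gnocchi resource list --detail" output.
from collections import defaultdict

def leftover_swift_projects(resources):
    """Return sorted project ids that own at least one swift_account resource."""
    types_by_project = defaultdict(set)
    for project_id, resource_type in resources:
        types_by_project[project_id].add(resource_type)
    return sorted(p for p, types in types_by_project.items()
                  if "swift_account" in types)

# Hypothetical rows; the first project id matches the example above,
# the second is made up:
rows = [
    ("5f6e2a6fcdbc427c81cb462f45b147d7", "swift_account"),
    ("5f6e2a6fcdbc427c81cb462f45b147d7", "ceph_account"),
    ("0123456789abcdef0123456789abcdef", "ceph_account"),
]
print(leftover_swift_projects(rows))  # -> ['5f6e2a6fcdbc427c81cb462f45b147d7']
```

The stale resources themselves could then be removed with the gnocchi CLI (e.g. `gnocchi resource delete <id>`), after which the poller should create the expected "ceph_account" resources.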
So, this likely isn't a bug, but rather a misunderstanding of how resources were handled when switching from Swift to Ceph.
Eric