Too few Ceph PGs on Rally tests
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| Fuel for OpenStack | Won't Fix | Medium | MOS Ceph | |
| 7.0.x | Won't Fix | Medium | MOS Ceph | |
Bug Description
Ran light Rally tests from 2015/09/16 14:09:26 to 2015/09/16 16:12:41.
VM instances failed to spawn due to a Ceph filesystem read error.
From nova-all.log: http://

    root@node-3:~# rbd -p images ls -l | grep 6bb86ecf-
    6bb86ecf-
    6bb86ecf-

Rally test log: http://
ceph status:

    cluster d9646656-
     health HEALTH_WARN too few pgs per osd (6 < min 20)
     monmap e1: 1 mons at {node-1=
     osdmap e565: 125 osds: 125 up, 125 in
      pgmap v5357: 768 pgs, 12 pools, 495 MB data, 187 objects
            260 GB used, 117 TB / 117 TB avail
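The warning is simple arithmetic: the cluster's 768 placement groups spread over 125 OSDs give only 768 / 125 ≈ 6 PGs per OSD, below the monitor's warning threshold of 20 (the threshold reported here matches Ceph's `mon_pg_warn_min_per_osd` setting). A minimal sketch of the check, assuming this integer-ratio form (the helper name is ours):

```python
def pgs_per_osd(total_pgs: int, num_osds: int) -> int:
    # The monitor compares the average number of PGs per OSD
    # against the warning threshold (20 in this release).
    return total_pgs // num_osds

# The reported cluster: 768 PGs across 125 OSDs.
print(pgs_per_osd(768, 125))  # → 6, matching "too few pgs per osd (6 < min 20)"
```

The same formula reproduces the second cluster's figure: 768 PGs over 46 in-OSDs gives 16, matching its "(16 < min 20)" warning.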
Cluster configuration: baremetal, ubuntu trusty, IBP, HA, neutron-vlan, no DVR, Ceph-all, Nova-debug, Nova-quotas, 7.0-288

    api: '1.0'
    astute_sha: a717657232721a7
    auth_required: true
    build_id: '288'
    build_number: '288'
    feature_groups:
    - mirantis
    fuel-agent_sha: 082a47bf014002e
    fuel-library_sha: 121016a09b0e889
    fuel-nailgun-
    fuel-ostf_sha: 1f08e6e71021179
    fuelmain_sha: 6b83d6a6a75bf7b
    nailgun_sha: 93477f9b42c5a5e
    openstack_version: 2015.1.0-7.0
    production: docker
    python-
    release: '7.0'
Diagnostic snapshot: http://
tags: added: scale
description: updated
affects: mos → fuel
Changed in fuel:
  assignee: nobody → MOS Ceph (mos-ceph)
description: updated
Changed in fuel:
  milestone: none → 8.0
  importance: Undecided → High
  status: New → Confirmed
tags: added: area-mos
Changed in fuel:
  importance: High → Medium
Changed in fuel:
  milestone: 8.0 → 9.0
Changed in fuel:
  status: Confirmed → Won't Fix
  milestone: 9.0 → 10.0
  status: Won't Fix → Confirmed
Changed in fuel:
  status: Confirmed → Won't Fix
The issue has also been reproduced with 47 OSDs on the 7.0-296 build:

    cluster 8da895d3-f2f7-46e1-a6a3-29ba446df4ae
     health HEALTH_WARN too few pgs per osd (16 < min 20)
     monmap e3: 3 mons at {node-28=192.168.0.3:6789/0,node-34=192.168.0.26:6789/0,node-35=192.168.0.52:6789/0}, election epoch 4, quorum 0,1,2 node-28,node-34,node-35
     osdmap e342: 47 osds: 46 up, 46 in
      pgmap v82746: 768 pgs, 12 pools, 7050 MB data, 1734 objects
            115 GB used, 42452 GB / 42567 GB avail
                 768 active+clean

But it has not been reproduced with 3 OSDs on the 7.0-288 build:

    cluster 39aa7f69-321e-449f-b42f-03b91d497ca2
     health HEALTH_OK
     monmap e5: 5 mons at {node-21=192.168.0.22:6789/0,node-22=192.168.0.16:6789/0,node-23=192.168.0.3:6789/0,node-24=192.168.0.19:6789/0,node-25=192.168.0.20:6789/0}, election epoch 22, quorum 0,1,2,3,4 node-23,node-22,node-24,node-25,node-21
     osdmap e35: 3 osds: 3 up, 3 in
      pgmap v29386: 960 pgs, 15 pools, 17334 MB data, 4619 objects
            34363 MB used, 2757 GB / 2791 GB avail
      client io 93 B/s wr, 0 op/s
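The pattern across the three clusters fits the common sizing rule from the Ceph placement-group documentation: target roughly 100 PGs per OSD, divide by the pool replica count, and round up to the next power of two. The 125- and 47-OSD deployments would need far more than the 768 PGs they were created with, while the 3-OSD deployment's 960 PGs is already generous. A hedged sketch of that rule of thumb (function name and defaults are ours, not a Ceph API):

```python
def suggested_pg_num(num_osds: int, replicas: int = 3,
                     target_per_osd: int = 100) -> int:
    # Rule of thumb: (OSDs * target PGs per OSD) / replica count,
    # rounded up to the next power of two.
    raw = num_osds * target_per_osd / replicas
    pg_num = 1
    while pg_num < raw:
        pg_num *= 2
    return pg_num

print(suggested_pg_num(125))  # → 8192, versus the 768 actually configured
print(suggested_pg_num(3))   # → 128, so 960 PGs on 3 OSDs raises no warning
```

This supports marking the bug as a deployment-time sizing issue rather than a Ceph defect: the default pool PG counts simply do not scale with the OSD count.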